# New classes of groups related to algebraic combinatorics with applications to isomorphism problems

Ted Dobson (2023-05-19, [arXiv:2305.11849v1](http://arxiv.org/abs/2305.11849v1))
###### Abstract.
We introduce two refinements of the class of \(5/2\)-groups, inspired by the classes of automorphism groups of configurations and automorphism groups of unit circulant digraphs. We show that both of these classes have the property that any two regular cyclic subgroups of a group \(G\) in either of these classes are conjugate in \(G\). This generalizes two results in the literature (and simplifies their proofs) that show that symmetric configurations and unit circulant digraphs are isomorphic if and only if they are isomorphic by a group automorphism of \(\mathbb{Z}_{n}\).
**Keywords:** \(5/2\)-closed, unit circulant graph, configuration, CI-group, Cayley isomorphism

2020 Mathematics Subject Classification: Primary 05E18, Secondary 05E30

In [8], a class of groups called \(5/2\)-closed groups was defined which properly contains all automorphism groups of vertex-transitive digraphs. A sufficient condition for the quotient of a \(5/2\)-closed group to be \(5/2\)-closed was given, and this was used to determine all Sylow \(p\)-subgroups of \(5/2\)-closed groups of odd prime-power degree that contain a regular cyclic subgroup. Together with work in [7], this gives the automorphism group of all circulant digraphs of odd prime-power order (Cayley digraphs of \(\mathbb{Z}_{p^{k}}\), \(p\) odd and \(k\geq 1\)).
In this paper, we continue this program and introduce two additional families of groups. We will call these families \(3/2\)-closed and \(9/8\)-closed groups. One should think of \(9/8\)-closed groups as being the analogues of the automorphism groups of configurations, in the way that \(5/2\)-closed groups were inspired by the automorphism groups of vertex-transitive digraphs. Continuing the analogies, \(3/2\)-closed groups are thought of as analogues of the automorphism groups of unit circulant digraphs. We also define an intermediate class of groups, \(5/4\)-closed groups, though at this time it is not clear whether they will be of general interest.
We will show that for \(n\geq 1\), any two regular cyclic subgroups in either a \(3/2\)-closed group \(G\) (Theorem 6.13) or a \(9/8\)-closed group \(G\) (Theorem 5.19) are conjugate in \(G\). These results generalize several known results. First, Toida conjectured that every unit circulant digraph is a CI-digraph. This conjecture was independently verified by two groups of authors: the author and Joy Morris [6], using permutation group techniques, and Muzychuk, Klin and Pöschel [21], using the method of Schur. The more general result proven here has a shorter proof. Koike, Kovács, Marušič, and Muzychuk showed that \(\mathbb{Z}_{n}\) is a CI-group with respect to balanced configurations [16]. We prove this in a much more general situation, also with a shorter proof.
In Theorem 2.4 we will also give a method of constructing the \(5/2\)-closure of any transitive permutation group \(G\). This construction shows that the \(5/2\)-closure is determined internally (i.e., all of the information one needs to construct it is already contained in the group). The \(2\)-closure, by contrast, can be thought of as "external". To construct it, one calculates the orbital digraphs of \(G\), then the automorphism group of each orbital digraph, and then the \(2\)-closure is the intersection of all such automorphism groups. Calculating the automorphism groups of the orbital digraphs is generally difficult, and the relationship between these automorphism groups and the group \(G\) may not be clear.
Finally, there are, in the author's view, several other interesting results in this paper. In Theorem 5.11 we show that an incidence structure in which each pair of points is on at most one line and each line contains at least three points (a partial Sylvester-Gallai design) has a \(9/8\)-closed automorphism group (as do such objects that have been "colored" and "directed"), and in Theorem 5.14 that every connected vertex-transitive digraph of girth at least \(5\) also has a \(9/8\)-closed automorphism group. In Theorem 6.10 we give the relationship between the classes of \(9/8\)-closed groups and \(5/2\)-closed groups.
## 1. \(5/2\)-closed groups
In this section we summarize \(5/2\)-closed groups. A reader who is familiar with [8] will not see anything new in this section. We will need some basic ideas and notation regarding permutation groups before proceeding.
**Definition 1.1**.: Let \(X\) be a set and \(G\leq S_{X}\) be transitive. A subset \(B\subseteq X\) is a **block** of \(G\) if whenever \(g\in G\), then \(g(B)\cap B=\emptyset\) or \(g(B)=B\). If \(B=\{x\}\) for some \(x\in X\) or \(B=X\), then \(B\) is a **trivial block**. Any other block is nontrivial. Note that if \(B\) is a block of \(G\), then \(g(B)\) is also a block of \(G\) for every \(g\in G\), and is called a **conjugate block of \(B\)**. The set of all blocks conjugate to \(B\) is a partition of \(X\), called a **block system of \(G\)**.
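To make the block condition concrete, here is a minimal sketch, not from the paper, that tests the property of Definition 1.1 for a group given as an explicit list of permutations; the encoding (permutations of `range(n)` as tuples, with `g[i]` the image of `i`) is an illustrative choice of ours.

```python
# Test the block property of Definition 1.1.

def is_block(G, B):
    """Return True if g(B) is either B or disjoint from B for every g in G."""
    B = frozenset(B)
    for g in G:
        gB = frozenset(g[x] for x in B)
        if gB != B and gB & B:
            return False
    return True

# Example: the regular cyclic group <x> on Z_4, where x: i -> i + 1 (mod 4).
n = 4
x = tuple((i + 1) % n for i in range(n))
G, g = [], tuple(range(n))
for _ in range(n):
    G.append(g)
    g = tuple(x[i] for i in g)  # next power of x

print(is_block(G, {0, 2}))  # True: {0, 2} and {1, 3} form a block system
print(is_block(G, {0, 1}))  # False: x({0, 1}) = {1, 2} meets {0, 1} properly
```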
**Definition 1.2**.: Let \(X\) be a set and \(G\leq S_{X}\) be transitive. If \(N{\unlhd}G\), then the set of orbits of \(N\) is a block system \(\mathcal{B}\) of \(G\). We say \(\mathcal{B}\) is a **normal block system** of \(G\).
**Definition 1.3**.: Let \(G\leq S_{n}\) be transitive with a block system \(\mathcal{B}\). By \(\operatorname{fix}_{G}(\mathcal{B})\) we denote the subgroup of \(G\) that fixes each block of \(\mathcal{B}\) set-wise. That is, \(\operatorname{fix}_{G}(\mathcal{B})=\{g\in G:g(B)=B\text{ for all }B\in \mathcal{B}\}\). If \(\mathcal{C}\) is another block system of \(G\) and each block of \(\mathcal{B}\) is contained within a block of \(\mathcal{C}\), we write \(\mathcal{B}\preceq\mathcal{C}\) (and \(\mathcal{B}\prec\mathcal{C}\) if the containments are proper), and say \(\mathcal{B}\) **refines** \(\mathcal{C}\). We denote the induced action of \(G\) on the block system \(\mathcal{B}\) by \(G/\mathcal{B}\), and the action of an element \(g\in G\) on \(\mathcal{B}\) by \(g/\mathcal{B}\). That is, \(g/\mathcal{B}(B)=B^{\prime}\) if and only if \(g(B)=B^{\prime}\), and \(G/\mathcal{B}=\{g/\mathcal{B}:g\in G\}\). If \(\mathcal{B}\preceq\mathcal{C}\), then \(G/\mathcal{B}\) has a block system \(\mathcal{C}/\mathcal{B}\) whose blocks are the sets of blocks of \(\mathcal{B}\) whose unions are blocks of \(\mathcal{C}\).
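Continuing the same illustrative encoding (permutations as tuples, blocks as frozensets; these conventions are ours, not the paper's), \(\operatorname{fix}_{G}(\mathcal{B})\) and the quotient action \(g/\mathcal{B}\) translate directly into code:

```python
def fix(G, blocks):
    """fix_G(B): the elements of G fixing each block setwise."""
    return [g for g in G
            if all(frozenset(g[x] for x in B) == B for B in blocks)]

def quotient(g, blocks):
    """g/B: the induced permutation of block indices."""
    index = {B: i for i, B in enumerate(blocks)}
    return tuple(index[frozenset(g[x] for x in blocks[i])]
                 for i in range(len(blocks)))

# With G = <x> on Z_4 as in the previous sketch and
# blocks = [frozenset({0, 2}), frozenset({1, 3})],
# fix(G, blocks) returns the identity and x^2, while
# quotient(x, blocks) swaps the two block indices.
```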
It is easy to see that \(\operatorname{fix}_{G}(\mathcal{B})\) is a normal subgroup of \(G\).
**Definition 1.4**.: Let \(G\leq S_{n}\) with orbit \(\mathcal{O}\), and \(g\in G\). Then \(g\) induces a permutation on \(\mathcal{O}\) by restricting the domain of \(g\) to \(\mathcal{O}\). We denote the resulting permutation in \(S_{\mathcal{O}}\) by \(g^{\mathcal{O}}\). The group \(G^{\mathcal{O}}=\{g^{\mathcal{O}}:g\in G\}\) is the **transitive constituent** of \(G\) on \(\mathcal{O}\).
We now give the necessary definitions to define \(5/2\)-closed groups.
**Definition 1.5**.: It was shown in [8, Lemma 1.7] that for \(G\leq S_{n}\) transitive with a normal block system \(\mathcal{B}\), for each \(B\in\mathcal{B}\) there is a maximal subgroup of \(\mathrm{fix}_{G}(\mathcal{B})\), denoted \(\mathrm{WStab}_{G}(B)\) (the **wreath** stabilizer of \(B\) in \(G\)), such that \(\mathrm{WStab}_{G}(B)^{B}=1\) and for every other block \(B^{\prime}\in\mathcal{B}\), \(\mathrm{WStab}_{G}(B)^{B^{\prime}}\) is either transitive or trivial.
It was also shown in [8, Lemma 1.9] that \(\mathrm{WStab}_{G}(B)\) behaves exactly like a "stabilizer" in that the conjugate by \(g\in G\) of the wreath stabilizer of \(B\) is the wreath stabilizer of \(g(B)\) in \(G\).
**Definition 1.6**.: Let \(G\) be a transitive group that has a normal block system \(\mathcal{B}\). Define a relation \(\equiv\) on \(\mathcal{B}\) by \(B\equiv B^{\prime}\) if and only if \(\mathrm{WStab}_{G}(B)=\mathrm{WStab}_{G}(B^{\prime})\).
It was shown in [8, Lemma 1.11] that \(\equiv\) is an equivalence relation on \(\mathcal{B}\), and that the set of unions of equivalence classes of \(\equiv\) is a block system of \(G\) which is refined by \(\mathcal{B}\). Note that \(B\not\equiv B^{\prime}\) means that \(\mathrm{WStab}_{G}(B)^{B^{\prime}}\) is transitive.
**Definition 1.7**.: We call \(\equiv\) the \(\mathcal{B}\)**-restricting equivalence relation of \(G\)**, and the block system \(\mathcal{E}\) formed by the unions of the equivalence classes of \(\equiv\) the \(\mathcal{B}\)**-fixer block system of \(G\)**.
**Notation 1.8**.: _Let \(g\in S_{n}\) and \(X\subseteq\mathbb{Z}_{n}\) (we think of \(S_{n}\) as permuting the elements of \(\mathbb{Z}_{n}\)) such that \(g(X)=X\). By \(g|_{X}\) we mean the element of \(S_{n}\) such that \(g|_{X}(y)=g(y)\) if \(y\in X\), while \(g|_{X}(y)=y\) if \(y\notin X\)._
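A one-line rendering of Notation 1.8 in the same illustrative encoding as the earlier sketches:

```python
def restrict(g, X):
    """g|_X: agree with g on X, identity elsewhere (assumes g(X) = X)."""
    X = set(X)
    return tuple(g[i] if i in X else i for i in range(len(g)))
```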
**Definition 1.9**.: Let \(G\leq S_{n}\) be transitive. For \(H\leq G\) transitive with normal block system \(\mathcal{B}_{H}\) let \(\mathcal{E}_{H,\mathcal{B}}\) be the \(\mathcal{B}_{H}\)-fixer block system of \(H\). Suppose that for every transitive subgroup \(H\leq G\), every normal block system \(\mathcal{B}_{H}\) of \(H\), and every \(g\in G\) that fixes each block of \(\mathcal{B}_{H}\) contained in \(E\in\mathcal{E}_{H,\mathcal{B}}\) setwise, we have \(g|_{E}\in G\). We will then say that \(G\) is \(5/2\)**-closed**. For a group \(G\), the \(5/2\)**-closure of \(G\)**, denoted \(G^{(5/2)}\), is the intersection of all \(5/2\)-closed groups which contain \(G\).
The set of \(5/2\)-closed groups was introduced in [8], where their elementary properties were studied. In particular it was shown that the automorphism groups of vertex-transitive digraphs are \(5/2\)-closed [8, Theorem 3.6], and a sufficient condition for the quotient of a \(5/2\)-closed group to be \(5/2\)-closed was given in [8, Theorem 2.3]. We next continue the development of \(5/2\)-closed groups by giving an explicit method for constructing the \(5/2\)-closure of a transitive permutation group, which we will need later.
## 2. Constructing \(G^{(5/2)}\)
Automorphism groups of digraphs, and hence \(2\)-closed groups, can contain "unexpected" automorphisms, and consequently can be difficult to construct. We next show that, at least intuitively, \(5/2\)-closed groups are much easier to construct. We also show that, like \(2\)-closed groups (see [27, Theorem 4.11]), the \(5/2\)-closure of a transitive group \(G\) has the same blocks as \(G\) itself. We begin with some preliminary results.
**Lemma 2.1**.: _Let \(G\leq S_{n}\) be transitive with normal block systems \(\mathcal{B}\) and \(\mathcal{C}\), with \(\mathcal{E}\) the \(\mathcal{C}\)-fixer block system of \(G\). Then either \(\mathcal{B}\preceq\mathcal{E}\) or \(\mathcal{C}\prec\mathcal{B}\)._
Proof.: The result is trivial if \(\mathcal{E}=\{\mathbb{Z}_{n}\}\), so we assume \(\mathcal{E}\) has at least two blocks. Let \(x\in\mathbb{Z}_{n}\), and let \(B_{x}\in\mathcal{B}\), \(C_{x}\in\mathcal{C}\), and \(E_{x}\in\mathcal{E}\) be the blocks that contain \(x\). Then \(\mathrm{WStab}_{G}(C_{x})^{E_{x}}=1\), while \(\mathrm{WStab}_{G}(C_{x})^{E}\) has orbits that are blocks of \(\mathcal{C}\) for every \(E\in\mathcal{E}\) with \(E\neq E_{x}\). If \(B_{x}\subseteq E_{x}\) then \(\mathcal{B}\preceq\mathcal{E}\) and we are finished. Otherwise, there is some \(b\in B_{x}\) that is not contained in \(E_{x}\). As \(\operatorname{WStab}_{G}(C_{x})\leq\operatorname{Stab}_{G}(x)\), the orbit of \(\operatorname{Stab}_{G}(x)\) that contains \(b\) contains the block \(C_{b}\) of \(\mathcal{C}\) that contains \(b\), and is not contained in \(E_{x}\). Also, as \(B_{x}\) is a union of orbits of \(\operatorname{Stab}_{G}(x)\) by [4, Exercise 1.5.9], we have \(C_{b}\subseteq B_{x}\). Let \(y\in B_{x}\cap E_{x}\). Then there exists \(g\in G\) such that \(g(b)=y\); as \(B_{x}\) is a block and \(g(b)=y\in g(B_{x})\cap B_{x}\), we have \(g(B_{x})=B_{x}\), so \(g(C_{b})\subseteq B_{x}\) is a block of \(\mathcal{C}\) containing \(y\). Thus, for every element \(z\in B_{x}\cap E_{x}\), the block of \(\mathcal{C}\) that contains \(z\) is contained in \(B_{x}\). Hence \(\mathcal{C}\preceq\mathcal{B}\), and, as \(x\not\in C_{b}\), \(\mathcal{C}\prec\mathcal{B}\).
**Definition 2.2**.: For \(B\subseteq\mathbb{Z}_{n}\) and \(G\leq S_{n}\), the setwise stabilizer of \(B\) in \(G\) is \(\operatorname{Stab}_{G}(B)=\{g\in G:g(B)=B\}\), and is a subgroup of \(G\).
**Lemma 2.3**.: _Let \(G\leq S_{n}\) be transitive, and \(H\leq G\) such that \(H\) is transitive with a normal block system \(\mathcal{C}\). Let \(\mathcal{E}\) be the \(\mathcal{C}\)-fixing block system of \(H\). A partition \(\mathcal{B}\) of \(\mathbb{Z}_{n}\) is a block system of \(G\) if and only if it is a block system of \(F=\langle G,\gamma|_{E}:\gamma\in\operatorname{Stab}_{G}(E)\) and fixes each block of \(\mathcal{C}\) in \(E,E\in\mathcal{E}\rangle\)._
Proof.: As \(G\leq F\), a block system of \(F\) is a block system of \(G\).
For the converse, we may assume that \(\mathcal{E}\neq\{\mathbb{Z}_{n}\}\), since if \(\mathcal{E}=\{\mathbb{Z}_{n}\}\) then \(G=F\). Let \(\gamma\in\operatorname{Stab}_{G}(E)\) fix each block of \(\mathcal{C}\) contained in \(E\in\mathcal{E}\). We apply Lemma 2.1 to \(H\) and consider the two conclusions of this lemma separately. Suppose \(\mathcal{B}\preceq\mathcal{E}\). Then for each block \(B\in\mathcal{B}\) and each block \(E\in\mathcal{E}\), we have either \(B\subseteq E\) or \(B\cap E=\emptyset\). In the former case, \((\gamma|_{E})(B)=\gamma(B)\), while in the latter case, \((\gamma|_{E})(B)=B\). We conclude that \(\gamma|_{E}\) maps blocks of \(\mathcal{B}\) to blocks of \(\mathcal{B}\). If \(\mathcal{C}\prec\mathcal{B}\), then \(\gamma\) fixes every block of \(\mathcal{B}\) contained in \(E\), and so \((\gamma|_{E})(B)=B\) for every \(B\in\mathcal{B}\). Then \(\mathcal{B}\) is preserved by every element of a generating set of \(F\), and the result follows.
We now wish to fix notation for the next result, which gives a method of constructing the \(5/2\)-closure of a transitive group. Let \(G\leq S_{n}\) be transitive, and \(X=\{\mathcal{B}_{i}:1\leq i\leq t\}\) the set of all normal block systems of some transitive subgroup \(H_{i}\leq G\) (the transitive subgroup \(H_{i}\) may vary as \(i\) varies), with \(\mathcal{B}_{i}\)-fixer block systems \(\mathcal{E}_{i}\) of \(H_{i}\). Let

\[\operatorname{clo}_{5/2}(G)=\langle G,\gamma|_{E_{i}}:\gamma\in\operatorname{Stab}_{G}(E_{i})\text{ fixes each block of }\mathcal{B}_{i}\text{ contained in }E_{i},\ E_{i}\in\mathcal{E}_{i},\ 1\leq i\leq t\rangle.\]
For an integer \(s\), set \(\operatorname{clo}_{5/2}^{s}(G)\) to be \(\operatorname{clo}_{5/2}(\operatorname{clo}_{5/2}(\cdots(\operatorname{clo}_{5 /2}(G))\ldots))\) (where \(\operatorname{clo}_{5/2}\) is applied \(s\) times). We note that each time \(\operatorname{clo}_{5/2}\) is applied, we do not assume that \(X\) or \(t\) is the same.
The next result is the main result of this section, and shows how to construct \(G^{(5/2)}\) from \(G\), as well as shows that the block systems of \(G\) and \(G^{(5/2)}\) are the same.
**Theorem 2.4**.: _There exists a positive integer \(s\) such that \(\operatorname{clo}_{5/2}^{s}(G)=G^{(5/2)}\). Additionally, the blocks of \(G\) and \(G^{(5/2)}\) are the same._
Proof.: As \(S_{n}\) is finite, there exists a smallest positive integer \(s\) such that \(\operatorname{clo}_{5/2}^{s+1}(G)=\operatorname{clo}_{5/2}^{s}(G)\). Let \(\mathcal{B}\) be a block system of \(G\). Inductively applying Lemma 2.3, we see that \(\mathcal{B}\) is a block system of \(\operatorname{clo}_{5/2}(G)\). Inductively applying the previous fact, we see that \(\mathcal{B}\) is a block system of \(\operatorname{clo}_{5/2}^{s}(G)\). As \(\operatorname{clo}_{5/2}^{s}(G)=\operatorname{clo}_{5/2}^{s+1}(G)\), the block systems \(\mathcal{B}_{1},\ldots,\mathcal{B}_{t}\) for which there is a transitive subgroup \(H_{i}\leq\operatorname{clo}_{5/2}^{s}(G)\) for which \(\mathcal{B}_{i}\) is a normal block system of \(H_{i}\) are the block systems \(\mathcal{C}_{1},\ldots,\mathcal{C}_{t}\) for which there is a transitive subgroup \(K_{i}\) of \(\operatorname{clo}_{5/2}^{s+1}(G)\) with \(\mathcal{C}_{i}\) a normal block system of \(K_{i}\). It then follows by the definition of \(G^{(5/2)}\) that \(\operatorname{clo}_{5/2}^{s}(G)=G^{(5/2)}\). To finish, we have shown
that a block of \(G\) is a block of \(G^{(5/2)}\). As \(G\leq G^{(5/2)}\), a block of \(G^{(5/2)}\) is a block of \(G\). Thus the blocks of \(G\) and \(G^{(5/2)}\) are the same.
Theorem 2.4 gives a theoretical way of computing the \(5/2\)-closure of any transitive group \(G\). First, find all normal block systems of all transitive subgroups of \(G\). Then calculate \(\operatorname{WStab}_{G}(B)\) for each \(B\in\mathcal{B}\) and each normal block system \(\mathcal{B}\) found in the first step. Choose a normal block system \(\mathcal{B}\) of some subgroup \(H\leq G\) and calculate the \(\mathcal{B}\)-fixer block system \(\mathcal{E}\) of \(H\), along with the subgroup \(F_{\mathcal{B},E}\) of \(G\) which fixes every block of \(\mathcal{B}\) contained in \(E\in\mathcal{E}\). Then restrict these elements to the appropriate \(E\in\mathcal{E}\). Repeat this for all block systems previously found. Then \(\operatorname{clo}_{5/2}(G)\) is the group generated by \(G\) together with all of the additional elements of \(S_{n}\) that have been found. Repeat this procedure until no new elements are produced; the sketch below records the skeleton of this iteration.
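In the following schematic sketch, the helpers `normal_systems(G)` (yielding pairs \((H,\mathcal{B})\) with \(H\leq G\) transitive and \(\mathcal{B}\) a normal block system of \(H\)), `fixer_system(H, blocks)` (returning the \(\mathcal{B}\)-fixer block system of \(H\)), and `generate(gens)` (closing a list of permutation tuples under composition) are hypothetical placeholders for the computations just described; they are not functions from the paper.

```python
def clo_5_2(G):
    """One application of clo_{5/2}; blocks are frozensets of points."""
    n = len(next(iter(G)))
    extra = []
    for H, blocks in normal_systems(G):        # hypothetical helper
        for E in fixer_system(H, blocks):      # hypothetical helper
            inner = [B for B in blocks if B <= E]  # blocks of B inside E
            for g in G:
                stabilizes_E = frozenset(g[x] for x in E) == E
                if stabilizes_E and all(
                        frozenset(g[x] for x in B) == B for B in inner):
                    # restrict g to E, identity elsewhere
                    extra.append(tuple(g[i] if i in E else i
                                       for i in range(n)))
    return generate(list(G) + extra)           # hypothetical helper

def closure_5_2(G):
    """Iterate until stable; by Theorem 2.4 the result is G^(5/2)."""
    while True:
        F = clo_5_2(G)
        if set(F) == set(G):
            return G
        G = F
```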
## 3. The Cayley isomorphism problem
In this section, we give a very brief overview of the Cayley isomorphism problem, contenting ourselves with only the information that is necessary for this paper. References are given at the end of this section to publications where much more information can be found.
**Definition 3.1**.: Let \(G\) be a group and \(S\subseteq G\). Define a **Cayley digraph of \(G\)**, denoted \(\operatorname{Cay}(G,S)\), to be the digraph with vertex set \(V(\operatorname{Cay}(G,S))=G\) and arc set \(A(\operatorname{Cay}(G,S))=\{(g,gs):g\in G,s\in S\}\). We call \(S\) the **connection set of \(\operatorname{Cay}(G,S)\)**.
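A concrete sketch of Definition 3.1 for \(G=\mathbb{Z}_{n}\), written additively (the function name is our own):

```python
def cayley_arcs(n, S):
    """Arc set of Cay(Z_n, S): arcs (g, g + s) for g in Z_n, s in S."""
    return {(g, (g + s) % n) for g in range(n) for s in S}

# Cay(Z_8, {1, 3, 5, 7}) is a unit circulant digraph in the sense of Section 6.
print(len(cayley_arcs(8, {1, 3, 5, 7})))  # 32 arcs: 8 vertices, out-degree 4
```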
The Cayley isomorphism problem for Cayley graphs began in 1967 when Ádám conjectured [1] that two Cayley graphs \(\operatorname{Cay}(\mathbb{Z}_{n},S)\) and \(\operatorname{Cay}(\mathbb{Z}_{n},T)\) are isomorphic if and only if there is an element \(m\in\mathbb{Z}_{n}^{*}\) (a **multiplier**) such that \(mS=T\). Note that \(mS=T\) is equivalent to \(\operatorname{Cay}(\mathbb{Z}_{n},S)\) and \(\operatorname{Cay}(\mathbb{Z}_{n},T)\) being isomorphic by a group automorphism of \(\mathbb{Z}_{n}\). Also, the image of a Cayley digraph of \(G\) under a group automorphism of \(G\) is a Cayley digraph of \(G\) [9, Lemma 1.2.15]. So Ádám was essentially conjecturing that the smallest list of isomorphisms (the group automorphisms of \(G\)) to check for isomorphism would determine isomorphism between Cayley graphs of \(\mathbb{Z}_{n}\). Elspas and Turner [11] quickly showed in 1970 that this conjecture is false for both graphs and digraphs, and the conjecture evolved into the problem of determining which groups \(G\) have the property that any two Cayley (di)graphs of \(G\) are isomorphic if and only if they are isomorphic by a group automorphism of \(G\).
**Definition 3.2**.: Let \(G\) be a group, and \(S\subseteq G\). We say \(\operatorname{Cay}(G,S)\) is a **CI-digraph** of \(G\) if whenever \(T\subseteq G\) with \(\operatorname{Cay}(G,S)\cong\operatorname{Cay}(G,T)\), there is \(\alpha\in\operatorname{Aut}(G)\) such that \(\alpha(\operatorname{Cay}(G,S))=\operatorname{Cay}(G,T)\).
**Definition 3.3**.: Let \(G\) be a group and \(g\in G\). Define \(g_{L}\colon G\to G\) by \(g_{L}(x)=gx\). The map \(g_{L}\) is a **left translation of \(G\)**. The **left regular representation of \(G\)**, denoted \(G_{L}\), is \(G_{L}=\{g_{L}:g\in G\}\).
It is straightforward to show \(G_{L}\) is a group, and \(G_{L}\leq\operatorname{Aut}(\operatorname{Cay}(G,S))\) for every \(S\subseteq G\).
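This claim is easy to verify computationally in the cyclic case; the following check, not from the paper, confirms that each left translation of \(\mathbb{Z}_{8}\) preserves the arcs of a unit circulant digraph.

```python
def cayley_arcs(n, S):
    return {(g, (g + s) % n) for g in range(n) for s in S}

n, S = 8, {1, 3, 5, 7}
arcs = cayley_arcs(n, S)
for g in range(n):  # g_L: x -> g + x (mod n)
    image = {((g + u) % n, (g + v) % n) for (u, v) in arcs}
    assert image == arcs  # (g + u, g + u + s) is again an arc
print("every g_L is an automorphism of Cay(Z_8, S)")
```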
Of course, one can also ask the same question for other types of combinatorial objects once we have a natural generalization of the notion of a Cayley graph for them. Sabidussi [23, Lemma 4] has shown that a graph \(\Gamma\) is isomorphic to a Cayley graph of \(G\) if and only if \(\operatorname{Aut}(\Gamma)\) contains a regular subgroup isomorphic
to \(G\). Hence the next definition extends the notion of a Cayley graph to other combinatorial objects.
**Definition 3.4**.: A **Cayley object**\(X\) of a group \(G\) in a class \(\mathcal{K}\) of combinatorial objects is one in which \(G_{L}\leq\operatorname{Aut}(X)\), the automorphism group of \(X\).
We next generalize the notion of a CI-digraph of \(G\) to arbitrary combinatorial objects.
**Definition 3.5**.: For a Cayley object \(X\) of \(G\) in some class of combinatorial objects \(\mathcal{K}\), we say that \(X\) is a **CI-object of \(G\)** if and only if whenever \(X^{\prime}\) is another Cayley object of \(G\) in \(\mathcal{K}\), then \(X\) and \(X^{\prime}\) are isomorphic if and only if \(\alpha(X)=X^{\prime}\) for some \(\alpha\in\operatorname{Aut}(G)\).
Babai characterized CI-objects of a group \(G\) [2, Lemma 3.1].
**Lemma 3.6**.: _Let \(X\) be a Cayley object of \(G\) in some class \(\mathcal{K}\) of combinatorial objects. Then the following are equivalent:_
1. \(X\) _is a CI-object of_ \(G\) _in_ \(\mathcal{K}\)_,_
2. _whenever_ \(\phi\in S_{G}\) _such that_ \(\phi^{-1}G_{L}\phi\leq\operatorname{Aut}(X)\)_,_ \(G_{L}\) _and_ \(\phi^{-1}G_{L}\phi\) _are conjugate in_ \(\operatorname{Aut}(X)\)_._
**Definition 3.7**.: Let \(G\) be a group and \(\mathcal{K}\) a class of combinatorial objects. We say that \(G\) is a **CI-group with respect to \(\mathcal{K}\)** if every Cayley object of \(G\) in \(\mathcal{K}\) is a CI-object of \(G\).
In this paper, we will mainly be concerned with Cayley objects of a cyclic group. Such Cayley objects are called **circulants**.
Much work on the problem of determining which groups are CI-groups with respect to graphs has been done. We refer the reader to [19] for a survey paper on the problem, as well as to [10, Theorem 5.2] for the current list of possible CI-groups with respect to digraphs. See also [9, Chapter 7].
## 4. Some permutation group theory
In this section, we will summarize much of the permutation group theory in the literature which is directly applicable to the problems considered here. Also, we prove one additional result which will be useful. We start with some definitions.
**Definition 4.1**.: Let \(n=p_{1}^{a_{1}}p_{2}^{a_{2}}\cdots p_{r}^{a_{r}}\) be the prime power decomposition of \(n\), and define \(\Omega\colon\mathbb{N}\to\mathbb{N}\) by \(\Omega(n)=\sum_{i=1}^{r}a_{i}\) (so \(m=\Omega(n)\) is the number of prime divisors of \(n\) with repetition allowed). A transitive group \(G\leq S_{n}\) is \(m\)**-step imprimitive** if there exists a sequence of block systems \(\mathcal{B}_{0}\prec\mathcal{B}_{1}\prec\ldots\prec\mathcal{B}_{m}\) of \(G\), where \(\mathcal{B}_{0}\) is the set of all singleton sets and \(\mathcal{B}_{m}\) is \(\{\mathbb{Z}_{n}\}\), and if \(B_{i+1}\in\mathcal{B}_{i+1}\) and \(B_{i}\in\mathcal{B}_{i}\), then \(|B_{i+1}|/|B_{i}|\) is prime, \(0\leq i\leq m-1\). (Technically this last condition is not strictly necessary as \(\mathcal{B}_{i+1}\) is not, by definition, equal to \(\mathcal{B}_{i}\), but we list it anyway to emphasize the property). If, in addition, each \(\mathcal{B}_{i}\) is normal, then we say that \(G\) is **normally \(m\)-step imprimitive**. We call \(\mathcal{B}_{0}\prec\mathcal{B}_{1}\prec\ldots\prec\mathcal{B}_{m}\) an \(m\)**-step imprimitivity sequence of \(G\)**, and a **normal \(m\)-step imprimitivity sequence of \(G\)** if each \(\mathcal{B}_{i}\) is a normal block system of \(G\).
**Definition 4.2**.: Let \(G\leq S_{X}\) and \(H\leq S_{Y}\). Define the **wreath product of \(G\) and \(H\)**, denoted \(G\wr H\), to be the set of all permutations of \(X\times Y\) of the form \((x,y)\mapsto(g(x),h_{x}(y))\), where \(g\in G\) and each \(h_{x}\in H\).
It is straightforward to show that \(G\wr H\) is a group. See [9, §4.2] for examples concerning the wreath product of groups.
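A sketch of the action in Definition 4.2, in the tuple encoding of the earlier sketches (the helper name is ours): an element of \(G\wr H\) is a pair \((g,(h_{x})_{x\in X})\) acting by \((x,y)\mapsto(g(x),h_{x}(y))\).

```python
def wreath_element(g, h_family):
    """h_family[x] is the permutation h_x of Y attached to the point x in X."""
    def act(point):
        x, y = point
        return (g[x], h_family[x][y])
    return act

# Example in S_2 wr S_2: g swaps X = {0, 1}; h_0 swaps Y, h_1 is the identity.
w = wreath_element((1, 0), [(1, 0), (0, 1)])
print(w((0, 0)), w((1, 1)))  # (1, 1) (0, 1)
```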
Let \(\langle x\rangle=(\mathbb{Z}_{n})_{L}\) and \(\langle y\rangle\leq S_{n}\) a conjugate of \(\langle x\rangle\). There are many results in the literature dealing with how to conjugate \(\langle y\rangle\) by an element of \(\langle x,y\rangle\) to make it "closer" to \(\langle x\rangle\). In view of Lemma 3.6, these are exactly the types of results we would want when considering the CI-problem for cyclic groups and any class of combinatorial object. Specifically, applying (in this order) [20, Theorem 4.9], [20, Theorem 1.8], [5, Lemma 19], and [5, Lemma 24], one obtains the following result:
**Lemma 4.3**.: _Let \(n\) be a positive integer with prime power decomposition \(n=p_{1}^{a_{1}}p_{2}^{a_{2}}\cdots p_{r}^{a_{r}}\) where each \(a_{i}\geq 1\), \(1\leq i\leq r\), and \(p_{1}>p_{2}>\ldots>p_{r}\). Let \(\langle x\rangle=(\mathbb{Z}_{n})_{L}\), and \(y\in S_{n}\) generate a regular cyclic subgroup. Let \(m=\Omega(n)\). Then there exists \(\delta\in\langle x,y\rangle\) such that \(\langle x,\delta^{-1}y\delta\rangle\) satisfies the following conditions:_
1. \(\langle x,\delta^{-1}y\delta\rangle\) _is normally_ \(m\)_-step imprimitive with normal imprimitivity sequence_ \(\mathcal{B}_{0}\prec\mathcal{B}_{1}\prec\cdots\prec\mathcal{B}_{m}\) _and_ \(|B_{i+1}|/|B_{i}|\geq|B_{i+2}|/|B_{i+1}|\)_,_ \(0\leq i\leq m-2\)_,_
2. _Set_ \(b_{0}=1\)_, and for_ \(1\leq i\leq r\)_, set_ \(b_{i}=\sum_{j=1}^{i}a_{j}\)_. If_ \(b_{i}-1\leq k\leq b_{i+1}-1\)_,_ \(B_{k}\in\mathcal{B}_{k}\)_,_ \(B_{k+1}\in\mathcal{B}_{k+1}\)_, then_ \(|B_{k+1}|/|B_{k}|=p_{i+1}\)_,_
3. \(\langle x,\delta^{-1}y\delta\rangle\) _is solvable,_
4. \(\langle x,\delta^{-1}y\delta\rangle\) _is permutation isomorphic to a subgroup of_ \[\operatorname{AGL}(1,p_{r})^{[a_{r}]}\wr\operatorname{AGL}(1,p_{r-1})^{[a_{ r-1}]}\wr\ldots\wr\operatorname{AGL}(1,p_{1})^{[a_{1}]},\] _where_ \(\operatorname{AGL}(1,p_{i})^{[a_{i}]}\) _is the wreath product of_ \(\operatorname{AGL}(1,p_{i})\) _with itself_ \(a_{i}\) _times, and_
5. _if_ \(p_{1}\) _is odd, then_ \(\operatorname{fix}_{\langle x,\delta^{-1}y\delta\rangle}(\mathcal{B}_{1})\) _has a Sylow_ \(p_{1}\)_-subgroup of order at least_ \(p_{1}^{2}\)_, or_ \(\operatorname{fix}_{\langle x,\delta^{-1}y\delta\rangle}(\mathcal{B}_{a_{1}})\) _has a unique cyclic Sylow_ \(p_{1}\)_-subgroup_ \(P=\langle x^{n/p_{1}^{a_{1}}}\rangle\)_,_ \(P\leq\operatorname{Z}(\langle x,\delta^{-1}y\delta\rangle)\)_, and_ \(\operatorname{fix}_{\langle x,\delta^{-1}y\delta\rangle}(\mathcal{B}_{a_{1}})=P\)_._
Note that in (5) if \(p_{1}\) is not odd, then \(r=1\) and \(n\) is a power of \(2\). Our next result removes the hypothesis that \(p_{1}\) is odd in the last part of the previous result.
**Lemma 4.4**.: _Let \(k\geq 2\) and \(x,y\in S_{2^{k}}\) such that \(\langle x\rangle\) and \(\langle y\rangle\) are regular cyclic subgroups and \(\langle x,y\rangle\) is a \(2\)-group. Let \(\mathcal{B}_{0}\prec\mathcal{B}_{1}\prec\cdots\prec\mathcal{B}_{k}\) be a normal \(k\)-step imprimitivity sequence of \(\langle x,y\rangle\). Then \(\langle x,y\rangle=\langle x\rangle\) or \(|\operatorname{fix}_{\langle x,y\rangle}(\mathcal{B}_{1})|\geq 4\)._
Proof.: Notice that \(x^{2^{k-1}},y^{2^{k-1}}\in\operatorname{fix}_{\langle x,y\rangle}(\mathcal{B}_{1})\). Additionally, as both \(\langle x^{2^{k-1}}\rangle\) and \(\langle y^{2^{k-1}}\rangle\) are semiregular subgroups of order \(2\) that fix each block of \(\mathcal{B}_{1}\) and permute the elements of a block of \(\mathcal{B}_{1}\) as a transposition, \(\langle x^{2^{k-1}}\rangle=\langle y^{2^{k-1}}\rangle\). Let \(1\leq j\leq k\) be maximum such that \(\langle x^{2^{k-j}}\rangle=\langle y^{2^{k-j}}\rangle\). By the previous argument, \(j\) exists. The result follows if \(j=k\), so we assume without loss of generality that \(1\leq j<k\). Then \(\langle x^{2^{k-j}}\rangle=\langle y^{2^{k-j}}\rangle\) commutes with every element of \(\langle x\rangle\) and \(\langle y\rangle\), and so \(x^{2^{k-j}}\in\operatorname{Z}(\langle x,y\rangle)\). Additionally, as \(\langle x^{2^{k-j}}\rangle\leq\operatorname{Z}(\langle x,y\rangle)\), we see that \(\operatorname{Stab}_{\langle x,y\rangle}(B_{j})^{B_{j}}\) commutes with \(\langle x^{2^{k-j}}\rangle^{B_{j}}\) for every \(B_{j}\in\mathcal{B}_{j}\). As a transitive abelian group is self-centralizing [9, Corollary 2.2.18], we see that \(\operatorname{Stab}_{\langle x,y\rangle}(B_{j})^{B_{j}}=\langle x^{2^{k-j}}\rangle^{B_{j}}\) for every \(B_{j}\in\mathcal{B}_{j}\). Then \(\langle x,y\rangle\) is permutation isomorphic to a subgroup of \((\langle x,y\rangle/\mathcal{B}_{j})\wr(\mathbb{Z}_{2^{j}})_{L}\) by the Embedding Theorem [9, Theorem 4.3.1].

Now, by arguments at the beginning of the proof of this result, \(\langle x^{2^{k-j-1}}\rangle/\mathcal{B}_{j}=\langle y^{2^{k-j-1}}\rangle/\mathcal{B}_{j}\). Then \(1\neq x^{2^{k-j-1}}y^{2^{k-j-1}}\in\operatorname{fix}_{\langle x,y\rangle}(\mathcal{B}_{j})\) as, by the maximality of \(j\), \(\langle x^{2^{k-j-1}}\rangle\neq\langle y^{2^{k-j-1}}\rangle\). As \(\operatorname{Stab}_{\langle x,y\rangle}(B_{j})^{B_{j}}=\langle x^{2^{k-j}}\rangle^{B_{j}}\) for every \(B_{j}\in\mathcal{B}_{j}\), we see that

\[\langle x^{2^{k-j-1}}y^{2^{k-j-1}}\rangle^{B_{j}}\leq\langle x^{2^{k-j}}\rangle^{B_{j}}\]

for every \(B_{j}\in\mathcal{B}_{j}\). Then there exists \(a\in\mathbb{Z}\) such that \([(x^{2^{k-j}})^{a}(x^{2^{k-j-1}}y^{2^{k-j-1}})]^{B_{j}}=1\) for some fixed \(B_{j}\in\mathcal{B}_{j}\). But as \(\langle x^{2^{k-j-1}}\rangle\neq\langle y^{2^{k-j-1}}\rangle\), it cannot be the case that \((x^{2^{k-j}})^{a}(x^{2^{k-j-1}}y^{2^{k-j-1}})=1\). Then there is \(B^{\prime}_{j}\in\mathcal{B}_{j}\) such that

\[[(x^{2^{k-j}})^{a}(x^{2^{k-j-1}}y^{2^{k-j-1}})]^{B^{\prime}_{j}}\neq 1.\]

As \(\operatorname{Stab}_{\langle x,y\rangle}(B_{j})^{B_{j}}=\langle x^{2^{k-j}}\rangle^{B_{j}}\) for every \(B_{j}\in\mathcal{B}_{j}\), \(\operatorname{Stab}_{\langle x,y\rangle}(B^{\prime}_{j})^{B^{\prime}_{j}}\) is a regular cyclic subgroup of order dividing \(2^{j}\), and so contains a unique subgroup of order \(2\). We conclude that if \((x^{2^{k-j}})^{a}(x^{2^{k-j-1}}y^{2^{k-j-1}})\) has order \(2^{\ell}\), then

\[[(x^{2^{k-j}})^{a}(x^{2^{k-j-1}}y^{2^{k-j-1}})]^{2^{\ell-1}}\in\operatorname{fix}_{\langle x,y\rangle}(\mathcal{B}_{1})\]

has order \(2\) and has a fixed point. Thus \(\operatorname{fix}_{\langle x,y\rangle}(\mathcal{B}_{1})\neq\langle x^{2^{k-1}}\rangle\), and the result follows.
## 5. Configurations and \(9/8\)-closed groups
In this section we consider a smaller class of groups than \(5/2\)-closed groups, which we will call \(9/8\)-closed groups (we will also define other classes used later). The reasoning for the various choices of names of families of groups is as follows. In Wielandt's \(k\)-closure hierarchy, the class of groups always becomes larger as \(k\) grows by [27, Theorem 5.10]. The \(1\)-closed groups are just the symmetric groups [27, Theorem 5.11]. So, following Wielandt, we want the family of groups to "grow" as the number grows. We put "grow" in quotes as we mean this intuitively, not necessarily absolutely: the set of all \(2\)-closed groups is by and large "bigger" than the set of \(9/8\)-closed groups, but not every \(9/8\)-closed group is \(2\)-closed. For classes outside of Wielandt's hierarchy, though, inclusion does hold. We use the equivalence relation \(\equiv\) as defined in Definition 1.6 but impose some additional conditions (thus maintaining inclusion).
**Definition 5.1**.: Let \(G\leq S_{n}\) be transitive and \(5/2\)-closed. We will say that \(G\) is \(9/8\)-closed if whenever \(H\leq G\) is transitive with normal block system \(\mathcal{B}\), then the \(\mathcal{B}\)-fixing block system \(\mathcal{E}_{H,\mathcal{B}}\) of \(H\) is \(\{\mathbb{Z}_{n}\}\). Let \(K_{H}\leq G\) be the largest subgroup that has \(\mathcal{B}\) as a block system. We say \(G\) is \(5/4\)-closed if whenever \(H\leq G\) is transitive with normal block system \(\mathcal{B}\), then \(\mathcal{E}_{K_{H},\mathcal{B}}\) is either \(\mathcal{B}\) or \(\{\mathbb{Z}_{n}\}\). We say \(G\) is \(3/2\)-closed if \(G\) is \(5/4\)-closed and whenever \(\mathcal{E}_{K_{H},\mathcal{B}}=\mathcal{B}\), then \(\operatorname{fix}_{K_{H}}(\mathcal{B})^{B}=S_{B}\) for every \(B\in\mathcal{B}\).
Notice that in the above definition, for \(5/4\)-closed and \(3/2\)-closed groups we do not compute the \(\mathcal{B}\)-fixing block system of \(H\), but of the largest subgroup \(K_{H}\) of \(G\) which has \(\mathcal{B}\) as a block system.
Clearly a \(9/8\)-closed group is \(5/4\)-closed, and a \(5/4\)-closed group is \(3/2\)-closed. Also observe that a transitive subgroup of a \(9/8\)-closed group is also \(9/8\)-closed. At present, we have no applications for \(5/4\)-closed groups, but they are an obvious class between \(9/8\)-closed and \(3/2\)-closed groups. We will consider \(3/2\)-closed groups in the next section.
Our next goal is to define most of the different combinatorial objects which we will be considering in this section. The following definitions and results are mostly taken from [8].
**Definition 5.2**.: Let \(X\) be a finite set and \(T\) a subset of \(\cup_{i=1}^{n}X^{i}\) for some \(n<|X|\). Hence \(T\) is a set of \(k_{i}\)-tuples with elements in \(X\) for some set of integers \(k_{1},\ldots,k_{r}<n\). We will call \(T\) a **tuple system** on \(X\). Let \(S=\{\{u_{1},\ldots,u_{t}\}:(u_{1},\ldots,u_{t})\in T\}\), so \(S\) is the set of coordinates of each element of \(T\), and we call it the **set system corresponding to \(T\)**. Let \(m\geq 0\) be an integer. We say a set system \(S\) is \(m\)-intersecting if for every \(U_{1},U_{2}\in S\) with \(U_{1}\neq U_{2}\), \(|U_{1}\cap U_{2}|\leq m\). A tuple system \(T\) will also be called \(1\)-intersecting if its corresponding set system is \(1\)-intersecting.
**Example 5.1**.: The arc set of a finite digraph \(\Gamma\) is a tuple system, with corresponding set system the set of edges of the underlying simple graph. Also, observe that \(A(\Gamma)\) is \(1\)-intersecting, as two distinct arcs share at most one endpoint.
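A short sketch (ours, not the paper's) checking the \(m\)-intersecting condition of Definition 5.2 for a set system given as an iterable of point sets:

```python
from itertools import combinations

def is_m_intersecting(S, m):
    sets = list({frozenset(U) for U in S})  # distinct members only
    return all(len(U & V) <= m for U, V in combinations(sets, 2))

# The edge set of a 4-cycle is 1-intersecting:
print(is_m_intersecting([{0, 1}, {1, 2}, {2, 3}, {3, 0}], 1))  # True
```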
**Definition 5.3**.: Let \(P\) be a set whose elements are called points, and \(L\) be a set whose elements are called lines. An **incidence relation** is an ordered triple \((P,L,\mathcal{I})\) where \(\mathcal{I}\subseteq P\times L\). We say the point \(p\) is on the line \(\ell\) or the line \(\ell\) contains the point \(p\) if \((p,\ell)\in\mathcal{I}\).
**Definition 5.4**.: A **configuration** is an incidence relation \((P,L,\mathcal{I})\) for which there are positive integers \(q\) and \(k\) such that the following conditions hold:
* each of the points is on exactly \(q\) lines,
* each line contains exactly \(k\) points, and
* two points are on at most one line and two lines contain at most one point in common.
Thus a configuration is a \(1\)-intersecting set system. For more information on configurations, see [12], [22], or [9, Section 6.4].
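As a concrete example, not from the paper, the Fano plane is a configuration with \(q=k=3\); the check below verifies the three conditions of Definition 5.4 directly (the point labeling is an arbitrary choice):

```python
from itertools import combinations

fano = [frozenset(L) for L in
        [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]]

assert all(sum(p in L for L in fano) == 3 for p in range(7))   # q = 3
assert all(len(L) == 3 for L in fano)                          # k = 3
assert all(len(U & V) <= 1 for U, V in combinations(fano, 2))  # 1-intersecting
print("the Fano plane is a configuration with q = k = 3")
```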
**Definition 5.5**.: Let \(T\) be a tuple (set) system. We say \(T\) is **colored** if each tuple (set) in \(T\) has been identified with a color (or integer). Hence there is a function \(f:T\rightarrow\mathbb{N}\) and \(f(t)\) is simply the color of the tuple (set) \(t\in T\).
We may think of an uncolored tuple (set) system \(T\) as a colored tuple (set) system simply by assigning the same color to every element of \(T\).
**Definition 5.6**.: Let \(T\) and \(T^{\prime}\) be colored tuple (set) systems defined on \(X\) and \(Y\), respectively. An **isomorphism** from \(T\) to \(T^{\prime}\) is a bijection \(f:X\to Y\) that preserves tuples (sets) and colors. That is, \(f(t)\in T^{\prime}\) if and only if \(t\in T\) and for every color \(i\), the image of the set of all tuples (sets) in \(T\) colored \(i\) is the set of all tuples (sets) colored \(i^{\prime}\) in \(T^{\prime}\), for some color \(i^{\prime}\). An **automorphism** of \(T\) is an isomorphism from \(T\) to \(T\) that fixes each color, and the set of all automorphisms of \(T\) is the **automorphism group** of \(T\), denoted \(\operatorname{Aut}(T)\). A colored tuple (set) system \(T\) is **point-transitive** if \(\operatorname{Aut}(T)\) is transitive on \(X\).
In [8, Example 3.7] it was shown that a configuration can be identified with a tuple system in such a way that the automorphism group of the configuration is the same as the automorphism group of the tuple system. Thus results on automorphism groups of tuple systems are also results about automorphism groups of configurations and digraphs.
**Lemma 5.7**.: _Let \(T\) be a colored tuple system with \(C\) the set system corresponding to \(T\). Then \(\operatorname{Aut}(T)\leq\operatorname{Aut}(C)\). Consequently, if \(\operatorname{Aut}(C)\) is \(9/8\)-closed, then \(\operatorname{Aut}(T)\) is \(9/8\)-closed._
Proof.: Let \(\gamma\in\operatorname{Aut}(T)\), and let \(t=(t_{1},\ldots,t_{k})\in T\) have color \(i\), so that \(\{t_{j}:1\leq j\leq k\}\in C\). Then \(\gamma(t_{1},\ldots,t_{k})=(\gamma(t_{1}),\ldots,\gamma(t_{k}))\in T\) also has color \(i\), and so \(\gamma(\{t_{j}:1\leq j\leq k\})=\{\gamma(t_{j}):1\leq j\leq k\}\in C\). As \(\gamma\) and \(t\) are arbitrary, \(\gamma\in\operatorname{Aut}(C)\). Thus \(\operatorname{Aut}(T)\leq\operatorname{Aut}(C)\). It is now clear that if every transitive subgroup \(H\leq\operatorname{Aut}(C)\) with a normal block system \(\mathcal{B}\) has \(\mathcal{B}\)-fixing block system \(\{\mathbb{Z}_{n}\}\), then the same is true for transitive subgroups of \(\operatorname{Aut}(T)\). Thus if \(\operatorname{Aut}(C)\) is \(9/8\)-closed, then so is \(\operatorname{Aut}(T)\).
The next result is [8, Theorem 3.6]. It gives a partial relationship between \(5/2\)-closed groups and automorphism groups of combinatorial objects.
**Theorem 5.8**.: _Let \(T\) be a point-transitive colored \(1\)-intersecting tuple system. Then \(\operatorname{Aut}(T)\) is \(5/2\)-closed._
We next seek partial relationships between \(9/8\)-closed groups and automorphism groups of combinatorial objects. The next definition gives the largest class of combinatorial objects for which we can show that all objects in the class have automorphism groups that are \(9/8\)-closed.
**Definition 5.9**.: A **partial Sylvester-Gallai design** is an incidence relation \((P,L,\mathcal{I})\) such that
1. any two points are contained on at most one line, and
2. each line has at least three points.
Sylvester-Gallai designs were introduced by Kelly and Nwankpa [15] as generalizations of Sylvester-Gallai configurations, and have the additional property that any two points determine a line.
**Definition 5.10**.: A colored tuple system is **connected** if its corresponding set system \(C\) is connected. That is, for any two vertices \(u,v\) of the corresponding set system, there is a sequence of sets \(e_{1},\ldots,e_{r}\) in \(C\) such that \(u\in e_{1}\), \(v\in e_{r}\), and \(e_{i}\cap e_{i+1}\neq\emptyset\), \(1\leq i\leq r-1\).
**Theorem 5.11**.: _Let \(T\) be a colored tuple system whose set system \(C\) corresponding to \(T\) is a connected partial Sylvester-Gallai design with point-transitive automorphism group. Then \(\operatorname{Aut}(T)\leq\operatorname{Aut}(C)\) are both \(9/8\)-closed._
Proof.: By Lemma 5.7 it suffices to show that \(\operatorname{Aut}(C)\) is \(9/8\)-closed. Let \(C\) satisfy the hypothesis, with \(G\leq\operatorname{Aut}(C)\) transitive with a normal block system \(\mathcal{B}\). Towards a contradiction, suppose \(\operatorname{WStab}_{G}(B)\neq 1\) for some \(B\in\mathcal{B}\), with \(B^{\prime}\in\mathcal{B}\) such that \(\operatorname{WStab}_{G}(B)^{B^{\prime}}\) is transitive. Then the \(\mathcal{B}\)-fixing block system \(\mathcal{E}\) of \(G\) has at least two blocks. As \(C\) is connected, there is some line with points in two different blocks of \(\mathcal{E}\). Let \(E\) be the equivalence class of \(\equiv\) that contains \(B\) and \(E^{\prime}\) the equivalence class of \(\equiv\) that contains \(B^{\prime}\). We may assume without loss of generality (by relabeling if necessary) that some line \(L\) of \(C\) contains a point \(x\) in \(B\) and a point \(y\) in \(B^{\prime}\). Then there exists \(w\in\operatorname{WStab}_{G}(B)\) such that \(w(y)\neq y\) (and of course \(w(x)=x\)).

As \(C\) is a \(1\)-intersecting set system, \(\operatorname{Aut}(C)\) is \(5/2\)-closed by Theorem 5.8. Then \(w|_{E^{\prime}}\in\operatorname{Aut}(C)\), and so we may assume without loss of generality that \(w=w|_{E^{\prime}}\). As any two points are on at most one line, \(L\) cannot contain any point other than \(x\) that is not in \(E^{\prime}\). As each line contains at least three points, there is some \(z\) on \(L\) with \(x\neq z\neq y\), and so \(z\in E^{\prime}\). So \(w(L)\) is a line that contains at least two points in \(E^{\prime}\), namely \(w(y)\) and \(w(z)\), and exactly one point not in \(E^{\prime}\), namely \(x\). As \(\operatorname{fix}_{G}(\mathcal{B})^{B}\) is transitive and \(\operatorname{Aut}(C)\) is \(5/2\)-closed, there is \(v\in\operatorname{Aut}(C)\) such that \(v(x)\neq x\) but \(v^{E^{\prime}}=1\). Then \(w(L)\) and \(v(w(L))\) are two different lines that both contain \(w(y)\) and \(w(z)\), a contradiction.
We next give a sufficient condition to ensure that the automorphism group of a weakly connected vertex-transitive digraph is \(9/8\)-closed (we note that the automorphism group of a disconnected vertex-transitive digraph is never \(9/8\)-closed, as it is a wreath product). We will need a couple of definitions.
**Definition 5.12**.: Let \(G\) be a group, \(H\leq G\), and \(S\subset G\) such that \(S\cap H=\emptyset\) and \(HSH=S\). Define a digraph \(\operatorname{Cos}(G,H,S)\) with vertex set \(V(\operatorname{Cos}(G,H,S))=G/H\) the set of left cosets of \(H\) in \(G\), and arc set \(A(\operatorname{Cos}(G,H,S))=\{(gH,gsH):g\in G\text{ and }s\in S\}\). The digraph \(\operatorname{Cos}(G,H,S)\) is called the **double coset digraph of \(G\)** with **connection set**\(S\) (or \(HSH\)).
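A small sketch of Definition 5.12 in the abelian case \(G=\mathbb{Z}_{n}\), where each double coset \(HsH\) reduces to the coset \(s+H\). The choices below (\(n=6\), \(H=\{0,3\}\), \(S=\{1,4\}\)) are illustrative; note that \(S+H=S\) and \(S\cap H=\emptyset\), as required.

```python
n, H, S = 6, {0, 3}, {1, 4}
coset = lambda g: frozenset((g + h) % n for h in H)
vertices = {coset(g) for g in range(n)}
arcs = {(coset(g), coset((g + s) % n)) for g in range(n) for s in S}
print(len(vertices), len(arcs))  # 3 cosets and 3 arcs: a directed 3-cycle
```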
Sabidussi has shown [25, Theorem 2] that every vertex-transitive digraph is isomorphic to a double coset digraph (see [9, Theorem 1.3.9] for a more modern proof of this result).
**Definition 5.13**.: Let \(n\) be a positive integer. By \(\vec{K}_{n,n}\) we mean the complete bipartite graph \(K_{n,n}\) oriented so that every arc points from one bipartition set to the other.
**Theorem 5.14**.: _Let \(G\) be a group, \(H\leq G\), and \(S\subset G\) such that \(S\cap H=\emptyset\) and \(S=HSH\). Let \(n=[G:H]\) and \(p\) the smallest prime divisor of \(n\). If \(\operatorname{Cos}(G,H,S)\) is weakly connected and has no subdigraph isomorphic to a \(\vec{K}_{p,p}\), then \(\operatorname{Aut}(\operatorname{Cos}(G,H,S))\) is \(9/8\)-closed. Consequently, if \(\operatorname{Cos}(G,H,S)\) is a graph of girth at least \(5\), then \(\operatorname{Aut}(\operatorname{Cos}(G,H,S))\) is \(9/8\)-closed._
Proof.: Let \(K\leq\operatorname{Aut}(\operatorname{Cos}(G,H,S))\) be transitive with a normal block system \(\mathcal{B}\). Suppose \(\operatorname{WStab}_{K}(B)\neq 1\) for some \(B\in\mathcal{B}\). Then there is \(B\neq B^{\prime}\in\mathcal{B}\) such that \(\operatorname{WStab}_{K}(B)^{B^{\prime}}\) is transitive. As \(\operatorname{Cos}(G,H,S)\) is weakly connected, we may choose \(B\) and \(B^{\prime}\) such that some arc \(a=(gH,gsH)\) in \(\operatorname{Cos}(G,H,S)\) has \(gH\in B\) and \(gsH\in B^{\prime}\). As \(\operatorname{WStab}_{K}(B)^{B^{\prime}}\) is transitive, the arc \((gH,kH)\in A(\operatorname{Cos}(G,H,S))\) for every left coset \(kH\) of \(H\) contained in \(B^{\prime}\). As \(\mathcal{B}\) is normal, we see that \((\ell H,kH)\in A(\operatorname{Cos}(G,H,S))\) for every left coset \(\ell H\) in \(B\) and \(kH\) in \(B^{\prime}\). Hence \(\operatorname{Cos}(G,H,S)\) contains a \(\vec{K}_{b,b}\), where \(b=|B|\). As the order of a block of \(K\) divides \(n\), we see that \(b\geq p\). Hence \(\operatorname{Cos}(G,H,S)\) contains a subdigraph isomorphic to \(\vec{K}_{p,p}\), a contradiction. Thus \(\operatorname{WStab}_{K}(B)=1\). As \(K\) and \(\mathcal{B}\) are arbitrary, \(\operatorname{Aut}(\operatorname{Cos}(G,H,S))\) is \(9/8\)-closed.
In particular, if \(\operatorname{Cos}(G,H,S)\) is a graph of girth at least \(5\), then it has no subdigraph isomorphic to the cycle of length \(4\), which is isomorphic to \(K_{2,2}\). As every prime divisor of \(n\) is at least \(2\), and any subdigraph of \(\operatorname{Cos}(G,H,S)\) isomorphic to \(\vec{K}_{2,2}\) induces a \(K_{2,2}\) in \(\operatorname{Cos}(G,H,S)\), we see by the previous argument that \(\operatorname{Aut}(\operatorname{Cos}(G,H,S))\) is \(9/8\)-closed.
We now give a preliminary result and some definitions in preparation for the main result of this section. This main result will be used to solve the isomorphism problem for many circulant combinatorial objects we have seen.
**Lemma 5.15**.: _Let \(H\leq S_{m}\) and \(K\leq S_{k}\) be transitive. Let \(G=H\times K\) with the canonical action of \(H\times K\) on \(\mathbb{Z}_{m}\times\mathbb{Z}_{k}\). That is, \((h,k)(i,j)=(h(i),k(j))\) for \(h\in H\), \(k\in K\), \(i\in\mathbb{Z}_{m}\), and \(j\in\mathbb{Z}_{k}\). If \(G\) is \(9/8\)-closed, then \(H\) and \(K\) are \(9/8\)-closed._
Proof.: We will show that \(H\) is \(9/8\)-closed, the argument for \(K\) being analogous. Let \(\mathcal{C}\) be a normal block system of \(H\), and let \(L=\operatorname{fix}_{H}(\mathcal{C})\). Viewing \(L\) as an internal subgroup of \(H\times K\) acting canonically on \(\mathbb{Z}_{m}\times\mathbb{Z}_{k}\), we see \(L\trianglelefteq G\), and its set of orbits \(\mathcal{D}\) is a block system of \(H\times K\). As \(H\times K\) acts canonically on \(\mathbb{Z}_{m}\times\mathbb{Z}_{k}\), a block of \(\mathcal{D}\) has the form \(C\times\{j\}\), where \(C\in\mathcal{C}\) and \(j\in\mathbb{Z}_{k}\). Then \(\operatorname{fix}_{H}(\mathcal{C})\times 1_{K}\leq\operatorname{fix}_{H\times K}(\mathcal{D})\). As \(G\) is \(9/8\)-closed, the \(\mathcal{D}\)-fixing block system \(\mathcal{E}_{\mathcal{D}}\) of \(G\) is \(\{\mathbb{Z}_{m}\times\mathbb{Z}_{k}\}\). If the \(\mathcal{C}\)-fixing block system \(\mathcal{E}_{\mathcal{C}}\) of \(H\) is not \(\{\mathbb{Z}_{m}\}\), with say \(1\neq\ell\in\operatorname{fix}_{H}(\mathcal{C})=L\) such that \(\ell\in\operatorname{WStab}_{H}(C)\) for some \(C\in\mathcal{C}\), then \(\ell\times 1_{K}\) is contained in \(\operatorname{WStab}_{H\times K}(C\times\{j\})\) for every \(j\in\mathbb{Z}_{k}\) and is not the identity. Thus \(\operatorname{WStab}_{H\times K}(C\times\{j\})\neq 1\), and \(G\) is not \(9/8\)-closed, a contradiction.
**Definition 5.16**.: Let \(G\) be a group. A subgroup \(H\leq G\) is **pronormal** in \(G\) if for every \(g\in G\) there is a \(k\in\langle H,g^{-1}Hg\rangle\) such that \(k^{-1}g^{-1}Hgk=H\).
**Definition 5.17**.: Let \(\pi\) be a set of prime numbers. A group \(G\) is called a \(\pi\)**-group** if every element in \(G\) has order a product of powers of primes in \(\pi\).
So a group \(G\) is a \(\pi\)-group if a prime \(p\) divides the order of an element of \(G\) only if \(p\in\pi\).
**Definition 5.18**.: Let \(\pi\) be a set of primes, and \(G\) a group of order \(n\). We say that \(H\leq G\) is a **Hall \(\pi\)-subgroup** of \(G\) if \(H\) is a \(\pi\)-group and \(|G|/|H|\) is relatively prime to \(p\) for every \(p\in\pi\). That is, for every \(p\in\pi\), if \(p^{a}\) is the largest power of \(p\) dividing \(n\), then \(p^{a}\) divides \(|H|\).
The next result is the main theoretical result of this section. We note that if \(G\leq S_{n}\) contains a regular cyclic subgroup, then every block system of \(G\) is normal [9, Theorem 2.2.9]. So in the following result, "block system" is synonymous with "normal block system".
**Theorem 5.19**.: _Let \(R=\langle x\rangle\leq S_{n}\) be a regular cyclic subgroup and \(y\) a conjugate of \(x\) in \(S_{n}\). Suppose that for every conjugate \(y^{\prime}\) of \(y\) in \(\langle x,y\rangle\) and every block system \(\mathcal{B}\) of \(\langle x,y^{\prime}\rangle\), we have that the \(\mathcal{B}\)-fixing block system is \(\{\mathbb{Z}_{n}\}\). Then \(\langle x\rangle\) and \(\langle y\rangle\) are conjugate in \(\langle x,y\rangle\). Consequently, if \(G\leq S_{n}\) is transitive, \(9/8\)-closed, and contains a regular cyclic subgroup \(R\), then \(R\) is pronormal in \(G\)._
Proof.: Let \(R=\langle x\rangle\) and \(\langle y\rangle\) be regular cyclic subgroups that satisfy the hypothesis of the result. We may assume without loss of generality that \(y\) has been conjugated so as to satisfy the conclusion of Lemma 4.3. We will also use all of the notation from both the statement and conclusion of Lemma 4.3. Set \(H=\langle x,y\rangle\).
We proceed by induction on \(r\), the number of distinct prime factors of \(n\). If \(r=1\), then \(n\) is a prime power, say \(n=p^{k}\). Suppose that \(\operatorname{fix}_{H}(\mathcal{B}_{1})\) has order at least \(p^{2}\). By [8, Lemma 1.24], \(\operatorname{WStab}_{H}(B)=\operatorname{PStab}_{\operatorname{fix}_{H}(\mathcal{B}_{1})}(B)\) has order at least \(p\), where \(B\in\mathcal{B}_{1}\) and \(\operatorname{PStab}_{\operatorname{fix}_{H}(\mathcal{B}_{1})}(B)\) is the point-wise stabilizer of \(B\) in \(\operatorname{fix}_{H}(\mathcal{B}_{1})\). But then the \(\mathcal{B}_{1}\)-fixing block system of \(H\) is not \(\{\mathbb{Z}_{n}\}\), a contradiction. Thus, by Lemma 4.3 (5) if \(n\) is odd and Lemma 4.4 if \(n\) is even, we see that \(\langle x,y\rangle=\langle x\rangle=\langle y\rangle\), which establishes the base case.
Now assume that the result holds for all \(n\) with \(r\geq 1\) distinct prime divisors. Let \(n\) be a positive integer such that \(n\) has \(r+1\) distinct prime divisors, and \(y\in S_{n}\) generate a regular cyclic subgroup such that \(\langle x,y\rangle\) satisfies the conclusion of Lemma 4.3. As \(\langle x,y\rangle\) is solvable, \(\langle x,y\rangle/\mathcal{B}_{a_{1}}\) is also solvable. Let \(\pi=\{p_{i}:2\leq i\leq r+1\}\). Then \(\langle x\rangle/\mathcal{B}_{a_{1}}\) and \(\langle y\rangle/\mathcal{B}_{a_{1}}\) are solvable of degree relatively prime to \(p_{1}\), and so \(\langle x\rangle/\mathcal{B}_{a_{1}}\) and \(\langle y\rangle/\mathcal{B}_{a_{1}}\) are contained in Hall \(\pi\)-subgroups \(\Pi_{1}\) and \(\Pi_{2}\) of \(\langle x,y\rangle/\mathcal{B}_{a_{1}}\), respectively, by Hall's Theorem [14, Proposition 7.14]. Again by Hall's Theorem, as \(\langle x,y\rangle/\mathcal{B}_{a_{1}}\) is solvable, after conjugation of \(y\) by an element of \(\langle x,y\rangle\), we may assume without loss of generality that \(\Pi_{2}=\Pi_{1}\), and so \(\langle x,y\rangle/\mathcal{B}_{a_{1}}\) is a \(\pi\)-group. We conclude that a Sylow \(p_{1}\)-subgroup of \(\langle x,y\rangle\) is contained in \(\mathrm{fix}_{H}(\mathcal{B}_{a_{1}})\).
As the \(\mathcal{B}_{1}\)-fixing block system of \(\langle x,y\rangle\) is, by hypothesis, \(\{\mathbb{Z}_{n}\}\), we see that a Sylow \(p_{1}\)-subgroup of \(\mathrm{fix}_{H}(\mathcal{B}_{1})\) is cyclic. By Lemma 4.3 (5) if \(p_{1}\) is odd or Lemma 4.4 if \(p_{1}=2\), we have that \(P=\langle x^{n/p_{1}^{a_{1}}}\rangle=\langle y^{n/p_{1}^{a_{1}}}\rangle\) is a Sylow \(p_{1}\)-subgroup of \(\langle x,y\rangle\) and \(P\) is central in \(H\). In particular, \(P\) is contained in the center of its normalizer in \(H\).
We now apply Burnside's Transfer Theorem [9, Theorem 8.2.10] to obtain a normal \(p_{1}^{\prime}\)-complement \(K\) of \(P\) in \(\langle x,y\rangle\). Then \(K,P\trianglelefteq\langle x,y\rangle\), \(\langle x,y\rangle=\langle K,P\rangle\), and as \(P\) and \(K\) have relatively prime orders, \(K\cap P=1\). Thus \(\langle x,y\rangle\cong K\times P\). By Lemma 5.15, we see that \(K\) is \(9/8\)-closed. Hence, as \(K\) has degree \(n/p_{1}^{a_{1}}\) with \(r\) distinct prime factors, we may assume by the induction hypothesis, after conjugation of \(\langle y\rangle\) by an appropriate element of \(\langle x,y\rangle\), that \(\langle x,y\rangle/\mathcal{B}_{a_{1}}=\langle x\rangle/\mathcal{B}_{a_{1}}\). As \(P\) is semiregular and contained in the center of \(\langle x,y\rangle\), we have that \(P\) is contained in \(\langle y\rangle\), and so \(\langle y\rangle=\langle x\rangle\). We have shown that \(\langle x\rangle\) and \(\langle y\rangle\) are conjugate in \(\langle x,y\rangle\); that is, \(\langle x\rangle\) is a pronormal subgroup of \(\langle x,y\rangle\).
The "consequently" part of the result now follows easily. Let \(G\leq S_{n}\) be \(9/8\)-closed, \(R\) a regular cyclic subgroup of \(G\), and \(g\in G\). Set \(R=\langle x\rangle\), and \(y=g^{-1}xg\). If \(G\) is \(9/8\)-closed, then for every conjugate \(y^{\prime}\) of \(y\) in \(\langle x,y\rangle\) and normal block system \(\mathcal{B}\) of \(\langle x,y^{\prime}\rangle\), the \(\mathcal{B}\) fixing block system of \(\langle x,y^{\prime}\rangle\) is \(\{\mathbb{Z}_{n}\}\). By the first part of this result, we see that \(\langle y\rangle\) is conjugate to \(\langle x\rangle\) in \(\langle x,y\rangle\). Thus \(R=\langle x\rangle\) is pronormal in \(G\).
Combining the previous result with Lemma 3.6 we obtain the following result.
**Corollary 5.20**.: _Let \(T\) be a connected circulant colored tuple system of order \(n\) with corresponding set system \(S\) such that \(\mathrm{Aut}(S)\) is \(9/8\)-closed. Then \(T\) is a CI-colored tuple system of \(\mathbb{Z}_{n}\)._
There is a slight and subtle gap in the proof of [16, Theorem 1.1]. Namely, in §2, they discuss how the isomorphism problem for configurations reduces to the connected case, as it is clear that if one has an isomorphism between connected components of two isomorphic disconnected set systems, then one can write down an isomorphism between the two set systems. However, if the set system is a Cayley set system of \(G\), then the set of components will be the set of left cosets of some subgroup \(H\leq G\) [9, Example 2.3.8]. If \(H\) is a CI-group with respect to the particular set system one is considering, such as configurations, in order to show \(G\) is a CI-group as well, automorphisms of \(H\) must extend to automorphisms of \(G\). In general, though, automorphisms of some subgroup \(H\leq G\) need not extend to automorphisms of \(G\). The final result of this section generalizes [16, Theorem 1.1] (as a symmetric configuration is a partial Sylvester-Gallai design) and fills the small gap in their proof. We need one more natural definition.
**Definition 5.21**.: Let \(T\) be a disconnected colored tuple system. A **component of \(T\)** is the set of all tuples in \(T\) all of whose coordinates are contained in the vertex set of a component of the underlying set system of \(T\).
Note that a partial Sylvester-Gallai design that has a line must have at least \(3\) points.
**Corollary 5.22**.: _Let \(n\geq 3\) be an integer. Then \(\mathbb{Z}_{n}\) is a CI-group with respect to colored tuple systems whose underlying set systems are partial Sylvester-Gallai designs._
Proof.: Let \(T\) be a circulant colored tuple system whose underlying set system \(D\) is a partial Sylvester-Gallai design. First suppose \(D\) is connected. By Theorem 5.11 we have that \(\operatorname{Aut}(T)\leq\operatorname{Aut}(D)\) is \(9/8\)-closed. By Corollary 5.20 we see that \(T\) (and \(D\)) are CI-objects of \(\mathbb{Z}_{n}\).
If \(D\) is a disconnected partial Sylvester-Gallai design, then the component \(S\) of \(T\) which contains \(0\) is a colored tuple system whose underlying set system \(E\) is a partial Sylvester-Gallai design of order \(m=|V(S)|\), for some \(m|n\). It is also circulant, as the components of \(T\) are clearly blocks of \(\operatorname{Aut}(T)\), and a block system of a group that contains a regular abelian group is normal by [9, Theorem 2.2.19]. Additionally, the set of points of \(S\) is a (cyclic) subgroup of \(\mathbb{Z}_{n}\). By the first part of this proof, \(S\) is a CI-object of \(\mathbb{Z}_{m}\). Let \(T^{\prime}\) be a colored circulant tuple system whose underlying set system is a partial Sylvester-Gallai design \(D^{\prime}\) of order \(n\) isomorphic to \(T\). Then \(T^{\prime}\) is disconnected, and letting \(S^{\prime}\) be the connected component of \(T^{\prime}\) that contains \(0\), we see that \(S\) and \(S^{\prime}\) are isomorphic by a group automorphism of the unique subgroup \(H\) of \(\mathbb{Z}_{n}\) of order \(m\). By [18], every automorphism of \(H\) is the restriction of a group automorphism of \(\mathbb{Z}_{n}\) to \(H\), and so there exists a group automorphism \(\alpha\) of \(\mathbb{Z}_{n}\) such that \(\alpha(S)=S^{\prime}\). As the points of the connected components of both \(T\) and \(T^{\prime}\) are the left cosets of \(H\) in \(\mathbb{Z}_{n}\), and \(\alpha\) maps cosets of \(H\) to cosets of \(H\), we see \(\alpha(T)=T^{\prime}\). Thus every disconnected colored tuple system whose underlying set system is a partial Sylvester-Gallai design is also a CI-object of \(\mathbb{Z}_{n}\), and \(\mathbb{Z}_{n}\) is a CI-group with respect to colored tuple systems whose underlying set system is a partial Sylvester-Gallai design.
The authors of [16] ended their paper with the natural question of determining which groups \(G\) are CI-groups with respect to Cayley symmetric configurations of \(G\). We end this section with what we believe are equally natural questions.
**Problem 5.23**.: _Which groups \(G\) have the property that \(G\) is pronormal in every \(9/8\)-closed group that contains \(G_{L}\)?_
By Theorem 5.11, the next problem is a special case of the previous problem.
**Problem 5.24**.: _Which groups \(G\) have the property that \(G\) is a CI-group with respect to colored tuple systems whose underlying set systems are Cayley partial Sylvester-Gallai designs of \(G\)?_
## 6. Unit circulant digraphs and \(3/2\)-closed groups
**Definition 6.1**.: Let \(n\) be a positive integer and \(S\subseteq\mathbb{Z}_{n}^{*}\), the set of units in \(\mathbb{Z}_{n}\). A **unit circulant digraph** of order \(n\) is a Cayley digraph \(\operatorname{Cay}(\mathbb{Z}_{n},S)\).
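To make the definition concrete, here is a minimal Python sketch (our own illustration, not from the paper; function names are ours) that builds the arc set of \(\operatorname{Cay}(\mathbb{Z}_{n},S)\) and checks that the connection set consists of units.

```python
from math import gcd

def units(n):
    """The set of units Z_n^*, i.e., residues coprime to n."""
    return {s for s in range(1, n) if gcd(s, n) == 1}

def unit_circulant(n, S):
    """Arc set of the Cayley digraph Cay(Z_n, S) for a connection set S of units."""
    assert set(S) <= units(n), "S must be a subset of Z_n^*"
    return {(x, (x + s) % n) for x in range(n) for s in S}

# Example: Cay(Z_8, {1, 3}) is a unit circulant digraph of order 8.
arcs = unit_circulant(8, {1, 3})
```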
Toida [26] conjectured that every unit circulant digraph is a CI-digraph. This conjecture was independently confirmed by Klin, Muzychuk and Poschel [21] (using Schur rings), as well as by the author and Joy Morris [6] (using group theoretic techniques). Our goal in this section is to generalize the fact that Toida's conjecture is true, as well as to generalize the notion of a unit circulant digraph to groups other than cyclic groups. It is also very important to note that the proofs given here are much shorter and more complete than those given in [21] or [6]. After the terms that are needed are defined, we begin with the relationship between the automorphism group of a unit circulant digraph and \(3/2\)- and \(9/8\)-closed groups. We next give several definitions concerning digraphs which we will need.
**Definition 6.2**.: Let \(\Gamma_{1}\) and \(\Gamma_{2}\) be digraphs. The **wreath product of \(\Gamma_{1}\) and \(\Gamma_{2}\)**, denoted \(\Gamma_{1}\wr\Gamma_{2}\), is the digraph with vertex set \(V(\Gamma_{1})\times V(\Gamma_{2})\) and arcs \(((u,v),(u,v^{\prime}))\) for \(u\in V(\Gamma_{1})\) and \((v,v^{\prime})\in A(\Gamma_{2})\) or \(((u,v),(u^{\prime},v^{\prime}))\) where \((u,u^{\prime})\in A(\Gamma_{1})\) and \(v,v^{\prime}\in V(\Gamma_{2})\).
The wreath product of two graphs was introduced by Harary [13], who wanted a graph product whose automorphism group was the wreath product of the automorphism groups of its factor graphs. It is easy to show that \(\operatorname{Aut}(\Gamma_{1})\wr\operatorname{Aut}(\Gamma_{2})\leq \operatorname{Aut}(\Gamma_{1}\wr\Gamma_{2})\).
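As an illustration of Definition 6.2, the following Python sketch (ours; digraphs are assumed given as vertex lists and arc sets) constructs the vertex and arc sets of \(\Gamma_{1}\wr\Gamma_{2}\).

```python
def wreath_product(V1, A1, V2, A2):
    """Vertex and arc sets of Gamma_1 wr Gamma_2 as in Definition 6.2."""
    V = [(u, v) for u in V1 for v in V2]
    # arcs inside a copy of Gamma_2: ((u, v), (u, v')) with (v, v') an arc of Gamma_2
    A = {((u, v), (u, w)) for u in V1 for (v, w) in A2}
    # arcs between copies: ((u, v), (u', v')) with (u, u') an arc of Gamma_1
    A |= {((u, v), (x, w)) for (u, x) in A1 for v in V2 for w in V2}
    return V, A
```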
**Definition 6.3**.: Let \(\Gamma\) be a vertex-transitive digraph whose automorphism group contains a transitive subgroup \(G\) with a block system \(\mathcal{B}\). Define the **block quotient digraph of \(\Gamma\) with respect to \(\mathcal{B}\)**, denoted \(\Gamma/\mathcal{B}\), to be the digraph with vertex set \(\mathcal{B}\) and arc set \(\{(B,B^{\prime}):B,B^{\prime}\in\mathcal{B},B\neq B^{\prime}\), and \((u,v)\in A(\Gamma)\) for some \(u\in B\) and \(v\in B^{\prime}\}\).
See [9, §4.2] and [9, §2.6] for more information and examples on the digraph wreath product and block quotient digraphs, respectively.
**Definition 6.4**.: Let \(\Gamma\) be a digraph. Define a relation \(R\) on \(V(\Gamma)\) by \(u\ R\ v\) if and only if the out- and in-neighbors of \(u\) and \(v\) are the same. Then \(R\) is an equivalence relation on \(V(\Gamma)\). We say \(\Gamma\) is **irreducible** if the equivalence classes of \(R\) are singletons, and **reducible** otherwise.
The equivalence relation above was introduced for graphs by Sabidussi [24, Definition 3], and independently rediscovered by Kotlov and Lovasz [17], who call \(u\) and \(v\)**twins**, and Wilson [28], who calls reducible graphs **unworthy**. Sabidussi observed in [25] that \(R\) is a \(G\)-congruence for \(G\leq\operatorname{Aut}(\Gamma)\) and graphs \(\Gamma\). This is true for digraphs as well, and hence the set of equivalence classes of \(R\) is a block system of \(G\) by [9, Theorem 3.2.2]. It is easy to show, using [9, Theorem 4.2.15], that a vertex-transitive digraph \(\Gamma\) is reducible if and only if it can be written as a wreath product \(\Gamma_{1}\wr\bar{K}_{n}\) for some positive integer \(n\geq 2\), where \(\bar{K}_{n}\) is the complement of the complete graph.
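Since the relation \(R\) of Definition 6.4 only compares neighborhoods, it is straightforward to compute. The sketch below (ours; same digraph conventions as above) groups vertices into their \(R\)-classes and tests reducibility.

```python
from collections import defaultdict

def twin_classes(V, A):
    """Equivalence classes of the relation R of Definition 6.4."""
    outs = {v: frozenset(w for (u, w) in A if u == v) for v in V}
    ins = {v: frozenset(u for (u, w) in A if w == v) for v in V}
    classes = defaultdict(list)
    for v in V:
        classes[(outs[v], ins[v])].append(v)  # u R v iff same out- and in-neighbors
    return list(classes.values())

def is_reducible(V, A):
    return any(len(c) > 1 for c in twin_classes(V, A))
```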
**Theorem 6.5**.: _The automorphism group of every unit circulant digraph \(\Gamma\) of order \(n\) is \(3/2\)-closed. Additionally, either \(\operatorname{Aut}(\Gamma)\) is \(9/8\)-closed, or \(\Gamma\) is reducible, \(\operatorname{Aut}(\Gamma)=K\wr S_{m}\) for some \(m\geq 2\), and \(K\leq S_{n/m}\) is \(9/8\)-closed._
Proof.: Let \(\Gamma=\operatorname{Cay}(\mathbb{Z}_{n},S)\) for some \(S\subseteq\mathbb{Z}_{n}^{*}\) be a unit circulant digraph. We will first show that \(\operatorname{Aut}(\Gamma)\) is \(3/2\)-closed. Let \(G=\operatorname{Aut}(\Gamma)\) and let \(H\leq G\) be transitive with a normal block system \(\mathcal{B}\). If \(\operatorname{WStab}_{H}(B)=1\) for every \(B\in\mathcal{B}\) there is nothing to prove. So we assume \(\operatorname{WStab}_{H}(B)\neq 1\) for some \(B\in\mathcal{B}\), and let \(\mathcal{E}\) be the \(\mathcal{B}\)-fixing block system of \(H\).
Let \(E_{0},E\in\mathcal{E}\) where \(0\in E_{0}\). Then \(\Gamma[E]\) is not an empty graph if and only if \(\Gamma[E_{0}]\) is not an empty graph. Also, \(E_{0}\leq\mathbb{Z}_{n}\), and as \(\Gamma\) is a unit circulant digraph, we see that \(E_{0}\cap S=\emptyset\). Hence there are no arcs between any two distinct blocks of \(\mathcal{B}\) contained within \(E_{0}\), and similarly there are no arcs between any two distinct blocks of \(\mathcal{B}\) contained within \(E\).
Now let \(B,B^{\prime}\in\mathcal{B}\) such that \(B\) and \(B^{\prime}\) are contained in different blocks \(E,E^{\prime}\in\mathcal{E}\). As \(B\not\equiv B^{\prime}\), \(\operatorname{WStab}_{G}(B)^{B^{\prime}}\) is transitive. Hence if there is an arc \((x,x^{\prime})\in A(\Gamma)\) with
\(x\in B\) and \(x^{\prime}\in B^{\prime}\), then \((x,x^{\prime})\in A(\Gamma)\) for every \(x^{\prime}\in B^{\prime}\). As \(\mathcal{B}\) is normal, this implies that \((x,x^{\prime})\in A(\Gamma)\) for every \(x\in B\) and \(x^{\prime}\in B^{\prime}\).
We conclude by [9, Theorem 4.2.15] that \(\Gamma\cong\Gamma/\mathcal{B}\wr\Gamma[B]=\Gamma/\mathcal{B}\wr\bar{K}_{|B|}\). Thus \(G\) contains \(\operatorname{Aut}(\Gamma/\mathcal{B})\wr S_{B}\), and letting \(L\) be the largest subgroup of \(G\) that has \(\mathcal{B}\) as a block system, we see \(\operatorname{fix}_{L}(\mathcal{B})^{B}=S_{B}\) for every \(B\in\mathcal{B}\). Thus \(\operatorname{Aut}(\Gamma)\) is \(3/2\)-closed by definition.
The above argument also shows that if \(\mathcal{E}\neq\{\mathbb{Z}_{n}\}\), then \(\Gamma\) is reducible. If \(\mathcal{E}=\{\mathbb{Z}_{n}\}\) for every transitive \(H\leq G\) and every normal block system \(\mathcal{B}\) of \(H\), then \(G\) is \(9/8\)-closed. It only remains to show that if \(\Gamma\) is reducible, then \(\operatorname{Aut}(\Gamma)=K\wr S_{m}\) for some \(m\geq 2\) and \(K\leq S_{n/m}\) is \(9/8\)-closed.
Let \(m\leq n\) be maximum such that \(\Gamma=\Gamma_{1}\wr\bar{K}_{m}\), where \(\Gamma_{1}\) is a circulant digraph of order \(n/m\). If \(m=n\), then \(\Gamma=\bar{K}_{n}\) (which is a unit circulant digraph). The result then follows with \(K=1\). Otherwise, \(\Gamma_{1}\) has at least \(2\) vertices, and \(\Gamma_{1}\) is irreducible by [25, Lemma 1 (ii) and (iii)] and their digraph analogues. Let \(\mathcal{B}\) be the set of cosets of the unique subgroup \(M\) of \(\mathbb{Z}_{n}\) of order \(m\). Then \(\Gamma/\mathcal{B}=\Gamma_{1}\). Let \(\mathcal{C}/\mathcal{B}\) be any block system of \(K=\operatorname{Aut}(\Gamma_{1})\). Then \(\mathcal{B}\preceq\mathcal{C}\), and so by [8, Lemma 2.2], we see that if \(\mathcal{E}^{\prime}/\mathcal{B}\) is the \(\mathcal{C}/\mathcal{B}\)-fixing block system of \(\operatorname{Aut}(\Gamma_{1})\), then \(\mathcal{E}^{\prime}\) is the \(\mathcal{C}\)-fixing block system of \(G\). As \(m\) is maximum and \(\operatorname{Aut}(\Gamma)\) is \(3/2\)-closed, we see that \(\mathcal{E}^{\prime}/\mathcal{B}=\{\mathbb{Z}_{n}/M\}\), and \(K\) is \(9/8\)-closed.
Our next goal is to give in Theorem 6.10 the relationship between \(9/8\)-closed groups and \(3/2\)-closed groups. We begin with an additional term and preliminary results.
**Definition 6.6**.: Let \(g\in S_{n}\). We define the **support** of \(g\) to be the set of all \(x\in\mathbb{Z}_{n}\) such that \(g(x)\neq x\). Let \(G\leq S_{n}\). The **support** of \(G\) is the union of the supports of all elements of \(G\).
**Lemma 6.7**.: _Let \(G\leq S_{n}\) be transitive and contain a subgroup \(H\) with the property that there is \(T\subset\mathbb{Z}_{n}\) such that \(H^{T}=S_{T}\) and \(H^{\mathbb{Z}_{n}\backslash T}=1\). Then there exists a block system \(\mathcal{B}\) of \(G\) such that \(G=(G/\mathcal{B})\wr S_{m}\), for some \(m|n\)._
Proof.: We first establish the following claim: If there is \(K\leq G\) and \(U\subset\mathbb{Z}_{n}\) such that \(K^{U}=S_{U}\), \(K^{\mathbb{Z}_{n}\backslash U}=1\), and \(T\cap U\neq\emptyset\), then \(\langle H,K\rangle^{T\cup U}=S_{T\cup U}\) and \(\langle H,K\rangle^{\mathbb{Z}_{n}\backslash(T\cup U)}=1\). The last condition should be clear as the support of both \(H\) and \(K\) is contained in \(T\cup U\). In order to show that \(\langle H,K\rangle^{T\cup U}=S_{T\cup U}\), it suffices to show that \(\langle H,K\rangle\) contains every transposition on \(T\cup U\). Let \((a,b)\) be a transposition on \(T\cup U\). If either \(a,b\in T\) or \(a,b\in U\), then clearly \((a,b)\in\langle H,K\rangle\). Otherwise, suppose, say, \(a\in T\) and \(b\in U\) but \(a\not\in U\) and \(b\not\in T\), with the other case being analogous. As \(T\cap U\neq\emptyset\), there is \(c\in T\cap U\). Then \((a,c)\in H\), \((c,b)\in K\), and \((a,c)(c,b)(a,c)=(a,b)\in\langle H,K\rangle\). This establishes the claim.
We next assume that \(H\) is chosen so that \(|T|\) is as large as possible. Let \(g\in G\). If \(g(T)\cap T\neq\emptyset\), then by the claim \(\langle H,gHg^{-1}\rangle\) has the property that its induced action on \(T\cup g(T)\) is the symmetric group on \(T\cup g(T)\), while its induced action on \(\mathbb{Z}_{n}\setminus(T\cup g(T))\) is trivial. As \(H\) was chosen so that \(|T|\) is as large as possible, it must be that \(g(T)=T\) whenever \(g(T)\cap T\neq\emptyset\). Hence \(T\) is a block of \(G\). We then let \(\mathcal{B}\) be the block system \(\{g(T):g\in G\}\), and the result follows.
**Theorem 6.8**.: _Let \(G\leq S_{n}\) be a \(3/2\)-closed transitive group with a normal block system \(\mathcal{B}\), and let \(\mathcal{E}\) be the \(\mathcal{B}\)-fixing block system of \(G\). If \(\mathcal{E}\neq\{\mathbb{Z}_{n}\}\), then \(G/\mathcal{B}\) is \(9/8\)-closed._
Proof.: Let \(H\leq G\) be transitive such that \(H/\mathcal{B}\) has a normal block system \(\mathcal{C}/\mathcal{B}\), and let \(\mathcal{E}^{\prime}/\mathcal{B}\) be the \(\mathcal{C}/\mathcal{B}\)-fixing block system of \(H/\mathcal{B}\). By hypothesis, \(\mathcal{E}=\mathcal{B}\preceq\mathcal{E}^{\prime}\). By [8, Lemma 2.2], the \(\mathcal{C}\)-fixing block system of \(H\) is \(\mathcal{E}^{\prime}\). As \(G\) is \(3/2\)-closed, we see that either \(\mathcal{E}^{\prime}=\{\mathbb{Z}_{n}\}\) and the result follows, or \(\mathcal{E}^{\prime}=\mathcal{C}\) and \(\mathrm{fix}_{H}(\mathcal{C})^{C}=S_{C}\) for every \(C\in\mathcal{C}\). However, if \(\mathrm{fix}_{H}(\mathcal{C})^{C}=S_{C}\), then \(H\) cannot have \(\mathcal{B}\) as a block system as \(\mathcal{B}\prec\mathcal{C}\). But \(\mathcal{B}\) is a block system of \(G\geq H\), a contradiction.
**Definition 6.9**.: Let \(X\) and \(Y\) be sets, and \(G\leq S_{X}\) and \(H\leq S_{Y}\) be transitive groups. Then \(\mathcal{B}=\{\{(x,y):y\in Y\}:x\in X\}\) is a normal block system of \(G\wr H\), called the **lexi-partition of \(G\wr H\) corresponding to \(Y\)**.
**Theorem 6.10**.: _The class of \(3/2\)-closed groups is the disjoint union of the class of \(9/8\)-closed groups together with the class of groups obtained from the \(9/8\)-closed groups by wreathing them with a symmetric group. That is, let \(\mathcal{K}\) be the class of \(9/8\)-closed groups. Then the class of \(3/2\)-closed groups is the union of \(\mathcal{K}\) together with \(\{K\wr S_{n}:K\in\mathcal{K}\text{ and }n\geq 2\text{ is an integer}\}\)._
Proof.: It is clear that \(9/8\)-closed groups are \(3/2\)-closed. Let \(\mathcal{L}=\{K\wr S_{n}:K\in\mathcal{K}\text{ and }n\geq 2\text{ is an integer}\}\). It is also clear that \(\mathcal{K}\cap\mathcal{L}=\emptyset\). We first show that if \(G\in\mathcal{L}\), then \(G\) is \(3/2\)-closed.
Let \(G\in\mathcal{L}\). Write \(G=K\wr S_{n}\), where \(K\in\mathcal{K}\). Let \(H\leq G\) be transitive with a normal block system \(\mathcal{B}\) such that the \(\mathcal{B}\)-fixing block system \(\mathcal{E}\) of \(H\) has more than one block. Let \(\mathcal{C}\) be the lexi-partition of \(G\) with respect to \(S_{n}\). If \(\mathcal{B}=\mathcal{C}\), then the largest subgroup of \(G\) that has \(\mathcal{B}\) as a block system is \(G\) itself. As \(G=K\wr S_{n}\), we have that the \(\mathcal{B}\)-fixing block system of \(G\) is \(\mathcal{B}\), and \(\mathrm{fix}_{G}(\mathcal{B})^{B}=S_{B}\) for every \(B\in\mathcal{B}\). So we need only consider when \(\mathcal{B}\neq\mathcal{C}\).
We proceed by contradiction, and so assume that \(G=K\wr S_{n}\) is not \(3/2\)-closed. We may assume without loss of generality that \(G\) is chosen so that \(G=K\wr S_{n}\) with \(n\geq 2\) as small as possible such that \(K\wr S_{n}\) is not \(3/2\)-closed. As \(\mathcal{B}\) and \(\mathcal{C}\) are block systems of \(G\), there either exist \(B\in\mathcal{B}\) and \(C\in\mathcal{C}\) such that \(|B\cap C|\geq 2\), or \(|B\cap C|\leq 1\) for every \(B\in\mathcal{B}\) and \(C\in\mathcal{C}\). We consider the latter case first, and then reduce the former case to the latter case.
Suppose \(|B\cap C|\leq 1\) for every \(B\in\mathcal{B}\) and \(C\in\mathcal{C}\). Then \(H/\mathcal{C}\leq K\) and \(\mathrm{fix}_{H}(\mathcal{B})/\mathcal{C}\unlhd H/\mathcal{C}\). Additionally, \(H/\mathcal{C}\cong H/\mathrm{fix}_{H}(\mathcal{C})\), and as \(|B\cap C|\leq 1\) for every \(B\in\mathcal{B}\) and \(C\in\mathcal{C}\), we see that \(\mathrm{fix}_{H}(\mathcal{B})\cap\mathrm{fix}_{H}(\mathcal{C})=1\). Thus \(\mathrm{WStab}_{H}(B)\cap\mathrm{fix}_{H}(\mathcal{C})=1\) for every \(B\in\mathcal{B}\). Let \(\mathcal{B}^{\prime}\) be the set of orbits of \(\mathrm{fix}_{H}(\mathcal{B})/\mathcal{C}\). As \(\mathrm{fix}_{H}(\mathcal{B})\cap\mathrm{fix}_{H}(\mathcal{C})=1\), \(\mathcal{B}^{\prime}\) is a normal block system of \(K\) whose blocks are not singletons. As the \(\mathcal{B}\)-fixing block system \(\mathcal{E}\) of \(H\) has more than one block, \(\mathcal{B}\) has more than one block and so \(\mathcal{B}^{\prime}\) has more than one block. Thus \(\mathcal{B}^{\prime}\) is not trivial. Let \(C\in\mathcal{C}\) such that \(|C\cap B|=1\). As \(\mathrm{WStab}_{H}(B)\) fixes \(B\) pointwise, it fixes the point in \(C\cap B\), and so fixes \(C\). Thus \(\mathrm{WStab}_{H}(B)\) fixes each block of \(C\) with a point in \(B\). Hence \((\mathrm{WStab}_{H}(B)/\mathcal{C})^{B/\mathcal{C}}=1\). Let \(B^{\prime}\in\mathcal{B}\) such that \(\mathrm{WStab}_{H}(B)\) is transitive on \(B^{\prime}\). Let \(C^{\prime},C^{\prime\prime}\in\mathcal{C}\) such that \(|B^{\prime}\cap C^{\prime}|=|B^{\prime}\cap C^{\prime\prime}|=1\), with \(\{x^{\prime}\}=B^{\prime}\cap C^{\prime}\) and \(\{x^{\prime\prime}\}=B^{\prime}\cap C^{\prime\prime}\). Then there exists \(g\in\mathrm{WStab}_{H}(B)\) such that \(g(x^{\prime})=x^{\prime\prime}\). Then \(g/\mathcal{C}(C^{\prime})=C^{\prime\prime}\) and so \(\mathrm{WStab}_{H}(B)/\mathcal{C}\) is transitive on \(B^{\prime}/\mathcal{C}\). This implies that the \(\mathcal{B}^{\prime}\)-fixing block system of \(H/\mathcal{C}\leq K\) has at least two blocks, contradicting the assumption that \(K\) is \(9/8\)-closed. Thus we may assume that \(|B\cap C|\geq 2\) for some \(B\in\mathcal{B}\) and \(C\in\mathcal{C}\).
Suppose \(|B\cap C|\geq 2\) for some \(B\in\mathcal{B}\) and \(C\in\mathcal{C}\). Then \(B\cap C\) is a block of \(H\) with \(\mathcal{D}\) the block system of \(H\) that contains \(B\cap C\). As \(G=K\wr S_{n}\) and \(\mathcal{D}\prec\mathcal{C}\), we see that
\(G\) contains a subgroup permutation isomorphic to \(H/\mathcal{D}\wr S_{D}\). By the minimality of \(n\), we see that \(K\wr S_{n-|D|}\) is \(3/2\)-closed. As \(\mathcal{B}\neq\mathcal{C}\), repeating the previous argument if necessary, we see there are \(m\leq n\) and a block system \(\mathcal{F}\prec\mathcal{B}\) (and \(\mathcal{F}\prec\mathcal{C}\)) such that \(K\wr S_{n-m}\) is \(3/2\)-closed and \(|(B/\mathcal{F})\cap(C/\mathcal{F})|\leq 1\) for every \(B\in\mathcal{B}\) and \(C\in\mathcal{C}\). By the arguments in the preceding paragraph we see that \(K\) is not \(9/8\)-closed, a contradiction. Hence \(K\wr S_{n}\) is \(3/2\)-closed.
Let \(G\) be \(3/2\)-closed. As a \(9/8\)-closed group is \(3/2\)-closed, we may assume that \(G\) is not \(9/8\)-closed. Then there exists a transitive subgroup \(H\leq G\) with a normal block system \(\mathcal{B}\) for which the \(\mathcal{B}\)-fixing block system \(\mathcal{E}\) is not \(\{\mathbb{Z}_{n}\}\). As \(G\) is \(3/2\)-closed, we see that \(\mathcal{E}=\mathcal{B}\) and \(\operatorname{fix}_{H}(\mathcal{B})^{B}=S_{B}\) for every \(B\in\mathcal{B}\). We may then write \(H=(H/\mathcal{B})\wr S_{k}\), where \(k=|B|\), \(B\in\mathcal{B}\). By Lemma 6.7, there is a block system \(\mathcal{C}\) of \(G\) such that \(G\cong G/\mathcal{C}\wr S_{m}\) for some \(m|n\). The result now follows by Theorem 6.8.
**Definition 6.11**.: Let \(G\leq S_{n}\) be a transitive group. By the **\(3/2\)-closure of \(G\)**, denoted \(G^{(3/2)}\), we mean the intersection of all \(3/2\)-closed groups that contain \(G\).
We now verify that the \(3/2\)-closure of \(G\) is indeed \(3/2\)-closed.
**Lemma 6.12**.: _Let \(G\leq S_{n}\) be a transitive group. Then \(G^{(3/2)}\) is \(3/2\)-closed._
Proof.: Let \(H\leq G^{(3/2)}\) be transitive with a block system \(\mathcal{B}\). Let \(L\) be a \(3/2\)-closed group that contains \(G\), and let \(K\leq L\) be the largest subgroup of \(L\) that has \(\mathcal{B}\) as a block system. Then \(H\leq K\). Also, if \(\operatorname{WStab}_{H}(B)\neq 1\) for some \(B\in\mathcal{B}\), then the \(\mathcal{B}\)-fixing block system of \(H\) is \(\mathcal{B}\). As \(H\leq K\leq L\), the \(\mathcal{B}\)-fixing block system of \(K\) is \(\mathcal{B}\), and \(\operatorname{fix}_{K}(\mathcal{B})^{B}=S_{B}\) for every \(B\in\mathcal{B}\). Let \(M\) be the largest subgroup of \(G^{(3/2)}\) that has \(\mathcal{B}\) as a block system. As \(L\) is arbitrary, the \(\mathcal{B}\)-fixing block system of \(M\) is \(\mathcal{B}\) and \(\operatorname{fix}_{M}(\mathcal{B})^{B}=S_{B}\) for every \(B\in\mathcal{B}\). Thus \(G^{(3/2)}\) is \(3/2\)-closed.
**Theorem 6.13**.: _Let \(x,y\in S_{n}\) such that \(\langle x\rangle\) and \(\langle y\rangle\) are regular cyclic subgroups. Then \(\langle x\rangle\) and \(\langle y\rangle\) are conjugate in \(G=\langle x,y\rangle^{(3/2)}\). Consequently, every Cayley object of \(\mathbb{Z}_{n}\) with a \(3/2\)-closed automorphism group is a CI-object of \(\mathbb{Z}_{n}\). In particular, every unit circulant digraph of order \(n\) is a CI-digraph of \(\mathbb{Z}_{n}\)._
Proof.: By Theorem 5.19, we may assume that \(G\) is not \(9/8\)-closed, in which case by Theorem 6.10 we see that \(G=K\wr S_{m}\) for some \(9/8\)-closed group \(K\) and \(m|n\). Let \(\mathcal{B}\) be the block system of \(G\) with blocks of size \(m\). By Theorem 5.19 there is \(\delta\in G\) such that \(\langle\delta^{-1}y\delta\rangle/\mathcal{B}=\langle x\rangle/\mathcal{B}\). We may thus assume without loss of generality that \(\langle y\rangle/\mathcal{B}=\langle x\rangle/\mathcal{B}\). As for every \(a\in\mathbb{Z}_{n}^{*}\) we have \(\langle y\rangle\) is conjugate to \(\langle x\rangle\) if and only if \(\langle y\rangle\) is conjugate to \(\langle x^{a}\rangle=\langle x\rangle\), we may assume without loss of generality that \(y/\mathcal{B}=x/\mathcal{B}\). We may then assume that \(\delta/\mathcal{B}\in\langle x\rangle/\mathcal{B}\) as a regular abelian group is self-centralizing [9, Corollary 2.2.18], in which case we may assume without loss of generality that \(\delta/\mathcal{B}=1\). As \(G=K\wr S_{m}\), we have \(\operatorname{fix}_{G}(\mathcal{B})^{B}=S_{B}\) for every \(B\in\mathcal{B}\), so such a \(\delta\) may be chosen in \(\operatorname{fix}_{G}(\mathcal{B})\leq G\), and so \(\langle x\rangle\) and \(\langle y\rangle\) are conjugate in \(G\).
Now let \(X\) be a Cayley object of \(\mathbb{Z}_{n}\) in some class \(\mathcal{K}\) of combinatorial objects such that \(\operatorname{Aut}(X)\) is \(3/2\)-closed. By the first part of this result, if \(\phi\in S_{n}\) such that \(\phi^{-1}(\mathbb{Z}_{n})_{L}\phi\leq\operatorname{Aut}(X)\), then there exists \(\delta\in\operatorname{Aut}(X)\) such that \(\delta^{-1}\phi^{-1}(\mathbb{Z}_{n})_{L}\phi\delta=(\mathbb{Z}_{n})_{L}\). Hence by Lemma 3.6, we have that \(X\) is a CI-object of \(\mathbb{Z}_{n}\). In particular, by Theorem 6.5 the automorphism group of a unit circulant digraph is \(3/2\)-closed. Thus every unit circulant digraph is a CI-digraph of \(\mathbb{Z}_{n}\) and the result follows.
With the idea of \(3/2\)-closed groups, we can define "unit" Cayley digraphs of groups other than the cyclic group, as well as double coset digraphs.
**Definition 6.14**.: Let \(G\) be a group, \(H\leq G\), and \(S\subset G\) such that \(HSH=S\). We say that the double coset digraph \(\operatorname{Cos}(G,H,S)\) is a **unit coset digraph of \(G\)** if \(\operatorname{Aut}(\operatorname{Cos}(G,H,S))\) is \(3/2\)-closed.
**Problem 6.15**.: _For which groups \(G\) is it true that every unit Cayley digraph of \(G\) is a CI-digraph of \(G\)?_
One may ask why, in the previous problem, we did not generalize the notion of a CI-digraph to double coset digraphs, and pose Problem 6.15 in that context. The reason for this is that it was shown in [3] that the isomorphism problem for coset digraphs of \(G\) is equivalent to the isomorphism problem for Cayley digraphs of \(G\). There is thus no need for the more general form.
# Linear Optimal Partial Transport Embedding

Yikun Bai, Ivan Medri, Rocio Diaz Martin, Rana Muhammad Shahroz Khan, Soheil Kolouri

arXiv:2302.03232 (2023), http://arxiv.org/abs/2302.03232v5
###### Abstract
Optimal transport (OT) has gained popularity due to its various applications in fields such as machine learning, statistics, and signal processing. However, the balanced mass requirement limits its performance in practical problems. To address these limitations, variants of the OT problem, including unbalanced OT, Optimal partial transport (OPT), and Hellinger Kantorovich (HK), have been proposed. In this paper, we propose the Linear optimal partial transport (LOPT) embedding, which extends the (local) linearization technique on OT and HK to the OPT problem. The proposed embedding allows for faster computation of OPT distance between pairs of positive measures. Besides our theoretical contributions, we demonstrate the LOPT embedding technique in point-cloud interpolation and PCA analysis.
## 1 Introduction
The Optimal Transport (OT) problem has found numerous applications in machine learning (ML), computer vision, and graphics. The probability metrics and dissimilarity measures emerging from the OT theory, e.g., Wasserstein distances and their variations, are used in diverse applications, including training generative models (Arjovsky et al., 2017; Genevay et al., 2017; Liu et al., 2019), domain adaptation (Courty et al., 2014, 2017), Bayesian inference (Kim et al., 2013), regression (Janati et al., 2019), clustering (Ye et al., 2017), learning from graphs (Kolouri et al., 2020) and point sets (Naderializadeh et al., 2021; Nguyen et al., 2023), to name a few. These metrics define a powerful geometry for comparing probability measures with numerous desirable properties, for instance, parameterized geodesics (Ambrosio et al., 2005), barycenters (Cuturi & Doucet, 2014), and a weak Riemannian structure (Villani, 2003).
In large-scale machine learning applications, optimal transport approaches face two main challenges. First, the OT problem is computationally expensive. This has motivated many approximations that lead to significant speedups (Cuturi, 2013; Chizat et al., 2020; Scetbon & marco cuturi, 2022). Second, while OT is designed for comparing probability measures, many ML problems require comparing non-negative measures with varying total amounts of mass. This has led to the recent advances in unbalanced optimal transport (Chizat et al., 2015, 2018; Liero et al., 2018) and optimal partial transport (Caffarelli & McCann, 2010; Figalli, 2010; Figalli & Gigli, 2010). Such unbalanced/partial optimal transport formulations have been recently used to improve minibatch optimal transport (Nguyen et al., 2022) and for point-cloud registration (Bai et al., 2022).
Comparing \(K\) (probability) measures requires the pairwise calculation of transport-based distances, which, despite the significant recent computational speed-ups, remains to be relatively expensive. To address this problem, Wang et al. (2013) proposed the Linear Optimal Transport (LOT) framework, which linearizes the 2-Wasserstein distance utilizing its weak Riemannian structure. In short, the probability measures are embedded into the tangent space at a fixed reference measure (e.g., the measures' Wasserstein barycenter) through a logarithmic map. The Euclidean distances between the embedded measures then approximate the 2-Wasserstein distance between the probability measures. The LOT framework is computationally attractive as it only requires the computation of one optimal transport problem per input measure, reducing the otherwise quadratic cost to linear. Moreover, the framework provides theoretical guarantees on convexifying certain sets of probability measures (Moosmuller & Cloninger, 2023; Aldroubi et al., 2021), which is critical in supervised and unsupervised learning from sets of probability measures. The LOT embedding has recently found diverse applications, from comparing collider events in physics (Cai et al., 2020) and comparing medical images (Basu et al., 2014; Kundu et al., 2018) to permutation invariant pooling for comparing graphs (Kolouri et al., 2020) and point sets (Naderializadeh et al., 2021).
Many applications in ML involve comparing non-negative measures (often empirical measures) with varying total amounts of mass, e.g., domain adaptation (Fatras et al., 2021). Moreover, OT distances (or dissimilarity measures) are often not
robust against outliers and noise, resulting in potentially high transportation costs for outliers. Many recent publications have focused on variants of the OT problem that allow for comparing non-negative measures with unequal mass. For instance, the optimal partial transport (OPT) problem (Caffarelli and McCann, 2010; Figalli, 2010; Figalli and Gigli, 2010), Kantorovich-Rubinstein norm (Guittet, 2002; Lellmann et al., 2014), and the Hellinger-Kantorovich distance (Chizat et al., 2018; Liero et al., 2018). These methods fall under the broad category of "unbalanced optimal transport" (Chizat et al., 2018; Liero et al., 2018). The existing solvers for "unbalanced optimal transport" problems are generally as expensive or more expensive than the OT solvers. Hence, computation time remains a main bottleneck of these approaches.
To reduce the computational burden for comparing unbalanced measures, Cai et al. (2022) proposed a clever extension for the LOT framework to unbalanced nonnegative measures by linearizing the Hellinger-Kantorovich, denoted as Linearized Hellinger-Kantorovich (LHK), distance, with many desirable theoretical properties. However, an unintuitive caveat about HK and LHK formulation is that the geodesic for the transported portion of the mass does not resemble the OT geodesic. In particular, the transported mass does not maintain a constant mass as it is transported (please see Figure 1). In contrast, OPT behaves exactly like OT for the transported mass with the trade-off of losing the Riemannian structure of HK.
**Contributions:** In this paper, inspired by OT geodesics, we provide an OPT interpolation technique using its dynamic formulation and explain how to compute it for empirical distributions using barycentric projections. We use this interpolation to embed the space of measures into a Euclidean space using optimal partial transport with respect to a reference measure. This allows us to extend the LOT framework to LOPT, a linearized version of OPT. Thus, we reduce the computational burden of OPT while maintaining the decoupling properties between noise (created and destroyed mass) and signal (transported mass) of OPT. We propose a LOPT discrepancy measure and a LOPT interpolating curve and contrast them with their OPT counterparts. Finally, we demonstrate applications of the new framework in point cloud interpolation and PCA analysis, showing that the new technique is more robust to noise.
**Organization:** In section 2, we review Optimal Transport Theory and the Linear Optimal Transport framework to set the basis and intuitions on which we build our new techniques. In Section 3 we review Optimal Partial Transport Theory and present an explicit solution to its Dynamic formulation that we use to introduce the Linear Optimal Partial Transport framework (LOPT). We define LOPT Embedding, LOPT Discrepancy, LOPT interpolation and give explicit ways to work with empirical data. In Section 4 we show applications of the LOPT framework to approximate OPT distances, to interpolate between point cloud datasets, and to preprocess data for PCA analysis. In the appendix, we provide proofs for all the results, new or old, for which we could not find a proof in the literature.
## 2 Background: OT and LOT
### Static Formulation of Optimal Transport
Let \(\mathcal{P}(\Omega)\) be the set of Borel probability measures defined in a convex compact subset \(\Omega\) of \(\mathbb{R}^{d}\), and consider \(\mu^{0},\mu^{j}\in\mathcal{P}(\Omega)\). The Optimal Transport (OT) problem between \(\mu^{0}\) and \(\mu^{j}\) is that of finding the cheapest way to transport _all_ the mass distributed according to the _reference_ measure \(\mu^{0}\) onto a new distribution of mass determined by the _target_ measure \(\mu^{j}\). Mathematically, it was stated by Kantorovich as the minimization problem
\[OT(\mu^{0},\mu^{j}):=\inf_{\gamma\in\Gamma(\mu^{0},\mu^{j})}C( \gamma;\mu^{0},\mu^{j}) \tag{1}\] \[\text{for}\quad\ C(\gamma;\mu^{0},\mu^{j}):=\int_{\Omega^{2}}\|x ^{0}-x^{j}\|^{2}d\gamma(x^{0},x^{j}), \tag{2}\]
where \(\Gamma(\mu^{0},\mu^{j})\) is the set of all joint probability measures in \(\Omega^{2}\) with marginals \(\mu^{0}\) and \(\mu^{j}\). A measure \(\gamma\in\Gamma(\mu^{0},\mu^{j})\) is called a _transportation plan_, and given measurable sets \(A,B\in\Omega\), the coupling \(\gamma(A\times B)\) describes how much mass originally in the set \(A\) is transported into the set \(B\). The squared of the Euclidean distance1\(\|x^{0}-x^{j}\|^{2}\) is interpreted as the cost of transporting a unit mass located at \(x^{0}\) to \(x^{j}\). Therefore, \(C(\gamma;\mu^{0},\mu^{j})\) represents the total cost of moving \(\mu^{0}\) to \(\mu^{j}\) according to \(\gamma\). Finally, we will denote the set of all plans that achieve the infimum in (1), which is non-empty (Villani, 2003), as \(\Gamma^{*}(\mu^{0},\mu^{j})\).
Footnote 1: More general cost functions might be used, but they are beyond the scope of this article.
Under certain conditions (e.g. when \(\mu^{0}\) has continuous density), an optimal plan \(\gamma\) can be induced by a rule/map \(T\) that takes all the mass at each position \(x\) to a unique point \(T(x)\). If that is the case, we say that \(\gamma\) does not split mass and that it is **induced by a map \(\mathbf{T}\)**. In fact, it is concentrated on the graph of \(T\) in the sense that for all measurable sets \(A,B\subset\Omega\)
\(\gamma(A\times B)=\mu^{0}(\{x\in A:\,T(x)\in B\})\), and we will write it as the _pushforward_\(\gamma=(\mathrm{id}\times T)_{\#}\mu^{0}\). Hence, (1) reads as
\[OT(\mu^{0},\mu^{j})=\int_{\Omega}\|x-T(x)\|^{2}d\mu^{0}(x) \tag{3}\]
The function \(T:\Omega\to\Omega\) is called a _Monge map_, and when \(\mu^{0}\) is absolutely continuous it is unique (Brenier, 1991).
Finally, the square root of the optimal value \(OT(\cdot,\cdot)\) is exactly the so-called **Wasserstein distance**, \(W_{2}\), in \(\mathcal{P}(\Omega)\)(Villani, 2003, Th.7.3), and we will also refer to \(OT(\cdot,\cdot)\) itself as the **OT squared distance**. In addition, with this distance, \(\mathcal{P}(\Omega)\) is not only a metric space but also a Riemannian manifold (Villani, 2003). In particular, the tangent space of any \(\mu\in\mathcal{P}(\Omega)\) is \(\mathcal{T}_{\mu}=L^{2}(\Omega;\mathbb{R}^{d},\mu)=\{u:\Omega\to\mathbb{R}^{d}: \|u\|_{\mu}^{2}<\infty\}\), where
\[\|u\|_{\mu}^{2}:=\int_{\Omega}\|u(x)\|^{2}d\mu(x). \tag{4}\]
### Dynamic Formulation of Optimal Transport
To understand the framework of Linear Optimal Transport (LOT) we will use the dynamic formulation of the OT problem. Optimal plans and maps can be viewed as a static way of matching two distributions. They tell us where each mass in the initial distribution should end, but they do not tell the full story of how the system evolves from initial to final configurations.
In the dynamic formulation, we consider \(\rho\in\mathcal{P}([0,1]\times\Omega)\) a curve of measures parametrized in time that describes the distribution of mass \(\rho_{t}:=\rho(t,\cdot)\in\mathcal{P}(\Omega)\) at each instant \(0\leq t\leq 1\). We will require the curve to be sufficiently smooth, to have boundary conditions \(\rho_{0}=\mu^{0}\), \(\rho_{1}=\mu^{j}\), and to satisfy the conservation of mass law. Then, it is well known that there exists a velocity vector field \(v_{t}:=v(t,\cdot)\) such that \(\rho_{t}\) satisfies the continuity equation2 with boundary conditions
Footnote 2: The continuity equation is satisfied weakly or in the sense of distributions. See (Villani, 2003; Santambrogio, 2015).
\[\partial_{t}\rho+\nabla\cdot\rho v=0,\qquad\rho_{0}=\mu^{0},\quad\rho_{1}=\mu^ {j}. \tag{5}\]
Figure 1: The depiction of the HK and OPT geodesics between two measures, at times \(t\in\{0,0.25,0.5,0.75,1\}\). The top row (Blue) represents two initial deltas of mass one located at positions -1.2 and -1. The bottom row (Purple) shows two final deltas of mass one located at 1 and 1.2. At intermediate time steps \(t=0.25,0.5,0.75\), the transported part (middle delta moving from -1 to 1) changes mass for HK while its mass remains constant for OPT. Outer masses (located at -1.2 for initial time \(t=0\), and at 1.2 for final time \(t=1\)) are being destroyed and created, so mass changes are expected. Notably, mass is created/destroyed with a linear rate for OPT and a nonlinear rate for HK. See Appendix H.4 for further analysis.
The length3 of the curve can be stated as \(\int_{[0,1]\times\Omega}\|v\|^{2}d\rho:=\int_{0}^{1}\|v_{t}\|_{\rho_{t}}^{2}dt\), for \(\|\cdot\|_{\rho_{t}}\) as in (4), and \(OT(\mu^{0},\mu^{j})\) coincides with the length of the shortest curve between \(\mu^{0}\) and \(\mu^{j}\)(Benamou & Brenier, 2000). Hence, the dynamical formulation of the OT problem (1) reads as
Footnote 3: Precisely, the length of the curve \(\rho\), with respect to the Wasserstein distance, should be \(\int_{0}^{1}\|v_{t}\|_{\rho_{t}}dt\), but this will make no difference in the solutions of (6) since they are constant speed geodesics.
\[OT(\mu^{0},\mu^{j})=\inf_{(\rho,v)\in\mathcal{CE}(\mu^{0},\mu^{j})}\int_{[0,1] \times\Omega}\|v\|^{2}d\rho, \tag{6}\]
where \(\mathcal{CE}(\mu^{0},\mu^{j})\) is the set of pairs \((\rho,v)\), where \(\rho\in\mathcal{P}([0,1]\times\Omega)\), and \(v:[0,1]\times\Omega\to\mathbb{R}^{d}\), satisfying (5).
Under the assumption of existence of an optimal Monge map \(T\), an optimal solution for (6) can be given explicitly and is pretty intuitive. If a particle starts at position \(x\) and finishes at position \(T(x)\), then for \(0<t<1\) it will be at the point
\[T_{t}(x):=(1-t)x+tT(x). \tag{7}\]
Then, varying both the time \(t\) and \(x\in\Omega\), the mapping (7) can be interpreted as a flow whose time velocity4 is
Footnote 4: For each \((t,x)\in(0,1)\times\Omega\), the vector \(v_{t}(x)\) is well defined as \(T_{t}\) is invertible. See (Santambrogio, 2015, Lemma 5.29).
\[v_{t}(x)=T(x_{0})-x_{0},\qquad\text{ for }x=T_{t}(x_{0}). \tag{8}\]
To obtain the curve of probability measures \(\rho_{t}\), one can evolve \(\mu^{0}\) through the flow \(T_{t}\) using the formula \(\rho_{t}(A)=\mu_{0}(T_{t}^{-1}(A))\) for any measurable set \(A\). That is, \(\rho_{t}\) is the _push-forward_ of \(\mu^{0}\) by \(T_{t}\)
\[\rho_{t}=(T_{t})_{\#}\mu^{0},\qquad 0\leq t\leq 1. \tag{9}\]
The pair \((\rho,v)\) defined by (9) and (8) satisfies the continuity equation (5) and solves (6). Moreover, the curve \(\rho_{t}\) is a _constant speed geodesic_ in \(\mathcal{P}(\Omega)\) between \(\mu^{0}\) and \(\mu^{j}\)(Figalli & Glaudo, 2021), i.e., it satisfies that for all \(0\leq s\leq t\leq 1\)
\[\sqrt{OT(\rho_{s},\rho_{t})}=(t-s)\sqrt{OT(\rho_{0},\rho_{1})}. \tag{10}\]
A confirmation of this comes from comparing the OT cost (3) with (8) obtaining
\[OT(\mu^{0},\mu^{j})=\int_{\Omega}\|v_{0}(x)\|^{2}d\mu^{0}(x) \tag{11}\]
which tells us that we only need the speed at the initial time to compute the total length of the curve. Moreover, \(OT(\mu^{0},\mu^{j})\) coincides with the squared norm of the tangent vector \(v_{0}\) in the tangent space \(\mathcal{T}_{\mu^{0}}\) of \(\mathcal{P}(\Omega)\) at \(\mu^{0}\).
### Linear Optimal Transport Embedding
Inspired by the induced Riemannian geometry of the \(OT\) squared distance, Wang et al. (2013) proposed the so-called **Linear Optimal Transportation** (LOT) framework. Given two target measures \(\mu^{i},\mu^{j}\), the main idea relies on considering a reference measure \(\mu^{0}\) and embed these target measures into the tangent space \(\mathcal{T}_{\mu^{0}}\). This is done by identifying each measure \(\mu^{j}\) with the curve (9) minimizing \(OT(\mu^{0},\mu^{j})\) and computing its velocity (tangent vector) at \(t=0\) using (8).
Formally, let us fix a continuous probability reference measure \(\mu^{0}\). Then, the **LOT embedding**(Moosmuller & Cloninger, 2023) is defined as
\[\mu^{j}\mapsto u^{j}:=T^{j}-\mathrm{id}\qquad\forall\;\mu^{j}\in\mathcal{P}(\Omega) \tag{12}\]
where \(T^{j}\) is the optimal Monge map between \(\mu^{0}\) and \(\mu^{j}\). Notice that by (3), (4) and (11) we have
\[\|u^{j}\|_{\mu^{0}}^{2}=OT(\mu^{0},\mu^{j}). \tag{13}\]
After this embedding, one can use the distance in \(\mathcal{T}_{\mu^{0}}\) between the projected measures to define a new distance in \(\mathcal{P}(\Omega)\) that can be used to approximate \(OT(\mu^{i},\mu^{j})\). The **LOT squared distance** is defined as
\[LOT_{\mu^{0}}(\mu^{i},\mu^{j}):=\|u^{i}-u^{j}\|_{\mu^{0}}^{2}. \tag{14}\]
### LOT in the Discrete Setting
For discrete probability measures \(\mu^{0},\mu^{j}\) of the form
\[\mu^{0}=\sum_{n=1}^{N_{0}}p_{n}^{0}\delta_{x_{n}^{0}},\qquad\mu^{j}=\sum_{n=1}^{ N_{j}}p_{n}^{j}\delta_{x_{n}^{j}}, \tag{15}\]
a Monge map \(T^{j}\) for \(OT(\mu^{0},\mu^{j})\) may not exist. Following Wang et al. (2013), in this setting, the target measure \(\mu^{j}\) can be replaced by a new measure \(\hat{\mu}^{j}\) for which an optimal transport Monge map exists. For that, given an optimal plan \(\gamma^{j}\in\Gamma^{*}(\mu^{0},\mu^{j})\), it can be viewed as an \(N_{0}\times N_{j}\) matrix whose value at position \((n,m)\) represents how much mass from \(x_{n}^{0}\) should be taken to \(x_{m}^{j}\). Then, we define the **OT barycentric projection5** of \(\mu^{j}\)**with respect to \(\mu^{0}\)** as
Footnote 5: We refer to (Ambrosio et al., 2005) for the rigorous definition.
\[\hat{\mu}^{j}:=\sum_{n=1}^{N_{0}}p_{n}^{0}\delta_{\hat{x}_{n}^{j}},\,\text{ where}\quad\hat{x}_{n}^{j}:=\frac{1}{p_{n}^{0}}\sum_{m=1}^{N_{j}}\gamma_{n,m}^{j}x_{m}^{j}. \tag{16}\]
The new measure \(\hat{\mu}^{j}\) is regarded as an \(N_{0}\)-point representation of the target measure \(\mu^{j}\). The following lemma guarantees the existence of a Monge map between \(\mu^{0}\) and \(\hat{\mu}^{j}\).
**Lemma 2.1**.: _Let \(\mu^{0}\) and \(\mu^{j}\) be two discrete probability measures as in (15), and consider an OT barycentric projection \(\hat{\mu}^{j}\) of \(\mu^{j}\) with respect to \(\mu^{0}\) as in (16). Then, the map \(x_{n}^{0}\mapsto\hat{x}_{n}^{j}\) given by (16) solves the OT problem \(OT(\mu^{0},\hat{\mu}^{j})\)._
It is easy to show that if the optimal transport plan \(\gamma^{j}\) is induced by a Monge map, then \(\hat{\mu}^{j}=\mu^{j}\). As a consequence, the OT barycentric projection is an actual projection in the sense that it is idempotent.
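For empirical data, the projection (16) is a simple matrix computation once an optimal plan is available. The sketch below is our own illustration (not the authors' implementation) and assumes the POT library (`ot`) for solving the discrete OT problem; all variable names are ours.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def ot_barycentric_projection(x0, p0, xj, pj):
    """N0-point representation hat{mu}^j of mu^j w.r.t. mu^0, as in (16).

    x0: (N0, d) support of mu^0, p0: (N0,) weights; likewise xj, pj for mu^j.
    """
    M = ot.dist(x0, xj)        # pairwise squared Euclidean costs (POT's default)
    gamma = ot.emd(p0, pj, M)  # an optimal plan, (N0 x Nj) matrix
    # hat{x}^j_n = (1/p^0_n) sum_m gamma_{nm} x^j_m
    return (gamma @ xj) / p0[:, None]
```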
Similar to the continuous case (12), given a discrete reference measure \(\mu^{0}\), we can define the **LOT embedding** for a discrete measure \(\mu^{j}\) as the rule
\[\mu^{j}\mapsto u^{j}:=[(\hat{x}_{1}^{j}-x_{1}^{0}),\ldots,(\hat{x}_{N_{0}}^{j}-x_{N_{0}}^{0})]. \tag{17}\]
The range \(\mathcal{T}_{\mu_{0}}\) of this map is identified with \(\mathbb{R}^{d\times N_{0}}\) with the norm given by \(\|u\|_{\mu^{0}}^{2}:=\sum_{n=1}^{N_{0}}\|u(n)\|^{2}p_{n}^{0}\), where \(u(n)\in\mathbb{R}^{d}\) denotes the \(n\)th entry of \(u\). We call \((\mathbb{R}^{d\times N_{0}},\|\cdot\|_{\mu^{0}})\) the embedding space.
By the discussion above, if the optimal plan \(\gamma^{j}\) for problem OT\((\mu^{0},\mu^{j})\) is induced by a Monge map, then the discrete embedding is consistent with (13) in the sense that
\[\|u^{j}\|_{\mu^{0}}^{2}=OT(\mu^{0},\hat{\mu}^{j})=OT(\mu^{0},\mu^{j}). \tag{18}\]
Hence, as in section 2.3, we can use the distance between embedded measures in \((\mathbb{R}^{d\times N_{0}},\|\cdot\|_{\mu_{0}})\) to define a _discrepancy_ in the space of discrete probabilities that can be used to approximate \(OT(\mu^{i},\mu^{j})\). The **LOT discrepancy7** is defined as
Footnote 7: In (Wang et al., 2013), LOT is defined by the infimum over all possible optimal pairs \((\gamma^{i},\gamma^{j})\). We do not distinguish these two formulations for convenience in this paper. Additionally, (19) is determined by the choice of \((\gamma^{i},\gamma^{j})\).
\[LOT_{\mu^{0}}(\mu^{i},\mu^{j}):=\|u^{i}-u^{j}\|_{\mu^{0}}^{2}. \tag{19}\]
We call it a _discrepancy_ because it is not a squared metric between discrete measures. It does not necessarily satisfy that \(LOT(\mu^{i},\mu^{j})\neq 0\) for every distinct \(\mu^{i},\mu^{j}\). Nevertheless, \(\|u^{i}-u^{j}\|_{\mu^{0}}\) is a metric in the embedding space.
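Computationally, the embedding (17) and the discrepancy (19) require only one OT solve per measure. The sketch below (ours, continuing the `ot_barycentric_projection` sketch above) makes this explicit.

```python
def lot_embed(x0, p0, xj, pj):
    """LOT embedding (17): u^j = [hat{x}^j_n - x^0_n]_n in R^{d x N0}."""
    return ot_barycentric_projection(x0, p0, xj, pj) - x0

def lot_discrepancy(x0, p0, xi, pi, xj, pj):
    """LOT discrepancy (19): ||u^i - u^j||^2_{mu^0}."""
    ui = lot_embed(x0, p0, xi, pi)
    uj = lot_embed(x0, p0, xj, pj)
    return float(np.sum((ui - uj) ** 2, axis=1) @ p0)
```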
### OT and LOT Geodesics in Discrete Settings
Let \(\mu^{i}\), \(\mu^{j}\) be discrete probability measures as in (15) (with '\(i\)' in place of \(0\)). If an optimal Monge map \(T\) for \(OT(\mu^{i},\mu^{j})\) exists, a constant speed geodesic \(\rho_{t}\) between \(\mu^{i}\) and \(\mu^{j}\), for the OT squared distance, can be found by mimicking (9). Explicitly, with \(T_{t}\) as in (7),
\[\rho_{t}=(T_{t})_{\#}\mu^{i}=\sum_{n=1}^{N_{i}}p_{n}^{i}\delta_{(1-t)x_{n}^{i} +tT(x_{n}^{i})}. \tag{20}\]
In practice, one replaces \(\mu^{j}\) by its OT barycentric projection with respect to \(\mu^{i}\) (and so, the existence of an optimal Monge map is guaranteed by Lemma 2.1).
Now, given a discrete reference \(\mu^{0}\), the LOT discrepancy provides a new structure on the space of discrete probability measures. Therefore, we can provide a substitute for the OT geodesic (20) between \(\mu^{i}\) and \(\mu^{j}\). Assume we have the embeddings \(\mu^{i}\mapsto u^{i}\), \(\mu^{j}\mapsto u^{j}\) as in (17). The geodesic between \(u^{i}\) and \(u^{j}\) in the LOT embedding space \(\mathbb{R}^{d\times N_{0}}\) has the simple form \(u_{t}=(1-t)u^{i}+tu^{j}\). This corresponds to the curve \(\hat{\rho}_{t}\) in \(\mathcal{P}(\Omega)\) induced by the map \(\hat{T}:\hat{x}^{i}_{n}\mapsto\hat{x}^{j}_{n}\)8 as
Footnote 8: This map can be understood as the one that transports \(\hat{\mu}^{i}\) onto \(\hat{\mu}^{j}\) pivoting on the reference: \(\hat{\mu}^{i}\mapsto\mu^{0}\mapsto\hat{\mu}^{j}\).
\[\hat{\rho}_{t}:=(\hat{T}_{t})_{\#}\hat{\mu}^{i}=\sum_{n=1}^{N_{0}}p_{n}^{0} \delta_{x^{0}_{n}+u_{t}(n)}. \tag{21}\]
By abuse of notation, we call this curve the **LOT geodesic** between \(\mu^{i}\) and \(\mu^{j}\). Nevertheless, it is a _geodesic between their barycentric projections_ since it satisfies the following.
**Proposition 2.2**.: _Let \(\hat{\rho}_{t}\) be defined as (21), and \(\hat{\rho}_{0}=\hat{\mu}^{i},\hat{\rho}_{1}=\hat{\mu}^{j}\), then for all \(0\leq s\leq t\leq 1\)_
\[\sqrt{LOT_{\mu^{0}}(\hat{\rho}_{s},\hat{\rho}_{t})}=(t-s)\sqrt{LOT_{\mu^{0}}( \hat{\rho}_{0},\hat{\rho}_{1})}. \tag{22}\]
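In code, the curve (21) is just a linear interpolation of the embedded vectors pushed back to measures supported on \(N_{0}\) points; a minimal sketch (ours), continuing the functions above:

```python
def lot_geodesic(x0, p0, ui, uj, t):
    """Support and weights of hat{rho}_t in (21) at time t in [0, 1]."""
    ut = (1 - t) * ui + t * uj   # straight line in the embedding space
    return x0 + ut, p0           # particle n sits at x^0_n + u_t(n) with mass p^0_n
```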
## 3 Linear Optimal Partial Transport Embedding
### Static Formulation of Optimal Partial Transport
In addition to mass transportation, the OPT problem allows mass destruction at the source and mass creation at the target. Let \(\mathcal{M}_{+}(\Omega)\) denote the set of all positive finite Borel measures defined on \(\Omega\). For \(\lambda\geq 0\) the OPT problem between \(\mu^{0},\mu^{j}\in\mathcal{M}_{+}(\Omega)\) can be formulated as
\[OPT_{\lambda}(\mu^{0},\mu^{j}):=\inf_{\gamma\in\Gamma_{\leq}(\mu^{0},\mu^{j})}C(\gamma;\mu^{0},\mu^{j},\lambda) \tag{23}\] \[\text{for}\quad C(\gamma;\mu^{0},\mu^{j},\lambda):=\int_{\Omega^{2}}\|x^{0}-x^{j}\|^{2}d\gamma(x^{0},x^{j})+\lambda(|\mu^{0}-\gamma_{0}|+|\mu^{j}-\gamma_{1}|) \tag{24}\]
where \(|\mu^{0}-\gamma_{0}|\) is the total mass of \(\mu^{0}-\gamma_{0}\) (resp. \(|\mu^{j}-\gamma_{1}|\)), and \(\Gamma_{\leq}(\mu^{0},\mu^{j})\) denotes the set of all measures in \(\Omega^{2}\) with marginals \(\gamma_{0}\) and \(\gamma_{1}\) satisfying \(\gamma_{0}\leq\mu^{0}\) (i.e., \(\gamma_{0}(E)\leq\mu^{0}(E)\) for all measurable set \(E\)), and \(\gamma_{1}\leq\mu^{j}\). Here, the mass destruction and creation penalty is linear, parametrized by \(\lambda\). The set of minimizers \(\Gamma_{\leq}^{*}(\mu^{0},\mu^{j})\) of (23) is non-empty (Figalli, 2010). One can further restrict \(\Gamma_{\leq}(\mu^{0},\mu^{j})\) to the set of partial transport plans \(\gamma\) such that \(\|x^{0}-x^{j}\|^{2}<2\lambda\) for all \((x^{0},x^{j})\in\operatorname{supp}(\gamma)\)(Bai et al., 2022, Lemma 3.2). This means that if the usual transportation cost is greater than \(2\lambda\), it is better to create/destroy mass.
### Dynamic Formulation of Optimal Partial Transport
Adding a forcing term \(\zeta\) to the continuity equation (6), one can take into account curves that allow creation and destruction of mass. That is, those who break the conservation of mass law. Thus, it is natural that the minimization problem (23) can be rewritten (Chizat et al., 2018, Th. 5.2) into a dynamic formulation as
\[OPT_{\lambda}(\mu^{0},\mu^{j})=\inf_{(\rho,v,\zeta)\in\mathcal{FCE}(\mu^{0}, \mu^{j})}\int_{[0,1]\times\Omega}\|v\|^{2}d\rho+\lambda|\zeta| \tag{25}\]
where \(\mathcal{FCE}(\mu^{0},\mu^{j})\) is the set of tuples \((\rho,v,\zeta)\) such that \(\rho\in\mathcal{M}_{+}([0,1]\times\Omega)\), \(\zeta\in\mathcal{M}([0,1]\times\Omega)\) (where \(\mathcal{M}\) stands for signed measures) and \(v:[0,1]\times\Omega\to\mathbb{R}^{d}\), satisfying
\[\partial_{t}\rho+\nabla\cdot\rho v=\zeta,\qquad\rho_{0}=\mu^{0},\quad\rho_{1}= \mu^{j}. \tag{26}\]
As in the case of OT, under certain conditions on the minimizers \(\gamma\) of (23), one curve \(\rho_{t}\) that minimizes the dynamic formulation (25) is quite intuitive. We show in the next proposition that it consists of three parts \(\gamma_{t}\), \((1-t)\nu_{0}\) and \(t\nu^{j}\) (see (27), (28), and (29) below). The first is a curve that only transports mass, and the second and third destroy and create mass at constant rates \(|\nu_{0}|\), \(|\nu^{j}|\), respectively.
**Proposition 3.1**.: _Let \(\gamma^{*}\in\Gamma_{\leq}^{*}(\mu^{0},\mu^{j})\) be of the form \(\gamma^{*}=(\mathrm{id}\times T)_{\#}\gamma_{0}^{*}\) for \(T:\Omega\to\Omega\) a (measurable) map. Let_
\[\nu_{0}:=\mu^{0}-\gamma_{0}^{*},\quad\nu^{j}:=\mu^{j}-\gamma_{1}^{ *}, \tag{27}\] \[T_{t}(x):=(1-t)x+tT(x),\quad\gamma_{t}:=(T_{t})_{\#}\gamma_{0}^{*}. \tag{28}\]
_Then, an optimal solution \((\rho,v,\zeta)\) for (25) is given by_
\[\rho_{t}:=\gamma_{t}+(1-t)\nu_{0}+t\nu^{j}, \tag{29}\] \[v_{t}(x):=T(x_{0})-x_{0},\qquad\text{if }x=T_{t}(x_{0}), \tag{30}\] \[\zeta_{t}:=\nu^{j}-\nu_{0}. \tag{31}\]
_Moreover, plugging in \((\rho,v,\zeta)\) into (25), it holds that_
\[OPT_{\lambda}(\mu^{0},\mu^{j})=\|v_{0}\|_{\gamma_{0}^{*},2\lambda}^{2}+ \lambda(|\nu_{0}|+|\nu^{j}|), \tag{32}\]
_where \(v_{0}(x)=T(x)-x\) (i.e., \(v_{t}\) at time \(t=0\)), and_
\[\|v\|_{\mu,2\lambda}^{2}:=\int_{\Omega}\min(\|v\|^{2},2\lambda)d\mu,\qquad \text{for }v:\Omega\to\mathbb{R}^{d}.\]
In analogy to the OT squared distance, we also call the optimal partial cost (32) as the **OPT squared distance**.
### Linear Optimal Partial Transport Embedding
**Definition 3.2**.: Let \(\mu^{0}\), \(\mu^{j}\in\mathcal{M}_{+}(\Omega)\) such that \(OPT_{\lambda}(\mu^{0},\mu^{j})\) is solved by a plan induced by a map. The **LOPT embedding** of \(\mu^{j}\) with respect to \(\mu^{0}\) is defined as
\[\mu^{j}\mapsto(u^{j},\bar{\mu}^{j},\nu^{j}):=(v_{0},\gamma_{0},\nu^{j}) \tag{33}\]
where \(v_{0},\gamma_{0},\nu^{j}\) are defined as in Proposition 3.1.
Let us compare the LOPT (33) and LOT (12) embeddings. The first component \(v_{0}\) represents the tangent of the curve that transports mass from the reference to the target. This is exactly the same as the LOT embedding. In contrast to LOT, the second component \(\gamma_{0}\) is necessary since we need to specify what part of the reference is being transported. The third component \(\nu^{j}\) can be thought of as the tangent vector of the part that creates mass. There is no need to save the destroyed mass because it can be inferred from the other quantities.
Now, let \(\mu^{0}\wedge\mu^{j}\) be the _minimum measure_9 between \(\mu^{0}\) and \(\mu^{j}\). By the above definition, \(\mu^{0}\mapsto(u^{0},\bar{\mu}^{0},\nu^{0})=(0,\mu^{0},0)\). Therefore, (32) can be rewritten as
Footnote 9: Formally, \(\mu^{0}\wedge\mu^{j}(B):=\inf\left\{\mu^{0}\left(B_{1}\right)+\mu^{j}\left(B_{2}\right)\right\}\) for every Borel set \(B\), where the infimum is taken over all partitions of \(B\), i.e. \(B=B_{1}\cup B_{2}\), \(B_{1}\cap B_{2}=\emptyset\), given by Borel sets \(B_{1}\), \(B_{2}\).
\[OPT_{\lambda}(\mu^{0},\mu^{j})=\|u^{0}-u^{j}\|_{\bar{\mu}^{0}\wedge\bar{\mu}^{j},2\lambda}^{2}+\lambda(|\bar{\mu}^{0}-\bar{\mu}^{j}|+|\nu^{0}-\nu^{j}|) \tag{34}\]
This motivates the definition of the **LOPT discrepancy**.10
Footnote 10: \(LOPT_{\lambda}\) is not a rigorous metric.
**Definition 3.3**.: Consider a reference \(\mu^{0}\in\mathcal{M}_{+}(\Omega)\) and target measures \(\mu^{i},\mu^{j}\in\mathcal{M}_{+}(\Omega)\) such that \(OPT_{\lambda}(\mu^{0},\mu^{i})\) and \(OPT_{\lambda}(\mu^{0},\mu^{j})\) can be solved by plans induced by mappings as in the hypothesis of Proposition 3.1. Let \((u^{i},\bar{\mu}^{i},\nu^{i})\) and \((u^{j},\bar{\mu}^{j},\nu^{j})\) be the LOPT embeddings of \(\mu^{i}\) and \(\mu^{j}\) with respect to \(\mu^{0}\). The **LOPT discrepancy** between \(\mu^{i}\) and \(\mu^{j}\) with respect to \(\mu^{0}\) is defined as
\[LOPT_{\mu^{0},\lambda}(\mu^{i},\mu^{j}):=\|u^{i}-u^{j}\|_{\bar{\mu}^{i}\wedge \bar{\mu}^{j},2\lambda}^{2}+\lambda(|\bar{\mu}^{i}-\bar{\mu}^{j}|+|\nu^{i}- \nu^{j}|). \tag{35}\]
Similar to the LOT framework, by equation (34), LOPT can recover OPT when \(\mu^{i}=\mu^{0}\). That is,
\[LOPT_{\mu^{0},\lambda}(\mu^{0},\mu^{j})=OPT_{\lambda}(\mu^{0},\mu^{j}).\]
### LOPT in the Discrete Setting
If \(\mu^{0},\mu^{j}\) are \(N_{0},N_{j}-\)size discrete non-negative measures as in (15) (but not necessarily with total mass 1), the OPT problem (23) can be written as
\[\min_{\gamma\in\Gamma_{\leq}(\mu^{0},\mu^{j})}\sum_{n,m}\|x_{n}^{0}-x_{m}^{j}\|^ {2}\gamma_{n,m}+\lambda(|p^{0}|+|p^{j}|-2|\gamma|)\]
where the set \(\Gamma_{\leq}(\mu^{0},\mu^{j})\) can be viewed as the subset of \(N_{0}\times N_{j}\) matrices with non-negative entries
\[\Gamma_{\leq}(\mu^{0},\mu^{j}):=\{\gamma\in\mathbb{R}_{+}^{N_{0}\times N_{j}}: \gamma 1_{N_{j}}\leq p^{0},\gamma^{T}1_{N_{0}}\leq p^{j}\},\]
where \(1_{N_{0}}\) denotes the \(N_{0}\times 1\) vector whose entries are \(1\) (resp. \(1_{N_{j}}\)), \(p^{0}=[p_{1}^{0},\ldots,p_{N_{0}}^{0}]\) is the vector of weights of \(\mu^{0}\) (resp. \(p^{j}\)), \(\gamma 1_{N_{j}}\leq p^{0}\) means that component-wise holds the '\(\leq\)' (resp. \(\gamma^{T}1_{N_{0}}\leq p^{j}\), where \(\gamma^{T}\) is the transpose of \(\gamma\)), and \(|p^{0}|=\sum_{n=1}^{N_{0}}|p_{n}^{0}|\) is the total mass of \(\mu^{0}\) (resp. \(|p^{j}|,|\gamma|\)). The marginals are \(\gamma_{0}:=\gamma 1_{N_{j}}\), and \(\gamma_{1}:=\gamma^{T}1_{N_{0}}\).
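For completeness, here is one way (our own sketch, not necessarily the solver used by the authors) to solve this discrete OPT problem exactly: pad each measure with a dummy point that absorbs the destroyed/created mass at cost \(\lambda\) per unit, and solve the resulting balanced OT problem. We again assume the POT library and the imports from the sketches above.

```python
def opt_partial(x0, p0, xj, pj, lam):
    """OPT_lambda(mu^0, mu^j) and an optimal (N0 x Nj) partial plan."""
    M = ot.dist(x0, xj)
    N0, Nj = M.shape
    Mp = np.zeros((N0 + 1, Nj + 1))
    Mp[:N0, :Nj] = M
    Mp[:N0, Nj] = lam            # destroying a unit of source mass costs lambda
    Mp[N0, :Nj] = lam            # creating a unit of target mass costs lambda
    a = np.append(p0, pj.sum())  # source dummy carries the total target mass
    b = np.append(pj, p0.sum())  # target dummy carries the total source mass
    G = ot.emd(a, b, Mp)         # balanced OT on the padded problem
    return float((Mp * G).sum()), G[:N0, :Nj]
```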
Similar to OT, when an optimal plan \(\gamma^{j}\) for \(OPT_{\lambda}(\mu^{0},\mu^{j})\) is not induced by a map, we can replace the target measure \(\mu^{j}\) by an **OPT barycentric projection**\(\hat{\mu}^{j}\) for which a map exists, allowing us to apply the LOPT embedding (see (33) and (40) below).
**Definition 3.4**.: Let \(\mu^{0}\) and \(\mu^{j}\) be positive discrete measures, and \(\gamma^{j}\in\Gamma_{\leq}^{*}(\mu^{0},\mu^{j})\). The **OPT barycentric projection11** of \(\mu^{j}\)**with respect to**\(\mu^{0}\) is defined as
Footnote 11: Notice that in (16) we had \(p_{n}^{0}=\sum_{m=1}^{N_{j}}\gamma_{n,m}^{j}\). This leads to introducing \(\hat{p}_{n}^{j}\) as in (37). That is, \(\hat{p}_{n}^{j}\) plays the role of \(p_{n}^{0}\) in the OPT framework. However, here \(\hat{p}_{n}^{j}\) depends on \(\gamma^{j}\) (on its first marginal \(\gamma_{0}^{j}\)) and not only on \(\mu^{0}\), and so we add a superscript ‘\(j\)’.
\[\hat{\mu}^{j} :=\sum_{n=1}^{N_{0}}\hat{p}_{n}^{j}\delta_{\hat{x}_{n}^{j}}, \qquad\text{ where } \tag{36}\] \[\hat{p}_{n}^{j} :=\sum_{m=1}^{N_{j}}\gamma_{n,m}^{j},\qquad 1\leq n\leq N_{0},\] (37) \[\hat{x}_{n}^{j} :=\begin{cases}\frac{1}{\hat{p}_{n}^{j}}\sum_{m=1}^{N_{j}}\gamma_ {n,m}^{j}x_{m}^{j}&\text{if }\hat{p}_{n}^{j}>0\\ x_{n}^{0}&\text{if }\hat{p}_{n}^{j}=0.\end{cases} \tag{38}\]
**Theorem 3.5**.: _In the same setting of Definition 3.4, the map \(x_{n}^{0}\mapsto\hat{x}_{n}^{j}\) given by (38) solves the problem \(OPT_{\lambda}(\mu^{0},\hat{\mu}^{j})\), in the sense that it induces the optimal partial plan \(\hat{\gamma}^{j}=\operatorname{diag}(\hat{p}_{1}^{j},\ldots,\hat{p}_{N_{0}}^{j})\)._
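Given a partial plan \(\gamma^{j}\), Definition 3.4 is again a direct matrix computation; a sketch (ours), reusing the `opt_partial` sketch above:

```python
def opt_barycentric_projection(x0, xj, gamma):
    """Weights hat{p}^j (37) and locations hat{x}^j (38) from a partial plan."""
    p_hat = gamma.sum(axis=1)          # first marginal of gamma
    x_hat = np.array(x0, dtype=float)  # default to x^0_n where hat{p}^j_n = 0
    pos = p_hat > 0
    x_hat[pos] = (gamma[pos] @ xj) / p_hat[pos][:, None]
    return p_hat, x_hat
```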
It is worth noting that when we take a barycentric projection of a measure, some information is lost. Specifically, the information about the part of \(\mu^{j}\) that is not transported from the reference \(\mu^{0}\). This has some minor consequences.
First, unlike (18), the optimal partial transport cost \(OPT_{\lambda}(\mu^{0},\mu^{j})\) changes when we replace \(\mu^{j}\) by \(\hat{\mu}^{j}\). Nevertheless, the following relation holds.
**Theorem 3.6**.: _In the same setting of Definition 3.4, if \(\gamma^{j}\) is induced by a map, then_
\[OPT_{\lambda}(\mu^{0},\mu^{j})=OPT_{\lambda}(\mu^{0},\hat{\mu}^{j})+\lambda(| \mu^{j}|-|\hat{\mu}^{j}|) \tag{39}\]
The second consequence12 is that the LOPT embedding of \(\hat{\mu}^{j}\) will always have a null third component. That is,
Footnote 12: This is indeed an advantage since it allows the range of the embedding to always have the same dimension \(N_{0}\times(d+1)\).
\[\hat{\mu}^{j}\mapsto([\hat{x}_{1}^{j}-x_{1}^{0},\ldots,\hat{x}_{N_{0}}^{j}-x_{ N_{0}}^{0}],\,\sum_{n=1}^{N_{0}}\hat{p}_{n}^{j}\delta_{x_{n}^{0}},\,0). \tag{40}\]
Therefore, we represent this embedding as \(\hat{\mu}^{j}\mapsto(u^{j},\hat{p}^{j})\), for \(u^{j}=[\hat{x}_{1}^{j}-x_{1}^{0},\ldots,\hat{x}_{N_{0}}^{j}-x_{N_{0}}^{0}]\) and \(\hat{p}^{j}=[\hat{p}_{1}^{j},\ldots,\hat{p}_{N_{0}}^{j}]\). The last consequence is given in the next result.
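Chaining the two sketches above yields the embedding (40); only the pair \((u^{j},\hat{p}^{j})\) needs to be stored for each measure (helper names are ours):

```python
def lopt_embed(x0, p0, xj, pj, lam):
    """LOPT embedding of mu^j against the reference mu^0, as in (40)."""
    gamma, _ = opt_plan(x0, p0, xj, pj, lam)
    p_hat, x_hat = opt_barycentric_projection(gamma, x0, xj)
    return x_hat - x0, p_hat             # (u^j, p_hat^j)
```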
**Proposition 3.7**.: _If \(\mu^{0},\mu^{i},\mu^{j}\) are discrete and satisfy the conditions of Definition 3.3, then_
\[LOPT_{\mu^{0},\lambda}(\mu^{i},\mu^{j})=LOPT_{\mu^{0},\lambda}(\hat{\mu}^{i}, \hat{\mu}^{j})+\lambda C_{i,j} \tag{41}\]
_where \(C_{i,j}=|\mu^{i}|-|\hat{\mu}^{i}|+|\mu^{j}|-|\hat{\mu}^{j}|\)._
As a byproduct, we can define the **LOPT discrepancy** for **any** pair of discrete measures \(\mu^{i},\mu^{j}\) as the right-hand side of (41). In practice, unless the goal is to approximate \(OPT_{\lambda}(\mu^{i},\mu^{j})\), we set \(C_{i,j}=0\) in (41). That is,
\[LOPT_{\mu^{0},\lambda}(\mu^{i},\mu^{j}):=LOPT_{\mu^{0},\lambda}(\hat{\mu}^{i}, \hat{\mu}^{j}). \tag{42}\]
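Definition 3.3 is stated earlier in the paper and is not reproduced here, so the sketch below only illustrates one plausible reading of (42): the shared mass \(\hat{p}^{i}\wedge\hat{p}^{j}\) pays the squared difference of displacements, and the unmatched mass pays \(\lambda\) per unit. This pairing mirrors Definition 3.8 below, but the exact weighting should be checked against (34)-(35) before relying on it.

```python
def lopt_discrepancy(ui, pi_hat, uj, pj_hat, lam):
    """Hedged sketch of (42) on two embedded measures (u, p_hat).

    ASSUMPTION: Definition 3.3 charges ||u^i_n - u^j_n||^2 on the common
    mass p_hat^i ∧ p_hat^j and lam per unit of unmatched mass.
    """
    p_min = np.minimum(pi_hat, pj_hat)
    transported = (((ui - uj) ** 2).sum(-1) * p_min).sum()
    unmatched = pi_hat.sum() + pj_hat.sum() - 2.0 * p_min.sum()
    return transported + lam * unmatched
```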
### OPT and LOPT Interpolation
Inspired by OT and LOT geodesics as defined in section 2.5, but lacking the Riemannian structure provided by the OT squared norm, we propose an OPT interpolation curve and its LOPT approximation.
For the OPT interpolation between two measures \(\mu^{i}\), \(\mu^{j}\) for which there exists \(\gamma\in\Gamma_{\leq}^{*}(\mu^{i},\mu^{j})\) of the form \(\gamma=(\mathrm{id}\times T)_{\#}\gamma_{0}\), a natural candidate is the solution \(\rho_{t}\) of the dynamic formulation of \(OPT_{\lambda}(\mu^{i},\mu^{j})\). The exact expression is given by Proposition 3.1. When working with general discrete measures \(\mu^{i}\), \(\mu^{j}\) (as in (15), with '\(i\)' in place of \(0\)), such a \(\gamma\) is not guaranteed to exist. In that case, we replace \(\mu^{j}\) with its OPT barycentric projection with respect to \(\mu^{i}\). By Theorem 3.5, the map \(T:x_{n}^{i}\mapsto\hat{x}_{n}^{j}\) solves \(OPT_{\lambda}(\mu^{i},\hat{\mu}^{j})\) and the **OPT interpolating curve** is13
Footnote 13: \(\hat{p}_{n}^{j}\) are the coefficients of \(\hat{\mu}^{j}\) with respect to \(\mu^{i}\) analogous to (36).
\[t\mapsto\sum_{n=1}^{N_{i}}\hat{p}_{n}^{j}\delta_{(1-t)x_{n}^{i}+tT(x_{n}^{i})} +(1-t)\sum_{n=1}^{N_{i}}(p_{n}^{i}-\hat{p}_{n}^{j})\delta_{x_{n}^{i}}.\]
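Once \(\hat{\mu}^{j}\) and the map \(T\) are available, evaluating this curve at a given \(t\) is straightforward; a small sketch (helper names are ours):

```python
def opt_interpolation(t, xi, pi, x_hat_j, p_hat_j):
    """Evaluate the OPT interpolating curve above at time t in [0, 1].

    Mass p_hat^j travels along x_n^i -> T(x_n^i) = x_hat_n^j, while the
    untransported mass (p^i - p_hat^j) at x_n^i is destroyed linearly in t.
    """
    locations = np.vstack([(1 - t) * xi + t * x_hat_j, xi])
    weights = np.concatenate([p_hat_j, (1 - t) * (pi - p_hat_j)])
    return locations, weights
```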
When working with a multitude of measures, it is convenient to fix a reference \(\mu^{0}\) and embed the measures in \(\mathbb{R}^{(d+1)\times N_{0}}\) using LOPT, so that computations take place in a simpler space. Below we provide the LOPT interpolation.
**Definition 3.8**.: Given discrete measures \(\mu^{0},\mu^{i},\mu^{j}\), with \(\mu^{0}\) as the reference, let \((u^{i},\hat{p}^{i}),(u^{j},\hat{p}^{j})\) be the LOPT embeddings of \(\mu^{i},\mu^{j}\). Let \(\hat{p}^{ij}:=\hat{p}^{i}\wedge\hat{p}^{j}\), and \(u_{t}:=(1-t)u^{i}+tu^{j}\). We define the **LOPT interpolating curve** between \(\mu^{i}\) and \(\mu^{j}\) by
\[t\mapsto\sum_{k\in D_{T}}\hat{p}_{k}^{ij}\delta_{x_{k}^{0}+u_{t}(k)}+(1-t)\sum _{k\in D_{D}}(\hat{p}_{k}^{i}-\hat{p}_{k}^{ij})\delta_{x_{k}^{0}+u_{k}^{i}}+t \sum_{k\in D_{C}}(\hat{p}_{k}^{j}-\hat{p}_{k}^{ij})\delta_{x_{k}^{0}+u_{k}^{j}}\]
where \(D_{T}=\{k:\hat{p}_{k}^{ij}>0\}\), \(D_{D}=\{k:\hat{p}_{k}^{i}>\hat{p}_{k}^{ij}\}\), and \(D_{C}=\{k:\hat{p}_{k}^{ij}<\hat{p}_{k}^{j}\}\) are, respectively, the sets where we transport, destroy, and create mass.
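A direct transcription of this definition (our helper names; `x0` holds the reference support \(\{x_{k}^{0}\}\)):

```python
def lopt_interpolation(t, x0, ui, pi_hat, uj, pj_hat):
    """LOPT interpolating curve of Definition 3.8 at time t in [0, 1]."""
    p_min = np.minimum(pi_hat, pj_hat)          # p_hat^{ij} = p_hat^i ∧ p_hat^j
    u_t = (1 - t) * ui + t * uj
    DT, DD, DC = p_min > 0, pi_hat > p_min, pj_hat > p_min
    locations = np.vstack([x0[DT] + u_t[DT],    # transported mass
                           x0[DD] + ui[DD],     # destroyed mass
                           x0[DC] + uj[DC]])    # created mass
    weights = np.concatenate([p_min[DT],
                              (1 - t) * (pi_hat - p_min)[DD],
                              t * (pj_hat - p_min)[DC]])
    return locations, weights
```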
## 4 Applications
**Approximation of OPT Distance:** Similar to LOT (Wang et al., 2013) and Linear Hellinger Kantorovich (LHK) (Cai et al., 2022), we test how well LOPT approximates OPT. Given \(K\) empirical measures \(\{\mu^{i}\}_{i=1}^{K}\), for each pair \((\mu^{i},\mu^{j})\) we compute \(OPT_{\lambda}(\mu^{i},\mu^{j})\) and \(LOPT_{\mu^{0},\lambda}(\mu^{i},\mu^{j})\), and report the mean or median over all pairs of the relative error defined as
\[\frac{|OPT_{\lambda}(\mu^{i},\mu^{j})-LOPT_{\mu^{0},\lambda}(\mu^{i},\mu^{j} )|}{OPT_{\lambda}(\mu^{i},\mu^{j})}.\]
Similar to LOT and LHK, the choice of \(\mu^{0}\) is critical for an accurate approximation of OPT. If \(\mu^{0}\) is far away from \(\{\mu^{i}\}_{i=1}^{K}\), the linearization is a poor approximation because the mass in \(\mu^{i}\) and \(\mu^{0}\) would only be destroyed or created. In practice, one candidate for \(\mu^{0}\) is the barycenter of the set of measures \(\{\mu^{i}\}\). The OPT problem can be converted into an OT problem (Caffarelli and McCann, 2010), so one can use the OT barycenter (Cuturi and Doucet, 2014) to find \(\mu^{0}\).
For our experiments, we created \(K\) point sets of size \(N=500\) for \(K\) different Gaussian distributions in \(\mathbb{R}^{2}\). In particular, \(\mu^{i}\sim\mathcal{N}(m^{i},I)\), where \(m^{i}\) is randomly selected such that \(\|m^{i}\|=\sqrt{3}\) for \(i=1,...,K\). For the reference, we picked an \(N\) point representation of \(\mu^{0}\sim\mathcal{N}(\overline{m},I)\) with \(\overline{m}=\sum m^{i}/K\). We repeated each experiment \(10\) times. To exhibit the effect of the parameter \(\lambda\) in the approximation, the relative errors are shown in Figure 2. For the histogram of the relative errors for each value of \(\lambda\) and each number of measures \(K\), we refer to Figure 6 in the Appendix H. For large \(\lambda\), most mass is transported and \(OT(\mu^{i},\mu^{j})\approx OPT_{\lambda}(\mu^{i},\mu^{j})\), the performance of LOPT is close to that of LOT, and the relative error is small.
In Figure 3 we report wall clock times of OPT vs LOPT for \(\lambda=5\). We use linear programming (Karmarkar, 1984) to solve each OPT problem at a cost of \(\mathcal{O}(N^{3}\text{log}(N))\) per problem. Thus, computing the OPT distance pairwise for \(\{\mu^{i}\}_{i=1}^{K}\) requires
\(\mathcal{O}(K^{2}N^{3}\text{log}(N))\). In contrast, to compute \(LOPT\), we only need to solve \(K\) optimal partial transport problems for the embeddings (see (33) or (40)). Computing LOPT discrepancies after the embeddings is linear. Thus, the total computational cost is \(\mathcal{O}(KN^{3}\text{log}(N)+K^{2}N)\). The experiment was conducted on a Linux computer with AMD EPYC 7702P CPU with 64 cores and 256GB DDR4 RAM.
**Point Cloud Interpolation:** We test OT geodesic, LOT geodesic, OPT interpolation, and LOPT interpolation on the point cloud MNIST dataset. We compute different transport curves between point sets of the digits 0 and 9. Each digit is a weighted point set \(\{x_{n}^{j},p_{n}^{j}\}_{n=1}^{N_{j}}\), \(j=1,2\), that we consider as a discrete measure of the form \(\mu^{j}=\sum_{n=1}^{N_{j}}p_{n}^{j}\delta_{x_{n}^{j}}+1/N_{j}\sum_{m=1}^{\eta N _{j}}\delta_{y_{m}^{j}}\), where the first sum corresponds to the clean data normalized to have total mass 1, and the second sum is constructed with samples from a uniform distribution acting as noise with total mass \(\eta\). For OPT and LOPT, we use the distributions \(\mu^{j}\) without re-normalization, while for OT and LOT, we re-normalize them. The reference in LOT and LOPT is taken as the OT barycenter of a sample of the digits 0, 1, and 9 not including the ones used for interpolation, and normalized to have unit total mass. We test for \(\eta=0,0.5,0.75\) (see Figure 8 in the Appendix H). The results for \(\eta=0.5\) are shown in Figure 4. We can see that OT and LOT do not eliminate noise points. OPT still retains much of the noise because interpolation is essentially between \(\mu^{1}\) and \(\hat{\mu}^{2}\) (with respect to \(\mu^{1}\)). So \(\mu^{1}\) acts as a reference that still has a lot of noise. In LOPT, by selecting the same reference as LOT we see that the noise significantly decreases.
**PCA analysis:** We compare the results of performing PCA on the embedding space of \(LOT\) and \(LOPT\) for point cloud MNIST. We take 900 digits from the dataset corresponding to digits \(0,1\) and \(3\) in equal proportions. Each element is a point set \(\{x_{n}^{j}\}_{n=1}^{N_{j}}\) that we consider as a discrete measure with added noise. The reference, \(\mu^{0}\), is set to the OT barycenter of 30
Figure 3: Wall-clock time between OPT and LOPT. The LP solver in PythonOT (Flamary et al., 2021) is applied to each individual OPT problem, with \(100N\) maximum number of iterations.
Figure 2: Graphs of the mean and median relative errors between \(OPT_{\lambda}\) and \(LOPT_{\lambda,\mu_{0}}\) as a function of the parameter \(\lambda\).
samples from the clean data. For LOT we re-normalize each \(\mu^{j}\) to have a total mass of 1, while we do not re-normalize for LOPT. Let \(S_{\eta}:=\{\mu^{j}:\text{noise level}=\eta\}_{j=1}^{900}\). We embed \(S_{\eta}\) using LOT, LHK and LOPT and apply PCA on the embedded vectors \(\{w^{j}\}\). In Figure 5 we show the first two principal components of the set of embedded vectors based on LOT, LHK and LOPT for noise levels \(\eta=0,0.75\). It can be seen that when there is no noise, the PCA dimension reduction technique works well for all three embedding methods. When \(\eta=0.75\), the method fails for the LOT embedding, but the dimension-reduced data is still separable for LOPT and LHK. As for running time, LOT and LOPT require 60-80 seconds, while LHK requires about 300-350 seconds. The experiments are conducted on a Linux computer with AMD EPYC 7702P CPU with 64 cores and 256GB DDR4 RAM.
We refer the reader to Appendix H for further details and analysis.
## 5 Summary
We proposed a Linear Optimal Partial Transport (LOPT) technique that allows us to embed distributions with different masses into a fixed dimensional space in which several calculations are significantly simplified. We show how to implement this for real data distributions, reducing the computational cost in applications that would benefit from the use of optimal (partial) transport. We finally provide comparisons with previous techniques and show some concrete applications. In particular, we show that LOPT is more robust to noise or more computationally efficient than previous methods. For future work, we will continue to investigate the comparison of LHK and LOPT, and the potential applications of LOPT in other machine learning and data science tasks, such as barycenter problems, graph embedding, task similarity measurement in transfer learning, and so on.
Figure 4: We demonstrate the OT geodesic, OPT interpolation, LOT geodesic and LOPT interpolation in MNIST dataset. In LOT geodesic and LOPT interpolation, we use the same reference measure. The percentage of noise \(\eta\) is set to \(0.5\). In OPT and LOPT interpolation, we set \(\lambda=20\). |
2304.01973 | ERM++: An Improved Baseline for Domain Generalization | Domain Generalization (DG) measures a classifier's ability to generalize to
new distributions of data it was not trained on. Recent work has shown that a
hyperparameter-tuned Empirical Risk Minimization (ERM) training procedure, that
is simply minimizing the empirical risk on the source domains, can outperform
most existing DG methods. ERM has achieved such strong results while only
tuning hyper-parameters such as learning rate, weight decay, batch size, and
dropout. However, there are additional hyperparameters which further limit
overfitting and catastrophic forgetting. We therefore focus on tuning
previously untuned hyper-parameters, including training amount, initialization,
and additional regularizers. We call the resulting stronger baseline ERM++.
ERM++ improves the performance of DG by over 5% compared to prior ERM baselines
on a standard benchmark of 5 datasets with a ResNet-50 and over 15% with a
ViT-B/16, and outperforms all SOTA methods on DomainBed with both
architectures. We also explore the relationship between DG performance and
similarity to pre-training data, and find that similarity to pre-training data
distributions is an important driver of performance, but that ERM++ with
stronger initializations can deliver strong performance even on dissimilar
datasets. Code is released at https://github.com/piotr-teterwak/erm_plusplus. | Piotr Teterwak, Kuniaki Saito, Theodoros Tsiligkaridis, Kate Saenko, Bryan A. Plummer | 2023-04-04T17:31:15Z | http://arxiv.org/abs/2304.01973v3 | # ERM++: An Improved Baseline for Domain Generalization
###### Abstract
Multi-source Domain Generalization (DG) measures a classifier's ability to generalize to new distributions of data it was not trained on, given several training domains. While several multi-source DG methods have been proposed, they incur additional complexity during training by using domain labels. Recent work has shown that a well-tuned Empirical Risk Minimization (ERM) training procedure, that is simply minimizing the empirical risk on the source domains, can outperform most existing DG methods. We identify several key candidate techniques to further improve ERM performance, such as better utilization of training data, model parameter selection, and weight-space regularization. We call the resulting method ERM++, and show it significantly improves the performance of DG on five multi-source datasets by over 5% compared to standard ERM, and beats state-of-the-art despite being less computationally expensive. Additionally, we demonstrate the efficacy of ERM++ on the WILDS-FMOW dataset, a challenging DG benchmark. We hope that ERM++ becomes a strong baseline for future DG research. Code is released at [https://github.com/piotr-teterwak/erm_plusplus](https://github.com/piotr-teterwak/erm_plusplus).
## 1 Introduction
Domain Generalization (DG) is a crucial problem in the field of machine learning, as it addresses the challenge of building models that perform well on unseen (target) data distributions, without using target data to update the model [7, 39, 64]. This is important in many real-world applications, where the distribution of data may vary between settings, and it is not always feasible to collect and label a large amount of data for each new domain. Similarly, it is not always known a-priori how the distribution on which the model is deployed differs from the training distribution. In multi-source domain generalization, each training sample is labelled as being part of one of several domains. Many advanced methods leverage domain membership explicitly. For example, DANN [18] uses an adversarial loss to match feature distributions across source domains. Adaptive Risk Minimization [62] meta-learns parameters which adapt a model to newly seen distribution shift. Yet, recently DomainBed [20] holistically evaluated methods and found that ERM (Empirical Risk Minimization) outperforms most prior work for DG in a setting where hyper-parameters are tuned. This is all the more impressive since ERM only leverages domain labels in a very weak way: by oversampling minority domains to balance domain sizes in the training data. Advanced techniques do not beat ERM [20] despite strong inductive biases and additional complexities (and hyper-parameters to tune).
In this paper, our goal is to revisit the framework used to benchmark multisource domain generalization problems to ensure that we maximize the performance of baseline methods. As illustrated in Figure 1, our new baseline, ERM++, is able to outperform the state-of-the-art without the need for domain labels, architecture changes or complex training strategies. Instead, we critically evaluate the components of the training pipeline along three major themes. First, we
Figure 1: **ERM++:** We tackle the task of Multi-Source Domain Generalization, where a model is trained on several source domains and evaluated on a different target domain. We do this by improving the classic, and already strong, ERM [20] algorithm with known methodologies. We verify our method on a diverse set of domain shifts, and show that it improves over the best reported numbers in the literature.
explore how the training data is being used, including training length and checkpoint selection. Second, we consider how we initialize network parameters such as the selection of pretraining network and whether or not to fine-tune or freeze layers. Third, we investigate weight-space regularization methods that are often used to help avoid overfitting to the training data.
Revisiting and improving baselines to address shortcomings or incorporating new training techniques can provide new insights into the state of the research on the topic. For example, SimSiam [12] showed that a simple siamese network can perform competitively on self-supervised learning by incorporating a stop-gradient function. Beyer _et al_. [5] show that a few simple techniques, such as longer training or increased augmentation strength, outperform all prior work in knowledge distillation. Wightman _et al_. [55] show that techniques such as longer training can substantially improve ImageNet [14] performance. These works helped provide new insights into their respective tasks, as we aim to do in our work for domain generalization.
Through a careful evaluation of the training framework used to compare DG methods across six diverse domains, we are able to make several interesting observations. For example, we find that improved performance on ImageNet [14] does not necessitate a gain in generalization ability. We also find that many of the hyperparameters such as training time used by many methods (such as DomainBed [20]) result in evaluating models before they have converged. To address this, we utilized an adaptive training procedure that would automatically determine the sufficient training length for a model to obtain the best performance. Compared to the state-of-the-art DG methods such as MIRO [10] and DIWA [42], our approach is able to obtain a 1% gain across five datasets used by DomainBed [20], while also reducing the required training compute by 50% compared with MIRO and 95% compared to DIWA due to reduced need for hyperparameter tuning. Critically, although we also show that using the techniques we identified boosts MIRO and DIWA's performance, the improved DIWA is unable to outperform ERM++. This helps highlight the need for our work.
## 2 Related Works
In this work, we focus on improving ERM [20] for DG, however here we review existing methods for multi-source DG for completeness.
**Domain-invariant feature learning:** In multi-source domain generalization, it is common to leverage the domain labels to learn domain-invariant features. CORAL [46] aligns second-order statistics of different domains. DANN [18] uses an adversarial loss to match feature distributions across source domains. However, these approaches need to acquire a domain label for each sample, which is sometimes expensive to annotate. Furthermore, using domain knowledge to learn domain-invariant features can cause the model to ignore signals which can be important for new domains, as evidenced by the strong performance of ERM [20]. In fact, Vedantam et al. [50] find low correlation between low source-target discrepancy and good DG performance.
**Domain-Aware Data Augmentation:** Data augmentation is a common tool to diversify training data and expand the training domain [64, 22, 63, 60]. For example, Inter-domain mixup [60] blends the images of different domains, and augmentation with style transfer can further diversify training images [63], though it is expensive. Instead of relying on data augmentation techniques during training on sources, we propose to employ all training samples from the source, including validation data, which expands knowledge about the task. We also propose to use backbones pretrained with strong domain-agnostic augmentation such as Augmix [22], which mixes different synthetic augmentations in pixel space.
**Ensembling:** Deep ensembles are effective for domain generalization [3, 17]. However, they are computationally inefficient, needing to run inference through many models. It has been recently shown that averaging model weights can approximate an ensemble. This can either be from multiple fine-tuning runs [56, 42] or from different points within a single training trajectory [57, 9, 24]. We choose to leverage the ensembles from a single training trajectory for its effectiveness, and do not find further improvement from averaging from multiple trajectories.
**Preventing Catastrophic Forgetting:** Several recent approaches aim to leverage generalizable features from a model pre-trained on large-scale data. Adapting such
a model to the downstream task without forgetting its generalizable representations is the key to achieving generalization. Wortsman et al. [57] interpolate between the pre-trained and adapted model. Kumar et al. [30, 61] mitigate feature distortion by pre-training a linear probe first before fine-tuning the backbone, warmstarting the fine-tuning with a good initialization. MIRO [10] maximizes the mutual information in feature space between the fine-tuned and pre-trained networks. Our approach utilizes warmstart and confirms its effectiveness in diverse settings.
Our approach does not use explicit domain labels, as in the domain invariant methods presented above. Instead, we start with ERM [20], aka just training on the source, and build on top of it with general methods.
## 3 Revisiting training procedures to create ERM++ for Domain Generalization
We study the problem of Multi-Source Domain Generalization for classification. We train a model on training data consisting of multiple domains and evaluate it on data from unseen domains. More formally, let us consider training domains \(d\in\{d_{1},...,d_{n}\}\). A training dataset is constructed using all (sample, label) pairs in all training domains \(D=\{(X^{d_{1}},Y^{d_{1}})...(X^{d_{n}},Y^{d_{n}})\}\). After training classifier \(f\) on \(D\), it is tested on a held-out testing domain \(d_{test}\). As stated in previous sections, approaches utilizing invariance of the domain or regularization of features can complicate the training. Instead we perform simple empirical risk minimization (ERM), formalized as minimizing the average loss over all samples \(\frac{1}{|D|}\sum_{(x_{i},y_{i})\in D}\ell(f(x_{i}),y_{i})\), which has been shown to be successful on diverse tasks [42].
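As a minimal sketch of what this objective amounts to in practice, the following PyTorch-style step (names are ours, not from the released code) pools domain-balanced minibatches and takes one gradient step on the pooled cross-entropy; domain labels play no other role:

```python
import torch
import torch.nn.functional as F

def erm_step(model, optimizer, domain_batches):
    """One ERM update over a list of (x, y) minibatches, one per source domain."""
    x = torch.cat([xb for xb, _ in domain_batches])
    y = torch.cat([yb for _, yb in domain_batches])
    loss = F.cross_entropy(model(x), y)   # average loss over the pooled batch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```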
Our goal is to investigate the general training components that go into creating an ERM model to help ensure we have maximized its performance. These components include how to effectively use the source data (Section 3.1), considerations when selecting and using pretrained weights (Section 3.2), and weight-space regularization methods that help prevent overfitting to the source domains (Section 3.3). We refer to our new stronger baseline as ERM++, and we summarize the procedure we found to be most effective in Algorithm 1. As our experiments will show, our training procedures can also be used to improve the performance of any DG method.
### Improved Data Utilization
A key component of training any neural network is utilizing the (often limited) training data effectively. A common practice in the domain generalization literature is to split source datasets into (often 80%/20%) train/validation sets under a fixed number of iterations for each dataset (_e.g._, [20, 9, 42, 3]). The validation data is used to set hyperparameters and perform checkpoint (no. training steps) selection. This approach has two major drawbacks. First, by creating a separate validation set we are sacrificing a significant portion of our labeled data. Second, by training under a fixed (relatively small) number of iterations we ignore the varying convergence rates of different models, which may result in a model underperforming its true ability.
Inspired by the training procedures in metric learning literature (_e.g._, [37, 38, 47, 48, 52]), we reformulate the training pipeline to take into account method convergence rates and to utilize the entire dataset when training the final model for deployment. Specifically, we explore a two-stage training procedure, where in the first stage we use the same train/validation splits as in prior work, and in the second stage we train our model for deployment using the entire (train+validation) dataset.
To accomplish this, when setting hyperparameters in the first stage, we include a new parameter \(\phi\) that sets the training length (_Early Stopping_). Once we have set all the hyperparameters (including \(\phi\)), we train our deployment model using the full dataset as noted earlier, selecting the final checkpoint as our model. More concretely, we continue training until we no longer observe significant performance gains, which we refer to as Long Training (_LT_). Note that this need only be performed once per model. This uses training labels more efficiently by training on the Full-Dataset (_FD_).
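The resulting two-stage schedule can be sketched as follows. This is a simplified illustration, not the released implementation; the 300-step validation interval matches the experimental setup described in Section 4, and the helpers are minimal stand-ins for a standard training loop.

```python
import itertools
import torch
import torch.nn.functional as F

def run_steps(model, opt, batches, n_steps):
    """Minimal ERM inner loop used by the sketch below."""
    for _ in range(n_steps):
        x, y = next(batches)
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def accuracy(model, loader):
    """Source-validation accuracy (minimal helper for the sketch)."""
    hits = total = 0
    for x, y in loader:
        hits += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return hits / total

def two_stage_erm(build_model, train_loader, val_loader, full_loader,
                  max_steps=20000, eval_every=300, lr=5e-5):
    """Stage 1: choose the training length phi on an 80/20 split (ES).
    Stage 2: retrain on the full source data (FD) for phi steps,
    keeping the final checkpoint."""
    model = build_model()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    batches = itertools.cycle(train_loader)
    phi, best = max_steps, 0.0
    for step in range(eval_every, max_steps + 1, eval_every):
        run_steps(model, opt, batches, eval_every)
        acc = accuracy(model, val_loader)
        if acc > best:
            best, phi = acc, step
    final = build_model()
    opt = torch.optim.Adam(final.parameters(), lr=lr)
    run_steps(final, opt, itertools.cycle(full_loader), phi)
    return final
```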
### Leveraging Pretrained Model Weights
Most domain generalization methods do not train a model from scratch, but rather transfer the weights of an existing model, typically pretrained on ImageNet [14]. There are three main decisions that we explore further: selecting what model weights to transfer (Section 3.2.1), determining what weights to fine-tune or keep frozen (Section 3.2.2), and how to initialize any new weights (_e.g._, to recognize the categories of your dataset) in your network (Section 3.2.3).
#### 3.2.1 Model Weight Selection
Recent work has shown that better ImageNet models have better domain generalization properties for both single-source and multi-source DG [26, 1]. However, this has been explored in the context of varying model size. Therefore, performance gains can be either from a.) improved pre-training dataset (upstream) performance resulting in improved DG or b.) larger models resulting in improved DG performance, regardless of upstream performance. These also disregard the needs of some applications, such as computational requirements (larger models necessitate more resources) or restrictions on architectures due to a shared encoder for a multitask problem. Thus, we explore the effect of different initializations for the same model architecture, specifically a ResNet-50 [21]. We describe them in more detail below:
* **TorchVision Model Weights:** This is the standard ImageNet pretrained initialization present in TorchVision. It was trained with weak augmentations for 90 epochs.
* **AugMix trained network**: AugMix [22] is a method used to improve model consistency using augmentations, without training the model on data which is too different from the test data. AugMix takes two augmented views of an image and mixes them in pixel space. The model is then trained to produce consistent output between the two AugMix augmentations and the clean image. Furthermore, split BatchNorm is used as introduced in [58], i.e., separate BatchNorm parameters are learned for clean and augmented images. The model is trained for 200 epochs.
* **ResNet A1:** ResNet A1 initializes weights from the training recipe presented in [55]. The model is heavily tuned to find training settings which result in very strong ImageNet performance. Examples include training for 600 epochs, the LAMB optimizer, strong augmentations, and a binary cross-entropy loss.
* **Meal V2** : MealV2 [45] is a highly performant ensemble distilled into a ResNet-50. In particular, a SeNet-154 [23] (81.23% ImageNet Top-1) and a ResNet-152 (81.02% ImageNet Top-1) are distilled into a ResNet-50.
Each of these models has different ImageNet validation accuracies, ranging from 76.13% (TorchVision weights) to 80.7% (Meal-V2 [45]). However, as our experiments will show, simply swapping out the standard initialization for the strongest ImageNet model does not result in the best performance. We empirically find the strongest of these, Augmix [22], and refer to it as _Strong init_.
#### 3.2.2 Finetuning or Freezing Model Weights
It has been shown that which parameters to update during fine-tuning a pre-trained model, and when, can have substantial effects on downstream performance. Surgical finetuning [31] shows that only updating some blocks results in improved performance, but that different datasets require the unfreezing of different blocks, making it unsuitable for a general DG training procedure (as is our goal). Most domain generalization methods fine-tune most layer weights, with the exception of BatchNorm parameters, which are sometimes kept frozen. In our experiments we compare the effect that freezing or finetuning the BatchNorm parameters has on performance, and refer to unfreezing them as _UBN_. The remaining layer weights we finetune as in prior work.
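In PyTorch terms, the frozen/unfrozen BatchNorm distinction amounts to a toggle like the sketch below (our helper, not from the released code):

```python
import torch.nn as nn

def set_batchnorm_trainable(model, trainable):
    """Frozen BN keeps running statistics and affine parameters fixed;
    unfrozen BN (the ERM++ default, UBN) lets both update while fine-tuning."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.train(trainable)              # controls updating of running stats
            for p in m.parameters():        # affine scale and shift
                p.requires_grad_(trainable)
```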
#### 3.2.3 Initializing New Layer Weights
Initializing new layer weights is typically accomplished by giving the new layer weights a random initialization and then training them on the target datasets. However, a recurring observation made by many researchers over the years is that your model may suffer from catastrophic forgetting of the pre-trained features due to the noisy gradients from the newly initialized layer [19, 21, 30, 42]. To address this, researchers would begin training by Warmstart (_WS_) [25, 61] (also commonly referred to as warmup), where the new layer weights are trained with all pretrained weights kept frozen for a few epochs. After this short training cycle, new and old layer weights are finetuned together (sometimes except for BatchNorm layers, as discussed in Section 3.2.2).
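A sketch of WS for a ResNet-style model whose new head is exposed as `model.fc` (an assumption about the attribute name; the step budget and optimizer here are illustrative, not the paper's exact settings):

```python
import itertools
import torch
import torch.nn.functional as F

def warmstart(model, loader, n_steps=500, lr=5e-5):
    """WS: fit only the freshly initialized head while the pre-trained
    backbone stays frozen, then hand back a fully trainable model."""
    for p in model.parameters():
        p.requires_grad_(False)
    for p in model.fc.parameters():          # new classification layer (assumed name)
        p.requires_grad_(True)
    opt = torch.optim.Adam(model.fc.parameters(), lr=lr)
    for x, y in itertools.islice(itertools.cycle(loader), n_steps):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    for p in model.parameters():             # unfreeze for the main phase
        p.requires_grad_(True)
```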
### Weight-Space Regularization
Averaging model parameter iterates has a long history within machine learning [3, 9, 24, 43, 56, 57, 42, 35], and improves generalization by converging to flatter minima [24]. Methods can roughly be divided into those which average within a single trajectory [3, 24, 9], and those which average between different trajectories originating from a single parent [35, 56, 42]. Because the different model parameters averaged can be interpreted as independent members of an ensemble [56], most prior work takes care to ensure the averaged models are sufficiently diverse and each member has strong performance. This is accomplished by cyclic learning rates [24] or searching ranges over which to average [9]. Most recently, Arpit et al. [3] revisit a simple method for parameter averaging where simply all iterates are averaged (_MPA_). We verify that this works in combination with other techniques present in ERM++. In a departure from most of the other improvements explored (wrt using domain labels), we also experiment with training domain experts to induce model diversity (_SMPA_), but find that this does not result in improved performance over within-trajectory averaging.
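A minimal MPA sketch using `torch.optim.swa_utils.AveragedModel`, which maintains a running uniform average of the weights; the 100-step burn-in follows the description in Section 5.3, while the loop details are illustrative:

```python
import itertools
import torch
from torch.optim.swa_utils import AveragedModel

def train_with_mpa(model, loader, loss_fn, n_steps, burn_in=100, lr=5e-5):
    """MPA: uniformly average all iterates after a burn-in period."""
    avg_model = AveragedModel(model)          # running uniform average of weights
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    batches = itertools.islice(itertools.cycle(loader), n_steps)
    for step, (x, y) in enumerate(batches):
        loss = loss_fn(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
        if step >= burn_in:                   # start averaging after burn-in
            avg_model.update_parameters(model)
    return avg_model                          # evaluate with the averaged weights
```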
### ERM++ Computational Cost
ERM++ incurs less training overhead than competing methods. ERM [20], DIWA [42], and MIRO [10] all use expensive hyper-parameter searches, while we simply use reasonable defaults. For example, MIRO [10] searches over 4 \(\lambda\) regularization weight parameters to obtain SOTA results, and DIWA [42] averages 20-60 independent runs. While ERM++ does use longer training for improved performance, it exceeds SOTA even with standard training lengths (see Table 3, Experiment 8). Overall, without long training, ERM++ achieves SOTA accuracy with 50% of the training compute of MIRO and 5% of the compute of DIWA [42], while retaining the same inference overhead. Long training increases this cost by 4x, which could be reasonable to pay for improved performance.
## 4 Experimental Settings
We benchmark our methods on a diverse set of classification datasets used for evaluating multi-source DG:
**OfficeHome**[51] is a 65-way classification problem depicting everyday objects from 4 domains: art, clipart, product, and real, with a total of 15,588 samples.
**DomainNet**[41] is a 345-way object classification problem from 6 domains: clipart, infograph, painting, quickdraw, real, and sketch. With a total of 586,575 samples, it is larger than most of the other evaluated datasets in both samples and classes.
**PACS**[33] is a 7-way object classification problem from 4 domains: art, cartoon, photo, and sketch, with 9,991 samples. It helps verify our method in smaller-scale settings.
**VLCS**[16] is a 5-way classification problem from 4 domains: Caltech101, LabelMe, SUN09, and VOC2007. There are 10,729 samples. VLCS is a good test for close OOD; the member datasets are all real photos. The distribution shifts are subtle and simulate real-life scenarios well.
**TerraIncognita**[4] is a 10-way classification problem of animals in wildlife cameras, where the 4 domains are different locations. There are 24,788 samples. This represents a realistic use-case where generalization is indeed critical.
**Wilds-FMOW**[29, 13] is a 62-way land-use classification problem, with satellite images from 5 regions as different domains. There are 141,696 samples. Wilds-FMOW is a realistic problem, different from the above and not focused on objects, which helps validate the broad applicability of ERM++.
We follow the DomainBed training procedure and add additional components from ERM++. In particular, we use the default hyper-parameters from DomainBed [20], _e.g._, a batch size of 32 (per-domain), a learning rate of 5e-5, a ResNet dropout value of 0, and a weight decay of 0. Unless we specify that the "Long Training" component is added, we train models for 15000 steps on DomainNet (following SWAD[9]) and 5000 steps for other datasets, which corresponds to a variable number of epochs dependent on dataset size. If Long Training is used, we extend training by 4x. We train on all source domains except for one, validate the model on held-out data from the sources every 300 steps, and evaluate on the held-out domain.
## 5 Results
Table 1 compares ERM++ to prior work, where we outperform the state-of-the-art across five DomainBed datasets by an average of 1%. The single largest gain was on DomainNet (3% gain), with OfficeHome and PACS obtaining still substantial gains of 1.5-2%. Table 2 demonstrates our training procedure's ability to generalize, where we combine our approach with the two highest performing methods in prior work (DIWA [42] and MIRO [10]). We find that our approach is able to boost the performance of both methods by around 1%. However, one key observation is that our ERM++ model does not compose well with DIWA (_i.e._, the combination is on par with ERM++ alone). This helps demonstrate the importance of our work, as the strong training procedures used by ERM++ can affect the ranking
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline & OfficeHome & PACS & DomainNet & TerralIncognita & VLCS & Avg. \\ \hline MMD [34] & 66.3\(\pm\)\(0.1\) & 84.7\(\pm\)\(0.5\) & 23.4\(\pm\)\(9.5\) & 42.2\(\pm\)\(1.6\) & 77.5\(\pm\)\(0.9\) & 58.8 \\ Mixstyle [64] & 60.4\(\pm\)\(0.3\) & 85.2\(\pm\)\(0.3\) & 34.0\(\pm\)\(0.1\) & 44.0\(\pm\)\(0.7\) & 77.9\(\pm\)\(0.5\) & 60.3 \\ GroupDRO [44] & 66.0\(\pm\)\(0.7\) & 84.4\(\pm\)\(0.8\) & 33.3\(\pm\)\(0.2\) & 43.2\(\pm\)\(1.1\) & 76.7\(\pm\)\(0.6\) & 60.7 \\ IRM [2] & 64.3\(\pm\)\(2.2\) & 83.5\(\pm\)\(0.8\) & 33.9\(\pm\)\(2.8\) & 47.6\(\pm\)\(0.8\) & 78.5\(\pm\)\(0.5\) & 61.6 \\ CDANN [36] & 65.8\(\pm\)\(1.3\) & 82.6\(\pm\)\(0.9\) & 38.3\(\pm\)\(0.3\) & 45.8\(\pm\)\(1.6\) & 77.5\(\pm\)\(0.1\) & 62.0 \\ DANN [18] & 65.9\(\pm\)\(0.6\) & 83.6\(\pm\)\(0.4\) & 38.3\(\pm\)\(0.1\) & 46.7\(\pm\)\(0.5\) & 78.6\(\pm\)\(0.4\) & 62.6 \\ MTL [6] & 66.4\(\pm\)\(0.5\) & 84.6\(\pm\)\(0.5\) & 40.6\(\pm\)\(0.1\) & 45.6\(\pm\)\(1.2\) & 77.2\(\pm\)\(0.4\) & 62.9 \\ Mixup [59, 60, 53] & 68.1\(\pm\)\(0.3\) & 84.6\(\pm\)\(0.6\) & 39.2\(\pm\)\(0.1\) & 47.9\(\pm\)\(0.8\) & 77.4\(\pm\)\(0.6\) & 63.4 \\ MLDG [32] & 66.8\(\pm\)\(0.6\) & 84.9\(\pm\)\(1.0\) & 41.2\(\pm\)\(0.1\) & 47.7\(\pm\)\(0.9\) & 77.2\(\pm\)\(0.4\) & 63.6 \\ ERM [49] & 67.6\(\pm\)\(0.2\) & 84.2\(\pm\)\(0.1\) & 44.0\(\pm\)\(0.1\) & 47.8\(\pm\)\(0.6\) & 77.3\(\pm\)\(0.1\) & 64.2 \\ SagNet [40] & 68.1\(\pm\)\(0.1\) & 86.3\(\pm\)\(0.2\) & 40.3\(\pm\)\(0.1\) & 48.6\(\pm\)\(1.0\) & 77.8\(\pm\)\(0.5\) & 64.2 \\ SelfReg [27] & 67.9\(\pm\)\(0.7\) & 85.6\(\pm\)\(0.4\) & 42.8\(\pm\)\(0.0\) & 47.0\(\pm\)\(0.3\) & 77.8\(\pm\)\(0.9\) & 64.2 \\ CORAL [46] & 68.7\(\pm\)\(0.3\) & 86.2\(\pm\)\(0.3\) & 41.5\(\pm\)\(0.1\) & 47.6\(\pm\)\(1.0\) & 78.8\(\pm\)\(0.6\) & 64.5 \\ mDSDI [8] & 69.2\(\pm\)\(0.4\) & 86.2\(\pm\)\(0.2\) & 42.8\(\pm\)\(0.1\) & 48.1\(\pm\)\(1.4\) & 79.0\(\pm\)\(0.3\) & 65.1 \\ ERM + MIRO [10] & 70.5\(\pm\)\(0.4\) & 85.4\(\pm\)\(0.4\) & 44.3\(\pm\)\(0.2\) & 50.4\(\pm\)\(1.1\) & 79.0\(\pm\)\(0.0\) & 65.9 \\ ERM + SWAD [9] & 70.6\(\pm\)\(0.2\) & 88.1\(\pm\)\(0.1\) & 46.5\(\pm\)\(0.1\) & 50.0\(\pm\)\(0.3\) & 79.1\(\pm\)\(0.1\) & 66.9 \\ CORAL + SWAD [9, 46] & 71.3\(\pm\)\(0.1\) & 88.3\(\pm\)\(0.1\) & 46.8\(\pm\)\(0.0\) & 51.0\(\pm\)\(0.1\) & 78.9\(\pm\)\(0.1\) & 67.3 \\ DIWA [42] & 72.8 & 89.0 & 47.7 & 51.9 & 78.6 & 68.0 \\ ERM + MIRO + SWAD [9, 10] & 72.4\(\pm\)\(0.1\) & 88.4\(\pm\)\(0.1\) & 47.0\(\pm\)\(0.0\) & **52.9\(\pm\)\(0.2\)** & **79.6\(\pm\)\(0.2\)** & 68.1 \\ ERM++ (Ours) & **74.7\(\pm\)\(0.0\)** & **89.8\(\pm\)\(0.3\)** & **50.8\(\pm\)\(0.0\)** & 51.2\(\pm\)\(0.3\) & 78.0\(\pm\)\(0.1\) & **68.9** \\ \hline \end{tabular}
\end{table}
Table 1: **Comparison to recent methods:** Performance of recent methods as reported by [10]. ERM outperforms almost all prior work, especially when combined with techniques such as SWAD and MIRO. ERM++ outperforms all prior work on average. DIWA does not report confidence intervals.
of compared methods. We provide a detailed analysis of each component of our training procedures below.
### Data Utilization
**Using the full data (_FD_):** The most common ERM [20] implementation splits off 80% of the source domains for training, and keeps the remaining 20% for hyper-parameter validation and checkpoint selection. By comparing experiments 2 and 3 in Table 3, we show that training on the full data improves over checkpoint selection on a validation set on all datasets except for VLCS. Early Stopping (_ES_), described below, helps us recover VLCS performance.
**Long training (_LT_):** Prior work has shown that training to proper convergence can have large impacts on transfer learning performance [11]. To explore this setting for DG, we extended training by 4x for each dataset. In other words, DomainNet models are trained for 60K steps while the other datasets are trained for 20K steps. This training length is one where we observe source validation accuracies start to saturate for most datasets (see supplementary). We present the results in Table 3, experiment 4. We find that training for longer, on average, increases performance by 0.5%.
**Early Stopping (_ES_):** Although the training pieces presented so far improve DG performance on the datasets considered on average, one consistent pattern is that VLCS performance degrades in experiments 3 (Full-Data) and 4 (Long Training). This suggests that VLCS is a dataset which is prone to overfitting. We observe that this is true even on a validation set constructed from the source domains. Therefore, we propose an additional step where we use 20% validation splits in order to search for the proper number of training steps, and then retrain using the full data. In Table 3, Experiment 6, we see this dramatically improves performance on VLCS without affecting other datasets.
### Pretrained Model Weight Usage
**Warmstart (_WS_)**: In Table 3, we compare to training using a random initialization for the new classification layer (Experiment 4) or by using Warmstart (Experiment 5). We find WS provides a small but consistent boost on average across datasets. We find this is likely due to a decrease in overfitting to the source domains. For example, in Figure 2, we show accuracies plotted across fine-tuning steps for models with and without warm-start for several domains of TerraIncognita. Without Warmstart, performance quickly plateaus and in some cases, _e.g_., location 100 and location 43, performance even decreases. This kind of performance decrease is not benign; it is impossible to detect without access to the test data. Therefore, for reliable deployments of systems which generalize well, training procedures which do not overfit to source domains are important. We verify that WS has a regularization effect by measuring the L2 distance of the final model from initialization (the pre-trained model) and find that the trained weights were more than twice as far without using WS (58.1 with and 122.5 w/o).
**Unfreezing the Batchnorm (_UBN_):** BatchNorm is commonly frozen in current DG recipes for reasons that are not well justified. However, we find that frozen batch normalization leads to quick over-fitting in the long-training regime. In Figure 3 we can see that frozen batch normalization results in overfitting, while unfrozen batch normalization does not. As seen in Table 3, Experiment 9, freezing BN also results in lower performance. It can therefore be concluded that unfrozen BatchNorm gives an effective regularization effect by randomizing the shifting and scaling of features.
**Stronger initializations (_S. Init_):** One of the key components of the standard DG training scheme is initializing the model parameters with a pre-trained model. The effect of the strong initialization for our model is shown in Table 3, experiment 7, where we achieve a 1% boost on average. However, selecting a model takes care. Table 4 compares ResNet-50 models of varying ImageNet performance described in Section 3.2.1. We summarize our findings below:
* Stronger ImageNet performance does not necessarily correspond to better DG performance. In particular, both the ResNet-50 A1 and Meal V2 weights achieve much better ImageNet Top-1 Accuracy than the standard TorchVision weights, but achieve worse DG performance. However, the overall consistency of the AugMix weights across all 5 datasets makes it a reasonable choice.
* Model Distillation, which strongly improves source accuracy, does not increase overall DG performance. Meal-V2 is a distillation of the ensemble of two very strong ImageNet models into a ResNet-50. Interestingly, the student in Meal-V2 is initialized with the same AugMix trained network as we use in our experiments. Therefore, the differences in performance can be strictly attributed to the effects of model distillation. Looking at the results in more detail, as in Table 5, we can see that performance on ImageNet-like domains improves while performance on other domains suffers. This suggests that the distillation process effectively matches the student to the teacher over the data used in the distillation process, at the price of function smoothness away from the distillation data.
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline & OH & PA & VL & DN & TI & Avg \\ \hline MIRO + SWAD [10] & 72.4 & 88.4 & **79.6** & 47.0 & 52.9 & 68.1 \\ DIWA [42] & 72.8 & 89.0 & 78.6 & 47.7 & 52.9 & 68.0 \\ \hline ERM++ & 74.7 & **89.8** & 78.0 & **50.8** & 51.2 & 68.9 \\ DIWA [42] + ERM++ & 74.5 & 90.0 & 78.1 & 50.1 & 51.4 & 68.8 \\ MIRO + ERM++ [10] & **76.3** & 88.8 & 77.9 & 50.4 & **53.4** & **69.4** \\ \hline \end{tabular}
\end{table}
Table 2: We combine ERM++ with MIRO [10] and DIWA[42] DIWA slightly degrade performance while MIRO substantially improves performance.
* AugMix is a model trained with generalization to synthetic corruptions as a goal, and it results in very strong DG performance. Therefore, while ImageNet Top-1 accuracy is not a good indicator of DG performance, investigating the correlation between synthetic corruption performance and DG performance is promising.
### Weight Space Regularization
**Generalist Model Parameter Averaging (_MPA_):** We confirm that regularizing model parameters by averaging iterates is an important tool in improving domain generalization performance; in Table 3 (Experiments 1 and 2) we compare models trained with and without parameter averaging across timesteps. Specifically, we average the parameters of all training steps after an initial burn-in period of 100 steps. We confirm that such model parameter averaging consistently and substantially improves domain generalization.
**Specialist Model Parameter Averaging (_SMPA_):** We also explored a setting where, instead of averaging model weights from a single trajectory, we attempt to include diversity between the models being averaged, as this has been shown to boost performance [42].
Figure 3: **Unfreezing Batchnorm:** Here we show the test curves of the fine-tuning on the held-out painting domain of DomainNet. With frozen BatchNorm, the initial training is faster but it overfits.
Figure 2: **Oracle test performances** : We plot the top-1 accuracy on held-out test domains of TerraIncognita as a function of fine-tuning epochs with and without warmstart. Warmstart substantially decreases overfitting to the source domains.
\begin{table}
\begin{tabular}{c|c c c c c c c|c c c c c|c} \hline \hline \# & MPA & FD & LT & WS & ES & S. Init & UBN & OfficeHome & PACS & VLCS & DomainNet & TerraInc & Avg. \\ \hline
1 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & 67.1 & 85.1 & 76.9 & 44.1 & 45.2 & 63.7 \\
2 & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & 70.2 & 85.7 & 78.5 & 46.4 & 49.4 & 66.0 \\
3 & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✓ & 71.5 & 87.3 & 77.4 & 46.8 & 49.8 & 66.5 \\
4 & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✓ & 71.7 & 88.7 & 76.9 & 48.3 & 49.6 & 67.0 \\
5 & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & 72.6 & 88.8 & 77.0 & 48.6 & 49.3 & 67.3 \\
6 & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & 72.6 & 88.8 & **78.7** & 48.6 & 49.2 & 67.6 \\
7 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **74.7** & 89.8 & 78.0 & **50.8** & **51.2** & **68.9** \\
8 & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & 74.6 & 87.9 & 78.6 & 49.8 & 51.1 & 68.4 \\
9 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & **74.7** & **90.1** & 78.6 & 49.9 & 49.0 & 68.3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: We present the overall ablation for ERM++. (1) ERM [20] baseline with unfrozen BN. (2) MPA: Model parameter averaging, which uniformly improves results. (3) FD: training on the full data. (4) LT: Training for 4x longer, which ensures convergence improves performance by an additional half percent. (5) WS: Warm-starting the classification layer especially improves OfficeHome, but also helps minimize overfitting (Figure 2). (6) ES: Splitting off validation data to find a training length yields substantial gains. (7) S.Init: Initializing the initial parameters to those trained with AugMix brings performance to state of the art. (8) Removing LT from (7) still results in state-of-the-art performance with half of the training cost of MIRO. (9) UBN: When we freeze the BN parameters, we see that performance substantially degrades.
Following [35], we first train a generalist model on all source domains for 5 epochs, then train specialist models for 5 epochs, before averaging parameters. Results on the DomainNet dataset are reported in Table 6. Although averaging specialists improves over ERM, it does not improve over averaging model iterates of a generalist.
### Generalizing Beyond Web-scraped Datasets
We have demonstrated that ERM++ is a highly effective recipe for DG on several datasets: OfficeHome, PACS, DomainNet, and TerraIncognita. These datasets are diverse and represent a strong evaluation of ERM++. However, [15] show that on datasets not consisting of web-scraped data, the correlation between ImageNet performance and transfer performance is quite weak. To verify that this is not the case for ERM++, we perform an ablation study on WILDS-FMOW, a land-use classification dataset, and see that ERM++ substantially improves over ERM (Table 7).
## 6 Conclusion
This paper develops a strong baseline, ERM++, that can be used to improve the performance of DG models. By identifying several techniques for enhancing ERM, our approach achieves significant gains in DG performance, reporting a 1% average boost over the state-of-the-art on the challenging DomainBed evaluation datasets and demonstrating efficacy in realistic deployment scenarios on WILDS-FMOW. We find that ERM++ can also boost the performance of state-of-the-art methods, but that ERM++ alone may still outperform them. Our results highlight the importance of improving the training procedure for better DG performance and provide a strong baseline for future research. ERM++ opens up opportunities for exploring additional techniques to further improve DG performance.
\begin{table}
\begin{tabular}{l|c c c c c|c|c} \hline \hline & P & I & Q & S & R & C & Av \\ \hline ERM [20] & 51.1 & 21.2 & 13.9 & 52.0 & 63.7 & 63.0 & 44.1 \\ SMPA & 52.9 & **27.2** & 14.3 & 51.3 & 65.6 & 65.2 & 46.1 \\ MPA & **55.2** & 24.0 & **16.7** & **57.4** & **67.0** & **67.49** & **48.0** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Weight Space Regularization: We show experiments with different types of parameter averaging for weight regularization on DomainNet. **SMPA** is specialist model parameter averaging, where we average parameters of domain specialists, while **MPA** averages parameters within a single training trajectory. While both **MPA** and **SMPA** outperform ERM, **MPA** outperforms **SMPA**.
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline \hline & OfficeHome & PACS & VLCS & DomainNet & TerraIncognita & Average & ImageNet Accuracy \\ \hline TorchVision Weights & 72.2 & 85.9 & 78.5 & 46.9 & 49.7 & 66.6 & 76.1 \\ AugMix Trained Weights [22] & 74.6 & **87.9** & 78.6 & **49.8** & **51.0** & **68.4** & 79.0 \\ Meal V2 [45] & **75.5** & 86.7 & **79.1** & 49.5 & 50.9 & 68.3 & **80.7** \\ ResNet A1 [55] & 70.8 & 82.8 & 77.7 & 43.0 & 37.3 & 62.3 & 80.4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Top-1 Accuracy with different ResNet-50 initialization**: We investigate initialization weights from different pre-training procedures. The differences between different initializations are very substantial, up to about 6%. Interestingly, improved ImageNet accuracy does not strongly correlate with improved performance. In fact, the strongest initialization is from AugMix pretrained weights, with an ImageNet validation accuracy 1.7% less than the strongest model. Additionally, MealV2 is a distilled model from a very strong ensemble, where the student is initialized to AugMix weights. The distillation process doesn’t improve generalization performance overall, improving over AugMix only in domains which resemble ImageNet. This suggests that the distillation process effectively matches the student to the teacher over the data used in the distillation process, at the price of function smoothness away from the distillation data.
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline \hline & R0 & R1 & R2 & R3 & R4 & Av. \\ \hline ERM [20] & 34.9 & 47.1 & 38.6 & 43.8 & 53.8 & 43.6 \\ ERM++ & **41.5** & 50.3 & 40.5 & 50.4 & 57.7 & 48.1 \\ - Strong Init & 39.2 & 50.1 & 39.5 & 49.5 & 58.7 & 47.4 \\ - WS & 41.3 & **50.4** & **41.0** & **50.6** & **59.3** & **48.5** \\ - UBN & 39.6 & 49.1 & 38.9 & 49.1 & 58.1 & 47.0 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **WILDS-FMOW Top-1 Accuracy:** We show that ERM++ outperforms ERM on the challenging WILDS-FMOW classification dataset. We also ablate several components of ERM++. UBN (Unfrozen Batch Norm) and Strong Init (from AugMix) improve performance, while surprisingly WS (warm-start) decreases performance in this particular scenario. We emphasize that ERM++ overall improves over ERM [20].
\begin{table}
\begin{tabular}{l|c c c c c|c|c} \hline \hline & P & C & I & R & Q & S & Av \\ \hline Aug[22] & **57.3** & **68.8** & **25.6** & 70.2 & **17.1** & **59.8** & **49.8** \\ \hline MV2[45] & **57.3** & 68.5 & 25.4 & **70.9** & 16.1 & 59.0 & 49.5 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Model distillation's effect on domain generalization: We look at the per-domain accuracy on DomainNet, comparing AugMix training (Aug) and MealV2 (MV2). MealV2 is a method used to distill a large ensemble into a student ResNet-50, where the student is initialized to AugMix weights. The held-out domains considered are (P)ainting, (C)lipart, (I)nfograph, (R)eal, (Q)uickdraw, and (S)ketch. We can see that the distillation process, while dramatically improving ImageNet performance, only slightly changes generalization performance. In particular, generalization gets slightly worse for all domains except for (R)eal, which is the most similar to ImageNet. This is surprising, since it has been shown that both ensembles [3] and larger models [1] improve domain generalization performance. The distillation process seems to match the teacher function poorly on OOD data.**
## Acknowledgment
DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering.
## Appendix A Additional Results
### Per-dataset details
In Tables 8 (OfficeHome), 9 (DomainNet), 10 (VLCS), 11 (TerraIncognita), 12 (PACS), we expand results for the datasets and report accuracies for each held-out domain. We compare ERM++ with reported performances of ERM [20], DIWA [42], SWAD, [9], and MIRO [10]. ERM + SWAD + MIRO and DIWA are the current SOTA for ResNet-50 models for this set of datasets. Overall trends include ERM++ being especially effective at sketch-like domains, indicating a lowered texture bias. On the sketch and clipart domains in DomainNet, ERM++ outperforms prior best performance by over 4%. When we additionally combine MIRO with ERM++, we see much improved performance on OfficeHome and TerraIncognita without much affecting the performance on the other datasets.
### Standard Errors for ERM++ ablation
In Table 13, we reproduce Table 3 from the main paper which shows the impact of the pieces of ERM++, but add standard errors, in addition to the mean of 3 trials. We can see that not only does each component of ERM++ contribute to the improvement of the mean performance but also the final ERM++ (Experiment 7) has the smallest standard errors on three out of five datasets.
### Validation-Set Accuracy Curves
In Figures 11, 12, 13, 14, and 15, we provide source-validation accuracies for each of the 5 datasets, for the number of steps corresponding to _long training_, which is 20000 steps for most datasets and 60000 steps for the larger DomainNet. As one can see, at this point, validation accuracy is saturated for most domains in most datasets, so this training length is reasonable. Prior training lengths are denoted as red vertical lines in these figures, and one can see that for many datasets this is not a sufficient training length. As we describe in Section 5.1 of the main paper, this improves performance by 0.5% on average.
## Appendix B Dataset Visualizations
In Figures 4 (OfficeHome), 5 (DomainNet), 6 (VLCS), 7 (TerraIncognita), 8 (PACS), 9 (FMoW) we show samples of a few classes from each of the datasets, and each domain. As one can see, both the datasets and distribution shifts are quite diverse, highlighting the flexibility of our method. We present some key attributes of the datasets below.
**OfficeHome [51]** Figure 4. This dataset focuses on household objects. The domain shifts are in low-level style mostly, and there is little spatial bias.
**DomainNet [41]** Figure 5. While the real domain is quite similar to what one might expect in ImageNet, the distribution shifts are quite substantial in other domains. Quickdraw and Infograph are particularly challenging, so the 1-3% gains of ERM++ on these domains is meaningful (Table 9).
**VLCS [16]:** Figure 6. Low-level statistics are quite similar between domains in this dataset; however, spatial biases differ between domains. For example, Caltech objects are quite centered, while other domains do not have this trait. For example, the LabelMe domain has cars along the side of the image, and there are many chairs in the VOC2007 domain. Furthermore, in some cases the size of the objects differs dramatically. Lastly, there are many ambiguous images in the LabelMe domain (see Figure 10), raising questions about the validity of trying to improve performance on this dataset.
**TerraIncognita [4]:** Figure 7. The background stays consistent, and the animal frequently takes up a small portion of the frame. At night the images are black-and-white. This is a very realistic dataset on which to test.
**PACS [33]** Figure 8. The subjects tend to be centered, and the sketches are more realistic than the quickdraw setting in DomainNet. Though the domains are similar to those of DomainNet, PACS has fewer than 10,000 samples compared to the 586,000 of DomainNet. Therefore PACS tests the capabilities of ERM++ on smaller data.
**FMoW**: Figure 9. The images differ in region but also in resolution and scale. The distribution shift between FMoW and the pretraining data is large; therefore, FMoW tests the ability of ERM++ to perform on non-web-scraped data (see Section 5.4 of the main paper).
\begin{table}
\begin{tabular}{l|c c c c|c} \hline \hline & art & clipart & product & real & avg \\ \hline ERM [20] & 63.1 & 51.9 & 77.2 & 78.1 & 67.6 \\ ERM + SWAD [9] & 66.1 & 57.7 & 78.4 & 80.2 & 70.6 \\ DIWA [42] & 69.2 & 59 & 81.7 & 82.2 & 72.8 \\ ERM + MIRO + SWAD [10] & - & - & - & - & 72.4 \\ ERM++ & 70.7 & **62.2** & 81.8 & 84.0 & 74.7 \\ ERM++ + MIRO & **74.0** & 61.5 & **83.8** & **85.7** & **76.3** \\ \hline \hline \end{tabular}
\end{table}
Table 8: **OfficeHome:** Per-domain top-1 accuracy against reported results of the recent top-performing methods SWAD, DIWA, and MIRO. [10] does not report per-domain performance for MIRO, so we only show the average in that case. DIWA does not report standard errors. ERM++ not only greatly increases performance relative to SWAD, DIWA, and MIRO but also reduces variance between runs. The largest gains are on the held-out domain with the largest domain shift (clipart), illustrating the ability of ERM++ to improve performance on difficult DG tasks.
Figure 4: **OfficeHome:** Samples from the OfficeHome dataset, from each domain and selected classes. The dataset focuses on household objects. The domain shifts are mostly in low-level style, and there is little spatial bias.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline & painting & clipart & info & real & quickdraw & sketch & avg \\ \hline ERM [20] & 50.1 & 63.0 & 21.2 & 63.7 & 13.9 & 52.9 & 44.0 \\ ERM + SWAD [9] & 53.5 & 66.0 & 22.4 & 65.8 & 16.1 & 55.5 & 46.5 \\ DIWA [42] & 55.4 & 66.2 & 23.3 & 68.7 & 16.5 & 56 & 47.7 \\ ERM + MIRO + SWAD [10] & - & - & - & - & - & - & 47.0 \\ ERM++ & 58.4 & **71.5** & 26.2 & 70.7 & **17.3** & **60.5** & **50.8** \\ ERM++ MIRO & **58.5** & 71.0 & **26.5** & **71.1** & 15.9 & 59.5 & 50.4 \\ \hline \end{tabular}
\end{table}
Table 9: **DomainNet:** Per-domain top-1 accuracy against reported results of the recent top-performing methods SWAD, DIWA, and MIRO. [10] does not report per-domain performance for MIRO, so we only show the average in that case. DIWA does not report standard errors. ERM++ not only greatly increases performance relative to SWAD, DIWA, and MIRO but also reduces variance between runs. Similar to the results on OfficeHome (Table 8), the largest performance gains (of more than 4%) are on domains very different from the source domains (clipart and sketch). This suggests ERM++ is less sensitive to texture bias than ERM [20]. The bias of MIRO towards the pre-trained weights manifests in slightly higher performance on close-to-ImageNet domains like real when combined with ERM++, at the slight expense of performance on other domains.
Figure 5: **DomainNet:** Samples from the DomainNet dataset. While the real domain is quite similar to what one might expect in ImageNet, the distribution shifts are quite substantial in the other domains. Quickdraw and Infograph are particularly challenging, so the 1-3% gains of ERM++ on these domains are meaningful (Table 9). While most domains contain primarily shifts in low-level statistics (for example, real to painting), Infograph also has many non-centered objects.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline & caltech101 & labelme & sun09 & voc2007 & avg \\ \hline ERM [20] & 97.7 & **64.3** & 73.4 & 74.6 & 77.3 \\ ERM + SWAD [9] & 98.8 & 63.3 & **75.3** & **79.2** & 79.1 \\ DIWA [42] & **98.9** & 62.4 & 73.9 & 78.9 & 78.6 \\ ERM + MIRO + SWAD [10] & - & - & - & - & **79.6** \\ ERM++ & 98.7 & 63.2 & 71.6 & 78.7 & 78.0 \\ ERM++ + MIRO & 99.0 & 62.4 & 71.8 & 78.3 & 77.9 \\ \hline \end{tabular}
\end{table}
Table 10: **VLCS:** Per-domain top-1 accuracy against reported results of the recent top-performing methods SWAD, DIWA, and MIRO. [10] does not report per-domain performance for MIRO, so we only show the average in that case. DIWA does not report standard errors. Although the overall performance of ERM++ on VLCS is lower than that of competing methods, we can see that this drop primarily comes from lower performance on sun09. Furthermore, there are many ambiguous images in the LabelMe domain (see Figure 10), raising questions about the usefulness of trying to train on this domain.
Figure 6: **VLCS:** The low-level statistics are quite similar between domains, however spatial biases differ between domains. Caltech objects are quite centered, while other domains do not have this trait. For example the LabelMe domain has cars along the side of the image, and there are many chairs in the VOC2007 domain. Furthermore, in some cases the size of the objects differs dramatically. Finally, there are many ambiguous images in the LabelMe domain (see Figure 10), raising questions about the usefulness of trying to train on this domain.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline & Location 100 & Location 38 & Location 43 & Location 46 & Average \\ \hline ERM [20] & 54.3 & 42.5 & 55.6 & 38.8 & 47.8 \\ ERM + SWAD [9] & 55.4 & 44.9 & 59.7 & 39.9 & 50.0 \\ DIWA [42] & **57.2** & 50.1 & 60.3 & 39.8 & 51.9 \\ ERM + MIRO + SWAD [10] & - & - & - & - & 52.9 \\ ERM++ & 48.3 & **50.7** & **61.8** & **43.9** & 51.2 \\ ERM++ + MIRO & **60.81** & 48.8 & 61.1 & 42.7 & **53.4** \\ \hline \end{tabular}
\end{table}
Table 11: **TerraIncognita:** Per-domain top-1 accuracy against reported results of the recent top-performing methods SWAD, DIWA, and MIRO. [10] does not report per-domain performance for MIRO, so we only show the average in that case. DIWA does not report standard errors. ERM++ outperforms the other methods on 3 out of 4 held-out domains despite slightly underperforming on average. However, we point out that ERM++ w/ MIRO outperforms both DIWA and MIRO, and improves on ERM++ by a further 2%.
Figure 7: **TerraIncognita**: Samples from the TerraIncognita dataset, from each domain and selected classes. The background stays consistent, and the animal object frequently takes up a small portion of the frame. At night the images are black-and-white. This dataset matches realistic deployment scenarios well.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline & art\_painting & cartoon & photo & sketch & avg \\ \hline ERM [20] & 84.7 & 80.8 & 97.2 & 79.3 & 84.2 \\ ERM + SWAD [9] & 89.3 & 83.4 & 97.3 & 82.5 & 88.1 \\ DIWA [42] & 90.6 & 83.4 & 98.2 & 83.8 & 89 \\ ERM + MIRO + SWAD [10] & - & - & - & - & 88.4 \\ ERM++ & **90.6** & 83.7 & 98.1 & **86.6** & **89.8** \\ ERM++ MIRO & 90.2 & **83.8** & **98.6** & 82.4 & 88.8 \\ \hline \end{tabular}
\end{table}
Table 12: **PACS:** Per-domain top-1 accuracy against reported results of the recent top-performing methods SWAD, DIWA, and MIRO. [10] does not report per-domain performance for MIRO, so we only show the average in that case. DIWA does not report standard errors. ERM++ leads to substantial improvement over prior work. As in the other datasets (OfficeHome, DomainNet), large performance gains are made on the sketch domain.
Figure 8: **PACS:** Samples from the PACS dataset, from each domain and selected classes. The subjects tend to be centered, and the sketches are more realistic than the quickdraw setting in DomainNet. Though the domains are similar to those of DomainNet, PACS has fewer than 10000 samples, compared to DomainNet's 586000. Therefore PACS tests the capabilities of ERM++ on smaller data.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{8}{c|}{ERM++ Components} & \multicolumn{1}{c|}{OfficeHome} & \multicolumn{1}{c|}{PACS} & \multicolumn{1}{c|}{VLCS} & \multicolumn{1}{c|}{DomainNet} & \multicolumn{1}{c|}{TerraInc} & Avg. \\ \hline \# & MPA & FD & LT & WS & ES & S. Init & UBN & 15K & 10K & 11K & 590K & 25K & \\ \hline
1 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & 67.1\(\pm\)0.2 & 85.1\(\pm\)0.3 & 76.9\(\pm\)0.6 & 44.1\(\pm\)0.15 & 45.2\(\pm\)0.6 & 63.7 \\
2 & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & 70.2\(\pm\)0.3 & 85.7\(\pm\)0.2 & 78.5\(\pm\)0.3 & 46.4\(\pm\)0.0 & 49.4\(\pm\)0.4 & 66.0 \\
3 & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✓ & 71.5\(\pm\)0.1 & 87.3\(\pm\)0.2 & 77.4\(\pm\)0.1 & 46.8\(\pm\)0.0 & 49.8\(\pm\)0.5 & 66.5 \\
4 & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✓ & 71.7\(\pm\)0.1 & 88.7\(\pm\)0.2 & 76.9\(\pm\)0.1 & 48.3\(\pm\)0.0 & 49.6\(\pm\)0.4 & 67.0 \\
5 & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & 72.6\(\pm\)0.1 & 88.8\(\pm\)0.1 & 77.0\(\pm\)0.1 & 48.6\(\pm\)0.0 & 49.3\(\pm\)0.3 & 67.3 \\
6 & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & 72.6\(\pm\)0.1 & 88.8\(\pm\)0.1 & **78.7**\(\pm\)0.0 & 48.6\(\pm\)0.0 & 49.2\(\pm\)0.3 & 67.6 \\
7 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **74.7**\(\pm\)0.0 & 89.8\(\pm\)0.3 & 78.0\(\pm\)0.1 & **50.8**\(\pm\)0.0 & **51.2**\(\pm\)0.3 & **68.9** \\
8 & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & 74.6\(\pm\)0.1 & 87.9\(\pm\)0.2 & 78.6\(\pm\)0.1 & 49.8\(\pm\)0.0 & 51.1\(\pm\)0.8 & 68.4 \\
9 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & **74.7**\(\pm\)0.2 & **90.1**\(\pm\)0.0 & 78.6\(\pm\)0.1 & 49.9\(\pm\)0.0 & 49.0\(\pm\)0.4 & 68.3 \\ \hline \hline \end{tabular}
\end{table}
Table 13: We present the overall ablation for ERM++, with standard errors. The full ERM++ (Experiment 7) achieves the lowest standard errors on 3 out of 5 datasets, indicating that its components decrease variance. (1) ERM [20] baseline with unfrozen BN. (2) MPA: model parameter averaging, which uniformly improves results. (3) FD: training on the full data. (4) LT: training for 4x longer, which ensures convergence and improves performance by an additional half percent. (5) WS: warm-starting the classification layer especially improves OfficeHome, but also helps minimize overfitting (Figure 2 of the main paper). (6) ES: splitting off validation data to find a training length yields substantial gains. (7) S.Init: initializing the initial parameters to those trained with AugMix brings performance to state of the art. (8) Removing LT from (7) still results in state-of-the-art performance with half of the training cost of MIRO. (9) UBN: when we freeze the BN parameters, we see that performance substantially degrades.
Figure 9: **FMoW:** Samples from the FMoW dataset, from each domain and selected classes. The images differ not only in region but also in resolution and scale. The distribution shift between FMoW and the pretraining data is large; therefore FMoW tests the ability of ERM++ to perform on non-web-scraped data (see Section 5.4 of the main paper).
(see Appendix D.2 for more details), while DIWA averages 20-60 models and MIRO searches over 4 \(\lambda\) weight-regularization values in each experiment. Assuming the worst-case scenario of training two full passes (one on validation data to determine the number of training steps for _Early Stopping_, and one on the full training data with the validation data folded in, _Full Data_), and the same number of training steps as MIRO, ERM++ costs \(\frac{1}{2}\) that of MIRO while obtaining better performance. In particular, this configuration corresponds to Experiment 8 in Table 3 of the main paper.
For each forward step of MIRO, there is an additional forward pass of the data through the model, which is absent in ERM++. On the other hand, ERM++ takes a forward pass through the running-average model to update batch normalization statistics, which is not done in the other methods. This means that each training step is roughly compute-equivalent for ERM++ and MIRO, for a given architecture.
## Appendix D Reproducibility
We provide code in a zip file along with this supplementary material, and will open-source the code upon acceptance.
### Infrastructure
We train on a heterogeneous cluster, primarily on NVIDIA A6000 GPUs. Each experiment is conducted on a single GPU with 4 CPUs. A single run can range from 12-48 hours, depending on the number of training steps.
### Training details
We follow the DomainBed [20] training procedure and add the additional components of ERM++. In particular, we use the default hyper-parameters from DomainBed [20], _e.g._, a batch size of 32 (per domain), a learning rate of 5e-5, a ResNet dropout value of 0, and a weight decay of 0. We use the ADAM optimizer [28] with \(\beta\) and \(\epsilon\) set to the default values from PyTorch 1.12. Unless we specify that the "Long Training" component is added, we train models for 15000 steps on DomainNet (following SWAD [9]) and 5000 steps for the other datasets, which corresponds to a variable number of epochs depending on dataset size. If Long Training is used, we extend training by 4x. We train on all source domains except for one, validate the model on held-out data from the sources every 300 steps (20% of the source data), and evaluate on the held-out domain. If using _Full Data_, we retrain using the full data. We use the same data augmentation techniques as ERM [20]. We use the ResNet-50 architecture in all experiments.
**Model Parameter Averaging details:** If we use Model Parameter Averaging (_MPA_), we begin to keep a running average at the 100th step. If we additionally use warm-start, we only optimize the classification head for the first 500 steps, and start _MPA_ 100 steps after that. For the Specialist Model Parameter Averaging (_SMPA_) experiments (Table 6 of the main paper), we first train a generalist model for 15000 steps, then train an independent model for each domain for another 1500 steps. At the end, we average parameters and re-compute the batch norm running statistics. This recomputation ensures that the averaged model has accurate batch norm statistics, which may not be a simple average of the experts' statistics due to the non-linearity of neural networks.
**Batch Normalization details:** With unfrozen batch normalization (_UBN_), we update the evaluation model's BN statistics by first averaging the model iterates (from _MPA_) and then forward-propagating the current batch at each step through the evaluation model. In this way, the BN running statistics and the model used for inference match.
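To make the _MPA_ and _UBN_ bookkeeping concrete, the following is a minimal PyTorch-style sketch; the helper names (`update_average`, `refresh_bn_stats`) and the loop structure are our own illustration and not the released ERM++ code.

```python
import copy
import torch

def update_average(avg_model, model, n_averaged):
    # Uniform average over iterates: avg <- avg + (w - avg) / (n + 1).
    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
        p_avg.data.add_(p.data - p_avg.data, alpha=1.0 / (n_averaged + 1))

@torch.no_grad()
def refresh_bn_stats(avg_model, batch):
    # Forward the current batch through the averaged model in train mode so
    # that its BatchNorm running statistics track the averaged weights (UBN).
    was_training = avg_model.training
    avg_model.train()
    avg_model(batch)
    avg_model.train(was_training)

# Sketch of the relevant part of the training loop (mpa_start ~ step 100,
# or 600 with warm-start, as described above):
# if step == mpa_start:
#     avg_model, n_averaged = copy.deepcopy(model), 0
# elif step > mpa_start:
#     update_average(avg_model, model, n_averaged)
#     n_averaged += 1
#     refresh_bn_stats(avg_model, x_batch)  # inference model's stats stay in sync
```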
**Sources of pre-trained weights:** We use torchvision 0.13.1 for the vanilla ResNet-50 initialization. For the AugMix- and ResNet-A1-initialized weights, we leverage TIMM [54].
Footnote 1: AugMix weights: [https://github.com/rwightman/pytorch-image-models/releases/download/v0].
**A note on hyper-parameter search:** In this work, we focus on methodological improvements that do not depend on expensive hyper-parameter tuning, and as a result we use the default learning rate, weight decay, etc. We demonstrate state-of-the-art performance despite this, and greatly reduce the computational cost of training as a result. However, we believe there is substantial headroom for improvement with further hyper-parameter tuning.
**MIRO Implementation:** We directly follow the MIRO implementation and borrow the \(\lambda\) weight values from [10] when we combine MIRO with ERM++ in Table 2 of the main paper. ERM++ substantially improves the
Figure 10: **Sample from the LabelMe Domain in VLCS:** Is this a dog, person, or chair? Many samples in the LabelMe domain of VLCS are ambiguous but assigned a label (in this case, dog). This raises questions about the usefulness of training on this domain.
performance of MIRO.
**DIWA Implementation:** We follow a simplified version of the DIWA [42] algorithm for computational reasons; we average the parameters of the three seeds of ERM++. The authors of DIWA show that about half of the performance boost comes from the first few models averaged (Figure 4 of [42]), so this is a reasonable approximation of the method. It is interesting that DIWA reduces the performance of ERM++, but that ERM++ w/ DIWA is still improved over DIWA as reported in [42].
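As a sketch of this simplified DIWA-style averaging (our own illustration; `state_dicts` is an assumed list holding the three seeds' checkpoints):

```python
import torch

def average_state_dicts(state_dicts):
    # Uniformly average the floating-point tensors of several checkpoints;
    # non-float buffers (e.g., BN's num_batches_tracked) are kept from the first.
    avg = {}
    for k, v in state_dicts[0].items():
        if v.is_floating_point():
            avg[k] = sum(sd[k] for sd in state_dicts) / len(state_dicts)
        else:
            avg[k] = v.clone()
    return avg
```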
# Learnability of Linear Port-Hamiltonian Systems

Juan-Pablo Ortega, Daiying Yin
###### Abstract
A complete structure-preserving learning scheme for single-input/single-output (SISO) linear port-Hamiltonian systems is proposed. The construction is based on the solution, when possible, of the unique identification problem for these systems, in ways that reveal fundamental relationships between classical notions in control theory and crucial properties in the machine learning context, like structure-preservation and expressive power. In the canonical case, it is shown that the set of uniquely identified systems can be explicitly characterized as a smooth manifold endowed with global Euclidean coordinates, which allows concluding that the parameter complexity necessary for the replication of the dynamics is only \(\mathcal{O}(n)\) and not \(\mathcal{O}(n^{2})\), as suggested by the standard parametrization of these systems. Furthermore, it is shown that linear port-Hamiltonian systems can be learned while remaining agnostic about the dimension of the underlying data-generating system. Numerical experiments show that this methodology can be used to efficiently estimate linear port-Hamiltonian systems out of input-output realizations, making the contributions in this paper the first example of a structure-preserving machine learning paradigm for linear port-Hamiltonian systems based on explicit representations of this model category.
**Keywords:** Linear port-Hamiltonian system, machine learning, structure-preserving algorithm, systems theory, physics-informed machine learning, unique identification problem, controllable representation, observable representation, canonical representation.
Footnote 1: Juan-Pablo Ortega and Daiying Yin are with the Division of Mathematical Sciences, Nanyang Technological University, Singapore. Their email addresses are [email protected] and [email protected].
###### Contents
* **Glossary of main symbols**
* 1 Introduction
* 2 Preliminaries
  * 2.1 State-space systems and morphisms
  * 2.2 Hamiltonian and port-Hamiltonian systems
  * 2.3 Controllability and observability
  * 2.4 The symplectic Lie group and its Lie algebra
  * 2.5 Williamson's normal form
* 3 Controllable and observable Hamiltonian representations
* 4 Unique identification of linear port-Hamiltonian systems
  * 4.1 The unique identification problem for filters in \(\mathcal{PH}_{n}\)
  * 4.2 Equivalence classes of port-Hamiltonian systems by system isomorphisms
  * 4.3 The quotient spaces as groupoid orbit spaces
  * 4.4 Characterization of canonical port-Hamiltonian systems
  * 4.5 The unique identifiability space for canonical port-Hamiltonian systems as a group orbit space
  * 4.6 Global Euclidean coordinates for the unique identifiability space of canonical port-Hamiltonian systems
* 5 Linear port-Hamiltonian systems in normal form are restrictions of higher dimensional ones
* 6 Practical implementation of the results
* 7 Numerical illustrations
  * 7.1 Non-dissipative circuit
  * 7.2 Positive definite Frenkel-Kontorova model
* 8 Conclusions
* References
* 9 Appendices
  * 9.1 Proof of Theorem 3.3 (i)
  * 9.2 Proof of Theorem 3.3 (ii)
  * 9.3 Proof of Theorem 4.5
  * 9.4 Proof of Theorem 4.9
  * 9.5 Proof of Proposition 4.11
  * 9.6 Proof of Proposition 4.17
  * 9.7 Proof of Proposition 4.18
  * 9.8 Proof of Proposition 4.19
  * 9.9 Proof of Proposition 4.20
  * 9.10 Proof of Theorem 5.1
  * 9.11 Proof of Proposition 5.2
  * 9.12 Proof of Proposition 5.3
  * 9.13 Proof of Proposition 5.4
  * 9.14 A note on the design of discrete integrators on the transformed space
## Glossary of Symbols
* \(\Theta_{PH_{m,n}}\): The space of parameters \((\mathbf{d}^{\prime},\mathbf{v}^{\prime})\) for \(PH_{m,n}\)
* \(\mathbb{T}^{n}\): The \(n\)-torus
* \(\mathcal{CH}_{n}/\mathcal{OH}_{n}/\mathcal{CH}_{n}^{0}\): The spaces of filters induced by \(CH_{n}\), by \(OH_{n}\), and by \(CH_{n}\) with zero initial condition, respectively
* \(\mathcal{G}_{n}\rightrightarrows PH_{n}\): Port-Hamiltonian groupoid, see Proposition 4.11
* \(\mathcal{H}_{n}\rightrightarrows\Theta_{CH_{n}}\): Reduced port-Hamiltonian groupoid, see Proposition 4.13
* \(\mathcal{PH}_{n}\): The space of input-output dynamics/filters induced by systems in \(PH_{n}\)
* \(\mathcal{PH}_{n}^{can}\): The space of input-output dynamics/filters induced by systems in \(PH_{n}^{can}\)
* \(\mathcal{X}_{\uparrow}^{n}\): The set of \(n\)-tuples of distinct real numbers in increasing order
* \(\mathfrak{sp}(2n,\mathbb{R})\): Lie algebra of the symplectic group
* \(\sim_{\star}\): An equivalence relation defined on \(\Theta_{CH_{n}}\)
* \(\sim_{filter}\): The equivalence relation of inducing the same filter
* \(\sim_{sys}\): The equivalence relation of system automorphism
* \(\Theta_{CH_{n}}/\Theta_{OH_{n}}\): The space of parameters \((\mathbf{d},\mathbf{v})\) for \(CH_{n}\) and/or \(OH_{n}\), which are the same
* \(\theta_{CH_{n}}/\theta_{OH_{n}}\): The maps that send parameters in \(\Theta_{CH_{n}}/\Theta_{OH_{n}}\) to the corresponding state space systems in \(CH_{n}/OH_{n}\)
* \(\Theta_{CH_{n}}^{can}\): The subset of \(\Theta_{CH_{n}}\) that corresponds to canonical systems
* \(\Theta_{PH_{n}}\): The space of parameters \((Q,B)\) for \(PH_{n}\)
* \(\theta_{PH_{n}}\): The map that sends parameters in \(\Theta_{PH_{n}}\) to the corresponding state space system in \(PH_{n}\)
* \(B\): Input matrix of a port-Hamiltonian system in normal form
* \(CH_{n}/OH_{n}\): The spaces of \(2n\)-dimensional controllable/observable Hamiltonian representations
* \(F:\mathcal{Z}\times\mathcal{U}\rightarrow\mathcal{Z}\): State equation
* \(H:\mathbb{R}^{2n}\longrightarrow\mathbb{R}\): Hamiltonian function
* \(PH_{n}\): The space of \(2n\)-dimensional linear normal form port-Hamiltonian systems (5)
* \(PH_{n}^{can}\): The subspace of \(PH_{n}\) consisting of canonical linear normal form port-Hamiltonian systems
* \(PH_{m,n}\): The subspace of \(PH_{m}\) containing all \((Q^{\prime},B^{\prime})=\left(O\begin{bmatrix}Q&0\\ 0&\mathbb{I}_{2m-2n}\end{bmatrix}O^{T},O\begin{bmatrix}B\\ 0\end{bmatrix}\right)\), \((Q,B)\in PH_{n}\), \(O\in O(2m,\mathbb{R})\)
* \(Q\): Quadratic form that determines a linear Hamiltonian system
* \(S_{n}\): Permutation group of \(n\) elements
* \(Sp(2n,\mathbb{R})\): Symplectic group
* \(\mathbb{J}_{n}=\begin{bmatrix}0&\mathbb{I}_{n}\\ -\mathbb{I}_{n}&0\end{bmatrix}\): Canonical symplectic matrix
## 1 Introduction
Machine learning has experienced substantial development in recent years due to significant advances in algorithms and a fast growth in computational power. The universal approximation properties of neural networks [12, 13] and other similar families make it possible for them to learn any function with very few prior assumptions. A typical modus operandi in supervised machine learning is first to choose a neural network architecture, to perform forward propagation using available data, to compute some loss function, and then to carry out backward propagation, that is, gradient descent, to recursively optimize the parameters. This paradigm has proved to be very successful in the learning of numerous complicated tasks, including time-series forecasting [14], computer vision [15], and natural language processing [16].
In physics and engineering, machine learning is called to play an essential role in predicting and integrating the equations associated with physical dynamical systems. Physical systems are primarily formulated in terms of ordinary, time-delay, and partial differential equations that can be deduced mostly from variational principles. Consequently, some researchers propose to learn adequately discretized versions of their corresponding vector fields (see, for instance, [17], [18], [20], and references therein). In addition to vector fields learning, researchers have proposed "model-free" methods like _transformers_[19, 20, 21], _reservoir computing_[22, 23, 24, 25], _recurrent neural networks_[2], _convolutional neural networks_[26], or _LSTMs_[28].
Various universal approximation properties (see, for instance, [12, 13, 14, 15]) theoretically explain the empirical success of some of these learning paradigms. Nevertheless, for physics-related problems, like in mechanics or optics, it is natural to build into the learning algorithm any prior knowledge that we may have about the system based
on physics' first principles. This may include specific forms of the laws of motion, conservation laws, symmetry invariance, as well as other underlying geometric and variational structures. This observation regarding the construction of structure-preserving schemes was profusely exploited with much success in the field of numerical integration before the emergence of machine learning [12, 13, 14, 15]. Many examples in that context show how the failure to maintain specific conservation laws can lead to physically inconsistent solutions.
The translation of this idea to the context of machine learning has led to the emergence of a new domain collectively known as _physics-informed machine learning_ (see [16, 17, 18, 19] and references therein). In the specific case of Hamiltonian systems, the two main structural constraints are that the flow is symplectic and that the energy, i.e., the Hamiltonian, is conserved along the flow. Additionally, symmetries are frequently present, which brings about additional conserved quantities in the form of the so-called momentum maps via Noether's Theorem [13, 14, 15]. These are all examples of qualitative properties to be preserved by the learning algorithms. Needless to say, the above-mentioned "model-free" approaches generically fail to preserve all these structures. With this in mind, several attempts have been made in the literature to develop tailor-made learning algorithms for Hamiltonian systems. For example, in [12, 13] neural methods are proposed to learn the Hamiltonian function directly. In [14], a symplectic recurrent neural network is proposed that uses symplectic integration while matching the predictions and observations, which leads to a structure-preserving paradigm. Other structure-preserving methods include the so-called SympNet [15], the generating function neural networks (GFNN) in [14], and the symplectic reversible neural networks in [16]. SympNet constructs a universal approximating family of symplectic maps, while GFNN applies a modified KAM theory to control long-term prediction error. Symplectic reversible neural networks are also proposed as a family of universal approximating maps that concern, in particular, reversible symplectic dynamics. In [15], a parametric framework for learning Hamiltonian state dynamics with control is proposed, under the assumption that the Hamiltonian is separable. Under the same assumption, [13] proposes to learn with a parametrized Hamiltonian in Taylor-series form.
This paper's focus differs from the references mentioned above in two ways. First, those methods are designed to learn the state evolution of Hamiltonian systems, whereas our approach focuses on _learning the input-output dynamics of port-Hamiltonian systems while remaining agnostic about the physical state space_. As will be introduced later on, these systems have an underlying Dirac structure that describes the geometry of numerous physical systems with external inputs [17] and includes the dynamics of the observations of Hamiltonian systems as a particular case. Even though various learning schemes for these systems have already been proposed in the literature [18, 19, 14, 15], most works on the learning of Hamiltonian systems deal with autonomous (separable) Hamiltonian systems for which one assumes access to the entire phase space and not only to its observations. Second, instead of a general nonlinear system, for which only the approximation error can possibly be estimated, we consider, as a first approach, _exclusively linear systems_, in which case we can obtain explicit representations of linear port-Hamiltonian systems in normal form and characterize the symmetries and quotient spaces associated to the invariance by system automorphisms. Thereby, we propose a structure-preserving learning paradigm with a provably minimal parameter space.
The contributions in this paper are contained in several results that we briefly introduce in the following lines. In Section 2, we define the notion of linear port-Hamiltonian systems in normal form and present some necessary introductory concepts. We start in Theorem 3.3 by introducing system morphisms that allow us to represent any linear port-Hamiltonian system in normal form as the image of another linear system of the same dimension in which the state equation is in controllable canonical form. An obvious observation is that, since the original port-Hamiltonian system and the new linear system are linked by a system morphism, the images of the input/output relations of the latter are input/output relations of the former. In particular, the new system can be used to learn to reproduce the input/output dynamics of the original port-Hamiltonian system (for a subspace of initial conditions), and _this learning paradigm is structure-preserving by construction_. Similarly, Theorem 3.3 also contains another type of system morphisms that link any linear port
Hamiltonian system in normal form to some linear system of the same dimension in observable canonical form. Consequently, the input-output relations of the original port-Hamiltonian system with respect to any initial condition can be captured by the observable Hamiltonian representation. Both representations are based on classical techniques from control theory and the Cayley-Hamilton theorem, and are ultimately corollaries of the Williamson normal form [22, 23, 24, 25]. We show that the controllable and observable representations are closely related to each other, and that both system morphisms become isomorphisms for canonical port-Hamiltonian systems. However, for the purpose of learning a general port-Hamiltonian system that may not be canonical, we reveal that there is a trade-off between the structure-preserving property and the expressive power. These results establish a strong link between classical notions in control theory, that is, controllability and observability, and notions in machine learning, namely, structure-preservation and expressive power.
Based on these explicit constructions and using the parametrizations that come with them, we tackle in Section 4 the unique identifiability of input-output dynamics of linear port-Hamiltonian systems in normal form. Such a characterization is obviously needed to solve the model estimation problem since, in applications, we only have access to input/output data, and different state space systems can induce the same filter that produces that data. This fact has important implications when it comes to the learning of port-Hamiltonian systems out of finite-sample realizations of a given data-generating process because such degeneracy makes its exact recovery impossible. Said differently, it is not the space of port-Hamiltonian systems that needs to be characterized but its quotient space with respect to the equivalence relation defined by the constraint of inducing the same input/output system. We shall see in Subsection 4.1 that the presence of non-canonical systems in \(PH_{n}\) makes it in general difficult to directly characterize that quotient space, and we shall settle for the closest to it that we can get, namely, the quotient space by system automorphisms, which, as will be justified, approximates the general case in a certain sense and admits an explicit characterization as a Lie groupoid orbit space (Subsection 4.3). In Subsection 4.4, we restrict our identification analysis to canonical port-Hamiltonian systems and show, first, that in that situation eliminating the system isomorphisms completely identifies the set of input/output systems, and second, that the corresponding quotient spaces can be characterized as orbit spaces with respect to a group action (as opposed to the groupoid actions of the general unrestricted case), where the group is explicitly given by a semi-direct product. Moreover (see Subsection 4.6), this orbit space can be explicitly endowed with a smooth manifold structure that has global Euclidean coordinates that can be used at the time of constructing estimation algorithms. Consequently, canonical port-Hamiltonian dynamics can be identified fully and explicitly in either the controllable or the observable Hamiltonian representations and learned by estimating a unique set of parameters in a smooth manifold that is obtained as a group orbit space.
Another learning-related problem that we tackle is that, in applications, one is obliged to remain agnostic as to the dimension of the underlying data-generating port-Hamiltonian system. This leads to the difficulty of choosing the dimension of the controllable/observable Hamiltonian representations. We solve this issue by proving in Theorem 5.1 that, for \(m\geq n\), any \(2n\)-dimensional linear port-Hamiltonian system in normal form can be regarded as the restriction of a \(2m\)-dimensional one to some subspace. This fact, together with some subsequent results, guarantees theoretically that we can choose a sufficiently large \(m\) in practice, parametrize the observable Hamiltonian representation in dimension \(2m\), and use it for learning without assuming any knowledge about the dimension of the data-generating system. The paper concludes with some numerical examples in Section 7 that illustrate the viability of the method that we propose in systems with various levels of complexity and dimensions, as well as the computational advantages associated with the use of the parameter space in which unique identification is guaranteed.
## 2 Preliminaries
In this section, we introduce various notions and preliminary results necessary to understand the context and the contributions of the paper.
### State-space systems and morphisms
A continuous time state-space system is given by the following two equations
\[\begin{cases}\dot{\mathbf{z}}=F(\mathbf{z},u),\\ y=h(\mathbf{z}),\end{cases} \tag{1}\]
where \(u\in\mathcal{U}\) is the _input_, \(\mathbf{z}\in\mathcal{Z}\) is the _internal state_ and \(F:\mathcal{Z}\times\mathcal{U}\to\mathcal{Z}\) is called the _state map_. The first equation is called the _state equation_ while the second one is usually referred to as the _observation equation_. The solutions of (1) (when available and unique) yield an input/output map that is by construction causal and time-invariant. State-space systems will be sometimes denoted using the triplet \((\mathcal{Z},F,h)\).
**Definition 2.1**.: _A map \(f:\mathcal{Z}_{1}\to\mathcal{Z}_{2}\) is called a system morphism (see [1]) between the continuous-time state-space systems \((\mathcal{Z}_{1},F_{1},h_{1})\) and \((\mathcal{Z}_{2},F_{2},h_{2})\) if it satisfies the following two properties:_
**(i)**: _System equivariance:_ \(f(F_{1}(\mathbf{z}_{1},u))=F_{2}(f(\mathbf{z}_{1}),u)\)_, for all_ \(\mathbf{z}_{1}\in\mathcal{Z}_{1}\) _and_ \(u\in\mathcal{U}\)_._
**(ii)**: _Readout invariance:_ \(h_{1}(\mathbf{z}_{1})=h_{2}(f(\mathbf{z}_{1}))\) _for all_ \(\mathbf{z}_{1}\in\mathcal{Z}_{1}\)_._
As a direct consequence of this definition, the composition of system morphisms is again a system morphism. If \(f\) is invertible and \(f^{-1}\) is also a morphism, we say that \(f\) is a system isomorphism. An elementary but very important fact is that if \(f:\mathcal{Z}_{1}\to\mathcal{Z}_{2}\) is a linear system-equivariant map between \((\mathcal{Z}_{1},F_{1},h_{1})\) and \((\mathcal{Z}_{2},F_{2},h_{2})\) (\(\mathcal{Z}_{1}\) and \(\mathcal{Z}_{2}\) are in this case vector spaces), then, for any solution \(\mathbf{z}_{1}\in C^{1}(I,\mathcal{Z}_{1})\) of the state equation associated to \(F_{1}\) and to the input \(u\in C^{1}(I,\mathcal{U})\), with \(I\subset\mathbb{R}\) an interval, its image \(f\circ\mathbf{z}_{1}\in C^{1}(I,\mathcal{Z}_{2})\) is a solution of the state space system associated to \(F_{2}\) with the same input. Indeed, for any \(t\in I\) we have, by the linearity and the system equivariance of \(f\):
\[\frac{d}{dt}[f(\mathbf{z}_{1}(t))]=Df(\mathbf{z}_{1}(t))\cdot\dot{\mathbf{z} }_{1}(t)=f(\dot{\mathbf{z}}_{1}(t))=f(F_{1}(\mathbf{z}_{1}(t),u(t)))=F_{2}(f( \mathbf{z}_{1}(t)),u(t)).\]
This fact has as an important consequence that, in general, input/output systems _are not uniquely identified_ since all the system-isomorphic state-space systems yield the same input/output map.
### Hamiltonian and port-Hamiltonian systems
Hamiltonian systems are dynamical systems whose behavior is governed by Hamilton's variational principle. Even though these autonomous systems can in general be formulated on any symplectic manifold [1], we restrict ourselves in this paper to the case in which the phase space is the even-dimensional vector space \(\mathbb{R}^{2n}\) endowed with the Darboux canonical symplectic form. In this case, the _Hamiltonian system_ determined by the _Hamiltonian function_ \(H\in C^{1}(\mathbb{R}^{2n})\) is given by the differential equation
\[\dot{\mathbf{z}}=\mathbb{J}\frac{\partial H}{\partial\mathbf{z}}, \tag{2}\]
where \(\mathbb{J}=\begin{bmatrix}0&\mathbb{I}_{n}\\ -\mathbb{I}_{n}&0\end{bmatrix}\) is the so-called _canonical symplectic matrix_. Note that \(-\mathbb{J}=\mathbb{J}^{T}=\mathbb{J}^{-1}\), and hence \(\mathbb{J}\) also endows \(\mathbb{R}^{2n}\) with a complex structure. In this paper, we will denote the canonical symplectic matrix by \(\mathbb{J}\), unless the context requires us to specify the dimension, in which case we denote it by \(\mathbb{J}_{n}\).
A _linear_ Hamiltonian system is determined by a quadratic Hamiltonian function \(H(\mathbf{z})=\frac{1}{2}\mathbf{z}^{T}Q\mathbf{z}\), where \(\mathbf{z}\in\mathbb{R}^{2n}\) and \(Q\in\mathbb{M}_{2n}\) is a square matrix that without loss of generality can be assumed to be symmetric. In this case, Hamilton's equations (2) reduce to
\[\dot{\mathbf{z}}=\mathbb{J}Q\mathbf{z}. \tag{3}\]
_Port-Hamiltonian systems_ (see [10]) are state-space systems that generalize autonomous Hamiltonian systems to the case in which external signals or inputs control, in a time-varying way, the dynamical behavior of the Hamiltonian system. The family of input-state-output port-Hamiltonian systems consists of those port-Hamiltonian systems with no algebraic constraints on the state-space variables, and where the flow and effort variables of the resistive, control, and interaction ports are split into conjugated pairs. In such cases, the implicit representation can be proved (see [10]) to be equivalent to the following explicit form:
\[\begin{cases}\dot{\mathbf{x}}=[J(\mathbf{x})-R(\mathbf{x})]\frac{\partial H}{ \partial\mathbf{x}}(\mathbf{x})+g(\mathbf{x})u,\\ y=g^{T}(\mathbf{x})\frac{\partial H}{\partial\mathbf{x}}(\mathbf{x}),\end{cases} \tag{4}\]
where \((u,y)\) is the input-output pair (corresponding to the control and output conjugated ports), \(J(\mathbf{x})\) is a skew-symmetric interconnection structure, and \(R(\mathbf{x})\) is a symmetric positive-definite dissipation matrix. Our work concerns _linear_ port-Hamiltonian systems in _normal form_, which we define now: a linear port-Hamiltonian system (4) is in normal form if the skew-symmetric matrix \(J\) is constant and equal to the canonical symplectic matrix \(\mathbb{J}\), the Hamiltonian matrix \(Q\) is symmetric positive-definite, and the energy dissipation matrix \(R=0\), in which case (4) takes the form:
\[\begin{cases}\dot{\mathbf{z}}=\mathbb{J}Q\mathbf{z}+Bu,\\ y=B^{T}Q\mathbf{z},\end{cases} \tag{5}\]
with \(\mathbf{z}\in\mathbb{R}^{2n}\), \(u,y\in\mathbb{R}\), and where \(B\in\mathbb{R}^{2n}\) specifies the interconnection structure simultaneously at the input and output levels. By definition, such systems are fully determined by the pair \((Q,B)\), and hence we define by
\[\Theta_{PH_{n}}:=\left\{(Q,B)|0<Q\in\mathbb{M}_{2n},Q=Q^{T},B\in\mathbb{R}^{2n}\right\} \tag{6}\]
the space of _parameters_ of (5). Let \(\theta_{PH_{n}}:\Theta_{PH_{n}}\to PH_{n}\) be the map that associates to the parameter \((Q,B)\in\Theta_{PH_{n}}\) the corresponding port-Hamiltonian state space system. For convenience, _we shall often use \((Q,B)\) to denote elements in \(PH_{n}\) unless there is a risk of confusion_. Note that the condition \(Q>0\) implies that the origin is a Lyapunov stable equilibrium of (3). All these systems have the existence and uniqueness of solutions property and hence determine a family of _input/output systems_, also known as _filters_, that will be denoted by \(\mathcal{PH}_{n}\). More specifically, the elements in \(\mathcal{PH}_{n}\) are maps \(U_{(Q,B)}:C^{1}([0,1])\times\mathbb{R}^{2n}\longrightarrow C^{1}([0,1])\) given by
\[\begin{array}{cccc}U_{(Q,B)}:&C^{1}([0,1])\times\mathbb{R}^{2n}&\longrightarrow&C^{1}([0,1])\\ &(u,\mathbf{x}_{0})&\longmapsto&U_{(Q,B)}(u,\mathbf{x}_{0})_{t}=B^{T}Q\,e^{\mathbb{J}Qt}\left[\int_{0}^{t}e^{-\mathbb{J}Qs}Bu(s)\,ds+\mathbf{x}_{0}\right],\quad t\in[0,1].\end{array}\]
Note that \(PH_{n}\) includes as a special case linear observations of autonomous linear Hamiltonian systems (case \(B=0\)). Note that as a manifold \(\Theta_{PH_{n}}=\mathcal{S}_{2n}^{+}\times\mathbb{R}^{2n}\), where \(\mathcal{S}_{2n}^{+}\) denotes the space of symmetric positive-definite matrices (SPD). We recall that \(\mathcal{S}_{2n}^{+}\) has a natural differentiable manifold structure whose tangent space at any point is the vector space of symmetric matrices \(\mathcal{S}_{2n}\) (see [12], and references therein).
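As an illustration, the following is a minimal Python sketch (our own code, not part of the paper's method) that evaluates the filter \(U_{(Q,B)}\) numerically by integrating the state equation in (5) and applying the readout \(y=B^{T}Q\mathbf{z}\); the helper name `ph_filter` and the random test system are our choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def ph_filter(Q, B, u, x0, t_eval):
    # Evaluate U_{(Q,B)}(u, x0) on the grid t_eval by integrating (5).
    n = Q.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    A = J @ Q
    rhs = lambda t, z: A @ z + B * u(t)      # dz/dt = JQ z + B u(t)
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), x0,
                    t_eval=t_eval, rtol=1e-9, atol=1e-12)
    return (B @ Q) @ sol.y                   # y(t) = B^T Q z(t)

# Example: a random 4-dimensional system in normal form with u(t) = sin(t).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Q = M @ M.T + 4.0 * np.eye(4)                # symmetric positive-definite
B = rng.standard_normal(4)
t = np.linspace(0.0, 1.0, 200)
y = ph_filter(Q, B, np.sin, np.zeros(4), t)
```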
Port-Hamiltonian systems are also closely linked to the so-called _affine Hamiltonian input-output systems_ that have been considered as a natural extension of Hamiltonian systems with external forces and studied extensively in the literature (see [14] for the deterministic case and [15, 16] for stochastic extensions), which take the form
\[\begin{cases}\dot{\mathbf{x}}=X_{H}(\mathbf{x})+X_{g}(\mathbf{x})u,\\ \tilde{y}=g(\mathbf{x}),\end{cases} \tag{7}\]
where \(X_{H}\) and \(X_{g}\) are the Hamiltonian vector fields of \(H,g\in C^{1}(\mathbb{R}^{2n})\). In the linear case, (7) reduces to
\[\begin{cases}\dot{\mathbf{z}}=\mathbb{J}Q\mathbf{z}-\mathbb{J}Bu,\\ \tilde{y}=B^{T}\mathbf{z},\end{cases} \tag{8}\]
The relation between (8) and (5) is that \(\dot{\tilde{y}}=B^{T}\dot{\mathbf{z}}=B^{T}\mathbb{J}Q\mathbf{z}=(-\mathbb{J}B)^{T}Q\mathbf{z}\), showing that the time derivative of the output of the affine Hamiltonian input-output system has a port-Hamiltonian structure. Note that in the second equality we used that \(B^{T}\mathbb{J}B=0\), since \(\mathbb{J}\) is antisymmetric.
Consider now a general linear single-input/single-output system that takes the form
\[\begin{cases}\dot{\mathbf{x}}=A\mathbf{x}+Bu,\\ y=C^{T}\mathbf{x},\end{cases} \tag{9}\]
where \(A\in\mathbb{M}_{n}\), \(B,C\in\mathbb{R}^{n}\). Very often in control theory, it is the so-called transfer matrix rather than the input/output system that is studied. The transfer matrix \(G(s)\) of (9) is defined as \(G(s)=C^{T}(\mathbb{I}s-A)^{-1}B\), which converts the differential equations in the time domain into an algebraic equation in the Laplace frequency domain. It can be proved that the transfer matrix of the systems (5) satisfies \(G(s)=-G(-s)\) and that of the systems (8) satisfies \(G(s)=G(-s)\). The converse statements also hold for canonical realizations (see the definition in the next section) [10], [11]. These facts are a strong indication that the systems (5) and (8) carry intrinsic symmetries to be explicitly characterized.
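As a quick numerical illustration (our own code, using the readout convention \(y=C^{T}\mathbf{x}\) so that \(G(s)=B^{T}Q(\mathbb{I}s-\mathbb{J}Q)^{-1}B\) for (5)), one can verify the symmetry \(G(s)=-G(-s)\) at a few test points:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2
M = rng.standard_normal((2 * n, 2 * n))
Q = M @ M.T + np.eye(2 * n)                  # symmetric positive-definite
B = rng.standard_normal(2 * n)
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])
A = J @ Q

def G(s):
    # Transfer function of (5): G(s) = B^T Q (sI - JQ)^{-1} B.
    return (B @ Q) @ np.linalg.solve(s * np.eye(2 * n) - A, B)

for s in [0.3, 1.7 + 0.5j, 0.2 - 1.1j]:
    assert abs(G(s) + G(-s)) <= 1e-8 * (1 + abs(G(s)))   # G(s) = -G(-s)
```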
### Controllability and observability
Given a general linear system like (9), we recall that its _controllability_ and _observability matrices_ are defined by
\[\left[B\ |\ AB\ |\ \dots\ |\ A^{n-1}B\right]\quad\text{and}\quad\left[ \begin{matrix}C^{T}\\ C^{T}A\\ \vdots\\ C^{T}A^{n-1}\end{matrix}\right],\quad\text{respectively}.\]
The system is called _controllable_ (respectively, _observable_) if its controllability (respectively, observability) matrix has full rank. Any linear controllable (respectively, observable) system can be transformed into the so-called controllable (respectively, observable) canonical forms by using appropriate linear system isomorphisms (see [12]). Conversely, systems in these canonical forms are automatically controllable (respectively, observable). In the next section, we characterize the controllable/observable/canonical systems in the linear port-Hamiltonian category.
Controllability and observability are intertwined concepts in the linear port-Hamiltonian category. Indeed, it can be proved (see [13]) that if a linear port-Hamiltonian system without dissipation is controllable and \(\det(Q)\neq 0\), then it is also observable. Conversely, if it is observable, then this implies that \(\det(Q)\neq 0\) and that it is also controllable (see [13]). As is customary in systems theory, we say a linear port-Hamiltonian system in normal form is _canonical_ if it is both controllable and observable. In view of the results that we just recalled, if \(\det(Q)\neq 0\), then either controllability or observability is equivalent to the system being canonical. Furthermore, it can be shown that being canonical is a generic property, that is, the set of canonical systems forms an open and dense subset. We shall denote by \(PH_{n}^{can}\subset PH_{n}\) the subset of \(PH_{n}\) made of canonical linear port-Hamiltonian systems. Later on in the paper, the significance of these observations will become apparent.
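Numerically, canonicity can be tested directly from the pair \((Q,B)\); the following is a minimal Python sketch (the helper name `is_canonical` is ours) that builds the controllability matrix of (5), with state matrix \(A=\mathbb{J}Q\), and checks that it has full rank, which for \(\det(Q)\neq 0\) is equivalent to the system being canonical.

```python
import numpy as np

def is_canonical(Q, B, tol=1e-10):
    # Rank test on the controllability matrix [B | AB | ... | A^{2n-1}B]
    # of (5), where A = JQ; for det(Q) != 0 this is equivalent to being
    # both controllable and observable, i.e., canonical.
    n2 = Q.shape[0]
    n = n2 // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    A = J @ Q
    cols, w = [], B.astype(float)
    for _ in range(n2):
        cols.append(w)
        w = A @ w
    C = np.column_stack(cols)
    return np.linalg.matrix_rank(C, tol=tol) == n2
```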
### The symplectic Lie group and its Lie algebra
A square matrix \(S\in\mathbb{M}_{2n}\) in dimension \(2n\) is called _symplectic_ if it satisfies \(S^{T}\mathbb{J}S=\mathbb{J}\). The set of all symplectic matrices forms a Lie group denoted by \(Sp(2n,\mathbb{R})\). It is well known that if \(S\in Sp(2n,\mathbb{R})\) then \(\det S=\pm 1\), and hence \(Sp(2n,\mathbb{R})\) is a subgroup of the general linear group \(GL(2n,\mathbb{R})\). The Lie algebra \(\mathfrak{sp}(2n,\mathbb{R})\) of \(Sp(2n,\mathbb{R})\) is given by the matrices \(A\in\mathbb{M}_{2n}\) that satisfy the identity \(A^{T}\mathbb{J}+\mathbb{J}A=0\). Equivalently, \(A\in\mathfrak{sp}(2n,\mathbb{R})\) if and only if \(A=\mathbb{J}R\), where \(R\in\mathbb{M}_{2n}\) is symmetric. We will refer to the elements in \(Sp(2n,\mathbb{R})\) as _symplectic matrices_ and to those in \(\mathfrak{sp}(2n,\mathbb{R})\) as _infinitesimally symplectic_.
Notably, the eigenvalues of the elements in \(\mathfrak{sp}(2n,\mathbb{R})\) appear in specific patterns that are spelled out in the following classical proposition (see [1, Section 3.1]).
**Proposition 2.2**.: _The characteristic polynomial of any matrix in \(A\in\mathfrak{sp}(2n,\mathbb{R})\) is even. Thus, if \(\lambda\) is an eigenvalue of \(A\) then so are \(-\lambda\), \(\bar{\lambda}\), and \(-\bar{\lambda}\)._
The importance of this group in our developments is that the (constant) vector field associated with Hamilton's equations (3) is an element in \(\mathfrak{sp}(2n,\mathbb{R})\). Its flow determines a one-parameter subgroup of elements in \(Sp(2n,\mathbb{R})\). We also introduce the unitary group \(U(n,\mathbb{C})\), which consists of matrices \(U\in\mathbb{M}_{n}(\mathbb{C})\) with \(UU^{*}=U^{*}U=\mathbb{I}_{n}\), where \(U^{*}\) denotes the conjugate transpose of \(U\). We denote by \(U(n)\) (see [4]) the image of \(U(n,\mathbb{C})\) in \(Sp(2n,\mathbb{R})\) by the monomorphism
\[A+iB\to\begin{bmatrix}A&-B\\ B&A\end{bmatrix}. \tag{10}\]
The so-called _2-out-of-3 property_ [1] implies that \(U(n)=O(2n,\mathbb{R})\cap GL(n,\mathbb{C})\cap Sp(2n,\mathbb{R})\), and it is indeed the intersection of any two out of the three groups.
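The following is a small numerical sanity check (our own code, not from the paper) of Proposition 2.2 and of the 2-out-of-3 property: the spectrum of a random element \(\mathbb{J}R\in\mathfrak{sp}(2n,\mathbb{R})\) is invariant under \(\lambda\mapsto-\lambda\), and the image of a unitary matrix under the monomorphism (10) is simultaneously orthogonal and symplectic.

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

# Proposition 2.2: eigenvalues of JR (R symmetric) come in pairs lambda, -lambda.
R = rng.standard_normal((2 * n, 2 * n)); R = (R + R.T) / 2
lam = np.linalg.eigvals(J @ R)
for l in lam:
    assert np.min(np.abs(lam + l)) < 1e-8    # -lambda is also in the spectrum

# 2-out-of-3: the image of a unitary U = A + iB under (10) is in O(2n) and Sp(2n).
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(X)                       # a unitary matrix
E = np.block([[U.real, -U.imag], [U.imag, U.real]])
assert np.allclose(E.T @ E, np.eye(2 * n))   # orthogonal
assert np.allclose(E.T @ J @ E, J)           # symplectic
```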
### Williamson's normal form
The following classical result can be found in [11, 12, 13, 14].
**Theorem 2.3**.: _Let \(M\in\mathbb{M}_{2n}\) be a positive-definite symmetric real matrix. Then_
**(i)**: _There exists a symplectic matrix \(S\in Sp(2n,\mathbb{R})\) such that \(M=S^{T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}S\), with \(D=\operatorname{diag}(\mathbf{d})\) an \(n\)-dimensional diagonal matrix with positive entries and \(\mathbf{d}=\left(d_{1},\ldots,d_{n}\right)^{T}\)._
**(ii)**: _The values \(d_{1},\ldots,d_{n}\) are independent, up to reordering, of the choice of the symplectic matrix \(S\) used to diagonalize \(M\)._
**(iii)**: _Assume_ \(S\) _and_ \(S^{\prime}\) _are two elements of_ \(Sp(2n,\mathbb{R})\) _such that_ \(M=S^{T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}S=S^{\prime T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}S^{\prime}\)_, where_ \(D\) _is as above, then_ \(S(S^{\prime})^{-1}\in U(n)\)_._
Later in this paper, we always use the notation \(D=\operatorname{diag}(\mathbf{d})\) to denote that \(D\) is a diagonal matrix with diagonal entries given by the vector \(\mathbf{d}=(d_{1},\ldots,d_{n})^{T}\). The elements \(d_{i}\) in the above theorem are called the _symplectic eigenvalues_ of \(M\), since \(\pm id_{1},\ldots,\pm id_{n}\) are the eigenvalues of \(\mathbb{J}M\).
**Remark 2.4**.: The above theorem can be generalized to _positive-semidefinite_ real symmetric matrices. Indeed, it can first be shown that if the kernel of \(M\) is a symplectic subspace of \(\mathbb{R}^{2n}\) of dimension \(2m\), then the statement of Theorem 2.3 still holds true, with the only added feature that exactly \(m\) of the diagonal entries in \(D\) are equal to \(0\) (see [13]). More generally, without the symplecticity assumption, all that can be said is that there exists \(S\in Sp(2n,\mathbb{R})\) such that \(M=S^{T}\begin{bmatrix}D_{1}&0\\ 0&D_{2}\end{bmatrix}S\), where \(D_{1}\) and \(D_{2}\) may contain diagonal zero entries (see [14, 15]).
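In practice, the symplectic eigenvalues of Theorem 2.3 can be computed from the spectrum of \(\mathbb{J}M\) without constructing \(S\); the following is a minimal Python sketch (the helper name `symplectic_eigenvalues` is ours) that does this and checks it against a matrix built from a known Williamson normal form.

```python
import numpy as np
from scipy.linalg import expm

def canonical_J(n):
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n, n))]])

def symplectic_eigenvalues(M):
    # For symmetric positive-definite M, spec(JM) = {+/- i d_1, ..., +/- i d_n},
    # so the d_j are the positive imaginary parts of the eigenvalues of JM.
    n = M.shape[0] // 2
    lam = np.linalg.eigvals(canonical_J(n) @ M)
    d = np.sort(np.abs(lam.imag))            # each d_j appears twice
    return d[::2]                            # keep one copy of each pair

# Sanity check: M = S^T diag(D, D) S with prescribed D and symplectic S.
rng = np.random.default_rng(2)
n = 3
d_true = np.sort(rng.uniform(0.5, 3.0, size=n))
R = rng.standard_normal((2 * n, 2 * n)); R = (R + R.T) / 2
S = expm(0.1 * canonical_J(n) @ R)           # exponential of an sp(2n) element
M = S.T @ np.diag(np.concatenate([d_true, d_true])) @ S
assert np.allclose(symplectic_eigenvalues(M), d_true, atol=1e-8)
```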
## 3 Controllable and observable Hamiltonian representations
In this section, we state two representation results for linear port-Hamiltonian systems in normal form, which are the main building blocks of our learnability results. More precisely, we define two subfamilies of linear systems of the type (9), respectively called controllable and observable Hamiltonian representations, which are by construction controllable/observable (Definition 3.1). We subsequently show in Theorem 3.3 that morphisms can be established between the elements in these families and those in the category \(PH_{n}\) of normal form port-Hamiltonian systems.
As it will be spelled out later on in detail, the existence of these morphisms immediately guarantees that the complexity of the family of filters \(\mathcal{PH}_{n}\) is actually not \(\mathcal{O}(n^{2})\), as it could be guessed from (5), but \(\mathcal{O}(n)\). However, the expressive power of our proposed representations is limited for non-canonical port-Hamiltonian systems. For example, the observable representation is guaranteed
to capture all possible input-output dynamics of port-Hamiltonian systems (full expressive power), but it does not always produce port-Hamiltonian dynamics (fails to be structure-preserving). In the controllable case, structure preservation is guaranteed, but there is, in general, no full expressive power. Fortunately, for canonical port-Hamiltonian systems, all the morphisms that we shall introduce become isomorphisms, meaning that they are both structure-preserving and have full expressive power. Roughly speaking, the more canonical a port-Hamiltonian system is, the better the corresponding representations behave in terms of structure-preserving properties and expressive power.
The representations introduced below can be seen as a reparametrization of the elements \((Q,B)\in PH_{n}\) in terms of a diagonal matrix \(D=\operatorname{diag}(\mathbf{d})\in\mathbb{M}_{n}\), \(\mathbf{d}\in\mathbb{R}^{n}\), and a vector \(\mathbf{v}\in\mathbb{R}^{2n}\), where \(D\) is obtained from Williamson's Theorem 2.3 as \(Q=S^{T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}S\) and \(\mathbf{v}=S^{-1}B\). This makes it obvious that the learning problem for port-Hamiltonian systems has parameter complexity of at most \(\mathcal{O}(n)\) even if the Hamiltonian matrix has complexity \(\mathcal{O}(n^{2})\).
We emphasize that even in the canonical situation, the availability of the controllable/observable representations does not yet provide a well-specified learning problem for this category since the invariance of these systems under system automorphisms implies the existence of symmetries (or degeneracies) in the parametrizations, which will be the focus of the next section.
The proofs of all our results are provided in the appendices.
**Definition 3.1**.: _Given \(\mathbf{d}=(d_{1},\ldots,d_{n})^{T}\in\mathbb{R}^{n}\), with \(d_{i}>0\), and \(\mathbf{v}\in\mathbb{R}^{2n}\), we say that a \(2n\)-dimensional linear state space system is a controllable Hamiltonian (respectively, observable Hamiltonian) representation if it takes the form_
\[\begin{cases}\dot{\mathbf{s}}=g_{1}^{\text{ctr}}(\mathbf{d})\cdot\mathbf{s}+(0,0,\cdots,0,1)^{T}\cdot u,\\ y=g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v})\cdot\mathbf{s},\end{cases}\quad\left(\text{resp., }\begin{cases}\dot{\mathbf{s}}=g_{1}^{obs}(\mathbf{d})\cdot\mathbf{s}+g_{2}^{obs}(\mathbf{d},\mathbf{v})\cdot u,\\ y=(0,0,\cdots,0,1)\cdot\mathbf{s},\end{cases}\right) \tag{11}\]
_where \(g_{1}^{\text{ctr}}(\mathbf{d})\in\mathbb{M}_{2n}\) and \(g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v})\in\mathbb{M}_{1,2n}\) (respectively, \(g_{1}^{obs}(\mathbf{d})\in\mathbb{M}_{2n}\) and \(g_{2}^{obs}(\mathbf{d},\mathbf{v})\in\mathbb{R}^{2n}\)) are constructed as follows:_
**(i)**: _Given_ \(\mathbf{d}\in\mathbb{R}^{n}\)_, let_ \(\{a_{0},a_{1},\ldots,a_{2n-1}\}\) _be the real coefficients that make_ \(\lambda^{2n}+\sum_{i=0}^{2n-1}a_{i}\cdot\lambda^{i}=(\lambda^{2}+d_{1}^{2})( \lambda^{2}+d_{2}^{2})\ldots(\lambda^{2}+d_{n}^{2})\) _an equality between the two polynomials in_ \(\lambda\)_. Let_ \(a_{2n}=1\) _by convention. Note that the entries_ \(a_{i}\) _with an odd index_ \(i\) _are zero. Define:_
\[g_{1}^{\text{ctr}}(\mathbf{d}):=\begin{bmatrix}0&1&0&\ldots&0\\ 0&0&1&\ldots&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\ldots&1\\ -a_{0}&-a_{1}&-a_{2}&\ldots&-a_{2n-1}\end{bmatrix}_{2n\times 2n}\,,\]
_(respectively,_ \(g_{1}^{obs}(\mathbf{d})=g_{1}^{\text{ctr}}(\mathbf{d})^{\top}\)_)._
**(ii)**: _Given_ \(\mathbf{d}\) _and_ \(\mathbf{v}\)_, then_
\[g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v}):=\begin{bmatrix}0&c_{2n-1}&0&c_{2n -3}&\ldots&0&c_{1}\end{bmatrix},\quad\text{(resp., }g_{2}^{obs}(\mathbf{d},\mathbf{v})=g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v})^{ \top}\,)\]
_where_
\[c_{2k+1}=\mathbf{v}^{T}\begin{bmatrix}F_{k}&0\\ 0&F_{k}\end{bmatrix}\mathbf{v},\]
_for_ \(k=0,\ldots,n-1\)_, and_
\[F_{k}=\operatorname{diag}(f_{1},f_{2},\ldots,f_{n}),\quad\text{with }f_{l}=d_{l}\cdot\sum_{\begin{subarray}{c}1\leq j_{1}<\cdots<j_{k}\leq n\\ j_{1},\ldots,j_{k}\neq l\end{subarray}}\left(d_{j_{1}}d_{j_{2}}\cdots d_{j_{k}}\right)^{2},\quad l=1,\ldots,n.\]
_We denote by \(CH_{n}\) (respectively, \(OH_{n}\)) the set of all systems of the form (11), and we call them controllable Hamiltonian (respectively, observable Hamiltonian) representations. The symbol \(\mathcal{CH}_{n}\) (respectively, \(\mathcal{OH}_{n}\)) denotes the set of input/output systems induced by the state space systems in \(CH_{n}\) (respectively, \(OH_{n}\)). We emphasize that the elements of both \(CH_{n}\) and \(OH_{n}\) can be parameterized with the set_
\[\Theta_{CH_{n}}=\Theta_{OH_{n}}:=\left\{(\mathbf{d},\mathbf{v})|d_{i}>0, \mathbf{v}\in\mathbb{R}^{2n}\right\}.\]
_Sometimes later on in the paper we shall write \(a_{i}(\mathbf{d})\) and \(c_{j}(\mathbf{d},\mathbf{v})\) to indicate that \(a_{i}\) and \(c_{j}\) are functions of \(\mathbf{d}\) and \(\mathbf{v}\)._
**Remark 3.2**.: Observe that the controllable and the observable Hamiltonian representations of port-Hamiltonian systems are closely related to each other. The controllable Hamiltonian matrix \(g_{1}^{\text{qtr}}\) is the transpose of the observable Hamiltonian matrix \(g_{1}^{\text{qbs}}\). Moreover, as can be directly observed from the construction, the input and readout matrices of the two representations, that is, \(g_{2}^{\text{qtr}}\) and \(g_{2}^{\text{obs}}\), are transpose of each other.
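To illustrate Definition 3.1, the following is a minimal Python sketch (the helper names `g1_ctr` and `g2_ctr` are ours) that assembles the companion matrix \(g_{1}^{\text{ctr}}(\mathbf{d})\) from the coefficients of \(\prod_{i=1}^{n}(\lambda^{2}+d_{i}^{2})\) and the readout row \(g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v})\) from the coefficients \(c_{2k+1}=\mathbf{v}^{T}\begin{bmatrix}F_{k}&0\\ 0&F_{k}\end{bmatrix}\mathbf{v}\); by Remark 3.2, the observable representation is obtained by transposing the outputs.

```python
import numpy as np
from itertools import combinations

def g1_ctr(d):
    # Companion matrix of prod_i (lambda^2 + d_i^2); the odd-degree
    # coefficients vanish, so the last row has zeros in the odd slots.
    n = len(d)
    poly = np.array([1.0])
    for di in d:
        poly = np.convolve(poly, [1.0, 0.0, di ** 2])  # times (lambda^2 + d_i^2)
    a = poly[::-1]                     # a[i] = coefficient of lambda^i, a[2n] = 1
    A = np.zeros((2 * n, 2 * n))
    A[:-1, 1:] = np.eye(2 * n - 1)     # ones on the superdiagonal
    A[-1, :] = -a[:-1]                 # last row: -a_0, ..., -a_{2n-1}
    return A

def g2_ctr(d, v):
    # Readout row [0, c_{2n-1}, 0, c_{2n-3}, ..., 0, c_1].
    d, v = np.asarray(d, float), np.asarray(v, float)
    n = len(d)
    row = np.zeros(2 * n)
    for k in range(n):
        # f_l = d_l * sum over size-k index sets avoiding l of (d_{j1}...d_{jk})^2
        f = np.array([d[l] * sum(np.prod(d[list(js)] ** 2)
                                 for js in combinations(
                                     [j for j in range(n) if j != l], k))
                      for l in range(n)])
        Fk = np.diag(np.concatenate([f, f]))
        row[2 * n - 1 - 2 * k] = v @ Fk @ v   # c_{2k+1}
    return row

# Example: n = 2, so g1_ctr(d) is 4 x 4 and g2_ctr(d, v) has length 4.
d, v = np.array([1.0, 2.0]), np.ones(4)
A, c_row = g1_ctr(d), g2_ctr(d, v)
```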
Consider now the maps \(\theta_{CH_{n}}:\Theta_{CH_{n}}\to CH_{n}\) and \(\theta_{OH_{n}}:\Theta_{OH_{n}}\to OH_{n}\) that associate to each parameter value the corresponding state-space system. Note that the elements in \(CH_{n}\) (respectively, in \(OH_{n}\)) of the form (11) are in canonical controllable (respectively, observable) form in the sense of [10], and they are hence controllable (respectively, observable). Our main result below establishes a relationship between port-Hamiltonian systems and controllable (respectively, observable) Hamiltonian representations as defined above, which will be used later on for considerations on structure preservation and expressiveness in the modeling of \(PH_{n}\).
**Theorem 3.3**.:
**(i)**: _There exists, for each_ \(S\in Sp(2n,\mathbb{R})\)_, a map_
\[\varphi_{S}: CH_{n} \longrightarrow PH_{n}\] \[\theta_{CH_{n}}(\mathbf{d},\mathbf{v}) \longmapsto \theta_{PH_{n}}\left(S^{T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}S,S^{-1}\mathbf{v}\right),\]
_with_ \(D=\mathrm{diag}(\mathbf{d})\)_, such that the controllable Hamiltonian system_ \(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\in CH_{n}\) _and the port-Hamiltonian image_ \(\varphi_{S}\left(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\right)\in PH_{n}\) _are linked by a linear system morphism_ \(f_{S}^{(\mathbf{d},\mathbf{v})}:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}\)_._
**(ii)**: _Given a port-Hamiltonian system_ \(\theta_{PH_{n}}(Q,B)\in PH_{n}\)_, there exists an explicit linear system morphism_ \(f^{(Q,B)}:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}\) _between the state space of_ \(\theta_{PH_{n}}(Q,B)\in PH_{n}\) _and that of an observable Hamiltonian system_ \(\theta_{OH_{n}}(\mathbf{d},\mathbf{v})\in OH_{n}\)_, where_ \((\mathbf{d},\mathbf{v})\in\Theta_{OH_{n}}\) _is determined by the Williamson's normal form decomposition of_ \(Q\) _determined by_ \(S\in Sp(2n,\mathbb{R})\)_, that is,_ \(Q=S^{T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}S\)_,_ \(D=\mathrm{diag}(\mathbf{d})\) _and_ \(\mathbf{v}=S\cdot B\)_._
**Remark 3.4**.: We emphasize that given \((Q,B)\in\Theta_{PH_{n}}\), the pair \((\mathbf{d},\mathbf{v})\in\Theta_{CH_{n}}=\Theta_{OH_{n}}\) is not uniquely determined by Williamson's decomposition. This can be seen from Theorem 2.3 because the element \(S\in Sp(2n,\mathbb{R})\) in its statement is not unique; the entries \(d_{i}\) of \(\mathbf{d}\) are independent of the choice of \(S\) up to their ordering, but \(\mathbf{v}=SB\) does depend on \(S\).
**Remark 3.5** (**Controllability, observability, and invertibility)**.:
**(i)**: In the proof of the theorem above (available in the Appendix), we define the linear system morphism \(f_{S}^{(\mathbf{d},\mathbf{v})}:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2n}\) as \(\mathbf{z}=f_{S}^{(\mathbf{d},\mathbf{v})}(\mathbf{s}):=L\mathbf{s}\) and an explicit construction of the matrix \(L\) is provided. It turns out that the matrix \(L\) is invertible if and only if the image port-Hamiltonian system (5) is controllable, or equivalently, observable. Indeed, using the same notation as in the proof of Theorem 3.3, we have
\[L=S^{-1}\left[L_{1}\mathbf{v}\quad L_{2}\mathbf{v}\quad\cdots\quad L_{2n} \mathbf{v}\right]=\left[S^{-1}L_{1}\mathbf{v}\quad S^{-1}L_{2}\mathbf{v}\quad \cdots\quad S^{-1}L_{2n}\mathbf{v}\right],\]
where \[S^{-1}L_{2n-k}\mathbf{v} =S^{-1}\left[\left(\mathbb{J}_{n}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}\right)^{k}+a_{2n-1}\cdot\left(\mathbb{J}_{n}\begin{bmatrix}D&0 \\ 0&D\end{bmatrix}\right)^{k-1}+\cdots+a_{2n-k}\cdot\mathbb{I}_{2n}\right]\cdot \mathbf{v}\] \[=S^{-1}\left((\mathbb{J}_{n}S^{-T}QS^{-1})^{k}+a_{2n-1}\cdot( \mathbb{J}_{n}S^{-T}QS^{-1})^{k-1}+\cdots+a_{2n-k}\cdot\mathbb{I}_{2n}\right)\cdot SB\] \[=S^{-1}\left((S\mathbb{J}_{n}QS^{-1})^{k}+a_{2n-1}\cdot(S\mathbb{ J}_{n}QS^{-1})^{k-1}+\cdots+a_{2n-k}\cdot\mathbb{I}_{2n}\right)\cdot SB\] \[=\left((\mathbb{J}_{n}Q)^{k}+a_{2n-1}\cdot(\mathbb{J}_{n}Q)^{k-1 }+\cdots+a_{2n-k}\cdot\mathbb{I}_{2n}\right)\cdot B.\] Therefore, \(L\) can be transformed by elementary column operations into the controllability matrix of (5) and hence \(L\) being invertible, i.e. the two systems being isomorphic, is equivalent to the controllability matrix of (5) having full rank (regardless of the choice of \(S\in Sp(2n,\mathbb{R})\)), which is again equivalent to (5) being canonical. Additionally, the condition for \(f_{S}^{(\mathbf{d},\mathbf{v})}\) to be invertible can also be formulated in terms of \(D\) and \(\mathbf{v}\) directly, which we will discuss in Subsection 4.4.
**(ii)**: Systems in \(CH_{n}\) are by construction in controllable canonical form, and are therefore always controllable. If the image system (5) by \(\varphi_{S}\) that we want to learn is controllable (or equivalently, observable), then by the previous point \(L\) is necessarily an invertible matrix which means that (11) and (5) are isomorphic systems by construction. As a consequence, (11) is not only controllable but also observable.
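As an illustration of the rank criterion in point **(i)**, one can test numerically whether a given pair \((Q,B)\) yields a controllable (equivalently, canonical) system by building the controllability matrix column by column. A minimal sketch, assuming the convention \(\mathbb{J}_{n}=\begin{bmatrix}0&\mathbb{I}_{n}\\ -\mathbb{I}_{n}&0\end{bmatrix}\) (the function names are ours):

```python
import numpy as np

def symplectic_J(n):
    # canonical symplectic matrix, assuming the convention J_n = [[0, I_n], [-I_n, 0]]
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n, n))]])

def is_canonical_system(Q, B, tol=None):
    """Rank test of the controllability matrix [B | JQ B | ... | (JQ)^{2n-1} B]."""
    two_n = Q.shape[0]
    JQ = symplectic_J(two_n // 2) @ Q
    col = np.asarray(B, dtype=float).reshape(-1)
    cols = []
    for _ in range(two_n):
        cols.append(col)
        col = JQ @ col
    K = np.column_stack(cols)
    return np.linalg.matrix_rank(K, tol=tol) == two_n
```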
**Remark 3.6** (**Application to structure-preserving system learning)**.:
As a corollary of the previous result, we can use controllable Hamiltonian representations to learn port-Hamiltonian systems in an efficient and structure-preserving fashion. Indeed, given a realization of a port-Hamiltonian system, a system of the type \(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\in CH_{n}\) can be estimated using an appropriate loss (see Section 7). A representation of this type is more advantageous than the original port-Hamiltonian one for two reasons:
**(i)**: The _model complexity_ of the controllable Hamiltonian representation is only of order \(\mathcal{O}(n)\), as opposed to \(\mathcal{O}(n^{2})\) for the original port-Hamiltonian one.
**(ii)**: This learning scheme is automatically _structure-preserving_. Indeed, once a system \(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\in CH_{n}\) has been estimated for a given realization, we have shown that there exists a family of linear morphisms, each of which is between the state space of \(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\in CH_{n}\) and some \(\theta_{PH_{n}}(Q,B)\in PH_{n}\), such that any solution of (11) is automatically a solution of some system in \(PH_{n}\). Hence, even in the presence of estimation errors for \((\mathbf{d},\mathbf{v})\in\Theta_{CH_{n}}\), the solutions of \(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\) still correspond to a port-Hamiltonian system and hence this structure is _preserved_ by the learning scheme.
**Remark 3.7** (**System learning and expressive power)**.:
Expressive power is an important property of any machine learning paradigm. As a continuation of the previous remarks, we emphasize that there is an important relation between the controllability of a system in \(PH_{n}\) and the expressive power of the corresponding representation in \(CH_{n}\). Indeed, if (5) is controllable, by point (ii) in Remark 3.5, the corresponding preimage system \(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\in CH_{n}\) can capture all possible solutions of (5), which amounts to the learning scheme based on \(\Theta_{CH_{n}}\) having full expressive power. To see this, let \(\mathbf{z}_{0}\) be an initial state of the controllable system \(\theta_{PH_{n}}(Q,B)\in PH_{n}\) in (5). Since in that case we can find an invertible system isomorphism \(f_{S}^{(\mathbf{d},\mathbf{v})}\) that links it to some \(\theta_{CH_{n}}(\mathbf{d},\mathbf{v})\in CH_{n}\), there exists some corresponding initial state \(\mathbf{s}_{0}=\left(f_{S}^{(\mathbf{d},\mathbf{v})}\right)^{-1}(\mathbf{z}_{0})\). Then, by Theorem 3.3 and the uniqueness of the solutions of ODEs, the solution of (11) with initial state \(\mathbf{s}_{0}\) is a representation of the solution of (5) with initial state \(\mathbf{z}_{0}\). However, if (5) fails to be controllable (i.e. \(f_{S}^{(\mathbf{d},\mathbf{v})}\) not invertible), then such an initial condition \(\mathbf{s}_{0}\) may not exist. As a rule of
thumb, the more controllable a system of the type (5) is, the higher the rank of \(f_{S}^{(\mathbf{d},\mathbf{v})}\) is, and then the more expressive the corresponding controllable Hamiltonian representations are.
**Remark 3.8** (**Expressive power and structure-preservation of the observable Hamiltonian representation)**.: We emphasize that systems in \(OH_{n}\) always have _full expressive power_, guaranteed by the system morphism in Theorem 3.3. This implies that any input-output dynamics generated by the original port-Hamiltonian system will be captured by any of the observable Hamiltonian representations in the statement. However, unlike in the controllable case, the system morphism is between \(\theta_{PH_{n}}(Q,B)\in PH_{n}\) and \(\theta_{OH_{n}}(\mathbf{d},\mathbf{v})\in\Theta_{OH_{n}}\). Therefore, unless \((Q,B)\) is canonical, in which case the morphism becomes an isomorphism, we _cannot, in general, assert the structure-preserving property of this representation._
**Remark 3.9** (**Positive semi-definite Hamiltonians)**.: The above results can be easily generalized to positive semi-definite (PSD) Hamiltonians with the aid of the generalized Williamson's theorem in the references [12, 13, 14] that we briefly discussed in Section 2.5. In general, the number of unknown parameters in the vector \(\mathbf{d}\) is doubled (because of the matrices \(D_{1}\) and \(D_{2}\) that appear in this case), and their relation with the coefficients \(\{a_{0},a_{1},\ldots,a_{2n-1}\}\) has to be modified accordingly, that is, \(\lambda^{2n}+\sum_{i=0}^{2n-1}a_{i}\cdot\lambda^{i}=(\lambda^{2}+d_{1}d_{n+1} )(\lambda^{2}+d_{2}d_{n+2})\ldots(\lambda^{2}+d_{n}d_{2n})\), where some of the \(d_{i}\)'s could be \(0\). The expression for \(g_{1}^{\text{ctr}}(\mathbf{d})\) remains the same, whereas the expression of \(\begin{bmatrix}F_{k}&0\\ 0&F_{k}\end{bmatrix}\) in \(g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v})\) becomes \(\begin{bmatrix}F_{k,0}&0\\ 0&F_{k,1}\end{bmatrix}\), where
\[F_{k,p}=\begin{bmatrix}f_{1,p}&&&&\\ &f_{2,p}&&\text{\Large$\bigcirc$}\\ &\ddots&&\\ &\text{\Large$\bigcirc$}&&f_{n-1,p}\\ &&&f_{n,p}\end{bmatrix}\]
and \(f_{l,p}=d_{np+l}\cdot\sum_{\begin{subarray}{c}j_{1},\ldots,j_{k}\neq l\\ 1\leq j_{1}<\cdots<j_{k}\leq n\end{subarray}}d_{j_{1}}d_{j_{2}}\cdots d_{j_{k}}\,d_{n+j_{1}}d_{n+j_{2}}\cdots d_{n+j_{k}}\) for \(p=0,1\). In this paper, we mainly deal with positive definite \(Q\), since the degeneracy of a merely positive semi-definite \(Q\) destroys the symmetries studied later on in Section 4.
**Remark 3.10** (**Symmetries of the Hamiltonian representations)**.: The parameterizations of the systems in \(CH_{n}\) and \(OH_{n}\) exhibit obvious symmetries. For example, the functions \(g_{1}^{\text{ctr}}(\mathbf{d})\) and \(g_{1}^{\text{obs}}(\mathbf{d})\) are invariant under the permutation of the diagonal entries \(d_{i}\). Moreover, \(g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v})\) (similarly for \(g_{2}^{\text{obs}}(\mathbf{d},\mathbf{v})\)) contains entries \(c_{2k+1}\) of the form \(\mathbf{v}^{T}\begin{bmatrix}F_{k}&0\\ 0&F_{k}\end{bmatrix}\mathbf{v}=\sum_{i=1}^{n}F_{k}^{(i)}\left(v_{i}^{2}+v_{n+i}^{2}\right)\), where \(F_{k}^{(i)}\) denotes the \(i\)-th diagonal entry of \(F_{k}\); in particular, \(c_{2k+1}\) is invariant under the rotation of the planes spanned by the \(i\)-th and \((n+i)\)-th entries of \(\mathbf{v}\). These observations will be central in the next section, in which we shall show that these and other symmetries of the representations in \(CH_{n}\) or \(OH_{n}\) are closely related to the system automorphism group of the space \(PH_{n}\).
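The rotation invariance described above is easy to confirm numerically with the sketch given after Definition 3.1: rotating the plane spanned by the \(i\)-th and \((n+i)\)-th entries of \(\mathbf{v}\) leaves both \(g_{1}^{\text{ctr}}\) and \(g_{2}^{\text{ctr}}\) unchanged.

```python
import numpy as np
# reusing controllable_hamiltonian from the sketch after Definition 3.1

rng = np.random.default_rng(0)
n = 3
d = rng.uniform(0.5, 2.0, size=n)
v = rng.normal(size=2 * n)

# rotate the plane spanned by the i-th and (n+i)-th entries of v
i, theta = 1, 0.7
v_rot = v.copy()
v_rot[i]     = np.cos(theta) * v[i] - np.sin(theta) * v[n + i]
v_rot[n + i] = np.sin(theta) * v[i] + np.cos(theta) * v[n + i]

g1, g2 = controllable_hamiltonian(d, v)
g1r, g2r = controllable_hamiltonian(d, v_rot)
# c_{2k+1} depends only on the squared radii v_i^2 + v_{n+i}^2, hence is unchanged
assert np.allclose(g1, g1r) and np.allclose(g2, g2r)
```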
## 4 Unique identification of linear port-Hamiltonian systems
In this section, we study the unique identifiability of input-output dynamics of linear port-Hamiltonian systems in normal form. Such a characterization is obviously needed to solve the model estimation problem. The rationale is that, in applications, we only have access to input/output data, and different state space systems in \(PH_{n}\) can induce the same filter that produces that data. This
fact has important implications when it comes to the learning of port-Hamiltonian systems out of finite-sample realizations of a given data-generating process \((Q,B)\in PH_{n}\), because such degeneracy makes the exact recovery of \((Q,B)\in PH_{n}\) impossible in that context, no matter how good the properties of the algorithm used for that task are or how much data we have at our disposal. This observation indicates that it is not the space \(PH_{n}\) that we should look at for unique identification but rather the quotient space associated to \(PH_{n}\) with respect to a certain equivalence relation \(\sim_{filter}\) that uniquely identifies port-Hamiltonian filters, that is, \(\mathcal{PH}_{n}\cong PH_{n}/\sim_{filter}\). However, as we shall see later on in Subsection 4.1, the presence of non-canonical systems in \(PH_{n}\) makes it in general difficult to directly characterize the quotient space \(PH_{n}/\sim_{filter}\).
As we pointed out after Definition 2.1, all the system-isomorphic state-space systems yield the same filter, while a filter can be realized by state-space systems that are not system isomorphic. This means that the equivalence relation of system automorphism \(\sim_{sys}\) is strictly stronger than \(\sim_{filter}.\) Motivated by this fact, we study in Subsection 4.1 how \(\sim_{filter}\) and \(\sim_{sys}\) are related in terms of controllable Hamiltonian representations (which by Theorem 3.3 automatically induce filters in \(\mathcal{PH}_{n}\)), and in Subsection 4.2, we lower our expectations and characterize \(PH_{n}/\sim_{sys}\) as an approximation to \(PH_{n}/\sim_{filter}\). The term _approximation_ in this sentence is justified because \(\sim_{filter}\) and \(\sim_{sys}\)_coincide_ when restricted to the subset of canonical port-Hamiltonian systems \(PH_{n}^{con}\), which is open and dense in \(PH_{n}\). Therefore, unique identifiability can be achieved there by studying \(\sim_{sys}\).
On the other hand, recall that in the previous section, we established a link between \(PH_{n}\) and the representation spaces \(CH_{n}\) and \(OH_{n}\) which, as we saw in Definition 3.1, are both parametrized by the set
\[\boldsymbol{\Theta}_{CH_{n}}=\boldsymbol{\Theta}_{OH_{n}}=\left\{(\mathbf{d}, \mathbf{v})\mid\mathbf{v}\in\mathbb{R}^{2n},\mathbf{d}\in\mathbb{R}^{n},d_{i}> 0,i\in\{1,\ldots,n\}\right\}. \tag{12}\]
Now, it is natural to ask what equivalence relation corresponds to \(\sim_{sys}\) on the parameter space \(\boldsymbol{\Theta}_{CH_{n}}\), and whether it is possible to explicitly characterize the quotient space \(PH_{n}/\sim_{sys}\) on \(\boldsymbol{\Theta}_{CH_{n}}\) in a certain sense. All these questions are addressed step-by-step in the following subsections.
In Subsection 4.1, we provide sufficient and necessary conditions for two controllable Hamiltonian representations being \(\sim_{filter}\)-equivalent and \(\sim_{sys}\)-equivalent, respectively. In Subsection 4.2, we define an equivalence relation \(\sim_{\star}\) on \(\boldsymbol{\Theta}_{CH_{n}}\) and we show that \(PH_{n}/\sim_{sys}\cong\boldsymbol{\Theta}_{CH_{n}}/\sim_{\star}\) (see Theorem 4.9). In Subsection 4.3, we characterize the equivalence classes \(PH_{n}/\sim_{sys}\) and \(\boldsymbol{\Theta}_{CH_{n}}/\sim_{\star}\) as _Lie groupoid_ orbit spaces.
In Subsection 4.4, we restrict our identification analysis to _canonical port-Hamiltonian systems_ \(PH_{n}^{can}\). We first show that the parameter subset \(\boldsymbol{\Theta}_{CH_{n}}^{can}\subset\boldsymbol{\Theta}_{CH_{n}}\) that corresponds to \(PH_{n}^{can}\) is open and dense in \(\boldsymbol{\Theta}_{CH_{n}}\) as it is determined by certain generic non-resonance and nondegeneracy conditions. If we define on \(\boldsymbol{\Theta}_{CH_{n}}\) the equivalence relation \(\sim_{sys}\) of system automorphisms of the corresponding controllable/observable Hamiltonian representations (see Definition 4.4), then it can be proved that, restricted to the canonical subset \(\boldsymbol{\Theta}_{CH_{n}}^{can}\), the equivalence relation \(\sim_{\star}\) coincides with \(\sim_{sys}\), and hence
\[\mathcal{PH}_{n}^{can}\cong PH_{n}^{can}/\sim_{sys}\cong\boldsymbol{\Theta}_{ CH_{n}}^{can}/\sim_{\star}\cong\boldsymbol{\Theta}_{CH_{n}}^{can}/\sim_{sys},\]
where \(\mathcal{PH}_{n}^{can}\) is the space of filters induced by systems in \(PH_{n}^{can}\).
In Subsection 4.5, we prove that the fact that we restricted the above equivalence relations to canonical subsets allows us to characterize the corresponding quotients as orbit spaces with respect to a _group_ (as opposed to groupoids in the general unrestricted case) action, where the group is given by a semi-direct product \(S_{n}\rtimes_{\phi}\mathbb{T}^{n}\) that will be specified in detail later on. Finally, in Subsection 4.6, we show that the orbit space \(\boldsymbol{\Theta}_{CH_{n}}^{can}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\) can be explicitly identified as a smooth manifold \(\mathcal{X}_{\uparrow}^{n}\times\mathbb{R}_{+}^{n}\) and endowed with global Euclidean coordinates, and hence
\[\mathcal{PH}_{n}^{can}\cong PH_{n}^{can}/\sim_{sys}\ \cong\ \Theta_{CH_{n}}^{can}/\sim_{\star}\ \cong\ \Theta_{CH_{n}}^{can}/\sim_{sys}\ \cong\ \Theta_{CH_{n}}^{can}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\ \cong\ \mathcal{X}_{\uparrow}^{n}\times\mathbb{R}_{+}^{n}.\]
Consequently, canonical port-Hamiltonian dynamics can be identified fully and explicitly in either the controllable or the observable Hamiltonian representations (11) and learned by estimating a unique set of parameters in a smooth manifold that is obtained as a group orbit space.
### The unique identification problem for filters in \(\mathcal{PH}_{n}\)
In the context of model estimation/machine learning, we would like to characterize and identify the filters that constitute the elements in \(\mathcal{PH}_{n}\). In Section 2.1, we have seen that two systems that are system isomorphic induce the same input-output dynamics, which indicates that these isomorphisms are redundancies/symmetries in \(PH_{n}\). Our aim is to quotient out the symmetries given by system automorphisms and to investigate whether the quotient space uniquely identifies the filters in \(\mathcal{PH}_{n}\).
**Definition 4.1**.:
**(i)**: _The fact that two systems_ \(\theta_{PH_{n}}(Q_{1},B_{1})\) _and_ \(\theta_{PH_{n}}(Q_{2},B_{2})\) _in_ \(PH_{n}\) _induce the same filter defines an equivalence relation in_ \(PH_{n}\)_, which we denote by_ \((Q_{1},B_{1})\sim_{filter}(Q_{2},B_{2})\)_. Consequently, we have by definition_ \(\mathcal{PH}_{n}=PH_{n}/\sim_{filter}\)_, which we call the unique identifiability space._
**(ii)**: _We observe that_ \(\theta_{PH_{n}}(Q_{1},B_{1})\) _and_ \(\theta_{PH_{n}}(Q_{2},B_{2})\) _in_ \(PH_{n}\) _are linearly system isomorphic according to Definition_ 2.1 _if and only if there exists an invertible matrix_ \(L\) _such that_
\[\left\{\begin{aligned} L\mathbb{J}Q_{1}&= \mathbb{J}Q_{2}L\\ LB_{1}&=B_{2}\\ B_{1}^{T}Q_{1}&=B_{2}^{T}Q_{2}L.\end{aligned}\right. \tag{13}\]
_It is straightforward to check that system isomorphisms determine an equivalence relation on_ \(PH_{n}\)_. If_ \(\theta_{PH_{n}}(Q_{1},B_{1})\) _and_ \(\theta_{PH_{n}}(Q_{2},B_{2})\) _are system isomorphic, we write_ \((Q_{1},B_{1})\sim_{sys}(Q_{2},B_{2})\)_. We denote by_ \(PH_{n}/\sim_{sys}\) _the quotient space. The equivalence class in_ \(PH_{n}/\sim_{sys}\) _that contains the element_ \(\theta_{PH_{n}}(Q,B)\) _is denoted by_ \([Q,B]\in PH_{n}/\sim_{sys}\)_._
It is natural to ask about the relation between \(PH_{n}/\sim_{sys}\) and \(\mathcal{PH}_{n}\), and whether they are the same. Indeed, two distinct elements in \(PH_{n}\) that are \(\sim_{sys}\)-equivalent always induce the same filter in \(\mathcal{PH}_{n}\) (see Subsection 2.1), whereas, as we see in Example 4.2 below, a filter in \(\mathcal{PH}_{n}\) can be realized by two elements in \(PH_{n}\) that are not \(\sim_{sys}\)-equivalent, since filters identify exclusively the canonical part (that is, the minimal realization); see [10]. Said differently, by passing to the quotient space \(PH_{n}/\sim_{sys}\), we remove some of the redundancies of the set \(PH_{n}\) that yield the same input-output dynamics, but not all.
**Example 4.2**.: Consider two systems \(\theta_{PH_{n}}(Q_{1},B_{1}),\theta_{PH_{n}}(Q_{2},B_{2})\in PH_{n}\) where
\[Q_{1}=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix},Q_{2}=\begin{bmatrix}1&0&0&0\\ 0&2&0&0\\ 0&0&1&0\\ 0&0&0&3\end{bmatrix},B_{1}=B_{2}=\begin{bmatrix}1\\ 0\\ 0\\ 0\end{bmatrix}.\]
Both systems induce the same filter \(y(u)_{t}=\int_{0}^{t}e^{t-s}u(s)ds+e^{t}\left(0,0,\cdots,0,1\right)^{T}\cdot z _{0}\), where \(z_{0}\) is the initial state. However, these two systems cannot be system isomorphic, since by (13) in that case there would exist an invertible \(L\) such that \(L\mathbb{J}Q_{1}=\mathbb{J}Q_{2}L\), and hence \(\mathbb{J}Q_{1}\) would have the same set of eigenvalues as \(\mathbb{J}Q_{2}\), which is not the case.
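The spectral obstruction invoked in this example can be checked in a few lines; since similar matrices have equal spectra, the differing eigenvalue sets rule out any invertible \(L\) with \(L\mathbb{J}Q_{1}=\mathbb{J}Q_{2}L\). A short numerical sketch:

```python
import numpy as np

J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])
Q1 = np.eye(4)
Q2 = np.diag([1.0, 2.0, 1.0, 3.0])
# spectra of JQ1 and JQ2: {±i, ±i} versus {±i, ±i*sqrt(6)} — not equal,
# so no similarity transformation L can exist
print(np.round(np.linalg.eigvals(J @ Q1), 6))
print(np.round(np.linalg.eigvals(J @ Q2), 6))
```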
Despite the difficulties of uniquely identifying all filters in \(\mathcal{PH}_{n}\), we can still uniquely identify the "large" subset \(\mathcal{CH}_{n}^{0}\subset\mathcal{PH}_{n}\) which consists of filters induced by the set of controllable Hamiltonian representations initially at rest, that is, with zero initial condition. Recall that any controllable Hamiltonian representation automatically has the port-Hamiltonian structure by Theorem 3.3 and therefore \(\mathcal{CH}_{n}^{0}\) lies inside \(\mathcal{PH}_{n}\) (moreover, \(\mathcal{CH}_{n}\) contains all the filters induced by canonical systems in \(PH_{n}\)). Our result below splits into two parts, where the first one characterizes when two elements in \(CH_{n}\) are \(\sim_{sys}\)-equivalent, and the second one characterizes when they induce the same filter in \(\mathcal{CH}_{n}^{0}\). In this way, it will be clear that inducing the same filter is a strictly weaker condition than being system isomorphic. We present a lemma and a definition before we state our main result in this section.
**Lemma 4.3**.: _For \((\mathbf{d}_{1},\mathbf{v}_{1})\) and \((\mathbf{d}_{2},\mathbf{v}_{2})\in\Theta_{CH_{n}}=\Theta_{OH_{n}}\), \(\theta_{CH_{n}}(\mathbf{d}_{1},\mathbf{v}_{1})\sim_{sys}\theta_{CH_{n}}( \mathbf{d}_{2},\mathbf{v}_{2})\) if and only if \(\theta_{OH_{n}}(\mathbf{d}_{1},\mathbf{v}_{1})\sim_{sys}\theta_{OH_{n}}( \mathbf{d}_{2},\mathbf{v}_{2})\)._
Proof.: The proof is basically a restatement of the fact that \(g_{1}^{\text{ctr}}(\mathbf{d})=g_{1}^{\text{obs}}(\mathbf{d})^{T}\) and \(g_{2}^{\text{ctr}}(\mathbf{d},\mathbf{v})=g_{2}^{\text{obs}}(\mathbf{d}, \mathbf{v})^{T}\).
**Definition 4.4**.: _We denote by \(\sim_{sys}\) the equivalence relation of system isomorphism on \(CH_{n}\) and \(OH_{n}\). We shall also denote \((\mathbf{d}_{1},\mathbf{v}_{1})\sim_{sys}(\mathbf{d}_{2},\mathbf{v}_{2})\) if \(\theta_{CH_{n}}(\mathbf{d}_{1},\mathbf{v}_{1})\sim_{sys}\theta_{CH_{n}}( \mathbf{d}_{2},\mathbf{v}_{2})\) for \((\mathbf{d}_{1},\mathbf{v}_{1}),(\mathbf{d}_{2},\mathbf{v}_{2})\in\Theta_{CH_ {n}}\). The choice of the symbol \(\sim_{sys}\) is justified because system isomorphisms for controllable/observable Hamiltonian representations are indeed equivalent, as we showed in Lemma 4.3._
**Theorem 4.5**.: _Given \((\mathbf{d}_{1},\mathbf{v}_{1})\) and \((\mathbf{d}_{2},\mathbf{v}_{2})\) in \(\Theta_{CH_{n}}\), then_
**(I)**: \(\theta_{CH_{n}}(\mathbf{d}_{1},\mathbf{v}_{1})\sim_{sys}\theta_{CH_{n}}( \mathbf{d}_{2},\mathbf{v}_{2})\) _if and only if_ \(a_{i}(\mathbf{d}_{1})=a_{i}(\mathbf{d}_{2})\) _and_ \(c_{j}(\mathbf{d}_{1},\mathbf{v}_{1})=c_{j}(\mathbf{d}_{2},\mathbf{v}_{2})\) _for all_ \(i,j\)_. In other words, there exists a permutation matrix_ \(P_{\sigma}\in\mathbb{M}_{n}\) _such that, for_ \(D_{i}=\operatorname{diag}(\mathbf{d}_{i})\)_,_ \(i\in\{1,2\}\)_, and_ \(P=\begin{bmatrix}P_{\sigma}&0\\ 0&P_{\sigma}\end{bmatrix}\)_, the following conditions hold true:_
**(i)**: \(P\begin{bmatrix}D_{1}&0\\ 0&D_{1}\end{bmatrix}P^{T}=\begin{bmatrix}D_{2}&0\\ 0&D_{2}\end{bmatrix}\)__
**(ii)**: \(\mathbf{v}_{1}^{T}\begin{bmatrix}(F_{1})_{k}&0\\ 0&(F_{1})_{k}\end{bmatrix}\mathbf{v}_{1}=\mathbf{v}_{2}^{T}\begin{bmatrix}(F_ {2})_{k}&0\\ 0&(F_{2})_{k}\end{bmatrix}\mathbf{v}_{2},\;k=0,\ldots,n-1\)__
_The matrices_ \((F_{i})_{k}\) _denote the matrices_ \(F_{k}\) _of Definition_ 3.1 _built from_ \(\mathbf{d}_{i}\)_._
**(II)**: \(\theta_{CH_{n}}(\mathbf{d}_{1},\mathbf{v}_{1})\sim_{filter}\theta_{CH_{n}}( \mathbf{d}_{2},\mathbf{v}_{2})\) _under the zero initial state assumption if and only if_ \(e_{i}(\mathbf{d}_{1},\mathbf{v}_{1})=e_{i}(\mathbf{d}_{2},\mathbf{v}_{2})\) _for all_ \(i=1,\ldots,n\)_, where the scalar functions_ \(e_{i}\) _are defined recursively as_
\[\begin{split} e_{1}&=c_{1}\\ e_{2}&=c_{3}-a_{2n-2}\cdot e_{1}\\ e_{3}&=c_{5}-a_{2n-2}\cdot e_{2}-a_{2n-4}\cdot e_{1}\\ &\vdots\\ e_{n}&=c_{2n-1}-a_{2n-2}\cdot e_{n-1}-a_{2n-4}\cdot e_{n-2}-\cdots-a_{2} \cdot e_{1}\end{split} \tag{14}\]
**Remark 4.6**.: In light of the two parts of the theorem above, it is clear that two controllable Hamiltonian representations being system isomorphic is a _strictly stronger_ requirement than their inducing the same filter: if \(a_{i}(\mathbf{d}_{1})=a_{i}(\mathbf{d}_{2})\) and \(c_{j}(\mathbf{d}_{1},\mathbf{v}_{1})=c_{j}(\mathbf{d}_{2},\mathbf{v}_{2})\) for all \(i,j\), then according to (14), \(e_{i}(\mathbf{d}_{1},\mathbf{v}_{1})=e_{i}(\mathbf{d}_{2},\mathbf{v}_{2})\) trivially holds for all \(i=1,\ldots,n\).
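The recursion (14) translates directly into code; a short sketch (the function name is ours) that computes \(e_{1},\ldots,e_{n}\) from the coefficients \(a_{i}\) and the odd-indexed readout entries \(c_{2k+1}\):

```python
def e_invariants(a, c):
    """Filter invariants e_1..e_n of (14).

    a : list with a[i] = a_i for i = 0..2n (so len(a) = 2n + 1)
    c : list [c_1, c_3, ..., c_{2n-1}] of the odd-indexed coefficients
    """
    n = len(c)
    e = []
    for i in range(n):                       # compute e_{i+1}
        val = c[i]                           # leading term c_{2i+1}
        for j in range(i):                   # subtract a_{2n-2} e_i, a_{2n-4} e_{i-1}, ...
            val -= a[2 * n - 2 * (j + 1)] * e[i - 1 - j]
        e.append(val)
    return e
```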
### Equivalence classes of port-Hamiltonian systems by system isomorphisms
We have seen that \(PH_{n}/\sim_{sys}\) is not the set of port-Hamiltonian filters due to the presence of non-canonical systems. However, it is still informative to study the quotient space \(PH_{n}/\sim_{sys}\) because it removes all redundancies in the _canonical_ part, which proves to be crucial when we later restrict our attention to canonical systems in Subsections 4.5 and 4.6.
In this section, we introduce a manageable characterization of the quotient space \(PH_{n}/\sim_{sys}\) by using parameter spaces. First, motivated by Williamson's theorem, we consider the space \(\Theta_{CH_{n}}\) defined before as the set of all pairs of the form \((\mathbf{d},\mathbf{v})\), where \(\mathbf{d}=(d_{1},d_{2},\ldots,d_{n})^{T}\) with \(d_{i}>0\), and \(\mathbf{v}=(v_{1},v_{2},\ldots,v_{2n})^{T}\in\mathbb{R}^{2n}\). Inspired by the representation results, we now define an equivalence relation \(\sim_{\star}\) on \(\Theta_{CH_{n}}\) as below, whose equivalence classes are denoted by \([\mathbf{d},\mathbf{v}]_{\star}\). The importance of the next definition is that, as we shall prove in Theorem 4.9, the relation \(\sim_{\star}\) on \(\Theta_{CH_{n}}\) plays the same role as \(\sim_{sys}\) on \(PH_{n}\).
**Definition 4.7**.: _The pairs \((\mathbf{d}_{1},\mathbf{v}_{1})\) and \((\mathbf{d}_{2},\mathbf{v}_{2})\) in \(\Theta_{CH_{n}}\) are \(\sim_{\star}\)-equivalent, that is, \((\mathbf{d}_{1},\mathbf{v}_{1})\sim_{\star}(\mathbf{d}_{2},\mathbf{v}_{2})\), if there exists a permutation matrix \(P_{\sigma}\in\mathbb{M}_{n}\) and an invertible matrix \(A\) such that, for \(D_{i}=\operatorname{diag}(\mathbf{d}_{i})\), \(i\in\{1,2\}\) and \(P=\begin{bmatrix}P_{\sigma}&0\\ 0&P_{\sigma}\end{bmatrix}\), the following conditions hold true:_
**(i)**: \(P\begin{bmatrix}D_{1}&0\\ 0&D_{1}\end{bmatrix}P^{T}=\begin{bmatrix}D_{2}&0\\ 0&D_{2}\end{bmatrix}\)__
**(ii)**: \(A^{T}\begin{bmatrix}D_{1}&0\\ 0&D_{1}\end{bmatrix}A\mathbf{v}_{1}=\begin{bmatrix}D_{1}&0\\ 0&D_{1}\end{bmatrix}\mathbf{v}_{1}\)__
**(iii)**: \(A\mathbb{J}\begin{bmatrix}D_{1}&0\\ 0&D_{1}\end{bmatrix}=\mathbb{J}\begin{bmatrix}D_{1}&0\\ 0&D_{1}\end{bmatrix}A\)__
**(iv)**: \(\mathbf{v}_{2}=PA\mathbf{v}_{1}\)_._
**Proposition 4.8**.: _The relation \(\sim_{\star}\) defined in Definition 4.7 is an equivalence relation on \(\Theta_{CH_{n}}\)._
In the next subsection, we shall give meaning to \(\sim_{\star}\) in terms of groupoid orbits. Now, we aim to characterize the \(\sim_{sys}\) equivalence relation on \(PH_{n}\) as the \(\sim_{\star}\) equivalence relation on the space \(\Theta_{CH_{n}}\) of \((\mathbf{d},\mathbf{v})\)-pairs, that is, we shall prove that \(\Theta_{CH_{n}}/\sim_{\star}\cong PH_{n}/\sim_{sys}\). This will be proved in three steps. First, we show that for an arbitrary \(S\in Sp(2n,\mathbb{R})\), the map \(\varphi_{S}\) defined in Theorem 3.3 composed with \(\theta_{CH_{n}}\) is compatible with the equivalence relations \(\sim_{\star}\) and \(\sim_{sys}\), that is, \((\mathbf{d}_{1},\mathbf{v}_{1})\sim_{\star}(\mathbf{d}_{2},\mathbf{v}_{2})\) if and only if \(\varphi_{S}(\theta_{CH_{n}}(\mathbf{d}_{1},\mathbf{v}_{1}))\sim_{sys}\varphi_ {S}(\theta_{CH_{n}}(\mathbf{d}_{2},\mathbf{v}_{2}))\). Then, we show that the unique map \(\psi_{S}\) induced by \(\varphi_{S}\circ\theta_{CH_{n}}\) on the quotient spaces does not depend on the choice of \(S\) and hence the family of maps \(\psi_{S}\) parameterized by \(S\in Sp(2n,\mathbb{R})\) induces a unique map \(\Phi:\Theta_{CH_{n}}/\sim_{\star}\to PH_{n}/\sim_{sys}\) which is a homeomorphism.
**Theorem 4.9** (**Characterization of \(PH_{n}/\sim_{sys}\) as \(\Theta_{CH_{n}}/\sim_{\star}\))**.: _Given any arbitrary \(S\in Sp(2n,\mathbb{R})\), the map \(\varphi_{S}\circ\theta_{CH_{n}}\) induces on the quotient spaces a map \(\Phi:\Theta_{CH_{n}}/\sim_{\star}\to PH_{n}/\sim_{sys}\) which does not depend on \(S\in Sp(2n,\mathbb{R})\) and is given by \(\Phi([\mathbf{d},\mathbf{v}]_{\star})=\left[\begin{bmatrix}D&0\\ 0&D\end{bmatrix},\mathbf{v}\right]_{sys}\), where \(D=\operatorname{diag}(\mathbf{d})\). Moreover, \(\Phi\) is a homeomorphism with respect to the quotient topologies._
### The quotient spaces as groupoid orbit spaces
Recall that, from a category theory point of view, a group can be seen as a category with a single object where all morphisms are invertible. Groupoids are a natural generalization of this notion and refer to categories with possibly more than one object, where again all morphisms are invertible (see [10] for a comprehensive introduction). As is customary, groupoids will be denoted with the symbol \(s,t:\mathcal{G}\rightrightarrows M\) (or simply \(\mathcal{G}\rightrightarrows M\)), where \(s\) and \(t\) are the _source_ and the _target_ maps, respectively. Given \(m\in M\), the _groupoid orbit_ that contains this point is given by \(\mathcal{O}_{m}=t\left(s^{-1}(m)\right)\subset M\). The _orbit space_ associated to \(\mathcal{G}\rightrightarrows M\) is denoted by \(M/\mathcal{G}\).
In this section, we provide an alternative point of view for Theorem 4.9 in terms of groupoid orbits. More precisely, we show first that the set of equivalence classes \(PH_{n}/\sim_{sys}\) (resp. \(\Theta_{CH_{n}}/\sim_{\star}\)) is the orbit space \(PH_{n}/\mathcal{G}_{n}\) (resp. \(\Theta_{CH_{n}}/\mathcal{H}_{n}\)) of a groupoid \(\mathcal{G}_{n}\rightrightarrows PH_{n}\) (resp. \(\mathcal{H}_{n}\rightrightarrows\Theta_{CH_{n}}\)) which we construct in the following paragraphs. In a second step we show that the statement in Theorem 4.9 is equivalent to saying that the orbit spaces \(PH_{n}/\mathcal{G}_{n}\) and \(\Theta_{CH_{n}}/\mathcal{H}_{n}\) of the two groupoids are isomorphic.
**Definition 4.10**.:
1. _Let_ \(\mathcal{G}_{n}:=\{(L,(Q,B))\,|L\in GL(2n,\mathbb{R}),(Q,B)\in PH_{n}\) _such that (i)_ \(\mathbb{J}^{T}L\mathbb{J}QL^{-1}\) _is symmetric positive-definite (ii)_ \(B=\mathbb{J}^{T}L^{T}\mathbb{J}LB\}\)_._
2. _Let the target and source maps_ \(\alpha,\beta:\mathcal{G}_{n}\to PH_{n}\) _be defined as_ \(\alpha(L,(Q,B)):=(\mathbb{J}^{T}L\mathbb{J}QL^{-1},LB)\) _and_ \(\beta(L,(Q,B)):=(Q,B)\)
3. _Define the set of composable pairs as_ \(\mathcal{G}_{n}^{(2)}:=\{((L_{1},(Q_{1},B_{1})),(L_{2},(Q_{2},B_{2})))\mid\beta((L_ {1},(Q_{1},B_{1})))\\ =\alpha((L_{2},(Q_{2},B_{2})))\}\)_._
4. _Let the multiplication map_ \(m:\mathcal{G}_{n}^{(2)}\rightarrow\mathcal{G}_{n}\) _be defined as_ \(m((L_{1},(Q_{1},B_{1})),(L_{2},(Q_{2},B_{2})))=(L_{1}L_{2},(Q_{2},B_{2}))\)_._
5. _Let the identity section_ \(\epsilon:PH_{n}\rightarrow\mathcal{G}_{n}\) _be defined as_ \(\epsilon(Q,B):=(\mathbb{I}_{2n},(Q,B))\)_._
6. _Let the inversion map_ \(i:\mathcal{G}_{n}\rightarrow\mathcal{G}_{n}\) _be defined as_ \(i(L,(Q,B)):=(L^{-1},(\mathbb{J}^{T}L\mathbb{J}QL^{-1},LB))\)_._
**Proposition 4.11**.: _The definition above determines a Lie groupoid \(\mathcal{G}_{n}\rightrightarrows PH_{n}\) with \(\mathcal{G}_{n}\) the total space, \(PH_{n}\) the base space, and structure maps \(\alpha,\beta,m,\epsilon,i\). We refer to \(\mathcal{G}_{n}\rightrightarrows PH_{n}\) as the port-Hamiltonian groupoid. The orbit space of this groupoid \(PH_{n}/\mathcal{G}_{n}\) coincides with \(PH_{n}/\sim_{sys}\)._
**Definition 4.12**.:
1. _Let_ \(\mathcal{H}_{n}:=\big{\{}\left((P_{\sigma},A),(\mathbf{d},\mathbf{v})\right)|P _{\sigma}\;\in\;\mathbb{M}_{n}\) _is a permutation matrix,_ \(A\;\in\;GL(2n,\mathbb{R}),(\mathbf{d},\mathbf{v})\;\in\;\Theta_{CH_{n}}\)_, such that (i)_ \(A^{T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}A\mathbf{v}=\begin{bmatrix}D&0\\ 0&D\end{bmatrix}\mathbf{v}\) _and (ii)_ \(A\mathbb{J}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}=\mathbb{J}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}A,\text{ where }D=\mathrm{diag}(\mathbf{d})\big{\}}\)_._
2. _Let the target and source maps_ \(\alpha,\beta:\mathcal{H}_{n}\rightarrow\Theta_{CH_{n}}\) _be defined as_ \(\alpha((P_{\sigma},A),(\mathbf{d},\mathbf{v})):=(\mathbf{d},\mathbf{v})\) _and_ \(\beta((P_{\sigma},A),(\mathbf{d},\mathbf{v})):=(P_{\sigma}\mathbf{d},PA \mathbf{v})\)_, where_ \(P=\begin{bmatrix}P_{\sigma}&0\\ 0&P_{\sigma}\end{bmatrix}\)_._
3. _Define the set of composable pairs as_ \(\mathcal{H}_{n}^{(2)}:=\big{\{}\big{(}((P_{\sigma,1},A_{1}),(\mathbf{d}_{1}, \mathbf{v}_{1})),((P_{\sigma,2},A_{2}),(\mathbf{d}_{2},\mathbf{v}_{2}))\big{)}\mid \beta((P_{\sigma,2},A_{2}),(\mathbf{d}_{2},\mathbf{v}_{2}))=\alpha((P_{\sigma, 1},A_{1}),(\mathbf{d}_{1},\mathbf{v}_{1}))\big{\}}\)_._
4. _Let the multiplication map_ \(m:\mathcal{H}_{n}^{(2)}\rightarrow\mathcal{H}_{n}\) _be defined as_ \(m\big{(}((P_{\sigma,1},A_{1}),(\mathbf{d}_{1},\mathbf{v}_{1})),((P_{\sigma,2},A _{2}),(\mathbf{d}_{2},\mathbf{v}_{2}))\big{)}\\ =((P_{\sigma,2}P_{\sigma,1},P_{\sigma,1}^{T}A_{2}P_{\sigma,1}A_{1}),(\mathbf{ d}_{1},\mathbf{v}_{1}))\)_._
5. _Let the identity section_ \(\epsilon:\Theta_{CH_{n}}\rightarrow\mathcal{H}_{n}\) _be defined as_ \(\epsilon(\mathbf{d},\mathbf{v}):=((\mathbb{I}_{n},\mathbb{I}_{2n}),(\mathbf{d}, \mathbf{v}))\)_._
6. _Let the inversion map_ \(i:\mathcal{H}_{n}\rightarrow\mathcal{H}_{n}\) _be defined as_ \(i((P_{\sigma},A),(\mathbf{d},\mathbf{v})):=((P_{\sigma}^{T},P_{\sigma}A^{-1}P_{ \sigma}^{T}),(P_{\sigma}\mathbf{d},PA\mathbf{v}))\)_._
**Proposition 4.13**.: _The definition above determines a Lie groupoid \(\mathcal{H}_{n}\rightrightarrows\Theta_{CH_{n}}\) with \(\mathcal{H}_{n}\) the total space, \(\Theta_{CH_{n}}\) the base space, and structure maps \(\alpha,\beta,m,\epsilon,i\). We refer to \(\mathcal{H}_{n}\rightrightarrows\Theta_{CH_{n}}\) as the reduced port-Hamiltonian groupoid. The orbit space of this groupoid \(\Theta_{CH_{n}}/\mathcal{H}_{n}\) coincides with \(\Theta_{CH_{n}}/\sim_{\star}\)._
Theorem 4.9 can now be restated in terms of the elements that we just introduced.
**Theorem 4.14**.: _The orbit spaces of the Lie groupoids \(\mathcal{G}_{n}\rightrightarrows PH_{n}\) and \(\mathcal{H}_{n}\rightrightarrows\Theta_{CH_{n}}\) are isomorphic._
### Characterization of canonical port-Hamiltonian systems
In Subsections 4.2 and 4.3 we have provided a characterization of \(PH_{n}/\sim_{sys}\) in terms of \(\Theta_{CH_{n}}/\sim_{\star}\) and groupoid orbit spaces. Recall from Subsection 4.1 that the difficulty of the unique identifiability of filters in \(\mathcal{P}\mathcal{H}_{n}\) comes from the possible presence of non-canonical systems, without which the equivalence relation \(\sim_{filter}\) coincides with \(\sim_{sys}\). Hence, it is worth studying what the quotient spaces above look like when restricted to the subset that contains only canonical port-Hamiltonian systems. In this section, we take a step in that direction.
Recall that a port-Hamiltonian system in \(PH_{n}\) of the form (5) is controllable (or equivalently, observable/canonical) if and only if
\[\det\left(\left[B\ |\ \mathbb{J}QB\ |\ \ldots\ |\ (\mathbb{J}Q)^{2n-1}B \right]\right)\neq 0. \tag{15}\]
Using the Williamson decomposition of \(Q\) into \(D\) and \(S\), and \(\mathbf{v}:=S\cdot B\), this is equivalent to
\[\det\left(\left[\mathbf{v}\ \left|\ \mathbb{J}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}\mathbf{v}\ \right|\ \ldots\ \left|\ \begin{pmatrix}\mathbb{J} \begin{bmatrix}D&0\\ 0&D\end{bmatrix}\end{pmatrix}^{2n-1}\mathbf{v}\right]\right)\neq 0. \tag{16}\]
Denote by \(PH_{n}^{can}\) (respectively, \(\Theta_{CH_{n}}^{can}\)) the subset of \(PH_{n}\) (respectively, \(\Theta_{CH_{n}}\)) made of systems that satisfy (15) (respectively, (16)). As an immediate consequence, it holds true that
\[\mathcal{P}\mathcal{H}_{n}^{can}\cong PH_{n}^{can}/\sim_{\textit{filter}} \cong PH_{n}^{can}/\sim_{\textit{sys}}.\]
We now characterize the space of pairs \((\mathbf{d},\mathbf{v})\in\Theta_{CH_{n}}\) that correspond to canonical port-Hamiltonian systems in normal form. The calculation of the determinant in (16) yields \(\big{(}\prod_{i=1}^{n}d_{i}\big{)}\cdot\big{(}\prod_{1\leq j<k\leq n}(d_{j}+d_ {k})^{2}(d_{j}-d_{k})^{2}\big{)}\cdot\big{(}\prod_{l=1}^{n}(v_{l}^{2}+v_{n+l}^{ 2})\big{)}\) up to a sign. Therefore,
\[\Theta_{CH_{n}}^{can}=\big{\{}(\mathbf{d},\mathbf{v})\in\Theta_{CH_{n}}|\ \ \text{the entries of $\mathbf{d}$ are all different and $v_{l}^{2}+v_{n+l}^{2}>0$ for all $l\in\{1,\ldots,n\}$}\big{\}}\,.\]
We shall refer to the condition that the entries of \(\mathbf{d}\) are all different as the _non-resonance condition_ and to \(v_{l}^{2}+v_{n+l}^{2}>0\) for all \(l\in\{1,\ldots,n\}\) as the _nondegeneracy_ condition. One might worry that different choices of the matrix \(S\) lead to different vectors \(\mathbf{v}\), making the notion of nondegeneracy ill-defined. This is not a problem since, as we show in Remark 4.15 below, once the non-resonance condition is assumed, different vectors \(\mathbf{v}\) are obtained by rotating the plane spanned by the \(l\)-th and \((n+l)\)-th entries, for each \(l\), which preserves the value of \(v_{l}^{2}+v_{n+l}^{2}\). Thus, the nondegeneracy condition is well defined on top of the non-resonance condition.
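Both conditions, and the determinant factorization they come from, are straightforward to verify numerically; the following sketch (function name ours) tests a pair \((\mathbf{d},\mathbf{v})\) for membership in \(\Theta_{CH_{n}}^{can}\) and checks the closed form of the determinant in (16) on a random instance:

```python
import numpy as np

def theta_is_canonical(d, v, tol=1e-12):
    """Non-resonance (pairwise distinct d_i) and nondegeneracy (v_l^2 + v_{n+l}^2 > 0)."""
    d, v = np.asarray(d), np.asarray(v)
    n = len(d)
    gaps = np.abs(np.subtract.outer(d, d))[~np.eye(n, dtype=bool)]
    distinct = gaps.size == 0 or np.min(gaps) > tol
    return bool(distinct and np.all(v[:n] ** 2 + v[n:] ** 2 > tol))

# numerical check of the closed form of the determinant in (16) up to a sign
rng = np.random.default_rng(1)
n = 3
d, v = rng.uniform(0.5, 2.0, size=n), rng.normal(size=2 * n)
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
A = J @ np.diag(np.concatenate([d, d]))
K = np.column_stack([np.linalg.matrix_power(A, k) @ v for k in range(2 * n)])
closed_form = (np.prod(d)
               * np.prod([((d[j] + d[k]) * (d[j] - d[k])) ** 2
                          for j in range(n) for k in range(j + 1, n)])
               * np.prod(v[:n] ** 2 + v[n:] ** 2))
assert np.isclose(abs(np.linalg.det(K)), closed_form)
```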
**Remark 4.15** (**Williamson's decomposition in the canonical case)**.: We have mentioned in Theorem 2.3**(**iii**) that two symplectic matrices \(S\) and \(S^{\prime}\) that Williamson decompose the same \(Q\) differ by a unitary matrix. We now note that for an element \(Q\) that satisfies the non-resonance condition, \(S\) and \(S^{\prime}\) differ not by an arbitrary \(U\in U(n)\) (see (10) for the definition of \(U(n)\)), but by a special one \(R\) that has the form
\[R=\left[\begin{array}{ccc|ccc}\cos\theta_{1}&&&-\sin\theta_{1}&&\\ &\ddots&&&\ddots&\\ &&\cos\theta_{n}&&&-\sin\theta_{n}\\ \hline\sin\theta_{1}&&&\cos\theta_{1}&&\\ &\ddots&&&\ddots&\\ &&\sin\theta_{n}&&&\cos\theta_{n}\end{array}\right]. \tag{17}\]
This fact accounts for part of a symmetry that we shall spell out later on. The proof of this fact is purely computational: the assumption that the diagonal entries of \(D\) are all positive and distinct, the fact that \(U\) satisfies the equation \(U\begin{bmatrix}D&0\\ 0&D\end{bmatrix}U^{T}=\begin{bmatrix}D&0\\ 0&D\end{bmatrix}\) and, at the same time, \(U\in U(n)=SO(2n,\mathbb{R})\cap Sp(2n,\mathbb{R})\), together guarantee the claim.
**Remark 4.16** (**Being canonical is a generic property)**.: It is well-known that the set of canonical systems, as a subset of all linear systems, corresponds to a Zariski open set, which is open and dense in the usual topology [16]. In particular, this also holds for linear port-Hamiltonian systems. Therefore, \(PH_{n}^{can}\) is open and dense in \(PH_{n}\). On the other hand, using the characterization provided above, it is clear that \(\Theta_{CH_{n}}^{can}\) is also open and dense in \(\Theta_{CH_{n}}\).
The isomorphism in Theorem 4.9 naturally restricts to canonical subsets, that is, \(PH_{n}^{can}/\sim_{\textit{sys}}\cong\Theta_{CH_{n}}^{can}/\sim_{\star}\). On the other hand, we will see below another isomorphism result involving \(PH_{n}^{can}/\sim_{\textit{sys}}\).
**Proposition 4.17** (**Characterization of \(PH_{n}^{can}/\sim_{\textit{sys}}\) as \(\Theta_{CH_{n}}^{can}/\sim_{\textit{sys}}\))**.: _The map \(\Phi:\Theta_{CH_{n}}^{can}/\sim_{\textit{sys}}\rightarrow PH_{n}^{can}/ \sim_{\textit{sys}}\) defined by \(\Phi([\mathbf{d},\mathbf{v}]_{\textit{sys}})=\left[\begin{bmatrix}D&0\\ 0&D\end{bmatrix},\mathbf{v}\right]_{\textit{sys}}\), where \(D=\operatorname{diag}(\mathbf{d})\), is an isomorphism._
We just proved that both \(\Theta_{CH_{n}}^{can}/\sim_{\star}\) and \(\Theta_{CH_{n}}^{can}/\sim_{\textit{sys}}\) are isomorphic to \(PH_{n}^{can}/\sim_{\textit{sys}}\), and even via the same isomorphism \(\Phi\). Therefore, the equivalence relations \(\sim_{\star}\) and \(\sim_{\textit{sys}}\)_coincide_ when restricted to \(\Theta_{CH_{n}}^{can}\).
To summarize, we have proved in this subsection that
\[\mathcal{P}\mathcal{H}_{n}^{can}\cong PH_{n}^{can}/\sim_{\textit{sys}}\ \ \cong\ \Theta_{CH_{n}}^{can}/\sim_{\star}\ \ \cong\ \Theta_{CH_{n}}^{can}/\sim_{\textit{sys}}.\]
In the next subsection, we continue the investigation of the above chain of isomorphisms.
### The unique identifiability space for canonical port-Hamiltonian systems as a group orbit space
In Subsection 4.3, we proved that the quotient space \(PH_{n}/\sim_{sys}\) can be treated as a Lie groupoid orbit space. We now show that the quotient space restricted to canonical port-Hamiltonian systems, that is, \(PH_{n}^{can}/\sim_{sys}\), is isomorphic to the orbit space of a certain group action on \(\Theta_{CH_{n}}^{can}\), where the group is a semi-direct product of the permutation group \(S_{n}\) and the \(n\)-torus, that is, \(S_{n}\rtimes_{\phi}\mathbb{T}^{n}\). The intuition behind this fact is that restricting to the subset of canonical systems \(PH_{n}^{can}\) removes the degeneracies in \(PH_{n}\), which allows us to reduce the symmetry of the Lie groupoid \(\mathcal{G}_{n}\rightrightarrows PH_{n}\) to that of the Lie group \(S_{n}\rtimes_{\phi}\mathbb{T}^{n}\).
We start by defining the group action. First, let the permutation group \(S_{n}\) act on \(\mathbb{R}^{n}\) by permuting the entries \(d_{i}\) of the vector \(\mathbf{d}\in\mathbb{R}^{n}\). For each \(i\in\{1,\ldots,n\}\) the circle \(S^{1}\) acts on the plane spanned by the \(i\)-th and \((n+i)\)-th entries of \(\mathbf{v}\) by rotations. More precisely, we define the action of \(S_{n}\) on elements \(\mathbf{d}\) and \(\mathbf{v}\) as
\[\Gamma_{\sigma}\big{(}(d_{1},\ldots,d_{n})^{T}\big{)}=(d_{\sigma(1)},\ldots,d _{\sigma(n)})^{T}=P_{\sigma}\cdot(d_{1},\ldots,d_{n})^{T}\]
where \(P_{\sigma}\) is the corresponding permutation matrix and
\[\Gamma_{\sigma}\big{(}(v_{1},\ldots,v_{2n})^{T}\big{)}=(v_{\sigma(1)},\ldots, v_{\sigma(n)},v_{n+\sigma(1)},\ldots,v_{n+\sigma(n)})^{T}=\begin{bmatrix}P_{ \sigma}&0\\ 0&P_{\sigma}\end{bmatrix}\cdot(v_{1},\ldots,v_{2n})^{T},\]
respectively. Then the \(\sigma\)-action on a pair \((\mathbf{d},\mathbf{v})\) is understood as acting on \(\mathbf{d}\) and \(\mathbf{v}\) simultaneously. We also define the action of the \(i\)-th circle of the torus \(\mathbb{T}^{n}\) as the planar rotation of the space spanned by the \(i\)-th and \((n+i)\)-th entries of \(\mathbf{v}\). This torus action is understood to leave \(\mathbf{d}\) invariant. More concretely, it is the action
\[\Gamma_{\theta_{i}}\big{(}(d_{1},\ldots,d_{n},v_{1},\ldots,v_{2n} )^{T}\big{)}\] \[=(d_{1},\ldots,d_{n},v_{1},\ldots,v_{i-1},\cos\theta_{i}\,v_{i}-\sin \theta_{i}\,v_{n+i},v_{i+1},\ldots,v_{n},\] \[v_{n+1},\ldots,v_{n+i-1},\sin\theta_{i}\,v_{i}+\cos\theta_{i}\,v_{n+i},v_{n+i+1},\ldots,v_{2n})^{T}.\]
With these actions of the groups \(S_{n}\) and \(\mathbb{T}^{n}\) on \(\Theta_{CH_{n}}\) we define the map \(\Gamma_{(\sigma,(\theta_{1},\ldots,\theta_{n})^{T})}:(\mathbb{R}_{+}^{n}\times \mathbb{R}^{2n})\to(\mathbb{R}_{+}^{n}\times\mathbb{R}^{2n})\) as
\[\Gamma_{(\sigma,(\theta_{1},\ldots,\theta_{n})^{T})}(\mathbf{d}, \mathbf{v})=\Gamma_{\theta_{1}}\circ\cdots\circ\Gamma_{\theta_{n}}\circ\Gamma _{\sigma}(\mathbf{d},\mathbf{v})\\ =(P_{\sigma}\cdot\mathbf{d},\Gamma_{\theta_{1}}\circ\cdots\circ \Gamma_{\theta_{n}}\bigg{(}\begin{bmatrix}P_{\sigma}&0\\ 0&P_{\sigma}\end{bmatrix}\cdot\mathbf{v}\bigg{)})=(P_{\sigma}\cdot\mathbf{d}, RP\cdot\mathbf{v}), \tag{18}\]
which constitutes an action of the semi-direct product group \(S_{n}\rtimes_{\phi}\mathbb{T}^{n}\), where \(\phi:S_{n}\to Aut(\mathbb{T}^{n})\) is given by the permutation \(\phi(\sigma)((\theta_{1},\ldots,\theta_{n})^{T})=P_{\sigma}\cdot(\theta_{1}, \ldots,\theta_{n})^{T}\). Note that the matrix of \(\Gamma_{\theta_{1}}\circ\cdots\circ\Gamma_{\theta_{n}}\) is given by (17), \(P_{\sigma}\) is the permutation matrix that corresponds to \(\sigma\in S_{n}\), and \(P=\begin{bmatrix}P_{\sigma}&0\\ 0&P_{\sigma}\end{bmatrix}\).
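The action (18) admits a direct implementation; a sketch (with zero-based indices, function name ours) that first permutes \((\mathbf{d},\mathbf{v})\) by \(\sigma\) and then applies the \(n\) planar rotations:

```python
import numpy as np

def gamma(sigma, thetas, d, v):
    """Action (18) of (sigma, theta) in S_n x| T^n on (d, v).

    sigma  : length-n integer array, a permutation of 0..n-1 (d -> d[sigma])
    thetas : length-n array of rotation angles
    """
    d, v = np.asarray(d, dtype=float), np.asarray(v, dtype=float)
    n = len(d)
    d_new = d[sigma]
    # permute the first and second halves of v simultaneously (Gamma_sigma)
    v_new = np.concatenate([v[:n][sigma], v[n:][sigma]])
    # rotate the plane spanned by the i-th and (n+i)-th entries (Gamma_theta_i)
    for i in range(n):
        vi, vni = v_new[i], v_new[n + i]
        v_new[i]     = np.cos(thetas[i]) * vi - np.sin(thetas[i]) * vni
        v_new[n + i] = np.sin(thetas[i]) * vi + np.cos(thetas[i]) * vni
    # note: the multiset of entries of d and the radii v_i^2 + v_{n+i}^2
    # are invariants of any orbit of this action
    return d_new, v_new
```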
**Proposition 4.18**.: _The map \(\Gamma_{(\sigma,(\theta_{1},\ldots,\theta_{n})^{T})}\) defined as (18) for \(\sigma\in S_{n}\) and \((\theta_{1},\ldots,\theta_{n})^{T}\in\mathbb{T}^{n}\) is a left group action of \((S_{n}\rtimes_{\phi}\mathbb{T}^{n})\) on \(\Theta_{CH_{n}}\)._
Using the definition of the \((S_{n}\rtimes_{\phi}\mathbb{T}^{n})\)-action on \(\Theta_{CH_{n}}\), two elements \((\mathbf{d}_{1},\mathbf{v}_{1}),(\mathbf{d}_{2},\mathbf{v}_{2})\in\Theta_{CH_{n}}\) are in the same orbit if and only if the following conditions hold true for some \(\sigma\in S_{n}\):
**(i)**: \(d_{2,i}=d_{1,\sigma(i)}\),
**(ii)**: \(v_{2,i}^{2}+v_{2,n+i}^{2}=v_{1,\sigma(i)}^{2}+v_{1,n+\sigma(i)}^{2},\ i=1, \ldots,n\).
By Theorem 4.5 **(I)**, parts **(i)** and **(ii)**, it is clear that there is a close relation between the \((S_{n}\rtimes_{\phi}\mathbb{T}^{n})\)-action and the equivalence relation \(\sim_{\star}\) on \(\Theta_{CH_{n}}\). The next proposition demonstrates that the orbits of the \((S_{n}\rtimes_{\phi}\mathbb{T}^{n})\)-action coincide with the equivalence classes of the relation \(\sim_{sys}\) when we restrict our attention to the subset \(\Theta_{CH_{n}}^{can}\).
**Proposition 4.19** (**Characterization of \(\Theta_{CH_{n}}^{can}/\sim_{sys}\) as \(\Theta_{CH_{n}}^{can}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\))**.: _Given \((\mathbf{d}_{1},\mathbf{v}_{1})\) and \((\mathbf{d}_{2},\mathbf{v}_{2})\) in \(\Theta_{CH_{n}}^{can}\), then \((\mathbf{d}_{1},\mathbf{v}_{1})\sim_{sys}(\mathbf{d}_{2},\mathbf{v}_{2})\) if and only if \((\mathbf{d}_{1},\mathbf{v}_{1})\) and \((\mathbf{d}_{2},\mathbf{v}_{2})\) lie in the same orbit of the \((S_{n}\rtimes_{\phi}\mathbb{T}^{n})\)-action._
### Global Euclidean coordinates for the unique identifiability space of canonical port-Hamiltonian systems
Recall from Section 4.4 that \(\Theta^{can}_{CH_{n}}\) contains pairs \((\mathbf{d},\mathbf{v})\) where \(\mathbf{d}\in\mathbb{R}^{n}_{+}\) and \(\mathbf{v}\in\mathbb{R}^{2n}\) are such that the entries \(d_{l}\)'s are all distinct and \(v_{l}^{2}+v_{n+l}^{2}>0\) for all \(l=1,\ldots,n\). We define for convenience a function \(\mathcal{R}:\mathbb{R}^{2n}\to\mathbb{R}^{n}_{\geq 0}\) as \(\mathcal{R}((v_{1},\ldots,v_{2n})^{T})=\big{(}v_{1}^{2}+v_{n+1}^{2},\ldots,v_{ n}^{2}+v_{2n}^{2}\big{)}\).
Now observe that the quotient space \(\Theta^{can}_{CH_{n}}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\) naturally has a smooth manifold structure. We briefly prove this in the following lines. Note that the torus \(\mathbb{T}^{n}\) is a connected abelian compact Lie group. The symmetry group \(S_{n}\) is a finite group, and hence compact as well. Thus, it is easy to see that the semi-direct product \(S_{n}\rtimes_{\phi}\mathbb{T}^{n}\) is also a compact Lie group, and hence its action on \(\Theta^{can}_{CH_{n}}\) is automatically proper. On the other hand, since \(\Theta^{can}_{CH_{n}}\) is the space of \((\mathbf{d},\mathbf{v})\) pairs satisfying that \(\mathbf{d}\) contains distinct entries and \(\mathcal{R}(\mathbf{v})^{(l)}>0\) for \(l=1,\ldots,n\), it necessarily holds that the only element in \(S_{n}\rtimes_{\phi}\mathbb{T}^{n}\) that can keep an element of \(\Theta^{can}_{CH_{n}}\) fixed is the identity, which implies that the \((S_{n}\rtimes_{\phi}\mathbb{T}^{n})\)-action on \(\Theta^{can}_{CH_{n}}\) is free. Classical results in Lie theory [11, Proposition 2.3.8] guarantee that \(\Theta^{can}_{CH_{n}}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\) admits a unique smooth structure such that the quotient map \(\pi:\Theta^{can}_{CH_{n}}\to\Theta^{can}_{CH_{n}}/(S_{n}\rtimes_{\phi}\mathbb{ T}^{n})\) is a submersion. With this as motivation, we now determine the quotient space explicitly.
For a fixed \(\mathbf{d}\), we denote by \(\mathbf{d}_{\uparrow}\) the reordered vector constructed out of \(\mathbf{d}\) by placing the entries in increasing order. Denote by \(\mathcal{X}^{n}_{\uparrow}\) the set of \(\mathbf{d}\in\mathbb{R}^{n}_{+}\) with distinct positive entries in increasing order. We have then the following proposition that explicitly characterizes the quotient space \(\Theta^{can}_{CH_{n}}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\).
**Proposition 4.20** (**Global Euclidean coordinates for the orbit space \(\Theta^{can}_{CH_{n}}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\)**).: _The map \(f:\Theta^{can}_{CH_{n}}/(S_{n}\rtimes_{\phi}\mathbb{T}^{n})\to\mathcal{X}^{n}_ {\uparrow}\times\mathbb{R}^{n}_{+}\) defined by \(f([\mathbf{d},\mathbf{v}])=(\mathbf{d}_{\uparrow},\mathcal{R}(\Gamma_{\sigma}( \mathbf{v})))\), where \(\sigma\in S_{n}\) is the unique permutation such that \(\Gamma_{\sigma}(\mathbf{d})=\mathbf{d}_{\uparrow}\), is an isomorphism._
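The map \(f\) of Proposition 4.20 is explicitly computable; a sketch (function name ours) that sorts \(\mathbf{d}\) and evaluates the rotation-invariant radii \(\mathcal{R}\):

```python
import numpy as np

def global_coordinates(d, v):
    """f([d, v]) = (d sorted increasingly, R(Gamma_sigma(v))) of Proposition 4.20."""
    d, v = np.asarray(d), np.asarray(v)
    n = len(d)
    sigma = np.argsort(d)        # unique permutation, since the d_i are distinct on the canonical set
    d_up = d[sigma]
    v_perm = np.concatenate([v[:n][sigma], v[n:][sigma]])
    R = v_perm[:n] ** 2 + v_perm[n:] ** 2   # rotation-invariant radii
    return d_up, R
```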
## 5 Linear port-Hamiltonian systems in normal form are restrictions of higher dimensional ones
In this section, we prove a theorem (Theorem 5.1), inspired by the classical Kalman Decomposition [13], which says that the filter induced by any \((Q,B)\in PH_{n}\) can be regarded as that induced by some \((Q^{\prime},B^{\prime})\in PH_{m}\), where \(m\) can be any integer that is at least \(n\). The motivation for these considerations is given by the fact that in many practical situations in which an input/output system has to be learned, the dimension of the underlying state-space system is not known. In that situation, we may want to have the flexibility of considering the actual system that needs to be learned as a lower-dimensional restriction of a much larger-dimensional one that we have picked for the learning task.
We shall carry this out by producing an explicit injective system morphism between the state space of \((Q,B)\) and that of \((Q^{\prime},B^{\prime})\) in our next Theorem 5.1. In Proposition 5.2, we show that the quotient space \(PH_{n}/\sim_{sys}\) can be characterized as \(PH_{m,n}/\sim_{sys}\), where \(PH_{m,n}\subset PH_{m}\) is the space containing all the systems of the form \((Q^{\prime},B^{\prime})\). Motivated by the developments in Section 4, we then characterize the pair \((\mathbf{d}^{\prime},\mathbf{v}^{\prime})\) that corresponds to \((Q^{\prime},B^{\prime})\) in Proposition 5.3. Finally, in Proposition 5.4, we show that the isomorphism \(PH_{n}/\sim_{sys}\ \cong\ \Theta_{CH_{n}}/\sim_{\star}\) can be lifted to higher dimensions as well. We shall comment further at the end of this section on the significance of the above-mentioned results in the context of machine learning.
The following theorem states that the filter induced by \((Q,B)\in PH_{n}\) can be reproduced using systems in an arbitrarily higher dimension.
**Theorem 5.1**.: _Given any system \((Q,B)\in PH_{n}\), then_
**(i)**: _For any_ \(m\geq n\)_, there exists an orthogonal matrix_ \(O\in O(2m,\mathbb{R})\) _such that the filter induced by_
\[(Q^{\prime},B^{\prime})=\left(O\begin{bmatrix}Q&0\\ 0&\mathbb{I}_{2m-2n}\end{bmatrix}O^{T},O\begin{bmatrix}B\\ 0\end{bmatrix}\right)\in PH_{m}\text{ coincides with that induced by }(Q,B).\]
**(ii)**: _The map_ \(f:\mathbb{R}^{2n}\to\mathbb{R}^{2m}\) _defined by_ \(f(\mathbf{z})=O\begin{bmatrix}\mathbb{I}_{2n}\\ 0\end{bmatrix}\cdot\mathbf{z}\) _is an injective system morphism between the state spaces of_ \((Q,B)\) _and_ \((Q^{\prime},B^{\prime})\)_._
As can be seen in the proof (included in Appendix 9.10), the matrix \(O\in O(2m,\mathbb{R})\) above is constructed so that
\[O\begin{bmatrix}\mathbb{J}_{n}&0\\ 0&\mathbb{J}_{m-n}\end{bmatrix}O^{T}=\mathbb{J}_{m}. \tag{19}\]
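Although the proof in the appendix is not reproduced here, one concrete choice of \(O\) satisfying (19) is a coordinate permutation that interleaves the \((q,p)\) blocks of the two summands. The following sketch is our own construction, offered only as an assumption consistent with (19), and verifies the identity numerically:

```python
import numpy as np

def J(k):
    # assuming the convention J_k = [[0, I_k], [-I_k, 0]]
    return np.block([[np.zeros((k, k)), np.eye(k)], [-np.eye(k), np.zeros((k, k))]])

def embedding_orthogonal(m, n):
    """A permutation matrix O in O(2m) with O diag(J_n, J_{m-n}) O^T = J_m.

    Coordinates of diag(J_n, J_{m-n}) come ordered as
    (q_1..q_n, p_1..p_n, q'_1..q'_{m-n}, p'_1..p'_{m-n});
    O reorders them into the global (q, p) layout of J_m.
    """
    perm = np.concatenate([np.arange(n),                        # q_1..q_n        -> slots 1..n
                           2 * n + np.arange(m - n),            # q'_1..q'_{m-n}  -> slots n+1..m
                           n + np.arange(n),                    # p_1..p_n        -> slots m+1..m+n
                           2 * n + (m - n) + np.arange(m - n)]) # p'_1..p'_{m-n}  -> slots m+n+1..2m
    O = np.zeros((2 * m, 2 * m))
    O[np.arange(2 * m), perm] = 1.0  # new coordinate k is old coordinate perm[k]
    return O

m, n = 4, 2
O = embedding_orthogonal(m, n)
Jblock = np.block([[J(n), np.zeros((2 * n, 2 * (m - n)))],
                   [np.zeros((2 * (m - n), 2 * n)), J(m - n)]])
assert np.allclose(O @ Jblock @ O.T, J(m))  # identity (19)
```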
From now on, we denote by \(PH_{m,n}\subset PH_{m}\) the space of linear port-Hamiltonian systems parametrized by pairs \((Q^{\prime},B^{\prime})\) of the form \(\left(O\begin{bmatrix}Q&0\\ 0&\mathbb{I}_{2m-2n}\end{bmatrix}O^{T},O\begin{bmatrix}B\\ 0\end{bmatrix}\right)\), where \(O\in O(2m,\mathbb{R})\) satisfies (19), and equip it with the system automorphism relation \(\sim_{sys}\) defined on \(PH_{m}\). The following proposition states that the space of possible input-output filters induced by \(PH_{n}\) is indeed the same as those induced by \(PH_{m,n}\). This means we can exactly reproduce the filters of \(2n\)-dimensional port-Hamiltonian systems in higher dimension by simply considering the elements \((Q^{\prime},B^{\prime})\) in \(PH_{m,n}\).
**Proposition 5.2**.: _The function \(f:PH_{n}/\sim_{sys}\to PH_{m,n}/\sim_{sys}\) defined by_
\[f([Q,B]_{sys})=\left[O\begin{bmatrix}Q&0\\ 0&\mathbb{I}_{2m-2n}\end{bmatrix}O^{T},O\begin{bmatrix}B\\ 0\end{bmatrix}\right]_{sys}\]
_is an isomorphism, where \(O\in O(2m,\mathbb{R})\) is as in Theorem 5.1 and hence satisfies (19)._
Recall that for a system \((Q,B)\in PH_{n}\), we derive the corresponding object \((\mathbf{d},\mathbf{v})\in\Theta_{CH_{n}}\) from Williamson's decomposition \(Q=S^{T}\begin{bmatrix}D&0\\ 0&D\end{bmatrix}S\) and \(\mathbf{v}=S\cdot B\). We have seen that \((Q^{\prime},B^{\prime})\in PH_{m,n}\subset PH_{m}\) is also a linear port-Hamiltonian system in normal form. Therefore, it makes sense to investigate the relation between \((\mathbf{d},\mathbf{v})\) and the element \((\mathbf{d}^{\prime},\mathbf{v}^{\prime})\) which corresponds to \((Q^{\prime},B^{\prime})\). The following proposition asserts that \(\mathbf{d}^{\prime}\) can be obtained from \(\mathbf{d}\) by padding it with ones and, similarly, \(\mathbf{v}^{\prime}\) can be obtained by splitting \(\mathbf{v}\) and padding each segment with zeros.
**Proposition 5.3** (**Symplectic eigenvalues for corresponding higher dimensional systems)**.: _Let \((Q,B)\) and \((Q^{\prime},B^{\prime})\) be as in Theorem 5.1, and let \(\mathbf{d}\) and \(\mathbf{d}^{\prime}\) be their corresponding symplectic eigenvalues. Then, up to reordering, \(\mathbf{d}^{\prime}\!=(d_{1},\cdots,d_{n},1,1,\ldots,1)^{T}\). Even though \(\mathbf{v}\) and \(\mathbf{v}^{\prime}\) are not uniquely determined (See Remark 3.4), there exists a choice of \(\mathbf{v}^{\prime}\) that is related to \(\mathbf{v}=(v_{1},\cdots,v_{n},v_{n+1},\cdots,v_{2n})^{T}\) via_
\[\mathbf{v}^{\prime}=\big{(}v_{1},\cdots,v_{n},\underbrace{0,\cdots,0}_{m-n},v _{n+1},\cdots,v_{2n},\underbrace{0\cdots\ 0}_{m-n}\big{)}^{T}.\]
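The padding recipe of Proposition 5.3 is immediate to implement; a minimal sketch (function name ours):

```python
import numpy as np

def extend_parameters(d, v, m):
    """Pad (d, v) in Theta_CH_n to the (d', v') of Proposition 5.3 in dimension m >= n."""
    n = len(d)
    d_ext = np.concatenate([d, np.ones(m - n)])                         # append m - n ones
    v_ext = np.concatenate([v[:n], np.zeros(m - n),                     # pad each half of v
                            v[n:], np.zeros(m - n)])                    # with m - n zeros
    return d_ext, v_ext
```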
From the above proposition, we call \(\mathbf{d}^{\prime}\) the extended symplectic eigenvalues and \(\mathbf{v}^{\prime}\) the extended vector. Now we define the space \(\Theta_{CH_{m,n}}\) as the set of all pairs of the form \((\mathbf{d}^{\prime},\mathbf{v}^{\prime})\) and equip \(\Theta_{CH_{m,n}}\) with the equivalence relation \(\sim_{\star}\) as in Definition 4.7 but in dimension \(m\) instead of \(n\). Recall that we proved \(\Theta_{CH_{n}}/\sim_{\star}\ \cong\ PH_{n}/\sim_{sys}\). Now we proceed to show that the above isomorphism in dimension \(2n\) can be lifted to dimension \(2m\) by considering only the restricted parameter spaces with vectors of the form \((\mathbf{d}^{\prime},\mathbf{v}^{\prime})\) and \((Q^{\prime},B^{\prime})\).
**Proposition 5.4**.: _The function \(f:\Theta_{CH_{m,n}}/\sim_{\star}\to PH_{m,n}/\sim_{sys}\) defined by_
\[f([\mathbf{d}^{\prime},\mathbf{v}^{\prime}]_{\sim_{\star}})=\left[\begin{bmatrix} D^{\prime}&0\\ 0&D^{\prime}\end{bmatrix},\mathbf{v}^{\prime}\right]_{sys}\]
_where \(D^{\prime}=\mathrm{diag}(\mathbf{d}^{\prime})\), is an isomorphism._
Note that in general \(\mathbf{d}^{\prime}\) contains repeated symplectic eigenvalues because of all the ones used in the extension and that \(v_{l}^{\prime 2}+v_{m+l}^{\prime 2}=0\) for \(l>n\). Therefore, it is impossible that \(\Theta_{CH_{m,n}}\) contains canonical systems for \(m>n\). In other words, lifting \(PH_{n}\) to \(PH_{m,n}\)_introduces degeneracies that exclude the possibility of the systems being canonical._
We emphasize that the above-mentioned series of results are crucial in machine learning applications. Very often in practice, the dimension \(2n\) of the underlying data-generating process, that is, the latent port-Hamiltonian system (5), is not known, causing a problem when choosing the dimension of the controllable/observable Hamiltonian representation for learning. This issue can be solved by composing the morphism in Theorem 5.1**(ii)** (which is injective) and the one in Theorem 3.3 (not necessarily injective). The composition of system morphisms is still a system morphism, this time between the underlying system \(\theta_{PH_{n}}(Q,B)\) and the observable Hamiltonian representation in an arbitrarily higher dimension \(2m\geq 2n\). In this way, the observable Hamiltonian representation in dimension \(2m\) still has full expressive power to represent any \(2n\)-dimensional system in \(PH_{n}\), and hence can be used for learning. Practically, one can choose a sufficiently large \(m\), and parameterize the observable Hamiltonian representation using \((\mathbf{d},\mathbf{v})\) (we use the notation \((\mathbf{d},\mathbf{v})\) instead of \((\mathbf{d}^{\prime},\mathbf{v}^{\prime})\) because practically we do not know what \(n\) is) and then estimate them. We emphasize that the higher-dimensional port-Hamiltonian systems are in general not canonical, hence the \((\mathbf{d},\mathbf{v})\)-pair that corresponds to the data-generating process is not guaranteed to be unique. Still, we always know there is at least one choice of \((\mathbf{d},\mathbf{v})\) that works no matter how large an \(m\) we choose, and which is constructed using the recipe in Proposition 5.3.
## 6 Practical implementation of the results
We start with a diagram that summarizes the results that we have proved.
**Theorem 6.1**.: _The following diagram holds true using the isomorphisms explicitly constructed in all the preceding results._
We now comment on how to use the results contained in the diagram above depending on the different learning situations that we may encounter. Indeed, we can use our statements to tackle three different learning scenarios:
* Case 1: The target port-Hamiltonian system (the data generating process that we want to learn) is canonical and its state-space dimension is known, that is, \(\theta_{PH_{n}}(Q,B)\in PH_{n}^{can}\) with \(n\) known. This is the most favorable situation in the sense that we can exactly represent the system \(\theta_{PH_{n}}(Q,B)\) by either the controllable or the observable Hamiltonian representation, which are both isomorphic to the original system. Furthermore, since in this case \(\sim_{filter}\) coincides with \(\sim_{sys}\), the filter in \(\mathcal{PH}_{n}^{can}\) induced by \(\theta_{PH_{n}}(Q,B)\) can be uniquely identified with an element \((\mathbf{d}_{\uparrow},R)\in\mathcal{X}_{\uparrow}^{n}\times\mathbb{R}_{+}^{n}\), which are the unique parameters that need to be estimated.
* Case 2: The target port-Hamiltonian system is not guaranteed to be canonical but its dimension is known, that is, \(\theta_{PH_{n}}(Q,B)\in PH_{n}\) with \(n\) known. In this case, there is a trade-off between the controllable Hamiltonian representation and the observable one. As mentioned
before, the controllable one will be structure-preserving but its expressive power depends on the controllability of the target system \(\theta_{PH_{n}}(Q,B)\). On the other hand, the observable one always possesses full expressive power but does not always guarantee the port-Hamiltonian structure of the induced filter.
* Case 3: We are agnostic about the dimension of the target port-Hamiltonian system, that is, we are given \(\theta_{PH_{n}}(Q,B)\in PH_{n}\) with \(n\) unknown. In this case, we need to choose a sufficiently large \(m\) so that \(m\geq n\); then, by composition of system morphisms, it suffices to learn some \((\mathbf{d},\mathbf{v})\in\Theta_{CH_{m}}\) and use the \(2m\)-dimensional observable Hamiltonian representation to reproduce the input-output dynamics of \((Q,B)\). Due to the loss of the canonical property, such a \((\mathbf{d},\mathbf{v})\) pair may not be unique. Additionally, since we do not know the dimension \(2n\) of the data-generating process, we do not know how many ones are used to pad \(\mathbf{d}\) (and, similarly, how many zeros are padded into the vector \(\mathbf{v}\)). However, we do know that an element \((\mathbf{d},\mathbf{v})\) exists in some \(\Theta_{CH_{m,n}}\subset\Theta_{CH_{m}}\), which is given by Proposition 5.3.
An important special case is when there is no input to the port-Hamiltonian system, that is, \(u(t)=0\). In this case, the port-Hamiltonian system reduces to a linear Hamiltonian system with an arbitrary linear readout matrix because \(Q\) is positive-definite and hence invertible. We emphasize that the observable Hamiltonian representation in a higher dimension is totally independent of \(B\) since it is simply given by
\[\begin{cases}\dot{\mathbf{s}}=g_{1}^{obs}(\mathbf{d})\cdot\mathbf{s},\\ y=(0,0,\cdots,0,1)\cdot\mathbf{s}.\end{cases} \tag{20}\]
In other words, Hamiltonian systems with linear readout can be learned by adjusting the initial state \(\mathbf{s}_{0}\) and symplectic eigenvalues \(d_{i}\), without even knowing the linear readout function that yields the observations.
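As a concrete illustration of this input-free case, here is a minimal Euler-integration sketch of (20). The routine g1_obs, which assembles the system matrix of the observable representation from \(\mathbf{d}\), is the one appearing in (20) and is assumed to be available; it is not reproduced here, and all other names are ours:

```
import numpy as np

def simulate_readout(g1_obs, d, s0, dt=0.01, n_steps=1000):
    """Euler-integrate the autonomous system (20); record y(t) = last coordinate of s."""
    A = g1_obs(d)              # assumed: (2m x 2m) observable-representation matrix
    s = s0.copy()
    ys = np.empty(n_steps)
    for k in range(n_steps):
        s = s + dt * (A @ s)   # explicit Euler step
        ys[k] = s[-1]          # readout (0, ..., 0, 1) . s
    return ys
```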
## 7 Numerical illustrations
In this section, we present two numerical examples to demonstrate the effectiveness of our representation results from a learning point of view.
### Non-dissipative circuit
Similar to an example in [14], we consider a circuit consisting of a power source with voltage \(V=u(t)\), together with five parallelizations, each of them containing a capacitor \(C_{i}\) with charge \(Q_{i}\) and an inductor \(L_{i}\) with magnetic flux linkage \(\phi_{i}\) for \(i=1,\dots,5\) (see Figure 1). Using Kirchhoff's laws, we obtain the port-Hamiltonian system in normal form given by (21) and (22), where the Hamiltonian of the system is
\[H(Q_{1},\dots,Q_{5},\phi_{1},\dots,\phi_{5})=\frac{Q_{1}^{2}}{2C_{1}}+\dots+ \frac{Q_{5}^{2}}{2C_{5}}+\frac{\phi_{1}^{2}}{2L_{1}}+\dots+\frac{\phi_{5}^{2}} {2L_{5}}.\]
\[\begin{bmatrix}\dot{Q}_{1}\\ \vdots\\ \dot{Q}_{5}\\ \dot{\phi}_{1}\\ \vdots\\ \dot{\phi}_{5}\end{bmatrix}=\begin{bmatrix}0&\mathbb{I}_{5}\\ -\mathbb{I}_{5}&0\end{bmatrix}\cdot\begin{bmatrix}\frac{\partial H}{\partial Q_{1}}\\ \vdots\\ \frac{\partial H}{\partial Q_{5}}\\ \frac{\partial H}{\partial\phi_{1}}\\ \vdots\\ \frac{\partial H}{\partial\phi_{5}}\end{bmatrix}+\begin{bmatrix}0\\ \vdots\\ 0\\ 1\\ \vdots\\ 1\end{bmatrix}\cdot u \tag{21}\]
\[y=\frac{\partial H}{\partial\phi_{1}}+\frac{\partial H}{\partial\phi_{2}}+\dots+\frac{\partial H}{\partial\phi_{5}}. \tag{22}\]
This port-Hamiltonian system treats the power supply \(V=u\) as input and the current through the power supply, that is \(y\), as output. One verifies that such a system is _non-canonical_. Our
purpose is to learn the input-output behavior of this system without any access to the internal physical state and training only with input-output observations.
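Since all the ingredients of (21) and (22) are explicit, the ground-truth data generation used below can be written in a few lines. The following NumPy sketch (all variable names are ours) assembles the quadratic Hamiltonian and produces the input-output training pairs with Euler's method:

```
import numpy as np

n = 5                                             # five LC branches, state dimension 10
C, L = np.ones(n), np.ones(n)                     # C_i = L_i = 1 as in the text
Q_mat = np.diag(np.concatenate([1 / C, 1 / L]))   # H(x) = x^T Q_mat x / 2
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])    # canonical structure matrix in (21)
B = np.concatenate([np.zeros(n), np.ones(n)])     # input vector of (21)

x = np.random.randn(2 * n)                        # random initial condition
dt, us, ys = 0.01, [], []
for k in range(1000):
    u = np.sin(k * dt)                            # input u(t) = sin(t)
    us.append(u)
    ys.append(B @ (Q_mat @ x))                    # output (22): sum of dH/dphi_i
    x = x + dt * (J @ (Q_mat @ x) + B * u)        # explicit Euler step of (21)
```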
In our implementation, we choose for simplicity \(C_{i}=1\) and \(L_{i}=1\) for \(i=1,\ldots,5\). We choose to learn with a 10-dimensional observable Hamiltonian representation to show that the dynamics can be captured even in the non-canonical case (indeed, with our choice of the \(C_{i}\) and \(L_{i}\), the system is readily checked to be non-canonical). We randomly generate an initial condition for the ground-truth system and integrate it using Euler's method (see Appendix 9.14 for more sophisticated structure-preserving integration methods) with a discretization step of 0.01 for 1000 time steps. The input is chosen as \(u(t)=\sin(t)\). The 1000 pairs of input and output data are used as training data. During the training phase, we estimate the initial state \(\mathbf{x}\in\mathbb{R}^{10}\) as well as the parameters \(\mathbf{d}\in\mathbb{R}^{5}_{+}\) and \(\mathbf{v}\in\mathbb{R}^{10}\). This is carried out via gradient descent using a learning rate of \(\lambda=0.1\) for 500 epochs. At each gradient descent iteration, we integrate the state-space equations corresponding to the current parameter values over 1000 time steps with Euler's method and then compute the squared error with respect to the training set.
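A possible shape of this training loop is sketched below in PyTorch. The differentiable helper rollout (not shown) is assumed to Euler-integrate the observable representation for the current parameters, and u_train, y_train are assumed to hold the 1000 training pairs; all names are ours:

```
import torch

# Trainable quantities: initial state in R^10, d in R^5_+ (via exp), v in R^10.
s0 = torch.randn(10, requires_grad=True)
log_d = torch.zeros(5, requires_grad=True)     # d = exp(log_d) stays positive
v = torch.randn(10, requires_grad=True)
opt = torch.optim.SGD([s0, log_d, v], lr=0.1)

for epoch in range(500):
    y_pred = rollout(s0, torch.exp(log_d), v, u_train)  # assumed integration helper
    loss = ((y_pred - y_train) ** 2).sum()              # squared error on training set
    opt.zero_grad()
    loss.backward()
    opt.step()
```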
We set a testing period of 4000 time steps and demonstrate the robustness of our approach by not only testing our trained model on the original input \(u(t)=\sin(t)\) but also evaluating it on three other commonly used input signals (see Figures 2, 3, 4 and 5). The numerical experiments provide a strong indication that the underlying system is learned independently of the input signal and is robust with respect to various forms of inputs.
Figure 1: Lossless circuit port-Hamiltonian system
Figure 2: Training and testing on a sinusoidal signal.
Figure 3: Testing on a constant signal. The training had been carried out using a sinusoidal signal. See Figure 2.

Figure 4: Testing on a square signal. The training had been carried out using a sinusoidal signal. See Figure 2.

Figure 5: Testing on a ramp signal. The training had been carried out using a sinusoidal signal. See Figure 2.

### Positive definite Frenkel-Kontorova model

As a second example, we consider a modification of the well-known Frenkel-Kontorova model such that it becomes a linear port-Hamiltonian system with a positive-definite Hamiltonian function. Recall that the general form of the Frenkel-Kontorova model describes the motion of classical particles with nearest-neighbor interactions using periodic potentials. The Hamiltonian function can be written as
\[H=\sum_{n=1}^{N}\left[\frac{1}{2}\cdot\dot{q}_{n}^{2}+\left(1-\cos q_{n}+\frac{1}{ 2}g\cdot(q_{n+1}-q_{n}-a_{0})^{2}\right)\right].\]
Since we are dealing with linear systems, we remove the periodic potential and rescale the potential coefficient. By fixing \(a_{0}=0\), we obtain the Hamiltonian
\[H=\frac{1}{2}\cdot\sum_{n=1}^{N}\left[\dot{q}_{n}^{2}+(q_{n+1}-q_{n})^{2} \right].\]
In order to consider a Hamiltonian that is strictly positive definite, we add a term \(\frac{1}{2}q_{1}^{2}\) to the Hamiltonian, which carries the physical meaning that the particle \(q_{1}\) interacts with the origin via a spring. In summary, our model of interest now has the positive-definite Hamiltonian
\[H=\frac{1}{2}\cdot\sum_{n=1}^{N}\left[\dot{q}_{n}^{2}+(q_{n+1}-q_{n})^{2} \right]+\frac{1}{2}\cdot q_{1}^{2}=\frac{1}{2}\cdot\sum_{n=1}^{N}\left[p_{n}^ {2}+(q_{n+1}-q_{n})^{2}\right]+\frac{1}{2}\cdot q_{1}^{2}.\]
For the sake of simplicity, consider a Hamiltonian system with two unit-mass particles (so that \(p_{i}=\dot{q}_{i}\)) and an external force \(F=u\) imposed on the first particle. This gives the linear port-Hamiltonian system in normal form below, with the output being the velocity of the first particle.
\[\begin{bmatrix}\dot{q}_{1}\\ \dot{q}_{2}\\ \dot{p}_{1}\\ \dot{p}_{2}\end{bmatrix}=\begin{bmatrix}0&\mathbb{I}_{2}\\ -\mathbb{I}_{2}&0\end{bmatrix}\cdot\begin{bmatrix}\frac{\partial H}{\partial q_ {1}}\\ \frac{\partial H}{\partial q_{2}}\\ \frac{\partial H}{\partial p_{1}}\\ \frac{\partial H}{\partial p_{2}}\end{bmatrix}+\begin{bmatrix}0\\ 0\\ 1\\ 0\end{bmatrix}\cdot u \tag{23}\]
\[y=\frac{\partial H}{\partial p_{1}}. \tag{24}\]
In contrast to the first example, this system is _canonical_. Therefore, based on our theoretical results, any input-output dynamics can be captured by either a controllable or an observable Hamiltonian representation and furthermore, it is possible to uniquely identify the system by learning the parameters in the quotient space \(\mathcal{X}_{\uparrow}^{2}\times\mathbb{R}_{+}^{2}\).
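The canonical property can be checked numerically with a standard Kalman rank test. The sketch below (ours) assembles the two-particle system of (23) and (24), assuming a free boundary at the far end of the chain so that the potential is \(\frac{1}{2}(q_{2}-q_{1})^{2}+\frac{1}{2}q_{1}^{2}\), and verifies that the controllability and observability matrices have full rank, which is what being canonical amounts to if, as the text suggests, canonical corresponds to being simultaneously controllable and observable:

```
import numpy as np

# H = (p1^2 + p2^2)/2 + (q2 - q1)^2/2 + q1^2/2  =>  H(x) = x^T Q x / 2
K = np.array([[2.0, -1.0], [-1.0, 1.0]])          # potential part (free end assumed)
Q = np.block([[K, np.zeros((2, 2))], [np.zeros((2, 2)), np.eye(2)]])
J = np.block([[np.zeros((2, 2)), np.eye(2)], [-np.eye(2), np.zeros((2, 2))]])
B = np.array([0.0, 0.0, 1.0, 0.0])                # force on the first particle, (23)
c = np.array([0.0, 0.0, 1.0, 0.0])                # y = dH/dp1, so the output row is c^T Q

A = J @ Q                                         # drift matrix of the linear system
ctrb = np.column_stack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
obsv = np.vstack([(c @ Q) @ np.linalg.matrix_power(A, k) for k in range(4)])
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(obsv))   # both 4: full rank
```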
For the sake of the numerical illustration, we choose the initial state condition \(\mathbf{x}=(2,1,-3,-3)^{T}\) for the ground-truth system and integrate it for \(1000\) time steps using Euler's method with a step of \(0.01\) (see Appendix 9.14 for more sophisticated structure-preserving integration methods), where the input is chosen as \(u(t)=\sin(t)\). The \(1000\) pairs of input and output data are then used as training data.
As motivated above, we apply two different training mechanisms in which we learn the initial state condition and the parameter values of the model, using both the natural parameters from \(\varTheta_{CH_{n}}\) of the observable Hamiltonian representation and those in the unique identifiability space \(\mathcal{X}_{\uparrow}^{2}\times\mathbb{R}_{+}^{2}\). As in the previous example, we carry out the training using gradient descent with a learning rate of \(\lambda=0.02\) over \(1500\) epochs, starting from randomly chosen initial values for the initial state condition and the model parameters in \(\varTheta_{CH_{n}}\) and \(\mathcal{X}_{\uparrow}^{2}\times\mathbb{R}_{+}^{2}\).
We record the validation error during \(1500\) gradient descent iterations of both training mechanisms to compare their convergence rates. Heuristically, it should be expected that the rate of convergence is faster when the models are trained using the coordinates that provide unique identifiability. This is empirically confirmed in Figure 6 (indeed, unique identifiability provides exponentially faster convergence). After \(1500\) iterations, the prediction accuracy when training was carried out using the unique identifiability space significantly outperforms the other setting, as can be seen in Figure 7. Moreover, we found that the learned parameters \(\mathbf{d}\in\mathcal{X}_{\uparrow}^{2}\) are exactly the same as the eigenvalues of the Hamiltonian matrix, which is theoretically guaranteed by unique identifiability. It is worth emphasizing that, despite the difference in the convergence rates, both mechanisms eventually lead to perfect path continuations of the input-output dynamics after enough training iterations.
## 8 Conclusions
In this paper we have introduced a complete structure-preserving learning scheme for single-input/single-output (SISO) linear port-Hamiltonian systems. The construction is based on the solution, when possible, of the unique identification problem for these systems, in ways that reveal fundamental relationships between classical notions in control theory and crucial properties in the machine learning context, like structure-preservation and expressive power.
The main building block in our construction is a representation result that we introduced for linear port-Hamiltonian systems in normal form that provides two subfamilies of linear systems that are by construction controllable and observable (Definition 3.1). We showed that morphisms can be established between the elements in these families and those in the category of normal form port-Hamiltonian systems. The existence of these morphisms immediately guarantees that the complexity of the family of port-Hamiltonian filters is actually not \(\mathcal{O}(n^{2})\), as it could be guessed from the standard parametrization of this family, but \(\mathcal{O}(n)\). We showed that the expressive power of our proposed representations is limited for non-canonical port-Hamiltonian systems. Indeed, we saw that the observable representation is guaranteed to capture all possible input-output dynamics of port-Hamiltonian systems (full expressive power), but it does not always produce port-Hamiltonian dynamics (fails to be structure-preserving). In the controllable case, structure preservation is guaranteed, but there is, in general, no full expressive power. For canonical port-Hamiltonian systems, these representations are both structure-preserving and have full expressive power.
Figure 6: Logarithm of validation errors of the two training mechanisms based on using the natural parameters of the observable representation and the unique identifiability space

Figure 7: Training and testing performance of the two training mechanisms after 1500 gradient descent iterations based on using the natural parameters of the observable representation (pane (a)) and the unique identifiability space (pane (b))

We saw that even in the canonical situation, the availability of the controllable/observable representations did not yet provide a well-specified learning problem for this category, since the invariance of these systems under system automorphisms implies the existence of symmetries (or degeneracies) in those parametrizations. We tackled this problem by solving the unique identifiability of input-output dynamics of linear port-Hamiltonian systems in normal form by characterizing the quotient space by system automorphisms as a Lie groupoid orbit space. Moreover, we showed that in the canonical case the corresponding quotient spaces can be characterized as orbit spaces with respect to an explicit group action and that they can be explicitly endowed with a smooth manifold structure that has global Euclidean coordinates that can be used at the time of constructing estimation algorithms. Consequently, we showed that canonical port-Hamiltonian dynamics can be identified fully and explicitly in either the controllable or the observable Hamiltonian representations and learned by estimating a unique set of parameters in a smooth manifold that is obtained as a group orbit space. Additionally, we complemented this learning scheme with results that allow us to extend it to situations where we remain agnostic as to the dimension of the underlying data-generating port-Hamiltonian system.
We concluded the paper with some numerical examples that illustrate the viability of the method that we propose in systems with various levels of complexity and dimensions as well as the computational advantages associated with the use of the parameter space in which unique identification is guaranteed.
## Acknowledgments
The authors thank Lyudmila Grigoryeva for helpful discussions and remarks and acknowledge partial financial support from the Swiss National Science Foundation (grant number 175801/1) and the School of Physical and Mathematical Sciences of the Nanyang Technological University. DY is funded by the Nanyang President's Graduate Scholarship of Nanyang Technological University.
|
2308.00760 | Manipulating Topological Quantum Phase Transitions of Kitaev's Quantum
Spin Liquids with Electric Fields | Highly entangled excitations such as Majorana fermions of Kitaev quantum spin
liquids have been proposed to be utilized for future quantum science and
technology, and a deeper understanding of such excitations has been strongly
desired. Here we demonstrate that Majorana fermion's mass and associated
topological quantum phase transitions in the Kitaev quantum spin liquids may be
manipulated by using electric fields in sharp contrast to the common belief
that an insulator is inert under weak electric fields due to charge energy
gaps. Using general symmetry analysis with perturbation and exact
diagonalization, we uncover the universal phase diagrams with electric and
magnetic fields. We also provide distinctive experimental signatures to
identify Kitaev quantum spin liquids with electric fields, especially in
connection with the candidate materials such as $\alpha$-RuCl3. | Pureum Noh, Kyusung Hwang, Eun-Gook Moon | 2023-08-01T18:01:03Z | http://arxiv.org/abs/2308.00760v1 | # Manipulating Topological Quantum Phase Transitions of Kitaev's Quantum Spin Liquids with Electric Fields
###### Abstract
Highly entangled excitations such as Majorana fermions of Kitaev quantum spin liquids have been proposed to be utilized for future quantum science and technology, and a deeper understanding of such excitations has been strongly desired. Here we demonstrate that Majorana fermion's mass and associated topological quantum phase transitions in the Kitaev quantum spin liquids may be manipulated by using electric fields in sharp contrast to the common belief that an insulator is inert under weak electric fields due to charge energy gaps. Using general symmetry analysis with perturbation and exact diagonalization, we uncover the universal phase diagrams with electric and magnetic fields. We also provide distinctive experimental signatures to identify Kitaev quantum spin liquids with electric fields, especially in connection with the candidate materials such as \(\alpha\)-RuCl\({}_{3}\).
_Introduction:_ Quantum spin liquids (QSLs) intrinsically host an enormous amount of quantum entanglement, which has attracted a great deal of interest in the research of future science and technology [1; 2; 3; 4; 5]. The intrinsic massive entanglement prevents quantum spin liquids from developing a trivial magnetic ordering, and instead emergent novel excitations may appear in QSLs. The Kitaev quantum spin liquid (KQSL) is one QSL that has attracted significant attention [6]. In KQSLs, the model of interacting spin degrees of freedom is exactly solvable, leading to emergent Majorana fermions and Abelian or non-Abelian anyons. These exotic properties make KQSLs promising platforms for topological quantum computation [7; 8].
The search for candidate materials that can exhibit KQSL behavior has been a major challenge in the field of condensed matter physics. In recent years, significant progress has been made in identifying and characterizing KQSL candidate materials, such as \(\alpha\)-RuCl\({}_{3}\)[9; 10; 11; 12; 13; 14; 15; 16; 17] and Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\)[18; 19; 20], through various experiments [21; 22; 23; 24]. One of the unique features of KQSLs is their response to external magnetic fields, which can induce exotic phases such as a chiral spin liquid [6]. Despite the theoretical predictions, the experimental investigation of KQSLs in magnetic fields has remained challenging due to the need to explore a narrow range of magnetic fields [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38].
In this letter, we demonstrate striking characteristics of electric-field-driven topological quantum phase transitions (TQPTs). First, varying the amplitude of electric fields (\(E\)), we find TQPTs between critical states and bulk energy-gapped states. The former states, for \(E<E_{c}^{l}\), host a Fermi surface of Majorana fermions, also called a Majorana-Fermi surface (MFS) [39], while the latter states, for \(E>E_{c}^{l}\), carry well-defined topological invariants. We remark that the presence of such TQPTs is in drastic contrast to the conventional wisdom in the literature that the size of the Fermi surfaces of Majorana fermions is proportional to \(E\). Second, by rotating an electric field, we find the possibility of two types of TQPTs between the phases with opposite topological invariants. One type is conventional in the sense that TQPTs appear with quantum critical points, but the other type permits in-between quantum critical states. Remarkably, the two types of TQPTs are only possible for intermediate amplitudes of electric fields because they are washed away for small enough electric fields and KQSLs become unstable for strong enough electric fields. By utilizing these characteristics, we also propose how to detect KQSLs in candidate materials such as \(\alpha\)-RuCl\({}_{3}\).
_Model Hamiltonian._ Let us consider the isotropic Kitaev model under electric (\(\mathbf{E}\)) and magnetic fields (\(\mathbf{h}\)) to be specific and discuss its generalization below. The Hamiltonian is
\[H(\mathbf{h},\mathbf{E})=K\sum_{\langle i,j\rangle_{\gamma}}S_{i}^{\gamma}S_{j }^{\gamma}-\mathbf{h}\cdot\mathbf{S}-\mathbf{E}\cdot\mathbf{P},\]
where \(\langle i,j\rangle_{\gamma}\) are the nearest-neighbor bonds with a component \(\gamma\in\{x,y,z\}\), and \(S_{j}^{\gamma}\) is a \(\gamma\) component spin operator at a site \(j\)[6]. The total spin operator is defined as \(\mathbf{S}=\sum_{j}\mathbf{S}_{j}\), and the interaction parameter (\(K\)) for the bond-dependent exchange interaction is introduced.

Figure 1: The universal topological phase transition by electric fields. (a) Diagram of the direction of electric (\(\mathbf{E}\)) and magnetic fields (\(\mathbf{h}\)). The magnetic field is in the direction of \(\hat{b}\), and the electric field lies in the \(\hat{b}-\hat{c}\) plane. (b) The universal phase diagram at angles of the electric field (\(\psi_{E}\)) and the strength of the electric field (\(E\)) ranging from \(0\) to \(|\mathbf{h}|\). The Chern number (\(\nu\)) is not defined due to the Majorana-Fermi surface (MFS) in the gray area, and the lower critical strength \(E_{c}^{l}\) is of the order of \(\frac{|\mathbf{h}|^{2}}{\Delta_{f}}\). Dashed blue and dotted black lines indicate topological phase transitions.
The explicit form of the electric polarization operator (\(\mathbf{P}\)) may be obtained by microscopic analysis [40; 41; 42], and for our purposes, it is enough to utilize the symmetry approach, following the previous works [39; 42]. Since \(\mathbf{P}\) is even under the time-reversal transformation and odd under the space-inversion transformation, the polarization operator becomes \(P^{\mu}\equiv\sum_{\langle i,j\rangle_{\gamma}}\mathbf{P}_{\gamma}^{\mu}\cdot(\mathbf{S}_{i}\times\mathbf{S}_{j})\), where \(\mathbf{P}_{\gamma}^{\mu}\) is a vector with \(27\) components. For the isotropic Kitaev model, only five of the \(27\) parameters are independent [39], and for clarity, we utilize the Hamiltonian,
\[H(\mathbf{h},\mathbf{E})=K\sum_{\langle i,j\rangle_{\gamma}}S_{i}^{\gamma}S_{j }^{\gamma}-\mathbf{h}\cdot\sum_{j}\mathbf{S}_{j}-\mathbf{E}\cdot\sum_{\langle i,j\rangle_{\gamma}}\mathbf{S}_{i}\times\mathbf{S}_{j},\]
in the main text as our prime example. We refer to supplementary materials (SM) for discussions about generic cases.
Let us first consider the symmetries of the Hamiltonian. For an ideal monolayer system, the Hamiltonian \(H(0,0)\) enjoys \(\mathbb{D}_{3}\), where \(\mathbb{D}_{3}\) is the dihedral group of order \(6\), in addition to the spatial inversion (\(\mathcal{P}\)) and the time-reversal (\(\mathcal{T}\)). Turning on a magnetic field breaks the time-reversal and \(\mathbb{D}_{3}\) symmetries completely, except for certain directions of the magnetic field. For example, the Hamiltonian \(H(\mathbf{h}\parallel\hat{\mathbf{b}},0)\) only enjoys the two-fold rotational symmetry along \(\hat{\mathbf{b}}\) and \(\mathcal{P}\). See Figure 1(a) for the notation of the directions of fields.
Physical quantities are characterized by representations of symmetry groups. For example, the thermal Hall conductivity (\(\kappa_{ab}\)) is odd under \(\mathcal{T}\) and \(C_{2}(\hat{\mathbf{b}})\), and it is even under \(\mathcal{P}\) (Table 1). It has been well understood that the two-fold rotational symmetry (\(C_{2}(\hat{\mathbf{b}})\)) protects the gapless condition of Majorana fermions in Kitaev quantum spin liquids.
Turning on an electric field, all the symmetries are completely broken except in the two cases:
* \(\mathbf{E}\parallel\hat{\mathbf{b}}\), \(\mathbb{G}_{b}\equiv\{C_{2}(\hat{\mathbf{b}}),\big{(}C_{2}(\hat{\mathbf{b}}) \big{)}^{2}\}\),
* \(\mathbf{E}\parallel\hat{\mathbf{c}}\), \(\mathbb{G}_{c}\equiv\{\mathcal{P}C_{2}(\hat{\mathbf{b}}),\big{(}\mathcal{P}C_ {2}(\hat{\mathbf{b}})\big{)}^{2}\}\),
where \(\mathbb{G}_{b,c}\) are the symmetry groups for each case. We note that the case of \(\mathbf{E}\parallel\hat{\mathbf{c}}\) is not invariant under \(C_{2}(\hat{\mathbf{b}})\) and \(\mathcal{P}\) symmetries but invariant under the combination, \(\mathcal{P}C_{2}(\hat{\mathbf{b}})\). Below, we show that both \(\mathbb{G}_{b}\) and \(\mathbb{G}_{c}\) protect the gapless Majorana fermions in KQSLs though their effects have significant differences in terms of TQPTs.
_Weak electric and magnetic fields._ Following the original approach of Kitaev [6], we utilize perturbative calculations with the Majorana representation of quantum spins (\(S_{j}^{\gamma}=ic_{j}b_{j}^{\gamma}\)) with four Majorana fermions (\(b_{j}^{\gamma},c_{j}\)) at a site \(j\) for weak electric and magnetic fields (\(|\mathbf{h}|,|\mathbf{E}|\ll K\)). The low-energy effective Hamiltonian below the flux gap (\(\Delta_{f}\)) becomes
\[H_{\rm eff}(\mathbf{h},\mathbf{E})=\frac{1}{2}\sum_{\mathbf{k}}\Psi_{\mathbf{ k}}^{\dagger}\left(\sum_{a=0,1,2,3}\epsilon_{a}(\mathbf{k},\mathbf{h}, \mathbf{E})\tau^{a}\right)\Psi_{\mathbf{k}},\]
with a two-component spinor, \(\Psi_{\mathbf{k}}=(c_{\mathbf{k},A},c_{\mathbf{k},B})^{T}\), and \(c_{\mathbf{r},A(B)}=\sqrt{\frac{2}{N}}\sum_{\mathbf{k}}e^{i\mathbf{k}\cdot \mathbf{r}}c_{\mathbf{k},A(B)}\). The identity and Pauli matrices in the sublattice spinor space (\(\tau^{0,1,2,3}\)) are introduced with the energy functions, \(\epsilon_{0,1,2,3}(\mathbf{k},\mathbf{h},\mathbf{E})\). The eigenenergy of the Hamiltonian is
\[E_{\pm}(\mathbf{k},\mathbf{h},\mathbf{E})=\epsilon_{0}(\mathbf{k},\mathbf{h}, \mathbf{E})\pm\sqrt{\sum_{a=1,2,3}\epsilon_{a}(\mathbf{k},\mathbf{h},\mathbf{E })^{2}}.\]
Without electric and magnetic fields, the energy functions vanish at the corners of the first Brillouin zone (\(\mathbf{k}=\pm\mathbf{K}_{M}\)), and the linear dispersion is determined by \(\epsilon_{1}(\mathbf{k},0,0)\) and \(\epsilon_{2}(\mathbf{k},0,0)\). Thus, in the regime of weak electric and magnetic fields, the presence of energy-gap or Fermi-surfaces of Majorana fermions is mainly determined by the chemical potential function (\(\mu(\mathbf{h},\mathbf{E})\equiv\epsilon_{0}(\mathbf{K}_{M},\mathbf{h}, \mathbf{E})\)) and the mass function (\(m(\mathbf{h},\mathbf{E})\equiv\epsilon_{3}(\mathbf{K}_{M},\mathbf{h}, \mathbf{E})\)). If \(|\mu(\mathbf{h},\mathbf{E})|<|m(\mathbf{h},\mathbf{E})|\), there is an energy gap, and the topological invariant (\(\nu\)) is given by the sign of \(m(\mathbf{h},\mathbf{E})\). As for the case of \(|\mu(\mathbf{h},\mathbf{E})|>|m(\mathbf{h},\mathbf{E})|\), the topological invariant is not defined because of the presence of MFS.
One can understand the symmetry properties of the energy functions by extending the original discussion of the projective representation by Kitaev [6] (see also SM). Note that \(m(\mathbf{h},\mathbf{E})\) is in the same representation as the thermal Hall conductivity \(\kappa_{ab}(\mathbf{h},\mathbf{E})\) (see Table 1), while \(\mu(\mathbf{h},\mathbf{E})\) is in a different representation since it is odd under the inversion symmetry.
\begin{table}
\begin{tabular}{c||c|c|c|c} \hline Physical quantities & \(\mathcal{T}\) & \(\mathcal{P}\) & \(C_{2}(\mathbf{b})\) & \(\mathcal{P}C_{2}(\mathbf{b})\) \\ \hline \hline \(\kappa_{ab}(\mathbf{h},\mathbf{E})\) & odd & even & odd & odd \\ \hline \(\nu\) & odd & even & odd & odd \\ \hline \(m(\mathbf{h},\mathbf{E})\) & odd & even & odd & odd \\ \hline \(h_{x}+h_{y}+h_{z}\) & odd & even & odd & odd \\ \hline \(h_{x}h_{y}h_{z}\) & odd & even & odd & odd \\ \hline \(E_{c}(E_{a}h_{a}+E_{b}h_{b})\) & odd & even & odd & odd \\ \hline \(h_{a}(E_{a}^{2}-E_{b}^{2})-2h_{b}E_{a}E_{b}\) & odd & even & odd & odd \\ \hline \hline \(\mu(\mathbf{h},\mathbf{E})\) & odd & odd & odd & even \\ \hline \(h_{a}E_{b}-h_{b}E_{a}\) & odd & odd & odd & even \\ \hline \(E_{c}h_{b}(h_{b}^{2}-3h_{a}^{2})\) & odd & odd & odd & even \\ \hline \end{tabular}
\end{table}
Table 1: Symmetry properties of physical observables under the time-reversal (\(\mathcal{T}\)), inversion (\(\mathcal{P}\)) and the two-fold rotation (\(C_{2}(\hat{\mathbf{b}})\)). The thermal Hall coefficient (\(\kappa_{ab}\)), topological invariant (\(\nu\)), and the mass function \(m(\mathbf{h},\mathbf{E})\) are in the same representation while the chemical potential function (\(\mu(\mathbf{h},\mathbf{E})\)) is in a different representation. We also present the functions of electric and magnetic fields in the two representations. All the quantities are invariant under three-fold rotations. See SM for more detailed information.
Our strategy is to utilize the symmetry properties of \(m({\bf E},{\bf h})\) and \(\mu({\bf E},{\bf h})\), which can be applied beyond the pure Kitaev model. For simplicity, we consider a magnetic field along a bond direction and an electric field on the bc plane with an angle \(\psi_{E}\),
\[{\bf h}=h\hat{\bf b},\quad{\bf E}=E(\cos\psi_{E}\hat{\bf b}+\sin\psi_{E}\hat{ \bf c}).\]
We find that
\[m({\bf h},{\bf E})=c_{m}\frac{hE^{2}}{\Delta_{f}^{2}}\sin(2\psi_{E}),\quad\mu( {\bf h},{\bf E})\ =\ c_{\mu}\frac{h^{3}E}{\Delta_{f}^{3}}\sin(\psi_{E})\]
up to the fourth order of electric and magnetic fields with two dimensionless constants \((c_{m},c_{\mu})\). Note that the forms of \(m({\bf h},{\bf E})\) and \(\mu({\bf h},{\bf E})\) for generic field directions are presented in SM.
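Since \(m\) and \(\mu\) fully determine the low-energy phase near \(\pm\mathbf{K}_{M}\) (a gap with \(\nu\) given by the sign of \(m\) when \(|\mu|<|m|\), an MFS when \(|\mu|>|m|\)), the structure of Figure 1(b) can be reproduced directly from the leading-order expressions above. A small sketch follows; it is ours, and the prefactors \(c_{m}=c_{\mu}=1\) are placeholder values rather than the actual model constants:

```
import numpy as np

def classify(h, E, psi_E, c_m=1.0, c_mu=1.0, gap=0.07):
    """Phase of the Majorana sector from the leading-order m and mu."""
    m = c_m * h * E**2 / gap**2 * np.sin(2 * psi_E)
    mu = c_mu * h**3 * E / gap**3 * np.sin(psi_E)
    if abs(mu) > abs(m):
        return "MFS (nu undefined)"
    return f"gapped, nu = {int(np.sign(m))}" if m != 0.0 else "gapless"

# Sweep the electric-field angle at fixed amplitudes, as in Figure 1(b).
for psi in np.linspace(0.0, np.pi, 7):
    print(f"psi_E = {psi:.2f}: {classify(h=0.02, E=0.01, psi_E=psi)}")
```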
A few remarks are as follows. First, the symmetry properties of the mass function (\(m({\bf E},{\bf h})\)) enforce the zero conditions,
\[m({\bf h},{\bf E})=0,\quad{\bf E}\parallel\hat{\bf b}\,\,\,{\rm or}\,\,\,{\bf E }\parallel\hat{\bf c}, \tag{1}\]
with magnetic fields along the bond directions, \({\bf h}\parallel\hat{\bf b}\). The zero conditions guarantee the existence of the gapless Majorana excitations. Second, the symmetry properties of the chemical potential function give the zero condition,
\[\mu({\bf h},{\bf E})=0,\quad{\bf E}\parallel\hat{\bf b}, \tag{2}\]
with magnetic fields along the bond directions, \({\bf h}\parallel\hat{\bf b}\). On the other hand, \(\mu({\bf h},{\bf E})\) is not generically zero for \({\bf E}\parallel\hat{\bf c}\), which indicates that the Majorana Fermi surfaces may appear near \({\bf E}\parallel\hat{\bf c}\) because \(|\mu({\bf h},{\bf E})|\) is generically bigger than \(|m({\bf h},{\bf E})|\).
_Exact diagonalization._ We further solve the model Hamiltonian on a 24-site cluster with the periodic boundary condition by using exact diagonalization. We determine the phase diagram for ferromagnetic Kitaev interaction (see SM for antiferromagnetic Kitaev interaction) by computing (i) the ground state energy second derivatives, \(-\partial^{2}E_{\rm gs}/\partial\xi^{2}\) (\(\xi=h,E\)), (ii) the \(\mathbb{Z}_{2}\) flux, \(\langle\hat{W}_{p}\rangle\), and (iii) the spin structure factor, \(S({\bf q})=\frac{1}{N}\sum_{i,j}\langle{\bf S}_{i}\cdot{\bf S}_{j}\rangle e^{i{ \bf q}({\bf r}_{i}-{\bf r}_{j})}\), as illustrated in Figure 2. The electric and magnetic fields are along the \(c\)-axis (\({\bf h},{\bf E}\parallel{\bf c}\)) for illustration.
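For reference, the spin structure factor used in Figure 2(b-e) is a direct quadratic form in the ground-state correlators. A minimal NumPy sketch (ours), taking the matrix of correlators \(\langle{\bf S}_{i}\cdot{\bf S}_{j}\rangle\) from the exact-diagonalization ground state as input:

```
import numpy as np

def structure_factor(corr, positions, qs):
    """S(q) = (1/N) sum_{i,j} <S_i . S_j> exp(i q.(r_i - r_j)).

    corr      : (N, N) real symmetric matrix of correlators <S_i . S_j>
    positions : (N, 2) site coordinates r_i of the 24-site cluster
    qs        : (M, 2) momenta at which S(q) is evaluated
    """
    N = corr.shape[0]
    phases = np.exp(1j * qs @ positions.T)        # (M, N), entries e^{i q . r_i}
    vals = np.einsum("mi,ij,mj->m", phases.conj(), corr, phases) / N
    return np.real(vals)                          # imaginary part vanishes by symmetry
```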
A few remarks are as follows. First, we find that the KQSL phase with a ferromagnetic Kitaev interaction is more stable under an electric field than a magnetic field, while the KQSL with an antiferromagnetic Kitaev interaction is more stable under a magnetic field, as shown in SM. Second, the upper critical electric field is \(E_{c}^{u}\approx 0.27\) without a magnetic field, while the critical magnetic field is \(h_{c}\approx 0.03\) without an electric field and increases upon turning on an electric field. The increase indicates that electric and magnetic fields are synergetic in stabilizing KQSLs. Third, we also find nearby phases marked by dashed lines. The ferromagnetic phases (FM-1,2,3,4) show spin moment canting within the plane and out of the plane due to the electric and magnetic fields. Although they are distinguished by the ground state energy second derivatives \(-\partial^{2}E_{\rm gs}/\partial\xi^{2}\) (\(\xi=h,E\)), we do not find any qualitative difference among the FM phases [Fig. 2(d,e)]. Fourth, the introduction of non-Kitaev interactions such as the Heisenberg interaction modifies the phase diagrams quantitatively but not qualitatively.
Based on the results of exact diagonalization, we conclude that the stability of the KQSLs under electric and magnetic fields is guaranteed, though their critical field values depend on the details of the microscopic Hamiltonian. Then, the symmetry properties of Majorana fermions become very useful for stable KQSLs. Namely, the two zero conditions, Eqns (1) and (2), are solely determined by the symmetry properties of \(\mathbb{G}_{b,c}\), indicating that the zero conditions work even beyond the pure Kitaev model. It is straightforward to show that non-Kitaev interaction terms induce effective interactions between gapless Majorana fermions, which are known to be irrelevant in the sense of renormalization group analysis, in addition to a trivial renormalization of the velocity. From now on, unless stated otherwise, we utilize the symmetry properties, and our results hold beyond the pure Kitaev model.
Figure 2: Phase diagram of the ferromagnetic Kitaev system (\(K=-1\)). (a) The phase diagram in the plane of \(h\) and \(E\). FM-1,2,3,4: ferromagnetically ordered phases. FP: field polarized phase. The color code means the \(\mathbb{Z}_{2}\) flux expectation value \(\langle\hat{W}_{p}\rangle\), and the dashed lines indicate the phase boundaries determined by the ground state energy second derivative \(-\partial^{2}E_{gs}/\partial\xi^{2}\) (\(\xi=h,E\)). (b-e) Spin structure factors for different phases. In each plot, the two hexagons denote the first and second Brillouin zones in momentum space. In all the results, the magnetic field and electric field are both aligned along the \(c\)-axis (\({\bf h},{\bf E}\parallel{\bf c}\)).

_Two types of TQPTs._ In KQSLs, we uncover two types of TQPTs. The first one is conventional in the sense that topological phases with \(\nu=\pm 1\) are generically connected through a quantum critical point, named the type-I TQPT. In other words, gapless Majorana fermions appear only at quantum critical points. Not only the zero conditions (\(m(\mathbf{h},\mathbf{E})=\mu(\mathbf{h},\mathbf{E})=0\)) but also the exclusion of Majorana Fermi surfaces is necessary to find such quantum critical points. The former is satisfied by \(\mathbf{E}\parallel\hat{\mathbf{b}}\) and the latter is fulfilled by \(|m(\mathbf{h},\mathbf{E})|>|\mu(\mathbf{h},\mathbf{E})|\) near \(\mathbf{E}\parallel\hat{\mathbf{b}}\). Then, we obtain the condition of the type-I TQPTs,
\[\mathbf{E}\parallel\hat{\mathbf{b}},\quad|\mathbf{E}|>E_{c}^{l},\quad(\text{ Type}\,\text{I}) \tag{3}\]
where the lower critical electric field (\(E_{c}^{l}\)) is introduced to exclude Majorana Fermi surfaces. Its value is determined by microscopic information. For example, the pure Kitaev model gives \(E_{c}^{l}=(c_{\mu}h^{2})/(2c_{m}\Delta_{f})\), and the critical points are illustrated by the dashed blue line in Figure 1(b). The second one is unconventional in the sense that topological phases with \(\nu=\pm 1\) are generically connected through quantum critical states with Majorana Fermi surfaces, named the type-II TQPT. The transition lines are determined by the condition,
\[|m(\mathbf{h},\mathbf{E})|=|\mu(\mathbf{h},\mathbf{E})|>0,\quad(\text{Type}\, \text{II}) \tag{4}\]
as illustrated by the dotted black line in Fig. 1(b). We note that the type-II TQPT completely disappears under the fine-tuned condition, \(c_{\mu}=0\), for the pure Kitaev model. In other words, the presence of \(\mu(\mathbf{h},\mathbf{E})\) is essential for the presence of type-II TQPTs, and both the electric and magnetic fields are necessary, as pointed out previously [39].
_Discussion and conclusion:_ We propose that electric-field-driven TQPTs may be utilized to identify KQSLs. Varying the amplitude of electric fields, TQPTs between critical states and bulk energy-gapped states generically appear. With \(\mathbf{h}\parallel\hat{\mathbf{b}}\), a small electric field (\(E<E_{c}^{l}\)) cannot introduce an energy gap of Majorana fermions, while a large electric field (\(E_{c}^{l}<E<E_{c}^{u}\)) induces a topological phase with a well-defined bulk energy gap. Such phase transitions may be readily observable in specific heat experiments, which probe low-energy excitations, as illustrated in Figure 3(a). Furthermore, the rotation of an electric field is a natural way to observe the two types of TQPTs for \(E_{c}^{l}<E<E_{c}^{u}\). We note that the type-II TQPTs around \(\mathbf{E}\parallel\hat{\mathbf{c}}\) are natural outcomes of the zero conditions, Eqns (1) and (2), which give a non-zero value of specific heat over temperature (\(C_{v}/T\)) in the zero temperature limit. Thus, the field-angle dependence of the specific heat naturally shows the two-fold symmetric behavior illustrated in Figure 3(b). Such characteristics are in drastic contrast to other paramagnetic phases, including partially polarized phases whose ground states are adiabatically connected to a simple product state without quantum entanglement [43].
We further discuss the energy scales of electric-field-driven TQPTs. Setting the Kitaev interaction term as unity, it is well known that the flux energy gap is \(\Delta_{f}\sim 0.07\)[6] and the KQSL is stable below \(h_{c}\sim 0.03\), which gives the lower critical electric field (\(E_{c}^{l}\sim 0.006\)) assuming that \(c_{\mu}\) and \(c_{m}\) are of the same order of magnitude. For real materials [41; 49; 43], the electric-field energy scale is estimated as \(E\sim 4\times 10^{-3}\,\text{meV}\sim 0.01\Delta_{f}\) for an electric-field strength of \(10^{6}\,\)V/m. Though our estimation needs to be scrutinized for real candidate materials, we expect that electric-field-driven TQPTs may be observable in experiments.
In conclusion, we investigate the electric-field-driven TQPTs in KQSLs. In sharp contrast to the common belief that an insulator is inert under weak electric fields due to charge energy gaps, KQSLs may host significant effects with small electric fields because of non-trivial symmetry properties of Majorana fermions of KQSLs. We find TQPTs between critical states and bulk energy-gapped states varying with the amplitude of electric fields. Also, by rotating an electric field, we find the possibility of the two types of TQPTs between the phases with opposite topological invariants. Such TQPTs are associated with characteristic structures of gapless excitations, and thus we propose intriguing specific heat signatures in candidate materials of KQSLs such as \(\alpha\)-RuCl\({}_{3}\).
Acknowledgements : We thank Ara Go and Takasada Shibauchi for earlier collaboration and invaluable discussion about experimental setups. P.N. and E.-G.M. were supported by the National Research Foundation of Korea funded by the Ministry of Science and ICT (No. 2021R1A2C4001847, No. 2022M3H4A1A04074153, No. 2023M3K5A1094813) and National Measurement Standard Services and Technical Services for SME funded by Korea Research Institute of Standards and Science (KRISS - 2022 - GP2022-0014). K.H. was supported by Individual Grant (No. PG071403) of Korea Institute for Advanced Study (KIAS) where computations were performed on clusters at the Center for Advanced Computation.
Figure 3: Schematic low-temperature specific heat (\(C_{v}\)) with magnetic fields and electric fields. (a) Temperature (\(T\)) dependence of \(\frac{C_{v}}{T}\) with fixed electric and magnetic fields. (b) Angle dependence (\(\psi_{E}\)) of the specific heat for \(E>E_{c}^{l}\) at a fixed temperature. |
2303.02205 | The Awkward World of Python and C++ | There are undeniable benefits of binding Python and C++ to take advantage of
the best features of both languages. This is especially relevant to the HEP and
other scientific communities that have invested heavily in the C++ frameworks
and are rapidly moving their data analyses to Python. Version 2 of Awkward
Array, a Scikit-HEP Python library, introduces a set of header-only C++
libraries that do not depend on any application binary interface. Users can
directly include these libraries in their compilation instead of linking
against platform-specific libraries. This new development makes the integration
of Awkward Arrays into other projects easier and more portable, as the
implementation is easily separable from the rest of the Awkward Array codebase.
The code is minimal; it does not include all of the code needed to use Awkward
Arrays in Python, nor does it include references to Python or pybind11. The C++
users can use it to make arrays and then copy them to Python without any
specialized data types - only raw buffers, strings, and integers. This C++ code
also simplifies the process of just-in-time (JIT) compilation in ROOT. This
implementation approach solves some of the drawbacks, like packaging projects
where native dependencies can be challenging. In this paper, we demonstrate the
technique to integrate C++ and Python using a header-only approach. We also
describe the implementation of a new LayoutBuilder and a GrowableBuffer.
Furthermore, examples of wrapping the C++ data into Awkward Arrays and exposing
Awkward Arrays to C++ without copying them are discussed. | Manasvi Goyal, Ianna Osborne, Jim Pivarski | 2023-03-03T20:33:50Z | http://arxiv.org/abs/2303.02205v2 | # The Awkward World of Python and C++
###### Abstract
There are undeniable benefits of binding Python and C++ to take advantage of the best features of both languages. This is especially relevant to the HEP and other scientific communities that have invested heavily in the C++ frameworks and are rapidly moving their data analyses to Python. Version 2 of Awkward Array, a Scikit-HEP Python library, introduces a set of header-only C++ libraries that do not depend on any application binary interface. Users can directly include these libraries in their compilation rather than linking against platform-specific libraries. This new development makes the integration of Awkward Arrays into other projects easier and more portable as the implementation is easily separable from the rest of the Awkward Array codebase. The code is minimal, it does not include all of the code needed to use Awkward Arrays in Python, nor does it include references to Python or pybind11. The C++ users can use it to make arrays and then copy them to Python without any specialized data types - only raw buffers, strings, and integers. This C++ code also simplifies the process of just-in-time (JIT) compilation in ROOT. This implementation approach solves some of the drawbacks, like packaging projects where native dependencies can be challenging. In this paper, we demonstrate the technique to integrate C++ and Python by using a header-only approach. We also describe the implementation of a new LayoutBuilder and a GrowableBuffer. Furthermore, examples of wrapping the C++ data into Awkward Arrays and exposing Awkward Arrays to C++ without copying them are discussed.
## 1 Introduction
Awkward Array [1] is an important tool for physics analysis in Python for the High Energy Physics (HEP) community. It is a part of the Scikit-HEP [2] ecosystem. Nested, variable-length lists ("ragged" or "jagged" arrays), records with differently typed fields, missing data, and other heterogeneous data (union/variant types) can be defined as a set of primitives using NumPy-like [3] phrases in Python [4]. In Awkward Arrays, a single, user-facing ak.Array consists of one small tree with large, contiguous data buffers attached to each node [5], as shown in Figure 1. Compiled operations are performed on these data buffers, not the objects they represent.
In this work, we present new tools for creating Awkward Arrays in C++. Previously, the main codebase was written in C++ [6] with the idea that downstream code would link to libawkward.so, but that route is full of hidden issues [7]. The approach of a small, header-only library that only fills array buffers, letting downstream code pass them from C++ to Python using C-types only, has considerably more promise.
## 2 Python-C++ Integration
Nowadays, more front-end users use Python [8], but large-scale processing still needs the high performance of C++ [7]. That is why we combine Python and C++ to take advantage
of the best features of both languages so that we can have a Python user interface and, at the same time, take advantage of the performance and memory management of C++. HEP and other scientific communities have extensively invested in the C++ frameworks and are swiftly migrating their data analyses to Python. These communities are particularly interested in bridging the gap between the two languages [8]. This raises an important question: _'How to do Python-C++ integration the right way?'_, which is addressed in the following sections.
## 3 The 'Header-Only' Approach
A set of header-only C++ libraries has been introduced to address the issues in the Python-C++ integration in Awkward Arrays [7]. These templated C++ libraries are not dependent on any application binary interface (ABI). They can be directly included in a project's compilation without the need to link against platform-specific libraries. This 'header-only' approach not only simplifies the production of Awkward Arrays in a project but also enhances the portability of the Awkward Arrays. The code is minimal and does not constitute all of the code required to use Awkward Arrays in Python. It contains no references to Python or Python bindings. The header files can be used by C++ users to create Awkward Arrays, which can then be copied into Python without any specialized data types - only raw buffers, strings, and integers. This approach addresses the issue of packaging projects with native dependencies.
## 4 LayoutBuilder
A 'layout' consists of composable elements that determine how an array is structured. It can only build a specific view determined by the layout Form. LayoutBuilder [9] is a set of compile-time, templated static C++ classes implemented entirely in a header-only library. It uses a header-only GrowableBuffer (Figure 2), which is implemented as a linked list with smart pointers. awkward::LayoutBuilder specializes an Awkward data structure using C++ templates, which can be filled and converted to a Python Awkward Array through ak.from_buffers. The data comes out of LayoutBuilder as a set of named buffers and a JSON [10] Form. The Form is a unique description of an Awkward Array and returns a std::string that tells Awkward Array how to put everything together. LayoutBuilder is part of an awkward-cpp package that is separate from the awkward package. Both packages are individually pip-installable. The code does not have helper methods to pass the data to Python, so different projects can use different binding generators. The code relies on generalized lambda expressions to deduce parameter types at compile time, a feature available since the C++14 standard.

Figure 1: Structure of an Awkward Array with nested variable-length lists and records, color-coded with an array example.
ArrayBuilder [11] and LayoutBuilder are both used to create Awkward Arrays. The main difference between a LayoutBuilder and an ArrayBuilder is that the data types that can be appended to the LayoutBuilder are defined in advance, while any data types can be appended to an ArrayBuilder. LayoutBuilder is designed to build Awkward Arrays faster. The flexibility of ArrayBuilder comes with performance limitations since it needs to discover the data type, while LayoutBuilder knows it in advance.
## 5 User Interface of LayoutBuilder
This section explains the user interface of LayoutBuilder with the help of an example of an Awkward Array with nested records and variable-length lists.
### Phases of LayoutBuilder
There are three phases of using LayoutBuilder:
1. **Constructing a LayoutBuilder:** from variadic templates (It is an implicit template instantiation).
2. **Filling the LayoutBuilder:** while repeatedly walking over the raw pointers.
3. **Taking the data out to user-allocated buffers:** then, the user can pass them to Python.
### Illustrative Example
An example of RecordBuilder is illustrated in Listing 1. The first step is to include the LayoutBuilder header file (see [9] for the installation instructions). Next, the RecordBuilder is constructed with variadic templates. The contents of a RecordBuilder are heterogeneous type containers (std::tuple) that take the other Builders as the template parameters. The field names are non-type template parameters defined by the user. Currently, it is not possible to template on strings, as this functionality is only available from C++20 onwards. Therefore, for passing the field names as template parameters to the RecordBuilder, a user-defined field_map, with enumerated-type field IDs as keys and the field names as values, has to be provided. In the case of multiple RecordBuilders, a user-defined map has to be specified for each RecordBuilder used.
After that, the LayoutBuilder buffers are filled with the required data as shown in Listing 1. To make sure there are no errors while filling these buffers, the user can check their validity by using the is_valid() method, which can be called on every entry if they want to trade speed for safety.
[{"x": 1.1, "y": [1]}, {"x": 2.2, "y": []}, {"x": 3.3, "y": [1, 2]},]
Figure 2: Awkward Array GrowableBuffer implemented as a linked list with multiple panels, each of size = 5, that are allocated as needed, i.e., when the GrowableBuffer runs out of space.
#include "awkward/LayoutBuilder.h" ```
enumField:std::size_t{x,y}; UserDefinedMapfields_map({ Field::x,"x"}, {Field::y,"y"}}); //ConstructingaLayoutBuilderfromvariadictemplates! RecordBuilder< RecordField::x,NumpyBuilder<double>>, RecordField<Field::y,ListOffsetBuilder<int64_t,NumpyBuilder<int32_t>>> >builder(fields_map); auto&x_builder=builder.field<Field::x>(); auto&y_builder=builder.field<Field::y>(); //FillingtheLayoutBuilder x_builder.append(1.1); auto&y_subbuilder=y_builder.begin_list(); y_subbuilder.append(1); y_builder.end_list(); x_builder.append(2.2); y_builder.begin_list(); y_builder.end_list(); x_builder.append(3.3); y_builder.begin_list(); y_subbuilder.append(1); y_subbuilder.append(2); y_builder.end_list();
```
Listing 1: Example of a LayoutBuilder with nested records and variable-length lists.
We want NumPy to own the array buffers so that they get deleted when the Awkward Array goes out of Python scope, not when the LayoutBuilder goes out of C++ scope. The hand-off, therefore, needs a few steps:
1. Retrieve the set of buffer names and their sizes (as a number of bytes): std::map<std::string, size_t> names_nbytes = {}; builder.buffer_nbytes(names_nbytes);
2. Allocate memory for these buffers in Python with np.empty(nbytes, dtype=np.uint8) and get void* pointers to these buffers by casting the output of numpy_array.ctypes.data.
3. Let the LayoutBuilder fill these buffers: std::map<std::string, void*> buffers; builder.to_buffers(buffers);
4. Finally, the JSON Form is generated with: std::string form = builder.form();
The Form generated for the example in Listing 1 is shown in Listing 2. Now, everything can be passed over the border from C++ to Python using pybind11's [12] py::buffer_protocol for the buffers, as well as an integer for the length and a string for the Form. If the user ever needs to make a change in the format of the records (add, remove, rename, or change the field type), there is no need to change anything in the Python-C++ interface. All of that is contained in the specialization of the C++ template and the filling procedure, which are both in the C++ code.
{"class": "RecordArray", "contents": { "x": {"class": "NumpyArray", "primitive": "float64", "form_key": "node1"}, "y": {"class": "ListOffsetArray", "offsets": "i64", "content": { "class": "NumpyArray", "primitive": "int32", "form_key": "node3"}, "form_key": "node2"}" }, "form_key": "node0"}
Listing 2: Awkward Array Form for the example in Listing 1.
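To make the Python side of the hand-off concrete, the following is a minimal sketch of reassembling the array of Listing 1 with ak.from_buffers. The buffer names and the "{form_key}-{attribute}" naming convention shown here are our assumption about what the builder reports via buffer_nbytes(), and the buffer contents below are written out by hand rather than filled by the C++ side:

```
import awkward as ak
import numpy as np

form = """
{"class": "RecordArray",
 "contents": {
   "x": {"class": "NumpyArray", "primitive": "float64", "form_key": "node1"},
   "y": {"class": "ListOffsetArray", "offsets": "i64",
         "content": {"class": "NumpyArray", "primitive": "int32", "form_key": "node3"},
         "form_key": "node2"}},
 "form_key": "node0"}
"""

# In the real hand-off these are np.empty(nbytes, dtype=np.uint8) buffers whose
# pointers (numpy_array.ctypes.data) are filled by builder.to_buffers() in C++.
container = {
    "node1-data": np.array([1.1, 2.2, 3.3], dtype=np.float64),
    "node2-offsets": np.array([0, 1, 1, 3], dtype=np.int64),
    "node3-data": np.array([1, 1, 2], dtype=np.int32),
}

array = ak.from_buffers(form, 3, container)
# array.tolist() == [{"x": 1.1, "y": [1]}, {"x": 2.2, "y": []}, {"x": 3.3, "y": [1, 2]}]
```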
## 6 Applications
The header-only approach allows for multiple applications in both static and dynamic projects. Awkward RDataFrame [13] uses the C++ header-only libraries to simplify the process of just-in-time (JIT) compilation in ROOT [14]. The ak.from_rdataframe [15] function converts the selected ROOT RDataFrame [16] columns into native Awkward Arrays. The templated header-only implementation constructs the Form from the primitive data types [17]. The generation of all the types via templates makes it easier to dynamically generate a LayoutBuilder from strings in Python and then compile it with Cling [18].
Another application of the header-only LayoutBuilder could be in the ctapipe [19] project, which is currently in the planning stage. ctapipe is a framework for prototyping the low-level data processing algorithms for the Cherenkov Telescope Array [20]. It does some processing on structured ("awkward") event data, and the developers want to refactor their implementation to use Awkward Array. They already have C++ code that iterates over the custom file format, which has array types that are known at compile time. The easiest way to get Awkward Arrays into this project is to use a LayoutBuilder to fill the buffers and then send them to Python through pybind11.
## 7 Conclusion
The header-only approach presented in this paper facilitates the Python-C++ integration of Awkward Arrays and enhances their portability. A set of templated header-only libraries uses only C-types (integers, strings, and raw buffers) to build Awkward Arrays and send them to Python by generating a JSON Form. A standalone awkward header-only C++ package opens the door for users to analyze their data in Python. Awkward Arrays can be seamlessly integrated with external projects without linking against platform-specific libraries or worrying about native
dependencies. This new development also allows the extension of the use cases of Awkward Arrays to scientific communities beyond HEP.
## 8 Acknowledgment
This work is supported by NSF cooperative agreement OAC-1836650 (IRIS-HEP) and NSF cooperative agreement PHY-2121686 (US-CMS LHC Ops).
|
2305.01482 | Multitask learning in Audio Captioning: a sentence embedding regression
loss acts as a regularizer | In this work, we propose to study the performance of a model trained with a
sentence embedding regression loss component for the Automated Audio Captioning
task. This task aims to build systems that can describe audio content with a
single sentence written in natural language. Most systems are trained with the
standard Cross-Entropy loss, which does not take into account the semantic
closeness of the sentence. We found that adding a sentence embedding loss term
reduces overfitting, but also increased SPIDEr from 0.397 to 0.418 in our first
setting on the AudioCaps corpus. When we increased the weight decay value, we
found our model to be much closer to the current state-of-the-art methods, with
a SPIDEr score up to 0.444 compared to a 0.475 score. Moreover, this model uses
eight times fewer trainable parameters. In this training setting, the sentence
embedding loss has no more impact on the model performance. | Etienne Labbé, Julien Pinquier, Thomas Pellegrini | 2023-05-02T15:03:20Z | http://arxiv.org/abs/2305.01482v1 | # Multitask learning in Audio Captioning: a sentence embedding regression loss acts as a regularizer
###### Abstract
In this work, we propose to study the performance of a model trained with a sentence embedding regression loss component for the Automated Audio Captioning task. This task aims to build systems that can describe audio content with a single sentence written in natural language. Most systems are trained with the standard Cross-Entropy loss, which does not take into account the semantic closeness of the sentence. We found that adding a sentence embedding loss term reduces overfitting, but also increased SPIDEr from 0.397 to 0.418 in our first setting on the AudioCaps corpus. When we increased the weight decay value, we found our model to be much closer to the current state-of-the-art methods, with a SPIDEr score up to 0.444 compared to a 0.475 score. Moreover, this model uses eight times less trainable parameters. In this training setting, the sentence embedding loss has no more impact on the model performance.
sound event description, multitask learning, audio language task, overfitting, sentence embedding regression loss, semantic loss
## I Introduction
In recent years, new machine learning systems have been significantly improved for text processing, generation, and understanding, leading to the use of natural language as a global interface between humans and machines. Free-form text can contain much more information than a predefined set of classes, which could improve the machine understanding of our world. In audio, most of the tasks are focused on classification and localization of sound events. Following this idea, the Automated Audio Captioning (AAC) task appeared in 2017 [1] and aims to create systems that generate a sentence written in natural language that describes an audio file. The audio can contain various sound events (human, natural, domestic, urban, music, effects...) of different lengths, recorded with different devices and in different scenes. The description can contain any kind of detail in the audio, with temporal or spatial relations between them (followed by, in the background...) or different characterizations (high-pitched, short, repetitive...). Since the descriptions are written by humans, we need to consider different words used to describe similar sounds (_Birds are calling / chirping / singing / tweeting_), different sentence structures (_A door that needs to be oiled / A door with squeaky hinges_), subjectivity (_Man speaks in a foreign language_), high-level descriptions (_A vulgar man speaks / Unintelligible conversation_), and vagueness (_Someone speaks_ instead of _A man gives a speech over a reverberating microphone_).
In AAC, most approaches use deep learning models trained with the standard Cross-Entropy (CE) loss. However, this loss tends to generate repetitive and generic content [2] and does not take into account synonyms, varied sentence structures, or semantic closeness. Several studies introduced another criterion, Self-Critical Sequence Training [3] (SCST), used in reinforcement learning to fine-tune the model directly on a metric instead of the loss. This technique relies on sampling the next word to generate a new sentence. If this sentence has a higher score than the original one, the model is rewarded and the output probabilities for this new sentence are encouraged. However, this technique leads to degenerate sentences [4], with repetitive n-grams and no syntactical correctness.
Motivated by the limitations of CE and SCST, in this work we add a Sentence Embedding Regression (SER) loss, as used in [5], to improve our model. We begin this paper by describing our baseline system and then explain how the SER loss is added. We present the related work to which we compare, then describe the detailed hyperparameters. Finally, we present the results and discuss the differences.
## II Baseline system description
We use an encoder-decoder architecture widely used in AAC systems, with an encoder pre-trained on AudioSet [6] to extract a strong representation of sound events. More specifically, we used the CNN14_DecisionLevel_Att audio encoder from the Pre-trained Audio Neural Networks study (PANN) [7], with the pre-trained weights available on Zenodo1. This architecture gives the best results on the classification of the sound events present in the audio captioning dataset when compared to the other available PANN architectures. We found that freezing the encoder weights does not decrease performance while significantly speeding up the training process. This encoder provides sequences of embeddings of dimension \(31\times 2048\) for ten-second audio recordings. On top of that, we add a projection layer producing 256-dimensional embeddings to match the decoder input dimension \(d_{model}\).
Footnote 1: [https://zenodo.org/record/3987831](https://zenodo.org/record/3987831)
The decoder is a standard transformer decoder [8] with 6 layers, 4 attention heads per layer, a global embedding size \(d_{model}\) set to 256 and a global dropout probability of 0.2. We also used the GELU [9] activation layer in the decoder.
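To make the configuration concrete, the following is a minimal PyTorch sketch of the decoder described above. The feed-forward width, batch layout, and tensor shapes are our assumptions, since the paper does not state them; this is a sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

d_model = 256

# project 2048-dim PANN frame embeddings down to the decoder dimension
proj = nn.Linear(2048, d_model)

# 6 layers, 4 attention heads per layer, dropout 0.2, GELU activations
layer = nn.TransformerDecoderLayer(
    d_model=d_model, nhead=4, dropout=0.2, activation="gelu", batch_first=True
)
decoder = nn.TransformerDecoder(layer, num_layers=6)

audio = torch.randn(8, 31, 2048)       # (batch, frames, PANN embedding)
tokens = torch.randn(8, 20, d_model)   # embedded previous tokens (teacher forcing)
# causal mask: -inf above the diagonal, 0 elsewhere
mask = torch.triu(torch.full((20, 20), float("-inf")), diagonal=1)
out = decoder(tokens, memory=proj(audio), tgt_mask=mask)  # (8, 20, 256)
```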
The decoder is trained using teacher forcing, _i.e._, the ground-truth previous tokens are given to the model to predict the next one. The baseline criterion is the standard CE loss over the whole sequence between the output probabilities and the reference token classes.
During inference, we used the beam search algorithm with a beam size set to 2, since higher values do not bring improvements. We conditioned the sentence generation to improve performance and overall caption quality: we limit the prediction length to a minimum of 3 and a maximum of 30 tokens, and we forbid the model from generating the same token twice, except for stop-word tokens predefined in the Natural Language ToolKit (NLTK) [10] package. These constraints reduce the number of invalid sentences and repetitions and give a slight improvement in the performance of our model.
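One plausible way to implement the "no repeated token except stop words" constraint is as a logits mask applied at each beam-search step; the sketch below is hypothetical (the function name and interface are ours, not the paper's code). The length limits can be enforced the same way, by masking the end-of-sequence token before step 3 and forcing it at step 30.

```python
import torch

def forbid_repeats(logits: torch.Tensor, generated: list, stopword_ids: set) -> torch.Tensor:
    """Ban every already-generated token ID, except stop words."""
    for tok in set(generated) - stopword_ids:
        logits[tok] = float("-inf")  # this token can no longer be selected
    return logits
```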
## III Adding a sentence embedding regression loss
### _Sentence-BERT model_
The Sentence-BERT [11] (SBERT) model is a transformer-based model which combines a BERT [12] model with a pooling and a projection layer to produce a single embedding of a fixed size of 768 values for a given sentence.
Available SBERT models have been trained on two text databases: the Stanford Natural Language Inference (SNLI) [13] and the Multi-Genre Natural Language Inference (MultiNLI) [14] corpora. These datasets contain pairs of sentences annotated with a contradiction, entailment, or neutral label. To learn sentence semantics, the two SBERT embeddings are fed to a classification layer, which must predict the label of the pair.
### _SER loss_
To use the SBERT model to improve ours, both models need to share the same token units so that the sequence sizes match. BERT uses WordPiece tokens [15] instead of words, which form a vocabulary of 30522 different units in our experiments.
Fig. 1 summarizes the whole procedure and the layers used. During the training phase, we use audio features and the ground-truth previous tokens to generate the next token embeddings, named \(\hat{e}_{t}\). These embeddings are used in two different parts of the model. First, they are projected to logits using a classifier for the standard CE loss \(\mathcal{L}_{t}\). The token embeddings are also projected from 256 to 768 dimensions to match the SBERT embedding input shape. The resulting embedding \(\hat{e}_{s}\) is used as input, together with the ground-truth embedding, for the SER loss component \(\mathcal{L}_{s}\). In order to use the SBERT model to train ours, we need to remove the first layer of SBERT, which maps token IDs to embedding vectors (named "Embed" in the figure), since this layer is not differentiable. At inference time, only the classifier branch is used.
We tried several regression criteria for the \(\mathcal{L}_{s}\) loss: CosineEmbeddingLoss, L1Loss, MSELoss and SmoothL1Loss. The best one that we obtained is the SmoothL1Loss [16], a regression function which combines MSE and L1Loss, described in equation (1):
\[\mathcal{L}_{s}(\hat{e}_{s},e_{s})=\begin{cases}(\hat{e}_{s}-e_{s})^{2}\cdot \frac{1}{2\beta}&\text{if }|\hat{e}_{s}-e_{s}|<\beta\\ |\hat{e}_{s}-e_{s}|-\frac{\beta}{2}&\text{otherwise}\end{cases} \tag{1}\]
The \(\beta\) hyperparameter controls whether MSE or L1Loss is used. We kept the standard CE as our first component \(\mathcal{L}_{t}\) to help the model produce syntactically valid sentences. The final loss, given by equation (2), sums \(\mathcal{L}_{t}\) and \(\mathcal{L}_{s}\), with the latter weighted by a coefficient \(\lambda\):
\[\mathcal{L}=\mathcal{L}_{t}+\lambda\cdot\mathcal{L}_{s} \tag{2}\]
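A minimal PyTorch sketch of the combined objective in equations (1)-(2) is given below. Here `sbert_frozen` stands for the frozen SBERT model with its input embedding layer removed, and all names and tensor shapes are our assumptions rather than the authors' code.

```python
import torch.nn as nn

ce = nn.CrossEntropyLoss(label_smoothing=0.1)  # token-level component L_t
ser = nn.SmoothL1Loss(beta=1.0)                # sentence-level component L_s, eq. (1)
lam = 100.0                                    # lambda in eq. (2)
to_sbert = nn.Linear(256, 768)                 # match the SBERT input size

def caption_loss(logits, target_ids, dec_emb, sbert_frozen, ref_emb):
    # logits: (batch, seq, vocab); target_ids: (batch, seq); dec_emb: (batch, seq, 256)
    l_t = ce(logits.transpose(1, 2), target_ids)
    cand_emb = sbert_frozen(to_sbert(dec_emb))  # predicted sentence embedding
    return l_t + lam * ser(cand_emb, ref_emb)   # eq. (2)
```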
## IV Related work
The current state-of-the-art on AudioCaps [17] is a full transformer architecture named Audio Captioning Transformer (ACT) [18], pre-trained on AudioSet like the PANN models. During training, the system uses mixup [19] between audio waveforms and spectrograms to improve generalization, and concatenates the corresponding captions. For inference, Gaussian noise and SpecAugment [20] are used to produce several variants of the same sample, which are given to the model. The intermediate representations of the same example are averaged to produce a better sentence.
In [21], the authors proposed to use a pre-trained transformer decoder named BART [22] to generate better sentences. Their first encoder (YAMNet) predicts the names of the detected AudioSet classes to improve the audio representation. These class names are the inputs of the BART embedding layer and are added to the PANN encoder audio embeddings. The decoder is the pre-trained BART transformer decoder. We name their model BYP in the table, short for BART+YAMNet+PANNs.
Fig. 1: Overview of our proposed training method. The blue boxes are the pre-trained frozen layers, the purple ones the trainable layers and the orange ones the functions. The SBERT block contains the SBERT model with its first embedding layer removed.
An approach similar to ours has been proposed in [5]. There are notable differences (the audio encoder, optimizer and hyperparameters for optimization and generation) which provide a stronger baseline. Unlike them, we train our model with only one phase (CE+SER losses) instead of two (CE then CE+SER losses).
## V Experimental setup
### _Dataset_
We train and evaluate our models on the AudioCaps [23] dataset, which is the largest known audio-language dataset with human-generated captions. The audio files are 10-second clips from AudioSet [6] extracted from YouTube videos. Since some of the original videos have been removed or are unavailable, our version of the dataset contains 46230 of the 49838 files in the training subset, 464 of 495 in the validation subset, and 912 of 975 files in the testing subset. Each audio clip is described by one caption in the training subset and five captions in the validation and testing subsets.
### _Metrics_
We focused only on the captioning metrics and decided to dismiss the translation metrics (BLEU, ROUGE, METEOR), which are mainly based on n-gram overlap. CIDEr-D [24] computes the cosine similarity of the TF-IDF scores for common n-grams in candidates and references. SPICE [25] computes the F1-score of the graph edges representing the semantic propositions extracted from the sentences using a parser and grammar rules. SPIDEr [26] is the average of CIDEr-D and SPICE and is mainly used to rank AAC systems. Since we are also studying a sentence similarity loss, we added three model-based metrics from [27]: SBERT, FluErr and FENSE. The SBERT metric corresponds to the cosine similarity of the sentence embeddings extracted using a SBERT model. FluErr is the fluency error rate detected by a model trained to detect common errors made by captioning systems, such as incomplete sentences, repeated events, repeated adverbs, missing conjunctions and missing verbs. The FENSE metric is the SBERT score for each sentence, divided by 10 whenever a fluency error is detected. Finally, the last metric, "#Words", is the number of unique words used in the candidates over the whole subset.
### _Hyperparameters_
Hyperparameters are crucial for training deep learning systems. We found that the model can obtain drastically different scores when trained with different sets of hyperparameters. We optimized our hyperparameters to maximize the FENSE score on the validation subset of AudioCaps. We train our model for a total of \(K=100\) epochs with a batch size of 512 samples on a single GPU. We used the AdamW optimizer [28] with a weight decay (wd) of \(10^{-6}\) in the first experiments, and set to \(2\) to limit overfitting in the second setting. The weight decay is not applied to the bias weights of the network. We also note that the network does not converge when using the standard Adam [29] optimizer with a large wd. The initial learning rate \(\text{lr}_{0}\) is set to \(5\cdot 10^{-4}\), and the values of \(\beta_{1}\) and \(\beta_{2}\) are set to 0.9 and 0.999, respectively. We used a cosine decay scheduler updated at the end of each epoch \(k\) with the following rule: \(\text{lr}_{k}=\frac{1}{2}\big{(}1+\cos(\frac{k\pi}{K})\big{)}\text{lr}_{0}\). The captions are lowercased and all punctuation characters are removed. We clip the gradient l2-norm to 10 to stabilize training and apply label smoothing of 0.1 to the CE loss component. To select our best model among epochs, we used the highest FENSE score instead of the CE loss on the validation subset. Using FENSE allows us to choose a later training epoch than selecting with the validation loss, which gives better results.
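The cosine decay rule above is straightforward to implement; a minimal sketch:

```python
import math

def cosine_lr(k: int, K: int = 100, lr0: float = 5e-4) -> float:
    """Learning rate after epoch k: lr_k = 0.5 * (1 + cos(k*pi/K)) * lr_0."""
    return 0.5 * (1.0 + math.cos(k * math.pi / K)) * lr0

# decays from 5e-4 at epoch 0 to 2.5e-4 at epoch 50 and 0 at epoch 100
```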
For the sentence embedding regression method, we used the paraphrase-TinyBERT-L6-v2 model2, since it is the one used in the SBERT and FENSE metrics. We also tried larger models (all-mpnet-base-v2 and all-mpnet-base-v1), but they did not bring any improvement. The chosen model contains 67M parameters and its weights are frozen during training. We set \(\beta\) to 1 in the \(\mathcal{L}_{s}\) function. We tried several values for \(\lambda\) (1, 10, 100, 1000, and 10000), and found that 100 is the best value for sentence embedding regression. Higher values decrease performance, while lower values make the multitask objective have no impact compared to the baseline.
Footnote 2: [https://www.sbert.net/docs/pretrained_models.html](https://www.sbert.net/docs/pretrained_models.html)
## VI Results
We report in Table I the scores of our baseline method using a word tokenizer, of the method using the SBERT tokenizer (baseline+SBERT tokens), and of the SER loss method (baseline+SBERT tokens+SER loss). All of our scores are averaged over 5 seeds. We also added the scores of the other SER method, named CNN10-trans, the current state-of-the-art scores (Multi-TTA and BYP), and the cross-referencing scores. Cross-referencing is performed by excluding one of the five captions for each audio file and using it as a candidate sentence, while the other four remain the ground-truth references. This process is repeated five times to compute an average human agreement score, which we call "cross-references".
### _Discussion_
Using a small wd value, the SER loss shows a slight improvement: FENSE and SPIDEr scores respectively increased from 0.595 to 0.607 and from 0.397 to 0.418. Moreover, when we introduce a large wd value to prevent overfitting, we see a large improvement over our baseline in FENSE and SPIDEr scores, from 0.595 to 0.619 and from 0.397 to 0.445, respectively. Nevertheless, even though the SER loss also benefits from the use of a large wd value, the resulting scores become very close to the new baseline which also uses this wd value, and the regularization effect given by the SER loss no longer seems significant.
In Fig. 2, we can see that the increase in the validation learning curve is reduced by the SER loss with a small wd,
which means that it limits the overfitting of our model and explains the gain obtained in the table. The validation losses and SBERT cosine similarities in the figures show that the regularization with a large wd works very well on the model.
The increase in the number of trainable parameters between our baseline and our baseline+SBERT tokens (12.M to 25.7M) comes from the increase in vocabulary size (4724 to 30522), which drastically grows the number of parameters in the classifier and the input embedding layer of the decoder. The method using SBERT tokens and a large wd value has become our best model according to SPIDEr and FENSE, although it is closely followed by our other methods using the same decay. Our methods are also very close to the current state-of-the-art method, which obtains 0.475 compared to our best SPIDEr scores of 0.445 and 0.444, despite the fact that we have eight and four times fewer trainable parameters for our baseline and baseline+SBERT tokens, respectively.
### _Qualitative analysis_
Tables II and III show several examples of the sentences generated by our model under different training procedures. The baseline system using a small wd value seems to try to use more synonyms but fails more often to provide a good description, as in Table II. When using a large wd value, the system uses even fewer words and more generic sentence structures, despite being more accurate. In this case, we also note that the number of words used in the testing subset decreased from an average of 485.8 words to 387.2 in our baseline systems using small and large wd values, respectively. The reduction is even more pronounced when we add the SER loss, with only 348.8 words used on average.
Fig. 2: CE losses and SBERT cosine similarities over epochs on validation.

## VII Conclusions

In this work, we studied the addition of a sentence embedding regression loss component to improve an Automated Audio Captioning system. We searched for the most optimal configuration for our baseline by optimizing hyperparameters, conditioning generation, and using a stronger pre-trained encoder. We found that the SER loss component seems to limit overfitting in AAC systems, but it no longer brings improvement when combined with a stronger regularization method such as a large weight decay value. We also noticed that even if the main metrics (SPIDEr, FENSE) are improved by the regularization methods, they do not take into account the diversity of the words used. This diversity could be taken into account by future captioning metrics, or directly by the model during learning, as in [30].
|
2306.06865 | Deep denoising autoencoder-based non-invasive blood flow detection for
arteriovenous fistula | Clinical guidelines underscore the importance of regularly monitoring and
surveilling arteriovenous fistula (AVF) access in hemodialysis patients to
promptly detect any dysfunction. Although phono-angiography/sound analysis
overcomes the limitations of standardized AVF stenosis diagnosis tool, prior
studies have depended on conventional feature extraction methods, restricting
their applicability in diverse contexts. In contrast, representation learning
captures fundamental underlying factors that can be readily transferred across
different contexts. We propose an approach based on deep denoising autoencoders
(DAEs) that perform dimensionality reduction and reconstruction tasks using the
waveform obtained through one-level discrete wavelet transform, utilizing
representation learning. Our results demonstrate that the latent representation
generated by the DAE surpasses expectations with an accuracy of 0.93. The
incorporation of noise-mixing and the utilization of a noise-to-clean scheme
effectively enhance the discriminative capabilities of the latent
representation. Moreover, when employed to identify patient-specific
characteristics, the latent representation exhibited performance by surpassing
an accuracy of 0.92. Appropriate light-weighted methods can restore the
detection performance of the excessively reduced dimensionality version and
enable operation on less computational devices. Our findings suggest that
representation learning is a more feasible approach for extracting auscultation
features in AVF, leading to improved generalization and applicability across
multiple tasks. The manipulation of latent representations holds immense
potential for future advancements. Further investigations in this area are
promising and warrant continued exploration. | Li-Chin Chen, Yi-Heng Lin, Li-Ning Peng, Feng-Ming Wang, Yu-Hsin Chen, Po-Hsun Huang, Shang-Feng Yang, Yu Tsao | 2023-06-12T04:46:01Z | http://arxiv.org/abs/2306.06865v1 | # Deep denoising autoencoder-based non-invasive blood flow detection for arteriovenous fistula
###### Abstract
Clinical guidelines underscore the importance of regularly monitoring and surveilling arteriovenous fistula (AVF) access in hemodialysis patients to promptly detect any dysfunction. Although phono-angiography/sound analysis overcomes the limitations of standardized AVF stenosis diagnosis tools, prior studies have depended on conventional feature extraction methods, which are susceptible to non-stationarity, incapable of capturing individual patient characteristics, and unable to account for variations based on the severity and positioning of stenosis, thereby restricting their applicability in diverse contexts. In contrast, representation learning captures fundamental underlying factors that can be readily transferred across different contexts. We propose an approach based on deep denoising autoencoders (DAEs) that performs dimensionality reduction and reconstruction tasks using the waveform obtained through one-level discrete wavelet transform, utilizing representation learning. Our results demonstrate that the latent representation generated by the DAE surpasses expectations with an accuracy of 0.93. The incorporation of noise-mixing and the utilization of a noise-to-clean scheme effectively enhance the discriminative capabilities of the latent representation. Moreover, when employed to identify patient-specific characteristics, the latent representation exhibited strong performance, surpassing an accuracy of 0.92. Appropriate light-weight methods can restore the detection performance of the excessively reduced dimensionality version and enable operation on devices with less computational power. Our findings suggest that representation learning is a more feasible approach for extracting auscultation features in AVF, leading to improved generalization and applicability across multiple tasks. The manipulation of latent representations holds immense potential for future advancements. Further investigations in this area are promising and warrant continued exploration.
Arteriovenous fistula, deep denoising autoencoder, latent representation, pretrained model, representation learning, vascular access surveillance.
## I Introduction
To ensure optimal dialysis treatment, individuals undergoing hemodialysis (HD) necessitate an adequate vascular access that remains stable over time. The arteriovenous fistula (AVF), an anastomosis expertly crafted between arteries and veins, emerges as the preferred choice due to its diminished morbidity rate and prolonged patency [1]. Nevertheless, the occurrence of AVF stenosis resulting from neointimal hyperplasia and subsequent reduction in blood flow can culminate in vascular thrombosis and AVF failure [2]. Numerous investigations have reported AVF patency rates ranging between 50% and 80% at the conclusion of the first year following creation, with figures declining to 20% and 60% at the conclusion of the second year [3]. Consequently, the preservation of a functional AVF persists as an obstacle for HD patients.
While angiography stands as the definitive method for diagnosing AVF stenosis, it is burdened by invasiveness, high costs, protracted procedures, and associated side effects [4]. On the other hand, alternative non-invasive approaches, such as color-duplex ultrasound and physical examination (PE), present themselves as viable options. However, color-duplex ultrasound necessitates the availability of appropriate equipment and proficient personnel, while PE mandates the expertise of skilled operators who employ visual inspection, palpation, and auscultation [5, 6]. It is worth noting that PE can be susceptible to operator-dependent variations, leading to mixed outcomes in terms of accuracy in detecting and localizing AVF stenosis [7].
The guidelines established by the Kidney Disease Outcomes Quality Initiative (KDOQI) emphasize the importance of regular monitoring and surveillance of vascular access to enable the timely detection of dysfunction [6, 8, 9, 10]. Consequently, there arises a need for a straightforward, cost-effective approach that minimizes the reliance on specific devices and personnel. This would facilitate the seamless implementation of routine auscultation for AVFs.
Phono-angiography/sound analysis, being a non-invasive method, requires portable and cost-effective equipment, which has garnered considerable attention in the development of diagnostic tools for stenosis and thrombosis surveillance. The audible sound generated by turbulent blood flow and vessel vibrations can be analyzed to indicate the state of the fistula [10, 11]. Numerous studies have explored diagnostic tools for stenosis surveillance based on distinctive acoustic characteristics [5, 7, 11, 12, 13, 14]. However, the extraction and transformation of feature-specific information may suffer from limited generalizability, limiting the applicability of the results in different contexts.
In contrast, representation learning offers a technique wherein a latent, low-dimensional code embedding is learned, capturing the posterior distribution of the underlying factors that explain the observed input. This code can be easily transferred to construct a classifier for other tasks [15, 16]. The fundamental idea of this study is to develop an end-to-end, non-invasive technique for detecting AVF blood flow, utilizing representation learning. Such an assistive tool simplifies and standardizes auscultation, rendering it feasible for nephrologists, nurses, and even patients themselves. Additionally, it enables continuous monitoring of the progression of arteriovenous vessels in a non-invasive manner.
### Blood flow of arteriovenous fistula
In a mature AVF, the blood flow typically ranges from 600 to 1200 ml/min [17]. Both low and high volumes can lead to undesirable outcomes. Studies have proposed active surveillance and preemptive repair of subclinical stenosis when the blood flow falls below 750 ml/min, aiming to reduce thrombosis rates, costs, and prolong the functional lifespan of AVFs [18, 19]. Conversely, blood flow exceeding 1500 ml/min has been associated with an increased risk of distal ischemia, known as steal syndrome [20]. This phenomenon affects approximately 1-20% of HD patients with upper-arm AVFs and is characterized by digital coolness, pallor, mild paresthesia, and, in severe cases, tissue necrosis [21]. Regular monitoring of blood flow enables the early detection of AVF stenosis, which plays a crucial role in salvaging access function [22].
From an acoustical standpoint, a mature AVF exhibits a low-pitched continuous bruit that can be perceived throughout both systole and diastole, with heightened intensity near the arterial anastomosis. These bruits are the audible sounds originating from the fistula, which can be discerned through a stethoscope [11]. Conversely, in the presence of a stenosis, a high-pitched systolic bruit manifests distal to the stenosis, followed by a normal bruit proximally [6, 8, 17].
### Signal characteristics and feature extraction for AVF
Previous research has revealed significant variations in the acoustical characteristics of AVFs. Among the most frequently discussed acoustical features of AVFs is the pitch resulting from the bruit within the vessel. Some studies [10, 13, 14, 23] suggest that a higher degree of stenosis is indicated by a high-pitched bruit. Additionally, certain research indicates that a higher velocity of blood flow corresponds to a higher frequency [5]. Other studies identify specific frequencies for stenosis detection, such as frequencies above 200-300 Hz [24] or around 700-800 Hz [25]. There are arguments proposing the combination of amplitude and frequency information for more comprehensive analysis [11], as well as the simultaneous consideration of time and frequency domain information [13]. Conversely, some researchers argue that frequency analysis should differentiate between the systolic and diastolic phases [5, 10, 26].
These varying findings align with the fact that AVF auscultations are subjective, dependent on staff expertise, subject to non-stationarity, specific to individual patient characteristics, and differ based on the severity levels and positioning of stenosis [5, 6, 7, 10, 11].
#### I-A1 Various feature extraction transformations
In the evaluation of AVFs, several feature extraction transformations have been employed. These include the fast Fourier transform (FFT) [24], short-time Fourier transform (STFT) [5], wavelet transform (WT) [11, 25, 27, 28], Mel spectrograms [26], and intrinsic mode functions (IMF) [14]. Some studies propose combining multiple coefficients, such as incorporating the ratio of frequency power, Mel-frequency cepstral coefficients (MFCC), and normalized cross-correlation coefficient [12], or combining power spectral density (PSD) and wavelet decomposition [27], or utilizing the mean and variation of the center of frequency and energy ratio within a defined frequency band [5]. Wang _et al._[13] further introduced the S-transform, which preserves information from blood flow sounds in both the time and frequency domains simultaneously.
In line with the differences between the systolic and diastolic phases, heartbeat peaks and periods have also been detected [11, 28], proving to be distinguishable when multiple frequency filtering techniques are applied [5, 11, 23]. The variations observed among current studies highlight the absence of a consensus regarding an optimal feature extraction transformation.
#### I-A2 Diverse classification labels
Another factor contributing to the difficulty in comparing related works is the variability in the AVF vascular access indicators targeted by each study. Some studies classified the fistula into six staff-defined conditions, ranging from the best to the worst condition [24, 28], while others categorized the sounds into five types, including normal, hard, high, intermittent, and whistling [29]. Alternatively, some studies employed a binary classification to denote stenosis above or below 50% [5, 23, 26]. Other classifications were based on indicators such as a resistance index (RI) above or below 60%, which indicates the difficulty of blood flow to the distal end [12], or a luminal diameter above or below 50% in the AVF vessel, also referred to as the size or width of the vessel [13].
Furthermore, while guidelines recommend weekly surveillance of the vascular access for early dysfunction detection, most studies focused on stenosis based on significant contrast conditions, such as before and after percutaneous transluminal angioplasty (PTA) [5, 26], or stenosis versus non-stenosis [11, 13, 14, 23, 24]. These approaches do not align with the objective of early surveillance.
#### I-A3 Varied puncture site measurements
The puncture location of the AVF can be categorized into different sites, such as the site of arteriovenous anastomosis (site 1), arterial puncture site (site 2), venous puncture site (site 3), and so on (as illustrated in Fig. 1). The anastomotic site is located near the wrist, distal to the heart, while the proximal end of the AVF is situated proximal to the heart [13]. Each site exhibits distinct signal characteristics. Some studies focused on analyzing a single site [12, 13, 26, 28], while others measured multiple sites [5, 11, 14, 23]; however, the discussions predominantly revolved around the characteristics of each site individually. The combination or utilization of information from different sites has not been thoroughly explored.
#### I-A4 Limited sample size
The evaluation of AVF was typically conducted by healthcare professionals, and often only the assessment results were recorded. The use of stethoscope recordings was not a regular practice. Furthermore, the collection of stethoscope recordings required a quiet room to minimize background noise. The quality of the stethoscope and the recording equipment could also impact data collection and study quality. As a result, the collection of stethoscope recordings during AVF auscultation was typically done in a trial-based scenario, which added to the practical burden and resulted in a limited number of samples. The recruited patient cohorts in previous studies ranged from 5 to 74 subjects [5, 11, 12, 13, 14, 26, 27, 28, 29], which is a small sample size for machine learning applications.
To address the aforementioned constraints in current research, this study employs deep neural networks to overcome the challenges with the following design:
1. Initially, a pretrain model is trained for dimensionality reduction and reconstruction. The latent representation learned from this model serves as efficient acoustic features, accommodating the non-stationary and patient-specific characteristics of auscultation recordings. The generalizability of this representation is also assessed.
2. Blood flow measurement, a widely recognized indicator for vascular access, exhibits strong predictive power in early detection of dysfunction and AVF complications [7]. Hence, it is adopted as the prediction label in this study.
3. Information from different puncture sites is analyzed, and the combination of latent information is explored.
4. To mitigate the limitation of a small dataset and enhance robust representation learning, a noise-mixing approach is employed to augment the dataset.
5. Considering the applicability of the proposed method to lightweight devices with limited computational capabilities, such as stethoscopes or wearable devices, a further dimensionality reduction technique is demonstrated while preserving approximate predictive ability.
## II Methods
In this study, we propose a representation learning approach based on the architecture of a deep denoising autoencoder (DAE), as depicted in Fig. 2. The DAE is trained to perform dimensionality reduction and reconstruction tasks using the waveform of the one-level coefficients obtained after applying the discrete wavelet transform (DWT). The latent representation generated by the DAE is utilized for phono-angiography analysis in the downstream task.
### Deep autoencoder and deep denoising autoencoder
Deep autoencoder (AE) neural networks [30] are feed-forward multi-layer neural networks that aim to reconstruct the input data itself. The AE consists of an encoder and a decoder. The encoder, denoted as \(f_{\theta}\), transforms the input \(x\) into a hidden representation \(y\) through a deterministic mapping:
\[f_{\theta}(x)=s(Wx+b), \tag{1}\]
where \(s(\cdot)\) is a non-linear activation function, and \(\theta=\{W,b\}\) represents the weights and biases of the encoder. The decoder, denoted as \(g_{\theta^{\prime}}\), reconstructs the hidden representation \(y\) into a latent space \(z\):

\[z=g_{\theta^{\prime}}(y)=s(W^{\prime}y+b^{\prime}), \tag{2}\]

where \(\theta^{\prime}=\{W^{\prime},b^{\prime}\}\) represents the weights and biases of the decoder. The goal of the autoencoder is to minimize the discrepancy between the original input \(x\) and its reconstructed output \(z\) [31].

Fig. 1: Hemodialysis process, blood flow detection, and positioning of auscultation sites.

Fig. 2: Model architecture of the DAE. _E_: encoder; _D_: decoder; _L_: latent representation; \(\ominus\): subtraction of the two latent representations.
The constraint of reducing dimensionality at the bottleneck layer separates useful information from noise and less informative details. This dimensionality reduction helps remove irrelevant variations and focuses on the essential aspects of the data, enhancing classification performance. By training an AE on a reconstruction task, the bottleneck layer becomes specialized in encoding discriminative features, which can be highly informative for other classification tasks.
The DAE takes reconstruction one step further by considering noisy inputs [32], [33]. It introduces a noise-adding step applied to the initial input \(x\), yielding the noisy input \(\widetilde{x}\). The encoder then maps \(\widetilde{x}\) to a hidden representation:
\[y=f_{\theta}(\widetilde{x})=s(W\widetilde{x}+b). \tag{3}\]
The decoder reconstructs the clean input \(x\) from the hidden representation \(y\):
\[z=g_{\theta^{\prime}}(y). \tag{4}\]
The parameters \(\theta\) and \(\theta^{\prime}\) are trained to minimize the average reconstruction error over a training set, aiming to make the reconstructed output \(z\) as close as possible to the clean input \(x\).
The use of a denoising autoencoder allows the model to learn robust representations that are less sensitive to noise and variations in the input data. It can help to capture essential features while discarding irrelevant details and noise, leading to improved classification performance.
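The noise-to-clean objective can be written compactly; the sketch below follows equations (3)-(4), with additive noise, a single MSE term, and names of our choosing rather than the authors' code.

```python
import torch.nn.functional as F

def dae_step(encoder, decoder, clean, noise, optimizer):
    """One noise-to-clean training step: reconstruct the clean signal x from x_tilde."""
    x_tilde = clean + noise            # noisy input, eq. (3)
    z = decoder(encoder(x_tilde))      # reconstruction, eq. (4)
    loss = F.mse_loss(z, clean)        # minimize the average reconstruction error
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```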
### Experimental design
#### II-B1 Patient recruitment
Patients were recruited from the HD unit of Chung-Hsin General Hospital. The inclusion criteria for the study were patients with functioning AVFs who were undergoing regular HD treatment. Exclusion criteria included age younger than 20 years, unwillingness or inability to undergo scheduled exams or follow-up regularly, and inability to provide written informed consent. After enrollment, electronic stethoscopes were used to collect auscultation signals at three sites of a mature AVF: arteriovenous anastomosis (site 1), arterial puncture site (site 2), and venous puncture site (site 3). The relative positions of these sites are shown in Fig. 1. AVF blood flows were measured using a Transonic(r) Flow-QC(r) Hemodialysis Monitor, which introduced saline into the venous line with the dialysis lines in a normal position. Recirculation was calculated based on the change in blood concentration between the venous sensor and arterial sensor [34]. The study was approved by the institutional review board of Chung-Hsin General Hospital (817) 109A-56, and all procedures were conducted in accordance with the principles outlined in the Declaration of Helsinki. Informed written consent was obtained from all participants prior to enrollment.
Since the measured blood flow indicates the vessel access between site 2 and 3, this research focused on the auscultation recordings from site 2 and 3. A total of 199 patients were initially recruited, but patients who lacked labels or had incomplete recordings (e.g., lack of recordings from site 2 or 3) were excluded from the analysis. Ultimately, 171 patients were included in the study. The blood flow detection task was designed as a three-class classification, categorizing blood flow into \(<\)750 ml/min, 750-1500 ml/min, and \(>\)1500 ml/min, representing different clinical requirements for HD patients.
#### II-B2 Data preprocessing
The audio files were saved in the waveform audio file format (.wav) and digitized using a 16-bit analog-to-digital converter with a sample rate of 8 kHz. Preprocessing steps were applied to the audio recordings as follows: (1) Blank gaps before and after the auscultation sounds were removed. (2) The amplitude of the recordings was normalized. (3) The middle part of the recordings was segmented to avoid artifacts caused by placing and removing the stethoscope.
To extract pitch-specific acoustic features, the DWT was applied using a biorthogonal wavelet. Three levels of coefficients were obtained after the low-pass filter (LPF) and were denoted as \(w_{L1}\), \(w_{L2}\), and \(w_{L3}\). The waveform, FFT, and STFT were computed for the original signal and the three level coefficients, as shown in Fig. 3. The FFT and STFT representations were then normalized using the absolute values and the natural logarithm of one plus the input (\(log1p\)) transformation. The \(log1p\) transformation ensures that the values are above zero and normalizes potential errors that could be introduced by the distribution between positive and negative values [35], [36].
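A sketch of this preprocessing with PyWavelets follows; the specific biorthogonal wavelet order and the segment length are assumptions, since the paper does not report them.

```python
import numpy as np
import pywt

def preprocess(x: np.ndarray, sr: int = 8000) -> np.ndarray:
    x = x / np.max(np.abs(x))              # amplitude normalization
    mid = len(x) // 2                      # keep the middle segment to avoid
    x = x[mid - 2 * sr : mid + 2 * sr]     # stethoscope placement artifacts
    w_l1, _ = pywt.dwt(x, "bior3.5")       # one-level DWT, low-pass coefficients
    return w_l1

def log1p_norm(spec: np.ndarray) -> np.ndarray:
    return np.log1p(np.abs(spec))          # normalization for FFT/STFT inputs
```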
To overcome the limitation of a small number of samples, the auscultation audio recordings were mixed with seven different types of noise, including five colored noises (white, blue, violet, brown, and pink) and two types of background noise with people talking. The noise was added at two different volumes, resulting in a total of 2,394 mixed noisy auscultation recordings for sites 2 and 3, respectively.

Figure 3: Wavelet transform process. L: low-pass filter (LPF) + downsampling; H: high-pass filter (HPF) + downsampling; 1:2: downsampling by a factor of 2; \(w_{L1}\): one-level low-pass filtered coefficients; \(w_{L2}\): two-level low-pass filtered coefficients; \(w_{L3}\): three-level low-pass filtered coefficients; \(w_{H1}\): one-level high-pass filtered coefficients; \(w_{H2}\): two-level high-pass filtered coefficients; \(w_{H3}\): three-level high-pass filtered coefficients; FFT: fast Fourier transform; STFT: short-time Fourier transform.
#### II-B3 Model design
The architecture of the encoder and decoder for one-dimensional signals, i.e., waveform and FFT, consisted of three fully-connected layers with Leaky Rectified Linear Unit (LeakyReLU) activation functions between them. The output sizes of the encoder layers were set to 5000, 1000, and 100, while the decoder layers had output sizes of 100, 1000, and 5000. For two-dimensional signals such as STFT, the architecture involved three one-dimensional convolutional (Conv1D) layers and max-pooling layers with Rectified Linear Unit (ReLU) activation functions between them. The encoder layers had filter sizes of 64, 32, and 16, and the decoder layers had filter sizes of 16, 32, and 64. The kernel sizes of the max-pooling layers were set to 2. The model was trained using the Adam optimizer and the mean squared error (MSE) loss function, aiming to minimize the discrepancy between the input and output. As the downstream task involved a small number of samples, its training was based on a Radial Basis Function (RBF)-kernel Support Vector Machine (SVM) [37, 38]. The cost parameter \(C\) was set to 10 for the SVM.
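The fully-connected branch can be sketched as follows with the listed layer widths. The raw input length is an assumption: we take it equal to the first encoder width so that the decoder's final 5000-unit layer reconstructs an input of the same size.

```python
import torch.nn as nn

def mlp(sizes):
    """Fully-connected stack; LeakyReLU between layers, none after the last."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.LeakyReLU())
    return nn.Sequential(*layers)

in_dim = 5000  # assumed length of w_L1, matching the final decoder width
encoder = mlp([in_dim, 5000, 1000, 100])   # layer outputs: 5000, 1000, 100
decoder = mlp([100, 100, 1000, 5000])      # layer outputs: 100, 1000, 5000
```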
#### II-B4 Training strategies and classification metrics
The data were initially split into training and testing datasets for the pretraining of the encoder and decoder. The latent representation generated during the testing phase served as the dataset for training the downstream task. This dataset was then further divided into training and testing sets after balanced sampling. All downstream tasks were performed based on balanced sampling. The training process is illustrated in Fig. 4.
The quality of the learned latent representation was assessed using discrimination analysis. The autoencoder (AE) was set as the baseline and trained to reconstruct both the original clean signals (clean-to-clean) and the noise-mixed signals (noisy-to-noisy). The comparison between the two reconstructions indicated the effectiveness of enlarging the dataset. Additionally, the DAE was compared as it reconstructed clean audio from noisy audio, demonstrating the effectiveness of the asymmetric input and output approach.
To conduct a comprehensive analysis, various feature extraction methods proposed previously were tested for blood flow detection, including S-transform [13], IMF [14, 39], and Mel spectrogram [26]. The latent representation from different combinations of sites 2 and 3 were examined, as well as their individual representations. Furthermore, the generalization ability of the latent representation was assessed by predicting patient-specific information such as gender, hypertension (HTN) diagnosis, and diabetes mellitus (DM) diagnosis.
To achieve further dimensionality reduction for portable devices, the latent representation was condensed using principal component analysis (PCA) to lower dimensionalities. It was then concatenated with demographic information, including gender, age, HTN, and DM, as shown in Fig. 5. Categorical variables were one-hot encoded, and numeric variables were normalized using the \(log1p\) function. Three algorithms based on different mechanisms were examined: SVM, k-nearest neighbors (KNN) [40], and the Light Gradient Boosting Machine (LightGBM) [41]. The number of neighbors in KNN was set to 3; the number of boosted trees in LightGBM was set to 100, and the learning rate was set to 0.05.
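A sketch of this condensation pipeline is given below; array shapes and variable names are ours, and in practice the PCA would be fit on the training split only.

```python
import numpy as np
from sklearn.decomposition import PCA
from lightgbm import LGBMClassifier

def condensed_features(latent, demo_onehot, age):
    """latent: (n, 100) site-2 minus site-3 codes; demo_onehot: encoded gender/HTN/DM."""
    z = PCA(n_components=2).fit_transform(latent)   # condensed representation
    return np.hstack([z, demo_onehot, np.log1p(age).reshape(-1, 1)])

clf = LGBMClassifier(n_estimators=100, learning_rate=0.05)
# clf.fit(condensed_features(latent_train, demo_train, age_train), flow_class_train)
```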
The reported classification metrics included the area under the receiver operating characteristic (AUROC) curve, accuracy, sensitivity, specificity, precision, and F1 score [42]. These metrics were calculated by averaging the performance over ten runs of the training and testing process. Taking the mean performances of the testing samples over multiple runs was considered more representative than a single validation result. Since all the metrics are percentage indicators and higher scores represent better performance, a general average score (Avg.) was calculated to provide an overall measure of performance across all metrics. This helps assess the overall performance of the model.
## III Results
Table I presents the demographic profile of the recruited patients. A majority were male (59.65%), with diagnoses of HTN (72.52%) and DM (64.91%). The average age of the patients was 67 years, and the largest group fell within the range of adequate blood flow (47.95%). Fig. 6 visualizes an audio sample in its original form, as well as \(w_{L1}\), \(w_{L2}\), and \(w_{L3}\); each level progressively filters out more information from the higher frequency band.
By comparing Table II(a) (n = 171) with (b) (n = 2,394), it becomes evident that augmenting the dataset through the noise-mixing approach effectively enhances the discrimination capabilities of the latent representation. Furthermore, a comparison between Table II(b) and (c) highlights the impact of an asymmetric input and output. It is noteworthy that the noise-to-clean scheme exhibits the ability to improve performance. Employing a lower level of DWT does not necessarily lead to performance improvement.

Fig. 4: Model training process. \(E\): encoder; \(D\): decoder; \(L\): latent representation; n: number of samples.

Fig. 5: Model architecture of the DAE multi-modality combination. \(E\): encoder; \(D\): decoder; \(L\): latent representation; \(\ominus\): subtraction of the two latent representations; \(\oplus\): concatenation of the two latent representations; PCA: principal component analysis.
The best latent representation, which serves as the baseline for subsequent comparisons, is generated by the DAE using \(w_{L1}\), achieving an average score of 0.95. The representations learned from the waveform, FFT, and STFT of \(w_{L1}\) were compared (Table III). The results indicate that the waveform is the most viable feature for generating a discriminative latent representation for downstream classification.
Table IV(a) presents the outcomes obtained by applying previously proposed methods in our scenario. However, none of these methods exhibited sufficient distinctiveness for AVF blood flow detection. In Table IV(b), we show the downstream task performance based on the representation of each individual site as well as the subtraction of sites 2-3. While each individual site achieved a satisfactory accuracy of 0.90, the subtraction of sites 2-3 surpassed other combinations, such as concatenation and addition. Table IV(c) provides an overview of the performance when patient-specific information was set as the classification target. All approaches yielded commendable performance, surpassing an average score of 0.93.
Using the original latent representation (_dim_ = 100) as a reference, Table IV(d) shows the outcomes obtained by reducing the dimensionality to 5, 4, and 2. As the dimensionality decreased, the performance also declined. However, when the condensed representation (_dim_ = 2) is concatenated with the demographic information, the performance can be restored to a level approximating the best-performing version using LightGBM, as illustrated in Table IV(e). Fig. 7 visualizes the feature importance as determined by the tree-based algorithm's branching. The results indicate that the model places the highest value on the condensed representation, followed by age, the presence of DM, gender, and HTN diagnosis.
## IV Discussion
Prior investigations [32] have underscored the perils of training deep neural network models directly on the supervised target through gradient descent, as random initialization may not yield optimal performance. Conversely, commencing the training process with a pre-trained model has demonstrated its efficacy in enhancing generalization. In our study, we employed DAEs to generate a distinctive representation well-suited for detecting blood flow. Our findings indicate that representation learning presents a more viable approach for extracting auscultation features in AVF. Feature extraction methods based on signal characteristics may exhibit high specificity to particular prediction scenarios and lack ease of transferability to other contexts. In contrast, the acquired latent representation showcases improved generalization and applicability in non-extreme contrast scenarios (e.g., stenosis and non-stenosis) as well as other patient-specific characteristics. While our study did not simultaneously collect different access indicators, such as RI or luminal diameter, to validate transferability, our findings suggest that a well-learned representation captures additional patient-specific information that can be transferred to multiple tasks. Furthermore, AVF blood flow serves as a more comprehensive measurement aligning with the need for early surveillance and supporting the detection of stenosis and other dysfunctions [17, 22]. Additionally, we have successfully applied the proposed architecture to pathological voice quality detection, yielding satisfactory outcomes [43].

Figure 6: Wavelet transform from level 1 to 3. \(w_{L1}\): one-level low-pass filtered coefficients; \(w_{L2}\): two-level low-pass filtered coefficients; \(w_{L3}\): three-level low-pass filtered coefficients.
Effective representation learning necessitates a sufficient amount of data. Despite our relatively large sample size of recruited patients, the original clean signal alone was insufficient to generate a well-learned representation. However, augmentation methods, such as the noise-mixing approach, offer a simple and feasible solution to overcome limitations in data size. Forcing the model to reconstruct the clean signal from the noisy signal proves to be an effective approach for generating a more representative representation. Our results demonstrate that the time domain information captured in the waveform is adequate for generating a well-learned representation, whereas the frequency domain information and time-dependent windows converted using FFT and STFT do not appear to be essential. Previous studies have also highlighted the sufficiency of time domain information for turbulent sound analysis [44], while additional information in the FFT window may introduce noise and lead to averaging within the window [28]. The fixed window width of STFT may not be ideal for accurately tracking dynamic signals [13]. Moreover, the inclusion of additional, less informative details can impede precise reconstruction, thereby generating a less representative representation. While a one-level DWT discards less informative details, excessive information loss (e.g., \(w_{L3}\)) hampers prediction performance.

Figure 7: Feature importance of LightGBM. LightGBM: light gradient-boosting machine; S1: dimension 1 of the condensed signal; S2: dimension 2 of the condensed signal; HTN: diagnosed with hypertension; DM: diagnosed with diabetes mellitus.
The intensity of the bruit is most pronounced near the arterial anastomosis (site 1), followed by the arterial and venous puncture sites (site 2 and 3). Subtracting the latent representation of site 3 from site 2 effectively indicates the blood flow loss in between, thus demonstrating distinguishable results. Previous works [45, 46] have demonstrated the possibility of reconstructing diverse images of real subjects through interpolations in latent space, highlighting the feasibility of manipulating latent representations to generate targeted outcomes.
Excessive dimensionality reduction at the bottleneck of AEs may hinder reconstruction performance. Consequently, we opted to condense the dimensionality after generating adequate representations. Our results illustrate that prediction performance can be restored by concatenating a vector of six elements using an appropriate machine learning method. The concatenated vector comprises heterogeneous information, necessitating the identification of a threshold to discriminate between different categories. This task can be effectively handled by tree-based algorithms [47]. The condensed representation continues to exhibit greater discriminative power compared to other variables, as indicated by its high value in tree-based algorithms. Furthermore, numeric variables tend to demonstrate higher discriminative capability than categorical variables.
## V Conclusion
Our study showcased the effectiveness of representation learning using DAEs for non-invasive AVF blood flow detection. This approach proved to be highly accurate and capable of capturing patient-specific information, enabling its application in various contexts. Furthermore, the learned representations maintained high performance even under highly condensed conditions. The manipulation of latent representations holds great promise for future advancements. Further exploration of the generated latent representation can enhance the development of smart stethoscopes and pave the way for future applications.
|
2301.13217 | Gaussian-boson-sampling-enhanced dense subgraph finding shows limited
advantage over efficient classical algorithms | Recent claims of achieving exponential quantum advantage have attracted
attention to Gaussian boson sampling (GBS), a potential application of which is
dense subgraph finding. We investigate the effects of sources of error
including loss and spectral impurity on GBS applied to dense subgraph finding
algorithms. We find that the effectiveness of these algorithms is remarkably
robust to errors, to such an extent that there exist efficient classical
algorithms that can simulate the underlying GBS. These results imply that the
speedup of GBS-based algorithms for the dense subgraph problem over classical
approaches is at most polynomial, though this could be achieved on a quantum
device with dramatically less stringent requirements on loss and photon purity
than general GBS. | Naomi R. Solomons, Oliver F. Thomas, Dara P. S. McCutcheon | 2023-01-30T19:00:03Z | http://arxiv.org/abs/2301.13217v1 | # Gaussian-boson-sampling-enhanced dense subgraph finding shows limited advantage over efficient classical algorithms
###### Abstract
Recent claims of achieving exponential quantum advantage have attracted attention to Gaussian boson sampling (GBS), a potential application of which is dense subgraph finding. We investigate the effects of sources of error including loss and spectral impurity on GBS applied to dense subgraph finding algorithms. We find that the effectiveness of these algorithms is remarkably robust to errors, to such an extent that there exist efficient classical algorithms that can simulate the underlying GBS. These results imply that the speedup of GBS-based algorithms for the dense subgraph problem over classical approaches is at most polynomial, though this could be achieved on a quantum device with dramatically less stringent requirements on loss and photon purity than general GBS.
## I Introduction
Gaussian boson sampling (GBS) is a non-universal model of quantum computation which samples from the photon number distribution of Gaussian squeezed states passed through a passive linear interferometer [1]. Unlike universal quantum computers, GBS can be realised by currently available quantum devices at a scale which is not efficiently simulable by a classical computer, and hence has been the subject of early claims of quantum advantage [2; 3; 4], challenging the extended Church-Turing thesis [5]. Recent proposals have suggested that GBS can be used for a number of different applications [6], several of which involve examining properties of graphs. These include dense subgraph identification, a problem which occurs in computing [7], computational biology [8; 9], and finance [10], as well as being used to predict molecular docking configurations [11]. Given the conjectured exponential complexity of simulating GBS, there is the potential that GBS enables genuine useful quantum computing applications offering exponential speedup.
However, current and near-term GBS experiments are likely to be significantly affected by error, having an impact on the effectiveness of GBS-based algorithms and the corresponding extent of any speedup over classical approaches. For example, in the presence of sufficient photon distinguishability and loss, GBS experiments can be efficiently classically simulated [12; 13]. On the other hand, accurately modelling imperfect photon purity requires the simulation of multiple spectral modes increasing the simulation complexity, and the impact of spectral impurity is subsequently not well understood in the context of large scale GBS.
This work uses methods for simulating GBS as described in [14], applied to the stochastic densest \(k\)-subgraph (D\(k\)S) finding algorithms in [15; 6]. We examine how effective GBS experiments are for this application, when including the effects of spectrally impure sources and loss. The densest \(k\)-subgraph problem is known to be NP-hard in general, and as such an efficient quantum algorithm is unlikely (as it is widely thought that NP is not contained in BQP, the class of problems solvable by a quantum computer, so NP-hard problems cannot be efficiently solved by any quantum computer, let alone GBS). Nevertheless it is important to know how errors will affect quantum approaches, and to elucidate the origin of any speedups offered, polynomial or otherwise. We show that the D\(k\)S finding algorithms are extremely robust to these sources of error, even in regimes that may be efficient to simulate classically. These results suggest that GBS applied to the D\(k\)S problem is likely to offer only a polynomial speedup over classical computing approaches at best, but that this advantage does not impose challenging hardware requirements on loss and photon purity, and could be realised more readily than a general GBS device. Given the level of errors that can be tolerated by these algorithms, we speculate whether any advantage offered on quantum devices should really be considered 'quantum', or if 'analogue' or 'optical' may be more fitting terminology.
## II Background
### Gaussian boson sampling
A Gaussian state is defined as a quantum state in which the Wigner function \(\mathcal{W}(\mathbf{p},\mathbf{q})\) is Gaussian, and thus an \(m\)-mode Gaussian state can be completely defined by a \(2m\times 2m\) covariance matrix \(\mathbf{\sigma}\), and a \(2m\) displacement
vector \(\vec{D}\)[16]. In the following, we will assume \(\vec{D}=\vec{0}\).
Gaussian boson sampling consists of measurements in the Fock basis of an input Gaussian squeezed state passed through an interferometer. The measurement probabilities for photon number resolving (PNR) detectors are given by [1]:
\[P(S;\{s_{i}\})=\frac{2^{N}}{s_{1}!s_{2}!...s_{n}!\sqrt{|\sigma_{Q}|}}\text{Haf}( \mathcal{A}_{S}), \tag{1}\]
in which \(S\) is the subset of modes involved in the detection outcome, \(s_{i}\) is the number of photons measured in mode \(i\), \(N=\sum_{i}s_{i}\), and \(\sigma_{Q}=\sigma+\mathds{1}\). Given the matrix \(\mathcal{A}=(X\otimes\mathds{1})(\mathds{1}-2\sigma_{Q}^{-1})\), we construct \(\mathcal{A}_{S}\) by repeating the \(i\)'th row and column of \(\mathcal{A}\) according to \(s_{i}\).
The difficulty of classically simulating GBS is a result of the difficulty of calculating the matrix hafnian: \(\text{Haf}(A)=\sum_{\mu\in M}\left(\prod_{k=1}^{|S|}A_{\mu_{2k-1},\mu_{2k}}\right)\), where \(M\) is the set of perfect matchings of \(S\), the different ways of 'pairing' the indices (every permutation \(\mu\) in which \(\mu_{2k-1}<\mu_{2k}\) and \(\mu_{2k-1}<\mu_{2k+1}\)). Each measurement carried out in a GBS experiment draws a sample from the distribution described by Eq. 1. Generating samples according to this distribution by direct calculation is #P-hard [17]. In the ideal case, the best algorithm for simulating GBS scales with the number of modes \(m\) and the measured number of photons \(N\) as \(O(mN^{3}2^{N/2})\)[18]. The complexity of drawing samples from an experimental apparatus is linear in the number of samples, but alongside the difficulties of physically constructing the device, there is the initial computational overhead of calculating the correct interferometer settings to produce the desired output. Matrix decompositions, e.g. Williamson and Bloch-Messiah, are needed to find the transformations to generate the correct state; these generally require \(O(m^{3})\) time in practice [19].
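For concreteness, the following minimal Python sketch (ours, not the optimised \(O(mN^{3}2^{N/2})\) algorithm of [18]) evaluates the hafnian by explicitly enumerating perfect matchings; its exponential cost illustrates why direct calculation is intractable.

```python
import numpy as np

def hafnian(A):
    """Hafnian via explicit perfect matchings: pair index 0 with each
    remaining index and recurse on the rest. Exponential cost,
    reflecting the #P-hardness of the exact calculation."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        total += A[0, j] * hafnian(A[np.ix_(rest, rest)])
    return total

# Sanity check: K_4 has three perfect matchings, so Haf(A) = 3.
A = np.ones((4, 4)) - np.eye(4)
assert hafnian(A) == 3.0
```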
### Mapping graphs to Gaussian states
A graph \(G\) is defined by a collection of vertices \(V\) and connecting edges \(E\), and is characterised by an adjacency matrix \(A\). Here we restrict ourselves to equal-weight, undirected graphs, although all graphs can be represented as Gaussian states [20]. To find this state we use the procedure defined in Ref. [6]. Firstly, the adjacency matrix \(A\) is diagonalised to find the eigenvalues \(\{\lambda\}\). Next we pick the scaling parameter \(c\) such that \(c<\lambda_{\text{max}}^{-1}\). We then construct \(\mathcal{A}=c(A\oplus A)\), and the covariance matrix can be found using
\[\mathbf{\sigma}=2(\mathds{1}-X\mathcal{A})^{-1}-\mathds{1},\qquad\text{with} \qquad X=\begin{pmatrix}0&\mathds{1}\\ \mathds{1}&0\end{pmatrix}. \tag{2}\]
This means that the probability of sampled outcomes will be proportional to \(|\text{Haf}(A)|^{2}\) (as \(\text{Haf}(A\oplus A)=|\text{Haf}(A)|^{2}\)). The scaling parameter (\(c\)) ensures \(\mathbf{\sigma}\) corresponds to physical squeezing values, and can be chosen to optimise the photon number distribution [11]. In principle the expected photon number can be arbitrarily high, but in practice this is limited by the amount of squeezing possible. The largest eigenvalue of the adjacency matrix is bounded above by the largest vertex degree in the graph, hence more well-connected graphs are likely to require greater squeezing levels [21].
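As an illustration of this mapping, the sketch below builds \(\mathbf{\sigma}\) from an adjacency matrix via Eq. 2; the function name and the default \(c=0.9/\lambda_{\text{max}}\) are our choices, the latter being one arbitrary value satisfying \(c<\lambda_{\text{max}}^{-1}\) rather than the optimised choice of [11].

```python
import numpy as np

def graph_to_covariance(A, c=None):
    """Map an adjacency matrix A to a Gaussian covariance matrix via
    Eq. 2: form calA = c(A (+) A), then sigma = 2(1 - X @ calA)^(-1) - 1."""
    m = A.shape[0]
    if c is None:
        c = 0.9 / np.max(np.linalg.eigvalsh(A))  # any c < 1/lambda_max
    calA = c * np.block([[A, np.zeros((m, m))],
                         [np.zeros((m, m)), A]])
    X = np.block([[np.zeros((m, m)), np.eye(m)],
                  [np.eye(m), np.zeros((m, m))]])
    I = np.eye(2 * m)
    return 2.0 * np.linalg.inv(I - X @ calA) - I
```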
The algorithms described in the next section are intended for use in the collision-free regime, which means that any outcomes containing multiple photons in one mode have negligible probability and can be ignored. For this reason we consider the use of threshold detectors which do not distinguish between photon numbers greater than zero. However, higher values of \(c\), which result in more collisions, tend to favour more samples being drawn in the preferred photon number or click subspace for D\(k\)S, and hence improve the efficiency of the algorithm. As the size of the graphs increases, the probability of collisions decreases, hence allowing data collected in this work to use higher values of \(c\).
### Dense subgraph identification
We will consider densest \(k\)-subgraph identification, the problem of finding the subgraph of \(k\) vertices with the largest density within the input graph \(G\). The density of \(G\) is given by: \(\rho(G)=\frac{2|E(G)|}{|V(G)|(|V(G)|-1)}\), in which \(|V(G)|\) and \(|E(G)|\) represent the number of vertices and edges in the graph, respectively. The maximum possible density of a graph is therefore 1, for a fully connected graph (a clique). The D\(k\)S problem is NP-hard [22], although solutions to variations on this problem can be found in polynomial time [23; 24], as can approximate solutions [25].
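For reference, the density formula translates directly into code, and an exhaustive D\(k\)S solver (function names are ours) makes the exponential cost of the exact problem explicit; it is only feasible for very small graphs.

```python
import itertools
import networkx as nx

def density(G):
    """rho(G) = 2|E(G)| / (|V(G)|(|V(G)| - 1))."""
    n, e = G.number_of_nodes(), G.number_of_edges()
    return 2 * e / (n * (n - 1))

def brute_force_dks(G, k):
    """Exact densest-k-subgraph by exhaustive search over all
    k-vertex subsets -- exponential in |V(G)|."""
    best = max(itertools.combinations(G.nodes, k),
               key=lambda S: density(G.subgraph(S)))
    return G.subgraph(best)
```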
Consider a GBS experiment, where the Gaussian state leaving the interferometer has a covariance matrix as defined in Eq. 2 for some graph \(G\). We can then associate subsets \(S\) of output modes, corresponding to measurement outcomes, with subgraphs of \(G\), where the modes in \(S\) correspond to the vertices in the subgraph. As shown in [15], the number of perfect matchings in a graph, and hence the size of the hafnian of its adjacency matrix, is strongly correlated with the number of edges, so denser subgraphs are the most likely sampling outcomes. GBS can therefore be used to seed algorithms that identify the densest subgraphs.
## III Methods
### Boson sampling generated subgraphs
Different classical and quantum-enhanced algorithms exist for the D\(k\)S problem. In this work, we focus on
the simplest approach, where the classical case involves sampling from the uniform distribution of \(k\)-vertex subgraphs, and the quantum-enhanced case takes samples using GBS and retains only those subgraphs of the correct size (the random search algorithm in [15]). We tune the scaling parameter \(c\) to maximise the probability of click patterns of the correct size. Our approach can therefore be considered a best-case scenario for the GBS algorithm, where no overhead in the required number of samples is incurred to acquire \(k\)-vertex subgraphs. In the Supplementary Material we explore the performance of the base (photon-number-varying) GBS algorithm, and also compare to the performance of simulated annealing algorithms [15]. We use as a benchmark a deterministic classical algorithm, consisting of iteratively removing the lowest-degree vertex [26], which does not always find the densest subgraph, but is guaranteed to find one with density within a reasonable approximation ratio.
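A possible rendering of these two classical baselines, assuming NetworkX graphs (function names are ours): the greedy routine implements the lowest-degree-removal heuristic of [26], and the random-search routine is the uniform sampler that the GBS-enhanced algorithm replaces.

```python
import random
import networkx as nx

def greedy_dks(G, k):
    """Deterministic benchmark: repeatedly delete the lowest-degree
    vertex until only k vertices remain [26]."""
    H = G.copy()
    while H.number_of_nodes() > k:
        v = min(H.degree, key=lambda nd: nd[1])[0]
        H.remove_node(v)
    return H

def classical_random_search(G, k, n_samples):
    """Classical baseline: best of n_samples uniformly drawn k-vertex
    subgraphs; the GBS-enhanced version swaps out the sampler."""
    best = max((random.sample(list(G.nodes), k) for _ in range(n_samples)),
               key=lambda S: nx.density(G.subgraph(S)))
    return G.subgraph(best)
```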
Our main text focuses on one representative, randomly generated graph, with further examples (showing qualitatively similar results) considered in the Supplementary Material. To create the graphs in this work, we use the Erdős-Rényi form [27] for generating a random graph \(G(M,\rho)\) of size \(M\) and density \(\rho\), as described in the Supplementary Material. The quantum Gaussian optics emulator used to model outcome statistics is available online [28].
### Modelling spectral impurities
We introduce a new method for inserting internal degrees of freedom into a Gaussian state which represents a specific graph. Our method is general enough to work for any physical states, and can be thought of as replacing the idealised single (spectral) mode squeezers used to generate the state with more physical sources which have imperfections. The procedure to substitute in imperfect sources is as follows:
1. Construct a graph \(G\) with adjacency matrix \(A\).
2. Choose a scaling value, \(c\), and construct a quantum state, \(\mathbf{\sigma}\), from the graph, according to Eq. 2.
3. Perform the Williamson decomposition to find the symplectic transformation that generates the state \(\mathbf{\sigma}=\mathbf{M}\mathds{1}\mathbf{M}^{\dagger}\), \(\mathbf{M}=\mathbf{\sigma}^{1/2}\) (for a pure state).
4. Perform the Bloch-Messiah decomposition on the symplectic matrix to get the unitaries and single mode squeezers, \(\mathbf{M}=\mathbf{U}\mathbf{M}^{D}\mathbf{V}^{\dagger}\).
5. Replace the unitaries \(\mathbf{U}\) and \(\mathbf{V}\) with the full sized versions including spectral modes, \(\mathcal{U}=\mathbf{U}\otimes\mathds{1}_{N_{F}}\), \(\mathcal{V}^{\dagger}=\mathbf{V}^{\dagger}\otimes\mathds{1}_{N_{F}}\) (in which \(N_{F}\) is the number of spectral modes).
6. Replace the set of single mode squeezers with a set of single spatial mode but multiple spectral mode squeezers \(\mathbf{M}^{D}\mapsto\mathcal{M}^{D}\). These multiple spectral mode squeezers are normalised to the same amount of squeezing as the single spectral mode. Detectors are used that trace over the spectral modes, i.e. non-frequency resolving detectors.
7. The multimode state can then be constructed using \(\mathbf{\sigma}=\mathcal{M}\mathcal{M}^{\dagger}\), where \(\mathcal{M}=\mathcal{U}\mathcal{M}^{D}\mathcal{V}^{\dagger}\). To measure photons in the multimode formalism, we follow [14].
In order to characterise the sources, we first find the non-zero Schmidt coefficients. To do this, we pick a set of basis functions, typically the Hermite polynomials for integrated optics, although any complete set of normalised functions is suitable. We then construct the vector of singular values \(\{S\}\), using a set of coefficients \(X=\{x_{i}=s_{i}^{2}\}\). Remarkably, this allows a large number of geometric-type distributions of the set \(X\), which can be parameterised by only two numbers - here we use the labels \(l\), \(b\). We define the set \(X\) as \(X_{l,b}=\{x_{1}\}\cup\{x_{i}=k_{l,b}(i)(1-x_{1}),2\leq i\leq l\}\,,\) so that by construction the normalisation condition on the values \(k_{l,b}(i)\) is satisfied, \(\sum_{i}k_{l,b}(i)=1\). One useful form for the set \(k_{l,b}\) is the geometric scaling, \(k_{l,b}(i)=b^{l-i}/(\sum_{j=1}^{l-1}b^{j-1}).\) Here \(l\) is the number of non-zero Schmidt coefficients and \(b\) is the base which sets the factor for the geometric scaling between the elements \(x_{i}\). More information on this procedure is given in the Supplementary Information.
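A minimal sketch of this parameterisation (the function name is ours): given the dominant weight \(x_{1}\), the number of coefficients \(l\), and the base \(b\), it returns the set \(X_{l,b}\).

```python
import numpy as np

def schmidt_weights(x1, l, b):
    """Geometric parameterisation X_{l,b}: the l-1 sub-dominant squared
    Schmidt coefficients share (1 - x1) according to
    k_{l,b}(i) = b^(l-i) / sum_{j=1}^{l-1} b^(j-1)."""
    k = np.array([b ** (l - i) for i in range(2, l + 1)], dtype=float)
    k /= k.sum()                   # normalisation: sum_i k_{l,b}(i) = 1
    return np.concatenate(([x1], k * (1.0 - x1)))

x = schmidt_weights(x1=0.8, l=4, b=2.0)  # sums to 1; purity is sum(x**2)
```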
## IV Results
The first graph we consider has \(n=24\) vertices and a density of \(0.362\). The Supplementary Material also considers a graph with density \(0.196\), and a modified version containing a clique. We run the algorithm searching for the densest subgraph of \(k=8\) vertices, which has density \(0.714\). Fig. 1 compares the performance of the classical (purple) and quantum-enhanced random sampling algorithms in the error-free case (green). Shown also by the grey grid line is the result of the deterministic classical algorithm. The number of steps on the x-axis refers to the number of random subgraphs drawn from the target \(k\)-click subspace, with the final density being the maximum. The results are averaged over \(1000\) iterations. We can see that the quantum-enhanced algorithms outperform the classical algorithms, similarly to [15]. The deterministic algorithm performs well, but is quickly overtaken by the quantum-enhanced algorithms. Also shown is the performance of the GBS algorithm under different loss rates, as indicated in Fig. 1 (top). We model this by applying uniform loss to all modes after constructing the multi-spectral-mode state. We also vary the spectral purity of the sources used in the GBS emulation using the method described above,
shown in Fig. 1 (bottom). Interestingly, the quantum advantage offered by the GBS algorithm appears to be remarkably robust to loss and spectral impurity errors.
In order to see how the quantum advantage scales, we now investigate graphs of different sizes, all generated using the Erdős-Rényi form [27] with \(\rho=0.4\). We draw samples corresponding to \(k\)-vertex subgraphs where \(k\) is the closest integer to \(\sqrt{n}\), either from the uniform distribution (for the classical case) or from a simulated GBS device. Fig. 2 compares the average densities of these subgraphs after drawing 1000 samples, including with loss and spectral impurity. In the bottom plot, showing the difference between the quantum samples drawn and the classical samples, we see that the extent of the quantum advantage appears to decrease with increasing graph size, with a potential plateau approached above \(n=50\).
## V Discussion
It is clear that the quantum-enhanced D\(k\)S algorithms are resilient against the forms of error considered. A considerable speedup (compared to the classical case) is shown by the examples with high levels of error (up to 60% loss and sources with purity 0.4), with little difference between the effectiveness of these and the simulations with no error. This can in part be understood from Fig. 3, which shows the outcome probability distributions within the 8-click space (the samples used in Fig. 1) in the presence of loss and spectral impurity. Although some subtle differences can be seen in the distributions, the overall structure appears unchanged - namely, samples with high probabilities retain high probabilities. This gives some intuition as to why the algorithms continue to select dense subgraphs in the presence of errors.
It is worth noting that we have considered the performance of the quantum-enhanced algorithm when samples have been selected from the appropriate photon number subspace. This reflects the fact that in realistic implementations, samples drawn outside of this subspace can be manipulated into graphs of the correct size, with minimal overheads (for example, by removing low-degree vertices,
Figure 1: Performance of the classical (purple) and quantum-enhanced random sampling algorithm (green), in the presence of varying amounts of loss (top) and spectral impurity (bottom) as indicated. The grey line shows the performance of the deterministic classical algorithm [26]. The final maximum density of sampled subgraphs is shown as a function of sample batch size.
Figure 2: Top: Average density of sampled \(\sqrt{n}\)-subgraphs as a function of graph size \(n\) with initial density 0.4. The variance is shown (shaded region) for the classical and quantum case with no error. Bottom: difference between the average density of sampled subgraphs, in the quantum case, compared to the classical case.
or adding vertices by combining graphs from different samples). As such, the postselected implementation presented here effectively represents the best-case scenario for the quantum algorithm. Without this postselection the speedup remains but is reduced, as more samples need to be drawn. This is considered further in the Supplementary Material.
When varying the graph size \(n\), we see that the same pattern holds, in which simulations with loss and spectrally impure sources perform well. It also seems that the speedup of the quantum algorithm is smaller for larger \(n\), indicating a potential plateau at larger graph sizes. In addition, it can be seen that spectral impurity has less of an impact than increased levels of loss. The variance of subgraph density in these samples is, in general, lower than that for the case with loss. This suggests that the experiment in this case may occasionally not be able to sample the densest subgraph, but also avoids sampling very low density subgraphs.
GBS with a high level of photon distinguishability is known to be efficient to simulate classically [12]. Similarly, increasing the spectral impurity approaches the limit of simulating thermal states, which are easier to simulate [29], and hence reduces the usefulness of quantum resources. For a sufficient level of loss, GBS permits an efficient classical simulation [13]. Following the analysis of Ref. [13], with loss of at least 43.7%, the GBS simulated here is efficiently classically simulable. In the present case, as loss is increased, we increase the squeezing to compensate in order to maximise the likelihood of \(k\)-click events (as described in the Supplementary Material). This increases the loss threshold above which the GBS is classically simulable, up to a maximum of 47.3% for these parameters. As such, although the algorithms appear robust to errors, the level of errors means that efficient classical algorithms exist which perform the same underlying sampling task. We conclude that the apparent quantum advantage of GBS applied to the D\(k\)S problem is at best polynomial in run time.
The levels of loss and impurity tolerable suggest that these algorithms are not fully exploiting truly quantum mechanical effects such as quantum interference and entanglement. We note that it is not straightforward to engineer a graph problem requiring such phenomena since not all Gaussian states necessarily correspond to graphs in the manner described, i.e. adjacency matrices with real entries. Our results suggest future work could examine whether our findings are due to the structure of Gaussian states described by graphs, and if, for example, this is due to the limited treewidth of randomly produced graphs [30]. Our investigations have found no examples of graphs corresponding to Gaussian states that are not robust to loss and impurity errors. The code used here is open source and can be run with a variety of different graph types [28].
## VI Conclusion
We have used new methods of simulating Gaussian boson sampling to show that quantum-enhanced dense subgraph finding is particularly robust to spectral error and loss. This suggests that efficient classical methods exist that can implement the same algorithm as the quantum device, albeit with a potential polynomial overhead from simulating the drawing of samples. The implication is that a quantum device with modest requirements on loss and spectral purity could be used for the dense subgraph problem with little loss in performance, though this in itself raises questions as to whether the advantage gained can be said to be 'quantum' in the sense that it is generally understood when applied to algorithms. This work has focused on studying Erdős-Rényi random graphs, and further work could be done to generalise these results to different graphs with potentially more elaborate structures. Furthermore, more study is needed into the Gaussian states generated from graphs, and whether the conclusions drawn here apply to other applications of GBS. These results favour ongoing speculation as to the possibility of efficiently simulating GBS within certain regimes,
Figure 3: Probability distribution within the 8-click subspace, in the ideal case (purple) and with loss (green, top) and spectrally impure sources (green, bottom).
in particular whether it is possible to efficiently calculate hafnians of matrices with all positive values.
## Acknowledgements
We would like to thank Patrick Yard and Anthony Laing for feedback on the manuscript, and Ryan Mann for useful discussions. N.R.S. is supported by the Quantum Engineering Centre for Doctoral Training, EPSRC grant EP/SO23607/1.
|
2302.12459 | Multi-RIS-Enabled 3D Sidelink Positioning | Positioning is expected to be a core function in intelligent transportation
systems (ITSs) to support communication and location-based services, such as
autonomous driving, traffic control, etc. With the advent of low-cost
reflective reconfigurable intelligent surfaces (RISs) to be deployed in beyond
5G/6G networks, extra anchors with high angular resolutions can boost signal
quality and makes high-precision positioning with extended coverage possible in
ITS scenarios. However, the passive nature of the RIS requires a signal source
such as a base station (BS), which limits the positioning service in extreme
situations, such as tunnels or dense urban areas, where 5G/6G BSs are not
accessible. In this work, we show that with the assistance of (at least) two
RISs and sidelink communication between two user equipments (UEs), these UEs
can be localized even without any BSs involvement. A two-stage 3D sidelink
positioning algorithm is proposed, benchmarked by the derived Cram\'er-Rao
bounds. The effects of multipath and RIS profile designs on positioning
performance are evaluated, and several scenarios with different RIS and UE
locations are discussed for localizability analysis. Simulation results
demonstrate the promising positioning accuracy of the proposed BS-free sidelink
communication system in challenging ITS scenarios. Additionally, we propose and
evaluate several solutions to eliminate potential blind areas where positioning
performance is poor, such as removing clock offset via round-trip
communication, adding geometrical prior or constraints, as well as introducing
more RISs. | Hui Chen, Pinjun Zheng, Musa Furkan Keskin, Tareq Al-Naffouri, Henk Wymeersch | 2023-02-24T05:18:06Z | http://arxiv.org/abs/2302.12459v2 | # Multi-RIS-Enabled 3D Sidelink Positioning
###### Abstract
Positioning is expected to be a core function in intelligent transportation systems (ITSs) to support communication and location-based services, such as autonomous driving, traffic control, etc. With the advent of low-cost reflective reconfigurable intelligent surfaces (RISs) to be deployed in beyond 5G/6G networks, extra anchors with high angular resolutions can boost signal quality and makes high-precision positioning with extended coverage possible in ITS scenarios. However, the passive nature of the RIS requires a signal source such as a base station (BS), which limits the positioning service in extreme situations, such as tunnels or dense urban areas, where 5G/6G BSs are not accessible. In this work, we show that with the assistance of (at least) two RISs and sidelink communication between two user equipments (UEs), these UEs can be localized even without any BSs involvement. A two-stage 3D sidelink positioning algorithm is proposed, benchmarked by the derived Cramer-Rao bounds. The effects of multipath and RIS profile designs on positioning performance are evaluated, and several scenarios with different RIS and UE locations are discussed for localizability analysis. Simulation results demonstrate the promising positioning accuracy of the proposed BS-free sidelink communication system in challenging ITS scenarios. Additionally, we propose and evaluate several solutions to eliminate potential blind areas where positioning performance is poor, such as removing clock offset via round-trip communication, adding geometrical prior or constraints, as well as introducing more RISs.
3D positioning, intelligent transportation system, 5G/6G, sidelink communication, reconfigurable intelligent surface.
## I Introduction
An intelligent transportation system (ITS) is a key component under the concept of smart cities, aiming at reducing congestion, power consumption, and casualties [1, 2, 3]. Precise positioning and high data-rate communications are the key enablers to realizing ITS to obtain the state information of all the vehicles and to share the information between devices (and control center, if needed). With the increased frequency and bandwidth of the millimeter wave (mmWave) and terahertz (THz) systems, these two functionalities, namely, communication and positioning, are integrated and can benefit from each other [4, 5]. Position information can be extracted from channel estimation using radio signals, which can further assist communication with handover [6], and re-establishment of communication links [5]. Such integration provides versatile 5G/6G radio systems that can support both communications and positioning functions without introducing extra infrastructure deployments.
Positioning in the 5G new radio (NR) has been studied in TR38.855 [7], and initial efforts have been carried out in both academia and industry. Based on existing mmWave positioning research, huge potential has been shown in angle-based positioning [8], multipath resolvability [9], positioning under mobility [10], and 6D positioning scenarios [11]. Verification and evaluation of onsite positioning systems have also been carried out with 5G base stations (BSs) in indoor [12, 13] and outdoor scenarios [14]. However, no existing works have reported 5G positioning with performance comparable to that predicted by theoretical analysis or expected in future use cases. Model mismatch (e.g., caused by hardware impairments [15], multipath effects [16], or erroneous motion models [17]) and harsh propagation channels [18] constitute major factors that prevent radio positioning systems from achieving high-accuracy performance. These factors cause errors in channel parameter estimation and further affect the positioning performance, especially for UEs located far away from the BSs, where the SNR is low and the angle estimation error propagates with distance. Laying out a denser network with more active anchors can mitigate the above-mentioned positioning errors. However, network deployment cost increases with densification (especially for positioning, where multiple BSs are usually needed at the same time), requiring new enablers to accomplish positioning tasks.
One of the most promising enablers is sidelink communication (or device-to-device communication), introduced in 3GPP Release 12 [19], and more recently standardized in Release 16 [20] to support FR1 and the mmWave range FR2. With direct communication between devices, cooperative positioning is possible, which reduces the requirement for densely deployed BSs. In general, relative position information between each device/vehicle can be obtained in a cooperative positioning network given a sufficient number of vehicles [21]. With an anchor provided in a global coordinate system, the true positions of all the devices can be obtained. Moreover, the sidelink can also be implemented in partial coverage and out-of-coverage areas for positioning, where the relative location will be beneficial to vehicles in various applications such as platooning, collision avoidance, and so on [22].
Another promising technology that has been studied extensively for positioning (yet not standardized) is reconfigurable intelligent surface (RIS) [23, 24, 25].1 RISs consist of configurable elements with the ability to reshape the channel by
changing the phase of the incident signals. For communication, RISs are able to provide improved signal-to-noise ratio (SNR), reduced interference, and extended coverage under blockage. From the positioning point of view, RISs can work as additional passive anchors and provide high-resolution angular information by virtue of a large number of RIS elements. With the assistance of RIS, various positioning scenarios are created, with the simplest scenario being that a UE can be localized in a single-input-single-output (SISO) system [10]. In bi-static and multi-static sensing scenarios, when an object is equipped with a RIS, the object can be passively localized with transmitter and receiver anchors [29]. More recent works show joint RIS calibration and UE positioning can be performed simultaneously within a multiple-input-multiple-output (MIMO) system, providing a practical solution for RIS calibration [30]. All of these works have shown a huge potential for RIS in B5G/6G positioning.
Both sidelink communication and RIS are promising enablers for 5G/6G positioning, but they have been studied separately in most works. Discussion of the potential of combining these two technologies for positioning has appeared only recently [31, 32]. In [31], sidelink positioning with RISs is discussed at a high level without any technical details. The work in [32] requires the cooperation of multiple UEs and RISs with different states (e.g., enabled or disabled), and only time-of-arrival information is considered, without benefiting from the high angular resolution of RISs. To the best of our knowledge, this is the first technical work that discusses RIS-enabled 3D sidelink positioning. We will show that **with a sufficient number of RISs (at least two) involved, the 3D positions of two single-antenna UEs can be estimated using sidelink communication even without any BSs**, making ubiquitous positioning possible.
In this work, we consider a 3D SISO sidelink communication scenario with two UEs and several RIS anchors. The contributions of this work can be summarized as follows:
* We formulate the problem of multi-RIS-enabled 3D SISO sidelink positioning. In this scenario, the RISs (at least two) work as passive anchors with known positions and orientations. With sidelink communication, the 3D positions of both UEs and the clock offset between them can be estimated. This positioning scenario applies to both one-way (e.g., for power-limited devices as receivers) and two-way (e.g., when better positioning performance is required) sidelink communication, with the system setup unchanged.
* We derive the Cramer-Rao bounds (CRBs) for both channel parameter estimation and positioning, which serve several purposes: a) to benchmark the proposed positioning algorithms; b) to evaluate different designs of RIS profiles; c) to provide guidelines for evaluating blind areas (where the positioning task cannot be completed) and for optimizing anchor deployment.
* We adopt a time-orthogonal RIS profile design scheme to assist channel estimation by differentiating the LOS path and each of the RIS paths from one another. With this scheme, we design positioning-oriented RIS profiles based on directional and derivative codebooks built from prior UE information, which can be further improved with power control.
* We develop a low-complexity channel parameter estimator to obtain the delays and spatial frequencies (separate estimation of the angle-of-arrival (AOA) and angle-of-departure (AOD) is not possible in this scenario due to an inherent ambiguity, which will be described in Section II-C). Based on the delay and spatial frequency estimates from multiple RISs, a 3D-search positioning algorithm is developed to estimate the 3D positions of both UEs and the clock offset between them. In addition, maximum likelihood estimators for channel parameter estimation and positioning are also formulated for refining the results.
* Extensive simulations are carried out to show the effectiveness of the derived performance analysis and the proposed algorithm. The effects of multipath and RIS profile designs on positioning performance are evaluated. Several RIS deployment strategies (e.g., placement on one side or both sides of the road) and further sidelink positioning system designs are suggested.
The structure of this paper is organized as follows. Section II discusses the system model, based on which problem formulation will be described. The performance analysis, including the lower bounds for channel parameters and position estimation, is provided in Section III. Section IV details the methodology of the RIS profile design and positioning algorithm. Simulation results are presented in Section V, followed by the conclusion of this work in Section VI.
_Notations and Symbols:_ Italic letters denote scalars (e.g., \(a\)), bold lower-case letters denote vectors (e.g., \(\mathbf{a}\)), and bold upper-case letters denote matrices (e.g., \(\mathbf{A}\)). \((\cdot)^{\top}\), \((\cdot)^{\mathsf{H}}\), \((\cdot)^{*}\), \((\cdot)^{-1}\), \(\text{tr}(\cdot)\), and \(\lVert\cdot\rVert\) represent the transpose, Hermitian transpose, complex conjugate, inverse, trace, and \(\ell\)-2 norm operations, respectively; \(\mathbf{A}\odot\mathbf{B}\), \(\mathbf{A}\otimes\mathbf{B}\), and \(\mathbf{a}\circ\mathbf{b}\) are the Hadamard product, Kronecker product, and outer product, respectively; \([\cdot,\ \cdot,\ \cdots,\cdot]^{\top}\) denotes a column vector; \([\cdot]_{i,j}\) is the element in the \(i\)-th row, \(j\)-th column of a matrix, and \([\cdot]_{a:b,c:d}\) is the submatrix constructed from the \(a\)-th to the \(b\)-th row, and the \(c\)-th to the \(d\)-th column of a matrix; \(\angle(a)\) returns the phase of a complex number \(a\); \(\mathbf{1}_{N}\) denotes an \(N\times 1\) all-ones vector, and \(\mathbf{I}_{N}\) denotes a size-\(N\) identity matrix.
## II System Model
In this section, we describe the geometry model, signal model, and problem statement of the considered multi-RIS-enabled 3D sidelink positioning.
### _Geometry Model_
We consider a 3D SISO scenario with \(L>1\) RISs and two unsynchronized single-antenna user equipments (UEs), where the 3D positions of both UEs need to be estimated via sidelink communication, as shown in Fig. 1. The transmitter and the receiver UEs are located at \(\mathbf{p}_{\text{T}}\), \(\mathbf{p}_{\text{R}}\in\mathbb{R}^{3}\), respectively. The positions (array centers) and orientations of \(L\) RISs are denoted by \(\mathbf{p}_{1},\ldots,\mathbf{p}_{L}\in\mathbb{R}^{3}\), and Euler angle vectors
\(\mathbf{o}_{1},\ldots,\mathbf{o}_{L}\in\mathbb{R}^{3}\) (which can be mapped into rotation matrices \(\mathbf{R}_{1},\ldots,\mathbf{R}_{L}\in\text{SO}(3)\) [5]), respectively. For simplicity, we assume all the RISs consist of \(N=N_{1}\times N_{2}\) RIS elements with \(N_{1}\) and \(N_{2}\) as the number of rows and columns, respectively. In addition, without loss of generality, all the RIS elements are located on the Y-Z plane of each RIS's local coordinate system with the \(n\)-th element located at \(\mathbf{z}_{n}=[0,z_{n,2},z_{n,3}]^{\top}\). The AOA \(\boldsymbol{\varphi}_{\text{A},\ell}\) from the transmitter (TX) UE to the \(\ell\)-th RIS and the AOD \(\boldsymbol{\varphi}_{\text{D},\ell}\) from the same RIS to the receiver (RX) UE can then be expressed as
\[\boldsymbol{\varphi}_{\text{A},\ell} =\begin{bmatrix}\phi_{\text{A},\ell}\\ \theta_{\text{A},\ell}\end{bmatrix}=\begin{bmatrix}\arctan 2(t_{\text{T},\ell,2},t_ {\text{T},\ell,1})\\ \arcsin(t_{\text{T},\ell,3})\end{bmatrix}, \tag{1}\] \[\boldsymbol{\varphi}_{\text{D},\ell} =\begin{bmatrix}\phi_{\text{D},\ell}\\ \theta_{\text{D},\ell}\end{bmatrix}=\begin{bmatrix}\arctan 2(t_{\text{R},\ell,2},t_ {\text{R},\ell,1})\\ \arcsin(t_{\text{R},\ell,3})\end{bmatrix}, \tag{2}\]
where \(\phi\) and \(\theta\) are the azimuth and elevation angles, respectively. Let \(\mathbf{t}_{\text{T},\ell}=[t_{\text{T},\ell,1},t_{\text{T},\ell,2},t_{\text{T},\ell,3}]^{\top}\) and \(\mathbf{t}_{\text{R},\ell}=[t_{\text{R},\ell,1},t_{\text{R},\ell,2},t_{\text{R},\ell,3}]^{\top}\) denote the direction vectors, expressed in the local coordinate system of the \(\ell\)-th RIS, pointing to the TX and RX, respectively. These vectors can be written using the global positions \(\mathbf{p}_{\text{T}}\), \(\mathbf{p}_{\text{R}}\), \(\mathbf{p}_{\ell}\) and the rotation matrix \(\mathbf{R}_{\ell}\) as
\[\mathbf{t}_{\text{T},\ell}=\mathbf{R}_{\ell}^{-1}\frac{\mathbf{p }_{\text{T}}-\mathbf{p}_{\ell}}{\|\mathbf{p}_{\text{T}}-\mathbf{p}_{\ell}\|}= \begin{bmatrix}\cos(\phi_{\text{A},\ell})\cos(\theta_{\text{A},\ell})\\ \sin(\phi_{\text{A},\ell})\cos(\theta_{\text{A},\ell})\\ \sin(\theta_{\text{A},\ell})\end{bmatrix}, \tag{3}\] \[\mathbf{t}_{\text{R},\ell}=\mathbf{R}_{\ell}^{-1}\frac{\mathbf{p }_{\text{R}}-\mathbf{p}_{\ell}}{\|\mathbf{p}_{\text{R}}-\mathbf{p}_{\ell}\|}= \begin{bmatrix}\cos(\phi_{\text{D},\ell})\cos(\theta_{\text{D},\ell})\\ \sin(\phi_{\text{D},\ell})\cos(\theta_{\text{D},\ell})\\ \sin(\theta_{\text{D},\ell})\end{bmatrix}. \tag{4}\]
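A compact numerical rendering of Eqs. (1)-(4) (names are illustrative), using the fact that \(\mathbf{R}^{-1}=\mathbf{R}^{\top}\) for a rotation matrix:

```python
import numpy as np

def azimuth_elevation(p_ue, p_ris, R_ris):
    """Angles of a UE seen from a RIS, per Eqs. (1)-(4): rotate the
    global direction vector into the RIS local frame, then read off
    azimuth (arctan2) and elevation (arcsin)."""
    t = R_ris.T @ (p_ue - p_ris)      # R^{-1} = R^T for a rotation
    t = t / np.linalg.norm(t)
    return np.arctan2(t[1], t[0]), np.arcsin(t[2])
```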
### _Signal Model_
Assume \(K\) subcarriers are adopted in the sidelink communication, and \(G\) orthogonal frequency-division multiplexing (OFDM) symbols are sent during the coherence time. The received signal block \(\mathbf{Y}\in\mathbb{C}^{K\times G}\) can be formulated as
\[\mathbf{Y}=\mathbf{Y}_{\text{U}}+\mathbf{Y}_{\text{R}}+\mathbf{N}, \tag{5}\]
where \(\mathbf{N}\in\mathbb{C}^{K\times G}\) is the additive white Gaussian noise matrix with each element \(n_{k,g}\sim\mathcal{CN}(0,\sigma_{n}^{2})\) and \(\sigma_{n}^{2}=WN_{0}\) depending on the bandwidth \(W\) and the noise power spectral density (PSD) \(N_{0}\), and \(\mathbf{Y}_{\text{U}}\) and \(\mathbf{Y}_{\text{R}}\) are the received signal matrices of the uncontrollable paths and the RIS paths, with the multipath effect modeled as Rician fading [33, 34], as
\[\mathbf{Y}_{\text{U}}=\underbrace{\frac{\rho_{0}\sqrt{K_{r}}}{ \sqrt{K_{r}+1}}\mathbf{D}(\tau_{0})}_{\text{LOS channel}}\odot\mathbf{X}+ \underbrace{\frac{\rho_{0}}{\sqrt{K_{r}+1}}\mathbf{H}_{\text{U,MP}}}_{\text{ multipath channel}}\odot\mathbf{X}, \tag{6}\] \[\mathbf{Y}_{\text{R}}= \sum_{\ell=1}^{L}\Big{(}\underbrace{\frac{\rho_{\ell}\sqrt{K_{r}}} {\sqrt{K_{r}+1}}\mathbf{D}(\tau_{\ell})\odot\mathbf{A}_{\ell}(\boldsymbol{ \psi}_{\ell})}_{\ell\text{th RIS channel}}\odot\mathbf{X}\] (7) \[+\underbrace{\frac{\rho_{\ell}}{\sqrt{K_{r}+1}}\mathbf{H}_{\text{ R},\text{MP}}}_{\text{multipath channel}}\odot\mathbf{X}\Big{)}.\]
where \(K_{r}\) is the Rician \(K\)-factor that represents the power ratio between the deterministic channel and the random multipath channels, and the entries of \(\mathbf{H}_{\text{U,MP}}\) and \(\mathbf{H}_{\text{R,MP}}\) are independent and identically distributed (i.i.d.) \(\mathcal{CN}(0,1)\) random variables that model the random multipath effect of the line-of-sight (LOS) and RIS channels.2 The complex channel gains of the LOS path and the \(\ell\)-th RIS path are denoted as \(\rho_{0}\) and \(\rho_{\ell}\) (\(\ell\geq 1\)), respectively. The subscripts also apply to the signal propagation delays of different paths, such as \(\tau_{0}\) and \(\tau_{\ell}\) (\(\ell\geq 1\)). The AOA and AOD of the \(\ell\)-th RIS path are denoted as \(\boldsymbol{\psi}_{\ell}=[\boldsymbol{\varphi}_{\text{A},\ell}^{\top},\boldsymbol{\varphi}_{\text{D},\ell}^{\top}]^{\top}\), defined in (1) and (2). The pilot signal matrix \(\mathbf{X}\) is defined as
Footnote 2: The Rician fading model assumes that the coherence time of multipath is short, and the RIS channel in (7) is obtained by ignoring the triple-bounced path (i.e., TX-object-RIS-object-RX), where the details can be found in Appendix A.
\[\mathbf{X}=\sqrt{P}\mathbf{x}\boldsymbol{\delta}^{\top}\in\mathbb{C}^{K\times G },\quad\boldsymbol{\delta}=[\delta_{1},\ldots,\delta_{G}]^{\top}\ \ \ (\|\boldsymbol{\delta}\|=\sqrt{G}), \tag{8}\]
where \(\mathbf{x}\in\mathbb{C}^{K}\) (\(|x_{k}|=1\)) represents the transmitted symbols for \(K\) subcarriers, and the transmission power of the \(g\)-th transmission is \(\delta_{g}^{2}P\), where \(\boldsymbol{\delta}=\mathbf{1}_{G}\) indicates a constant transmit power during the \(G\) transmissions. Here, we use the same \(\mathbf{x}\) for all \(G\) transmissions for simplicity, and the aim of introducing \(\boldsymbol{\delta}\) is to implement a controlled power allocation across
Fig. 1: Illustration of multi-RIS-enabled 3D sidelink positioning. With the help of multiple (at least two) RISs, the 3D positions of both UEs (with an unknown clock offset) can be estimated through a one-way sidelink communication, even without BSs involved.
each transmission/RIS beam to enhance positioning accuracy with the same total transmission power, which will be detailed in Section IV. The delay matrix \(\mathbf{D}(\tau_{\ell})=\mathbf{d}(\tau_{\ell})\mathbf{1}_{G}^{\top}\in\mathbb{C}^{K\times G}\) contains the delay information of the \(\ell\)-th path across different subcarriers as3
Footnote 3: We assume the movement within the coherence time is negligible and hence the delay at the \(k\)-th subcarrier is identical across different transmissions.
\[[\mathbf{D}(\tau_{\ell})]_{k,g}=d_{k}(\tau_{\ell})=e^{-j2\pi k\Delta_{f}\tau_{\ell}}, \tag{9}\]
with \(\Delta_{f}=W/K\) as the subcarrier spacing, \(W\) as the bandwidth. The delay \(\tau_{\ell}\) of the \(\ell\)-th path can be expressed as
\[\tau_{0}=\frac{d_{0}+B}{c}=\frac{\|\mathbf{p}_{\text{T}}-\mathbf{p}_{\text{R} }\|+B}{c}, \tag{10}\]
\[\tau_{\ell} =\frac{d_{\text{T},\ell}+d_{\text{R},\ell}+B}{c} \tag{11}\] \[=\frac{\|\mathbf{p}_{\ell}-\mathbf{p}_{\text{T}}\|+\|\mathbf{p} _{\ell}-\mathbf{p}_{\text{R}}\|+B}{c},\ (\ell\geq 1),\]
with \(B\) indicating the clock offset (converted to meters) between the two UEs. The matrix \(\mathbf{A}_{\ell}(\mathbf{\psi}_{\ell})\in\mathbb{C}^{K\times G}\) captures the effect of RIS phase modulation with each element expressed as
\[[\mathbf{A}_{\ell}(\mathbf{\psi}_{\ell})]_{k,g} =a_{g}(\mathbf{\psi}_{\ell})=\mathbf{a}(\mathbf{\varphi}_{\text{D},\ell}) ^{\top}\mathbf{\Omega}_{\ell,g}\mathbf{a}(\mathbf{\varphi}_{\text{A},\ell}) \tag{12}\] \[=\mathbf{\omega}_{\ell,g}^{\top}(\mathbf{a}(\mathbf{\varphi}_{\text{D}, \ell})\odot\mathbf{a}(\mathbf{\varphi}_{\text{A},\ell})),\]
where \(\mathbf{\Omega}_{\ell,g}=\text{diag}(\mathbf{\omega}_{\ell,g})\in\mathbb{C}^{N\times N}\) is a diagonal matrix and \(\mathbf{\omega}_{\ell,g}=[\omega_{\ell,g,1},\dots,\omega_{\ell,g,N}]\) (\(|\omega_{\ell,g,n}|=1\)) is a vector containing all the RIS element coefficients. The steering vectors \(\mathbf{a}(\mathbf{\varphi}_{\text{A}})\) and \(\mathbf{a}(\mathbf{\varphi}_{\text{D}})\) (based on the far-field assumption) can be expressed as
\[\mathbf{a}(\mathbf{\varphi})=e^{j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}\mathbf{t}(\mathbf{\varphi})}, \tag{13}\]
with \(\mathbf{Z}=[\mathbf{z}_{1},\dots,\mathbf{z}_{N}]\in\mathbb{R}^{3\times N}\) containing the positions of all the RIS elements, and \(\mathbf{t}(\mathbf{\varphi})\) can be obtained from (3) and (4).
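The following sketch (ours; names are illustrative) collects the geometric building blocks of Eqs. (11)-(13), with \(\mathbf{Z}\) the \(3\times N\) matrix of local element positions:

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def ris_path_delay(p_t, p_r, p_ris, B):
    """Eq. (11): TX -> RIS -> RX delay with clock offset B in metres."""
    d = np.linalg.norm(p_ris - p_t) + np.linalg.norm(p_ris - p_r)
    return (d + B) / C

def steering_vector(Z, t, fc):
    """Far-field steering vector of Eq. (13) for direction vector t."""
    return np.exp(1j * 2 * np.pi * fc / C * (Z.T @ t))

def ris_modulation(omega, Z, t_a, t_d, fc):
    """One entry a_g(psi) of Eq. (12): omega^T (a(AOD) * a(AOA))."""
    return omega @ (steering_vector(Z, t_d, fc) * steering_vector(Z, t_a, fc))
```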
Considering the high delay resolution (due to a large bandwidth) in mmWave/sub-THz systems, the high angular resolution (due to a large RIS size), and the fact that random multipath channels provide no useful information for positioning, we will focus on deterministic channels in the algorithm design and FIM analysis. The simplified channel model can be given based on (6) and (7) by setting \(K_{r}\rightarrow\infty\) as
\[\mathbf{Y}_{\text{U}} =\underbrace{\rho_{0}\mathbf{D}(\tau_{0})}_{\text{LOS channel}}\odot\mathbf{X}, \tag{14}\] \[\mathbf{Y}_{\text{R}} =\underbrace{\sum_{\ell=1}^{L}\rho_{\ell}\mathbf{D}(\tau_{\ell})\odot\mathbf{A}_{\ell}(\mathbf{\psi}_{\ell})}_{\text{RIS channel}}\odot\mathbf{X}. \tag{15}\]
However, the effect of multipath on positioning will be evaluated in Sec. V-B.
### _Problem Statement_
Based on the signal model, we are able to formulate the 3D sidelink positioning problem. Since the AOD and AOA are both unknown, we further define a steering vector [30]
\[\mathbf{a}_{\text{R}}(\mathbf{\varphi}_{\text{D}},\mathbf{\varphi}_{\text{A}})=\mathbf{a}(\mathbf{\varphi}_{\text{D}})\odot\mathbf{a}(\mathbf{\varphi}_{\text{A}})=e^{j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}\mathbf{t}_{\text{R}}(\mathbf{\varphi}_{\text{D}},\mathbf{\varphi}_{\text{A}})}, \tag{16}\]
where
\[\mathbf{t}_{\text{R}}(\mathbf{\varphi}_{\text{D}},\mathbf{\varphi}_{\text{A}})= \mathbf{t}(\mathbf{\varphi}_{\text{D}})+\mathbf{t}(\mathbf{\varphi}_{\text{A}}). \tag{17}\]
Note that the first row of the matrix \(\mathbf{Z}\) contains all zeros (RIS elements are located on the local Y-Z plane), meaning the first element of the vector \(\mathbf{t}_{\text{R}}\) cannot be estimated.
To support positioning, we define a channel parameter vector as \(\mathbf{\eta}=[\mathbf{\eta}_{0}^{\top},\mathbf{\eta}_{1}^{\top},\dots,\mathbf{\eta}_{L}^{\top}]^{\top}\in\mathbb{R}^{5L+3}\) with \(\mathbf{\eta}_{0}=[\tau_{0},\alpha_{0},\beta_{0}]^{\top}\) and \(\mathbf{\eta}_{\ell}=[\xi_{\ell},\zeta_{\ell},\tau_{\ell},\alpha_{\ell},\beta_{\ell}]^{\top}\) containing the channel information of the LOS channel and the \(\ell\)-th RIS channel. In the vector \(\mathbf{\eta}\), \(\alpha\) and \(\beta\) are the amplitude and phase of the complex channel gain (i.e., \(\rho=\alpha e^{-j\beta}\)), \(\tau\) is the delay, and \(\xi\) and \(\zeta\) are the second and third entries of the vector \(\mathbf{t}_{\text{R}}(\mathbf{\varphi}_{\text{D}},\mathbf{\varphi}_{\text{A}})\) as
\[\xi_{\ell} =\sin(\phi_{\text{A},\ell})\cos(\theta_{\text{A},\ell})+\sin(\phi _{\text{D},\ell})\cos(\theta_{\text{D},\ell}), \tag{18}\] \[\zeta_{\ell} =\sin(\theta_{\text{A},\ell})+\sin(\theta_{\text{D},\ell}). \tag{19}\]
With new defined spatial frequency \(\xi_{\ell}\) and \(\zeta_{\ell}\), the matrix \(\mathbf{A}_{\ell}(\mathbf{\psi}_{\ell})\) in (12) can also be expressed as
\[[\mathbf{A}_{\ell}(\xi_{\ell},\zeta_{\ell})]_{k,g}=\mathbf{\omega}_{\ell,g}^{\top}\mathbf{a}_{\text{R}}(\xi_{\ell},\zeta_{\ell})=\mathbf{\omega}_{\ell,g}^{\top}e^{j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}[0,\xi_{\ell},\zeta_{\ell}]^{\top}}. \tag{20}\]
By removing the parameters \(\alpha_{\ell},\beta_{\ell}\) (\(\ell=0,\dots,L\)), a nuisance-free channel parameter vector can be obtained as \(\mathbf{\eta}_{\text{N}}=[\eta_{\text{N},0},\mathbf{\eta}_{\text{N},1}^{\top},\dots,\mathbf{\eta}_{\text{N},L}^{\top}]^{\top}\in\mathbb{R}^{3L+1}\) with \(\eta_{\text{N},0}=\tau_{0}\), \(\mathbf{\eta}_{\text{N},\ell}=[\xi_{\ell},\zeta_{\ell},\tau_{\ell}]^{\top}\). We further define an unknown state vector \(\mathbf{s}=[\mathbf{p}_{\text{T}}^{\top},\mathbf{p}_{\text{R}}^{\top},B,\alpha_{0},\beta_{0},\dots,\alpha_{L},\beta_{L}]^{\top}\in\mathbb{R}^{3L+7}\). Similarly, a nuisance-free state vector can be defined as \(\mathbf{s}_{\text{N}}=[\mathbf{p}_{\text{T}}^{\top},\mathbf{p}_{\text{R}}^{\top},B]^{\top}\in\mathbb{R}^{7}\) containing the 3D positions of the two UEs and the clock offset \(B\).
The positioning task in this work is to extract the geometric channel parameter vector \(\mathbf{\eta}\) from the observed signal \(\mathbf{Y}\), and then estimate the state vector \(\mathbf{s}\) based on \(\mathbf{\eta}\). To make sure the number of channel parameters (i.e., \(\mathbf{\eta}_{\text{N}}\) with \(3L+1\) elements) is larger than the number of state parameters (i.e., \(\mathbf{s}_{\text{N}}\) with 7 elements), the minimum number of RISs needed is \(L=2\). Since the positioning task can be performed by a one-way positioning pilot signal transmission, the problem formulation can be easily extended to multiple UEs and more than two RISs. Note that a round-trip pilot signal transmission will only provide extra clock offset estimation information, reducing the nuisance-free state vector \(\mathbf{s}_{\text{N}}\) to 6 unknown parameters, but does not affect localizability.
## III Lower Bound Analysis
In this section, we derive the CRBs for the estimation of the channel parameter vector \(\mathbf{\eta}\) and state vector \(\mathbf{s}\).
### _CRB of Channel Parameter Estimation_
Based on the defined channel parameter vector \(\mathbf{\eta}\), state vector \(\mathbf{s}\), and the signal model in (5), the channel parameter estimation CRB can be obtained as \(\mathbf{\mathcal{I}}(\mathbf{\eta})^{-1}\in\mathbb{R}^{(5L+3)\times(5L+3)}\) with [35] (Sec. 3)
\[\mathbf{\mathcal{I}}(\mathbf{\eta})=\frac{2}{\sigma_{n}^{2}}\sum_{g=1}^{G} \sum_{k=1}^{K}\mathrm{Re}\left\{\left(\frac{\partial\mu_{g,k}}{\partial\mathbf{ \eta}}\right)^{\mathsf{H}}\left(\frac{\partial\mu_{g,k}}{\partial\mathbf{\eta}} \right)\right\}. \tag{21}\]
Here, \(\mathbf{\mathcal{I}}(\mathbf{\eta})\) is the FIM of the channel parameter vector, \(\mathrm{Re}\{\cdot\}\) extracts the real part of a complex variable, and \(\mu_{g,k}=\mathbf{Y}_{\mathrm{U},g,k}+\mathbf{Y}_{\mathrm{R},g,k}\) is the noise-free observation of the received signal. We can further define delay error bound (DEB) and spatial error bounds (SEBs) for \(\tau_{\ell},\xi_{\ell},\zeta_{\ell}\) as
\[\mathrm{EB}(\tau_{\ell}) =\sqrt{([\mathbf{\mathcal{I}}(\mathbf{\eta})^{-1}]_{5\ell+1,5\ell+1})},\quad(\ell\geq 0), \tag{22}\] \[\mathrm{EB}(\xi_{\ell}) =\sqrt{([\mathbf{\mathcal{I}}(\mathbf{\eta})^{-1}]_{5\ell-1,5\ell-1})},\quad(\ell>0), \tag{23}\] \[\mathrm{EB}(\zeta_{\ell}) =\sqrt{([\mathbf{\mathcal{I}}(\mathbf{\eta})^{-1}]_{5\ell,5\ell})},\quad(\ell>0). \tag{24}\]
### _CRB for 3D Sidelink Positioning_
Based on (21), the CRB of the state parameters \(\mathbf{s}\) can be obtained as
\[\mathrm{CRB}\triangleq[\mathbf{\mathcal{I}}(\mathbf{s})]^{-1}=\left[ \mathbf{J}_{\mathrm{S}}\mathbf{\mathcal{I}}(\mathbf{\eta})\mathbf{J}_{\mathrm{S}}^{ \top}\right]^{-1}, \tag{25}\]
where \(\mathbf{J}_{\mathrm{S}}\triangleq\frac{\partial\mathbf{\eta}}{\partial\mathbf{s}}\in\mathbb{R}^{(3L+7)\times(5L+3)}\) is the Jacobian matrix, using a denominator-layout notation, from the channel parameter vector \(\mathbf{\eta}\) to the state vector \(\mathbf{s}\). We can further define the position error bounds (PEBs) and the clock offset error bound (CEB) as
\[\mathrm{PEB}_{\mathrm{T}} =\sqrt{\text{tr}([\mathbf{\mathcal{I}}(\mathbf{s})^{-1}]_{1:3,1:3})}, \tag{26}\] \[\mathrm{PEB}_{\mathrm{R}} =\sqrt{\text{tr}([\mathbf{\mathcal{I}}(\mathbf{s})^{-1}]_{4:6,4:6})},\] (27) \[\mathrm{CEB} =\sqrt{([\mathbf{\mathcal{I}}(\mathbf{s})^{-1}]_{7,7})}. \tag{28}\]
The derivation of FIMs \(\mathbf{\mathcal{I}}(\mathbf{\eta})\) and \(\mathbf{\mathcal{I}}(\mathbf{s})\) can be found in Appendix B. The derived CRB will be used to benchmark the proposed positioning algorithm and evaluate the performance of different RIS profiles, as will be shown in the simulation results in Section V.
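As a numerical cross-check on the closed-form derivations, the FIM of Eq. (21) can also be approximated by finite differences from any implementation of the noise-free observation \(\mu(\mathbf{\eta})\); the sketch below is our construction, and the error bounds then follow as square roots of diagonal entries of the inverse FIM.

```python
import numpy as np

def fim_numeric(mu_fn, eta, sigma_n2, h=1e-6):
    """Eq. (21): I(eta) = (2 / sigma_n^2) Re{J^H J}, where J is the
    Jacobian of the stacked noise-free observations mu(eta), taken
    here by central finite differences."""
    mu0 = mu_fn(eta).ravel()
    J = np.zeros((mu0.size, eta.size), dtype=complex)
    for i in range(eta.size):
        step = np.zeros_like(eta)
        step[i] = h
        J[:, i] = (mu_fn(eta + step).ravel()
                   - mu_fn(eta - step).ravel()) / (2 * h)
    return (2.0 / sigma_n2) * np.real(J.conj().T @ J)

# e.g. DEB(tau_0) = sqrt(inv(I)[0, 0]) for the ordering used in (22).
```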
## IV Methodology
In this section, we describe RIS profile design, channel parameter estimation algorithms and positioning algorithms.
### _RIS Profile Design_
#### IV-A1 Time-Orthogonal Random Codebook
Without any prior information on the UEs, random profiles are adopted. In this case, each element in the coefficient vector of the \(\ell\)-th RIS \(\mathbf{\omega}_{\ell,g}\) is chosen with unit amplitude and a random phase following \(\angle\omega_{\ell,g,n}\sim\mathcal{U}[0,2\pi)\). However, channel parameter estimation with multiple RISs is challenging, as the received signal contains the reflected signals from all the paths. In order to assist channel parameter estimation, we adopt time-orthogonal profiles to differentiate independent RIS paths from the others [10]. We first divide the total of \(G\) transmissions into \(\Gamma\geq L+1\) blocks (each block with \(\tilde{G}=G/\Gamma\) OFDM symbols) and define a matrix \(\mathbf{B}\in\mathbb{C}^{\Gamma\times(L+1)}\) containing mutually orthogonal columns (e.g., from a DFT matrix) as [16, 36]

\[\mathbf{B}=[\mathbf{b}_{0},\mathbf{b}_{1},\ldots,\mathbf{b}_{L}],\quad\text{s.t. }\ \mathbf{B}^{\mathsf{H}}\mathbf{B}=\Gamma\mathbf{I}_{(L+1)\times(L+1)}, \tag{29}\]
where each element inside \(\mathbf{B}\) has a unit amplitude (i.e., \(|[\mathbf{B}]_{i,j}|=1\)). By selecting \(\mathbf{\omega}_{\ell,\tilde{g}}\in\mathbb{C}^{N}\) for \(1\leq\ell\leq L\), and \(1\leq\tilde{g}\leq\tilde{G}\), the rest of the RIS profiles can be obtained as
\[\mathbf{\omega}_{\ell,(i-1)\tilde{G}+\tilde{g}}=b_{\ell,i}\mathbf{\omega}_{\ell,\tilde {g}},\ (i=1,\ldots,\Gamma), \tag{30}\]
where \(b_{\ell,i}\) is the \(i\)-th element of the vector \(\mathbf{b}_{\ell}\). We further define the received signal for the \(i\)-th block as \(\mathbf{Y}^{(i)}\) (\(i=1,\ldots,\Gamma\)), and the LOS path and all the RIS paths can be separated as
\[\tilde{\mathbf{Y}}_{\ell}=\frac{1}{\Gamma}\sum_{i=1}^{\Gamma}b_{\ell,i}^{*}\mathbf{Y}^{(i)}=\mathbf{Y}_{\ell}^{(1)}+\tilde{\mathbf{N}}, \tag{31}\]
where \(\tilde{\mathbf{Y}}_{\ell}\in\mathbb{C}^{K\times\tilde{G}}\) has a smaller size than the received signal block \(\mathbf{Y}\in\mathbb{C}^{K\times G}\) defined in (5), and \(\tilde{\mathbf{N}}\in\mathbb{C}^{K\times\tilde{G}}\) with each element \(n_{k,\tilde{g}}\sim\mathcal{CN}(0,\sigma_{n}^{2}/\Gamma)\). In the following RIS profile design, we will only discuss the design of the first block; the rest of the blocks can be obtained based on (30) to form orthogonal profiles that assist channel parameter estimation.
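A minimal sketch of the block-combining step in (31) (ours); a real Hadamard code is used as one valid choice of \(\mathbf{B}\) satisfying the scaled-orthogonality condition in (29).

```python
import numpy as np
from scipy.linalg import hadamard

def separate_paths(Y_blocks, B):
    """Eq. (31): correlate the Gamma received blocks with column ell of
    the code matrix B (B^H B = Gamma * I) to isolate the LOS path
    (ell = 0) and each RIS path (ell >= 1)."""
    Gamma = len(Y_blocks)
    return [sum(np.conj(B[i, l]) * Y_blocks[i] for i in range(Gamma)) / Gamma
            for l in range(B.shape[1])]

# e.g. L = 3 RISs: Gamma = 4 blocks, L + 1 = 4 orthogonal codes.
B = hadamard(4).astype(complex)
```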
#### IV-A2 Directional Codebook
Assume the target positions are known, for example, from previous estimations or extra sensors, RIS profiles can be designed to improve positioning performance. A directional (DIR) codebook is one of the simplest codebooks with the main idea of maximizing the received signal strength of the receiver, given the known states (e.g., positions of two UEs). By dropping the time index \(g\), for the UEs located at \(\mathbf{p}_{\mathrm{T}}\) and \(\mathbf{p}_{\mathrm{R}}\), the optimal RIS profile that maximizes the received energy can be obtained based on (12) and (13) as
\[\mathbf{\omega}_{\ell}^{(1)}=\mathbf{a}_{\mathrm{R}}^{*}(\mathbf{\varphi}_{\mathrm{D},\ell},\mathbf{\varphi}_{\mathrm{A},\ell})=e^{-j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}(\mathbf{t}_{\mathrm{T},\ell}+\mathbf{t}_{\mathrm{R},\ell})}. \tag{32}\]
Here, \(\mathbf{t}_{\mathrm{T},\ell}\) and \(\mathbf{t}_{\mathrm{R},\ell}\) can be obtained based on (3) and (4), which are the direction vectors obtained from \(\mathbf{p}_{\mathrm{T}}\) and \(\mathbf{p}_{\mathrm{R}}\), respectively. In real scenarios, however, we cannot know the true locations of both UEs, and the prior information may appear in the form of certain distributions (e.g., \(\mathbf{p}_{\mathrm{R}}\sim\mathcal{N}(\bar{\mathbf{p}}_{\mathrm{R}},\mathbf{\Sigma}_{\mathbf{p}_{\mathrm{R}}})\)). In this case, we can sample two candidate positions \(\mathbf{p}_{\mathrm{T},\tilde{g}}\) and \(\mathbf{p}_{\mathrm{R},\tilde{g}}\) (\(1\leq\tilde{g}\leq\tilde{G}\)) \(\tilde{G}\) times based on their distributions. The DIR codebook of the \(\ell\)-th RIS (for the first block of transmissions) can be obtained as
\[\mathbf{\Xi}_{\ell}^{\mathrm{DIR}}=[\mathbf{\omega}_{\ell,1}^{(1)},\ldots,\mathbf{\omega}_{ \ell,\tilde{G}}^{(1)}]\in\mathbb{C}^{N\times\tilde{G}}, \tag{33}\]
with each column \(\mathbf{\omega}_{\ell,\tilde{g}}^{(1)}\) corresponding to the DIR beam for the sampled UE positions \(\mathbf{p}_{\mathrm{T},\tilde{g}}\) and \(\mathbf{p}_{\mathrm{R},\tilde{g}}\), computed via (32). The RIS profiles for the remaining \((\Gamma-1)\) blocks can be obtained based on (30).
#### IV-A3 Directional and Derivative Codebook
As has been shown in previous works [37, 38], maximizing the received signal strength does not imply optimal positioning performance. For given \(\xi\) and \(\zeta\) (computed from the positions
\(\mathbf{p_{T}}\) and \(\mathbf{p_{R}}\)), the optimal RIS phase profiles should lie in the subspace spanned by the following vectors [38] as
\[\boldsymbol{\omega}^{(1)} =\mathbf{a_{R}^{*}}(\xi,\zeta)=e^{-j\frac{2\pi f_{c}}{c}\mathbf{Z}^ {\top}[0,\xi,\zeta]^{\top}}, \tag{34}\] \[\boldsymbol{\omega}^{(2)} =\frac{\partial\mathbf{a_{R}^{*}}(\xi,\zeta)}{\partial\xi}= \boldsymbol{\omega}^{(1)}\odot(-j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}[0,1,0]^ {\top}),\] (35) \[\boldsymbol{\omega}^{(3)} =\frac{\partial\mathbf{a_{R}^{*}}(\xi,\zeta)}{\partial\zeta}= \boldsymbol{\omega}^{(1)}\odot(-j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}[0,0,1]^ {\top}), \tag{36}\]
where the first RIS profile \(\boldsymbol{\omega}^{(1)}\) is identical to the DIR beam defined in (32), and \(\boldsymbol{\omega}^{(2)}\), \(\boldsymbol{\omega}^{(3)}\) are the so-called derivative (DER) beams. Since the elements in \(\boldsymbol{\omega}^{(2)}\) and \(\boldsymbol{\omega}^{(3)}\) do not have unit amplitude, gradient projection [39] is adopted to find the closest unit-amplitude profiles to \(\boldsymbol{\omega}^{(2)}\) and \(\boldsymbol{\omega}^{(3)}\). Similar to the formulation of the DIR codebook in (33), we can sample TX and RX UE positions for \(\tilde{G}/3\) times based on their distribution. The DIR+DER codebook of the \(\ell\)-th RIS can be formulated as
\[\boldsymbol{\Xi}_{\ell}=[\boldsymbol{\omega}_{\ell,1}^{(1)},\boldsymbol{ \omega}_{\ell,1}^{(2)},\boldsymbol{\omega}_{\ell,1}^{(3)},\ldots,\boldsymbol {\omega}_{\ell,\tilde{G}/3}^{(1)},\boldsymbol{\omega}_{\ell,\tilde{G}/3}^{(2) },\boldsymbol{\omega}_{\ell,\tilde{G}/3}^{(3)}]\in\mathbb{C}^{N\times\tilde{G}}. \tag{37}\]
Similarly, the RIS profiles for the remaining \((\Gamma-1)\) blocks can be obtained based on (30).
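The DIR+DER construction in (34)-(37) can be sketched as follows; note that the per-element phase projection used here is a simplified stand-in for the gradient projection of [39], and the geometry parameters are illustrative assumptions.

```python
import numpy as np

c, fc = 3e8, 28e9                        # assumed carrier frequency
lam = c / fc
N1 = 10
yy, zz = np.meshgrid(np.arange(N1), np.arange(N1))
Z = np.stack([np.zeros(N1 * N1), yy.ravel(), zz.ravel()]) * lam / 2

def dir_der_triplet(xi, zeta):
    """DIR beam (34) and DER beams (35)-(36) for given spatial frequencies.
    The DER beams are mapped to unit amplitude by keeping only their phase,
    a simple stand-in for the gradient projection of [39]."""
    k = 2 * np.pi * fc / c
    w1 = np.exp(-1j * k * (Z.T @ np.array([0.0, xi, zeta])))
    w2 = w1 * (-1j * k * (Z.T @ np.array([0.0, 1.0, 0.0])))   # derivative w.r.t. xi
    w3 = w1 * (-1j * k * (Z.T @ np.array([0.0, 0.0, 1.0])))   # derivative w.r.t. zeta
    to_unit = lambda w: np.exp(1j * np.angle(w))
    return w1, to_unit(w2), to_unit(w3)

w1, w2, w3 = dir_der_triplet(xi=-0.44, zeta=-0.50)
```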
#### IV-A4 Power Control of the DIR+DER Codebook with Prior Information
To further improve the positioning performance, power control can be adopted for the implemented codebook by using different transmit powers for each beam in the codebook. More specifically, taking the RX UE as an example, the optimization problem can be formulated as minimizing the expectation of the RX PEB (defined in (27)):
\[\begin{split}\min_{\boldsymbol{\delta}\in\mathbb{R}^{G}}\ &\int\text{Prob}(\mathbf{s})\,\mathrm{PEB_{R}}(\mathbf{s},\boldsymbol{\Xi}_{1},\ldots,\boldsymbol{\Xi}_{L}|\boldsymbol{\delta})\,\mathrm{d}\mathbf{s},\\ &\mathrm{s.t.}\ \|\boldsymbol{\delta}\|^{2}=G,\end{split} \tag{38}\]
where \(\boldsymbol{\delta}\) is the power control vector defined in (8), and \(\text{Prob}(\mathbf{s})\) is the posterior distribution of the state vector \(\mathbf{s}\). Considering the high complexity of solving the problem (38), especially when the integral operation is involved, we can further simplify the problem formulation as
\[\begin{split}\min_{\boldsymbol{\delta}\in\mathbb{R}^{G}}\ &\frac{3}{\tilde{G}}\sum_{\tilde{g}=1}^{\tilde{G}/3}\mathrm{PEB_{R}}(\mathbf{p}_{\mathrm{T},\tilde{g}},\mathbf{p}_{\mathrm{R},\tilde{g}},\boldsymbol{\Xi}_{1},\ldots,\boldsymbol{\Xi}_{L}|\boldsymbol{\delta}),\\ &\mathrm{s.t.}\ \|\boldsymbol{\delta}\|^{2}=G,\end{split} \tag{39}\]
where \(\mathbf{p}_{\mathrm{T},\tilde{g}},\mathbf{p}_{\mathrm{R},\tilde{g}}\) are sampled based on the prior information. The optimization problem (39) can be solved by using convex optimization [37, 38], providing optimal positioning performance for a given codebook. Based on the insights from simulation results, optimal performance can be achieved when the DER beams \(\boldsymbol{\omega}^{(2)}\) and \(\boldsymbol{\omega}^{(3)}\) are assigned the same amount of power. In order to further relieve the computational burden, we use the same power control coefficient \(\frac{\sqrt{3}}{\sqrt{1+2\gamma_{\mathrm{p}}^{2}}}\) for all the DIR beams (i.e., \(\boldsymbol{\omega}^{(1)}\)), and the same coefficient \(\frac{\sqrt{3}\gamma_{\mathrm{p}}}{\sqrt{1+2\gamma_{\mathrm{p}}^{2}}}\) for all the DER beams (i.e., \(\boldsymbol{\omega}^{(2)}\), \(\boldsymbol{\omega}^{(3)}\)), where \(\gamma_{\mathrm{p}}\) is the ratio between the DER beam power and the DIR beam power (only DIR beams are kept when \(\gamma_{\mathrm{p}}=0\)). The optimization problem in (39) can then be simplified as
\[\min_{\gamma_{\mathrm{p}}\in\mathbb{R}}\sum_{\tilde{g}=1}^{\tilde{G}/3}\mathrm{PEB_{R}}(\mathbf{p}_{\mathrm{T},\tilde{g}},\mathbf{p}_{\mathrm{R},\tilde{g}},\boldsymbol{\Xi}_{1},\ldots,\boldsymbol{\Xi}_{L}|\gamma_{\mathrm{p}}),\ \ \gamma_{\mathrm{p}}\geq 0. \tag{40}\]
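A minimal sketch of the simplified power control is given below; it builds the per-beam coefficient vector \(\boldsymbol{\delta}\) for a given ratio \(\gamma_{\mathrm{p}}\) and verifies that the power constraint holds for any ratio, so that (40) reduces to a 1D search (the PEB evaluation itself is omitted).

```python
import numpy as np

def power_control(G, gamma_p):
    """Per-beam amplitude coefficients for a DIR+DER codebook of G beams,
    ordered as repeated (DIR, DER, DER) triplets; satisfies ||delta||^2 = G."""
    c_dir = np.sqrt(3.0 / (1.0 + 2.0 * gamma_p**2))
    c_der = gamma_p * c_dir
    return np.tile([c_dir, c_der, c_der], G // 3)

# 1D search over the power ratio, as in (40): pick the gamma_p minimizing the PEB sum
G = 36
for gamma_p in np.linspace(0.0, 3.0, 7):
    delta = power_control(G, gamma_p)
    assert np.isclose(np.sum(delta**2), G)   # power constraint holds for any ratio
    # ... evaluate the PEB objective for this delta and keep the best gamma_p
```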
### _Channel Parameter Estimation Algorithm_
Once the RIS profiles are designed, the system can send positioning pilot signals and perform positioning algorithms. Here, we describe a two-stage positioning algorithm, including a channel parameter extraction step and a positioning step. For each stage, a coarse estimation algorithm and a refined maximum likelihood estimation (MLE) are developed for different performance and complexity tradeoffs.
#### IV-B1 Low-Complexity Channel Parameter Estimator
By implementing the orthogonal RIS profile as described in Section IV-A1, the uncontrolled path and each RIS path can be well-separated. For the LOS path observation \(\tilde{\mathbf{Y}}_{0}\) from (31), we first obtain the estimated channel elements and sum across the \(\tilde{G}\) transmissions as
\[\underbrace{\hat{\mathbf{h}}_{0}}_{\hat{\mathbf{h}}_{0}\in\mathbb{C}^{K}}=\sum_{g=1}^{\tilde{G}}[\tilde{\mathbf{Y}}_{0}]_{\cdot,g}\odot\mathbf{x}^{*}. \tag{41}\]
The delay of the LOS path \(\hat{\tau}_{0}\) can be estimated based on (5), (14) and (41) as
\[\hat{\tau}_{0}=\arg\max_{\tau}|\mathbf{d}^{\text{H}}(\tau)\hat{\mathbf{h}}_{0}|, \tag{42}\]
where \(\mathbf{d}(\tau)\) is defined in (9), and (42) can be solved using an \(N_{\text{F}}\)-point Discrete Fourier Transform (DFT) [37]. For the observation of the \(\ell\)-th RIS path \(\tilde{\mathbf{Y}}_{\ell}\), since the RIS profiles differ from one transmission to another, we need to modify (41) and (42) as
\[\underbrace{\hat{\mathbf{H}}_{\ell}}_{\hat{\mathbf{H}}_{\ell}\in\mathbb{C}^{K\times\tilde{G}}} =\tilde{\mathbf{Y}}_{\ell}\odot(\mathbf{x}^{*}\mathbf{1}_{\tilde{G}}^{\top}), \tag{43}\] \[\hat{\tau}_{\ell} =\arg\max_{\tau}\|\mathbf{d}^{\text{H}}(\tau)\hat{\mathbf{H}}_{\ell}\|. \tag{44}\]
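The delay searches in (42) and (44) reduce to FFTs; a minimal sketch for a single path is shown below, assuming \(d_{k}(\tau)=e^{-j2\pi k\Delta_{f}\tau}\) (consistent with the derivative in (61) of Appendix B) and illustrative values of \(K\), \(\Delta_{f}\), and \(N_{\text{F}}\). For a RIS path, the same search runs on the columns of \(\hat{\mathbf{H}}_{\ell}\) and the column-wise correlations are combined as in (44).

```python
import numpy as np

K, df, N_F = 256, 120e3, 2**10           # subcarriers, spacing, DFT size (assumed)

def estimate_delay(h):
    """FFT-based solution of (42): tau_hat = argmax_tau |d^H(tau) h|,
    assuming d_k(tau) = exp(-j 2 pi k df tau) as the delay steering vector."""
    corr = np.fft.ifft(h, n=N_F)         # evaluates sum_k h_k e^{+j 2 pi k n / N_F}
    n = np.argmax(np.abs(corr))
    return n / (N_F * df)                # delay grid tau_n = n / (N_F * df)

# synthetic single-path check
tau_true = 3.7e-7
h = np.exp(-1j * 2 * np.pi * np.arange(K) * df * tau_true)
print(estimate_delay(h))                 # ~3.66e-7 s, limited by the grid
```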
Once the delay of the \(\ell\)-th RIS path \(\hat{\tau}_{\ell}\) has been obtained, the estimation of the spatial frequencies \(\hat{\xi}_{\ell}\) and \(\hat{\zeta}_{\ell}\) can be formulated as
\[[\hat{\xi}_{\ell},\hat{\zeta}_{\ell}]=\operatorname*{arg\,max}_{\xi,\zeta}\Big|\sum_{k,g}\boldsymbol{\omega}_{\ell,g}^{\top}e^{j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}[0,\xi,\zeta]^{\top}}d_{k}(\hat{\tau}_{\ell})\tilde{y}_{k,g}^{*}\Big|, \tag{45}\]
where \(\tilde{y}_{k,g}\) is the \((k,g)\)-th element of the matrix \(\tilde{\mathbf{Y}}_{\ell}\), and the problem can be solved via a 2D search.
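A sketch of this 2D search is given below, following the matched-filter form of (45); the carrier frequency, subcarrier spacing, and grid step are illustrative assumptions.

```python
import numpy as np

def estimate_spatial_freqs(Y, Omega, Z, tau_hat, fc=28e9, df=120e3, step=0.02):
    """2D grid search for (xi, zeta) following the matched filter in (45).
    Y: K x G observations of one RIS path; Omega: N x G RIS profiles over time."""
    c = 3e8
    K, G = Y.shape
    d = np.exp(-1j * 2 * np.pi * np.arange(K) * df * tau_hat)  # d_k(tau_hat)
    r = d @ np.conj(Y)                    # collapse the frequency axis: length-G vector
    grid = np.arange(-1.0, 1.0, step)
    best, best_val = (0.0, 0.0), -np.inf
    for xi in grid:
        for zeta in grid:
            a = np.exp(1j * 2 * np.pi * fc / c * (Z.T @ np.array([0.0, xi, zeta])))
            val = np.abs((a @ Omega) @ r) # correlate over RIS elements and transmissions
            if val > best_val:
                best, best_val = (xi, zeta), val
    return best
```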
#### IV-B2 MLE for Channel Parameter Estimation
From the low-complexity channel parameter estimator, we obtain initial estimates of the nuisance-free channel parameter vectors \(\hat{\boldsymbol{\eta}}_{\mathrm{N},\ell}\) (\(\ell=0,1,\ldots,L\)). The MLE aims to find the optimal channel parameters as
\[[\hat{\rho}_{\ell},\hat{\boldsymbol{\eta}}_{\mathbf{N},\ell}]=\operatorname*{ arg\,min}_{\rho_{\ell},\boldsymbol{\eta}_{\mathbf{N},\ell}}\|\tilde{\boldsymbol{\mu}}_{ \ell}-\rho_{\ell}\boldsymbol{\mu}_{\ell}(\boldsymbol{\eta}_{\mathbf{N},\ell})\|, \tag{46}\]
where \(\tilde{\boldsymbol{\mu}}_{\ell}=\text{vec}(\tilde{\mathbf{Y}}_{\ell})\), \(\boldsymbol{\mu}(\boldsymbol{\eta}_{\mathrm{N},0})=\text{vec}(\mathbf{D}(\tau_{0})\odot\mathbf{X})\), and \(\boldsymbol{\mu}(\boldsymbol{\eta}_{\mathrm{N},\ell})=\text{vec}(\mathbf{D}(\tau_{\ell})\odot\mathbf{A}_{\ell}(\xi_{\ell},\zeta_{\ell})\odot\mathbf{X})\) (\(\ell\geq 1\)), which can be obtained from (9) and (20). Since the channel gain \(\rho_{\ell}\) is a complex constant, by letting \(\partial\|\tilde{\boldsymbol{\mu}}_{\ell}-\rho_{\ell}\boldsymbol{\mu}_{\ell}(\boldsymbol{\eta}_{\mathrm{N},\ell})\|^{2}/\partial\rho_{\ell}=0\),
we can obtain the channel gain as \(\hat{\rho}_{\ell}=\frac{\boldsymbol{\mu}^{\mathrm{H}}(\boldsymbol{\eta}_{\mathrm{N},\ell})\tilde{\boldsymbol{\mu}}_{\ell}}{\|\boldsymbol{\mu}(\boldsymbol{\eta}_{\mathrm{N},\ell})\|^{2}}\). Hence, the MLE can be formulated from (46) with the nuisance-free channel parameters only as
\[\hat{\boldsymbol{\eta}}_{\mathrm{N},\ell}=\operatorname*{arg\,min}_{\boldsymbol{\eta}_{\mathrm{N},\ell}}\left\|\tilde{\boldsymbol{\mu}}_{\ell}-\frac{\boldsymbol{\mu}^{\mathrm{H}}(\boldsymbol{\eta}_{\mathrm{N},\ell})\tilde{\boldsymbol{\mu}}_{\ell}}{\|\boldsymbol{\mu}(\boldsymbol{\eta}_{\mathrm{N},\ell})\|^{2}}\boldsymbol{\mu}_{\ell}(\boldsymbol{\eta}_{\mathrm{N},\ell})\right\|. \tag{47}\]
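In code, the closed-form elimination of the gain amounts to a one-line projection; the sketch below assumes a user-supplied model function `mu_model` mapping candidate parameters to the noise-free response.

```python
import numpy as np

def nuisance_free_cost(eta, mu_model, y):
    """Cost inside (47): the complex gain is eliminated in closed form via
    rho_hat = mu^H y / ||mu||^2, so only nuisance-free parameters remain."""
    mu = mu_model(eta)                        # model response for candidate eta
    rho_hat = np.vdot(mu, y) / np.vdot(mu, mu).real
    return np.linalg.norm(y - rho_hat * mu)

# the coarse estimate can then be refined with any local optimizer, e.g.
# scipy.optimize.minimize(lambda e: nuisance_free_cost(e, mu_model, y), eta_coarse)
```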
### _Positioning Algorithm_
#### IV-C1 Coarse Position Estimation
Based on the estimated nuisance-free channel parameter vector \(\hat{\boldsymbol{\eta}}_{\mathrm{N}}\), we propose a 3D-search positioning algorithm. For a position candidate \(\check{\mathbf{p}}_{\mathrm{T}}\) of the transmitter UE, the candidate direction vector \(\check{\mathbf{t}}_{\mathrm{T},\ell}\) can be obtained from (3). Based on the estimated spatial frequencies \(\hat{\xi}_{\ell}\) and \(\hat{\zeta}_{\ell}\), the candidate direction vector of the \(\ell\)-th RIS \(\check{\mathbf{t}}_{\mathrm{R},\ell}\) can be calculated as
\[\begin{split}\check{t}_{\mathrm{R},\ell,2}&=\hat{\xi}_{\ell}-\check{t}_{\mathrm{T},\ell,2},\\ \check{t}_{\mathrm{R},\ell,3}&=\hat{\zeta}_{\ell}-\check{t}_{\mathrm{T},\ell,3},\\ \check{t}_{\mathrm{R},\ell,1}&=\sqrt{1-\check{t}_{\mathrm{R},\ell,2}^{2}-\check{t}_{\mathrm{R},\ell,3}^{2}}.\end{split} \tag{48}\]
Note that ambiguities exist in the estimated spatial frequencies due to \(\hat{\xi},\hat{\zeta}\in[-1,1)\), while the true spatial frequencies \(\xi,\zeta\in[-2,2]\). This issue can be solved with prior location information to limit the search area, or with a reduced RIS inter-element spacing (e.g., \(\lambda_{c}/4\) instead of \(\lambda_{c}/2\), see [40]).
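The per-component reconstruction in (48) is straightforward to implement; a minimal sketch (with clipping added for numerical safety, which is our own addition) is:

```python
import numpy as np

def ris_to_rx_direction(xi_hat, zeta_hat, t_T):
    """Candidate RIS-to-RX direction vector from (48), given the estimated
    spatial frequencies and the candidate RIS-to-TX direction t_T (unit vector)."""
    t2 = xi_hat - t_T[1]
    t3 = zeta_hat - t_T[2]
    t1 = np.sqrt(max(0.0, 1.0 - t2**2 - t3**2))  # clipped for numerical safety
    return np.array([t1, t2, t3])
```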
Based on the candidate direction vectors \(\check{\mathbf{t}}_{\mathrm{R},\ell}\) (\(\ell\geq 1\)) and the known RIS states, we are able to calculate the candidate receiver UE position \(\check{\mathbf{p}}_{\mathrm{R}}\) as the point closest to all the AOD bearing lines [41]. Given two bearing lines \(\mathbf{l}_{i}=\mathbf{p}_{i}+r\hat{\mathbf{t}}_{\mathrm{G},i}\) and \(\mathbf{l}_{j}=\mathbf{p}_{j}+r\hat{\mathbf{t}}_{\mathrm{G},j}\) (\(i,j\in\{1,\ldots,L\}\) and \(\hat{\mathbf{t}}_{\mathrm{G},\ell}=\mathbf{R}_{\ell}\check{\mathbf{t}}_{\mathrm{R},\ell}\)), the following equations hold
\[\hat{\mathbf{p}}_{ij}-\hat{\mathbf{p}}_{ji} =-\check{d}_{ji}(\hat{\mathbf{t}}_{\mathrm{G},j}\times\hat{ \mathbf{t}}_{\mathrm{G},i}), \tag{49}\] \[\check{d}_{ji} =\frac{(\hat{\mathbf{t}}_{\mathrm{G},j}\times\hat{\mathbf{t}}_{ \mathrm{G},i})(\mathbf{p}_{j}-\mathbf{p}_{i})}{|\hat{\mathbf{t}}_{\mathrm{G},j }\times\hat{\mathbf{t}}_{\mathrm{G},i}|}, \tag{50}\]
where \(\hat{\mathbf{p}}_{ij}\) is the closest point on the bearing line \(\mathbf{l}_{i}\) to the bearing line \(\mathbf{l}_{j}\) that can be expressed as
\[\hat{\mathbf{p}}_{ij}=\mathbf{p}_{i}+\check{r}_{ij}\hat{\mathbf{t}}_{\mathrm{ G},i}. \tag{51}\]
By using least squares, \(\check{r}_{ij}\) and \(\check{r}_{ji}\) can be obtained as
\[\begin{bmatrix}\check{r}_{ij}\\ \check{r}_{ji}\end{bmatrix}=(\check{\mathbf{Q}}^{\top}\check{\mathbf{Q}})^{-1 }\check{\mathbf{Q}}^{\top}[\mathbf{p}_{j}-\mathbf{p}_{i}-\check{d}_{ji}(\hat{ \mathbf{t}}_{\mathrm{G},j}\times\hat{\mathbf{t}}_{\mathrm{G},i})], \tag{52}\]
with \(\check{\mathbf{Q}}=[\hat{\mathbf{t}}_{\mathrm{G},i},\hat{\mathbf{t}}_{ \mathrm{G},j}]\), and the candidate receiver UE position can be obtained as
\[\check{\mathbf{p}}_{\mathrm{R}}=\sum_{i\neq j}w_{ij}\check{\mathbf{p}}_{ij}, \quad(\sum_{i\neq j}w_{ij}=1), \tag{53}\]
where \(i,j\in\{1,\ldots,L\}\) and \(w_{ij}\) is the weight coefficient that can be chosen based on the quality of the estimated parameters (an example can be found in [41], (8)).
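A sketch of the closest-point computation is given below; for brevity it solves the pairwise problem by ordinary least squares, which is equivalent in spirit to (49)-(52), and averages the two closest points with equal weights.

```python
import numpy as np

def closest_points(p_i, t_i, p_j, t_j):
    """Closest points between bearing lines l_i = p_i + r t_i and l_j = p_j + r t_j,
    solved by least squares (cf. (49)-(52))."""
    A = np.stack([t_i, -t_j], axis=1)                    # 3 x 2 system
    r, *_ = np.linalg.lstsq(A, p_j - p_i, rcond=None)
    return p_i + r[0] * t_i, p_j + r[1] * t_j

# candidate RX position (53): weighted average over the pairwise closest points
p1, t1 = np.array([-4.0, 0.0, 1.0]), np.array([0.80, 0.50, -0.33])
p2, t2 = np.array([4.0, 0.0, 1.0]), np.array([-0.70, 0.60, -0.39])
a, b = closest_points(p1, t1 / np.linalg.norm(t1), p2, t2 / np.linalg.norm(t2))
p_R_cand = 0.5 * (a + b)                                 # equal weights w_ij
```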
Finally, we can obtain the estimated clock offset \(\tilde{B}\) as
\[\tilde{B}=c\hat{\tau}_{0}-\|\check{\mathbf{p}}_{\mathrm{R}}-\check{\mathbf{p}}_{\mathrm{T}}\|, \tag{54}\]
and the cost function can be formulated as
\[J(\check{\mathbf{p}}_{\mathrm{T}})=\sum_{\ell}w_{\ell}\big|\tilde{B}+\|\check{\mathbf{p}}_{\mathrm{T}}-\mathbf{q}_{\ell}\|+\|\check{\mathbf{p}}_{\mathrm{R}}-\mathbf{q}_{\ell}\|-c\hat{\tau}_{\ell}\big|, \tag{55}\]
with \(w_{\ell}\) as the weighting coefficients. Among all the transmitter UE position candidates, the one with the lowest cost will be the estimated position as
\[\hat{\mathbf{p}}_{\mathrm{T}}=\operatorname*{arg\,min}_{\check{\mathbf{p}}_{ \mathrm{T}}}J(\check{\mathbf{p}}_{\mathrm{T}}), \tag{56}\]
and the rest of the state parameter vector can be obtained based on (48) to (54).
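The coarse search can be summarized by the short sketch below, which evaluates (54)-(55) for one candidate pair; the RIS positions \(\mathbf{q}_{\ell}\) and the weights are inputs, and the grid search in (56) simply minimizes this cost over candidate TX positions.

```python
import numpy as np

def coarse_cost(p_T, p_R, tau_hat, q_list, w, c=3e8):
    """Cost (54)-(55): the clock offset is eliminated using the LOS delay,
    then each RIS-path delay is checked for geometric consistency."""
    B = c * tau_hat[0] - np.linalg.norm(p_R - p_T)        # (54)
    J = 0.0
    for ell, q in enumerate(q_list, start=1):
        geo = np.linalg.norm(p_T - q) + np.linalg.norm(p_R - q)
        J += w[ell - 1] * abs(B + geo - c * tau_hat[ell]) # (55)
    return J

# the final estimate (56) is the grid point p_T minimizing coarse_cost,
# with p_R recomputed from (48)-(53) for every candidate p_T
```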
#### IV-C2 MLE for Positioning
The MLE refinement for positioning can be formulated as
\[\hat{\mathbf{s}}_{\mathrm{N}}=\operatorname*{arg\,min}_{\mathbf{s}_{\mathrm{N}}}\ \ (\hat{\boldsymbol{\eta}}_{\mathrm{N}}-\boldsymbol{\eta}_{\mathrm{N}}(\mathbf{s}_{\mathrm{N}}))^{\mathsf{T}}\,\boldsymbol{\Sigma}_{\boldsymbol{\eta}_{\mathrm{N}}}^{-1}\,(\hat{\boldsymbol{\eta}}_{\mathrm{N}}-\boldsymbol{\eta}_{\mathrm{N}}(\mathbf{s}_{\mathrm{N}})), \tag{57}\]
where \(\boldsymbol{\Sigma}_{\boldsymbol{\eta}_{\mathrm{N}}}=\boldsymbol{\mathcal{I}}(\boldsymbol{\eta}_{\mathrm{N}})^{-1}\) is the covariance matrix of the estimated channel parameters. The optimization problem in (57) can be solved by, e.g., the trust-region method, and the gradient of the cost function in (57) is \(-\big(\frac{\partial\boldsymbol{\eta}_{\mathrm{N}}(\mathbf{s}_{\mathrm{N}})}{\partial\mathbf{s}_{\mathrm{N}}}\big)^{\mathsf{T}}\boldsymbol{\Sigma}_{\boldsymbol{\eta}_{\mathrm{N}}}^{-1}\,(\hat{\boldsymbol{\eta}}_{\mathrm{N}}-\boldsymbol{\eta}_{\mathrm{N}}(\mathbf{s}_{\mathrm{N}}))\). For scenarios where the covariance matrix in the MLE formulation is not available, we can set \(\boldsymbol{\Sigma}_{\boldsymbol{\eta}_{\mathrm{N}}}=\mathbf{I}\), leading to a least squares solution. The pseudo-codes for channel parameter estimation and position estimation can be found in Algorithm 1 and Algorithm 2, respectively.
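For illustration, one gradient step on (57) can be sketched as follows; the finite-difference Jacobian is a stand-in for the analytic \(\mathbf{J}_{\text{S}}\) of Appendix B, and the step size is an arbitrary assumption.

```python
import numpy as np

def gradient_step(s, eta_hat, eta_model, Sigma_inv, eps=1e-6, lr=1e-3):
    """One gradient step on the weighted cost in (57). The Jacobian of eta(s)
    is approximated by central finite differences as a stand-in for J_S."""
    r = eta_hat - eta_model(s)
    J = np.stack([(eta_model(s + eps * e) - eta_model(s - eps * e)) / (2 * eps)
                  for e in np.eye(len(s))], axis=1)       # columns: d eta / d s_i
    grad = -J.T @ Sigma_inv @ r                           # gradient given after (57)
    return s - lr * grad
```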
### _Complexity Analysis_
In this subsection, we analyze the complexity of the proposed channel parameter estimation in Section IV-B and the positioning algorithms in Section IV-C. In channel parameter estimation, \(L+1\) 1D \(N_{\text{F}}\)-point DFTs are needed for delay estimation, resulting in a complexity on the order of \(\mathcal{O}(LN_{\text{F}}\log N_{\text{F}})\). For each of the \(L\) RISs, a 2D search for spatial frequency estimation is needed, resulting in a complexity of \(\mathcal{O}(LQ_{1}Q_{2}GKN)\), where \(Q_{1}=|\mathcal{G}_{\xi}|\) and \(Q_{2}=|\mathcal{G}_{\zeta}|\) denote the search dimensions of \(\xi\) and \(\zeta\), respectively. To refine the channel parameter estimation, we have \(\mathcal{O}(LQ_{3}GKN)\), where \(Q_{3}\) is the number of iterations. Regarding the positioning algorithm, a 3D search is needed to estimate the positions of both UEs and the clock offset, giving \(\mathcal{O}(L^{2}Q_{4}Q_{5}Q_{6})\), where \(L^{2}\) indicates the number of beam pairs to be calculated (e.g., there are \(L(L-1)\) pairs of beams from \(L\) RISs to obtain the candidate receiver position via intersections) and \(Q_{4},Q_{5},Q_{6}\) represent the search dimensions of the position on the \(x\), \(y\), and \(z\) axes, respectively. For the refinement via MLE, the complexity is \(\mathcal{O}(L^{3}Q_{7})\), where \(L^{3}\) accounts for the matrix multiplications, and \(Q_{7}\) is the number of iterations to refine the positioning results. In summary, the overall complexity of the positioning problem is given by
\[\mathcal{O}_{\text{P}} =\underbrace{\mathcal{O}(LN_{\text{F}}\log N_{\text{F}})+\mathcal{ O}(LQ_{1}Q_{2}GKN)+\mathcal{O}(L^{2}Q_{4}Q_{5}Q_{6})}_{\text{Coarse Estimation}}\] \[+\underbrace{\mathcal{O}(LQ_{3}GKN)+\mathcal{O}(L^{3}Q_{7})}_{ \text{Refinement}}. \tag{58}\]
## V Simulation Results
### _Simulation Parameters_
We consider a 3D scenario with two single-antenna UEs and two RISs. The pilot signal \(x_{g,k}\) has a constant amplitude and random phase. The unknown channel gains are set as \(|\rho_{0}|=\frac{\lambda_{c}}{4\pi d_{0}}\) for the LOS path and \(|\rho_{\ell}|=\frac{\lambda_{c}^{2}}{16\pi^{2}d_{\mathrm{T},\ell}d_{\mathrm{R},\ell}}\) for the \(\ell\)-th RIS path, both with random phases. For channel parameter estimation, \(N_{\text{F}}=2^{10}\) is adopted, and the step size of the 2D grid search is \(0.02\). In the positioning step, a step size of \(0.25\,\mathrm{m}\) is used to search inside the \(2\times 2\times 2\,\mathrm{m}^{3}\) volume around the true position. The rest of the default simulation parameters can be found in Table I.
### _Channel Parameters and Position Estimation Results_
#### V-B1 Positioning Without Multipath
We first evaluate the performance of the channel estimator without the effect of multipath, by using the simplified channel model from (14) and (15). The channel parameter estimation and positioning results are shown in Fig. 2 (a) and (b), respectively. It can be seen from both figures that the coarse estimates saturate at a certain level as the transmit power increases. However, when the refinement processes are applied, the CRBs of the channel parameters and state parameters can be attained. The results show the effectiveness of the derived bounds and the estimators at high transmit power. Since refinement processes are involved, a tradeoff between positioning performance and complexity (e.g., the number of iterations) can be made. In addition, the simulation results only approach the bounds when the transmit power is higher than \(25\,\mathrm{dBm}\), which is impractical for power-limited UE devices. This issue can be solved by increasing the RIS sizes or the number of transmissions. Implementing antenna arrays at the UE side for beamforming gain is also an option; however, the orientations of both UEs would then need to be estimated.
#### V-B2 The Effect of Multipath
We further explore the effect of multipath on sidelink positioning. Since \(\text{PEB}_{\text{T}}\) and \(\text{PEB}_{\text{R}}\) show a similar trend in performance (which makes
Fig. 2: RMSE of the estimation results vs. derived CRBs: (a) channel parameter estimation, (b) position estimation, both benchmarked by the derived CRBs. Coarse estimation results saturate at high transmit powers, whereas the refined results are able to attain the bounds.
sense, since a large TX UE estimation error will also affect the positioning of the RX UE), we focus on the evaluation of the positioning lower bound \(\text{PEB}_{\text{R}}\). The results for different Rician fading factors \(K_{r}\) are shown in Fig. 3. Generally, we can observe that the positioning error bounds decrease as the Rician factor increases and converge to the fading-free case. By removing the fading effect of the LOS and RIS channels separately, we can see that the latter case (black dashed curve with circle markers) provides a more significant performance improvement than the former one (black dashed curve with triangle markers). This result reveals that the RIS channels contribute more than the LOS channel in sidelink positioning. In other words, improving the channel quality of the RIS paths is more conducive to improving the positioning performance. Besides, for a fixed environment (fixed \(K_{r}\)), a larger number of RISs (i.e., with a third RIS located at \([0,4,1]^{\top}\) with orientation \(\mathbf{o}_{3}=[-\pi/2,0,0]^{\top}\), as shown in the cyan curve with square markers) or a higher transmit power (see the blue curve with diamond markers and the magenta curve with triangle markers) is needed to combat the effect of multipath on positioning performance.
Since the multipath acts as extra noise and affects the positioning error bound, for simplicity, the simulation results in the following sections do not consider the effect of multipath; they thus lower-bound the performance in scenarios with multipath.
### _Evaluation of RIS Profiles_
#### V-C1 Visualization of DIR and DER Beams
Based on the simulation parameters in Table I, we first visualize the radiation patterns (i.e., the equivalent RIS gain \(|\mathbf{\omega}^{\top}\mathbf{a}_{\text{R}}(\xi,\zeta)|\)) of the DIR beam \(\mathbf{\omega}_{1}\) and the DER beam \(\mathbf{\omega}_{2}\) obtained from (34) and (35) for the first RIS. By changing the spatial frequencies \(\xi\) and \(\zeta\), the radiation patterns of the two beams are shown in Fig. 4 (a)-1 and (b)-1. If we assume the position of the TX is known and fix the transmitter angles as \(\mathbf{\varphi}_{\text{A,1}}\), the 2D radiation patterns of the two beams are visualized in Fig. 4 (a)-2 and (b)-2, and the 3D radiation patterns in Fig. 4 (a)-3 and (b)-3, respectively. We can see from the figures that the DIR beam maximizes the SNR of the TX-RX link, while the DER beams are split along the \(\xi\) or \(\zeta\) dimension compared with the DIR beam. The DER beam \(\mathbf{\omega}_{3}\) (derivative with respect to \(\zeta\)) shows a pattern similar to Fig. 4 (b)-1, with the beam split along the \(\zeta\) axis, and is therefore not discussed further.
#### V-C2 The Effect of Prior Error Level on RIS Profile Design
To evaluate the effect of the prior error level on the \(\text{PEB}_{\text{R}}\) for different RIS profile designs, we set the covariance matrices of the prior information as \(\mathbf{\Sigma}=\sigma_{\text{pri}}^{2}\mathds{I}\in\mathbb{R}^{3\times 3}\) for simplicity. Benchmarked by the random RIS profile (black dashed curve), the PEBs for the DIR codebook and the DIR+DER codebooks with different power allocations are shown in Fig. 5. We can see from the figure that neither the DIR nor the DIR+DER RIS profiles help when the prior error level is high. With more accurate prior information, the DIR profile can largely reduce the PEB. However, when the prior error is too small, the RIS profiles based on the DIR beams beamform toward a small area and provide less spatial diversity. In an extreme case, the positioning task cannot be completed with all the beams pointing to a single point. When adopting the DIR+DER profiles, however, this phenomenon can be mitigated by choosing a proper power allocation coefficient \(\gamma_{\mathrm{p}}\).
We further evaluate the effect of the power allocation coefficient on the positioning error bound, as shown in Fig. 6. For a fixed RIS size (i.e., \(10\times 10\)), the power allocation has little effect when the prior error level is high (cyan curve with diamond markers), but becomes crucial when the error level is small (blue curve with circle markers). We can also see that the optimal coefficient shifts slightly from left (red triangle) to right (red cross) as the RIS size increases, since the narrower beamwidth requires more accurate prior information.
### _Localizability Discussion_
#### V-D1 PEB Visualization of Different RIS Layouts
Based on the analysis in Section II-C, at least two RISs are needed to enable sidelink positioning under the far-field assumption. However, this may not always suffice, as the localizability also depends on the states of the RIS anchors and the UEs. We visualize the PEB of the RX UE (with the TX UE position fixed) on a 2D x-y plane where both UEs have an unknown but fixed height, the RISs are 1 m above both UEs, and the TX UE is located at \([-1,-1,0]^{\top}\). Three different RIS layouts are considered: (a) two RISs at different locations (\([-4,0,1]^{\top}\) and \([4,0,1]^{\top}\)) facing each other, (b) two RISs located on the same y-z plane (\([-4,-3,1]^{\top}\) and \([-4,3,1]^{\top}\)) facing the same direction, and (c) two RISs perpendicular to each other (\([-4,0,1]^{\top}\) and \([0,4,1]^{\top}\)). The results are shown in Fig. 7 under different assumptions. Benchmarked by the default setup (column-1), the PEBs for perfect synchronization (column-2), UEs with known heights (column-3), and RISs at a greater height of \(2\,\mathrm{m}\) above the UEs (column-4) are visualized. The positions behind the RISs are also plotted, as certain types of RIS are able to refract signals rather than reflect them.
Fig. 3: PEB\({}_{\text{R}}\) vs. Rician fading factor \(K_{r}\). We can see from the figure that the multipath in the RIS channel has a larger effect on positioning performance. To combat the effect of multipath and achieve better performance, large transmit power and more RISs could be helpful.
We can see from Fig. 7 that blind areas exist (yellow areas), where positioning cannot be performed or yields poor performance. However, with extra information, such as the clock offset (column-2) or known UE heights (column-3), the blind areas can be largely reduced. We also notice that the blind areas change continuously in 3D space (see column-1 and column-4). Since the derivation of the PEBs involves a high-dimensional parameter space, including the RIS positions/orientations and UE positions, it would be challenging to derive a closed-
Fig. 4: The beam patterns of (a) the DIR beam \(\mathbf{\omega}_{1}\) and (b) the DER beam \(\mathbf{\omega}_{2}\). The figures in column-1 show the radiation patterns of the two beams as functions of \(\xi\) and \(\zeta\); the DIR beam in (a)-1 reaches its maximum at (\(\xi=-0.4443\), \(\zeta=-0.5039\)). The figures in column-2 and column-3 visualize the radiation patterns (in 2D and 3D, respectively) as functions of the azimuth and elevation of the receiver, with a fixed \(\mathbf{\varphi}_{\text{A,1}}=[-1.1071,-0.2200]^{\top}\) (rad); the DIR beam in (a)-2 reaches its maximum at \(\mathbf{\varphi}_{\text{D,1}}=[0.4636,-0.2898]^{\top}\) (rad).
Fig. 5: The evaluation of different prior error levels on positioning with different RIS profile designs. We can see that by using a DIR codebook, the positioning error bounds decrease and then increase when reducing the prior error level, and this effect can be mitigated by power control.
Fig. 6: The evaluation of different power allocation coefficients for different prior error levels and RIS sizes. We notice that power control is important when the prior error level is small, and the selection of the optimal coefficient also depends on the RIS size.
form solution to avoid these areas.
#### V-D2 Interpretation of the Blind Area
For a fixed TX UE position, worse performance (yellow area) occurs at locations where the surrounding RX positions provide similar geometric information. To visualize this, we plot the noise-free cost function, defined in (55), for a 2D scenario. At a localizable location, the cost function shows a clear, sharp global optimum. In the blind area, the cost at the optimal point is similar to that of the surrounding positions, so the same level of noise will cause a larger estimation error than in the first scenario. We further choose several surrounding candidate RX UE positions and find the corresponding optimal TX positions that minimize the cost function, as shown in Fig. 8 (d). All these pairs provide spatial frequency and TDOA observations similar to those of the ground-truth TX/RX pair, and erroneous estimates are likely to occur at these positions. In order to avoid the effect of local optima, global optimization methods can be adopted. In addition, prior location information can also effectively eliminate the local minima. As mentioned earlier, for a fixed setup, the blind areas can be reduced by adopting a round-trip estimation to remove the clock offset, or by using geometric constraints (e.g., known UE heights). Next, we discuss scenarios with more than two RISs.
#### V-D3 Evaluation of More RISs
We evaluate the effect of the number of RISs on positioning with four candidate anchor positions (\([-4,0,1]^{\top},[0,4,1]^{\top},[4,0,1]^{\top},[0,-4,1]^{\top}\)) covering an \(8\times 8\,\mathrm{m}^{2}\) area. The RIS tilt angles are set to \(30^{\circ}\) pointing down, and we can see that the RIS orientation affects the blind area (see Fig. 9 (a) vs. Fig. 7 (c)-1, and Fig. 9 (b) vs. Fig. 7 (a)-1). In general, more RISs increase the positioning coverage; however, if the same orthogonal strategy is implemented, more blocks are needed, increasing the difficulty of coordinating these RISs and of channel parameter estimation. To support this statement in a more general scenario, we assume that the TX and RX UEs can be located on a grid with \(x\in\{-3,-2,\ldots,3\}\) m, \(y\in\{-3,-2,\ldots,3\}\) m, and
Fig. 7: Visualization of the PEBs of the RX at different positions, while the TX is fixed. Three scenarios are considered where two RISs (same height) are located 1 m above the UEs (same height): (a) two RISs in parallel facing each other; (b) two RISs located at positions sharing the same x and z coordinates, facing the positive x-axis; (c) two RISs forming an L shape facing the positive x-axis and the negative y-axis (e.g., a corner). For each benchmark scenario (column-1), the PEB with known clock offset (column-2), the PEB with known z-coordinates (column-3), and the PEB with RISs \(2\,\mathrm{m}\) higher than the UEs (column-4) are also visualized.
\(z\in\{0,0.5\}\) m. We further assume both UEs cannot be located at the same place, and hence a total number of \(\binom{7\times 7\times 2}{2}=4753\) TX-RX pairs can be evaluated. The cumulative distribution function (CDF) of the \(\text{PEB}_{\text{R}}\) for all the scenarios in Fig. 9 is shown in Fig. 10. The CDFs for the RIS (L) scenario with known height (black dashed curve with cross markers) and known clock offset (black dashed curve with triangle markers) are also plotted, which validates the suggested solutions for reducing the blind areas.
#### V-D4 Summary of Localizability for Multi-RIS-Enabled 3D Sidelink Positioning
We have shown that blind areas exist in the problem of multi-RIS-enabled 3D sidelink positioning, and we have provided several ways to mitigate their effect, namely, round-trip positioning to remove the clock bias, geometric constraints, and additional RISs. These discussions also open new directions for offline and online system optimization. The offline deployment of the anchors needs to consider the TX and RX position probabilities (e.g., vehicles can only drive on the road with a certain movement model), as well as the surrounding environment map (e.g., where RISs can be installed). The online optimization needs to take advantage of the prior information and consider when, and for which UE, to trigger a positioning process, as the performance requirements may not be met in the blind areas.
## VI Conclusion
In this work, we have formulated and solved the multi-RIS-enabled 3D sidelink positioning problem. In this problem, with the assistance of at least two RISs, two unsynchronized UEs can be localized via one-way sidelink communication, even without BSs. Channel parameter estimation and positioning algorithms are developed and benchmarked against the derived CRBs. We discussed the effect of multipath on positioning performance and found that the impact of multipath in the RIS channels is more significant. We also evaluated the benefit of RIS profile designs that use prior information to boost positioning performance. Most importantly, we have shown that blind areas exist in RIS-enabled sidelink positioning problems and have interpreted them. Several solutions can be considered to
Fig. 8: Interpretation of the blind area. (a) Heatmap of a 2D scenario where RISs and UEs are on the same X-Y plane and the UEs have known heights; (b) Cost function for the RX UE located at \([-0.61,2.41,0]^{\top}\) (blind area); (c) Cost function for the RX UE located at \([1.01,1.01,0]^{\top}\) (non-blind area); (d) Candidate TX/RX pairs at the locations around the local minima in (c).
Fig. 10: CDF of the \(\text{PEB}_{\text{R}}\) for both UEs at different locations inside a \(7\times 7\,\mathrm{m}^{2}\) area. RIS (L) indicates RISs located in an L-shape, as shown in Fig. 9 (a), and RIS (P) indicates RISs located in parallel, as shown in Fig. 9 (b).
Fig. 9: Visualization of RX PEBs for different numbers of RISs (with \(30^{\circ}\) tilting down). Subfigures (a) and (b) show that the RIS orientations affect the PEB (compared with the benchmarks in Fig. 7 (a)-1, (c)-1). With more RISs, the areas with poor positioning performance can be largely reduced.
reduce the effect of blind areas, such as utilizing round-trip communication to remove the clock offset, adding geometric constraints to reduce the number of unknowns, and adopting more RISs to increase positioning coverage. However, this work is just a starting point with simplified scenarios and channel model assumptions. Future directions include high-mobility scenarios where the Doppler effect should be considered, and more accurate channel models that account for features such as the near-field effect and beam squint. In addition, more types of RIS structures can be employed for positioning, such as active RISs that can amplify the reflected signals and hybrid RISs with processing elements.
## Appendix A
For a specific TX-RIS-RX channel, by dropping the indices \(\ell\), \(g\), \(k\), we have
\[h_{\text{R}} =\rho_{\text{R}}(\mathbf{h}_{\text{R}}+\mathbf{h}_{\text{R,MP}})^{\top}\boldsymbol{\Omega}\,\rho_{\text{T}}(\mathbf{h}_{\text{T}}+\mathbf{h}_{\text{T,MP}}) \tag{59}\] \[=\underbrace{\rho_{\text{R}}\rho_{\text{T}}\mathbf{h}_{\text{R}}^{\top}\boldsymbol{\Omega}\mathbf{h}_{\text{T}}}_{\text{Direct channel}}+\underbrace{\rho_{\text{R}}\rho_{\text{T}}\mathbf{h}_{\text{R,MP}}^{\top}\boldsymbol{\Omega}\mathbf{h}_{\text{T}}}_{\text{RX-RIS multipath}}+\underbrace{\rho_{\text{R}}\rho_{\text{T}}\mathbf{h}_{\text{R}}^{\top}\boldsymbol{\Omega}\mathbf{h}_{\text{T,MP}}}_{\text{TX-RIS multipath}}+\underbrace{\rho_{\text{R}}\rho_{\text{T}}\mathbf{h}_{\text{R,MP}}^{\top}\boldsymbol{\Omega}\mathbf{h}_{\text{T,MP}}}_{\text{Triple-bounced path}},\]
where \(\rho_{\text{R}}\), \(\mathbf{h}_{\text{R}}\in\mathbb{C}^{N}\) and \(\mathbf{h}_{\text{R,MP}}\in\mathbb{C}^{N}\) are the channel gain, direct channel vector, and multipath channel vector between the RX and the RIS, respectively. The same applies to the channel between the TX and the RIS. The direct channel vectors can be expressed using steering vectors as \(\mathbf{h}_{\text{T}}=\mathbf{a}(\boldsymbol{\varphi}_{\text{A}})\), \(\mathbf{h}_{\text{R}}=\mathbf{a}(\boldsymbol{\varphi}_{\text{D}})\). We further assume there is no correlation between the elements of the multipath channels \(\mathbf{h}_{\text{R,MP}}\), \(\mathbf{h}_{\text{T,MP}}\), which can then be modeled as i.i.d. random variables [33]. Since the elements of \(\boldsymbol{\Omega}\mathbf{h}_{\text{T}}\) and \(\mathbf{h}_{\text{R}}^{\top}\boldsymbol{\Omega}\) have unit amplitude, the sum of the second and third terms in (59) can be treated as a zero-mean i.i.d. random variable. By ignoring the triple-bounced path (i.e., the fourth component), the Rician model of the RIS path can be approximated as (7).
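The relative size of the ignored term can be checked numerically; the sketch below uses random-phase stand-ins for the direct channels and an assumed Rician factor, and is only meant to illustrate that the triple-bounced term is smaller than the single-bounce multipath terms.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, K_r = 100, 2000, 5.0            # RIS elements; assumed Rician factor

a_T = np.exp(1j * 2 * np.pi * rng.random(N))    # stand-ins for the direct channels
a_R = np.exp(1j * 2 * np.pi * rng.random(N))
omega = np.exp(1j * 2 * np.pi * rng.random(N))  # RIS profile (diagonal of Omega)

sigma = np.sqrt(1.0 / K_r)
ratios = []
for _ in range(trials):
    mp_T = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    mp_R = sigma * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    triple = np.abs(np.sum(mp_R * omega * mp_T))                    # ignored term
    single = np.abs(np.sum(mp_R * omega * a_T) + np.sum(a_R * omega * mp_T))
    ratios.append(triple / single)
print(np.mean(ratios))   # the triple-bounced term stays below the kept terms
```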
## Appendix B
The entries needed to build the matrix \(\boldsymbol{\mathcal{I}}(\boldsymbol{\eta})\) in (21) can be expressed as follows (all remaining derivatives are zero):
\[\frac{\partial\mu_{g,k}}{\partial\alpha_{0}}=\frac{\rho_{0}}{\alpha_{0}}d_{k}(\tau_{0}),\qquad\frac{\partial\mu_{g,k}}{\partial\beta_{0}}=-j\rho_{0}d_{k}(\tau_{0}), \tag{60}\]
\[\frac{\partial\mu_{g,k}}{\partial\tau_{0}}=-j2\pi k\Delta_{f}\rho_{0}d_{k}(\tau_{0}), \tag{61}\]
\[\frac{\partial\mu_{g,k}}{\partial\alpha_{\ell}}=\frac{\rho_{\ell}}{\alpha_{\ell}}d_{k}(\tau_{\ell})a_{g}(\boldsymbol{\vartheta}_{\ell}),\qquad\frac{\partial\mu_{g,k}}{\partial\beta_{\ell}}=-j\rho_{\ell}d_{k}(\tau_{\ell})a_{g}(\boldsymbol{\vartheta}_{\ell}), \tag{62}\]
\[\frac{\partial\mu_{g,k}}{\partial\tau_{\ell}}=-j2\pi k\Delta_{f}\rho_{\ell}d_{k}(\tau_{\ell})a_{g}(\boldsymbol{\vartheta}_{\ell}), \tag{63}\]
\[\frac{\partial\mu_{g,k}}{\partial\xi_{\ell}}=\rho_{\ell}d_{k}(\tau_{\ell})\dot{a}_{g,\xi_{\ell}}(\boldsymbol{\vartheta}_{\ell}), \tag{64}\]
\[\frac{\partial\mu_{g,k}}{\partial\zeta_{\ell}}=\rho_{\ell}d_{k}(\tau_{\ell})\dot{a}_{g,\zeta_{\ell}}(\boldsymbol{\vartheta}_{\ell}), \tag{65}\]
\[\dot{a}_{g,\xi_{\ell}}(\boldsymbol{\vartheta}_{\ell})=\boldsymbol{\omega}_{g}^{\top}\big(e^{j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}\mathbf{t}_{\text{R},\ell}(\boldsymbol{\vartheta}_{\ell})}\odot(j\tfrac{2\pi f_{c}}{c}\mathbf{Z}^{\top}[0,1,0]^{\top})\big), \tag{66}\]
\[\dot{a}_{g,\zeta_{\ell}}(\boldsymbol{\vartheta}_{\ell})=\boldsymbol{\omega}_{g}^{\top}\big(e^{j\frac{2\pi f_{c}}{c}\mathbf{Z}^{\top}\mathbf{t}_{\text{R},\ell}(\boldsymbol{\vartheta}_{\ell})}\odot(j\tfrac{2\pi f_{c}}{c}\mathbf{Z}^{\top}[0,0,1]^{\top})\big). \tag{67}\]
The non-zero elements of the Jacobian matrix \(\mathbf{J}_{\text{S}}\) can be expressed as
\[\frac{\partial\tau_{0}}{\partial\mathbf{p}_{\text{T}}}=\frac{\mathbf{p}_{\text{T}}-\mathbf{p}_{\text{R}}}{\|\mathbf{p}_{\text{T}}-\mathbf{p}_{\text{R}}\|}=-\frac{\partial\tau_{0}}{\partial\mathbf{p}_{\text{R}}}, \tag{68}\]
\[\frac{\partial\tau_{\ell}}{\partial\mathbf{p}_{\text{S}}}=\frac{\mathbf{p}_{\text{S}}-\mathbf{q}_{\ell}}{\|\mathbf{p}_{\text{S}}-\mathbf{q}_{\ell}\|},\ \ \ \text{S}\in\{\text{T},\text{R}\}, \tag{69}\]
\[\frac{\partial\xi_{\ell}}{\partial\mathbf{p}_{\text{S}}}=\frac{\mathbf{r}_{2,\ell}-(\mathbf{r}_{2,\ell}^{\top}\mathbf{t}_{\text{S}})\mathbf{t}_{\text{S}}}{\|\mathbf{p}_{\text{S}}-\mathbf{q}_{\ell}\|},\ \ \ \text{S}\in\{\text{T},\text{R}\}, \tag{70}\]
\[\frac{\partial\zeta_{\ell}}{\partial\mathbf{p}_{\text{S}}}=\frac{\mathbf{r}_{3,\ell}}{\|\mathbf{p}_{\text{S}}-\mathbf{q}_{\ell}\|}-\frac{\mathbf{r}_{3,\ell}^{\top}(\mathbf{p}_{\text{S}}-\mathbf{q}_{\ell})}{\|\mathbf{p}_{\text{S}}-\mathbf{q}_{\ell}\|^{2}}\mathbf{t}_{\text{S}},\ \ \ \text{S}\in\{\text{T},\text{R}\}, \tag{71}\]
\[\frac{\partial\tau_{0}}{\partial B}=\frac{\partial\tau_{\ell}}{\partial B}=1, \tag{72}\]
\[\frac{\partial\alpha_{0}}{\partial\alpha_{0}}=\frac{\partial\beta_{0}}{\partial\beta_{0}}=\frac{\partial\alpha_{\ell}}{\partial\alpha_{\ell}}=\frac{\partial\beta_{\ell}}{\partial\beta_{\ell}}=1, \tag{73}\]
where \(\mathbf{r}_{2,\ell}=[\mathbf{R}_{\ell}]_{:,2}\) and \(\mathbf{r}_{3,\ell}=[\mathbf{R}_{\ell}]_{:,3}\).
## Acknowledgment
This work was supported, in part, by the European Commission through the EU H2020 RISE-6G project under grant 101017011, by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. ORACRG2021-4695, and by the 6G-Cities project from Chalmers.
|
2310.12648 | Towards Real-World Streaming Speech Translation for Code-Switched Speech | Code-switching (CS), i.e. mixing different languages in a single sentence, is
a common phenomenon in communication and can be challenging in many Natural
Language Processing (NLP) settings. Previous studies on CS speech have shown
promising results for end-to-end speech translation (ST), but have been limited
to offline scenarios and to translation to one of the languages present in the
source (_monolingual transcription_).
In this paper, we focus on two essential yet unexplored areas for real-world
CS speech translation: streaming settings, and translation to a third language
(i.e., a language not included in the source). To this end, we extend the
Fisher and Miami test and validation datasets to include new targets in Spanish
and German. Using this data, we train a model for both offline and streaming ST
and we establish baseline results for the two settings mentioned earlier. | Belen Alastruey, Matthias Sperber, Christian Gollan, Dominic Telaar, Tim Ng, Aashish Agarwal | 2023-10-19T11:15:02Z | http://arxiv.org/abs/2310.12648v2 | # Towards Real-World Streaming Speech Translation for Code-Switched Speech
###### Abstract
Code-switching (CS), i.e. mixing different languages in a single sentence, is a common phenomenon in communication and can be challenging in many Natural Language Processing (NLP) settings. Previous studies on CS speech have shown promising results for end-to-end speech translation (ST), but have been limited to offline scenarios and to translation to one of the languages present in the source (_monolingual transcription_).
In this paper, we focus on two essential yet unexplored areas for real-world CS speech translation: streaming settings, and translation to a third language (i.e., a language not included in the source). To this end, we extend the Fisher and Miami test and validation datasets to include new targets in Spanish and German. Using this data, we train a model for both offline and streaming ST and we establish baseline results for the two settings mentioned earlier.
## 1 Introduction
Speech technologies are one of the main applications of machine learning, and are currently deployed in many real-world scenarios. To ensure an adequate user experience, factors other than accuracy need to be taken into account. One of them is the ability to produce an output in real time (streaming settings) with low latency, and another is effectively handling the distinctive characteristics inherent in spoken language, like code-switching. Code-switching (CS) is the phenomenon in which a speaker alternates between multiple languages in a single utterance. Due to globalization (Winata et al., 2022), it is becoming increasingly prevalent in spoken language, not only in bilingual communities but also in monolingual communities.
CS presents a challenge in various natural language processing (NLP) settings, such as automatic speech recognition (ASR), machine translation (MT), and speech translation (ST), due to the inherent complexity of dealing with two source languages, as well as the scarcity of CS training and test data Jose et al. (2020).
Despite the relevance of ST for CS speech, the available literature on the subject is rather limited. Nakayama et al. (2019) investigate the task defined as _monolingual transcription_, i.e., transcribing a CS utterance using words of only one language, hence translating those words that are code-switched. Their work proposes and compares different approaches to this task for Japanese-English CS speech translated into English. Other follow-up work takes a similar approach (see Section 2).
To date, however, certain essential topics, such as translation to a language not present in the CS source or streaming ST, have yet to be explored, despite their critical importance for real-world usage. The primary challenge in translating to a third language stems from the unavailability of datasets with such characteristics. Furthermore, streaming settings present further challenges: achieving a balance between latency, stability and accuracy is crucial for delivering a seamless user experience, as with any streaming task. Besides, CS tasks may require more context than monolingual ones because of the added complexity of language mixing. Thus, addressing the trade-offs between these metrics in CS streaming ST may prove to be more intricate than with monolingual data.
In our work, we resolve the two aforementioned challenges: first, the insufficiency of data and results for translation to a third language, and second, the absence of a baseline for streaming CS ST.
To alleviate the data scarcity in CS tasks, we extend Fisher Cieri et al. (2004) and Bangor Miami CS Deuchar et al. (2014) datasets (combined English and Spanish source and English targets) by incorporating Spanish and German targets in the test and validation sets.1 These additions allow
us to evaluate the performance of our models on monolingual transcription (translation to English or Spanish), but also for the first time in CS ST into a third language (German) setting baseline results.
Furthermore, this study is the first on streaming ST for CS speech; it examines errors in transcripts generated by both offline and streaming models, considering different latency and flickering constraints, and different training techniques such as prefix-sampling. We show that prefix-sampling does not improve the model performance, and that errors at CS points appear in the same proportion in streaming and offline ST. Our work sets baseline results and provides insight into the impact of CS on the performance of different models, helping to identify potential points for future research that can contribute to the advancement of the field. To sum up, the main contributions of our work are:
* We provide baseline results for streaming ST for CS speech, contrary to previous work that focuses on offline settings.
* We provide baseline results to CS ST into a third language, contrary to previous work that focuses on monolingual transcription. To do so, we extend the Fisher-Miami CS dataset, adding Spanish and German targets.
## 2 Related Work
During the past few years, there has been an increasing interest in CS tasks. Prior work has focused on MT Sinha and Thakur (2005); Winata et al. (2021); Zhang et al. (2021); Yang et al. (2020) and ASR Lyu et al. (2006); Ahmed and Tan (2012); Vu et al. (2012); Johnson et al. (2017); Yue et al. (2019). However, the topic of CS in ST has been relatively under-explored, usually concentrating only on monolingual transcription Nakayama et al. (2019); Hamed et al. (2022); Weller et al. (2022), and relying on synthetically generated data Nakayama et al. (2019); Huber et al. (2022).
The first work on CS ST was done by Nakayama et al. (2019). The authors analyse different architectures and training configurations for Japanese-English CS to English monolingual transcription.
Weller et al. (2022) present a similar work but in a different language pair. The authors present a CS dataset with natural English-Spanish CS text and speech sources and English text targets, gathering CS sentences in Fisher and Bangor Miami datasets. With these data, they are able to evaluate ASR and ST, although the ST setting is actually monolingual transcription. The authors explore different architectures through a two-steps training: a pretraining on non-CS data and a fine-tuning on CS data. They find that end-to-end ST models obtain higher accuracy than cascaded ones and that accuracy on CS test sets improves after the fine-tuning step without noticeably impacting performance on non-CS sets.
Later, Hamed et al. (2022) present a corpus for Egyptian Arabic-English CS tasks. The dataset contains text and speech CS sources, and targets in monolingual English and Egyptian Arabic. By combining these sets the authors are able to study ASR (from CS speech to CS text), as well as MT and ST. However, because of the target languages, both the ST and MT settings are actually monolingual transcription and a text-to-text variant of this task.
Finally, Huber et al. (2022) present LAST, a language-agnostic model for ST and ASR that aims to replace acoustic language ID gated pipelines by a unique CS model. However, their work focuses on inter-sentential CS (when a CS happens just at sentence boundaries) using synthetic data.
## 3 Model
We adopt the multimodal model design proposed by Ye et al. (2021) for speech translation (Figure 1). This model supports speech transcription, speech translation, and text translation, and leverages paired data of all three tasks through multitask
Figure 1: Proposed model architecture. The multimodal encoder supports training on both speech translation and text translation data. The tagging scheme is designed to allow generating either the (code-switched) transcript or a (monolingual) translation.
training. Similar to Ye et al. (2021), we extract speech representations using a pretrained wav2vec 2.0 Base model (Baevski et al., 2020)2, which results in 20ms per frame. To compute downsampled speech representations, we apply a stack of three convolutional layers on top of the wav2vec 2.0 output, resulting in 160ms per frame: each layer has a kernel size of 3 and a stride of 2. To extract text representations for multitask text-to-text training, we simply use a 1024-dimensional embedding layer. Next we attach an encoder-decoder Transformer (Vaswani et al., 2017) with pre-layer normalization, a hidden dimension of 1024, dropout of 0.1, five encoder layers and three decoder layers. The input to the encoder is either the downsampled speech representations or the embedded source text. In the decoder, we use 1024-dimensional LSTMs (Hochreiter and Schmidhuber, 1997) instead of self-attention, as this obtained better results in preliminary investigations.
Footnote 2: Specifically, facebook/wav2vec2-base-960h via Hugging Face Transformers (Wolf et al., 2020).
The model is trained in a multi-task fashion, where we sum the losses of the transcription task, the text translation task, and the speech translation task, as well as a CTC loss (Graves et al., 2006) applied on top of the full encoder. Tasks are weighted equally.
Importantly for our work, we use a shared decoder to perform either transcription or translation, with a language tag indicating the desired output language for ST, or the tag <src> to generate a transcript. Note that the transcript will be equivalent to the translation in the source language for monolingual sentences, but a special token for transcripts is needed to account for CS sentences.
To employ our model in a streaming setting, we use the re-translation technique (Niehues et al., 2018; Weller et al., 2021). This technique re-translates the utterance to update its prior prediction as additional information is received. To control the trade-off between latency, flickering, and accuracy, we set a mask on the last \(k\) sub-words of the prior prediction, allowing the model to rewrite only that part of the output. Therefore, a high \(k\) allows the model to rewrite the whole prediction, obtaining a high accuracy but poor latency and flickering scores, and on the contrary, setting \(k=0\) forces the model to commit to the previous prediction, hindering the accuracy but leading to no flickering and the lowest possible latency. Section 5 contains experiments to obtain the appropriate \(k\).
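A minimal sketch of this masking logic is shown below; the translation function and its forced-prefix interface are hypothetical stand-ins for the actual constrained decoding.

```python
def retranslate(prev_output, translate_fn, audio_so_far, k):
    """Re-translation with a commitment mask: all but the last k sub-words of
    the previous output are frozen and forced as a prefix of the new output."""
    committed = prev_output[:max(0, len(prev_output) - k)]
    return translate_fn(audio_so_far, forced_prefix=committed)

# toy stand-in for the model: re-decodes from scratch but keeps the forced prefix
def toy_translate(audio_so_far, forced_prefix):
    fresh = audio_so_far.split()                  # pretend decoding
    return list(forced_prefix) + fresh[len(forced_prefix):]

output = []
for chunk in ["hola", "hola como", "hola como estas amigo"]:
    output = retranslate(output, toy_translate, chunk, k=1)
print(output)   # earlier words stay stable; only the last k may be rewritten
```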
## 4 Datasets
Although our primary target is CS speech, we train our models on both monolingual and CS data due to the scarcity of the latter. In particular, we use the following datasets:
Bangor Miami (Deuchar et al., 2014):The dataset contains recorded conversations between bilingual English/Spanish speakers in casual settings, with a high proportion of naturally occurring code-switched speech. The recordings were obtained using small digital recorders worn on belts, resulting in low audio quality with background noise. We use the splits for CS ST defined by Weller et al. (2022).
Fisher (Cieri et al., 2004):The dataset was collected for ASR by pairing Spanish speakers located in the US and Canada through phone calls. Although it is not a CS-focused dataset, it contains a significant amount of CS utterances due to the speakers being in English-speaking contexts. The recordings were made over the phone in 2004, which makes it a noisy ASR dataset, although less noisy than Miami. We use the splits for CS ST defined by Weller et al. (2022).
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{4}{c}{**Pre-training**} \\ \hline
**Dataset** & **Language** & **Source** & **\#Samples** \\ \hline \multirow{3}{*}{MuST-C} & En-Es & Original & 270 000 \\ & En-De & Original & 234 000 \\ \hline \multirow{6}{*}{CoVoST} & Es-En & Original & 64 351 \\ & De-En & Original & 71 831 \\ \cline{1-1} & En-De & Original & 232 958 \\ \cline{1-1} & Es-De & Synthetic & 64 351 \\ \cline{1-1} & De-Es & Synthetic & 71 831 \\ \hline Fisher & Es-En & Original & 130 600 \\ \hline Miami & Es-En & Original & 6 489 \\ \hline \hline \multicolumn{4}{c}{**Fine-tuning**} \\ \hline
**Dataset** & **Language** & **Source** & **\#Samples** \\ \hline \multirow{3}{*}{Fisher} & En/Es-En & Original & 7 398 \\ & En/Es-Es & Synthetic & 7 398 \\ \cline{1-1} & En/Es-De & Synthetic & 7 398 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the training data used during our two-steps training.
CoVoST (Wang et al., 2020):A multilingual and diversified ST dataset based on the Common Voice project (Ardila et al., 2020). This dataset includes language pairs from multiple languages into English, including low-resource languages.
MuST-C (Di Gangi et al., 2019):A dataset for ST research. It is a large-scale, multi-language dataset that includes speech recordings from English TED Talks and corresponding human transcriptions and translations. The dataset covers translation from English to many languages. The recording context (TED talks) makes it a high-quality, clean dataset.
### Data Collection
Miami and Fisher CS sets consist of a source in CS En/Es, along with CS transcripts and monolingual English transcripts as targets. To expand the range of languages included, we add monolingual Spanish transcripts, as well as a new language not used in the source, namely German. By including this new language, we are able to assess the performance of our models in pure speech translation, as opposed to previous work on monolingual transcription. Hence, we collect data for the Miami and Fisher CS test and validation sets in German and Spanish. The data was translated by professional translators who were native speakers of the respective target languages.
### Data Usage and Preparation
Following (Weller et al., 2022), we divide our experiments in two steps: (1) pre-training on monolingual data and, (2) fine-tuning on code switched data.
During the pretraining we use CoVoST (Es-En, De-En, En-De splits), MuST-C (En-Es, En-De splits) and the non-CS sets in the Fisher and Miami datasets (Es-En). Additionally, we use the MarianMT 3 model from the Hugging Face Transformers package (Wolf et al., 2020) to translate the CoVoST De-En set to Spanish, and the Es-En set to German, obtaining data for the pairs Es-De and De-Es. During the fine-tuning step, we focus on Fisher's code-switched (Es/En-En) training set (\(7398\) samples) and extend it for Es/En-Es and Es/En-De translation using the MarianMT model to translate the English targets to German and Spanish.
Footnote 3: We manually clean the translations afterward.
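For illustration, the target-side translation step can be reproduced with the Transformers package along the following lines; the specific Helsinki-NLP checkpoint is an assumption, as the exact MarianMT model used is not specified.

```python
from transformers import MarianMTModel, MarianTokenizer

# assumed checkpoint for English -> German; an analogous opus-mt-en-es
# model would produce the Spanish targets
name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["I saw her at the market yesterday."],
                  return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```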
We use 200 epochs for the pretraining stage and 100 epochs for finetuning. We use the Adam (Kingma and Ba, 2015) optimizer with \(\alpha=5e{-4}\), \(\beta_{1}{=}0.9\), \(\beta_{2}{=}0.98\). For pretraining, we use an inverted square root learning rate schedule with 500 warm-up steps. For finetuning, we use a tri-stage schedule with 12.5% warm-up steps, 12.5% hold steps, and 75% decay steps.
For the experiments with prefix sampling, we use the same training set but prefix-sample half of the instances, following the approach presented by Niehues et al. (2018). For a summary of the data used in each step, see Table 1.
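A sketch of this sampling step is shown below; the proportional truncation of source frames and target tokens is an assumption made for illustration.

```python
import random

def prefix_sample(pairs, p=0.5, seed=0):
    """Replace a random half of the training pairs by prefix pairs, following
    the approach of Niehues et al. (2018). Proportional truncation of source
    frames and target tokens is an assumption made for this sketch."""
    rng = random.Random(seed)
    out = []
    for n_frames, target in pairs:            # (source length in frames, token list)
        if rng.random() < p:
            ratio = rng.uniform(0.2, 1.0)     # sampled prefix length
            out.append((max(1, int(n_frames * ratio)),
                        target[:max(1, int(len(target) * ratio))]))
        else:
            out.append((n_frames, target))
    return out
```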
## 5 Experiments
Our experiments follow four main directions: (1) Finding a reasonable \(k\) to control re-translation flickering and latency, (2) studying the occurrence of errors around CS switching points, (3) analyzing the usefulness of prefix-sampling and (4) establishing baseline numbers for translation to a third language and for streaming tasks for CS speech, including transcription, monolingual translation, and translation.
To evaluate our models we use three different metrics. To measure the model accuracy we use BLEU (Papineni et al., 2002) with SacreBLEU (Post, 2018) and a beam size of 5. To evaluate the lag between model input and output we use Average Lag (AL; Ma et al., 2019), and to measure the flickering we use Normalized Erasure (NE; Arivazhagan et al., 2020). Additionally, we use WER to evaluate ASR performance.
### Metrics Trade-off and \(k\) Analysis
As described in Section 3, our model uses re-translation (Niehues et al., 2018) to generate a streaming output. Following the re-translation approach, we mask the last \(k\) sub-words of an output when predicting the following one. We evaluate latency, flickering and accuracy metrics for \(k\in\{0,5,10,15,20,25,30,+\infty\}\). As shown in Figure 2, results are consistent for Fisher and Miami datasets and across the different language pairs. All metrics increase together with \(k\). However the gap between \(30\) and \(+\infty\) is much higher in AL and NE than in BLEU. BLEU shows improvements for higher \(k\) but it is more stable than the other metrics. For this reason, we henceforth use \(k=15\), since BLEU scores are close to optimal while NE and AL are still low.
### Code-Switches and Errors in Predictions
We hypothesize that CS points are points of high linguistic uncertainty and, therefore, comparably hard to predict or translate. Hence, words around CS points would tend to be predicted wrong. We analyze this phenomenon for an ASR task, comparing offline and streaming models, with the aim of: (1) confirming or denying that more wrong predictions happen near CS points, and (2) studying how offline or streaming ST affects the conclusion of (1).
We analyze the predicted transcripts of our model on the ASR 4 task on the Fisher CS test set under three different inference constraints: a streaming model with \(k=0\) (which has no flickering and the lowest possible latency), a streaming model with \(k=15\) (which we have found to be a reasonable choice for better accuracy without a critical effect on flickering and latency), and an offline model (which is equivalent to a streaming model with \(k=+\infty\)). We establish a recall-based metric and count a word in the reference transcript as predicted right if the word appears in the predicted transcript, and as predicted wrong otherwise. We study the proportion of words that are predicted right as a function of their distance (in words) to a CS point. Hence, words at a distance of 1 are right before or after a CS point, and so on. To do so, we define the recall \(R(d)\) at distance \(d\) as:
Footnote 4: Note that this can only be evaluated in ASR (not ST), because of the need of a CS target.
\[R(d)=\frac{right\_pred(d)}{right\_pred(d)+wrong\_pred(d)} \tag{1}\]
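A sketch of how \(R(d)\) can be computed is given below; representing each CS point by the index of the first switched word is an assumption made for illustration.

```python
def recall_at_distance(reference, predicted, cs_positions, d):
    """R(d) from (1): among reference words exactly d positions away from the
    nearest code-switch point, the fraction also found in the prediction."""
    pred_words = set(predicted)
    right = wrong = 0
    for i, word in enumerate(reference):
        if min(abs(i - p) for p in cs_positions) != d:
            continue
        if word in pred_words:
            right += 1
        else:
            wrong += 1
    return right / (right + wrong) if (right + wrong) else float("nan")

ref = "yo fui al store ayer por la tarde".split()   # CS at index 3 ("store")
pred = "yo fui al store por la tarde".split()
print(recall_at_distance(ref, pred, cs_positions=[3], d=1))   # 0.5 ("ayer" missed)
```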
\begin{table}
\begin{tabular}{l l c c c c c|c c c c} \hline \hline & & & \multicolumn{3}{c}{**Fisher**} & \multicolumn{3}{c}{**Miami**} & \\ & & \multicolumn{3}{c}{**CS**} & \multicolumn{3}{c}{**Mono.**} & \multicolumn{3}{c}{**CS**} & \multicolumn{3}{c}{**Mono.**} \\ & Model & En & Es & De & En & De & En & Es & De & En & De \\ \hline \multirow{3}{*}{BLEU(\(\uparrow\))} & Fisher CS & 23.3 & 30.3 & 12.2 & 22.9 & 12.8 & 19.7 & 16.0 & 6.4 & 11.9 & 5.9 \\ & Fisher CS w/ prefixes & 23.7 & 30.9 & 12.2 & 22.0 & 13.0 & 22.1 & 18.3 & 7.0 & 13.9 & 6.7 \\ & (Weller et al., 2022) \(\dagger\) & 25.6 & - & - & 26.1 & - & 14.7 & - & - & 17.6 & - \\ \hline \multirow{3}{*}{AL(\(\downarrow\))} & Fisher CS & 0.6 & 0.5 & 0.6 & 0.5 & 0.5 & 0.5 & 0.4 & 0.4 & 0.4 & 0.3 \\ & Fisher CS w/ prefixes & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 & 0.4 & 0.4 \\ \hline \multirow{3}{*}{NE(\(\downarrow\))} & Fisher CS & 1.2 & 1.2 & 1.3 & 1.1 & 1.4 & 1.2 & 1.2 & 1.4 & 1.0 & 1.2 \\ & Fisher CS w/ prefixes & 1.2 & 1.0 & 1.2 & 1.2 & 1.0 & 1.0 & 1.0 & 1.1 & 1.6 & 0.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: BLEU, Average Lag (seconds), and Normalized Erasure scores in streaming **Speech Translation**, for trainings with and without prefix sampling. In every experiment we set \(k=15\). \(\dagger\): Best results reported by Weller et al. (2022) in offline ST.
Figure 2: BLEU, Normalized Erasure and Average Lag scores under different streaming constraints. In each prediction step, the model has to commit to the previous prediction except for the last \(k\) tokens (sub-words). We evaluate the performance of the model for \(k\in\{0,5,10,15,20,25,30,+\infty\}\).
The results in Figure 3 show that CS points impact the model's accuracy. Words at a distance of 1 are predicted wrong in the highest proportion for every model. However, starting from \(d=2\), the recall increases only slightly, or stays close to constant, so the effect of a CS does not last long. Second, we see that although the streaming setting with \(k=0\) has an overall worse recall, having less available context when making the predictions does not affect words close to CS points more than those that are not. In particular, the drop between \(d=2\) and \(d=1\) is lower for the streaming model with \(k=0\). This indicates that, contrary to what we expected, the lack of context in streaming ST does not have a disproportionately negative impact on CS points, and therefore the model needs the same amount of context to properly predict CS and non-CS words.
### Usefulness of Prefix-sampling
A frequently used technique to train streaming models consists of sampling prefixes from part of the training data. We study the impact of this technique on accuracy, latency, and flickering metrics, as well as on errors around CS points.
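A minimal sketch of prefix sampling is shown below; the proportional alignment of source and target cut points is an assumption for illustration, and the actual alignment strategy may differ.

```python
import random

def maybe_prefix(src_tokens, tgt_tokens, p=0.5, rng=random):
    """With probability p, replace a training pair by an aligned prefix pair."""
    if rng.random() < p and len(src_tokens) > 1:
        frac = rng.uniform(0.2, 0.9)                 # cut point (assumption)
        s = max(1, round(len(src_tokens) * frac))
        t = max(1, round(len(tgt_tokens) * frac))    # proportional target cut (assumption)
        return src_tokens[:s], tgt_tokens[:t]
    return src_tokens, tgt_tokens
```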
To analyze the usefulness of this training strategy, we compare a model trained on the Fisher CS set against a model trained on the same set but with half of the complete utterances replaced by prefixes. As shown in Table 2, prefix-sampling produced an improvement in BLEU scores, especially on the Miami test sets (up to +2.4). Surprisingly, this training strategy, which aims to improve latency and flickering, worsens the Average Lag scores and does not significantly impact Normalized Erasure.
Furthermore, we study whether prefix sampling impacts the accuracy of the predictions around CS points. In Figure 4, we use the same _recall_ metric as in Section 5.2 to compare both models. We see that prefix training degrades the accuracy of the predictions around CS points, especially for words at a distance of 1, where the recall drops from 0.51 with standard training to 0.45 with prefix training.
Figure 4: Analysis of errors in the prediction of words for different distances to a CS point, with and without prefix-sampling the training set.
Figure 3: Analysis of errors in the prediction of words for different distances to a CS point under different inference constraints.
\begin{table}
\begin{tabular}{l l c c c c c|c c c c c} \hline \hline & & \multicolumn{5}{c|}{**Fisher**} & \multicolumn{5}{c}{**Miami**} \\ & & \multicolumn{3}{c}{**CS**} & \multicolumn{2}{c|}{**Mono.**} & \multicolumn{3}{c}{**CS**} & \multicolumn{2}{c}{**Mono.**} \\ & Model & En & Es & De & En & De & En & Es & De & En & De \\ \hline \multirow{2}{*}{BLEU(\(\uparrow\))} & Fisher CS & 41.8 & 45.8 & 24.27 & 35.5 & 23.7 & 49.4 & 41.8 & 19.9 & 31.7 & 19.5 \\ & Fisher CS w/ prefixes & 41.8 & 44.1 & 22.9 & 35.7 & 22.5 & 48.1 & 38.7 & 19.1 & 32.2 & 18.9 \\ \hline \multirow{2}{*}{AL(\(\downarrow\))} & Fisher CS & 0.4 & 0.4 & 0.4 & 0.4 & 0.4 & 0.2 & 0.2 & 0.2 & 0.2 & 0.2 \\ & Fisher CS w/ prefixes & 0.4 & 0.4 & 0.4 & 0.4 & 0.4 & 0.2 & 0.2 & 0.2 & 0.2 & 0.2 \\ \hline \multirow{2}{*}{NE(\(\downarrow\))} & Fisher CS & 0.06 & 0.04 & 0.06 & 0.04 & 0.04 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & Fisher CS w/ prefixes & 0.04 & 0.04 & 0.06 & 0.04 & 0.04 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline \hline \end{tabular}
\end{table}
Table 3: BLEU, Average Lag (seconds), and Normalized Erasure scores in streaming **Text Translation**, for trainings with and without prefix sampling. In every experiment we set \(k=15\).
### Performance Analysis
After the experiments described in previous sections, we have found that using prefix-sampling does not lead to a noticeable performance improvement. Furthermore, we have seen that masking the last \(15\) sub-words in each step during the translation of a sentence gives an optimal trade-off between the different evaluation metrics. Since there is no previous work on CS streaming ST, we cannot fairly compare our results to previous work, and therefore we aim to set baseline numbers. However, we compare the BLEU scores of our model to the scores obtained by [20] for offline ST to English (Table 2), to analyse whether the performance drop between offline and streaming ST is reasonable. As expected, our streaming model suffers a performance degradation on most of the test sets compared to the offline model in previous work. However, CS ST to English on the Miami dataset obtains an improvement of up to +7.4 BLEU.
When analyzing the performance of German translation, we see an important drop compared to English and Spanish translation (both present in the source). CS Speech Translation is commonly studied and evaluated only for translation to languages present in the source; we therefore believe that the performance drop in German is a relevant finding that shows the importance of not relying just on monolingual transcription when aiming for CS ST, and it sets a baseline result for further work on translation to a third language. Regarding Average Lag and Normalized Erasure, we present our results as a baseline, since previous work using the Fisher and Miami datasets was done in offline tasks. However, to obtain an estimate of the quality of our model on these metrics, we compare our scores with the ones obtained by [20] on MuST-C data, which are over 1 for both metrics. In Table 2, we can see that we obtain similar scores, and therefore we conclude that the performance of our model is reasonable regarding flickering and lag.
### Results in Machine Translation and Automatic Speech Recognition
Although the main scope of this work is Speech Translation, we evaluate our models on Machine Translation and Automatic Speech Recognition too. We can easily do this because the model we use is multitask and allows us to work in each of the three settings by switching the input type and properly defining a tag to generate the output.
In Table 3 we can see the results obtained for MT. We see that, as in ST, prefix sampling does not improve AL and NE scores. Furthermore, in the case of MT, using prefixes degrades the performance of the majority of the models. Regarding BLEU scores, we observe that, as in ST, tasks that consist of translating to a language present in the source obtain much higher accuracy than those where we translate to German.
In Table 4 we see the results for the ASR setting. In this case, prefix sampling does work as expected regarding AL and NE scores, with the prefix-trained models obtaining the lower scores. However, it still has a negative impact on the performance of the models, especially on the Miami test sets. Regarding WER, the scores obtained on the Miami dataset are much worse than those obtained on the Fisher one, a pattern that we have not observed in the translation tasks. This could be due to the fact that during pretraining, the data used for translation tasks comes from many different datasets, allowing the model to learn to generalize properly. However, the available data with CS targets corresponds mostly to the Fisher dataset (130 600 samples), compared to only 6 487 samples from the Miami dataset (see Table 1 for more details on the data distribution).
\begin{table}
\begin{tabular}{l l c c|c c} \hline \hline & & \multicolumn{2}{c|}{**Fisher**} & \multicolumn{2}{c}{**Miami**} \\ & Model & **CS** & **Mono** & **CS** & **Mono** \\ \hline \multirow{2}{*}{WER(\(\downarrow\))} & Fisher CS & 34.9 & 29.8 & 63.3 & 63.5 \\ & Fisher CS w/ prefixes & 35.4 & 29.9 & 60.6 & 58.1 \\ \hline \multirow{2}{*}{AL(\(\downarrow\))} & Fisher CS & 1.0 & 0.8 & 0.8 & 0.6 \\ & Fisher CS w/ prefixes & 0.5 & 0.4 & 0.5 & 0.3 \\ \hline \multirow{2}{*}{NE(\(\downarrow\))} & Fisher CS & 1.2 & 1.0 & 1.2 & 1.1 \\ & Fisher CS w/ prefixes & 1.1 & 0.8 & 1.2 & 0.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: WER, Average Lag (seconds), and Normalized Erasure scores in streaming **Automatic Speech Recognition**, for trainings with and without prefix sampling. In every experiment we set \(k=15\).
Conclusions
In this work, we have tackled two open ends in CS ST: translation to a third language and streaming settings. To do so, we have trained offline and streaming models for direct translation and transcription of CS speech. Furthermore, we have extended the Fisher and Miami test and validation sets with new Spanish and German targets. By doing this, we have been able to analyse not only monolingual transcription, but also pure translation. We have observed a drop of up to 18 BLEU points between the two settings, showcasing the importance of not relying on monolingual transcription when aiming for ST models, as has commonly been done in previous work. Given the greater complexity of translating to a third language compared to monolingual translation, we think that incorporating additional data would be necessary to tackle the accuracy drop. However, since natural code-switched data is limited and generating synthetic data is beyond the scope of this study, we leave this for future research.
To summarize, our work presents new data, an in-depth analysis of the impact of CS on the predictions, and results for streaming CS Speech Translation and translation to a third language, which can serve as a baseline for future work in a field that, although relevant, is still far from solved.
## Limitations
Our work is limited to high-resource languages such as English, German, and Spanish. Therefore, further work needs to be done on low-resource languages in order to achieve real-world CS translation.
|
2304.08957 | Climate uncertainty impacts on optimal mitigation pathways and social
cost of carbon | Emissions pathways used in climate policy analysis are often derived from
integrated assessment models. However, such emissions pathways do not typically
include climate feedbacks on socioeconomic systems and by extension do not
consider climate uncertainty in their construction. We use a well-known
cost-benefit integrated assessment model, the Dynamic Integrated
Climate-Economy (DICE) model, with its climate component replaced by the
Finite-amplitude Impulse Response (FaIR) model (v2.1). The climate uncertainty
in FaIR is sampled with an ensemble that is consistent with historically
observed climate and Intergovernmental Panel on Climate Change (IPCC) assessed
ranges of key climate variables such as equilibrium climate sensitivity. Three
scenarios are produced: a pathway similar to the "optimal welfare" scenario of
DICE that has similar warming outcomes to current policies, and pathways that
limit warming to "well-below" 2C and 1.5C with low overshoot, in line with
Paris Agreement long-term temperature goals. Climate uncertainty alone is
responsible for a factor of five variation (5-95% range) in the social cost of
carbon in the 1.5C scenario. CO2 emissions trajectories resulting from the
optimal level of emissions abatement in all pathways are also sensitive to
climate uncertainty, with 2050 emissions ranging from -12 to +14 GtCO2/yr in
the 1.5C scenario. Equilibrium climate sensitivity and the strength of
present-day aerosol effective radiative forcing are strong determinants of
social cost of carbon and mid-century CO2 emissions. This shows that narrowing
climate uncertainty leads to more refined estimates for the social cost of
carbon and provides more certainty about the optimal rate of emissions
abatement. Including climate and climate uncertainty in integrated assessment
model derived emissions scenarios would address a key missing feedback in
scenario construction. | Christopher J. Smith, Alaa Al Khourdajie, Pu Yang, Doris Folini | 2023-04-18T12:47:51Z | http://arxiv.org/abs/2304.08957v3 | # Climate uncertainty impacts on optimal mitigation pathways and social cost of carbon
###### Abstract
Emissions pathways used in climate policy analysis are often derived from integrated assessment models. However, such emissions pathways do not typically include climate feedbacks on socioeconomic systems and by extension do not consider climate uncertainty in their construction. We use a well-known cost-benefit integrated assessment model, the Dynamic Integrated Climate-Economy (DICE) model, with its climate component replaced by the Finite-amplitude Impulse Response (FaIR) model (v2.1). The climate uncertainty in FaIR is sampled with an ensemble that is consistent with historically observed climate and Intergovernmental Panel on Climate Change (IPCC) assessed ranges of key climate variables such as equilibrium climate sensitivity. Three scenarios are produced: a pathway similar to the "optimal welfare" scenario of DICE that has similar warming outcomes to current policies, and pathways that limit warming to "well-below" 2\({}^{\circ}\)C and 1.5\({}^{\circ}\)C with low overshoot, in line with Paris Agreement long-term temperature goals. Climate uncertainty alone is responsible for a factor of five variation (5-95% range) in the social cost of carbon in the 1.5\({}^{\circ}\)C scenario. CO2 emissions trajectories resulting from the optimal level of emissions abatement in all pathways are also sensitive to climate uncertainty, with 2050 emissions ranging from -12 to +14 GtCO2 yr\({}^{-1}\) in the 1.5\({}^{\circ}\)C scenario. Equilibrium climate sensitivity and the strength of present-day aerosol effective radiative forcing are strong determinants of social cost of carbon and mid-century CO2 emissions. This shows that narrowing climate uncertainty leads to more refined estimates for the social cost of carbon and provides more certainty about the optimal rate of emissions abatement. Including climate and climate uncertainty in integrated assessment model derived emissions scenarios would address a key missing feedback in scenario construction.
## 1 Introduction
Integrated Assessment Models (IAMs) can be categorized into two broad types: process-based (PB-IAMs) and cost-benefit (CB-IAMs) [1]. PB-IAMs model the energy system, technology, economy, agricultural productivity and land use across a number of world regions, are used to construct possible future emissions scenarios, and have extensive policy reach [2], partly as a consequence of their ubiquity across IPCC reports [3]. PB-IAMs produced the Shared Socioeconomic Pathways (SSPs) used to drive Earth System model projections of future climate [4], providing a large base of model evidence to the Intergovernmental Panel on Climate Change (IPCC) Working Group 1 (WG1) report. Analysis of future potential technological and social developments in a large number of PB-IAMs are assessed in IPCC Working Group 3 (WG3) [3].
CB-IAMs are simpler and often used to model climate change effects on the global economy at a macro level. One aspect in which CB-IAMs have had extensive policy reach is in determining the social cost of carbon (SCC), describing the marginal time-discounted climate damages suffered by society for each additional ton of CO\({}_{2}\) emitted [1]. CB-IAMs perform a cost-benefit analysis that balances the foregone present-day economic consumption (which, under the current global energy mix, is CO\({}_{2}\)-intensive) that is instead invested in emissions abatement technologies against the benefits of future avoided climate damages from warming. The SCC forms a central component of climate policy in several countries, most notably the United States [5]. In a hypothetical efficient market, the SCC could be used to set the optimal global carbon price or carbon taxation level.
A CB-IAM requires a simple climate module as an integral part of the model in order to calculate global warming and hence climate damages. While their model dynamics are highly aggregated and parameterised, CB-IAMs tend to include a two-way coupling between emissions and climate, which is not often present in PB-IAMs. Additionally, the relative simplicity of CB-IAMs means that an optimal solution (e.g. from an iterative optimization process) can be found relatively quickly. Therefore, uncertainty analysis can be undertaken by varying model parameters and re-running many times using variance-based sensitivity analyses or Monte Carlo sampling [6, 7]. The properties of economic-climate coupling and efficiency make CB-IAMs useful tools for exploring the impact of climate uncertainty on emissions scenarios and SCC.
Despite their central importance in CB-IAMs, it has recently been observed that climate module components in these models are performing poorly with respect to Earth System models and observations [8]. CB-IAM climate modules can be improved if model parameters are better calibrated [9], though key Earth System processes such as the carbon cycle feedback are often missing [10]. As climate damages (and therefore SCC) in CB-IAMs depend on global mean surface temperature, it is important to use an appropriate simple climate model within a CB-IAM to prevent biased estimates of SCC [8].
A fitness-of-purpose test for simple climate models was developed as part of the IPCC's Sixth Assessment Report (AR6) WG1, supported by the Reduced Complexity Model Intercomparison Project (RCMIP) [11, 12, 13, 14]. This test evaluates the ability of simple climate models to reproduce historically observed climate change and expert assessments of emergent climate variables, including the equilibrium climate sensitivity (ECS). Models are evaluated on how well they span the distribution of assessed uncertainty across several climate variables, with the ultimate purpose of being used to provide climate projections from PB-IAM emissions scenarios from WG3 [3, 15]. The IPCC AR6 determined that an appropriately calibrated probabilistic ensemble of the FaIR simple climate model [16, 17, 18] was fit for purpose [13]. While FaIR was not the only simple climate model to pass this test, it is structurally simple enough to be used inside the optimization code of a CB-IAM [19, 20].
An additional consideration for SCC is that of uncertainty in climate. Several climate variables including ECS and the magnitude of present day aerosol forcing have large uncertainty bounds [13] and varying the climate response in CB-IAMs can lead to differing estimates of the SCC [5, 21, 22]. We extend this previous work by producing a systematic assessment of climate uncertainty using FaIR coupled to the DICE-2016 CB-IAM, focusing on allowable CO\({}_{2}\) emissions under Paris Agreement consistent mitigation scenarios in addition to the SCC.
## 2 Methods
### DICE integrated assessment model
DICE-2016R is well documented [23, 24], so we restrict the discussion to modifications made in this work.
We reduce the model timestep in DICE from 5 years to 3 years, and use 2023 as the first period (updated from 2015 in DICE-2016R). We run DICE to 2500 for a total of 160 periods (DICE-2016R runs to 2510 for a total of 100 periods). A 3-year time step allows for more responsive emissions reductions in the near term, without significantly adding to the computational burden.
Gross world economic output \(Y\) is determined with a Cobb-Douglas production function
\[Y(t)=A(t)K(t)^{\gamma}L(t)^{1-\gamma} \tag{1}\]
where \(K\) is global capital stock, \(L\) is global labour stock, \(\gamma=0.3\) is the output elasticity to capital and \(A\) is total factor productivity. \(t=1\ldots 160\) is the period. \(L(t)\) is assumed to scale proportionally with global population.
The projections of world population from DICE-2016R are updated with the median projection of 10,000 scenarios from the Resources For the Future Socioeconomic Pathways (RFF-SPs) [5, 25]. The RFF-SPs run to 2300, which we extend to 2500 by taking the average growth rate over 2250-2300 in each projection and linearly declining this growth rate to zero by 2500. In the median RFF-SP projection, global population peaks in 2116 at 11.2bn, declining to 7.3bn in 2300 and 4.9bn in 2500. This population trajectory is substantially different to DICE-2016R, which assumes an asymptotic convergence to 11.5bn by 2500.
Global capital stock \(K(t)\) and total global product \(Y(t)\) are updated to use 2019 figures from the International Monetary Fund (IMF) reported in 2017\$ and re-indexed to give \(K=\$341\)tn and \(Y=\$133\)tn for 2023 in 2020\$. Total factor productivity \(A(t)\) in 2023 is calculated by rearrangement of eq. (1) using the re-indexed 2019 estimates of \(K(t)\) and \(Y(t)\) from the IMF data and \(L(t)\) from the RFF-SP timeseries.
CO\({}_{2}\) emissions from fossil fuel and industrial processes (\(E_{\text{FFI}}\)) are given by
\[E_{\text{FFI}}(t)=\sigma(t)Y(t)(1-\mu(t)) \tag{2}\]
where \(\sigma(t)\) is the emissions intensity of GDP [kg CO\({}_{2}\) \(\$^{-1}\)]. \(\sigma(t)\) includes a baseline improvement in energy efficiency over time in the absence of any climate policy. We update \(E_{\text{FFI}}\) to be 36.6 Gt CO\({}_{2}\) yr\({}^{-1}\) in 2023, which is the estimate of 2022 fossil fuel emissions from the Global Carbon Project (GCP) [26].
\(\mu(t)\) is the emissions abatement fraction. In DICE-2016R, net negative emissions (\(\mu>1\)) are not allowed until 2160. We relax this assumption, allowing net zero CO\({}_{2}\) emissions (\(\mu=1\)) in 2040 and net negative emissions thereafter. While the feasibility of achieving net zero CO\({}_{2}\) emissions in 2040 is debatable [27, 28, 29], it is later than the earliest net zero year (2037) from PB-IAM scenarios in the IPCC WG3 database [3, 30]. In order to construct sensible transition pathways, we impose an upper limit of \(\mu(t)=0.15t\) for \(1\leq t\leq 7\) and retain DICE-2016R's maximum allowable abatement of \(\mu(t)=1.2\) for \(t\geq 8\). We use \(\mu=0.15\) in 2023 rather than DICE-2016R's \(\mu=0.03\) in 2015. A present-day emissions abatement level of 15% can be justified on the basis that some limited emissions mitigation has occurred. Around 10% of global primary energy supply is renewable [31], and a significant coal-to-gas shift has occurred over the last 30 years in the energy sector.
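Eqs. (1) and (2) are simple enough to state directly in code; the sketch below uses illustrative parameter names and inputs, not the calibrated DICE parameterisation.

```python
GAMMA = 0.3  # output elasticity to capital

def gross_output(tfp, capital, labour, gamma=GAMMA):
    """Eq. (1): Cobb-Douglas gross world product."""
    return tfp * capital ** gamma * labour ** (1 - gamma)

def ffi_emissions(sigma, output, mu):
    """Eq. (2): fossil fuel and industry CO2 emissions after abatement fraction mu."""
    return sigma * output * (1.0 - mu)
```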
Total CO\({}_{2}\) emissions are given by \(E=E_{\text{FFI}}+E_{\text{AFOLU}}\). \(E_{\text{AFOLU}}\) is the CO\({}_{2}\) emissions from agriculture, forestry and other land use (AFOLU). DICE-2016R uses an exogenous pathway of AFOLU CO\({}_{2}\) emissions. We replace this with a regression-based relationship of \(E_{\text{AFOLU}}\) with \(E_{\text{FFI}}\) and \(t\) that is derived from 1202 PB-IAM scenarios from the IPCC WG3 database [30]:
\[E_{\text{AFOLU}}=(1.54+0.0464E_{\text{FFI}}-0.189t)\left(1-\frac{1}{1+e^{-(t-35)}}\right). \tag{3}\]
The second term in eq. (3) ramps down AFOLU emissions from close to their 2100 levels to close to zero by 2150 (\(t=35\) is model year 2125) and is similar to the linear phase out of AFOLU emissions used in post-2100 extensions to the Shared Socioeconomic Pathways (SSPs) [32].
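In code, eq. (3) reads as below; the logistic factor ramps AFOLU emissions to approximately zero around \(t=35\) (model year 2125).

```python
import math

def afolu_emissions(e_ffi, t):
    """Eq. (3): AFOLU CO2 emissions given fossil emissions e_ffi and period t."""
    ramp = 1.0 - 1.0 / (1.0 + math.exp(-(t - 35)))
    return (1.54 + 0.0464 * e_ffi - 0.189 * t) * ramp
```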
### The calibrated FaIR v2.1 climate model
FaIR is described in refs. [16, 17, 18]. Unlike the DICE-2016R climate module, FaIR includes carbon cycle feedbacks simulating the declining efficiency of land and ocean carbon sinks (increasing airborne fraction) with increasing emissions of CO\({}_{2}\).
We produce a 1001 member posterior sample of FaIR parameters from a 1.5 million member prior ensemble. The 1001 ensemble members simultaneously span IPCC assessed ranges of ECS (e.g. 90% of the distribution lying within 2–5\({}^{\circ}\)C), transient climate response (TCR), ocean heat content change from 1971-2018, global mean surface temperature from 1995-2014 relative to 1850-1900, aerosol effective radiative forcing (ERF; 2005-2014 relative to 1750), CO\({}_{2}\) concentrations in 2014 and future warming projected under SSP2-4.5 in 2081-2100. We verify that FaIR reproduces historical observed warming including its uncertainty (fig. 1a) and present-day CO\({}_{2}\) atmospheric concentrations (fig. 1b) when run with historical emissions from 1750 at a 3-year timestep.
As DICE only models CO\({}_{2}\) emissions, non-CO\({}_{2}\) emissions are treated as an external forcing so that the total forcing \(F=F_{\text{CO}_{2}}+F_{\text{ext}}\). To generate \(F_{\text{ext}}\) for our scenarios we run FaIR offline using the 1001-member
Figure 1: Historical and present-day climate state projected with the FaIR model. (a) Historical global mean surface temperature in FaIR (5–95% range in grey shading, median in black) compared to the IPCC’s best estimate time series (red) from Gulev et al.[33]. Temperatures use a baseline of 0.85\({}^{\circ}\)C above pre-industrial for 1995–2014, following IPCC. Widening spread near the beginning of the time series relates to observational uncertainty in present-day warming included in the ensemble. (b) Distribution of atmospheric CO\({}_{2}\) concentration at the start of 2023 from FaIR initialised in 1750 (grey histogram) compared to NOAA’s global mean surface dataset (red line). Start of year 2023 CO\({}_{2}\) concentrations were estimated from extrapolating the 12-month trend value from December 2022 forward for half a month. Data was obtained from [https://gml.noaa.gov/webdata/ccgg/trends/co2/co2_mm_gl.txt](https://gml.noaa.gov/webdata/ccgg/trends/co2/co2_mm_gl.txt) (accessed 3 April 2023).
posterior ensemble under SSP2-4.5, SSP1-2.6 and SSP1-1.9 emissions for the "optimal", 2\({}^{\circ}\)C and 1.5\({}^{\circ}\)C scenarios respectively for 1750-2500 [34]. For 2023 onwards we export \(F_{\rm ext}\) from each ensemble member and use this as an exogenous input to the DICE runs. This captures uncertainty in the strength of non-CO\({}_{2}\) forcing, including aerosols, but not uncertainties in its time evolution. We use GCP CO\({}_{2}\) emissions from 1750-2022 and non-CO\({}_{2}\) emissions from the RCMIP dataset [11, 12, 35]. For CO\({}_{2}\), we harmonize [36] the emissions to ensure a smooth transition between the GCP historical record and the SSP future.
FaIR v2.1 uses the Meinshausen et al. [32] relationship of ERF from concentrations of CO\({}_{2}\), CH\({}_{4}\) and N\({}_{2}\)O which includes radiative band overlaps between gases. As DICE only models CO\({}_{2}\) concentrations explicitly we revert to the logarithmic formula for CO\({}_{2}\) forcing [37]
\[F_{\rm CO_{2}}=F_{\rm 2\times CO_{2}}\frac{\log(C_{\rm CO_{2}}/C_{\rm CO_{2}, \rm ref})}{\log 2} \tag{4}\]
where \(C_{\rm CO_{2}}\) is the CO\({}_{2}\) concentration in parts per million volume (ppm) and \(C_{\rm CO_{2},\rm ref}\) is the pre-industrial concentration. \(F_{\rm 2\times CO_{2}}\) is the ERF from a doubling of CO\({}_{2}\) above pre-industrial concentrations. To transition from the Meinshausen formula to the logarithmic formula we calculate an effective \(F_{\rm 2\times CO_{2}}\) from each historical ensemble member to use in the corresponding DICE simulation by rearranging (4) and using 2023 values of \(F_{\rm CO_{2}}\) and \(C_{\rm CO_{2}}\).
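Both eq. (4) and the rearrangement used to back out an effective \(F_{\rm 2\times CO_{2}}\) for each ensemble member can be sketched as follows:

```python
import math

def co2_forcing(f_2x, c_co2, c_ref):
    """Eq. (4): CO2 effective radiative forcing [W/m2]."""
    return f_2x * math.log(c_co2 / c_ref) / math.log(2)

def effective_f_2x(f_co2_2023, c_co2_2023, c_ref):
    """Rearrangement of Eq. (4) to recover F_2xCO2 from 2023 values."""
    return f_co2_2023 * math.log(2) / math.log(c_co2_2023 / c_ref)
```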
For computing the temperature response to ERF, FaIR uses an impulse-response formulation of the well-known \(n\)-layer energy balance model [38]. We use \(n=3\), expected to be sufficient to capture short- and long-term climate responses to forcing [18, 39]. Results from the offline historical FaIR runs are saved out for 2023 and used as initial conditions for DICE. The temperatures of the three ocean layers in 2023 are re-baselined such that the uppermost layer (a proxy for global mean near-surface air temperature) is defined to be 0.85\({}^{\circ}\)C above pre-industrial over the 1995-2014 mean, this being the best estimate assessed warming in the IPCC AR6 WG1 [33] and following the treatment of scenario assessment in IPCC AR6 WG3 [3, 15, 30]. The other two ocean layers are adjusted by the same amount that was required to fix the uppermost layer at 0.85\({}^{\circ}\)C, maintaining relative differences.
FaIR uses four atmospheric boxes to model CO\({}_{2}\) concentrations. The carbon mass in each box is also saved out of the historical run and used for initialising DICE in 2023. The sum of the atmospheric boxes (a mass anomaly above pre-industrial) and the pre-industrial mass (a probabilistic parameter sampled in ref. [34]) gives the initial atmospheric CO\({}_{2}\) concentration at the start of 2023 (fig. 1b).
### Scenario construction
The three scenarios (Nordhaus' "optimal", well-below 2\({}^{\circ}\)C and 1.5\({}^{\circ}\)C-low overshoot) are differentiated solely by their discount parameters and the SSP scenario chosen to represent their non-CO\({}_{2}\) forcing.
DICE uses Ramsey-style discounting [40] to express future values in today's equivalents. The social discount rate \(r\) is
\[r=\rho+\eta g \tag{5}\]
where \(\rho\) is the pure rate of time preference, \(\eta\) is the elasticity of marginal utility of consumption and \(g\) is per-capita growth in consumption. In Nordhaus' "optimal" scenario we use the default DICE-2016R parameters of \(\rho=1.5\%\) and \(\eta=1.45\), resulting in social discount rates around 3.1%. The 2\({}^{\circ}\)C scenario uses \(\rho=\eta=0.35\%\) and the 1.5\({}^{\circ}\)C scenario uses \(\rho=\eta=0.12\%\), resulting in very low social discount rates centred around 1.4% and 0.6% respectively. These parameters have been selected solely to achieve the goal of constructing scenarios that meet the Paris Agreement targets and are not necessarily constructed to be economically meaningful.
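Eq. (5) in code; since \(g\) is endogenous and varies across ensemble members and scenarios, the growth rate below is illustrative only.

```python
def ramsey_rate(rho, eta, g):
    """Eq. (5): social discount rate from time preference, elasticity and growth."""
    return rho + eta * g

# "Optimal" scenario defaults with an illustrative g of 1.1% per year:
print(f"{ramsey_rate(rho=0.015, eta=1.45, g=0.011):.1%}")  # ~3.1%
```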
## 3 Results
### CO\({}_{2}\) emissions pathways
Figure 2 shows the headline projections for the three scenarios, which are summarized in table 1. In each scenario, a wide range of allowable CO\({}_{2}\) emissions consistent with the ensemble warming classification is shown. The Nordhaus "optimal" pathway produces a level of total CO\({}_{2}\) emissions ranging from 5-41 Gt CO\({}_{2}\) yr\({}^{-1}\) in 2100 (5-95% range), with a relatively smaller spread in 2050. In contrast, the 2\({}^{\circ}\)C and 1.5\({}^{\circ}\)C
scenarios show larger spreads in their 2050 CO\({}_{2}\) emissions (2-24 and -14 to +12 Gt CO\({}_{2}\) yr\({}^{-1}\) respectively). This suggests that climate uncertainty alone can either demand high levels of net negative emissions or permit substantial residual positive emissions in mid-century. By the end of the century, a majority of 1.5\({}^{\circ}\)C scenarios approach the maximum abatement level allowed in DICE (120% of gross emissions), evidenced by the 5th and 50th percentiles both sitting at the same -23 Gt CO\({}_{2}\) yr\({}^{-1}\) level.
The observation that all 1.5\({}^{\circ}\)C and 2\({}^{\circ}\)C pathways follow the emissions abatement upper bound of \(\mu(t)=0.15t\) (emissions lower bound) for the first few periods (fig. 2a) demonstrates that decarbonizing as rapidly as possible in the near term is welfare-optimal under Paris Agreement long-term temperature constraints.
### Timing of net zero CO\({}_{2}\)
The 1.5\({}^{\circ}\)C scenario reaches net zero CO\({}_{2}\) emissions with an ensemble median year of 2054, which is consistent with the C1 scenario category of IPCC AR6 WG3. The well-below 2\({}^{\circ}\)C ensemble has a median net zero CO\({}_{2}\) emissions year of 2077, which is a little later than the IPCC's C3 scenario category. The "optimal" ensemble does not reach net zero CO\({}_{2}\) emissions this century, but does reach net zero with a median year of 2129. This demonstrates the utility of extending scenarios beyond 2100 to consider longer-term impacts.
### Global mean surface temperature
Global mean surface temperature reaches 2.9\({}^{\circ}\)C above pre-industrial in the "optimal" pathway, peaking at 3.1\({}^{\circ}\)C in the 22nd century (fig. 2b). The 2\({}^{\circ}\)C and 1.5\({}^{\circ}\)C scenarios exhibit peak warming this century, consistent with net-zero CO\({}_{2}\) dates well before 2100. The 1.5\({}^{\circ}\)C low overshoot ensemble has a peak warming of 1.6\({}^{\circ}\)C, consistent with the IPCC C1 definition of allowing for a small, temporary overshoot of 1.5\({}^{\circ}\)C. Indeed, it is difficult to avoid overshooting 1.5\({}^{\circ}\)C from today's starting level of warming, even under very rapid emissions phase-out scenarios [41].
### Effective radiative forcing
The total median ERF (fig. 2c) in 2100 is 5.2 W m\({}^{-2}\) in the "optimal" scenario, 2.7 W m\({}^{-2}\) in the 2\({}^{\circ}\)C scenario and 1.9 W m\({}^{-2}\) in the 1.5\({}^{\circ}\)C scenario. Non-CO\({}_{2}\) forcing pathways were provided from SSP2-4.5, SSP1-2.6 and SSP1-1.9 respectively, though the total ERF is dominated by the CO\({}_{2}\) component. In the 2\({}^{\circ}\)C and 1.5\({}^{\circ}\)C scenarios, the median ERF in 2100 is very similar to the non-CO\({}_{2}\) scenario nameplate forcing in 2100. SSP1-2.6 and SSP1-1.9 were designed to be "well-below 2\({}^{\circ}\)C"- and 1.5\({}^{\circ}\)C-consistent scenarios respectively and our ERF results are therefore consistent with the SSP scenario framework [4].
### Social cost of carbon
The SCC shows a wide uncertainty range for each scenario, with the spread increasing for stronger mitigation (fig. 2d). The 5-95% uncertainty range is approximately a factor of three (15-44 $ (t CO\({}_{2}\))\({}^{-1}\)), four (237-934 $ (t CO\({}_{2}\))\({}^{-1}\)) and five (821-4434 $ (t CO\({}_{2}\))\({}^{-1}\)) for the "optimal", 2\({}^{\circ}\)C and 1.5\({}^{\circ}\)C cases respectively (values are reported in 2020 US dollars).
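For intuition, one standard numerical recipe for the SCC (a sketch under stated assumptions, not necessarily our exact implementation) perturbs base-year emissions by a small pulse and discounts the resulting consumption losses; `run_model` is a hypothetical stand-in for a full DICE-FaIR evaluation returning a consumption path.

```python
def scc_from_pulse(run_model, discount_factors, pulse_gtco2=1.0):
    """NPV of consumption losses per ton of CO2 from a base-year emissions pulse."""
    base = run_model(extra_emissions=0.0)            # consumption path [$ per period]
    pulsed = run_model(extra_emissions=pulse_gtco2)  # same path with a year-0 pulse
    npv = sum(df * (c0 - c1) for df, c0, c1 in zip(discount_factors, base, pulsed))
    return npv / (pulse_gtco2 * 1e9)                 # dollars per t CO2
```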
\begin{table}
\begin{tabular}{l r r r} Variable & Nordhaus “optimal” & Well below 2\({}^{\circ}\)C & 1.5\({}^{\circ}\)C low overshoot \\ \hline CO\({}_{2}\) emissions 2050 (Gt CO\({}_{2}\) yr\({}^{-1}\)) & 45 (39–49) & 15 (2–24) & 2 (–14 to +12) \\ CO\({}_{2}\) emissions 2100 (Gt CO\({}_{2}\) yr\({}^{-1}\)) & 25 (5–41) & –19 (–23 to –5) & –23 (–23 to –13) \\ Net zero CO\({}_{2}\) year & 2129 (2105–2152) & 2077 (2053–2094) & 2054 (2040–2079) \\ Social cost of carbon 2023 (2020\$ (t CO\({}_{2}\))\({}^{-1}\)) & 26 (15–44) & 439 (237–934) & 1759 (821–4434) \\ Peak warming (\({}^{\circ}\)C relative to 1850–1900) & 3.1 (2.7–3.7) & 1.8 (1.5–2.2) & 1.6 (1.3–2.1) \\ Warming 2100 (\({}^{\circ}\)C relative to 1850–1900) & 2.9 (2.4–3.6) & 1.7 (1.5–2.0) & 1.4 (1.2–1.7) \\ Effective radiative forcing 2100 (W m\({}^{-2}\)) & 5.2 (4.4–5.9) & 2.7 (1.9–3.3) & 1.9 (1.4–2.6) \\ ECS/SCC correlation coefficient & .51 & .74 & .74 \\ ECS/2050 CO\({}_{2}\) emissions correlation coefficient & –.48 & –.72 & –.76 \\ 2014 aerosol forcing/SCC correlation coefficient & –.64 & –.60 & –.59 \\ 2014 aerosol forcing/2050 CO\({}_{2}\) emissions correlation coefficient & .61 & .59 & .56 \\ Near-term discount rate (\%) & 3.1 (3.1–3.2) & 1.4 (1.2–1.6) & 0.6 (0.2–0.8) \\ \hline \end{tabular}
\end{table}
Table 1: Key results from the three scenarios. All correlations are significant at the 1% level.
Figure 2: Emissions, climate and economic projections for three scenarios using DICE-FaIR. (a) CO\({}_{2}\) emissions from energy and industrial processes for the “optimal” (blue), 2\({}^{\circ}\)C (pink) and 1.5\({}^{\circ}\)C (yellow) scenarios. (b) Temperature projections. (c) Total effective radiative forcing projections. (d) Histogram of year-2023 SCC (in 2020\$) on a log-log scale. In (a–c), light shading shows the 5–95% range, darker shading shows the 16–84% range and solid lines show ensemble medians.
### Relationships between climate sensitivity, aerosol radiative forcing and social cost of carbon
There is a strong positive correlation between SCC and ECS [6], particularly in 1.5\({}^{\circ}\)C and 2\({}^{\circ}\)C mitigation scenarios (fig. 3a). This follows from the fact that if climate sensitivity is high, emissions need to be abated more aggressively to maintain a similar warming level (and similar level of associated climate damages) compared to a case where climate sensitivity is low. Stronger abatement necessitates a higher social cost of carbon. This also confirms that reducing climate sensitivity uncertainty can lead to better informed estimates of the social cost of carbon and net present benefits [42].
The negative correlation between ECS and net CO\({}_{2}\) emissions in 2050 is shown in fig. 3b, showing that stronger emissions abatement is required if climate sensitivity is high as a corollary of the discussion above. In 2050, the maximal level of mitigation (net emissions of -14 GtCO\({}_{2}\) yr\({}^{-1}\)) is reached in several of the 1.5\({}^{\circ}\)C ensemble members. These tend to be clustered towards higher values of ECS, though moderate ECS between 3 and 4\({}^{\circ}\)C could still require very high levels of abatement.
Alongside climate sensitivity, present-day aerosol ERF is a strong predictor of 21st century warming [43, 44]. In fig. 3c there is a negative correlation between aerosol ERF in 2014 and social cost of carbon, and in fig. 3d a positive correlation between aerosol ERF and 2050 CO\({}_{2}\) emissions. These are the opposite signs to the correlations related to ECS in fig. 3a-b, which is due to ECS and aerosol ERF being negatively correlated in observationally consistent climate simulations [17]. A strong negative aerosol forcing is associated with a sensitive climate, as historical greenhouse gas warming has been offset by cooling aerosols. Aerosol forcing may be easier to constrain than ECS, and this indicates there are also net present economic benefits to reducing uncertainty in aerosol forcing [44].
## 4 Discussion and conclusions
We show that the optimal CO\({}_{2}\) emissions pathways and social cost of carbon are sensitive to physical climate uncertainty, including ECS and present-day aerosol forcing. Due to climate uncertainty alone, a range of CO\({}_{2}\) emissions pathways could be consistent with a 1.5\({}^{\circ}\)C future, from requiring a high level of net negative emissions to allowing a substantial level of residual positive emissions. However, there are few plausible climate states forgiving enough to allow achieving Paris-compliant climate goals (well-below 2\({}^{\circ}\)C or 1.5\({}^{\circ}\)C) without net negative emissions in the second half of the century, evidenced by the emissions in the 95th percentile of the 2\({}^{\circ}\)C scenario being below zero in 2100 (fig. 2a). Net negative emissions in 2100 are at -23 Gt CO\({}_{2}\) yr\({}^{-1}\) in more than half of the 1.5\({}^{\circ}\)C ensemble, this being the maximum abatement of 120% of gross emissions assumed in DICE. We note that this level of net negative emissions may not be achievable in reality due to feasibility constraints [28, 45].
There is a strong positive correlation between SCC and ECS, and negative correlation between aerosol forcing and ECS, where high climate sensitivity or strong aerosol forcing leads to aggressive abatement being socially optimal, and hence leads to a higher SCC. Owing to this, there is a relationship between climate sensitivity (or aerosol forcing) and emissions which can be contextualised as a climate-abatement feedback. This feedback is straightforward to demonstrate in DICE but is missing from PB-IAMs, at least when being used to construct emissions scenarios for IPCC [3] and policymaking.
In PB-IAMs, there exists the opportunity to consider the processes under which climate change causes economic losses (or benefits). Climate change may lead to impacts on energy generation [46], heating and cooling demand [47], labour productivity, agriculture, bioenergy, and sea-level rise [5], in addition to remedial costs resulting from climate catastrophes that will likely increase in severity and frequency [48]. While in some cases difficult, incorporation of these effects into PB-IAMs will lead to more realistic emissions scenarios, particularly in high emissions pathways where high levels of warming increases climate damages, reduces GDP and consumption, and hence is a negative feedback onto emissions [10].
Our "optimal" scenario has a lower median SCC at $26 than DICE-2016R which is $31 in 2015$ ($34 in 2020$). This is despite the lower effective discount rate in our study (3.1% versus DICE-2016R's 4.25%), driven by lower near-term per-capita consumption growth rates. An updating and recalibration of the economic assumptions used in DICE partly accounts for the differences, particularly our lower future population projections compared to DICE-2016R (section 2.1). The social discount rates required to construct our scenarios are significantly lower than those used in the literature for mitigation scenarios. Our 2\({}^{\circ}\)C
Figure 3: Relationship between parameters. (a) ECS versus social cost of carbon; (b) ECS versus CO\({}_{2}\) emissions in 2050; (c) 2014 aerosol ERF versus social cost of carbon; (d) 2014 aerosol ERF versus CO\({}_{2}\) emissions in 2050.
scenario uses the same discount rate, by coincidence, as Stern's assessment of the costs of climate change [49]. We construct our scenarios by varying the discount rate parameters through a standard neoclassical Ramsey-like model [40]. As the real discount rate relies on the growth in consumption, and consumption is affected both by investment diverted towards emissions abatement and by climate damages, the near-term discount rate is affected by climate uncertainty in our scenarios and is not a single value across all ensemble members (table 1). Our analysis shows that meeting 1.5\({}^{\circ}\)C with low or no overshoot would require a very high carbon price, with a median estimate of \$1759 (t CO\({}_{2}\))\({}^{-1}\) and a 95th percentile of \$4434 (t CO\({}_{2}\))\({}^{-1}\).
The social discount rate is one of the most contested and controversial parameters in climate economics [19]. Nordhaus [24] suggests the discount rate should be a continuation of the real risk-free interest rate in the recent past, and opts for a discount rate in DICE-2016R of 4.25%. Stern [49] argues that the discount rate is a subjective valuation of the welfare of future generations compared to the present, and is a normative choice, putting forward an ethical basis for lower discount rates [50]. Our use of the discount rate as a control dial on the acceptable level of future warming puts us more in the "normative choice" camp of Stern. Regardless of viewpoint, the fact that three very different scenarios are achievable by modifying the discount rate confirms that discounting is one of the most influential parameters controlling emissions pathways and social cost of carbon [6, 7, 51].
In every ensemble member, a cost-benefit optimal emissions pathway is constructed, with the assumption that in each of these 1001 different "worlds" the social planner knows the state of the climate system in advance. It is likely that as climate change unfolds over the coming decades, uncertainty in emergent parameters in the climate system such as the ECS will reduce; we will simply have more observational evidence to draw upon [42]. This reduction in uncertainty or updating of knowledge over time would be a useful future analysis. Another avenue of future study is the relative contributions of socioeconomic (e.g. growth in population, carbon intensity, total factor productivity, discount rate) and climate uncertainties to the total variation in social cost of carbon and emissions pathways, including their time dependence. Although we include their forcing contributions and uncertainties and report on the dependency of SCC on aerosol ERF, non-CO\({}_{2}\) emissions are not calculated endogenously. Doing so from a process perspective would require modelling of cost-abatement curves in several sectors and substantially increase the complexity of the analysis, but relationships between key non-CO\({}_{2}\) forcers and fossil CO\({}_{2}\) could be sought from a large database of PB-IAM scenarios [52, 53] at a relatively low computational cost, as we do for land-use CO\({}_{2}\) (eq. (3)). Notwithstanding its simplicity, this study highlights the importance of incorporating climate uncertainty into IAM-derived emissions scenarios.
## Acknowledgments
CJS was supported by a NERC-IIASA Collaborative Research Fellowship (NE/T009381/1) and Horizon Europe project WorldTrans (101081661). AAK was supported by the Engineering and Physical Sciences Research Council, United Kingdom, grant/award no. EP/P022820/1. PY was supported by the Climate Compatible Growth programme, which is funded by UK aid from the UK government. The views expressed herein do not necessarily reflect the UK government's official policies. DF was supported by the Swiss National Science Foundation (SNF) under project ID 'Can Economic Policy Mitigate Climate-Change?'. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising. The authors declare no conflicts of interest.
## Data and code availability
Code required to reproduce the results, data and figures is available from [https://doi.org/10.5281/zenodo.7866715](https://doi.org/10.5281/zenodo.7866715) and includes all data output. v1.0.2 of the probabilistic FaIR calibration is available from [https://doi.org/10.5281/zenodo.7556734](https://doi.org/10.5281/zenodo.7556734) [34]. FaIR v2.1 is available from the Python Package Index and [https://doi.org/10.5281/zenodo.7459702](https://doi.org/10.5281/zenodo.7459702). |
2310.15127 | Open-Ended Instructable Embodied Agents with Memory-Augmented Large
Language Models | Pre-trained and frozen large language models (LLMs) can effectively map
simple scene rearrangement instructions to programs over a robot's visuomotor
functions through appropriate few-shot example prompting. To parse open-domain
natural language and adapt to a user's idiosyncratic procedures, not known
during prompt engineering time, fixed prompts fall short. In this paper, we
introduce HELPER, an embodied agent equipped with an external memory of
language-program pairs that parses free-form human-robot dialogue into action
programs through retrieval-augmented LLM prompting: relevant memories are
retrieved based on the current dialogue, instruction, correction, or VLM
description, and used as in-context prompt examples for LLM querying. The
memory is expanded during deployment to include pairs of user's language and
action plans, to assist future inferences and personalize them to the user's
language and routines. HELPER sets a new state-of-the-art in the TEACh
benchmark in both Execution from Dialog History (EDH) and Trajectory from
Dialogue (TfD), with a 1.7x improvement over the previous state-of-the-art for
TfD. Our models, code, and video results can be found in our project's website:
https://helper-agent-llm.github.io. | Gabriel Sarch, Yue Wu, Michael J. Tarr, Katerina Fragkiadaki | 2023-10-23T17:31:55Z | http://arxiv.org/abs/2310.15127v2 | # Open-Ended Instructable Embodied Agents with
###### Abstract
Pre-trained and frozen large language models (LLMs) can effectively map simple scene rearrangement instructions to programs over a robot's visuomotor functions through appropriate few-shot example prompting. To parse open-domain natural language and adapt to a user's idiosyncratic procedures, not known during prompt engineering time, fixed prompts fall short. In this paper, we introduce HELPER, an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting: relevant memories are retrieved based on the current dialogue, instruction, correction, or VLM description, and used as in-context prompt examples for LLM querying. The memory is expanded during deployment to include pairs of user's language and action plans, to assist future inferences and personalize them to the user's language and routines. HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution from Dialog History (EDH) and Trajectory from Dialogue (TfD), with a 1.7x improvement over the previous state-of-the-art for TfD. Our models, code, and video results can be found in our project's website: helper-agent-llm.github.io.
## 1 Introduction
Parsing free-form human instructions and human-robot dialogue into task plans that a robot can execute is challenging due to the open-endedness of environments and procedures to accomplish, and to the diversity and complexity of language humans use to communicate their desires. Human language often contains long-term references, questions, errors, omissions, or descriptions of routines specific to a particular user (Tellex et al., 2011; Liang, 2016; Klein and Manning, 2003). Instructions need to be interpreted in the environmental context in which they are issued, and plans need to adapt in a closed loop to execution failures. Large Language Models (LLMs) trained on Internet-scale text can parse language instructions to task plans with appropriate plan-like or code-like prompts, without any finetuning of the language model, as shown in recent works (Ahn et al., 2022; Liang et al., 2022; Zeng et al., 2022; Huang et al., 2022; Singh et al., 2022). The state of the environment is provided as a list of objects and their spatial coordinates, or as a free-form text description from a vision-language model (Liang et al., 2022; Liu et al., 2023; Wu et al., 2023; Ahn et al., 2022). Using LLMs for task planning requires engineering a prompt that includes a description of the task for the LLM to perform, a robot API with function documentation and expressive function names, environment and task instruction inputs, and a set of in-context examples for inputs and outputs for the task (Liang et al., 2022). These methods are not trained in the domain of interest; rather, they are prompt-engineered with the domain at hand in mind.
How can we extend LLM-prompting for semantic parsing and task planning to open-domain, free-form instructions, corrections, human-robot dialogue, and users' idiosyncratic routines, not known at prompt engineering time? The prompts used for the domain of tabletop rearrangement are already approaching the maximum context window of widely used LLMs (Singh et al., 2022; Liang et al., 2022). Even as context window size grows, more prompt examples result in larger attention operations and cause an increase in both inference time and resource usage.
To this end, we introduce HELPER (Human-instructable Embodied Language Parsing via Evolving Routines), a model that uses retrieval-augmented situated prompting of LLMs to parse free-form dialogue, instructions, and corrections from humans and vision-language models to programs over a set of parameterized visuomotor routines. HELPER is equipped with an external
non-parametric key-value memory of language-program pairs. HELPER uses its memory to retrieve relevant in-context language and action program examples, and generates prompts tailored to the current language input. HELPER expands its memory with successful executions of user specific procedures; it then recalls them and adapts them in future interactions with the user. HELPER uses pre-trained vision-language models (VLMs) to diagnose plan failures in language format, and uses these to retrieve similar failure cases with solutions from its memory to seed the prompt. To execute a program predicted by the LLM, HELPER combines successful practices of previous home embodied agents, such as semantic and occupancy map building (Chaplot et al., 2020; Blukis et al., 2022; Min et al., 2021), LLM-based common sense object search (Inoue and Ohashi, 2022), object detection and tracking with off-the-shelf detectors (Chaplot et al., 2020), object attribute detection with VLMs (Zhang et al., 2022), and verification of action preconditions during execution.
We test HELPER on the TEACh benchmark (Padmakumar et al., 2021), which evaluates agents in their ability to complete a variety of long-horizon household tasks from RGB input given natural language dialogue between a commander (the instruction-giving user) and a follower (the instruction-seeking user). We achieve a new state-of-the-art in the TEACh Execution from Dialog History and Trajectory-from-Dialogue settings, improving task success by 1.7x and goal-condition success by 2.1x compared to prior work in TfD. By further soliciting and incorporating user feedback, HELPER attains an additional 1.3x boost in task success. Our work is inspired by works in the language domain (Perez et al., 2021; Schick and Schutze, 2020; Gao et al., 2020; Liu et al., 2021) that retrieve in-context prompt examples based on the input language query for NLP tasks. HELPER extends this capability to the domain of instructable embodied agents, and demonstrates the potential of memory-augmented LLMs for semantic parsing of open-ended free-form instructive language into an expandable library of programs.
## 2 Related Work
**Instructable Embodied Agents** Significant strides have been made by training large neural networks to jointly map instructions and their sensory
Figure 1: **Open-ended instructable agents with retrieval-augmented LLMs. We equip LLMs with an external memory of language and program pairs to retrieve in-context examples for prompts during LLM querying for task plans. Our model takes as input instructions, dialogue segments, corrections and VLM environment descriptions, retrieves relevant memories to use as in-context examples, and prompts LLMs to predict task plans and plan adjustments. Our agent executes the predicted plans from visual input using occupancy and semantic map building, 3D object detection and state tracking, and active exploration using guidance from LLMs’ common sense to locate objects not present in the maps. Successful programs are added to the memory paired with their language context, allowing for personalized subsequent interactions.**
contexts to agent actions or macro-actions using imitation learning (Anderson et al., 2018; Ku et al., 2020; Anderson et al., 2018; Savva et al., 2019; Gervet et al., 2022; Shridhar et al., 2020; Cao et al.; Suglia et al., 2021; Fan et al., 2018; Yu et al., 2020; Brohan et al., 2022; Stone et al., 2023; Yu et al., 2023). Existing approaches differ--among others--in the way the state of the environment is communicated to the model. Many methods map RGB image tokens and language inputs directly to actions or macro-actions (Pashevich et al., 2021; Wijmans et al., 2020; Suglia et al., 2021; Krantz et al., 2020). Other methods map language instructions and linguistic descriptions of the environment's state in terms of object lists or objects' spatial coordinates to macro-actions, foregoing visual feature description of the scene, in an attempt to generalize better (Liang et al., 2022; Singh et al., 2022; Chaplot et al., 2020; Min et al., 2021; Liu et al., 2022; Murray and Cakmak, 2022; Liu et al., 2022; Inoue and Ohashi, 2022; Song et al., 2022; Zheng et al., 2022; Zhang et al., 2022; Huang et al., 2022, 2023; Ahn et al., 2022; Zeng et al., 2022; Huang et al., 2022). Some of these methods fine-tune language models to map language input to macro-actions, while others prompt frozen LLMs to predict action programs, relying on the emergent in-context learning property of LLMs to emulate novel tasks at test time. Some methods use natural language as the output format of the LLM (Wu et al., 2023; Song et al., 2022; Blukis et al., 2022; Huang et al., 2022), and others use code format (Singh et al., 2022; Liang et al., 2022; Huang et al., 2023). HELPER prompts frozen LLMs to predict Python programs over visuo-motor functions for parsing dialogue, instructions and corrective human feedback.
The work closest to HELPER is LLM Planner (Song et al., 2022) which uses memory-augmented prompting of pretrained LLMs for instruction following. However, it differs from HELPER in several areas such as plan memory expansion, VLM-guided correction, and usage of LLMs for object search. Furthermore, while Singh et al. (2022) frequently seeks human feedback, HELPER requests feedback only post full task execution and employs Visual-Language Models (VLMs) for error feedback, reducing user interruptions.
Numerous simulation environments exist for evaluating home assistant frameworks, including Habitat (Savva et al., 2019), GibsonWorld (Shen et al., 2021), ThreeDWorld (Gan et al., 2022), and AI2THOR (Kolve et al., 2017). ALFRED (Shridhar et al., 2020) and TEACh (Padmakumar et al., 2021) are benchmarks in the AI2THOR environment (Kolve et al., 2017), measuring agents' competence in household tasks through natural language. Our research focuses on the 'Trajectory from Dialogue' (TfD) evaluation in TEACh, mirroring ALFRED but with greater task and input complexity.
**Prompting LLMs for action prediction and visual reasoning.** Since the introduction of few-shot prompting by Brown et al. (2020), several approaches have improved the prompting ability of LLMs by automatically learning prompts (Lester et al., 2021), chain of thought prompting (Nye et al., 2022; Gao et al., 2022; Wei et al., 2022; Wang et al., 2022; Chen et al., 2022; Yao et al., 2023) and retrieval-augmented LLM prompting (Nakano et al., 2021; Shi et al., 2023; Jiang et al., 2023) for language modeling, question answering, and long-form, multi-hop text generation. HELPER uses memory-augmented prompting by retrieving and integrating similar task plans into the prompt to facilitate language parsing to programs.
LLMs have been used as policies in Minecraft to predict actions (Wang et al., 2023, 2023), to correct errors (Liu et al., 2023), and to understand instruction manuals for game play in some
Figure 2: **HELPER’s architecture.** The model uses memory-augmented LLM prompting for task planning from instructions, corrections and human-robot dialogue, and for re-planning during failures given feedback from a VLM model. The generated program is executed by the Executor module. The Executor builds semantic, occupancy and 3D object maps, tracks object states, verifies action preconditions, and queries LLMs for search locations for objects missing from the maps, using the Locator module.
Atari games (Wu et al., 2023). They have also significantly improved agents in text-based simulated worlds (Yao et al., 2022; Shinn et al., 2023; Wu et al., 2023; Richards, 2023). ViperGPT (Suris et al., 2023) and CodeVQA (Subramanian et al., 2023) use LLM prompting to decompose referential expressions and questions into programs over simpler visual routines. Our work uses LLMs for planning from free-form dialogue and user corrective feedback for home task completion, a domain not addressed in previous works.
## 3 Method
HELPER is an embodied agent designed to map human-robot dialogue, corrections and VLM descriptions to action programs over a fixed API of parameterized navigation and manipulation primitives. Its architecture is outlined in Figure 2. At its heart, it generates plans and plan adjustments by querying LLMs using retrieval of relevant language-program pairs to include as in-context examples in the LLM prompt. The generated programs are then sent to the Executor module, which translates each program step into specific navigation and manipulation actions. Before executing each step in the program, the Executor verifies whether the necessary preconditions for an action, such as the robot already holding an object, are met. If not, the plan is adjusted according to the current environmental and agent state. Should a step involve an undetected object, the Executor calls on the Locator module to efficiently search for the required object by utilizing previous user instructions and LLMs' common sense knowledge. If any action fails during execution, a VLM predicts the reason for the failure from pixel input and feeds this into the Planner for generating plan adjustments.
### Planner: Retrieval-Augmented LLM Planning
Given an input \(I\) consisting of a dialogue segment, instruction, or correction, HELPER uses memory-augmented prompting of frozen LLMs to map the input into an executable Python program over a parametrized set of manipulation and navigation primitives \(G\in\{G_{manipulation}\cup G_{navigation}\}\) that the Executor can perform (e.g., goto(X), pickup(X), slice(X),...). Our action API can be found in Section D of the Appendix.
HELPER maintains a key-value memory of language-program pairs, as shown in Figure 3A. Each language key is mapped to a 1D vector using an LLM's frozen language encoder. Given current context \(I\), the model retrieves the top-\(K\) keys, i.e., the keys with the smallest \(L_{2}\) distance to the embedding of the input context \(I\), and adds the corresponding language-program pairs to the LLM prompt as in-context examples for parsing the current input \(I\).
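As a concrete illustration, the following is a minimal sketch of such a key-value plan memory. The class and function names are our own, and `embed_fn` stands in for a frozen text encoder (e.g., text-embedding-ada-002 mentioned in the implementation details); the same structure also supports the memory expansion described in Section 3.1.1.

```python
import numpy as np

class PlanMemory:
    """Minimal key-value memory of (language, program) pairs.

    `embed_fn` maps a string to a 1D vector, e.g., a frozen LLM text
    encoder. All names here are illustrative, not HELPER's actual code.
    """

    def __init__(self, embed_fn):
        self.embed_fn = embed_fn
        self.keys = []      # list of 1D np.ndarray embeddings
        self.values = []    # list of (language, program) pairs

    def add(self, language, program):
        # Memory expansion (Sec. 3.1.1): store a successful plan.
        self.keys.append(np.asarray(self.embed_fn(language)))
        self.values.append((language, program))

    def retrieve(self, context, k=3):
        # Return the k pairs whose keys have the smallest L2 distance
        # to the embedded input context (the paper uses K=3 for TEACh).
        q = np.asarray(self.embed_fn(context))
        dists = [np.linalg.norm(key - q) for key in self.keys]
        order = np.argsort(dists)[:k]
        return [self.values[i] for i in order]
```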
Figure 3B illustrates the prompt format for the Planner. It includes the API specifying the primitives \(G\) parameterized as Python functions, the retrieved examples, and the language input \(I\). The LLM is tasked to generate a Python program over parameterized primitives \(G\). Examples of our prompts and LLM responses can be found in Section F of the Appendix.
#### 3.1.1 Memory Expansion
The key-value memory of HELPER can be continually expanded with successful executions of instructions to adapt to a user's specific routines, as shown in Figure 1. An additional key-value pair is added with the language instruction paired with the execution plan if the user indicates the task was successful. Then, HELPER can recall this plan and adapt it in subsequent interactions with the user. For example, if a user instructs HELPER one day to _"Perform the Mary cleaning. This involves cleaning two plates and two cups in the sink"_, the user need only say _"Do the Mary cleaning"_ in future interactions, and HELPER will retrieve the previous plan, include it in the examples section of the prompt, and query the LLM to adapt it accordingly. The personalization capabilities of HELPER are evaluated in Section 4.4.
#### 3.1.2 Incorporating user feedback
A user's feedback can improve a robot's performance, but requesting feedback frequently can deteriorate the overall user experience. Thus, we enable HELPER to elicit user feedback only when it has completed execution of the program. Specifically, it asks _"Is the task completed to your satisfaction? Did I miss anything?"_ once it believes it has completed the task. The user responds either that the task has been completed (at which point HELPER stops acting) or points out problems and corrections in free-form natural language, such as _"You failed to cook a slice of potato. The potato slice needs to be cooked."_. HELPER uses the language feedback to re-plan using the PLANNER. We evaluate HELPER's ability to seek and utilize user feedback in Section 4.3.
#### 3.1.3 Visually-Grounded Plan Correction using Vision-Language Models
Generated programs may fail for various reasons, such as when a step in the plan is missed or an object-of-interest is occluded. When the program fails, HELPER uses a vision-language model (VLM) pre-trained on web-scale data, specifically the ALIGN model Jia et al. (2021), to match the current visual observation with a pre-defined list of textual failure cases, such as _an object is blocking you from interacting with the selected object_, as illustrated in Figure 4. The best match is taken to be the failure feedback \(F\). The PLANNER module then retrieves the top-\(K\) most relevant error correction examples, each containing input dialogue, failure feedback, and the corresponding corrective program, from memory based on encodings of input \(I\) and failure feedback \(F\) from the VLM. The LLM is prompted with the failed program step, the predicted failure description \(F\) from the VLM, the in-context examples, and the original dialogue segment \(I\). The LLM outputs a self-reflection Shinn et al. (2023) as to why the failure occurred, and generates a program over manipulation and navigation primitives \(G\) and an additional set of corrective primitives \(G_{corrective}\) (e.g., step-back(), move-to-an-alternate-viewpoint(),...). This program is sent to the Executor for execution.
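A minimal sketch of this image-to-failure matching step is shown below, using CLIP from HuggingFace transformers as an open stand-in for ALIGN; the failure strings beyond the one quoted above are illustrative, not HELPER's actual pre-defined list.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Candidate failure descriptions (illustrative; HELPER uses a
# pre-defined list of textual failure cases).
FAILURES = [
    "an object is blocking you from interacting with the selected object",
    "the target object is too far away",
    "the agent is already holding another object",
]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def failure_feedback(image: Image.Image) -> str:
    # Score the current egocentric frame against each textual failure
    # case and return the best match as the feedback string F.
    inputs = processor(text=FAILURES, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, len(FAILURES))
    return FAILURES[logits.argmax(dim=-1).item()]
```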
### Executor: Scene Perception, Pre-Condition Checks, Object Search and Action Execution
The Executor module executes the predicted Python programs in the environment, converting the code into low-level manipulation and navigation actions, as shown in Figure 2. At each time step, the Executor receives an RGB image and obtains an estimated depth map via monocular depth estimation Bhat et al. (2023) and object masks via an off-the-shelf object detector Dong et al. (2021).
#### 3.2.1 Scene and object state perception
Using the depth maps, object masks, and approximate egomotion of the agent at each time step, the Executor maintains a 3D occupancy map and object memory of the home environment to navigate around obstacles and keep track of previously seen objects, similar to previous works Sarch et al. (2022). Objects are detected in every frame and are merged into object instances based on closeness of the predicted 3D centroids. Each object instance is initialized with a set of object state
Figure 4: Inference of a failure feedback description by matching potential failure language descriptions with the current image using a vision-language model (VLM).
Figure 3: HELPER parses dialogue segments, instructions, and corrections into visuomotor programs using retrieval-augmented LLM prompting. **A.** Illustration of the encoding and memory retrieval process. **B.** Prompt format and output of the PLANNER.
attributes (cooked, sliced, dirty,...) by matching the object crop against each attribute with the pre-trained ALIGN model Jia et al. (2021). Object attribute states are updated when an object is acted upon via a manipulation action.
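A minimal sketch of such centroid-based instance merging follows; the data layout and the 0.25 m merge radius are our own assumptions, not values reported in the paper.

```python
import numpy as np

def merge_detection(instances, centroid, label, threshold=0.25):
    """Merge a per-frame 3D detection into the object memory.

    `instances` is a list of dicts with keys 'centroid', 'label',
    'count', 'states'; `threshold` (meters) is an assumed merge radius.
    """
    for inst in instances:
        if inst["label"] == label and \
           np.linalg.norm(inst["centroid"] - centroid) < threshold:
            # Running average keeps the centroid estimate stable.
            n = inst["count"]
            inst["centroid"] = (inst["centroid"] * n + centroid) / (n + 1)
            inst["count"] = n + 1
            return inst
    # No close instance of the same class: start a new one, with
    # attribute states to be filled in by the VLM matching step.
    inst = {"centroid": np.asarray(centroid), "label": label,
            "count": 1, "states": {"cooked": False, "sliced": False}}
    instances.append(inst)
    return inst
```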
#### 3.2.2 Manipulation and navigation pre-condition checks
The Executor module verifies the pre-conditions of an action before the action is taken, to ensure the action is likely to succeed. In our case, these constraints are predefined for each action (for example, the agent must first be holding a knife to slice an object). If any pre-conditions are not satisfied, the Executor adjusts the plan accordingly. In more open-ended action interfaces, an LLM's common sense knowledge could be used to infer the pre-conditions for an action, rather than pre-defining them.
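A minimal sketch of a pre-defined precondition table of this kind; the specific predicates are illustrative, not HELPER's actual constraint set.

```python
# Hypothetical precondition table: action name -> predicate on agent state.
PRECONDITIONS = {
    "slice":  lambda state: state["holding"] == "Knife",
    "pickup": lambda state: state["holding"] is None,
    "pour":   lambda state: state["holding"] is not None,
}

def check_preconditions(action: str, state: dict) -> bool:
    """Return True if `action` may execute in the current `state`."""
    check = PRECONDITIONS.get(action)
    return check(state) if check is not None else True

# Example: slicing fails unless a knife is held, triggering a plan
# adjustment (e.g., inserting a pickup(Knife) step) before execution.
assert not check_preconditions("slice", {"holding": None})
assert check_preconditions("slice", {"holding": "Knife"})
```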
#### 3.2.3 Locator: LLM-based common sense object search
When HELPER needs to find an object that has not been detected before, it calls on the Locator module. The Locator prompts an LLM to suggest potential object search locations for the Executor to search nearby, e.g., "search near the sink" or "search in the cupboard". The Locator prompt takes in the language \(I\) (which may reference the object location, e.g., "take the mug from the cupboard") and queries the LLM to generate proposed locations by essentially parsing the instruction as well as using its common sense. Based on these predictions, HELPER goes to the suggested locations if they exist in the semantic map (e.g., to the cupboard) and searches for the object-of-interest. The Locator's prompt can be found in Section D of the Appendix.
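A sketch of what such a Locator query might look like with the openai-python interface current at the time; the prompt wording and function name are illustrative, not HELPER's actual Locator prompt (which is in Section D of the Appendix).

```python
import openai  # openai-python v0.x interface, as used at the time

def propose_search_locations(instruction, target):
    """Ask the LLM for likely receptacles to search for `target`.

    Illustrative sketch: the real prompt also parses the user
    instruction for explicitly mentioned locations.
    """
    resp = openai.ChatCompletion.create(
        model="gpt-4-0613",
        messages=[{
            "role": "user",
            "content": (
                f"Instruction: {instruction}\n"
                f"List, one per line, the most likely household "
                f"locations to search for the {target}."
            ),
        }],
    )
    text = resp["choices"][0]["message"]["content"]
    return [line.strip("- ").strip()
            for line in text.splitlines() if line.strip()]
```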
**Implementation details.** We use OpenAI's gpt-4-0613 (gpt, 2023) API, except where mentioned otherwise. We use the text-embedding-ada-002 (ada, 2022) API to obtain text embeddings. Furthermore, we use the SOLQ object detector Dong et al. (2021), which is pretrained on MSCOCO Lin et al. (2014) and fine-tuned on the training rooms of TEACh. For monocular depth estimation, we use the ZoeDepth network Bhat et al. (2023), pretrained on the NYU indoor dataset Nathan Silberman and Fergus (2012) and subsequently fine-tuned on the training rooms of TEACh. In the TEACh evaluations, we use \(K\)=3 for retrieval.
## 4 Experiments
We test HELPER in the TEACh benchmark Padmakumar et al. (2021). Our experiments aim to answer the following questions:
1. How does HELPER compare to the SOTA on task planning and execution from free-form dialogue?
2. How much do different components of HELPER contribute to performance?
3. How much does eliciting human feedback help task completion?
4. How effectively does HELPER adapt to a user's specific procedures?
### Evaluation on the TEACh dataset
**Dataset.** The dataset comprises over 3,000 human-human, interactive dialogues, geared towards completing household tasks within the AI2-THOR simulation environment Kolve et al. (2017). We evaluate on the Trajectory from Dialogue (TfD) evaluation variant, where the agent is given a dialogue segment at the start of the episode. The model is then tasked to infer the sequence of actions to execute based on the user's intents in the dialogue segment, ranging from Make Coffee to Prepare Breakfast. We show examples of such dialogues in Figure 3. We also test on the Execution from Dialogue History (EDH) task in TEACh, where the TfD episodes are partitioned into "sessions". The agent is spawned at the start of one of the sessions and must predict the actions to reach the next session given the dialogue and action history of the previous sessions. The dataset is split into training and validation sets. The validation set is divided into 'seen' and 'unseen' rooms based on their presence in the training set. Validation 'seen' has the same room instances but different object locations and initial states from the episodes in the training set. At each time step, the agent obtains an egocentric RGB image and must choose an action from a specified set to transition to the next step, such as pickup(X), turn left(), etc. Please see Appendix Section G for more details on the simulation environment.
**Evaluation metrics.** Following evaluation practices for the TEACh benchmark, we use the following two metrics: **1. Task success rate (SR),** which refers to the fraction of task sessions in which the agent successfully fulfills all goal conditions. **2. Goal condition success rate (GC),** which quantifies the proportion of achieved goal conditions across all sessions. Both of these metrics have corresponding path length weighted (PLW) variants. In these versions, the agent incurs penalties for executing a sequence of actions that surpasses the length of the reference path annotated by human experts.
**Baselines.** We consider the following baselines: **1. Episodic Transformer (E.T.)** Pashevich et al. (2021) is an end-to-end multimodal transformer that encodes language inputs and a history of visual observations to predict actions, trained with imitation learning from human demonstrations.
**2. Jarvis** Zheng et al. (2022) trains an LLM on the TEACh dialogue to generate high-level subgoals that mimic those performed by the human demonstrator. Jarvis uses a semantic map and the Episodic Transformer for object search.
**3. FILM** Min et al. (2021, 2022) fine-tunes an LLM to produce parametrized plan templates. Similar to Jarvis, FILM uses a semantic map for carrying out subgoals and a semantic policy for object search.
**4. DANLI** Zhang et al. (2022) fine-tunes an LLM to predict high-level subgoals, and uses symbolic planning over an object state and spatial map to create an execution plan. DANLI uses an object search module and manually-defined error correction.
HELPER differs from the baselines in its use of memory-augmented context-dependent prompting of pretrained LLMs and pretrained visual-language models for planning, failure diagnosis and recovery, and object search. We provide a more in-depth comparison of HELPER to previous work in Table 1.
**Evaluation.** We show quantitative results for HELPER and the baselines on the TEACh Trajectory from Dialogue (TfD) and Execution from Dialogue History (EDH) validation splits in Table 2.
**On the TfD validation unseen split, HELPER achieves a 13.73% task success rate and 14.17% goal-condition success rate, a relative improvement of 1.7x and 2.1x, respectively, over DANLI, the prior SOTA in this setting. HELPER additionally sets a new SOTA on the EDH task, achieving a 17.40% task success rate and 25.86% goal-condition success rate on validation unseen.**
### Ablations
We ablate components of HELPER in order to quantify what matters for performance in Table 2 (_Ablations_). We perform all ablations on the TEACh TfD validation unseen split. We draw the following conclusions:
**1. Retrieval-augmented prompting helps** for planning, re-planning and failure recovery. Replacing the memory-augmented prompts with a fixed prompt (w/o Mem Aug; Table 2) led to a relative 18% reduction in success rate.
**2. VLM error correction helps** the agent recover from failures. Removal of the visually-grounded plan correction (w/o Correction; Table 2) led to a relative 6% reduction in success rate.
**3. The pre-condition check and the LLM search help.** Removal of the action pre-condition checks (w/o Pre Check; Table 2) led to a relative 16% reduction in success rate. Replacing the LOCATOR LLM-based search with a random search (w/o LOCATOR; Table 2) led to a relative 12% reduction in success rate.
**4. Larger LLMs perform better.** Using GPT-3.5 (w GPT-3.5; Table 2) exhibits a relative 31% reduction in success rate compared to using GPT-4. Our findings on GPT-4's superior planning abilities align with similar findings from recent studies of Wu et al. (2023); Bubeck et al. (2023); Liu et al. (2023); Wang et al. (2023).
**5. Perception is a bottleneck.** Using GT depth
| Method | No In-Domain LLM | Memory-Augmented LLM | User Personalization | Accepts User Feedback | VLM-Guided Correction | LLM-Guided Search | Pre-Condition Check |
|---|---|---|---|---|---|---|---|
| E.T. (Pashevich et al., 2021) | x | x | x | x | x | x | x |
| JARVIS (Zheng et al., 2022) | x | x | x | x | x | x | x |
| FILM (Min et al., 2021) | x | x | x | x | x | x | x |
| DANLI (Zhang et al., 2022) | x | x | x | x | x | x | ✓ |
| LLM-Planner (Song et al., 2022) | ✓ | ✓ | x | x | x | x | x |
| Code as Policies (Liang et al., 2022) | ✓ | x | x | x | x | x | x |
| HELPER (ours) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

Table 1: Comparison of HELPER to previous work.
(w/ GT depth; Table 2) led to an improvement of 1.15x compared to using estimated depth from RGB. Notable is the 1.77x improvement in path-length weighted success when using GT depth. This is due to lower accuracy at far depths in our depth estimation network, which causes the agent to spend more time mapping the environment and navigating noisy obstacle maps. Using lidar or better map estimation techniques could mitigate this issue.
Using ground truth segmentation masks and depth (w/ GT depth, seg; Table 2) improves task success and goal-conditioned task success by 1.64x and 2.11x, respectively. This shows the limitations of frame-based object detection and late fusion of detection responses over time. 3D scene representations that fuse features earlier across views may significantly improve 3D object detection. Using GT perception (w/ GT percept; Table 2), which includes depth, segmentation, action success, oracle failure feedback, and an increased API failure limit (50), led to 2.20x and 3.56x improvements.
### Eliciting Users' Feedback
We enable HELPER to elicit sparse user feedback by asking _"Is the task completed to your satisfaction? Did I miss anything?"_ once it believes it has completed the task, as explained in Section 3.1.2. The user will then respond with steps missed by HELPER, and HELPER will re-plan based on this feedback. As shown in Table 2 (_User Feedback_), asking for a user's feedback twice improves performance by 1.27x. Previous works do not explore this opportunity of eliciting human feedback, partly due to the difficulty of interpreting free-form language feedback, which our work addresses.
### Personalization
We evaluate HELPER's ability to retrieve user-specific routines, as well as its ability to modify the retrieved routines with one, two, or three modifications, as discussed in Section 3.1.1. For example, for three modifications we might instruct HELPER: "Make me a Dax sandwich with 1 slice of tomato, 2 lettuce leaves, and add a slice of bread".
**Dataset.** The evaluation tests 10 user-specific plans for each modification category in five distinct tasks: Make a Sandwich; Prepare Breakfast; Make a Salad; Place X on Y; and Clean X. The evaluation contains 40 user requests. The complete list of user-specific plans and modification requests can be found in the Appendix, Section C.
**Evaluation.** We report the success rate in Table 3. HELPER generates the correct personalized plan for all but three of the 40 evaluation requests. This showcases HELPER's ability to acquire, retrieve, and adapt plans based on context and previous user interactions.
## 5 Limitations
Our model in its current form has the following limitations:
**1. Simplified failure detection.** The AI2-THOR simulator greatly simplifies action failure detection, which our work and previous works exploit [22, 13]. In a more general setting, continuous progress monitoring
| | TfD Unseen SR | TfD Unseen GC | TfD Seen SR | TfD Seen GC | EDH Unseen SR | EDH Unseen GC | EDH Seen SR | EDH Seen GC |
|---|---|---|---|---|---|---|---|---|
| E.T. | 0.48 (0.12) | 0.35 (0.59) | 1.02 (0.17) | 1.42 (4.82) | 7.8 (0.9) | 9.1 (1.7) | 10.2 (1.7) | 15.7 (4.1) |
| JARVIS | 1.80 (0.30) | 3.10 (1.60) | 1.70 (0.20) | 5.40 (4.50) | 15.80 (2.60) | 16.60 (8.20) | 15.10 (3.30) | 22.60 (8.70) |
| FILM | 2.9 (1.0) | 6.1 (2.5) | 5.5 (2.6) | 5.8 (11.6) | 10.2 (1.0) | 18.3 (2.7) | 14.3 (2.1) | 26.4 (5.6) |
| DANLI | 7.98 (3.20) | 6.79 (6.57) | 4.97 (1.86) | 10.50 (10.27) | 16.98 (7.24) | 23.44 (19.95) | 17.76 (9.28) | 24.93 (22.20) |
| HELPER (ours) | **13.73** (1.61) | **14.17** (4.56) | **12.15** (1.79) | **18.62** (9.28) | **17.40** (2.91) | **25.86** (7.90) | **18.59** (4.00) | **32.09** (9.81) |
| _Ablations_ | | | | | | | | |
| w/o Mem Aug | 11.27 (1.39) | 11.09 (4.00) | | | | | | |
| w/o Pre Check | 11.6 (1.36) | 11.32 (4.15) | | | | | | |
| w/o Correction | 12.9 (1.53) | 12.45 (4.91) | | | | | | |
| w/o Locator | 12.09 (1.29) | 10.89 (3.83) | | | | | | |
| w/ GPT-3.5 | 9.48 (1.21) | 10.05 (3.68) | | | | | | |
| w/ GT depth | 15.85 (2.85) | 14.49 (6.89) | | | | | | |
| w/ GT depth, seg | 22.55 (6.39) | 30.00 (14.56) | | | | | | |
| w/ GT percept | 30.23 (9.12) | 50.46 (20.24) | | | | | | |
| _User Feedback_ | | | | | | | | |
| w/ Feedback 1 | 16.34 (1.67) | 14.70 (4.69) | | | | | | |
| w/ Feedback 2 | 17.48 (1.97) | 14.93 (4.74) | | | | | | |
| w/ GT percept, Feedback 2 | 37.75 (10.96) | 56.77 (19.80) | | | | | | |

Table 2: **Trajectory from Dialogue (TfD) and Execution from Dialogue History (EDH) evaluation on the TEACh validation set.** Path length weighted metrics are included in parentheses. SR = success rate. GC = goal condition success rate.
from pixels would be required for failure detection, which modern VLMs can deliver; we will address this in future work.
**2. 3D perception bottleneck.** HELPER relies on 2D object detectors and depth-based 3D lifting for 3D object localization. We observe a 2x boost in TEACh success rate from using ground truth segmentation in HELPER. In future work, we plan to integrate early 2D features into persistent 3D scene feature representations for more accurate 3D object detection.
**3. Cost of LLM querying.** The GPT-4 API is the most accurate LLM used in HELPER and incurs a significant cost. NLP research in model compression may help decrease these costs, as may fine-tuning smaller models with enough input-output pairs.
**4. Multimodal (vision and language) memory retrieval.** Currently, we use a text bottleneck in our environment state descriptions. Exciting future directions include incorporating visual state into the language model and partially adapting its parameters. A multimodal approach to memory and plan generation would help contextualize planning with the visual state.
Last, to follow human instructions outside of simulation environments, our model would need to interface with closed-loop robot policies instead of abstract manipulation primitives, following previous work (Liang et al., 2022).
## 6 Conclusion
We presented HELPER, an instructable embodied agent that uses memory-augmented prompting of pre-trained LLMs to parse dialogue segments, instructions and corrections to programs over action primitives, which it executes in household environments from visual input. HELPER updates its memory with user-instructed action programs after successful execution, allowing personalized interactions by recalling and adapting them. It sets a new state-of-the-art in the TEACh benchmark. Future research directions include extending the model to include a visual modality by encoding visual context during memory retrieval or as direct input to the LLM. We believe our work contributes towards exciting new capabilities for instructable and conversable systems, for assisting users and personalizing human-robot communication.
## 7 Acknowledgements
This material is based upon work supported by National Science Foundation grants GRF DGE1745016 & DGE2140739 (GS), a DARPA Young Investigator Award, an NSF CAREER award, an AFOSR Young Investigator Award, DARPA Machine Common Sense, and an ONR award N000142312415. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the United States Army, the National Science Foundation, or the United States Air Force.
This research project has benefitted from the Microsoft Accelerate Foundation Models Research (AFMR) grant program through which leading foundation models hosted by Microsoft Azure along with access to Azure credits were provided to conduct the research.
The authors thank William W. Cohen, Ayush Jain, Theophile Gervet, Nikolaos Gkanatsios, and Adam Harley for discussions and useful feedback over the course of this project.
## Ethics Statement
The objective of this research is to construct autonomous agents. Despite the absence of human experimentation, practitioners could potentially implement this technology in human-inclusive environments. Therefore, applications of our research should appropriately address privacy considerations.
All the models developed in this study were trained using AI2-THOR (Kolve et al., 2017). Consequently, there might be an inherent bias towards North American homes. Additionally, we only consider English language inputs in this study.
| | Success |
|---|---|
| Original Plan | 100% |
| One Change | 100% |
| Two Changes | 80% |
| Three Changes | 90% |

Table 3: **Evaluation of HELPER for user personalization.** Reported is the success rate of generating the correct plan for 10 personalized plans, for requests of the original plan without modifications and with one, two, or three modifications. These experiments use the text-davinci-003 model as the prompted LLM. |
2310.14239 | Guidance system for Visually Impaired Persons using Deep Learning and
Optical flow | Visually impaired persons find it difficult to know about their surroundings
while walking on a road. Walking sticks used by them can only give them
information about the obstacles in the stick's proximity. Moreover, it is
mostly effective in static or very slow-paced environments. Hence, this paper
introduces a method to guide them in a busy street. To create such a system it
is very important to know about the approaching object and its direction of
approach. To achieve this objective we created a method in which the image
frame received from the video is divided into three parts i.e. center, left,
and right to know the direction of approach of the approaching object. Object
detection is done using YOLOv3. Lucas Kanade's optical flow estimation method
is used for the optical flow estimation and Depth-net is used for depth
estimation. Using the depth information, object motion trajectory, and object
category information, the model provides necessary information/warning to the
person. This model has been tested in the real world to show its effectiveness. | Shwetang Dubey, Alok Ranjan Sahoo, Pavan Chakraborty | 2023-10-22T09:24:57Z | http://arxiv.org/abs/2310.14239v1 | # Guidance system for Visually Impaired Persons using Deep Learning and Optical flow
###### Abstract
Visually impaired persons find it difficult to know about their surroundings while walking on a road. Walking sticks used by them can only give them information about the obstacles in the stick's proximity. Moreover, it is mostly effective in static or very slow-paced environments. Hence, this paper introduces a method to guide them in a busy street. To create such a system it is very important to know about the approaching object and its direction of approach. To achieve this objective we created a method in which the image frame received from the video is divided into three parts i.e. center, left, and right to know the direction of approach of the approaching object. Object detection is done using YOLOv3. Lucas Kanade's optical flow estimation method is used for the optical flow estimation and Depth-net is used for depth estimation. Using the depth information, object motion trajectory, and object category information, the model provides necessary information/warning to the person. This model has been tested in the real world to show its effectiveness.
YOLOv3, Deep Learning, Object detection, Visually Impaired, Neural network, DepthNet, Optical Flow.
## I Introduction
According to a report published by the WHO on 14 October 2021[21], at least 2.2 billion people have a vision impairment, and around 1 billion have moderate or severe vision impairment or blindness. The number was around 285 million in 2010, of whom 246 million had serious vision loss. This rise in visual impairment necessitates the development of an algorithm that gives real-time suggestions to fully or partially blind persons. Hence, this paper presents a method for guiding visually impaired persons in real time to avoid obstacles.
The real challenge for such a guidance system is its response time. When a person is moving on a road, an object appearing in front of them can only be avoided if the guidance system responds in real time. The main challenge is therefore to build a system that provides not only good object detection and depth estimation but also good time performance, so that obstacle avoidance can be performed in real time.
Here we use the YOLOv3[9] model for object detection, as it produces output much faster than other object detection algorithms like R-CNN, Fast R-CNN, and Mask R-CNN [10][12][11]. Since all YOLO models perform classification and bounding-box regression simultaneously, they work much faster than R-CNN or Fast R-CNN. The YOLOv3 detector is trained on the COCO[18] dataset, which has a total of 80 classes, so we can detect any object belonging to one of these classes. YOLOv3 generates an anchor box around each detected object, which can be further used for depth estimation if we have feeds from both left and right cameras.
In this paper, we present a model that generates voice commands in real time to assist a visually impaired person in moving. We also perform optical flow analysis to capture the motion of objects approaching the camera so that proper instructions can be generated for the blind person. We divide the frame into three parts, i.e., left, right, and center, and generate the instruction according to the position of the approaching object. The main contributions of this paper are the following:
* Used the YOLOv3 object detection algorithm for classification of objects in video frames from both left and right images.
* A method to divide the entire image into three segments, i.e., left, right, and center, to generate instructions properly.
* Created an optical flow-based method to give the trajectory of a moving object for object tracking.
* Performed depth estimation in real time.
* Finally, the Google text-to-speech converter is used to generate speech instructions for blind persons.
## II Related Work
In recent years many methods have been developed for vision-based obstacle detection and avoidance systems, but most of them use technologies like WiFi, RFID, or lasers; the use of cameras for object detection and avoidance is comparatively limited. A vision-based obstacle avoidance system was first introduced by Sainarayanan et al.[3]. In this method, grayscale images are used for detection; background removal is done using a neural network and obstacle pixels are enhanced. Ulrich et al.[24] proposed a method of histogram comparison, in which the color image is first filtered and converted to HSV color space, and then the color histograms of the candidate area and a reference area are matched. Joachim et al.[15] derived a method to detect obstacles using human color vision, with an auto-focused stereo camera used to obtain the depth of obstacles from the person. Rodriguez et al.[22] proposed a model based on a cumulative grid in front of the visually impaired user for obstacle detection and avoidance; it uses stereo vision for obstacle detection and background reduction.
### _Obstacle detection_
Bernabei et al.[4] proposed a method that uses an RGB-D sensor for depth estimation and obstacle detection. A 3D point cloud is generated using a Microsoft Kinect, the volume of the obstacle in front of the person is calculated, and instructions are generated according to that volume. Vlaminck et al.[25] presented a method that applies RANSAC to a 3D point cloud for plane segmentation; the resulting planes are used for ground, wall, and obstacle detection. This method works well but not in real-time situations, as RANSAC takes too much time to process the 3D point cloud; moreover, the paper assumes obstacles are at ground level, which is not true in all cases. Rodriguez et al.[22] published a paper that uses a stereo camera for scene capture, generates a map of the place using visual SLAM, and uses that map for autonomous navigation of the visually impaired person.
### _Feedback system using voice command_
After detecting the obstacle, the second part of the problem is an alert mechanism that can warn the visually impaired person in real time. Describing the location of an obstacle, and how to avoid it, to a blind person is hard, since they cannot perceive the world as sighted people do. Researchers have used multiple ways to describe scenes. Joachim et al.[15] used a text-to-speech engine to convert text into speech, delivered through a speaker to alert the person. The vOICe [13] system gives complete views of the scene through image-to-sound renderings: the image is scanned from left to right, elevation is represented as pitch, and brightness as loudness, so that a sound illusion creates a rough picture of the scene in the person's mind. This system is still in the development stage and requires considerable practice from users, which is why it is usually more suitable for younger people. Sainarayanan et al. [3] segmented the image into left and right halves; a warning voice is generated according to the position of the obstacle in these two segments and sent to the user through headphones.
### _Feedback system using tactile sensors_
Johnson and Higgins [16] created a tactile feedback system in which motors generate vibration signals for obstacles present in front of the user; motors are assigned according to the location of the obstacle, and the system needs user training for proper functioning. Nguyen et al.[20] used an electric pulse-based system for feedback: electric pulses are generated in data gloves and deliver instructions through the nerves in the skin.
## III Method
In this section, we describe how we performed our experiment and which methods are used for obstacle detection, depth estimation, and warning generation. We use an object detection method for obstacle detection in the video frames, estimate depth with a monocular depth network, and finally generate warnings using the Google text-to-speech algorithm.
### _Obstacle detection using YOLOv3_
Obstacle detection is a basic building block of an effective guidance system for visually impaired persons. Obstacles can be at any elevation and keep moving towards or away from the person, so an obstacle's location changes constantly in a real-time environment; detecting those objects and classifying them as obstacles is one of the biggest challenges of this problem. Here we use the YOLOv3[9] object detection algorithm for obstacle detection because it is very fast and creates anchor boxes around objects. YOLOv3 is trained on the COCO dataset, which has 80 classes, so it can predict an object if it belongs to one of these classes. YOLOv3 uses Darknet-53, which has 53 convolutional layers, as a backbone feature extractor, making it a powerful network. YOLOv3 has a skip-connection-based architecture like ResNet and 3 prediction heads like FPN. YOLOv3 is not as accurate as YOLOv4[5] or YOLOv5, but it is much faster, so we use it in our method because it can give results in real time, enabling a good guidance system. We apply YOLOv3 to the captured frames for obstacle detection, and then monocular depth estimation is performed for each detected object.
### _Depth estimation_
Depth estimation is one of the basic needs for this type of problem, as we need to find the depth of the obstacle from the person to generate a warning in time. Since we are using only one camera, depth estimation becomes one of the most challenging parts of this problem. Monocular depth estimation is often considered an ill-posed problem, as estimating depth from pixel values alone is not generally possible, but recent developments in deep learning techniques have made it possible to a good extent. There are many deep learning-based techniques available for depth estimation, such as the FlowNet architecture by Dosovitskiy et al.[7], who applied a supervised encoder-decoder CNN-based method to estimate the optical flow using channel-concatenated image pairs, and the work of Zhou et al.[26], who used unsupervised settings for depth and pose estimation from a video sequence.
In our approach, we use DepthNet[17] for depth estimation, which is a recurrent neural network architecture for monocular depth estimation. DepthNet uses a convolutional LSTM (ConvLSTM)-based architecture in which the fully connected LSTM layer is replaced by a stack of ConvLSTM layers. The LSTM layers allow the network to learn temporal information better, while the convolutional structure retains the spatial relationships between grid cells. DepthNet takes multiple frames from the video feed for depth estimation of a scene, which go into an encoder-decoder setup. As this network is trained on the KITTI dataset, it is suitable for outdoor tasks. Depth calculated with this method is fast and more accurate, as it uses multiple video frames of the same scene, so we use it in our task. The loss function used here is the Eigen scale-invariant loss function by Eigen et al.[8]. Given a predicted depth map \(y_{i}\) and a ground-truth depth map \(y_{i}^{*}\), the loss function is given by:
\[L(y,y^{*})=\frac{1}{n}\sum_{i}d_{i}^{2}-\frac{\lambda}{n^{2}}(\sum_{i}d_{i})^ {2} \tag{1}\]
where \(d_{i}=\log(y_{i})-\log(y_{i}^{*})\) for the \(i\)th pixel and \(n\) is the total number of pixels.
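A direct PyTorch implementation of this loss is sketched below; the value \(\lambda=0.5\) is an assumption (the paper does not state it here), and the small epsilon guards against \(\log(0)\).

```python
import torch

def eigen_scale_invariant_loss(pred, target, lam=0.5, eps=1e-8):
    """Scale-invariant log-depth loss of Eigen et al. (Eq. 1).

    `pred` and `target` are positive depth maps of equal shape;
    lam=0.5 is an assumed weighting, not stated in the text.
    """
    d = torch.log(pred + eps) - torch.log(target + eps)
    n = d.numel()
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / n ** 2
```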
The depth map evaluated by DepthNet is combined with the optical flow and then given to the model to generate a warning according to the obstacle's distance from the person. The warning is generated according to the image segment in which the obstacle lies. The next section describes how we obtain the optical flow from the video feed.
### _Optical flow estimation_
Optical flow is the pattern of apparent motion of objects, edges, and surfaces in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene. The motion field is the real-world 3D motion, and the optical flow field is its projection onto the 2D image; any motion in 3D generates a vector in 2D, and that vector is defined as the optical flow of that particular object. J.L. Barron and N. A. Thacker [14] describe the 2D motion field as the 2D velocities of all visible points. We use the Lucas-Kanade[19] method for optical flow estimation from the video feed. This method rests on two assumptions: the two images are separated by a small time increment \(\Delta t\), such that objects have not displaced significantly, and the images depict a natural scene containing textured objects exhibiting shades of gray that change smoothly.
Since the camera is mounted on a person moving at slow speed, and the obstacles approaching the person are also not very fast, both assumptions are satisfied to a good level, so we use this model for our optical flow estimation. Lucas-Kanade's[19] method solves the basic optical flow equations for all pixels in a neighborhood using the least squares error. This method is very fast, making it well suited to our real-time system; its main disadvantage is errors at motion boundaries. If \(p_{1},p_{2},\ldots,p_{n}\) are the pixels of the window, \(I_{x}(p_{i}),I_{y}(p_{i}),I_{t}(p_{i})\) are the partial derivatives of the image \(I\) with respect to position \(x\), \(y\), and time \(t\) at point \(p_{i}\) at the current time, and \(V_{x}\) and \(V_{y}\) are the components of the velocity vector, then
Fig. 1: YOLOv3 comparison with other object detection models. This shows that YOLOv3 is faster and better, which is why we use it for our model.[9]
Fig. 2: The \(i\)th frame in the figure is the current frame; optical flow uses the current and previous frames for flow detection. The YOLOv3 model is used for object detection and DepthNet is used for depth prediction from the \(i\)th frame. Finally, all three inputs are given to the model to generate a warning signal.
the equations according to Lucas Kanade's method can be given as:
\[Av=b\]
, where
\[A=\begin{bmatrix}I_{x}(p_{1})&I_{y}(p_{1})\\ I_{x}(p_{2})&I_{y}(p_{2})\\ \vdots&\vdots\\ I_{x}(p_{n})&I_{y}(p_{n})\end{bmatrix},v=\begin{bmatrix}V_{x}\\ V_{y}\end{bmatrix},b=\begin{bmatrix}-I_{t}(p_{1})\\ -I_{t}(p_{2})\\ \vdots\\ -I_{t}(p_{n})\end{bmatrix}\]
Lucas Kanade used a least squares criterion-based approach for optical flow estimation. So:
\[V=(A^{T}A)^{-1}A^{T}b\]
where \(A^{T}\) is the transpose of matrix \(A\).
This equation gives the same importance to all pixels of the window, but we want to give more importance to the central pixels, so the Lucas-Kanade method uses a weighted version of least squares:
\[V=(A^{T}WA)^{-1}A^{T}Wb \tag{2}\]
where \(W\) is an \(n\times n\) diagonal matrix containing the weights assigned to the pixels \(p_{i}\).
The flow-chart for Lucas Kanade's method can be given as:
* Calculate the flow \(u(i-1),v(i-1)\) at level \(i-1\).
* Upsample \(u(i-1),v(i-1)\) to twice the resolution of level \(i-1\) to create \(u^{*}(i),v^{*}(i)\), and multiply the values by 2.
* Since \(u^{*}(i),v^{*}(i)\) are apparent velocities, apply them as block displacements.
* Apply Lucas-Kanade to find the residual flow \(u^{\prime}(i)\) and \(v^{\prime}(i)\).
* \(u(i)=u^{*}(i)+u^{\prime}(i)\), \(v(i)=v^{*}(i)+v^{\prime}(i)\).
Finally, we apply these two formulae to obtain the optical flow at the \(i\)th level.
We apply this pyramidal Lucas-Kanade procedure up to the highest level to obtain the optical flow. Having calculated the optical flow and the depth map, we have both depth information and information about the flow of objects in front of the person, and we use these two pieces of information in our model to predict warnings for a visually impaired person in real time.
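A minimal sketch of this step using OpenCV's pyramidal Lucas-Kanade implementation with Shi-Tomasi corners follows; the function name and frame handling are our own, but the parameters (15×15 window, five pyramid levels, 200 corners, quality level 0.03, minimum distance 10) follow the implementation details reported in Section IV-B.

```python
import cv2
import numpy as np

def track_flow(prev_gray, curr_gray):
    """Sparse pyramidal Lucas-Kanade flow between two grayscale frames."""
    # Shi-Tomasi corners as trackable points.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.03, minDistance=10)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    # maxLevel=4 gives five pyramid levels in total.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p0, None,
        winSize=(15, 15), maxLevel=4)
    good = status.ravel() == 1
    # Matched point pairs; their differences are the flow vectors.
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```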
### _Evaluation of Obstacle Location_
One of the most challenging tasks in building a guidance system is finding the location of the obstacle in the video frames. For this task, we divide the entire frame into three parts, i.e., left, center, and right, where "left" means the left of the person carrying the camera, and so on. Having already calculated the depth map using DepthNet[17] and the optical flow using Lucas-Kanade[19], we created an algorithm that generates alerts from these values: if YOLOv3[9] detects an object on the right side of the image, it is most likely not an obstacle for the moving visually impaired person, but if its depth map and optical flow show it moving towards the person, a warning should be generated; the same applies to left-side obstacles. The algorithm thus uses all three inputs, i.e., the depth map, the optical flow, and the object detection outputs, to generate warnings in real time.
We divide the image into three parts as follows. For a frame of width 1280 pixels, columns \(0\) to \(448\) are considered the left region; if the depth value corresponding to a detection there is less than 210, a warning is generated that an object is approaching from the left. Columns \(448\) to \(832\) form the center region; if the depth there is less than 220, a warning is generated that the obstacle is in front. Columns \(832\) to \(1280\) form the right region, again with a depth threshold of 210, producing a warning that an object is approaching from the right. Using this simple method, we generate warning text in our program in a real-time environment.
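A minimal sketch of this warning logic with the thresholds above hard-coded; the obstacle's horizontal position `cx` is assumed here to be the center of its YOLOv3 bounding box.

```python
def warning_for(cx, depth):
    """Map an obstacle's horizontal pixel position `cx` (image width
    1280) and its depth-map value to a warning string."""
    if cx < 448 and depth < 210:
        return "The object is approaching from the left."
    if 448 <= cx < 832 and depth < 220:
        return "The object is approaching from the center."
    if cx >= 832 and depth < 210:
        return "The object is approaching from the right."
    return None  # no warning needed
```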
### _Text to speech_
We use the Google text-to-speech converter API, commonly known as gTTS [1], to convert the generated warning text into speech that a blind person can listen to. We chose this API because it generates MP3 files for speech, and gTTS supports 30+ languages, such as Hindi, English, Tamil, and German, and 100+ voices with localized accents for some languages, such as English, French, and Mandarin, making it a very useful tool for this task. Google Text-to-Speech accepts a maximum of 100 characters at a time; longer text is handled by splitting it into parts. In our use case we do not need to do that, as we generate only small warning texts. The text messages used as warnings are listed below, followed by a minimal usage sketch:
* The object is approaching from the left.
* The object is approaching from the center.
* The object is approaching from the right.
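The usage sketch below shows the gTTS call for one such warning; the file name and the omission of a playback step are our own choices.

```python
from gtts import gTTS

def speak(warning, path="warning.mp3"):
    """Convert a warning string to speech and save it as an MP3 file.

    Playback (e.g., through headphones) is platform-specific and
    omitted here.
    """
    gTTS(text=warning, lang="en").save(path)

speak("The object is approaching from the left.")
```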
## IV Experiments and Results
In this section, we apply our model in a real-time environment and examine how it performs. We use a single-camera setup in which we record the scene in front of the visually impaired person. The model works in real time, but for evaluation purposes we recorded the video and applied the model offline on a computer.
### _Datasets_
Here we use the Common Objects in Context (COCO)[18] dataset for obstacle detection. The dataset consists of a total of 328K images of 80 different object categories and was developed by Microsoft for tasks such as object detection, captioning, and keypoint detection. It contains images of very common objects such as persons, cars, and chairs. The video is captured from a single camera installed on top of a moving person; this video is used as input to our model and for frame capturing.
### _Implementation details_
In our experiment, we used an MP4 video file as input to our model. The output video is of shape 2560×720, in which detected objects are annotated and audio-based warnings are generated. We used Lucas-Kanade's method for optical flow estimation, using the trajectories traced by the optical flow to judge from which side an object is approaching the person. The Lucas-Kanade search window size is \(15\times 15\) and the maximal pyramid level is 4, so a total of five levels are used. The Shi-Tomasi corner detection method is used for corner detection, with a maximum of \(200\) corners, a quality level parameter of \(0.03\), and a minimum distance of \(10\). The DepthNet used for depth estimation has encoding and decoding phases: the encoding phase consists of \(3\times 3\) ConvLSTM layers with \(\{32,64,64,128,128,256,256,512\}\) filters, respectively, except for the first two layers, where \(7\times 7\) and \(5\times 5\) filter sizes are used. A ReLU activation function is used for the ConvLSTM layers and a hard sigmoid for the recurrent step. In the decoding phase, deconvolution layers of sizes \(\{512,256,256,128,128,64,64,32\}\) are used, with filter sizes of \(3\times 3\) and \(1\times 1\). Finally, we use the Google text-to-speech converter to convert the warning text into a voice signal so that the person can be alerted. We use the English language for our experiments since it is the most widely accepted.
### _Experimental setup_
The proposed model uses PyTorch and a Google Colaboratory GPU for processing. We used a single video camera on the forehead of the moving person to record a video feed for testing. We then deploy our model on a laptop attached to the forehead camera to generate warnings according to approaching obstacles. The text-to-speech converter converts the text obtained from our model into a voice command instructing the visually impaired person about incoming obstacles. We also tested our model on videos of moving persons from the internet, where it worked well and gave sufficient output.
### _Testing on videos_
We have tested this model on different videos available on YouTube. The achieved output is fast, since objects are recognized quickly by the YOLOv3 method; as Fig. 1 shows, YOLOv3 is faster than previous methods. To build the table, we watch each video and observe from which side a person or obstacle is approaching, and then run our model on the same video to generate warnings for the visually impaired person. We repeated this process for three videos, and the results are shown in the table. The optical flow model generates warnings only for approaching objects.
## V Conclusion and Future Work
Technology keeps making human life easier, and this is our small effort to make the lives of visually impaired persons easier using the latest deep learning technologies. We designed our model so that it can perform in real time, where it can be truly helpful, while remaining accurate, since it assists a blind person. This is why we use models that are neither too heavy nor too inaccurate. In the future, we will look for solutions for different environmental conditions and try to build a better model that can work in any environmental condition and in real time.
|
2302.14028 | Modeling of Interface Loads for EOD Suit Wearers | Explosive Ordnance Disposal (EOD) suits are widely used to protect human
operators to execute emergency tasks such as bomb disposal and neutralization.
Current suit designs still need to be improved in terms of wearer comfort,
which can be assessed based on the interaction forces at the human-suit contact
regions. This paper introduces a simulation-based modeling framework that
computes the interaction loads at the human-suit interface based on a wearer's
kinematic movement data. The proposed modeling framework consists of three
primary components: a) inertial and geometric modeling of the EOD suit, b)
state estimation of the wearer's in-suit movement, and c) inverse dynamics
analysis to calculate the human-suit interface forces based on the simulated
human-suit model and the estimated human movement data. This simulation-based
modeling method could be used to complement experimental testing for improving
the time and cost efficiency of EOD suit evaluation. The accuracy of the
simulated interface load was experimentally benchmarked during three different
human tasks (each with three trials), by comparing the predicted interface
forces with that measured by commercial pressure sensors. | Yuan Gao, Stephanie Epstein, Murat Inalpolat, Yi-Ning Wu, Yan Gu | 2023-02-27T18:39:28Z | http://arxiv.org/abs/2302.14028v1 | # Modeling of Interface Loads for EOD Suit Wearers
###### Abstract
Explosive Ordnance Disposal (EOD) suits are widely used to protect human operators to execute emergency tasks such as bomb disposal and neutralization. Current suit designs still need to be improved in terms of wearer comfort, which can be assessed based on the interaction forces at the human-suit contact regions. This paper introduces a simulation-based modeling framework that computes the interaction loads at the human-suit interface based on a wearer's kinematic movement data. The proposed modeling framework consists of three primary components: a) inertial and geometric modeling of the EOD suit, b) state estimation of the wearer's in-suit movement, and c) inverse dynamics analysis to calculate the human-suit interface forces based on the simulated human-suit model and the estimated human movement data. This simulation-based modeling method could be used to complement experimental testing for improving the time and cost efficiency of EOD suit evaluation. The accuracy of the simulated interface load was experimentally benchmarked during three different human tasks (each with three trials), by comparing the predicted interface forces with that measured by commercial pressure sensors.
## I Introduction
Various capabilities of the existing explosive ordnance disposal (EOD) suits have been extensively studied [1, 2, 3, 4], with a primary focus on blast and heat protection. In contrast, only a few studies have investigated the ergonomics of existing EOD suits [5, 6] in terms of user comfort and fatigue. Yet, the ergonomics of other full-body, heavy-weight, protective suits, such as the Extravehicular Mobility Units (EMUs), have been extensively studied with a focus on the physical suit-human interaction that can be used to indicate user comfort. These studies have revealed that existing EMU designs (e.g., space suits) could cause user discomfort by inducing injuries and significantly boosting wearers' metabolic costs [7, 8, 9]. These negative effects may compromise the operational performance of a suit wearer during task execution [10]. Thus, it is essential to quantify the physical interaction for users wearing full-body, heavy-weight, protective suits, which include both EMUs and EOD suits.
The physical interaction between a wearer and a space suit has been recently investigated. Diaz and Newman have proposed an approach to measure the physical human-suit interaction as well as the joint torque [11], by modeling the interaction forces as an external load applied to the human subject. Yet, modeling the human-suit interaction as a pre-specified external load applied at a point may not accurately reflect the interface load because the interaction typically occurs within a finite region instead of a point.
To accurately capture the physical interaction at the human-suit interface, a pressure sensing system has been developed to experimentally measure the interface loads between the human and suit [12, 13, 14, 15]. To further investigate the interaction between the space-suit and wearer, a sensing system with additional capabilities (e.g., temperature and humidity sensing) has been developed [16].
Although pressure sensing systems can directly measure the interface load experienced by EOD suit wearers, experimental pressure sensing across the various movements of wearers can be time-consuming (e.g., due to the time costs of the calibration, placement, and re-zeroing of pressure sensors [6]). To this end, simulation-based modeling can be exploited to compute the interface loads without experimental pressure sensing, thus complementing experimental testing and alleviating the burden of extensive tests. In this study, we introduce a simulation-based modeling framework that uses biomechanics simulation software to calculate the interaction forces between the wearer and the EOD suit during different full-body motions. The framework includes an integrated human-suit model that captures the realistic human biomechanics, the essential features of the inertial and geometrical properties of the EOD suit, and the physical interaction between the suit and the human model within the finite contact regions. Based on the integrated human-suit model, the framework also incorporates inverse dynamics analysis, performed via biomechanics software, to compute the reaction forces at a set of user-defined contact regions and points based on the wearer's movement data. The main contributions of this work are: (a) proposing a new method to obtain the pressure data between the wearer and the suit under various motions, rather than relying solely on human subject experiments, and (b) emulating the suit-human interactions using rigid bodies and various constraints. Results of pilot experiments validated the effectiveness of the framework in modeling the wearer-suit interface loads during different mobility tasks.
## II Simulation-based Human-suit Modeling
This section presents the proposed simulation-based approach of human-suit modeling. The objective of the modeling is to accurately produce the interface loads between the wearer and the suit based on the wearer's movement data.
To reach the modeling objective, the proposed approach comprises three main components (see Fig. 1). The first
component is the modeling of the EOD suit to capture its essential physical properties (e.g., mass and geometry) that could affect the human-suit interface loads at the critical regions (e.g., shoulders), as introduced in subsection A. The geometry modeling is performed in SOLIDWORKS.
The second component is the human movement estimation to obtain the kinematic data (e.g., the global position of the wearer) that is needed to compute the interface loads using biomechanics software but cannot be directly measured, as explained in subsection B. Note that the human movement data is required for interface load computation since a wearer's movement can directly affect the interface loads.
The last component is the inverse dynamics analysis via biomechanics-based simulation for calculating the interface loads based on the outcomes from the first two components (i.e., integrated human-suit model and the estimated human movement), as presented in subsection C. We choose to use physics-based simulations, instead of analytical methods (e.g., mathematically modeling the interaction based on physics laws), as the basis to study the interface loads. This is due to the fact that the human-suit interaction is complex, involving contact areas at multiple locations, complex geometry of both the human and the suit, and different load patterns under different human motions. In other words, it may not be tractable to model the physical interface loads using analytical methods.
### _EOD Suit Modeling_
This subsection introduces the proposed modeling of the EOD suit in SOLIDWORKS to capture the essential inertial and geometric properties of the suit. The suit model created is integrated with a high fidelity human model in biomechanics software for interface force computation as explained in subsection C.
The EOD suit of interest in this study is the "EOD 8 Suit" (see Fig. 2-a), which is a heavy, full-body suit designed to protect the wearer from the heat and shockwaves induced by a bomb, as well as any fragments the bomb may generate. The EOD 8 suit has been in service since 1999 and is one of the most widely used EOD suits for bomb disposal operations around the world [1, 6].
The EOD 8 Suit utilized in this study is medium-small sized, and the total mass of its main components (without the helmet and the groin portion) is approximately 18.25 kg (i.e., 179.01 N of weight). Its outer fabric is made of an aramid weave, within which alloy plates are installed at the chest, back, and knee portions to provide additional protection.
The inertial and geometrical properties of the EOD suit are complex because the suit comprises multiple rigid (e.g., metal pads inserted within the suit) and soft pieces (e.g., fabrics) with complex shapes. To provide a relatively accurate representation and model of the EOD suit for efficient interface load computation, we use SOLIDWORKS to build a simplified three-dimensional (3-D) model of the EOD suit that captures the essential features of the suit such as its inertia and geometry.
#### II-A1 Modeling Assumptions
The following model simplifying assumptions are considered:
1. The suit is modeled as a collection of rigid bodies.
2. The density of each segment of the suit model is assumed to be evenly distributed.
3. The helmet and the soft armor at the groin portion are omitted from the suit model.
Assumptions (A1) and (A2) are reasonable because the majority of an EOD suit's weight is contributed by the lumped alloy plates located at the subject's chest, back, and knees and the density of these alloy plates is evenly distributed. Assumption (A3) is mainly for simplifying the suit modeling, and the helmet and the soft groin armor will be considered in our future work of suit modeling.
#### II-A2 Suit Component Modeling
Under assumptions (A1)-(A3), we decompose the EOD suit (without the helmet and the groin portion) into the following eight parts (Fig. 2-a): (1) Right Leg Upper (RLU); (2) Right Leg Lower (RLL); (3) Left Leg Upper (LLU); (4) Left Leg Lower (LLL); (5) Back Pad (BP); (6) Right Arm (RA); (7) Left Arm (LA); and (8) Front Torso (FT).
We model the shape and dimensions of each component in SOLIDWORKS based on those of a representative suit wearer, the EOD suit, and the wearer-suit contact regions. The eight suit components are illustrated in Fig. 2-b. Note that the model of the FT component has both shoulder and chest parts but does not include an abdominal part, because the most significant pressures on the subject's upper body occur at the shoulders and the chest [6].
To obtain the precise weight of each major component
Fig. 1: Overview of the proposed modeling framework that comprises three main components (highlighted with dashed blocks). The three components are suit modeling, in-suit motion estimation, and inverse dynamics analysis.
Fig. 2: Illustrations of a) the tested subject wearing the EOD 8 Suit, b) eight major components of the proposed suit model, which are created in SOLIDWORKS and assembled to the human model in AnyBody, and c) “belt” constraints anchoring the suit components to the human model in AnyBody. The labels in subplot a) highlight seven of the eight components: (1) FT; (2) RA; (3) LA; (4) RLU; (5) LLU; (6) RLL; and (7) LLL.
of the EOD suit (specifically for the EOD 8 version), we measured each suit component ten times using a force plate and used the average value to represent its weight. The force plate is the BMS600900 platform developed by Applied Molecular Transport Inc., with an accuracy of 0.05% of the load and a resolution of 0.169 N. The measured weights of the suit components are listed in Table I.
With the individual segments modeled in SOLIDWORKS, we then import the individual segments into biomechanics software to assemble the suit components and the human body (see Fig. 2-b, c), as explained in subsection C.
### _In-Suit Kinematics Measurement and Estimation_
Movement data are required by the inverse dynamics analysis. However, they cannot be directly measured from the raw data returned by common sensors. Wearable sensor systems, such as APDM [17], estimate the joint angles of a human subject. Yet, they do not return the global position. Therefore, state estimation methods are needed to produce the subject's global position based on movement data returned by wearable sensors.
#### II-B1 Movement Sensors Selected
To reduce the discomfort caused by placing sensors on a suit wearer, we choose to use inertial-based motion capture systems that are compact and lightweight (see Fig. 3-a).
The sensing system we use is the APDM [17] inertial motion-capture system, which provides the joint angles and the 3-D orientation of each IMU in the world frame. As the global position of the human subject is not directly returned by APDM but is often needed by inverse dynamics analysis (e.g., via AnyBody), we develop a state estimator based on Kalman filtering to obtain the global position data. To that end, besides the IMUs attached to each body limb for measuring the joint angles, we use the IMU placed at the lower back (i.e., the base) to directly measure the linear acceleration and angular velocity of the base with respect to the IMU frame. The method used in this section can be found in [18]. In the following, the process model and the measurement model are introduced; the computational details of the Kalman filter are omitted due to space limitations.
#### II-B2 Estimated Movement Variables
The state of interest to be estimated is compactly expressed as \(\mathbf{x}_{t}=[\mathbf{p}_{t}^{T},\ \mathbf{v}_{t}^{T},\ \mathbf{p}_{1,t}^{T},\ \mathbf{p}_{2,t}^{T}]^{T}\), where \(\mathbf{p}_{t}\in\mathbb{R}^{3}\) is the base position in the world frame, \(\mathbf{v}_{t}\in\mathbb{R}^{3}\) is the base velocity in the world frame, and \(\mathbf{p}_{1,t}\in\mathbb{R}^{3}\) and \(\mathbf{p}_{2,t}\in\mathbb{R}^{3}\) are the left and right foot positions in the world frame. Note that the subscript \(t\) indicates the time instant \(t\), and \((\cdot)_{t}\) denotes the value of the variable \((\cdot)\) at time \(t\). All of these variables and reference frames are illustrated in Fig. 4.
#### II-B3 Process Model
As the APDM sensor system returns data at discrete times, the process model of the Kalman filter is designed in discrete time. The filter design assumes that the IMU attached to the base gives sufficiently accurate data of the 3-D base orientation \(\mathbf{R}_{t}\in\mathbb{R}^{3\times 3}\) in the world frame.
Let the scalar variable \(\Delta t\) be the duration between two successive sampling events. Based on the dynamics of the base IMU [19, 20, 21], the process model of the base position and velocity at time \(t\) is given by:
\[\begin{split}\mathbf{p}_{t+1}&=\mathbf{p}_{t}+\mathbf{v}_{t}\Delta t+\frac{\Delta t^{2}}{2}\mathbf{R}_{t}(\mathbf{y}_{a,t}+\mathbf{g});\\ \mathbf{v}_{t+1}&=\mathbf{v}_{t}+\Delta t\,\mathbf{R}_{t}(\mathbf{y}_{a,t}+\mathbf{g}).\end{split} \tag{1}\]
Here, the vector \(\mathbf{y}_{a,t}\in\mathbb{R}^{3}\) is the accelerometer reading. Then, \(\mathbf{R}_{t}\mathbf{y}_{a,t}\) is the true value of the linear acceleration of the base IMU expressed in the world frame.
Based on the dynamics of the feet [19, 22, 23], the process models of the left and right foot positions at time \(t\) are:
\[\mathbf{p}_{1,t+1}=\mathbf{p}_{1,t}\quad\text{and}\quad\mathbf{p}_{2,t+1} =\mathbf{p}_{2,t}. \tag{2}\]
Here, \(\mathbf{p}_{1,t}\in\mathbb{R}^{3}\) and \(\mathbf{p}_{2,t}\in\mathbb{R}^{3}\) are the positions of the left and the right feet expressed in the world frame, respectively. If \(\mathbf{p}_{i,t}\) \((i=1,2)\) is the stance foot position and the stance foot is static on the ground, then \(\mathbf{p}_{i,t+1}=\mathbf{p}_{i,t}\) holds. However, if \(\mathbf{p}_{i,t}\) is the swing foot position, then the process model \(\mathbf{p}_{i,t+1}=\mathbf{p}_{i,t}\) no longer holds. Accordingly, we set the process noise covariance associated with that foot to be significantly large to effectively deactivate the process model of that foot position.
These process models can be compactly expressed as:
\[\underbrace{\begin{bmatrix}\mathbf{p}_{t+1}\\ \mathbf{v}_{t+1}\\ \mathbf{p}_{1,t+1}\\ \mathbf{p}_{2,t+1}\end{bmatrix}}_{=:\mathbf{x}_{t+1}}=\underbrace{\begin{bmatrix}\mathbf{I}_{3}&\Delta t\,\mathbf{I}_{3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{I}_{3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{I}_{3}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{I}_{3}\end{bmatrix}}_{=:\mathbf{A}}\underbrace{\begin{bmatrix}\mathbf{p}_{t}\\ \mathbf{v}_{t}\\ \mathbf{p}_{1,t}\\ \mathbf{p}_{2,t}\end{bmatrix}}_{=:\mathbf{x}_{t}}+\underbrace{\begin{bmatrix}\frac{\Delta t^{2}}{2}\mathbf{I}_{3}\\ \Delta t\,\mathbf{I}_{3}\\ \mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}\end{bmatrix}}_{=:\mathbf{B}}\mathbf{R}_{t}(\mathbf{y}_{a,t}+\mathbf{g})\]
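To make these steps concrete, the following minimal Python sketch assembles the state-transition matrices implied by Eqs. (1)-(2) and performs one prediction step. The names (`dt`, `g`, `Q`) and the gravity sign convention are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

I3, Z3 = np.eye(3), np.zeros((3, 3))
dt = 0.0078                        # APDM sampling period (128 Hz)
g = np.array([0.0, 0.0, -9.81])    # gravity vector (assumed sign convention)

# State x = [p, v, p1, p2]: accelerating base (Eq. 1), static feet (Eq. 2).
A = np.block([[I3, dt * I3, Z3, Z3],
              [Z3, I3,      Z3, Z3],
              [Z3, Z3,      I3, Z3],
              [Z3, Z3,      Z3, I3]])
B = np.vstack([0.5 * dt**2 * I3, dt * I3, Z3, Z3])

def predict(x, P, R_t, y_a, Q):
    """One prediction step: propagate state mean x and covariance P."""
    u = R_t @ (y_a + g)            # world-frame base acceleration, as in Eq. (1)
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q       # inflate the swing-foot block of Q to
    return x_pred, P_pred          # deactivate that foot's static model
```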
#### II-B4 Measurement Model
When the sensors return data at time \(t\), the update step of the Kalman filter is performed based on measurement models. Let the vector \(\mathbf{q}_{t}\) be the wearer's joint angles obtained from the APDM sensors at time \(t\). In this study, we form measurement models based on the forward kinematic chain connecting the base and the foot frames. Let \(\mathbf{h}_{1}(\mathbf{q}_{t})\) and \(\mathbf{h}_{2}(\mathbf{q}_{t})\) be the nonlinear forward kinematics functions representing the left and right foot positions with respect to the base frame, respectively. Then, by the definition of \(\mathbf{h}_{i}\) (\(i=1,2\)), we have \(\mathbf{R}_{t}^{T}\,\mathbf{h}_{i}(\mathbf{q}_{t})=\mathbf{p}_{i,t}-\mathbf{p}_{t}\).
The measurement model of the filter is expressed as:
\[\mathbf{R}_{t}^{T}\,\mathbf{h}_{1}(\mathbf{q}_{t})=\mathbf{p}_{1,t}-\mathbf{p }_{t}\,\,\,\text{and}\,\,\mathbf{R}_{t}^{T}\,\mathbf{h}_{2}(\mathbf{q}_{t})= \mathbf{p}_{2,t}-\mathbf{p}_{t}. \tag{3}\]
Equation (3) can be organized into:
\[\underbrace{\begin{bmatrix}\mathbf{R}_{t}^{T}\mathbf{h}_{1}(\mathbf{q}_{t})\\ \mathbf{R}_{t}^{T}\,\mathbf{h}_{2}(\mathbf{q}_{t})\end{bmatrix}}_{=:\mathbf{h}}=\underbrace{\begin{bmatrix}-\mathbf{I}_{3}&\mathbf{0}_{3\times 3}&\mathbf{I}_{3}&\mathbf{0}_{3\times 3}\\ -\mathbf{I}_{3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{I}_{3}\\ \end{bmatrix}}_{=:\mathbf{C}}\underbrace{\begin{bmatrix}\mathbf{p}_{t}\\ \mathbf{v}_{t}\\ \mathbf{p}_{1,t}\\ \mathbf{p}_{2,t}\end{bmatrix}}_{=:\mathbf{x}_{t}}\]
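A hedged sketch of the corresponding update step is given below; `fk(q)` is a hypothetical forward-kinematics helper standing in for \(\mathbf{h}_{1}\) and \(\mathbf{h}_{2}\), and `Rn` is an assumed measurement noise covariance.

```python
import numpy as np

I3, Z3 = np.eye(3), np.zeros((3, 3))
C = np.block([[-I3, Z3, I3, Z3],
              [-I3, Z3, Z3, I3]])     # C @ x = [p1 - p; p2 - p], cf. Eq. (3)

def update(x, P, R_t, q, fk, Rn):
    """Correct the predicted state with the kinematic foot 'measurements'."""
    h1, h2 = fk(q)                                 # foot positions, base frame
    z = np.concatenate([R_t.T @ h1, R_t.T @ h2])   # measured p_i - p
    y = z - C @ x                                  # innovation
    S = C @ P @ C.T + Rn                           # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(12) - K @ C) @ P
    return x_new, P_new
```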
### _Interface Load Computation via Simulation-based Inverse Dynamics Analysis_
This subsection explains the computation of the human-suit interaction force based on simulation-based inverse dynamics analysis. The analysis utilizes the previously explained suit model and the estimated human movement.
#### II-C1 Selection of Inverse Dynamics Analysis Software
To reach our modeling objective of accurately producing the wearer-suit interface loads based on the wearer's movement data, the software should possess the following features. First, the biomechanics model of the human should be reasonably accurate. Second, the software should be capable of computing the contact force between the wearer and the suit in a realistic way, e.g., by explicitly considering the physical interaction within the finite contact areas. The AnyBody software meets these requirements, as it provides a high-fidelity, customizable human model. It also allows the computation of the subject-suit reaction forces.
#### II-C2 Selection of Human Biomechanics Model in AnyBody
The human model used here is a generic human body model provided by the AnyBody Managed Model Repository, which can be customized based on the actual subject's limb lengths, overall height, and weight. In total, the human model in AnyBody has 408 degrees of freedom and 214 joints.
The 3-D suit model created in SOLIDWORKS, as explained in subsection A, is a group of disconnected components corresponding to the eight major parts of a typical EOD suit. We need to appropriately integrate the suit model with the realistic human model in AnyBody for computing the interface load based on the human's movement data, which is explained next.
#### II-C3 Contact Region Definition
The suit and the human make contact at multiple finite-sized regions, i.e., at infinitely many points within those contact regions. Yet, computing the interface loads at infinitely many points may not be tractable. We therefore simplify the interaction force computation by exploiting the built-in functionality of AnyBody that allows users to define a finite set of contact regions on both the suit and the subject for interaction force computation. With AnyBody, each contact point within a contact region between the human model and an external object/environment is defined by: a) the position of the point in a 3-D Cartesian coordinate frame fixed to the suit, and b) a local 3-D Cartesian coordinate system attached to the suit with its \(y\)-axis aligned with the normal direction of the contact surface at that point.
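As a sketch of this convention, a contact point can be stored as a suit-frame position plus a local rotation whose second column (the \(y\)-axis) is the surface normal; the field names below are ours, not AnyBody's.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ContactPoint:
    position: np.ndarray   # 3-D point in the suit-fixed coordinate frame
    frame: np.ndarray      # 3x3 rotation; its y-axis is the surface normal

    @property
    def normal(self) -> np.ndarray:
        return self.frame[:, 1]   # second column = local y-axis
```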
#### II-C4 "Belt" Constraint Design
To ensure that the eight components of the suit remain in secure contact with the human body, we use the "belt" constraint provided in AnyBody to anchor the suit components on the human body. Without the "belt" constraints enforced, the disconnected individual suit components would fall off the human body, and the simulator would report an error. The belt only applies "pulling" forces between the connected suit and human segments, mimicking the suit's highly stiff fabric that connects different metal segments of the suit. The belt can be defined by specifying its two end points, with one on the suit and the other on the body.
We choose to set the belt constraints for different suit parts as follows (see Fig. 2-c):
1. Back Pad (BP) and Front Torso (FT) are connected to a single point on the lower part of the neck.
2. Each Upper Leg (LLU or RLU) is connected to a single point on the outer side of the hip.
3. Each Lower Leg (LLL or RLL) is connected to a single point on the outer side of the knee.
#### II-C5 Interface Load Computation via AnyBody Inverse Dynamics Analysis
After setting up the integrated human-suit model in AnyBody, the inverse dynamic analysis can be performed to obtain the 3-D reaction force at each contact point. These forces can then be used to compute the resultant force at the specified suit-wearer interface region. For a musculoskeletal system with additional contacts, solving the interaction forces is an indeterminate problem. AnyBody solves the problem by casting it as an optimization problem, with the cost function set as the norm of muscle and contact forces, and with the constraints enforcing muscle forces to
Fig. 4: Illustration of the position and orientation variables and reference frames used in the proposed Kalman filter. The reference frame {_world_} is the world frame. The reference frames {_right foot_}, {_left foot_}, and {_base_} are attached to the subject’s right foot, left foot, and base (i.e., lower back).
be pulling and contact forces to be pushing.
## III Experimental Validation
This section reports the experimental validation results of the proposed simulation modeling framework.
The test data collected during the human subject tests were imported and directly processed in MATLAB.
### _Setup of Subject and EOD Suit_
**Human subject:** This study is approved by the Institutional Review Board (IRB) of the University of Massachusetts Lowell (#19-023). In the pilot testing, one healthy human subject (31 years old, 169 cm, and 60 kg) was recruited.
**Movement types:** The pilot subject testing included three movement types: flat-ground walking, walking upstairs, and walking downstairs (see Fig. 5). The distance of walking on the flat terrain was about 6.4 m. The total height of the staircase, with five steps, was approximately 0.8 m. Three trials were conducted for each movement type. During each trial, the movement sequence in temporal order was quiet standing, walking (on the ground or stairs), and quiet standing.
### _Setup of Human-Suit Model in AnyBody_
In this study, we focus on validating the suit model in predicting the interface load at the subject's shoulders because shoulders have been reported as one of the body segments that are subject to significant discomfort during common suited movements [6]. In AnyBody (Version 7.2), sixty contact points were defined to be evenly distributed within the top portion of each shoulder to ensure an accurate computation of the interface load without inducing an overly high computational load.
### _Setup of Movement Sensors and Kalman Filter_
In this experiment, the in-suit motions of the human subject were measured by the APDM [17] inertial motion-capture system. The system comprises a suite of compact, lightweight inertial measurement units (IMUs) that can be worn by the subject (see Fig. 3-a). The system processes the raw data returned by the IMUs to produce the estimated joint angles of the subject as well as the orientation of each IMU in the world frame. The APDM sensors return data at a rate of 128 Hz (i.e., the sampling period \(\Delta t\) is 0.0078 s), and their base orientation measurement inaccuracy is 2.8\({}^{\circ}\).
Table II lists the noise standard deviations (SD) for the Kalman filter. The values are tuned based on the nominal noise levels provided by the sensors' manufacturers to ensure a reasonable convergence rate and final accuracy. Although the human model in AnyBody has 214 joints, the Kalman filter only utilizes the hip, knee, and ankle joints to estimate the global position of the human model.
### _Setup of Pressure Sensors at Shoulder-Suit Interface_
During all experiments, the human subject wore APDM IMUs and EOD suit together with pressure sensors (see Fig. 3). The pressure sensors were used to verify the interface load produced by the proposed modeling approach.
**Pressure sensor selection:** Pressure sensors developed by Novel Electronics Inc. were utilized to obtain the interface load at the top portion of the subject's left and right shoulders. We tested both Pliance and insole Pedar sensors, and chose the Pedar sensors over the Pliance sensors because of their higher accuracy in obtaining static and dynamic pressure measurements at the shoulder areas. This is mainly due to the Pedar sensors' concentrated measuring surfaces and more robust measurement range for highly concentrated moving loads. The Pedar sensor system contains two sensors. Each sensor covers an area of \(70\times 160\) mm\({}^{2}\), consists of 99 sensing units with a resolution of 5 kPa, and transmits data via Bluetooth.
**Pressure sensor placement:** In this study, the dynamic interface loads between the suit and the individual shoulders were collected. The left Pedar sensor was placed between the left pectoral and trapezius region (the shoulder composition of the clavicle and acromioclavicular joint), with the right sensor in the same region on the right-hand side [6]. The cables of the sensors were attached with a Velcro strap to the outside of the EOD suit after the suit was donned by the human subject [6]. The sensors were prevented from physically shifting during the tests through shoulder straps and kinesiology tapes applied directly to the subject's skin (see Fig. 3). Individual sensors were checked to ensure that no shifting occurred during the placement of the EOD suit on the body. During active use of the EOD suit, the overall weight of the suit is distributed not only between the two shoulder regions; some of the pressure is also taken up by the chest, arms, torso, and back [6].
**Pressure sensor calibration and re-zeroing:** To ensure measurement accuracy and repeatability during dynamic human subject movements, the pressure sensors were carefully
Fig. 5: Time lapse figures of the three types of subject movements: a) walking on the flat ground, b) walking upstairs, and c) walking downstairs.
calibrated using the Truth calibration device (developed by Novel Electronics Inc.). The device uniformly pressurizes the Pedar sensors, through several incremental steps, up to the maximum load the sensors can withstand. To remove the nonzero sensor reading caused by the pressure applied by the sensor anchoring mechanism (i.e., straps and tapes), the suit was taken off the subject (with the anchoring mechanism still on) every three movement trials to re-zero the reading.
**Interface load computation in AnyBody and through pressure sensing:** In AnyBody, we compute the resultant force from the shoulder area at each time step by directly summing the projections of the individual contact forces along the normal direction of the contact area. This approximation is reasonably accurate because the tangential forces are less than 10% of the normal forces in magnitude. In experiments, the proprietary software of the Pedar pressure sensing system sums all the forces returned by the sensing units to provide the resultant force at each time step.
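A minimal sketch of this per-point summation, assuming the contact forces and unit normals are exported as arrays (all names are ours):

```python
import numpy as np

def shoulder_resultant(forces: np.ndarray, normals: np.ndarray) -> float:
    """forces, normals: (N, 3) arrays of contact forces and unit normals."""
    # Sum of per-point projections onto the local normals; tangential
    # components (<10% of the normal forces in magnitude) are neglected.
    return float(np.sum(np.einsum("ij,ij->i", forces, normals)))
```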
### _Validation of Simulated Shoulder-Suit Interface Loads_
#### III-E1 Results of Shoulder-Suit Interface Loads Obtained through Experimental Pressure Sensing
To evaluate the accuracy of the proposed modeling approach in reflecting the shoulder-suit interface loads, we first used experimental pressure sensing to obtain relatively accurate approximations of the true interface loads. As the output of the pressure sensor is the resultant force rather than the pressure distribution, we use the resultant force from both the experiment and the AnyBody simulation to validate the simulation results.
Figure 6 a) and c) display the interface loads at the subject's shoulders obtained via pressure sensing. These two plots show relatively significant spikes (i.e., outliers) that intermittently appear within short periods of time. Furthermore, in Fig. 7, the average interface forces of all trials (with outliers retained) are graphed for the left and right shoulders during upstairs and downstairs walking. The figure shows that the right-shoulder data from both experiments (upstairs and downstairs walking) are closely correlated with each other, whereas the left-shoulder data exhibited more noise.
The outliers correspond to the unexpected spikes in the experimental data due to sudden impacts detected by the sensors. Such a sudden impact can be an impact between the shoulder and the suit induced by foot-landing events. Other causes of the faulty data could be the non-symmetrical dimensions of the left and right shoulder regions as well as sensor bending uncorrelated with the physical tasks.
These outliers were consistently removed using the interquartile range rule as part of the data analysis and interpretation process [24]. The data were filtered by determining the range within which 95 percent of the results fell and using the standard deviation to characterize the spread. All force readings falling more than 1.5 standard deviations from the mean (\(\mu\pm 1.5\sigma\)), or data that fell outside the 95\({}^{th}\) percentile, were considered noise.
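A hedged sketch of this filtering rule in Python, assuming `loads` holds the force readings of one trial:

```python
import numpy as np

def remove_outliers(loads, k=1.5):
    """Drop readings more than k standard deviations from the trial mean."""
    loads = np.asarray(loads, dtype=float)
    mu, sd = loads.mean(), loads.std()
    return loads[np.abs(loads - mu) <= k * sd]
```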
Figure 6 b) and 6 d) show the interface loads at the shoulders after outlier removal. These plots indicate that the pressure distribution along the shoulder projected from the EOD suit was consistent. Specifically, Trials #2 and #3 were the most consistent with each other.
In Fig. 8, the bars (blue, yellow, grey, and red) indicate the average values of the pressure data. The upper and lower whiskers indicate the maximum and minimum pressure values, respectively. The figure shows that the variation of the standard deviation was small for the right shoulder in both upstairs and downstairs walking, while the left-shoulder standard deviation exhibited more variability. This indicates that the results for the right shoulder were relatively more accurate.
#### III-E2 Results of Simulated Shoulder-Suit Interface Loads
Figure 9 displays the interface loads obtained based on the proposed modeling framework and pressure sensing for three movement types (i.e., walking on the flat terrain, upstairs, and downstairs). The figure indicates that the average values
Fig. 6: Interfacial shoulder loads when walking downstairs: a) left-shoulder with outliers; b) left-shoulder without outliers; c) right-shoulder with outliers; and d) Right-shoulder without outliers.
Fig. 7: Average interface forces (with outliers retained) at the shoulder-suit contact regions for all trials of upstairs and downstairs walking.
and overall trends of the simulated and experimental loads are relatively close. This is confirmed by the RMS errors of all trials for the three motions as given in Table IV.
However, the errors between the simulated and experimental interface loads appear to be significant at the left shoulder during upstairs walking. This large error could be caused by the relatively inaccurate reading of the pressure sensors at the left shoulder during the trial, as discussed in Section III-E1. This is also supported by two observations from the figure: a) the experimental forces at the left and right shoulders during upstairs walking show relatively large discrepancies, and b) the simulated and experimental forces at the right shoulder show relatively close correspondence.
Moreover, for flat-terrain walking, the experimental and simulated forces at the right shoulder have an offset of approximately 30 N, while there is no obvious offset at the left shoulder. In particular, the experimental interaction forces at the right shoulder have a nearly constant offset compared with those at the left shoulder. This implies that the interface force error between simulations and experiments for the right shoulder could be caused by relatively inaccurate pressure sensing in that displayed trial.
### _Discussion of Validation Results_
From the inverse dynamics results, we noticed that although the trends match well, the magnitudes exhibit a discrepancy between the experimental and simulation data. We investigated this issue and found a few potential causes. First, the pressure sensor on the shoulder only covers a portion of the shoulder, so interactions outside the sensor coverage cannot be detected. Also, the pressure sensor is only capable of detecting the normal force between the shoulder and the EOD suit; the shear force cannot be detected. Finally, there exists a geometric discrepancy between the actual and modeled suits, which may cause inaccurate force computation.
## IV Conclusion
This paper has introduced a simulation-based modeling framework that computes the interaction forces between an EOD suit and its human wearer during different mobility tasks. The framework comprised three main components, which are: a) 3-D modeling of the suit for accurately and efficiently capturing its physical properties, b) movement state estimation for producing the wearer's in-suit motions based on data returned by wearable inertial motion-capture sensors, and c) inverse dynamics analysis based on the simulated human-suit model and estimated human movement. The effectiveness of the framework in producing accurate human-suit interaction loads during different wearer movements was experimentally validated through the comparison with the loads measured by commercial pressure sensors.
To improve the accuracy of the interface loads produced by the proposed modeling framework, we will increase the fidelity of the proposed suit model by including the suit's helmet and groin components, and will validate the framework through movement experiments with a larger number of human subjects and an even wider variety of human movements. To obtain more complete and reliable ground-truth data for result validation, we will utilize pressure sensors with customized shapes to measure the contact loads at multiple critical locations on a wearer (e.g., shoulders, thighs, and back) and to ensure sufficient sensor coverage at those locations. More importantly, we will investigate how the AnyBody simulation results could help improve EOD suit designs.
## V Acknowledgements
This project is sponsored by the Department of the Army, U.S. Army DEVCOM Soldier Center (SC). Distribution Statement: Approved for public release; distribution is unlimited (PAO #: PR2022_14825). Thank you to the NERVE
Fig. 8: The average values and the ranges of variations for all trials of upstairs and downstairs walking at left (L) and right (R) shoulders.
Fig. 9: Shoulder-suit interaction forces computed based on AnyBody inverse dynamics analysis for three subject motions. The green shaded area indicates that the human subject stands quietly with two feet on the ground. The yellow (and blues) shaded areas correspond to the periods during which only the right (and left) foot contacts the ground.
Center, TRACE Lab, and SDASL at UMass Lowell for providing the test course and research assistance.
|
2306.10961 | Extensions to the Guaranteed Service Model for Industrial Applications
of Multi-Echelon Inventory Optimization | Multi-echelon inventory optimization (MEIO) plays a key role in a supply
chain seeking to achieve specified customer service levels with a minimum
capital in inventory. In this work, we propose a generalized MEIO model based
on the Guaranteed Service approach to allocate safety stock levels across the
network at the lowest holding cost. This model integrates several existing and
some novel features that are usually present in pharmaceutical multi-echelon
supply chains into a single model: review periods, manufacturing facilities,
hybrid nodes (nodes with both internal and external demand), minimum order
quantities (MOQ), and different service level performance indicators (fill rate
and cycle service levels). We include a polynomial regression to approximate
fill rates as a possible target measure to set safety stocks. To improve
efficiency, we propose a nonlinear programming model to support decision
making, which can be reformulated as a Quadratically Constrained Program (QCP),
which leads to order of magnitude reductions in computational time. The
performance of the model is evaluated by solving illustrative and real-world
cases, and is validated with simulation. | Victoria G. Achkar, Braulio B. Brunaud, Hector D. Perez, Rami Musa, Carlos A. Mendez, Ignacio E. Grossmann | 2023-06-19T14:25:30Z | http://arxiv.org/abs/2306.10961v1 | Extensions to the Guaranteed Service Model for Industrial Applications of Multi-Echelon Inventory Optimization
###### Abstract
Multi-echelon inventory optimization (MEIO) plays a key role in a supply chain seeking to achieve specified customer service levels with a minimum capital in inventory. In this work, we propose a generalized MEIO model based on the Guaranteed Service approach to allocate safety stock levels across the network at the lowest holding cost. This model integrates several existing and some novel features that are usually present in pharmaceutical multi-echelon supply chains into a single model: review periods, manufacturing facilities, hybrid nodes (nodes with both internal and external demand), minimum order quantities (MOQ), and different service level performance indicators (fill rate and cycle service levels). We include a polynomial regression to approximate fill rates as a possible target measure to set safety stocks. To improve efficiency, we propose a nonlinear programming model to support decision making, which can be reformulated as a Quadratically Constrained Program (QCP), which leads to order of magnitude reductions in computational time. The performance of the model is evaluated by solving illustrative and real-world cases, and is validated with simulation.
\({}^{a}\) Universidad Nacional del Litoral, Argentina
\({}^{b}\) INTEC (UNL - CONICET), Argentina
\({}^{c}\) Johnson & Johnson, USA
\({}^{d}\) Carnegie Mellon University, USA
Keywords: inventory, optimization, guaranteed-service, multi-echelon
## 1 Introduction
On-time fulfilment of customer demand is critical in today's customer-centric supply chains. Achieving this goal depends in great part on the inventory levels and policies that are set along a supply chain. However, efficient inventory control is particularly challenging when customer demand is uncertain and retailers may not know the exact size of an order in advance. Other sources of uncertainty may increase the problem complexity, such as lead time variability. Moreover, the decision at one stage impacts inventory decisions at other stages. The intent of safety stock allocation is to determine an overall strategy for deploying inventory levels across the supply chain in order to buffer it against sources of uncertainty (Graves & Willems, 2003). To overcome these challenges, having a safety stock serves to mitigate the risk of stock-outs in the system. The purpose is to allocate safety stocks to meet
customer service levels, while minimizing the total capital tied up in inventory throughout the supply chain, in contrast to single-echelon inventory optimization (SEIO), which seeks to independently minimize cost at each echelon. MEIO has enabled companies to reduce their inventories up to 30% and improve item availability up to 5% by supporting supply chain segmentation, and providing a better balance between lead time, inventory and service under uncertainty (Gartner, 2016). From an optimization perspective, making decisions about inventory in multi-echelon systems is a challenging task because the objective functions usually involve non-linearities, and decision variables affect more than one echelon.
MEIO approaches have been studied in the literature for allocating safety stock in supply chains. De Kok et al. (2018) present a general typology and review stochastic MEIO models in which they classify the extensive research on multi-echelon inventory management under model assumptions, research goals, and different applied methodologies. They state that multi-echelon inventory systems are still a very active area of research because of their complexity and practical relevance. More recently, Goncalves et al. (2020) present a systematic literature review describing the history and trends regarding the safety stock determination from an operations research perspective. They also highlight that the number of contributions to MEIO has seen a significant increase from the year 2005 onwards, and they list many potential directions and trends for future research. There are two widely known approaches in MEIO to determine safety stock levels: the stochastic-service model (Clark and Scarf, 1960) and the guaranteed-service model (GSM), introduced by Simpson (1958). Detailed comparisons between them can be found in De Smet et al. (2019), Graves and Willems (2003), and Simchi-Levi and Zhao (2012).
The objective of this work is to develop a MEIO model based on the GSM approach that accounts for issues and characteristics arising in industrial practice in order to provide an improved representation to support strategic decision-making. Many authors (e.g. Inderfurth, 1993; Minner, 1998; Eruguz et al., 2014) have developed extensions to the GSM, but to the best of our knowledge, nobody has developed a model that can achieve optimum safety stocks on complex supply chains while integrating all the features typical of industrial environments presented in this work. We also propose strategies to obtain efficient solutions.
This paper addresses the problem of a multi-echelon, multi-product supply chain with both demand and lead time uncertainty. First, demand can occur at any node in the network. This can result in hybrid nodes that have both dependent and independent demands. To the best of our knowledge, these characteristics, which represent the common operation mode of many multi-echelon systems, have not been addressed before, as most of the literature on supply chain inventory management considers only external demand at the final nodes of the network. Second, manufacturing plants can be placed at any location in the network, enabling the manufacture of any desired good at those locations. This feature allows generalizing and managing larger supply chains that have grown in their vertical integration. Capturing wider networks can significantly improve the inventory decision-making process across
process supply chains, as is seen in those industries that produce both raw materials and finished goods. Third, the fill rate, which is the most widely applied service level measure in industry (Teunter et al., 2017), can be used as an alternative customer service indicator when setting safety stock levels. We adapt the fill rate constraint (Axsater, 2006; Chopra & Meindl, 2013) to include hybrid nodes, and we propose a quadratic regression to estimate the equivalent Cycle Service Level (CSL) when fill rate targets are used in the model. In addition, Minimum Order Quantities (MOQ) for replenishment orders are explicitly modeled. Finally, the resulting nonconvex Nonlinear Programming (NLP) model is reformulated as a Quadratically Constrained Problem (QCP) by exploiting the structure of the constraints of the base model. Several computational examples for illustrative and industrial systems are presented to illustrate the application of the proposed model and its resulting improved computational performance.
The outline of the paper is as follows. The literature review and background with the basic concepts of the GSM are presented in the following subsection. The problem statement is given in Section 2, followed by the model formulation in Section 3. Section 4 details the application of the model on illustrative and real-world case studies. We conclude this article in Section 5. A Nomenclature section is presented at the end to facilitate the model understanding. A Supporting Information Section is included to provide the data input used in the real case study and detail additional discoveries relating to the impact of MOQ on service metrics.
**1.1 Literature and Background of the Guaranteed-service Model**
The present paper relies on the GSM to optimize safety stocks. Although this approach was developed more than 50 years ago, 80% of the existing works on this topic have been published in the last two decades (Eruguz et al., 2016). The first multi-echelon serial system for the GSM was proposed by Simpson (1958), and it was then extended to deal with different network topologies (Graves & Willems, 2000; Inderfurth, 1993; Inderfurth & Minner, 1998; Minner, 1998). Later, Magnanti et al. (2006) developed a guaranteed-service approach for general acyclic networks. The main idea of the classic guaranteed-service approach is that if a customer places an order of size \(d_{j}(t)\) on node \(j\) at time \(t\), this order will be fulfilled by time \(t+S_{j}\) (Graves & Willems, 2000), with \(S_{j}\) being the guaranteed-service time of node \(j\). Moreover, each node \(j\) receives a service commitment from its upstream node \(i\in J\), called the inbound service time \(SI_{j}\) (\(SI_{j}=S_{i}\)), and has an order processing time or lead time of \(LT_{j}\). This lead time represents the time until the outputs are available to serve the demand, including material handling and transportation times. Both \(SI_{j}\) and \(LT_{j}\) must be taken into account to define \(S_{j}\). The Net Lead Time (NLT) is a concept that links them and represents the period of exposure that is not covered by the guaranteed-service time and must be covered with safety stock. The \(NLT\) of node \(j\) is defined as \(NLT_{j}=SI_{j}+LT_{j}-S_{j}\). Figure 1 displays examples for different values of \(S_{j}\). The first example (1) is the case where node \(j\) promises its customer a guaranteed-service time equal to the worst-case replenishment time (\(S_{j}=SI_{j}+LT_{j}\)). This node places an order with its predecessor every time it receives an order from its customer, then waits for the upstream node to process its order before processing the order, without storing any inventory. In this case, \(NLT_{j}=0\). On the other hand, if the customer bounds the maximum possible service time (\(S_{j}\leq maxS_{j}\)) and this maximum is less than the worst-case replenishment time (\(maxS_{j}\leq SI_{j}+LT_{j}\)), node \(j\) must satisfy customer demand in less time than is required to place an order with the supplier and process it. Therefore, \(NLT_{j}>0\), meaning that there is a period of time that must be covered with safety stock, as shown in cases (2) and (3) in Figure 1(A).
predecessor every time it receives an order from its customer, then it waits for the upstream node to process its order before processing the order without storing any inventory. In this case, \(NLT_{j}=0\). On the other hand, if the customer bounds the maximum possible service time (\(S_{j}\leq maxS_{j}\)) and this maximum is less than the worst-case replenishment time (\(maxS_{j}\leq SI_{j}+LT_{j}\)), node \(j\) should satisfy customer demand in less time that the required to place an order on the supplier and process it. Therefore, \(NLT_{j}>0\), meaning that there is a period of time that should be covered with safety stock, as shown in cases (2) and (3) in Figure 1(A).
The objective function in the GSM is the minimization of the total holding cost. The holding cost at a given echelon, \(h_{j}\), is multiplied by the safety stock at that echelon. Assuming a normal distribution to represent external demand patterns, the safety stock of echelon \(j\) is calculated as \(SS_{j}=k_{j}\,\sigma_{j}\sqrt{NLT_{j}}\), where \(k_{j}\) is the safety stock factor that reflects the percentage of time that the safety stock covers the demand variation, and \(\sigma_{j}\) represents the demand standard deviation. More details on the safety stock formula can be found in Eruguz et al. (2016). The aim of the GSM is to define the values of \(SI_{j}\) and \(S_{j}\) in order to reduce the safety stock holding cost. As shown in Figure 1(B), the guaranteed-service time defined for one node impacts the downstream stages in the network, because the guaranteed-service time of a node becomes the inbound service time of its downstream successors (\(SI_{k}=S_{j}\) for a successor \(k\) of \(j\)). In case (1), avoiding safety stock at node \(j\) yields large inventory levels at the successor stage \(k\) (proportional to \(NLT_{k}\)), while in case (2) the inventory level at \(k\) is reduced by holding stock at \(j\).
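As a concrete illustration of the \(NLT\) and safety-stock definitions above, the following minimal sketch computes both quantities for a single stage; all numerical values are illustrative.

```python
import math

def net_lead_time(si: float, lt: float, s: float) -> float:
    """NLT_j = SI_j + LT_j - S_j (days of demand exposure)."""
    return si + lt - s

def safety_stock(k: float, sigma: float, nlt: float) -> float:
    """SS_j = k_j * sigma_j * sqrt(NLT_j); zero when no exposure remains."""
    return k * sigma * math.sqrt(max(nlt, 0.0))

# Example: SI = 5, LT = 3, and a promised S = 2 leave 6 days to cover.
print(safety_stock(k=1.65, sigma=40.0, nlt=net_lead_time(5.0, 3.0, 2.0)))
```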
A key assumption of the basic GSM is that demand is bounded. If demand in a certain period exceeds the bound, it is assumed that other extraordinary measures such as overtime production are used to satisfy excess demand. Moreover, it is assumed that each stage of the supply chain operates
Figure 1: (a) Examples of different values for \(SI_{j}\), \(S_{j}\), and \(NLT_{j}\): 1) pull scenario, 2) intermediate scenario, and 3) push scenario. (b) Guaranteed service approach in multi-echelon supply chains.
under an (_R_,_S_) inventory policy with a base-stock level. The demand is independent and identically distributed, following a normal distribution. Lead times are constant, and independent demand only occurs at the final nodes in the network. In addition, the service times at the initial and final nodes are inputs. Finally, each plant has a coefficient that represents the Bill of Materials (BOM) for product transformation and depends on location-location relationships (network arcs).
Over the years, many authors have worked on extending the original GSM assumptions to enable real-world supply chain characteristics to be captured, as presented in the survey by Eruguz et al. (2016). The authors in this survey summarize several extensions made to the basic model of Graves and Willems (2000). The main assumptions that were relaxed are related to the external demand, lead time variability, capacity constraints, service time customization, alternative replenishment policies, review periods, and extraordinary measures. Moreover, other authors have presented works about integrating the classic GSM with other activities or approaches. You and Grossmann (2008) develop models and algorithms that simultaneously consider inventory optimization and supply chain network design under demand uncertainty. In a subsequent work, these authors present an integrated multi-echelon supply chain design and inventory management model under uncertainty using the GSM (You and Grossmann, 2009). Klosterhalfen et al. (2013) propose an integrated hybrid guaranteed-service and stochastic-service approach for inventory optimization, that allows selecting the approach that minimizes costs. Recently, the work by Ghadimi (2020) presents a model for joint optimization of production capacity and safety stocks under the GSM approach. Bendadou et al. (2021) analyze the impact of merging activities in a supply chain under the GSM.
In addition to the extensions mentioned in (Eruguz et al., 2016), the inclusion of MOQ and fill rate as a service level measure have significant importance for representing supply chain dynamics and must be accounted for. Chopra and Meindl (2013) define the Cycle Service Level (CSL) as the fraction of replenishment cycles that end with all the customer demand being met, where the replenishment cycle is the interval between two successive replenishment deliveries. On the other hand, the product fill rate (_fr_) is the fraction of product demand that is fulfilled on time from the product in inventory. Chopra and Meindl (2013) describe how to introduce the fill rate given a continuous review inventory policy with a formula that links both indicators to obtain the equivalent CSL for single echelon networks. They also describe how a large MOQ yields larger fill rates. Silver and Bischak (2011) present an exact fill rate in a periodic review base stock system under normally distributed demand, and they state that the fill rate depends on four parameters, safety factor, coefficient of variation, review period, and lead time, but not on the minimum order quantity. De Smet et al. (2019) combine stochastic lead times with batching decisions for a distribution network based on the work of Humair et al. (2013) and calculate fill rates with an iterative procedure. More recently, Peeters (2020) accounts for MOQ to set safety stock levels and review periods integrated with stochastic lead times, based on the approach proposed by Humair et al. (2013), and using the Cycle Service Level (CSL) as a customer service measure.
In summary, this paper extends the GSM approach by including four main contributions. First, we address a more general supply chain structure that is frequently found in the pharmaceutical industry, with hybrid nodes and differentiated service times for each type of demand. We should note that this type of structure has not been considered previously in the literature, and it has great relevance for solving industrial problems. Second, we combine new (e.g., hybrid nodes) and existing features (e.g., stochastic lead times, review periods, fill rate) into a single model, which requires adaptations to allow their integration, such as extending the stochastic lead time approach of Inderfurth (1993) or the approach of Eruguz et al. (2014) to a more generalized network. Third, we propose a new approximation method to include the fill rate as a target, using polynomial regression and adapting the existing formula in Chopra & Meindl (2013) to the GSM (_R_,_S_) inventory policy with minimum order quantities, multi-echelon networks, and hybrid nodes. A high R-squared value is obtained from the regression, meaning that the approximation has a good fit and is considered reliable. Finally, we introduce an exact reformulation of the NLP problem as a QCP to improve the computational efficiency by several orders of magnitude. This reformulation is equivalent to the original NLP problem, yielding the same optimal solution and thus guaranteeing the same solution quality. The proposed model is tested on examples of illustrative and industrial systems and provides computationally efficient solutions. The simulation of the results shows the accuracy of the proposed model in meeting the service levels in the multi-echelon system under study.
## 2 Problem Statement
We are given a supply chain with a fixed design for a set of materials \(p\in P\) that can be either raw materials or finished goods. The locations \(j\in J\) belong to a set of plants, distribution centers, and retailers that can store different materials. Stock holding costs are incurred at all nodes; their unit costs are given. We assume uncertain demand and lead times. The objective is to determine the guaranteed-service times for each material at each location, and consequently how much safety stock to maintain at each location to minimize the total holding costs and satisfy a specified customer service level.
Unlike most literature on the topic, this work does not assume there is a final customer demand zone. In practice, it is usual that large hubs have an important external customer that places orders directly to that node. On the other hand, we assume that external or independent demand for any material \(p\) can be placed at any node \(j\) in the network. Each node \(j\) can have an independent, normally distributed demand of material \(p\) with mean \(\mu_{Ijp}\) and standard deviation \(\sigma_{Ijp}\), and/or an internal or dependent demand to satisfy replenishment orders from downstream nodes, with mean \(\mu_{Djp}\) and standard deviation \(\sigma_{Djp}\). Demand is propagated upstream considering the risk pooling assumptions described in You and Grossmann (2009). A node that satisfies both dependent and independent demands is called a hybrid node, and an example of it is presented in Figure 2 on the left side.
Regarding the network topology, we assume divergent networks, as shown in Figure 2. In other words, a node that holds a material \(p\) can only receive this material from a single node and can distribute
it to one or more locations, as is usual in finished goods supply chains. The same node can be supplied with another material \(q\in P\) from another location, but that location should be the only supplier of \(q\) for that node. The route that each material follows, as well as the lead time distributions between two connected nodes, are given. Lead times are assumed to follow independent normal distributions, \(LT_{jp}\sim N(lt_{jp},\ \sigma_{LT_{jp}})\). They represent the delay that is under the responsibility of node \(j\), including transportation, material handling, and other processing times until the material is ready to be shipped (i.e., fulfilled).
Plants can be located at any node. Plant nodes can hold stock of both raw materials and finished goods. We introduce a general BOM based on a material-material relation, instead of a location-location relation as in Graves and Willems (2000). The value \(\phi_{pq}\) determines the amount of material \(p\) required to produce a unit of material \(q\), regardless of the plant location. On the right side of Figure 2, there is an example of how independent, dependent and total demand mean and standard deviation of demand are propagated, including a Finished Good (FG) and a Raw Material (RM).
We assume an (\(R\),\(S\)) inventory policy, with nested review periods (\(r_{jp}\)) as inputs to the model and common review days (Eruguz et al., 2014). Furthermore, a minimum order quantity \(moq_{jp}\) may be enforced on replenishment orders. This means that if location \(j\) needs to place an order, it will need to order at least the \(moq_{jp}\), which may force it to receive an amount larger than required.
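A minimal sketch of a single review decision under this policy, with the MOQ possibly forcing an order larger than the base-stock shortfall (the names are ours):

```python
def review_order(inventory_position: float, base_stock: float, moq: float) -> float:
    """Order placed at a review epoch of the (R, S) policy with an MOQ."""
    shortfall = base_stock - inventory_position
    if shortfall <= 0:
        return 0.0                 # already at or above the base-stock level
    return max(shortfall, moq)     # the MOQ may force over-ordering

# Example: 12 units short of S, but the supplier imposes moq = 50.
print(review_order(inventory_position=88.0, base_stock=100.0, moq=50.0))
```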
The network topology and the modes of transportation are assumed to be fixed. Hence transportation costs are not accounted for in this study. The service time of the most upstream nodes in the network and the maximum service time of each final node are given. We assume that information about demand is shared in the network and ordering decisions are decentralized; therefore, each node makes its own replenishment decisions and has no delay in ordering. For each node, the safety factor \(k\) related to the CSL, which is represented by the standard normal distribution is also given, reflecting the percentage of time that the safety stock covers the demand variation. Alternatively, the modeler can also ask for a fill rate to be considered as a target service measure.
Figure 2: Example of a hybrid node and a divergent network.
Model formulation
The multi-echelon safety inventory optimization problem can be formulated as a nonlinear program (NLP) that deals with the safety inventory planning in a given supply chain. The model proposed in Graves and Willems (2000) is used as a basis, and all sets, parameters, and variables of this model are presented in the Nomenclature section. First, we assume that external demand on each node is a random variable defined as in Graves and Willems (2000). We assume that the external demand is normally distributed, \(D_{Ijp}\sim N(\mu_{Ijp},\sigma_{Ijp})\), and its mean and standard deviation are propagated upstream to define internal demand means and standard deviations throughout the network. If there are stages with more than one successor, we require a decision on how to combine the demand bounds for the downstream stages to obtain a relevant demand bound for the upstream stage to position the safety stock (Graves and Willems, 2000). There will be a relative reduction in variability as we combine demand streams due to risk pooling. Therefore, the dependent demand parameters for material \(p\) at node \(j\) are obtained by converting the demand parameters for all materials \(q\) that require \(p\) as an input at all successor nodes \(k\). The conversion is done via the Bill of Materials (BOM) \(\phi_{pq}\) as a pre-processing step. The set \(\Phi\) contains all valid material transformations (from material \(p\) to material \(q\)), i.e., the raw material \(p\) is required for obtaining the finished good \(q\). \(A\) is a set with indices \((i,j,p)\) indicating that there is a feasible route for material \(p\) from node \(i\) to node \(j\). Note that \(q=p\) and \(i\neq j\) if it is a distribution link (\(i\) to \(j\)) of the same product \(p\), and \(q\neq p\) and \(i=j\) if node \(j\) is a plant location that produces \(q\) from \(p\). We assume that the total demand at a given node, \(D_{jp}\), is the sum of the orders placed by immediate successors, \(D_{Djp}\), plus any external orders, \(D_{Ijp}\), i.e., \(D_{jp}=D_{Ijp}+D_{Djp}\). As the random variables are assumed to be independent from each other, the total demand is a linear combination of normally distributed variables and is also normally distributed, \(D_{jp}\sim N(\mu_{jp},\sigma_{jp})\), as stated in Graves and Willems (2000) and You and Grossmann (2008). Similarly, the dependent demand \(D_{Djp}\sim N(\mu_{Djp},\sigma_{Djp})\) is also a linear combination of the demands of successors. Therefore, the total mean demand is the sum of the mean demands, as shown in Equation (1), and the total demand standard deviation is calculated as in Equation (2). In this work, we include the first term in both equations, referring to the independent demand mean and standard deviation that can be placed at any node. Note that the second term in both equations is equivalent to the dependent demand mean and deviation, that is, \(\mu_{Djp}\) and \(\sigma_{Djp}\). We assume pooling of both types of demand parameters for propagation purposes. For nodes where material \(p\) is distributed rather than transformed into \(q\), \(p=q\) and \(\phi_{pq}=1\). For manufacturing nodes where material \(p\) is transformed into material \(q\), \(\phi_{pq}\) equals the amount of \(p\) consumed per unit of \(q\).
\[\mu_{jp}=\mu_{Ijp}+\sum_{(p,q)\in\Phi}\sum_{(j,k,p)\in A}\phi_{pq} \mu_{kq} \qquad\forall\,j\in J,p\in P_{j} \tag{1}\] \[\sigma_{jp}=\sqrt{\sigma_{Ijp}^{2}+\sum_{(p,q)\in\Phi}\sum_{(j,k, q)\in A}\phi_{pq}^{2}\sigma_{kq}^{2}} \qquad\forall\,j\in J,p\in P_{j} \tag{2}\]
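As a hedged illustration of this pre-processing step, the sketch below propagates demand means and standard deviations upstream through the BOM; the data structures and the ordering assumption are ours.

```python
import math

def propagate_demand(pairs, succ, mu_I, sigma_I):
    """Apply Eqs. (1)-(2) over (node, product) pairs.

    pairs   : (j, p) pairs ordered so that every successor pair comes first
    succ    : maps (j, p) -> list of (k, q, phi_pq) downstream conversions
    mu_I, sigma_I : independent-demand mean / std. dev. per (j, p)
    """
    mu, sigma = {}, {}
    for j, p in pairs:
        mu[j, p] = mu_I.get((j, p), 0.0) + sum(          # Eq. (1)
            phi * mu[k, q] for k, q, phi in succ.get((j, p), []))
        var = sigma_I.get((j, p), 0.0) ** 2 + sum(        # Eq. (2)
            phi ** 2 * sigma[k, q] ** 2 for k, q, phi in succ.get((j, p), []))
        sigma[j, p] = math.sqrt(var)
    return mu, sigma
```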
### 3.1 Constraints
The first set of constraints bounds the guaranteed-service time variables. Equation (3) fixes the inbound service time for the starting (source) nodes of the network, \(J^{0}\), where \(si_{jp}^{0}\) is a given input. Equation (4) links the inbound guaranteed-service time \(SI_{jp}\) to the guaranteed-service time \(S_{iq}\) of its upstream node. If there is a maximum accepted delay for a material at a node, inequality (5) is active. In addition, Equation (6) fixes the maximum accepted service time \(S_{E\,jp}\) exclusively for external demand nodes. This service time does not affect the inbound service time of downstream nodes, because it relates to safety stock dedicated only to external customers.
\[SI_{jp}=si_{jp}^{0}\qquad\forall\,j\in J^{0},\,p\in P_{j} \tag{3}\]
\[SI_{jp}\geq S_{iq} \forall\,(i,j,p)\in A,(q,p)\in\Phi,p\in P_{j} \tag{4}\]
\[S_{jp}\leq maxS_{jp} \forall\,j\,\in J,p\in P_{j} \tag{5}\]
\[S_{E\,jp}\leq maxSE_{jp}\qquad\forall\,j\in J^{I},\,p\in P_{j} \tag{6}\]
#### 3.1.1 Manufacturing locations
In this work, a manufacturing site can store both raw materials and finished goods at the same node \(j\). To the best of our knowledge, this representation of the manufacturing site has not been addressed before. Although the notation involves only one node, the plant can equivalently be represented as two artificial nodes connected by an arc that represents the manufacturing time, as depicted in Figure 3. If it is required, for example, that the safety stock of raw materials be enough to satisfy production demand immediately from stock, a maximum service time of zero can be imposed on the raw materials. On the other hand, the finished goods can be constrained similarly if no stock of them is allowed at the manufacturing node. For this special case, \(A\) represents an enabled production process to obtain product \(p\) at node \(j\). A production lead time, either constant or normally distributed with parameters \(lt_{jp}\) and \(\sigma_{LT\,jp}\), represents the manufacturing time. If both external and internal customers can place orders of finished goods at the plant, there are dedicated safety stocks for each type of demand. The demand parameters for raw materials are defined by Eqs. (1) and (2), with the corresponding BOM coefficients \(\phi_{pq}\). For example, if raw material \(p1\) at plant \(j\) is used to produce both materials \(q1\) and \(q2\), the mean demand for this raw material is \(\mu_{j,p1}=\phi_{p1,q1}\,\mu_{j,q1}+\phi_{p1,q2}\,\mu_{j,q2}\) and the standard deviation is \(\sigma_{j,p1}=\sqrt{\phi_{p1,q1}^{2}\,\sigma_{j,q1}^{2}+\phi_{p1,q2}^{2}\,\sigma_{j,q2}^{2}}\).
#### 3.1.2 Stochastic lead times
Our incorporation of stochastic lead times into the GSM is based on the approach of Inderfurth (1993). In that work, a serial network is proposed with external demand at the final nodes, called "demand nodes", while upstream nodes are called "non-demand nodes". Inderfurth (1993) proposes that the safety stock at a demand node combines two random variables: the independent demand \(D_{I\,jp}\sim N(\mu_{I\,jp},\sigma_{I\,jp})\) and the lead time \(LT_{jp}\sim N(lt_{jp},\sigma_{LT\,jp})\). The service process is usually arranged in such a way that customer demand is fulfilled as soon as the fluctuating final-stage lead time allows it; in this planning situation the lead times are therefore flexible. In our work, we propose to replace the mean lead time with the value of the net lead time variable. Hence, the safety stock for demand nodes can be calculated as:
\[SS_{I\,jp}=k_{jp}\sqrt{NLT_{jp}\,\sigma_{I\,jp}^{2}+\mu_{I\,jp}^{2}\,\sigma_{LT\,jp}^{2}}\qquad\forall\,j\in J^{I},\,p\in P_{j} \tag{7}\]
where \(SS_{I\,jp}\) represents the safety stock level needed to satisfy independent demand. For upstream stages, the approach of Inderfurth converts the stochastic lead time into a deterministic planned lead time, to be consistent with integrated multi-level production planning in MRP systems. Therefore, \(\widehat{lt}_{jp}=lt_{jp}+k_{LT\,jp}\,\sigma_{LT\,jp}\), where \(k_{LT\,jp}\) is the safety factor associated with the probability that the random lead time does not exceed the planned lead time \(\widehat{lt}_{jp}\). Thus, using fixed planned lead times means that each internal demand is satisfied after exactly \(\widehat{lt}_{jp}\) periods, and the expected safety stock is:
\[SS_{D\,jp}=k_{jp}\,\sigma_{D\,jp}\sqrt{SI_{jp}+\widehat{lt}_{jp}-S_{jp}}\qquad\forall\,j\in J^{D},\,p\in P_{j} \tag{8}\]
Figure 3: Representation of a manufacturing plant with optional safety stock for raw materials and finished goods.

Note that in this case the pipeline inventory, \(\mu_{jp}\widehat{lt}_{jp}\), differs from the deterministic case. In our work, we propose that the safety stock of each hybrid node is the sum of the safety stocks for independent and dependent demand, \(SS_{jp}=SS_{I\,jp}+SS_{D\,jp}\). Hence, at each node we define one safety stock to satisfy downstream orders and another for external orders. In general, these inventories are at the same location but must be dedicated to each type of demand, which limits pooling at that location. In practice, the safety stock can be treated as a whole to satisfy demand, as long as this does not cause a stockout for other customers at the same location.
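For concreteness, the two dedicated safety stocks of a hybrid node follow directly from Eqs. (7) and (8); the sketch below is our own illustration, and all numeric inputs are hypothetical.

```python
import math

def ss_independent(k, nlt, sigma_i, mu_i, sigma_lt):
    """Eq. (7): safety stock covering external demand under a stochastic lead time."""
    return k * math.sqrt(nlt * sigma_i ** 2 + mu_i ** 2 * sigma_lt ** 2)

def ss_dependent(k, si, s, lt, k_lt, sigma_lt, sigma_d):
    """Eq. (8), with the planned lead time lt_hat = lt + k_lt * sigma_lt."""
    lt_hat = lt + k_lt * sigma_lt
    return k * sigma_d * math.sqrt(si + lt_hat - s)

# Hybrid node: the total safety stock is the sum of the two dedicated stocks.
ss_total = (ss_independent(k=1.88, nlt=4.0, sigma_i=119665, mu_i=162379, sigma_lt=0.3)
            + ss_dependent(k=1.88, si=0.0, s=0.0, lt=1.0, k_lt=1.88,
                           sigma_lt=0.3, sigma_d=50000))
```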
#### 3.1.3 Review periods
The original GSM assumes that review periods are common to all stages and that the lead time includes any waiting and processing time at the given stage (Graves & Willems, 2000). Inderfurth (1993) assumes that the final-stage lead time additionally contains the length of the review period. In the present work, we introduce a more detailed definition of how review periods enter the net lead time. Equations (7) and (8) account for lead time variability, but they do not specify what the mean lead time \(lt_{jp}\) includes. We propose to make this explicit by separating the review period \(r_{jp}\) from the lead time parameter \(lt_{jp}\). We assume nested review periods with common review days, as in the work of Eruguz et al. (2014), and that a replenishment order is ready to satisfy demand in its period of arrival. A node that faces external demand needs to cover with safety stock the demand during the net lead time \(NLT_{jp}=SI_{jp}-S_{E\,jp}+lt_{jp}+r_{jp}\). When an order is placed at a node, it is instantaneously propagated upstream; therefore, at upstream nodes \(NLT_{jp}=SI_{jp}-S_{jp}+lt_{jp}+r_{jp}-1\), as stated in Eruguz et al. (2014).
As mentioned above, we propose the alternative of hybrid nodes with both types of demand. Inequalities (9) and (10) define the net lead times to be covered with safety stock to achieve the desired service level for dependent and independent customers, respectively. These constraints combine the review periods with the stochastic lead time approach developed above. Note that \(SI_{jp}\), \(lt_{jp}\) and \(r_{jp}\) are assumed to be the same for both types of demand, and \(ARG_{1}\) and \(ARG_{2}\) are positive continuous variables representing the terms inside the square roots of the dependent and independent customers' safety stocks, respectively. Different amounts of safety stock may be held for internal and external demand, because stochastic lead times are accounted for differently for the two demand types.
\[ARG_{1\,jp}\geq SI_{jp}-S_{jp}+lt_{jp}+k_{LT\,jp}\,\sigma_{LT\,jp}+r_{jp}-1\qquad\forall\,j\in J^{D},\,p\in P_{j} \tag{9}\]

\[ARG_{2\,jp}\geq\left(SI_{jp}-S_{E\,jp}+lt_{jp}+r_{jp}\right)\sigma_{I\,jp}^{2}+\mu_{I\,jp}^{2}\,\sigma_{LT\,jp}^{2}\qquad\forall\,j\in J^{I},\,p\in P_{j} \tag{10}\]
Note that the right-hand sides of (9) and (10) must be non-negative. For this purpose, upper bounds on \(S_{jp}\) and \(S_{E\,jp}\) are imposed by inequalities (11) and (12). The former defines the upper bound for the case of dependent demand, while the latter accounts for the case of independent demand.
\[S_{jp}\leq SI_{jp}+r_{jp}-1+lt_{jp}+k_{LT\,jp}\,\sigma_{LT\,jp}\qquad\forall\,j\in J^{D},\,p\in P_{j} \tag{11}\]
\[S_{E\,jp}\leq SI_{jp}+r_{jp}+lt_{jp}+\left(\frac{\mu_{I\,jp}}{\sigma_{I\,jp}}\,\sigma_{LT\,jp}\right)^{2}\qquad\forall\,j\in J^{I},\,p\in P_{j} \tag{12}\]
#### 3.1.4 Fill rate as a target service level
As described previously, the GSM uses the cycle service level (CSL) as the customer service indicator when setting safety stocks. Since the fill rate is more widely used in industry (Teunter et al., 2017), we extend the GSM to allow specifying fill rates where desired. The fill rate represents the fraction of demand met on time from inventory. We use the works of Axsater (2006) and Chopra and Meindl (2013) as a baseline and propose modifications to account for additional features. First, we replace the lead-time demand variability, expressed as \(\sigma_{jp}\sqrt{NLT_{jp}}\), by \(\sigma_{D\,jp}\sqrt{ARG_{1\,jp}}+\sqrt{ARG_{2\,jp}}\), which allows the representation of multi-echelon networks with service times, stochastic lead times and hybrid nodes. This is possible because Section 3.1.2 assumes that the safety stock of a node is the sum of the independent and dependent safety stocks, \(SS_{jp}=SS_{I\,jp}+SS_{D\,jp}\), with a common service level factor \(k_{jp}\). From Equations (7)-(10) we obtain \(SS_{jp}=k_{jp}\left(\sigma_{D\,jp}\sqrt{ARG_{1\,jp}}+\sqrt{ARG_{2\,jp}}\right)\).
Moreover, in \((R,Q)\) inventory policies \(Q\) refers to the replenishment quantity; for periodic review policies this amount is variable, and we assume an average replenishment quantity \(Q_{jp}=\mu_{jp}\,r_{jp}\). Starting from the formula presented by Chopra and Meindl (2013) and including the extensions mentioned above, we obtain the constraint that links the fill rate \(fr_{jp}\) to the safety factor \(k_{jp}\), and consequently to the CSL. The safety factor \(k_{jp}\) becomes a continuous positive variable \(KV_{jp}\) for those materials and locations with an active fill rate target. The objective is to find the lowest CSL that meets the specified fill rate. \(F_{s}(KV_{jp})\) and \(f_{s}(KV_{jp})\) denote the standard normal cumulative distribution and density functions, respectively. The constraint proposed in this model to find the minimum CSL achieving the desired fill rate is given by Equation (13).
\[fr_{jp}\leq\frac{\left(\sigma_{D\,jp}\sqrt{ARG_{1\,jp}}+\sqrt{ARG_{2\,jp}}\right)}{Q_{jp}}\Big{(}KV_{jp}\left[1-F_{s}(KV_{jp})\right]-f_{s}(KV_{jp})\Big{)}+1\qquad\forall\,j\in J,\,p\in P_{j},\,(j,p)\in F \tag{13}\]
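Equation (13) can also be read as a root-finding problem: for fixed lead-time demand variability and replenishment quantity, the smallest safety factor meeting a target fill rate is where (13) holds with equality. A minimal SciPy sketch (our own illustration; the inputs are hypothetical) is:

```python
from scipy.optimize import brentq
from scipy.stats import norm

def min_safety_factor(fr_target, sigma_l, q):
    """Smallest k satisfying Eq. (13) at equality; sigma_l stands for the
    lead-time demand variability sigma_D*sqrt(ARG1) + sqrt(ARG2)."""
    g = lambda k: k * (1.0 - norm.cdf(k)) - norm.pdf(k)   # negative expected-shortfall term
    f = lambda k: 1.0 + (sigma_l / q) * g(k) - fr_target
    return brentq(f, 0.0, 6.0)   # the implied fill rate is increasing in k

k = min_safety_factor(0.97, sigma_l=244237.0, q=162379.0)
```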
#### 3.1.5 Minimum Order Quantity (MOQ)
This requirement is frequently found in practice; however, to the best of our knowledge, little literature relates MOQs to safety inventories. When an MOQ is imposed, flexibility is reduced, because the customer must either order many units or none. However, this does not necessarily mean that risk and safety stocks increase. Figure 4 depicts the effect of an MOQ on inventories. Plot (a) presents the inventory evolution through time for a periodic review policy with a review frequency
of one week and no lead time (\(lt_{jp}=0\)). In grey, we can see the safety stock level (\(SS_{jp}\)) set to cover a proportion of the demand excess during the net lead time. The basestock level \(B\) denotes the order-up-to level that must be reached when a replenishment order is placed, equivalent to the parameter \(S\) in the \((R,S)\) inventory policy. The order quantity \(Q\) equals the expected mean demand during a review period (\(\mu_{jp}\,r_{jp}\)). Each replenishment cycle, that is, the time between two consecutive replenishment deliveries, has a probability \(1-\alpha\) of not stocking out. The safety stock level is set to cover demand variability during the net lead time \((1-\alpha)\cdot 100\%\) of the time, this probability being determined by the \(k_{jp}\) factor.
If a supplier imposes an MOQ on a node and the MOQ is larger than the standard \(Q\), the inventory level evolves as in Figure 4 (b). In this example, the MOQ is three times the mean demand during the review period. Therefore, the first and second periods have a low probability of stocking out, because there is more inventory than needed to satisfy the expected mean demand. However, the cycle length increases, and a replenishment order is placed every three periods on average. The CSL measure is not affected by the MOQ because there are fewer replenishment cycles (Chopra & Meindl, 2013). On the other hand, fill rates increase, which means that safety stocks can be reduced at the expense of the larger cycle stock resulting from the MOQ requirement. The number of orders placed by the customer is not modified, and the overshoot in stock means that many periods hold more stock than necessary to fulfil the orders. In this work, we exploit this effect to reduce safety stock levels when the MOQ is larger than the original \(Q\), as shown in Figure 4 (c). This reduction translates into a decrease in the safety factor, because now \(Q_{jp}=\max\{moq_{jp},\,\mu_{jp}\,r_{jp}\}\) in (13). This feature represents an extension of the GSM to an \((s,S)\) inventory policy instead of the original \((R,S)\) policy, in which \(s\) corresponds to the basestock level \(B\) and \(S\) to the order-up-to level \(MOQ+SS\). Figure 5 depicts how fill rates are generally larger than the CSL for a given value of \(KV_{jp}\). The green lines show fill rate curves for increasing MOQ sizes, with \(MOQ1\) the smallest and \(MOQ4\) the largest. The larger the MOQ, the larger the fill rate achieved for a given safety factor.
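The behaviour sketched in Figure 5 follows directly from Eq. (13) once \(Q_{jp}=\max\{moq_{jp},\,\mu_{jp}\,r_{jp}\}\) is substituted: at a fixed safety factor, a larger effective order size raises the implied fill rate. A brief illustration (our own, with hypothetical numbers):

```python
from scipy.stats import norm

def fill_rate(k, sigma_l, q):
    """Fill rate implied by Eq. (13) at equality for safety factor k."""
    return 1.0 + (sigma_l / q) * (k * (1.0 - norm.cdf(k)) - norm.pdf(k))

mu_r = 162379.0                       # mean demand during one review period
for moq in (mu_r, 2 * mu_r, 3 * mu_r):
    q = max(moq, mu_r)                # an MOQ enlarges the effective order size
    print(moq, fill_rate(1.0, sigma_l=244237.0, q=q))  # fill rate grows with MOQ
```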
Figure 4: MOQ effect on inventory levels
### 3.2 Objective Function
The objective is to minimize the safety stock holding cost, as defined in Equation (14), where \(h_{jp}\) is the holding cost coefficient for material \(p\) at location \(j\).
\[min\sum_{j\in J}\sum_{p\in P_{j}}h_{jp}\ KV_{jp}\left(\ \sigma_{D\,jp}\sqrt{ARG_{1\,jp}}+\sqrt{ARG_{2\,jp}}\right) \tag{14}\]
### 3.3 Solution Approach
The guaranteed service model (MNL1), given by Equations (3)-(6) and (9)-(14), is a nonconvex NLP with a concave objective function. Nonconvex NLP problems can in principle be solved with global optimization solvers like BARON; however, for medium or large problem sizes, the computational time required to find the global optimum can become prohibitive. To improve tractability and efficiency, we propose two solution approaches. In the first, we use a quadratic regression to approximate Equation (13), obtaining model MNL2. In the second, we propose an exact reformulation of MNL2 into a quadratically constrained problem (QCP), denoted MQC.
Equation (13) presents a difficulty: the function \(g(x)=x\,[1-F_{s}(x)]-f_{s}(x)\) must be included in the mathematical model. To simplify it, we build a surrogate model through a second-order polynomial regression, approximating \(g(x)\) by \(h(x)=-ax^{2}+bx-c\) in (15) over the domain of values that the variable \(KV_{jp}\) can take. The best-fit parameter values are \(a=0.074700\), \(b=0.331986\), \(c=0.357195\), with \(R^{2}=0.98\); such a high R-squared value indicates a good fit with small error. Figure 6 presents the original function \(g(x)\) and the surrogate function \(h(x)\). The resulting model (MNL2) is a nonlinear program composed of Equations (3)-(6), (9)-(12) and (14)-(15), differing from MNL1 only in the approximated fill rate constraint.
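The surrogate fit is straightforward to reproduce; the sketch below (our own illustration) fits the quadratic with NumPy. The fitting domain \([0,3]\) is our assumption, since the exact range of \(KV_{jp}\) values used is not stated, so the recovered coefficients will only approximately match those reported.

```python
import numpy as np
from scipy.stats import norm

x = np.linspace(0.0, 3.0, 500)                 # assumed domain for KV_jp
g = x * (1.0 - norm.cdf(x)) - norm.pdf(x)      # exact function g(x)
a2, a1, a0 = np.polyfit(x, g, 2)               # least-squares quadratic fit
a, b, c = -a2, a1, -a0                         # parameters of h(x) = -a x^2 + b x - c
print(f"a = {a:.6f}, b = {b:.6f}, c = {c:.6f}")
```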
Figure 5: Fill rate sensitivity analysis with variations in replenishment quantities.
\[fr_{jp}\leq\frac{1}{Q_{jp}}\left(\sigma_{D\,jp}\sqrt{ARG_{1\,jp}}+\sqrt{ARG_{2\,jp}}\right)\left(-a\,KV_{jp}^{2}+b\,KV_{jp}-c\right)+1\qquad\forall\,j\in J,\,p\in P_{j},\,(j,p)\in F \tag{15}\]
In order to improve the efficiency of the optimization, we propose a reformulation of the NLP model (MNL2) into a quadratically constrained problem, denoted MQC, which solvers like CPLEX and Gurobi can solve quite effectively in reasonable computational times. The idea behind the mathematical reformulation is to build an exact optimization model that benefits from its mathematical structure to improve the computational efficiency.
In order to derive the MQC reformulation, we first define new variables \(Z\) that replace the square root terms in the problem, i.e., \(Z=\sqrt{y}\) for a generic expression \(y\). Accordingly, the objective function of the NLP plotted in Figure 7(a) is reformulated as in Figure 7(b), where Equation (16) is the reformulation of the objective function (14).
\[min\sum_{j\in J}\sum_{p\in P_{j}}h_{jp}\ KV_{jp}\left(\ \sigma_{D\,jp}\ Z1_{jp}+Z2_{jp}\right) \tag{16}\]
Inequalities (9) and (10) are reformulated by replacing the variables \(ARG_{1\,jp}\) and \(ARG_{2\,jp}\) with \(Z1_{jp}^{2}\) and \(Z2_{jp}^{2}\), resulting in Equations (17) and (18).
\[Z1_{jp}^{2}\geq SI_{jp}-S_{jp}+lt_{jp}+k_{LT\,jp}\,\sigma_{LT\,jp}+r_{jp}-1\qquad\forall\,j\in J^{D},\,p\in P_{j} \tag{17}\]

\[Z2_{jp}^{2}\geq\left(SI_{jp}-S_{E\,jp}+lt_{jp}+r_{jp}\right)\sigma_{I\,jp}^{2}+\mu_{I\,jp}^{2}\,\sigma_{LT\,jp}^{2}\qquad\forall\,j\in J^{I},\,p\in P_{j} \tag{18}\]

Figure 6: Surrogate model \(h(x)\) and original function \(g(x)\) curves.

Figure 7: Objective function for the NLP and QCP models.
Finally, the fill rate constraint can be reformulated exactly as a quadratic constraint, replacing Equation (15) with Equations (19) and (20).
\[Q_{jp}\big{(}fr_{jp}-1\big{)}\leq\left(-a\,U_{jp}+b\,KV_{jp}-c\right)\left(\sigma_{D\,jp}\,Z1_{jp}+Z2_{jp}\right)\qquad\forall\,j\in J,\,p\in P_{j},\,(j,p)\in F \tag{19}\]

\[KV_{jp}^{2}-U_{jp}\leq 0\qquad\forall\,j\in J,\,p\in P_{j},\,(j,p)\in F \tag{20}\]
In this way, the MQC reformulation is given by the objective function (16), subject to the constraints (3)-(6), (11)-(12), (17)-(20). This reformulation is equivalent to the original NLP as stated in the following proposition.
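To make the reformulation concrete, the following single-node sketch builds MQC in gurobipy. This is our own illustration rather than the authors' implementation: all data values (holding cost, demand parameters, service-time bounds, and the inbound service time) are hypothetical, and in the full model one such block of variables and constraints is created for every (location, material) pair and linked through Eqs. (3)-(6). Because the objective and constraints (17)-(19) contain nonconvex bilinear terms, Gurobi's nonconvex-quadratic option must be enabled.

```python
import gurobipy as gp
from gurobipy import GRB

# Hypothetical data for one hybrid node.
h, sig_d, sig_i, mu_i = 0.12, 50000.0, 119665.0, 162379.0
lt, sig_lt, k_lt, r = 1.0, 0.3, 1.88, 1.0
a, b, c = 0.074700, 0.331986, 0.357195
q, fr, max_s, si = 162379.0, 0.97, 0.0, 2.0

m = gp.Model("MQC")
s  = m.addVar(lb=0.0, ub=max_s, name="S")    # Eq. (5)
se = m.addVar(lb=0.0, ub=max_s, name="SE")   # Eq. (6)
z1 = m.addVar(lb=0.0, name="Z1")
z2 = m.addVar(lb=0.0, name="Z2")
kv = m.addVar(lb=0.0, name="KV")
u  = m.addVar(lb=0.0, name="U")

m.addConstr(z1 * z1 >= si - s + lt + k_lt * sig_lt + r - 1)                        # Eq. (17)
m.addConstr(z2 * z2 >= (si - se + lt + r) * sig_i ** 2 + mu_i ** 2 * sig_lt ** 2)  # Eq. (18)
m.addConstr(q * (fr - 1) <= (-a * u + b * kv - c) * (sig_d * z1 + z2))             # Eq. (19)
m.addConstr(kv * kv <= u)                                                          # Eq. (20)

m.setObjective(h * kv * (sig_d * z1 + z2), GRB.MINIMIZE)                           # Eq. (16)
m.Params.NonConvex = 2    # bilinear objective and ">=" quadratic constraints
m.optimize()
```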
_Proposition 1. The optimization problem MNL2 is equivalent to the optimization problem MQC._
The proof of this proposition can be found in Appendix A.
## 4 Application and Results
### 4.1 Illustrative example and sensitivity analysis
An illustrative example is presented in Figure 8 to show the model results and how different considerations impact safety stock decisions. From the supply chain network shown on the left, a sample of products is selected, shown on the right. The case involves the production and distribution of a finished good (\(SKU1\)) obtained from two raw materials (_Raw1_ and _Raw2_), and a plant location that manufactures \(SKU1\) and delivers it to three retailers facing external demand. The proportions of raw materials needed to obtain a unit of \(SKU1\) are \(\phi_{Raw1,SKU1}=1\) and \(\phi_{Raw2,SKU1}=0.014\). The maximum service time is \(maxS_{jp}=0\) for \(SKU1\) at the three retailer locations. The production lead time is 2 weeks, represented by the loop above the plant. Table 1 displays the demand and lead time input parameters, the maximum service time constraints, and the unit holding costs. The target CSL is 97% for all products.
Figure 8: Illustrative example representation
Results are detailed in Table 2. The computational tests are performed on an Intel Core i7 CPU with 16 GB RAM and 4 parallel threads, using Gurobi 9.1.2 as the QCP solver. The model (MQC) involves 30 continuous variables and 34 constraints. The CPU time required to obtain the optimal solution is 0.15 seconds, and the total holding cost obtained is $162,205. At the plant, the optimal decision is to hold no safety stock of the finished good and to quote a guaranteed service time of 2 weeks for supplying the retailers.
It is worth mentioning that the guaranteed service time of _SKU1_ at the _Plant_ affects the retailers' safety stock levels, which need to cover 2 more weeks of demand, as this is their inbound service time (\(SI_{Retailer,SKU1}=2\) weeks). If this inbound service time were to increase further, the safety stock at the retailers would also increase, and at some point the model would change the safety stock placement at the plant so as to take advantage of system-wide risk pooling and lower the total cost of the supply chain. As a sensitivity analysis, Figure 9 (A) depicts the current case, with a production lead time of 2 weeks. In case (B), the production lead time is increased to 10 weeks, and the optimal solution is to pool, holding stock of finished goods at the plant, with a total holding cost of $259,250. The decision to pool at the plant yields a lower cost than maintaining no safety stock of _SKU1_ at the plant, as shown in (C): there the total cost increases to $265,360 compared with the solution in (B), because the opportunity of pooling and reducing the inbound service time of the retailers is missed.
| Material | _Raw1_ | _Raw2_ | _SKU1_ | _SKU1_ | _SKU1_ | _SKU1_ |
|---|---|---|---|---|---|---|
| Location | _Plant_ | _Plant_ | _Plant_ | _Retailer1_ | _Retailer2_ | _Retailer3_ |
| Demand mean \(\mu_{jp}\) (units) | 425,717 | 5,913 | 425,717 | 162,379 | 67,284 | 196,054 |
| Demand std. dev. \(\sigma_{jp}\) (units) | 192,229 | 2,669 | 192,229 | 119,665 | 61,585 | 137,258 |
| Coefficient of variation (CV = \(\sigma_{jp}/\mu_{jp}\)) | 0.45 | 0.45 | 0.45 | 0.73 | 0.91 | 0.70 |
| Lead time \(lt_{jp}\) (weeks) | 6 | 3 | 2 | 1 | 1 | 1 |
| Lead time std. dev. \(\sigma_{LT\,jp}\) (weeks) | 1.9 | 0.7 | 0.0 | 0.3 | 0.6 | 0.4 |
| Max service time \(maxS_{jp}\) (weeks) | – | – | – | 0 | 0 | 0 |
| \(h_{jp}\) ($/unit) | 0.01171 | 0.00002 | 0.12 | 0.12 | 0.12 | 0.12 |

Table 1: Illustrative example input data
| Material | _Raw1_ | _Raw2_ | _SKU1_ | _SKU1_ | _SKU1_ | _SKU1_ |
|---|---|---|---|---|---|---|
| Location | _Plant_ | _Plant_ | _Plant_ | _Retailer1_ | _Retailer2_ | _Retailer3_ |
| \(S_{jp}\) (weeks) | 0 | 0 | 2 | 0 | 0 | 0 |
| \(SS_{jp}\) (units) | 1,143,300 | 11,228 | 0 | 459,359 | 243,783 | 536,961 |
| Holding cost ($) | 13,393 | 0.2 | 0 | 55,123 | 29,254 | 64,435 |

Table 2: Illustrative example results
This network does not contain any hybrid node. As an example, assume that the plant faces external demand equal to that of \(Retailer_{1}\), with \(\mu_{I\,jp}=162{,}379\) and \(\sigma_{I\,jp}=119{,}665\). In that case, the optimal solution sets \(SS_{plant,SKU1}=459{,}359\), and this stock is dedicated exclusively to external demand orders, while internal demand is still satisfied with a make-to-order (MTO) policy. Regarding raw materials, the safety stocks increase to face the larger demand for the finished good at the plant, with \(SS_{plant,raw2}=13{,}632\) and \(SS_{plant,raw1}=1{,}384{,}128\).
For this illustrative example there is no MOQ requirement, so for the periodic review policy the expected order size \(Q_{jp}\) equals the mean demand during the review period, \(\mu_{jp}\,r_{jp}\). The last analysis of this example concerns the measurement of customer service. In the current case, the desired CSL of 97% corresponds to a safety factor \(k\) of 1.88. If a location, for example _Retailer 1_, changes the customer service metric from CSL to fill rate for a given product, a different CSL may be required to achieve the expected fill rate, so \(k_{jp}\) (now \(KV_{jp}\)) becomes a variable. Figure 10 shows the CSL and safety stocks obtained for different target fill rates and MOQ constraints for _SKU1_ at _Retailer 1_. The blue line indicates the expected fill rate, which is a given input in all cases except the first one, in which the CSL is specified to set safety stocks as in the original example and the fill rate is obtained through (19). In the following scenarios the target is a given fill rate (98%, 90%, 80% and 70%), and the CSL is obtained by the model. The yellow dashed line represents the resulting CSL in each case, and the yellow bar the corresponding safety stock (secondary vertical axis) for that coverage. The brown dashed line and brown bars represent the resulting CSL and safety stock when an MOQ of 500,000 is applied. In the first (left-most) case, the desired CSL is 97%, and a fill rate near 100% is expected, with or without a required MOQ. The second case sets a 98% fill rate to define safety stock levels; the minimum required CSL to achieve this fill rate decreases together with its corresponding safety stock level, and a sharper decrease occurs when a large MOQ is required. In the subsequent scenarios the desired fill rates decrease and, consequently, the CSL is lower. This difference
Figure 9: Analysis of different lead times: (A) 2-week production lead time, (B) 10-week production lead time, (C) 10-week production lead time without safety stock for _SKU1_ at the _Plant_ as a constraint.
is even more remarkable in the presence of MOQs: no safety stock is needed for fill rates of 80% or less, with a minimum required CSL of 50%.
In summary, the illustrative example demonstrates how the model manages safety stock decisions in a multi-echelon network. Risk pooling allows the business to recognize potential savings from holding safety stock upstream, where total demand variability is lower. Moreover, it is interesting to see the impact of a large minimum order quantity on safety stock: the larger the lot size, the less safety stock is needed, although this comes at an additional carrying cost for cycle stock. MOQs represent transportation and production constraints that are frequently found in almost all echelons of the supply chain. Including this feature, combined with the service level metric most used in industry, the fill rate, yields significant savings in safety stock levels, even eliminating them for large MOQs, as shown in Figure 10. The figure also shows how different service level metrics yield significantly different solutions, reinforcing the importance of representing the service level measure actually used by the company.
### 4.2 Small-size industrial case study
The MQC formulation is now applied to a small-size industrial case study with the supply chain network shown on the left in Figure 8, with two echelons, 4 SKUs, 31 raw materials coming from different locations, and 1 intermediate product produced and consumed at the plant. The first echelon has one plant and the second echelon has three retailers, as in the illustrative example presented above. The complete input data are presented in Tables S1 and S2 of the Supporting Information. Note that lead times have decimals because they are averages of historical data; for MEIO purposes, the ceiling of the lead time, \(\lceil lt_{jp}\rceil\), is used as input.
The QCP formulation finds the optimal solution, with cost $762,503, in 0.03 seconds. The results were compared with a commercial software package (not identified for confidentiality reasons), and the current safety stock levels for raw materials (RM) and finished goods (FG) are summarized in Table 3. While the commercial software obtains a 10% reduction in holding costs relative to the current safety stock levels, the model proposed in this work yields a 17% reduction, clearly showing the advantage of this tool for reducing the capital tied up in inventory. Note that safety stocks at the retailers are slightly larger with the proposed model; the model instead reduces the raw material inventories, which yields the largest reduction in holding costs.
### 4.3 Medium-size industrial case study
This case involves the same network as in Figure 8, but with 20 finished goods and 120 raw materials, requiring 196 safety stock decisions. The MOQ constraints (Equations (19) and (20)) are active for all nodes and materials, and the customer service measure of interest is the fill rate, which differs across materials. The QCP model has 1,973 constraints and 1,427 continuous variables, and is solved to optimality within 3 seconds using Gurobi as the QCP solver. This further supports the usefulness of the proposed approach for solving real-world problems efficiently, obtaining optimal solutions at low computational expense.
In summary, the results of the three case studies show the computational efficiency of the proposed model in obtaining fast, optimal solutions for different instances. A significant reduction in computational time is highly relevant, since the company presently faces problems that can take a few days to run for 52 periods with commercial software. While the NLP formulation is not able to find a feasible solution, the proposed QCP formulation finds the optimal solution at minimal computational expense. This clearly shows the competitive advantage of this tool in achieving customer service levels with minimum capital in inventory. The simulations in the following section test the accuracy of the model outputs in achieving the expected service levels.
## 5 Validation through simulation
Having presented the model formulation and its application to several case studies, we now include simulation studies to provide insight into the effectiveness and robustness of the predicted solutions.
| | Material | Model output | Baseline (current level) | Commercial software |
|---|---|---|---|---|
| Holding cost | RM | $126,532 | $155,070 | $193,080 |
| | FG | $635,971 | $761,525 | $634,950 |
| Safety stock | RM | 8,031,692 | 9,917,801 | 13,029,797 |
| | FG | 2,740,254 | 3,042,031 | 2,685,615 |

Table 3: Small-size industrial safety stock levels and holding costs
### 5.1 Single-echelon simulations
Safety stock formulas are developed for normally distributed demand during the lead time. Soares (2013) states that if the coefficient of variation (CV) is not considerably less than 1, the normal distribution assigns a relatively high probability to negative demand, and accuracy may be compromised. The demand patterns addressed in this paper can be considered smooth in most cases, with relatively low CVs: demand CVs range from 0.45 to 0.91, while lead time CVs are between 0 and 0.6. For the larger values, such as 0.91, the service level target may be compromised, with a slightly lower performance in reaching the expected service level. We therefore add a brief analysis to evaluate whether the safety stock decisions are sufficient to meet the service levels for different demand CVs and deterministic lead times. The complete analysis and results are presented in Section S2 of the Supporting Information. Figure 11 displays the results in one plot per service type: the average expected service level is on the horizontal axis and the one obtained from simulation on the vertical axis. The orange line shows the mean effective service level obtained from simulation, the light orange area the confidence intervals, and the grey line the ideal result. All 95% confidence intervals presented in Tables S7 and S8 of the Supporting Information, for both service types, include the target value within their bounds. Note that the estimation of the CSL is slightly less accurate when the CVs are closer to 1, with a maximum difference of 0.02, and more accurate for low CVs, in general below 0.66. For fill rates, only a few values differ by 0.01 or 0.02 points from the expected values, at a CV of 0.96; for low targets, the fill rate is larger than expected. In summary, these simulations show that, in general, the service levels are achieved with the safety stock levels proposed by the model, with a maximum difference of 0.02.
### 5.2 Multi-echelon simulations
The following simulations evaluate service level achievement in multi-echelon networks. All results obtained from the developed model are validated using simulation, with an open-source discrete-time inventory simulation package written in the Julia language, _InventoryManagement.jl_ (Perez, 2021). This simulator can model multiproduct supply networks of any topology (e.g., serial, divergent, convergent, tree, or general), and every feature included in the extended GSM model can be simulated: hybrid nodes, MOQs, bills of materials, stochastic demand, and stochastic lead times. For clarity, the validation of the illustrative example is presented together with two extra scenarios to analyze how some features affect the system behavior in the simulation. The simulation tool also includes a procedure to estimate the initial parameters of the normally distributed random variables, so that the desired parameters of the normal distribution are obtained after truncation of negative values.
Demand and lead time values are randomly generated from normal distributions in each period, with the parameters defined in Table 1. Random demand is generated only at nodes with external demand and is then propagated upstream. Basestock levels are calculated following Equation (21).
\[B_{jp}=SS_{jp}+\mu_{D\,jp}\left(SI_{jp}-S_{jp}+\left\lceil lt_{jp}+k_{LT\,jp}\,\sigma_{LT\,jp}\right\rceil+r_{jp}-1\right)+\mu_{I\,jp}\left(SI_{jp}-S_{E\,jp}+\left\lceil lt_{jp}\right\rceil+r_{jp}\right)\qquad\forall\,j\in J_{p},\,p\in P_{j} \tag{21}\]
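As a check, Eq. (21) is simple to evaluate directly; the sketch below (our own code) reproduces the basestock level of _Retailer 1_ in Scenario 1 of Table 4.

```python
import math

def base_stock(ss, mu_d, mu_i, si, s, s_e, lt, k_lt, sigma_lt, r):
    """Eq. (21): safety stock plus expected demand over the (ceiled)
    replenishment horizons for dependent and independent demand."""
    dep = mu_d * (si - s + math.ceil(lt + k_lt * sigma_lt) + r - 1)
    ind = mu_i * (si - s_e + math.ceil(lt) + r)
    return ss + dep + ind

# Retailer 1, Scenario 1: returns 1,108,682, matching Table 4.
b = base_stock(ss=459166, mu_d=0, mu_i=162379, si=2, s=0, s_e=0,
               lt=1, k_lt=1.88, sigma_lt=0.3, r=1)
```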
In each period, a random demand value is generated, orders are placed and delivered, and inventory levels are updated. During each review period, an order is placed if the inventory position of a material at a location is below the basestock level. The management policy is decentralized: each location orders the amount it needs to reach the basestock level, regardless of how much the upstream node has in stock. If the available inventory cannot meet demand, backorders are considered (the extraordinary measures referred to in the GSM approach are ignored in the simulation). The period selected in this case is one day, with 7,000 days (1,000 weeks) in each run so as to reach the stationary state at each location, and 8 replications are carried out for each scenario. Demand and review periods have a weekly basis. The sequence of steps simulated in each period is the following (a minimal sketch of such a loop is given after the list):
1. External demand is observed and discounted at each node with independent demand. Unfulfilled demands are marked as lost sales.
2. In each review period, internal replenishment orders are placed if the current inventory position is below the basestock level, and a lead time \(lt_{jp}\) is drawn for each order. Orders start being processed without delay. Internal demand is discounted at each node, and unfulfilled replenishment orders are registered as backorders.
3. Stocks are updated with the replenishment orders that arrive at each node.
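The following single-node sketch (our own Python illustration; the study itself uses the Julia package _InventoryManagement.jl_) implements these steps for a retailer that faces only external demand, with weekly time steps for simplicity, a fixed one-week lead time, and lost sales for unmet external demand as in step 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_fill_rate(base, mu, sigma, lt, r, periods=1000):
    """Periodic-review, order-up-to simulation returning the effective fill rate."""
    on_hand, pipeline = base, []      # start at the basestock level
    served = total = 0.0
    for t in range(periods):
        # arrivals are processed first, so an order can serve demand
        # in its period of arrival
        on_hand += sum(qty for (due, qty) in pipeline if due == t)
        pipeline = [(due, qty) for (due, qty) in pipeline if due != t]
        d = max(0.0, rng.normal(mu, sigma))     # truncated normal demand
        served += min(d, on_hand)
        total += d
        on_hand = max(0.0, on_hand - d)         # unmet external demand is lost
        if t % r == 0:                          # review: order up to the basestock
            position = on_hand + sum(qty for (_, qty) in pipeline)
            if position < base:
                pipeline.append((t + lt, base - position))
    return served / total

print(simulate_fill_rate(base=1108682, mu=162379, sigma=119665, lt=1, r=1))
```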
The illustrative case study was run and two additional scenarios were also tested for sensitivity analysis, combining fill rates as a target measure and MOQs for finished goods at retailers. Table 4 presents the model output for each scenario that is to be used by the simulation.
The results of the scenarios are shown in Table 5, which reports the average effective service level of each echelon obtained from the simulation together with its 95% confidence interval.
The first scenario simulates the results of the illustrative example presented in Section 4.1, with a 97% CSL target for every material/location combination and no MOQs. Note that the service levels at the plant are larger than expected: as mentioned by Minner (1998), the approach of Inderfurth may induce large safety stocks and does not benefit from joint coverage against both sources of uncertainty. On the other hand, as in the single-echelon simulations, the effective CSL obtained at the retailers is slightly lower than the expected 97%. A possible reason is that the simulation model takes the ceiling of the lead time values, which increases the effective lead time parameter. This error could be reduced by refining the time discretization of the simulation, for example to lead times given in hours, at the expense of computational efficiency.
In Scenario 2, the service level target is changed to a 97% expected fill rate for all materials. Table 4 presents the basestock levels and the minimum safety factors required to meet the target, obtained from the variable \(KV_{jp}\) in Equations (19) and (20) of the optimization model.
| Scenario | Location | Material | Safety stock level | Basestock level | Expected fill rate | Expected CSL | \(k\) factor |
|---|---|---|---|---|---|---|---|
| 1 | _Plant_ | _SKU1_ | 0 | 0 | – | 97% | 1.88 |
| | _Plant_ | _raw1_ | 1,118,096 | 5,375,266 | – | 97% | 1.88 |
| | _Plant_ | _raw2_ | 10,428 | 39,992 | – | 97% | 1.88 |
| | _Retailer1_ | _SKU1_ | 459,166 | 1,108,682 | – | 97% | 1.88 |
| | _Retailer2_ | _SKU1_ | 243,680 | 512,816 | – | 97% | 1.88 |
| | _Retailer3_ | _SKU1_ | 536,736 | 1,320,952 | – | 97% | 1.88 |
| 2 | _Plant_ | _SKU1_ | 0 | 0 | 97% | – | – |
| | _Plant_ | _raw1_ | 910,989 | 5,168,159 | 97% | – | 1.58 |
| | _Plant_ | _raw2_ | 8,069 | 37,633 | 97% | – | 1.50 |
| | _Retailer1_ | _SKU1_ | 383,857 | 1,033,373 | 97% | – | 1.57 |
| | _Retailer2_ | _SKU1_ | 209,762 | 478,898 | 97% | – | 1.62 |
| | _Retailer3_ | _SKU1_ | 446,787 | 1,231,003 | 97% | – | 1.56 |
| 3 | _Plant_ | _SKU1_ | 0 | 0 | 97% | – | – |
| | _Plant_ | _raw1_ | 910,989 | 5,168,159 | 97% | – | 1.58 |
| | _Plant_ | _raw2_ | 8,069 | 37,633 | 97% | – | 1.50 |
| | _Retailer1_ | _SKU1_ | 301,155 | 950,671 | 97% | – | 1.23 |
| | _Retailer2_ | _SKU1_ | 118,761 | 387,897 | 97% | – | 0.92 |
| | _Retailer3_ | _SKU1_ | 369,736 | 1,153,952 | 97% | – | 1.30 |

Table 4: Input data for the simulation of all scenarios (multi-echelon)
Note that all safety factors are strongly reduced relative to Scenario 1, showing that lower safety stock levels suffice to meet the desired fill rate. The retailers achieve the 97% target on average, and the service levels at upstream nodes are again larger than the target. According to the simulation results, the proposed quadratic regression approximating the fill rate is accurate for this scenario.
Finally, Scenario 3 adds to Scenario 2 a minimum order quantity of 500,000 units for supplying the retailers, the minimum batch size for delivering an order from the plant to a retailer. The safety factors of the retailers in Table 4 are reduced because of the MOQ, as discussed in Section 3.1.5. Note that the fill rates are achieved with a 24% reduction in safety stock levels. The largest reduction (43%) occurs at _Retailer 2_, the node with the greatest CVs in demand and lead time. The MOQ effect on inventory levels through time can be seen in Figure 12, which presents the inventory levels of _Retailer 2_. This retailer has the lowest mean demand, so the MOQ covers more review periods, 7 weeks on average. In the zoomed-in rectangle, it is possible to see that one MOQ can cover several weeks of demand.
In summary, the confidence intervals obtained from the simulation show the accuracy of the proposed model in meeting the service levels of the multi-echelon system under study. The estimation is more accurate for fill rate targets; however, the differences between the expected and the effective CSLs are at most 0.02 points. The plant operating with an MTO policy for the finished good, pushing safety stock to the retailers, works well. The safety stocks of raw materials could be reduced, since they achieve larger service levels than expected.
Figure 12: Inventory levels and inventory positions of _Retailer 2_ from the simulation of Scenario 3
| Scenario | Target | Metric | _Plant_, raw materials | _Plant_, _SKU1_ | _Retailers_, _SKU1_ |
|---|---|---|---|---|---|
| 1 | 97% CSL | Effective CSL, mean [95% CI] | 100.0% [100, 100] | 100.0% [100, 100] | 96.3% [95.9, 96.8] |
| 2 | 97% FR | Effective FR, mean [95% CI] | 100.0% [99.9, 100] | 100.0% [99.9, 100] | 97.0% [96.4, 97.6] |
| 3 | 97% FR | Effective FR, mean [95% CI] | 99.8% [99.4, 100] | 99.8% [99.4, 100] | 97.9% [96.9, 98.9] |

Table 5: Simulation results
## 6 Conclusions
In this paper, we have presented an optimization model based on the guaranteed-service approach that determines the optimal safety stock allocation in multi-echelon divergent networks. To the best of our knowledge, this is the first model that brings together multiple features typical of industrial practice, such as MOQs, hybrid nodes, and alternative service level measures, to determine safety stock levels. It is also the first to introduce the QCP reformulation to improve the computational efficiency of the optimization. The QCP outperforms the NLP formulation by allowing the use of QCP solvers, which leads to order-of-magnitude reductions in computational time. Real-world examples from the pharmaceutical industry illustrate the applicability of the proposed formulation: optimal solutions are found at small computational expense for medium/large-scale problems, and the simulation of the results demonstrates that the model achieves the target service levels.
Results were discussed in several exchanges with the pharmaceutical company, which had access to a commercial software vendor. The proposed model provides solutions with increased efficiency, besides obtaining exact global solutions. According to the company's feedback, the results are significantly better than their current safety stock levels. We also obtain better performance than the commercial software used, which lacks features such as hybrid nodes. A reduction in computational time is of great importance to the company, since they presently face problems that take days to run with the commercial software. They also found it valuable that this work offers the opportunity to develop the algorithm as open source, rather than hidden inside a software package, and that it opens the possibility of integrating MEIO decisions with other tactical planning models, such as the rhythm wheel (a lot-sizing and sequencing optimizer), giving an end-to-end view of the optimization.
Future work will address an extension of the present formulation to non-normal demand, and a pre-processing procedure for the input data to decide which mathematical formulation is appropriate for optimally determining safety stock levels. The effects of the CV and the MOQ on CSL estimation can also be analyzed to identify further potential safety stock reductions. This research can also be extended by including responsive characteristics to account for supply chain disruptions, by adding storage capacity limitations, and by constraining the production capacity of the nodes.
## 7 Nomenclature
### Sets
\(J\) Set of locations
\(P_{j}\) Subset of products that can be stored at location \(j\)
\(J_{p}\) Subset of locations in the route of material \(p\)
\(J^{0}\) Subset of starting (source) locations in the network
\(J^{I}\) Subset of locations that face external demand
\(J^{D}\) Subset of locations that face internal demand
\(A\) Set of route segments (from node \(i\) to node \(j\)) enabled for material \(p\)
\(F\) Set of location-material pairs with an active fill rate target
\(\Phi\) Set of all valid material transformations (from material \(p\) to material \(q\))
### Parameters
\(\mu_{jp}\) Mean of the total demand of material \(p\) at location \(j\)
\(\sigma_{jp}\) Standard deviation of the total demand of material \(p\) at location \(j\)
\(\mu_{D\,jp}\) Mean of the dependent demand of material \(p\) at location \(j\)
\(\sigma_{D\,jp}\) Standard deviation of the dependent demand of material \(p\) at location \(j\)
\(\mu_{I\,jp}\) Mean of the independent demand of material \(p\) at location \(j\)
\(\sigma_{I\,jp}\) Standard deviation of the independent demand of material \(p\) at location \(j\)
\(lt_{jp}\) Lead time/order processing time of material \(p\) at location \(j\)
\(\sigma_{LT\,jp}\) Standard deviation of the lead time of material \(p\) at location \(j\)
\(h_{jp}\) Holding cost of material \(p\) at location \(j\)
\(si_{jp}^{0}\) Inbound service time for the source nodes of the network
\(\phi_{pq}\) Amount of material \(p\) required to produce a unit of material \(q\)
\(maxS_{jp}\) Maximum service time accepted for material \(p\) at location \(j\)
\(maxSE_{jp}\) Maximum service time for material \(p\) at location \(j\) regarding external demand
\(r_{jp}\) Stock review period for material \(p\) at location \(j\)
\(moq_{jp}\) Minimum order quantity of material \(p\) that location \(j\) must place
\(Q_{jp}\) Replenishment order size of material \(p\) at location \(j\)
\(fr_{jp}\) Fill rate level of material \(p\) at location \(j\)
\(k_{jp}\) Safety factor associated with the CSL of material \(p\) at location \(j\)
\(k_{LT\,jp}\) Safety factor associated with the planned lead time of material \(p\) at location \(j\)
### Positive Variables
\(S_{jp}\) Guaranteed service time within which location \(j\) will serve demand of material \(p\)
\(S_{E\,jp}\) Guaranteed service time for external demand of material \(p\) at location \(j\)
\(SI_{jp}\) Inbound guaranteed service time of material \(p\) at location \(j\)
\(ARG_{1\,jp}\) Argument of the square root for dependent demand of material \(p\) at node \(j\)
\(ARG_{2\,jp}\) Argument of the square root for independent demand of material \(p\) at node \(j\)
\(NLT_{jp}\) Net Lead time of material \(p\) at node \(j\)
\(Z1_{jp}\) Variable used for quadratic reformulation on dependent demand net lead time formula
\(Z2_{jp}\) Variable used for quadratic reformulation on independent demand net lead time formula
\(KV_{jp}\) Variable used to replace \(k\) input factor when the fill rate is introduced to determine safety stocks
\(U_{jp}\) Variable defined to replace \(KV_{jp}^{2}\) and avoid trilinear terms
## 8 Appendix A
**Proof of Proposition 1.** We define two positive continuous variables \(Z1_{jp}\) and \(Z2_{jp}\) as follows,
\[Z1_{jp} =\sqrt{ARG_{1_{jp}}} \forall\;j\;\in J^{D},p\in P_{j}\] (A.1) \[Z2_{jp} =\sqrt{ARG_{2_{jp}}} \forall\;j\;\in J^{I},\;p\in P_{j}\] (A.2)
Substituting (A.1) and (A.2) into the objective function (Eq. (14)), we have:
\[min\sum_{j\;\in\;j}\sum_{p\;\in\;P_{j}}h_{jp}\;KV_{jp}\left(\;\sigma_{D_{jp}} \;Z1_{jp}+Z2_{jp}\right)\] (A.3)
Since (A.3) is a minimization problem and both variables appear in the objective function with non-negative coefficients, it follows from the KKT optimality conditions that we can relax Eqs. (A.1) and (A.2) and rewrite them as inequalities (A.4) and (A.5), both being active at the optimal solution.
\[Z1_{jp} \geq\sqrt{ARG_{1_{jp}}} \forall\;j\;\in J^{D},p\in P_{j}\] (A.4) \[Z2_{jp} \geq\sqrt{ARG_{2_{jp}}} \forall\;j\;\in J^{I},\;p\in P_{j}\] (A.5)
(A.4) and (A.5) can be reformulated as quadratic inequalities,
\[Z1_{jp}^{2} \geq ARG_{1_{jp}} \forall\;j\;\in J^{D},p\in P_{j}\] (A.6) \[Z2_{jp}^{2} \geq ARG_{2_{jp}} \forall\;j\;\in J^{I},\;p\in P_{j}\] (A.7)
Therefore, Eqs. (9), (10), and (15) can be rewritten as follows:
\[Z1_{jp}^{2} \geq SI_{jp}-S_{jp}+lt_{jp}+k_{LT_{jp}}\;\sigma_{LT_{jp}}+\tau_{jp }-1 \forall\;j\;\in J^{D},p\in P_{j}\] (A.8) \[Z2_{jp}^{2} \geq(SI_{jp}-S_{E_{jp}}+lt_{jp}+\tau_{jp})\;\sigma_{l_{jp}}^{2}+ \mu_{l_{jp}}^{2}\;\sigma_{LT_{jp}}^{2} \forall\;j\;\in J^{I},\;p\in P_{j}\] (A.9) \[f\tau_{jp} \leq\frac{1}{Q_{jp}}\Big{(}\;\sigma_{D_{jp}}Z1_{jp}+Z2_{jp}\Big{)} \;\;\big{(}-aKV_{jp}^{2}+bKV_{jp}-c\big{)}+1\] (A.10) \[\forall\;j\in J,\;p\in P_{j},\;(j,p)\in F\]
Moreover, we define \(U_{jp}=KV_{jp}^{2}\) to avoid a trilinear term in (A.10). Following the same reasoning, since larger values of \(U_{jp}\) only tighten the fill rate constraint, the optimal solution takes \(U_{jp}\) as small as possible, so the equality can be relaxed to the inequality:
\[U_{jp}\geq KV_{jp}^{2} \forall\;j\in J,\;p\in P_{j},\;(j,p)\in\;F\] (A.11)
Hence, \(KV_{jp}^{2}-U_{jp}\leq 0\), as in Eq. (20). Replacing \(KV_{jp}^{2}\) with \(U_{jp}\), we obtain Equation (A.12):
\[f\tau_{jp}\leq\frac{1}{Q_{jp}}\Big{(}\;\sigma_{Djp}Z1_{jp}+Z2_{jp} \Big{)}\;\;\big{(}-aU_{jp}+bKV_{jp}-c\big{)}+1\] (A.12) \[\forall\;j\in J,\;p\in P_{j},\;(j,p)\in F\]
Therefore, from (A.3), (A.8), (A.9), (A.11) and (A.12), problem MQC is an exact quadratically constrained reformulation of the nonlinear problem MNL2.
## 9 Acknowledgments
The authors gratefully acknowledge the financial support from Johnson and Johnson, the Fulbright Program and the Ministerio de Educacion de Argentina, and the Center for Advanced Process Decision-making (CAPD) from Carnegie Mellon University. We would also like to thank Kyle Harshbarger for his useful comments during EWO meetings at CMU and Alev Kaya for her valuable insights on MEIO.
|
2308.09744 | The 21-cm bispectrum from neutral hydrogen islands at z < 6 | Spatial variations in the Lyman-$\alpha$ forest opacity at $z<6$ seem to
require a late end to cosmic reionization. In this picture, the universe
contains neutral hydrogen 'islands' of up to 100 cMpc$/h$ in extent down to
redshifts as low as $z\sim 5.3$. This delayed end to reionization also seems to
be corroborated by various other observables. An implication of this scenario
is that the power spectrum of the cosmological 21-cm signal at $z<6$ is
enhanced relative to conventional reionization models by orders of magnitude.
However, these neutral hydrogen islands are also predicted to be at the
locations of the deepest voids in the cosmological large-scale structure. As a
result, the distribution of the 21-cm signal from them is highly non-Gaussian.
We derive the 21-cm bispectrum signal from these regions using
high-dynamic-range radiative transfer simulations of reionization. We find that
relative to conventional models in which reionization is complete at $z>6$, our
model has a significantly larger value of the 21-cm bispectrum. The neutral
islands also imprint a feature in the isosceles bispectrum at a characteristic
scale of $\sim 1$ cMpc$^{-1}$. We also study the 21-cm bispectrum for general
triangle configuration by defining a triangle index. It should be possible to
detect the 21-cm bispectrum signal at $\nu\gtrsim 200$ MHz using SKA1-LOW for
1080 hours of observation, assuming optimistic foreground removal. | Janakee Raste, Girish Kulkarni, Catherine A. Watkinson, Laura C. Keating, Martin G. Haehnelt | 2023-08-18T18:00:03Z | http://arxiv.org/abs/2308.09744v2 | # The 21-cm bispectrum from neutral hydrogen islands at \(z<6\)
###### Abstract
Spatial variations in the Lyman-\(\alpha\) forest opacity at \(z<6\) seem to require a late end to cosmic reionization. In this picture, the universe contains neutral hydrogen 'islands' of up to 100 cMpc/\(h\) in extent down to redshifts as low as \(z\sim 5.3\). This delayed end to reionization also seems to be corroborated by various other observables. An implication of this scenario is that the power spectrum of the cosmological 21-cm signal at \(z<6\) is enhanced relative to conventional reionization models by orders of magnitude. However, these neutral hydrogen islands are also predicted to be at the locations of the deepest voids in the cosmological large-scale structure. As a result, the distribution of the 21-cm signal from them is highly non-Gaussian. We derive the 21-cm bispectrum signal from these regions using high-dynamic-range radiative transfer simulations of reionization. We find that relative to conventional models in which reionization is complete at \(z>6\), our model has a significantly larger value of the 21-cm bispectrum. The neutral islands also imprint a feature in the isosceles bispectrum at a characteristic scale of \(\sim 1\) cMpc\({}^{-1}\). We also study the 21-cm bispectrum for general triangle configuration by defining a triangle index. It should be possible to detect the 21-cm bispectrum signal at \(\nu\gtrsim 200\) MHz using SKA1-LOW for 1080 hours of observation, assuming optimistic foreground removal.
keywords: cosmology: theory - dark ages, reionization, first stars - intergalactic medium
## 1 Introduction
Observations of the Lyman-\(\alpha\) forest point to a late end of reionization (Kulkarni et al., 2019; Keating et al., 2020; Bosman et al., 2022). In our previous work, we explored the implications of a reionization model that agrees with these observational constraints at redshifts 5-8 for the 21-cm power spectrum (Raste et al., 2021). We found that given the late end of reionization, the power spectrum of the 21-cm signal at redshifts \(z=5\)-6 is orders of magnitude higher than previous estimates. This signal should be detectable by hera and ska1-low in \(\sim 1000\) hours of observations, assuming optimistic foreground subtraction (Raste et al., 2021).
However, the large islands of neutral hydrogen that persist until redshift \(z\sim 5.5\) in our reionization models are predicted to lie in the deepest density voids in the universe. As a result, the 21-cm signal from them should be significantly non-Gaussian, which should lead to a large bispectrum signal. Furthermore, these neutral islands have highly irregular shapes that might hold clues about the galaxies that contributed to reionization. While the power spectrum is sensitive to the size of the ionized regions, the bispectrum is sensitive to their shapes, which makes it a promising tool to study the topology of reionization (Hutter et al., 2020). Unlike the power spectrum, the 21-cm bispectrum can take negative as well as positive values for different triangle configurations, and this can encode information on various features of reionization. Simulations have shown that the modelling of density, ionization, X-ray heating and Lyman-\(\alpha\) coupling drives the non-Gaussianity at various scales; these processes determine the shape, magnitude, peak and sign of the 21-cm bispectrum as a function of redshift (Shimabukuro et al., 2016; Majumdar et al., 2018; Watkinson et al., 2019; Hutter et al., 2020; Kamran et al., 2021; Ma et al., 2021; Shaw et al., 2021). The 21-cm bispectrum is also a function of redshift-space distortions and the light-cone anisotropy (Bharadwaj et al., 2020; Majumdar et al., 2020; Kamran et al., 2021; Shaw et al., 2021; Mondal et al., 2021). It has been consistently shown by multiple authors that observing the 21-cm bispectrum together with the power spectrum can reduce the inferred uncertainty on reionization parameters (Shimabukuro et al., 2016, 2017; Watkinson et al., 2022; Tiwari et al., 2021).
The 21-cm bispectrum signal can be observed by correlating the visibilities at three different baselines and frequencies (Bharadwaj and Pandey, 2005; Thyagarajan et al., 2018; Thyagarajan and Carilli, 2020; Thyagarajan et al., 2020). The shape of the bispectrum triangle determines its detectability by interferometric experiments (Shaw et al., 2021; Tiwari et al., 2021). In particular, the squeezed-limit isosceles bispectrum is expected to present the best observational prospects (Trott et al., 2019; Watkinson et al., 2022; Mondal et al., 2021). The sensitivity of bispectrum measurements for radio-interferometric arrays has also been explored (Yoshiura et al., 2015; Shaw et al., 2019; Mondal et al., 2021), for example for SKA1-LOW (Tiwari et al., 2021; Mondal et al., 2021) and MWA (Trott et al., 2019).
In this paper we compute the bispectrum of the ionized hydrogen fraction, the gas density and the 21-cm brightness temperature for our late reionization model. In Section 2, we briefly describe our simulation. We also discuss in this section the calculation of bispectra using the fast FFT code BiFFT presented by Watkinson et al. (2017) and various normalisations of the bispectrum. We present our results in Section 3 for equilateral, isosceles and scalene triangle configurations. Finally, we discuss the prospects of detecting the 21-cm bispectrum with SKA1-LOW in Section 4, and conclude in Section 5.
We assume a flat \(\Lambda\)CDM universe with baryon and matter density parameters \(\Omega_{\rm b}=0.0482\) and \(\Omega_{\rm m}=0.308\), \(\Omega_{\Lambda}=0.692\), Hubble constant 100 \(h\,{\rm km\,s^{-1}\,Mpc^{-1}}\) with \(h=0.678\), spectral index of primordial curvature perturbations \(n_{\rm s}=0.961\), clustering amplitude \(\sigma_{8}=0.829\) at \(z=0\), and helium mass fraction \(Y_{\rm He}=0.24\)(Planck Collaboration, 2014). The units 'ckpc' and 'cMpc' refer to comoving kpc and comoving Mpc, respectively.
## 2 Methods
We have discussed our simulation in detail in Kulkarni et al. (2019) and Raste et al. (2021). Here we repeat only the essential details. To obtain the gas density and velocity fields, we have used the p-gadget-3 code, a modified version of the gadget-2 code (Springel et al., 2001; Springel, 2005). This simulation is similar to the simulations from the Sherwood Simulation Suite (Bolton et al., 2017) with their 160-2048 initial conditions, containing \(2048^{3}\) gas and dark matter particles and \(160\,h^{-1}\) cMpc box length with periodic boundary conditions. For further processing, the gas density is gridded by projecting the smoothed particle hydrodynamics (SPH) kernel in our simulation onto a Cartesian grid of size \(2048^{3}\). This gives a grid resolution of \(78.125\,h^{-1}\,\)ckpc. The ionization and temperature fields are computed using the ATON code (Aubert and Teyssier, 2008, 2010), which solves the radiative transfer equation by using the M1 approximation for the first moment (Aubert and Teyssier, 2008; Levermore, 1984; Gonzalez et al., 2008).
We calculate the differential brightness temperature (\(\Delta T_{\rm b}\)) box using the density, ionization, and peculiar velocity boxes and assuming \(T_{\rm S}\gg T_{\rm CMB}\),
\[\Delta T_{\rm b}(\nu_{o})\simeq 27\ {\rm mK}\ x_{\rm HI}(1+ \delta)\left(1+\frac{1}{H(z)}\frac{{\rm d}v_{\rm p}}{{\rm d}s}\right)^{-1}\] \[\times\left(\frac{1+z}{10}\right)^{1/2}\left(\frac{Y_{H}}{0.76} \right)\left(\frac{0.14}{\Omega_{m}h^{2}}\right)^{1/2}\left(\frac{\Omega_{b}h^ {2}}{0.022}\right). \tag{1}\]
We take the \(z\)-axis of the simulation box as the line-of-sight direction to calculate the peculiar velocity gradient \({\rm d}v_{\rm p}/{\rm d}s\) and we enforce a cutoff of \(|{\rm d}v_{\rm p}/{\rm d}s|<0.5H(z)\) (Santos et al., 2010; Mesinger et al., 2011).
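The mapping from the simulated fields to \(\Delta T_{\rm b}\) is straightforward to implement. The following is a minimal numpy sketch of Eq. (1), assuming gridded arrays for the neutral fraction, overdensity and velocity gradient; the function and argument names are ours, not those of our pipeline.

```python
import numpy as np

def brightness_temperature(x_HI, delta, dvds, z, H_z,
                           Omega_m=0.308, Omega_b=0.0482, h=0.678, Y_H=0.76):
    """Eq. (1): 21-cm brightness temperature in mK, assuming T_S >> T_CMB.

    x_HI, delta, dvds: gridded neutral fraction, overdensity, and the
    line-of-sight peculiar velocity gradient dv_p/ds (same units as H_z).
    """
    dvds = np.clip(dvds, -0.5 * H_z, 0.5 * H_z)   # |dv_p/ds| < 0.5 H(z) cutoff
    rsd = 1.0 / (1.0 + dvds / H_z)                # redshift-space factor
    amp = (27.0 * np.sqrt((1.0 + z) / 10.0) * (Y_H / 0.76)
           * np.sqrt(0.14 / (Omega_m * h**2)) * (Omega_b * h**2 / 0.022))
    return amp * x_HI * (1.0 + delta) * rsd       # mK
```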
The reionization in our 'late reionization' simulation model ends at redshift \(z\sim 5.3\), and the midpoint of reionization
Figure 1: The bottom panel shows the 21-cm brightness temperature, \(\Delta T_{\rm b}\), in a reionization model that is consistent with the Ly\(\alpha\) forest at \(z\sim 6\). Large neutral hydrogen ‘islands’ are seen at \(z<6\). In order to understand the 21-cm bispectrum signal from these neutral hydrogen islands, we contrast this reionization model with a more conventional one in which reionization finishes early, at \(z>6\). This model is shown in the top panel.
occurs at redshift \(z\sim 7.1\). We also study an 'early reionization' model, in which the evolution of the volume-averaged ionized hydrogen fraction is calibrated to match the evolution in the Haardt and Madau (2012) model of reionization. In this model, reionization is complete at \(z\sim 6.7\). The two simulations are identical in all aspects apart from the source emissivity. Figure 1 compares the 21-cm brightness temperature in the two models.
### Computing the Bispectrum
The bispectrum \(B\) of a field \(F(\mathbf{r})\) is defined as,
\[(2\pi)^{3}B(\mathbf{k_{1}},\mathbf{k_{2}},\mathbf{k_{3}})\delta_{\rm D}(\mathbf{k_ {1}}+\mathbf{k_{2}}+\mathbf{k_{3}})\\ \equiv\langle\tilde{F}(\mathbf{k_{1}})\tilde{F}(\mathbf{k_{2}})\tilde{F}( \mathbf{k_{3}})\rangle, \tag{2}\]
where \(\tilde{F}(\mathbf{k})\) is the Fourier transform of \(F(\mathbf{r})\). The Dirac delta \(\delta_{\rm D}\) term requires that the wavevectors \(\mathbf{k_{1}}\), \(\mathbf{k_{2}}\) and \(\mathbf{k_{3}}\) form a closed triangle in Fourier space.
We follow Scoccimarro (2015), Sefusatti et al. (2016), and Watkinson et al. (2017) to efficiently compute the bispectrum without using multiple nested loops through the Fourier box1. This algorithm is described in detail by Watkinson et al. (2017) and is implemented by these authors in their publicly available code, BiFFT2. We calculate the bispectrum using a modified Python version of BiFFT.
Footnote 1: In this work, we do not subtract the mean from the simulation box before calculating bispectra. All the information about the mean value is located only in the real part of the \(\mathbf{k}=0\) mode of the Fourier box. This mode is not used while calculating power spectrum or bispectrum. We confirm that our results do not change by subtracting the mean from the simulation box.
Footnote 2: [https://bitbucket.org/cav11/bifft/](https://bitbucket.org/cav11/bifft/)
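For concreteness, the following is a minimal sketch of this FFT-based estimator: each field is filtered onto a spherical \(k\)-shell and inverse-transformed, the product of the three filtered fields is summed in real space, and the same operation on the shell masks counts the closed triangles. The function name, shell binning and normalisation conventions here are our choices, not BiFFT's interface.

```python
import numpy as np

def bispectrum(field, box_len, ks, dk=0.05):
    """Sketch of the Scoccimarro (2015)/Watkinson et al. (2017) FFT
    estimator for B(k1, k2, k3); `ks` is the triplet of shell centres."""
    n = field.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    fk = np.fft.fftn(field)
    I, S = [], []
    for kc in ks:
        mask = np.abs(kmag - kc) <= dk / 2.0         # spherical k-shell
        I.append(np.fft.ifftn(np.where(mask, fk, 0.0)))
        S.append(np.fft.ifftn(mask.astype(float)))   # for the triangle count
    num = np.sum((I[0] * I[1] * I[2]).real)          # sum over closed triangles
    den = np.sum((S[0] * S[1] * S[2]).real)
    return box_len**6 / n**9 * num / den             # volume normalisation
```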
#### 2.1.1 Triangle Configuration
For any triangle formed by the wavevectors \(\mathbf{k_{1}}\), \(\mathbf{k_{2}}\) and \(\mathbf{k_{3}}\),
\[k_{3}^{2}=k_{1}^{2}+k_{2}^{2}-2k_{1}k_{2}\cos\theta, \tag{3}\]
where \(\theta\) is the angle between the \(k_{1}\) and \(k_{2}\) arms of the triangle. For isosceles triangles, \(k_{1}=k_{2}\), so
\[\cos\theta=1-\frac{k_{3}^{2}}{2k_{1}^{2}}. \tag{4}\]
Thus, in this case, for a fixed \(k_{1}\), we can label a triangle equivalently using \(\cos\theta\) or \(k_{3}\). When \(k_{3}<k_{1}\), the angle \(\theta<\pi/3\), and \(\cos\theta>0.5\). Such triangles with \(k_{3}\to 0\) and \(\cos\theta\to 1\) are the so-called squeezed-limit triangles. For equilateral triangles, \(k_{1}=k_{2}=k_{3}\), so that \(\theta=\pi/3\) and \(\cos\theta=1/2\). Triangles with \(k_{3}>k_{1}\) have \(\theta>\pi/3\), and \(\cos\theta<0.5\). The stretched-limit triangles have \(\cos\theta\rightarrow-1\), with \(k_{3}\to 2k_{1}\).
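A small helper, assuming the conventions above, makes this labelling explicit (the classification tolerance is an arbitrary choice of ours):

```python
import numpy as np

def k3_from_theta(k1, cos_theta):
    """k3 of an isosceles triangle (k1 = k2), inverting Eq. (4)."""
    return k1 * np.sqrt(2.0 * (1.0 - cos_theta))

def classify(cos_theta, tol=0.05):
    """Rough configuration labels used in the text."""
    if cos_theta > 1.0 - tol:
        return "squeezed"
    if cos_theta < -1.0 + tol:
        return "stretched"
    if abs(cos_theta - 0.5) < tol:
        return "equilateral"
    return "generic"
```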
#### 2.1.2 Bispectrum Normalization
Various normalizations of the bispectrum have been explored in the literature. In our work, we have either used the unnormalized bispectrum \(B\) (Eq 2), or a normalized bispectrum (Hinich and Clay, 1968; Kim and Powers, 1978; Hinich and Messer, 1995; Hinich and Wolinsky, 2005; Watkinson et al., 2019), defined by
\[b=\frac{B(k_{1},k_{2},k_{3})}{\sqrt{(k_{1}k_{2}k_{3})^{-1}P(k_{1})P(k_{2})P(k_ {3})}}. \tag{5}\]
The normalisation of the bispectrum isolates the non-Gaussianity of the field by removing the contributions of the power spectrum. See Watkinson et al. (2019) for more discussion of various bispectrum normalisations. For the bispectrum of the density and the neutral/ionized hydrogen fraction, the unnormalized bispectrum has units of cMpc\({}^{6}\), whereas the unnormalised 21-cm brightness temperature bispectrum has units of mK\({}^{3}\) cMpc\({}^{6}\). The normalised bispectrum from Eq 5 is dimensionless.
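In code, the normalisation of Eq 5 is a one-liner (assuming the power spectra \(P(k_{i})\) have already been measured; names are ours):

```python
import numpy as np

def normalise_bispectrum(B, k1, k2, k3, P1, P2, P3):
    """Dimensionless bispectrum of Eq 5, with P_i = P(k_i)."""
    return B / np.sqrt(P1 * P2 * P3 / (k1 * k2 * k3))
```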
## 3 Results
### Density Bispectrum
Figure 2 shows the evolution of isosceles bispectra of the gas density from redshift \(z=9.02\) to 5.26. The isosceles triangle configuration is used for four representative values
Figure 2: The normalised isosceles gas density bispectrum from redshift \(z=9.02\) to 5.26 for \(k_{1}=0.2,0.5,0.75\), and 1 cMpc\({}^{-1}\). The amplitude of the density bispectrum grows with time due to the increasing non-Gaussianity that accompanies the formation of structures. The amplitude also increases with \(k_{1}\), as small scales are more non-Gaussian than larger scales.
of \(k_{1}=0.2,0.5,0.75\), and \(1\) cMpc\({}^{-1}\). For each value of \(k_{1}\) (\(=k_{2}\)), the figure shows the range of values of \(k_{3}\) available in the simulation box, between 0.08 and 2 cMpc\({}^{-1}\), depending on \(k_{1}\). We have normalised the bispectra using Equation 5, and the box has been reduced to a resolution of \(256^{3}\) for computational ease (see Appendix A for a comparison with results from the higher-resolution box). The normalised density bispectrum is of the order of a few units, and grows with time due to an increase in the non-Gaussianity induced by structure formation. The normalised bispectrum also increases in amplitude with increasing \(k_{1}\), as small scales are more non-Gaussian than larger scales.
### Neutral Hydrogen Fraction Bispectrum
Figures 3 and 4 show, respectively, the unnormalised and normalised bispectrum of the neutral hydrogen fraction at various redshifts between \(z=9.02\) and 5.26, at \(k_{1}=0.2,0.5,0.75\) and \(1\) cMpc\({}^{-1}\). Each of the two figures shows the bispectrum for both of our reionization models in separate panels. Apart from the difference in the redshift of reionization, the bispectra in the two reionization models are qualitatively similar, only shifted in redshift (time).
Recall that \(\cos\theta=0.5\) denotes the equilateral triangle configuration. Values of \(\cos\theta\) from \(-1\) to 0.5 correspond to
Figure 4: The normalized bispectrum of the neutral hydrogen fraction \(x_{\rm HI}\) from redshift \(z=9.02\) to 5.26 at \(k_{1}=0.2\) cMpc\({}^{-1}\), 0.5 cMpc\({}^{-1}\), 0.75 cMpc\({}^{-1}\) and 1.0 cMpc\({}^{-1}\) (left to right) as function of \(\cos\theta\) or \(k_{3}\) in the late (top panel) and early (bottom panel) reionization models.
Figure 3: The un-normalized bispectrum of the neutral hydrogen fraction \(x_{\rm HI}\) from redshift \(z=9.02\) to 5.26 at \(k_{1}=0.2\) cMpc\({}^{-1}\), 0.5 cMpc\({}^{-1}\), 0.75 cMpc\({}^{-1}\) and 1.0 cMpc\({}^{-1}\) (left to right) as function of \(\cos\theta\) or \(k_{3}\) in the late (top panel) and early (bottom panel) reionization models.
the stretched limit, and those from 0.5 to 1 correspond to the squeezed limit. A large positive value of the equilateral bispectrum indicates an overabundance of roughly spherical structures of higher-than-average values of \(x_{\rm HI}\) (neutral regions) embedded in a background of lower-than-average values of \(x_{\rm HI}\) (ionized regions) (Lewis, 2011; Hutter et al., 2020). A large negative value indicates an overabundance of below-average structures embedded in an above-average background. This allows us to interpret the evolution that we see in Figure 3.
At high redshifts, the \(x_{\rm HI}\) distribution has 'holes' of below-average values (the ionized regions), yielding negative values for almost all \(k\) modes and triangle configurations. However, the bispectra of very small-scale (large \(k\)) stretched-limit triangles are positive. These triangles correspond to an overabundance of small-scale, above-average filamentary structure: perhaps the small-scale neutral 'valleys' between spherical ionized bubbles.
With the progress of reionization, the ionized regions become larger and start merging. As reionization crosses the half-way point, the distribution of \(x_{\rm HI}\) now has 'islands' of above-average values, yielding a positive value of the bispectrum.
Figure 5: The un-normalized bispectrum of the 21-cm brightness temperature (\(\Delta T_{\rm b}\)) from redshift \(z=9.02\) to 5.26 at \(k_{1}=0.2\) cMpc\({}^{-1}\), 0.5 cMpc\({}^{-1}\), 0.75 cMpc\({}^{-1}\) and 1.0 cMpc\({}^{-1}\) (left to right) as function of \(\cos\theta\) or \(k_{3}\) for late (top panel) and early (bottom panel) reionization models.
Figure 6: The normalized bispectrum of the 21-cm brightness temperature (\(\Delta T_{\rm b}\)) from redshift \(z=9.02\) to 5.26 at \(k_{1}=0.2\) cMpc\({}^{-1}\), 0.5 cMpc\({}^{-1}\), 0.75 cMpc\({}^{-1}\) and 1.0 cMpc\({}^{-1}\) (left to right) as function of \(\cos\theta\) or \(k_{3}\) for late (top panel) and early (bottom panel) reionization models.
At the mid-point of reionization, even though roughly half of the volume is occupied by ionized IGM and half by neutral IGM, the ionized regions are more nearly spherical than the neutral regions, which have more irregular shapes. Therefore, we see that the stretched- and squeezed-limit bispectra start becoming positive at lower redshifts, while the small-scale (large \(k\)) equilateral bispectra still remain negative. The latter are a signature of small-scale spherical ionized regions embedded in neutral IGM. This signature persists even during the later half of reionization. At large scales (small \(k\)), on the other hand, the bispectrum is positive for all configurations during the later part of reionization. This suggests that, on large scales, there are now above-average neutral regions embedded in below-average ionized IGM.
In the stretched limit, \(\cos\theta\rightarrow-1\), the bispectrum is sensitive to the abundance of the neutral hydrogen 'pancakes' that demarcate ionized bubbles. Such pancake-like boundary surfaces are the over-abundant structures at all redshifts. Consequently, the bispectrum for \(\cos\theta\rightarrow-1\) is positive at all redshifts after the start of reionization.
In the squeezed limit, \(\cos\theta\to 1\), a positive value of the bispectrum indicates an overabundance of small-scale positive perturbations in the \(x_{\rm HI}\) distribution and a large-scale modulation of these perturbations. As soon as the ionized regions grow to a reasonable size, the distribution of \(x_{\rm HI}\) inside the large ionized regions is trivially uniform, but that in the leftover neutral regions has small-scale perturbations due to smaller ionized bubbles. The bispectrum turns positive when this happens.
At the end of reionization, at redshifts of \(z\sim 5.4\), the bispectrum shows complex features that are sensitive to the morphology of the residual \(x_{\rm HI}\) islands. In Figure 4, the normalisation accentuates the features in the bispectrum that we see in Figure 3. This is most pronounced for the oscillating features at redshift \(z\sim 5.4\). These oscillations are a qualitatively distinct signature that seems to mark the end phases of reionization in both early and late reionization models, and so could be a useful smoking gun for identifying the redshift of reionization from observations.
### 21-cm Bispectrum
Figures 5 and 6 respectively show the unnormalised and normalised isosceles bispectrum of the 21-cm brightness temperature for the same redshift range and the \(k\) values as in Figures 3 and 4. For approximately equilateral triangles, the bispectrum is negative in the early stages of reionization, changes sign when reionization is half complete, and then is positive at lower redshift, before settling to zero in the post-reionization era. The bispectrum is positive at almost all redshifts in the stretched limit. Similarly, the bispectrum in the squeezed limit also stays positive at all but the very early stages of reionization. At the end of reionization, for \(z\sim 5.4\), the bispectrum shows a set of complex features that map to the 21-cm brightness structure of the residual neutral hydrogen islands in the voids.
It is noteworthy that, similar to the 21-cm power spectrum explored in our previous work (Raste et al., 2021), the predicted bispectrum is very large at \(z<6\) in the late reionization model, while the bispectrum at these redshifts is zero in the fiducial early reionization model. This is due to the persistence of neutral hydrogen islands at these redshifts in the late reionization model. This signal in the 21-cm bispectrum is directly induced by the opaque regions seen in the Ly\(\alpha\) forest at these redshifts.
In the absence of spin temperature fluctuations, most of the brightness temperature bispectrum is induced by the fluctuations in \(x_{\rm HI}\) in the range of redshifts and wavenumbers considered here. Therefore, we see that the brightness temperature bispectrum follows a very similar trend to the bispectrum of the neutral hydrogen fraction. To study this effect more quantitatively, we break down our \(\Delta T_{\rm b}\) bispectra into various auto- and cross-bispectrum components of the density and neutral fraction. Ignoring redshift-space distortions, the 21-cm signal at any redshift can be written as,
\[\Delta T_{\rm b}(z)=(1+\delta_{\rm D})x_{\rm HI}T_{0}(z), \tag{6}\]
where \(T_{0}(z)\) is the base 21-cm signal at redshift \(z\), and there are spatial fluctuations due to the density (\(\delta_{\rm D}\)) and the neutral hydrogen fraction (\(x_{\rm HI}\)). The average 21-cm signal is,
\[\overline{\Delta T_{\rm b}}=T_{0}\langle(1+\delta_{\rm D})x_{\rm HI}\rangle. \tag{7}\]
Figure 7: Unnormalised cross bispectra components, multiplied by their respective weights, from Eq 9 for equilateral (left) and isosceles (right, at \(k_{1}=0.5\) cMpc\({}^{-1}\)) bispectrum configuration at \(z=5.95\) for the late reionization model. We compare the weighted sum (gray dashed curves) of these components with the \(\Delta T_{\rm b}\) bispectra (thick black curves).
And its fluctuation is,
\[\delta_{\rm T_{b}}\,\overline{\Delta T_{\rm b}} = \Delta T_{\rm b}-\overline{\Delta T_{\rm b}} \tag{8}\] \[= T_{0}[x_{\rm HI}+\delta_{\rm D}x_{\rm HI}]-T_{0}\left[\langle x_{\rm HI}\rangle+\langle\delta_{\rm D}x_{\rm HI}\rangle\right]\] \[= T_{0}[(x_{\rm HI}-\langle x_{\rm HI}\rangle)+(\delta_{\rm D}x_{\rm HI}-\langle\delta_{\rm D}x_{\rm HI}\rangle)].\]
Defining \(\delta_{\rm HI}=(x_{\rm HI}-\langle x_{\rm HI}\rangle)/\overline{x_{\rm HI}}\) and \(\delta_{\rm D,HI}=(\delta_{\rm D}x_{\rm HI}-\langle\delta_{\rm D}x_{\rm HI}\rangle)/\overline{\delta_{\rm D}x_{\rm HI}}\), we have,
\[(\overline{\Delta T_{\rm b}})^{3}\langle\delta_{\rm T_{b}}\delta_{\rm T_{b}}\delta_{\rm T_{b}}\rangle = T_{0}^{3}\langle[\overline{x_{\rm HI}}\,\delta_{\rm HI}+\overline{\delta_{\rm D}x_{\rm HI}}\,\delta_{\rm D,HI}]^{3}\rangle \tag{9}\] \[= T_{0}^{3}\big{(}\overline{x_{\rm HI}}^{3}\langle\delta_{\rm HI}^{3}\rangle+\overline{\delta_{\rm D}x_{\rm HI}}^{3}\langle\delta_{\rm D,HI}^{3}\rangle\] \[\quad+3\,\overline{x_{\rm HI}}^{2}\,\overline{\delta_{\rm D}x_{\rm HI}}\,\langle\delta_{\rm HI}^{2}\delta_{\rm D,HI}\rangle\] \[\quad+3\,\overline{x_{\rm HI}}\,\overline{\delta_{\rm D}x_{\rm HI}}^{2}\,\langle\delta_{\rm HI}\delta_{\rm D,HI}^{2}\rangle\big{)}.\]
Notice that the density bispectrum does not contribute directly to the \(\Delta T_{\rm b}\) bispectra. We should also emphasise that while calculating cross-bispectra, the order of the fields is important in a non-equilateral configuration. For example, \(\langle\delta_{\rm D}\delta_{\rm HI}\delta_{\rm HI}\rangle\) might not be the same as \(\langle\delta_{\rm HI}\delta_{\rm D}\delta_{\rm HI}\rangle\) if \(\mathbf{k}_{1}\neq\mathbf{k}_{2}\).
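A cross-bispectrum variant of the estimator sketched in Section 2.1 makes the ordering issue explicit: the shell at \(k_{i}\) filters the \(i\)-th field, so permuting the fields between unequal arms changes the result. This is a sketch with our own names and normalisation, not the code used for the figures.

```python
import numpy as np

def cross_bispectrum(fields, box_len, ks, dk=0.05):
    """Cross-bispectrum of three (possibly distinct) fields; the shell at
    k_i filters the i-th field, so field order matters when the arms differ."""
    n = fields[0].shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_len / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    I, S = [], []
    for f, kc in zip(fields, ks):
        mask = np.abs(kmag - kc) <= dk / 2.0
        I.append(np.fft.ifftn(np.where(mask, np.fft.fftn(f), 0.0)))
        S.append(np.fft.ifftn(mask.astype(float)))
    num = np.sum((I[0] * I[1] * I[2]).real)
    den = np.sum((S[0] * S[1] * S[2]).real)
    return box_len**6 / n**9 * num / den

# cross_bispectrum([dD, dHI, dHI], L, (k1, k2, k3)) need not equal
# cross_bispectrum([dHI, dD, dHI], L, (k1, k2, k3)) when k1 != k2.
```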
In Figure 7 we show each of the (unnormalised) cross-bispectrum components, multiplied by its respective weight, from Eq 9 for the equilateral bispectrum configuration (left) and the isosceles configuration at \(k_{1}=0.5\) cMpc\({}^{-1}\) (right), at redshift \(z=5.95\). The gray dashed curve shows the weighted sum of these individual components. Comparing it with the \(\Delta T_{\rm b}\) bispectra, we see that this predicted bispectrum is slightly different from the computed bispectrum, since we have ignored here the effect of velocity fluctuations in Eq 6; but this effect is small. We also note that the bispectra of the various cross components fluctuate around zero much more than the auto-bispectra. These fluctuations occur where the non-Gaussianity is close to 0. However, the sum of these cross components does not show these fluctuations, as their effects average out. Finally, note that the \(\Delta T_{\rm b}\) bispectra have shapes very similar to the neutral fraction bispectra. Hence, in the absence of spin temperature fluctuations, the neutral fraction fluctuations dominate the 21-cm fluctuations.
Other than a few \(k\)-modes, which show positive fluctuations, the equilateral bispectrum of \(\Delta T_{\rm b}\) is negative at intermediate \(k\)-modes and positive at very large and very small \(k\)-modes. Also notice that for the equilateral configurations
Figure 8: Normalized bispectrum for neutral fraction (top panel) and brightness temperature (bottom panel) for the late (right panel) and early (left panel) reionization models at various redshifts for all triangle configurations.
\(\langle\delta_{\rm HI}\delta_{\rm HI}\delta_{\rm D,HI}\rangle\) and \(\langle\delta_{\rm HI}\delta_{\rm D,HI}\delta_{\rm HI}\rangle\), as well as \(\langle\delta_{\rm D,HI}\delta_{\rm D,HI}\delta_{\rm HI}\rangle\) and \(\langle\delta_{\rm HI}\delta_{\rm D,HI}\delta_{\rm D,HI}\rangle\), have very similar shapes, apart from a few fluctuations. However, their shapes are very different for the isosceles configurations.
### Bispectrum for generic triangle configurations
Moving from the isosceles triangle to more general triangle configurations, in Figure 8 we show the evolution of the normalized bispectra for all available triangle configurations for the \(256^{3}\) resolution cube of the neutral fraction (top panels) and brightness temperature (bottom panels), in the late (right) and early (left) reionization models.
The mapping between the triangle index and \(k\) values is as follows: Of the (\(k_{1}\), \(k_{2}\), \(k_{3}\)) triplet, two values are taken from the array (0.1, 0.2, 0.5, 0.75, 1.0 \({\rm cMpc^{-1}}\)), with \(k_{1}\geq k_{2}\). We choose the third \(k\) value with \(\theta/\pi\) from the array [0.01, 0.05, 0.1, 0.2, 0.33, 0.4, 0.5, 0.6, 0.7, 0.85, 0.95]. The (\(k_{1}\), \(k_{2}\), \(k_{3}\)) triplets are then sorted into ordered triplets (\(k_{a}\), \(k_{b}\), \(k_{c}\)) with \(k_{a}\geq k_{b}\geq k_{c}\). The triangle index of a given triplet is its rank in the sorted sequence, with \(k_{a}\) varying fastest. Indices \(>100\) roughly correspond to \(k>0.5\) \({\rm cMpc^{-1}}\). The fluctuations at higher triangle indices arise because the triangle configuration alternates between stretched-limit triangles, which correspond to large positive bispectrum values, and other configurations, including equilateral ones, which correspond to negative bispectrum values.
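The following sketch reproduces this triangle-index construction; the exact ordering of the sorted list is our reading of the description above, and the rounding tolerance is an arbitrary choice.

```python
import itertools
import numpy as np

def triangle_index_table():
    """Map sorted k-triplets to a triangle index, as described in the text."""
    k_arr = [0.1, 0.2, 0.5, 0.75, 1.0]                       # cMpc^-1
    t_arr = [0.01, 0.05, 0.1, 0.2, 0.33, 0.4, 0.5,
             0.6, 0.7, 0.85, 0.95]                           # theta / pi
    triplets = set()
    for k1, k2 in itertools.product(k_arr, k_arr):
        if k1 < k2:
            continue                                         # enforce k1 >= k2
        for t in t_arr:
            k3 = np.sqrt(k1**2 + k2**2 - 2 * k1 * k2 * np.cos(np.pi * t))
            ka, kb, kc = sorted((k1, k2, k3), reverse=True)  # ka >= kb >= kc
            triplets.add((round(kc, 4), round(kb, 4), round(ka, 4)))
    # lexicographic sort on (kc, kb, ka) makes the largest arm ka vary fastest
    return {trip: idx for idx, trip in enumerate(sorted(triplets))}
```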
We can see that in the early stages of reionization, the bispectra of the neutral fraction for various triangle configurations are negative. As reionization proceeds, they become positive. Towards the end of reionization, the amplitude of the fluctuations increases. The post-reionization bispectra have small amplitudes for small triangles (small \(k\)-modes, large scales), but become larger for larger triangle configurations (large \(k\)-modes, small scales). The brightness temperature bispectra behave very similarly; however, their amplitude is slightly more positive, reflecting the effect of the density bispectra. The post-reionization normalised \(\Delta T_{b}\) bispectra are usually larger than the reionization bispectra for all \(k\)-mode triangles and do not show strong fluctuations with triangle index.
### Untangling the bispectrum
To understand which features of our simulation box correspond to which details in the 21-cm bispectrum, we take our original ionization fraction box (Box 0) and construct several different modified boxes using the following prescription (a minimal code sketch follows the list):
* Box 0: Original
* Box 1: \(x_{\rm HII}<0.5\) is set to \(x_{\rm HII}=0\)
* Box 2: \(x_{\rm HII}\geq 0.5\) is set to \(x_{\rm HII}=1\)
* Box 3: Both of the above (essentially converting the box to a binary field of 0 and 1)
Figure 9: \(\Delta T_{b}\) boxes 0 to 5 for redshift \(z=5.96\) for the late reionization model. These boxes were computed after modifying the ionization field by hand. Box 0 is the original simulation box, whereas in Box 1, \(x_{\rm HII}<0.5\) regions are set to \(x_{\rm HII}=0\), and in Box 2, \(x_{\rm HII}\geq 0.5\) regions are set to \(x_{\rm HII}=1\). Box 3 has both these approximations, essentially converting the box to a binary field of 0 and 1. Finally, in Box 4, \(x_{\rm HII}\leq 0.5\) regions are set to \(x_{\rm HII}=1\) and in Box 5, \(x_{\rm HII}\geq 0.5\) regions are set to \(x_{\rm HII}=0\), which respectively removes neutral and ionized regions from the simulation box.
* Box 4: \(x_{\rm HII}\leq 0.5\) is set to \(x_{\rm HII}=1\)
* Box 5: \(x_{\rm HII}\geq 0.5\) is set to \(x_{\rm HII}=0\)
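These modifications amount to simple thresholding operations on the ionization field, e.g. (array name is ours):

```python
import numpy as np

def modified_boxes(x_HII):
    """The six ionization boxes defined above (Box 0 is the input)."""
    boxes = {0: x_HII.copy()}
    boxes[1] = np.where(x_HII < 0.5, 0.0, x_HII)    # zero out partial ionization
    boxes[2] = np.where(x_HII >= 0.5, 1.0, x_HII)   # saturate ionized regions
    boxes[3] = np.where(x_HII >= 0.5, 1.0, 0.0)     # binary field of 0 and 1
    boxes[4] = np.where(x_HII <= 0.5, 1.0, x_HII)   # remove neutral regions
    boxes[5] = np.where(x_HII >= 0.5, 0.0, x_HII)   # remove ionized regions
    return boxes
```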
In Figure 9, we show slices of the brightness temperature boxes created using these ionization boxes at redshift \(z=5.96\). We show unnormalised isosceles bispectra for these boxes at \(k_{1}=1\) cMpc\({}^{-1}\), for various redshifts, in Figure 10. Boxes 0 to 3 have very slight differences. Specifically, we can see that Box 0 and Box 3 have very similar shapes and magnitudes for almost all redshifts and almost all \(k\)-modes. This suggests that the bispectra depend mostly on the neutral vs ionized state of the IGM, and that the partially ionized regions do not affect the bispectra significantly. When we instead remove the neutral regions from the simulation box in Box 4, the shape of the bispectrum changes completely. Boxes 4 and 5 are mostly ionized and mostly neutral boxes, respectively, with only some partially ionized regions left. The bispectra of these regions are very small, as we see in the top panel of Figure 10. Interestingly, the bispectra of Box 4 and Box 5 seem to have similar shapes with inverted signs.
The brightness temperature bispectra of Box 4 have flat, positive values at all redshifts, which match the post-
Figure 11: SKA1-LOW sensitivity (red curves) for 1080 hours of tracking mode observation with optimistic foreground removal at redshift \(z=5.96\) (\(\nu=204\) MHz) and \(k_{1}=0.2\), 0.5, 0.75 and 1.0 cMpc\({}^{-1}\) from left to right. We compare it with our late reionization model (black curves). The positive and negative sensitivity curves are the 1-\(\sigma\) upper and lower bounds for the real part of noise bispectra.
Figure 10: Unnormalised isosceles bispectra of \(x_{\rm HI}\) (top) and \(\Delta T_{\rm b}\) (bottom) for the late reionization model at \(k_{1}=1\) cMpc\({}^{-1}\) for boxes 0 to 5 and redshifts 7.14, 5.95, 5.41 and 5.26 from left to right. Boxes 0 to 3 have very slight differences, which suggests that the bispectra depend mostly on the neutral vs ionized state of the IGM, and the partially ionized regions do not affect the bispectra significantly. Box 4 has large, flat, positive \(\Delta T_{\rm b}\) bispectra at all redshifts, which match the post-reionization bispectra in the last panel. In this box, we have removed the effect of neutral regions. Therefore, the post-EoR bispectra are an effect of residual ionization, as expected. The \(\Delta T_{\rm b}\) bispectrum of Box 5 is in essence the gas density bispectrum.
reionization bispectra in the last panel. In this box, we have removed the effect of neutral regions. Therefore, the post-EoR bispectra are the effect of residual ionization, as expected. Similarly, the bispectra of Box 5 have the same magnitude and sign as the bispectra from the very early stage of reionization, when \(T_{\rm S}\gg T_{\rm CMB}\) and \(x_{\rm HI}\sim 1\) (not shown in this work). The \(\Delta T_{\rm b}\) fluctuations in this scenario should be dominated by the density fluctuations.
## 4 Prospects of detection
For a preliminary study of the prospects of detecting the 21-cm bispectrum modelled above, we use the 21cmSense3 (Pober et al., 2013, 2014) and PyOns21 (Watkinson et al., 2022) codes to model the bispectrum covariance induced by instrument noise for SKA1-LOW. Our assumed telescope parameters are the same as in Raste et al. (2021). We assume a tracking mode of 6 hr, at \(z\sim 6\) (\(\nu_{o}=204\) MHz), for 180 days. We include all \(k\)-modes in our calculation (using the 'opt' foreground mode), assuming that the foreground contamination in the bispectrum will be less severe than for the power spectrum. Using these noise estimates, we generate 100 noise boxes of \(128^{3}\) resolution elements with different seeds and calculate their bispectra. In Figure 11, we show the 1-\(\sigma\) variance of these noise bispectra and compare it with the signal at redshift \(z=5.96\) for various \(k\)-modes4. We see that the bispectrum sensitivity is better at lower \(k\)-modes (larger scales). These results suggest that SKA1-LOW should be able to detect the 21-cm isosceles bispectra at small \(k\)-modes with \(\sim 1000\) hours of tracking-mode observation. These observations would help us understand the history of reionization, as well as the geometry of neutral islands at the end of reionization.
Footnote 3: [https://github.com/jpober/21cmSense](https://github.com/jpober/21cmSense)
Footnote 4: The low resolution (\(128^{3}\)) boxes do not have very large \(k\)-modes. Therefore, the last panel of Figure 11 does not have bispectra at large \(k_{3}\) values.
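The 1-\(\sigma\) bounds shown in Figure 11 can be obtained from the ensemble of noise realizations as in the following sketch, where `estimator` stands for any bispectrum estimator such as the one sketched in Section 2.1 (names are ours):

```python
import numpy as np

def noise_bounds(noise_boxes, estimator):
    """1-sigma upper/lower bounds on the real part of the noise bispectrum
    from an ensemble of noise realizations (100 boxes in the text)."""
    vals = np.array([estimator(box) for box in noise_boxes])
    mu, sig = vals.mean(), vals.std(ddof=1)
    return mu - sig, mu + sig
```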
## 5 Conclusions
We have computed the 21-cm bispectrum in simulations of cosmic reionization that are consistent with the observed Lyman-\(\alpha\) forest at \(z>5\). Our findings can be summarised as follows:
* A late end of reionization causes the 21-cm bispectrum and the neutral hydrogen fraction bispectrum to have large values at \(z<6\) (frequencies greater than 200 MHz). This is in contrast to traditional reionization models, which typically predict zero bispectrum and power spectrum at these frequencies.
* The neutral fraction bispectrum is negative at all scales during the early stages of reionization. At later stages, the bispectrum starts to become positive at small scales (large \(k\)). Towards the end of reionization, the bispectra are positive at all scales for squeezed and stretched limit triangles. But they remain negative around the equilateral configuration at small scales (large \(k\)). This suggests that most of the non-spherical, "elongated" features have the geometry of above-average regions (neutral islands) embedded in the below-average background (ionized regions), but there are still some small spherical below-average regions (ionization bubbles) embedded within above-average regions (neutral islands).
* The 21-cm bispectrum follows the trends seen in the neutral fraction bispectrum.
* For generic triangle configurations, the normalised bispectra of the neutral fraction are negative at high redshifts. They then turn positive with the progress of reionization. The \(\Delta T_{\rm b}\) bispectra show very similar behaviour; however, their amplitude is slightly more positive, reflecting the effect of the density bispectra. This effect of the density is most readily observed in the post-reionization brightness temperature bispectra, which are usually larger than the reionization bispectra for all \(k\)-mode triangles and do not show strong fluctuations with triangle index.
* Partially ionised regions do not affect the shape of \(x_{\mathrm{HI}}\) or 21-cm bispectra significantly. However, removing large ionised regions or neutral islands will completely change the shape, amplitude and sign of bispectra at any redshift.
* With about 1,000 hr of tracking-mode observation, SKA1-LOW should be able to detect the bispectrum at \(z\sim 6\) for small \(k\)-modes (large scales).
Overall, this work adds to the realization that statistics beyond the power spectrum can be very useful for understanding the reionization history from future 21-cm observations. Similar to our previous work on the 21-cm power spectrum from the late reionization model, this work also highlights the importance of relatively high frequencies, corresponding to redshifts \(z<6\), for late reionization. Combining these 21-cm bispectrum and power spectrum results with the _James Webb Space Telescope (JWST)_ and _Nancy Grace Roman Space Telescope_ should lead to further insights into reionization physics.
## Acknowledgements
We acknowledge useful discussions with Basudeb Dasgupta, Suman Majumdar, Rishi Khatri, Rajesh Mondal and Somnath Bharadwaj. GK gratefully acknowledges support by the Max Planck Society via a partner group grant. This work used the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). This work further used the COSMA Data Centric system operated by Durham University on behalf of the STFC DiRAC HPC Facility. This equipment was funded by a BIS National E-infrastructure capital grant ST/K00042X/1, DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the UK's National E-Infrastructure. MH is supported by STFC (grant number ST/N000927/1). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
## Data Availability
Data shown in various figures will be made available upon reasonable request.
|
2310.13556 | Flow techniques for non-geometric RDEs on manifolds | In 2015, Bailleul presented a mechanism to solve rough differential equations
by constructing flows, using the log-ODE method. We extend this notion in two
ways: On the one hand, we localize Bailleul's notion of an almost-flow to solve
RDEs on manifolds. On the other hand, we extend his results to non-geometric
rough paths, living in any connected, cocommutative, graded Hopf algebra. This
requires a new concept, which we call a pseudo bialgebra map. We further
connect our results to Curry et al (2020), who solved planarly branched RDEs on
homogeneous spaces. | Hannes Kern, Terry Lyons | 2023-10-20T14:57:57Z | http://arxiv.org/abs/2310.13556v3 | # Flow techniques for non-geometric RDEs on manifolds
###### Abstract.
In 2015, Bailleul presented a mechanism to solve rough differential equations by constructing flows, using the log-ODE method. We extend this notion in two ways: On the one hand, we localize Bailleul's notion of an almost-flow to solve RDEs on manifolds. On the other hand, we extend his results to non-geometric rough paths, living in any connected, cocommutative, graded Hopf algebra. This requires a new concept, which we call a pseudo bialgebra map. We further connect our results to Curry et al. (2020), who solved planarly branched RDEs on homogeneous spaces.
###### Contents
* 1 Introduction
* 2 Local flows and almost-flows
* 3 Preliminaries
* 4 Pseudo bialgebra maps and constructing local flows from rough paths
* 5 Constructing elementary differentials
* 6 Discussion of rough paths on manifolds
* A The push-forward factorizes
## 1. Introduction
### The story in a nutshell
We are interested in analyzing the flow operation associated with a rough differential equation: Consider a (for now geometric) rough path \(\mathbb{X}\) and a set of vector fields \(V_{i}\) on \(\mathbb{R}^{d}\). Then, the solution to the RDE
\[dY_{t}=\sum_{i=1}^{d}V_{i}(Y_{t})d\mathbb{X}_{t}^{i} \tag{1.1}\]
follows the Davie's series [10],[11]:
\[Y_{t}=Y_{s}+V_{i}(Y_{s})X_{s,t}^{i}+V_{i}V_{j}\mathrm{Id}(Y_{s})\mathbb{X}_{s,t}^{i,j}+o(|t-s|^{3\alpha})\,, \tag{1.2}\]
if \(\mathbb{X}\) is \(\alpha\)-Holder continuous. Thus, the flow operator of the solution \(Y_{t}\), given by \(\mu_{s,t}(y)=Y_{t}\) for \(y=Y_{s}\) has the local expansion
\[\mu_{s,t}=\mathrm{Id}+X_{s,t}^{i}V_{i}+\mathbb{X}_{s,t}^{i,j}V_{i}V_{j}+\ldots\,, \tag{1.3}\]
where one recovers (1.2) by applying \(\mu_{s,t}\) to the identity map \(\mathrm{Id}\). We want to analyze this perspective: We show that Davie's formula is a direct result of the interaction between a rough path and what we call a _pseudo bialgebra map_ (see Def 1.1). For geometric RDEs, this directly leads to the perspective of [1]. For non-geometric rough paths [12],[13] one only needs to identify a new pseudo bialgebra map and apply the same machinery to get a Davie's formula. Furthermore, this approach easily extends to RDEs on manifolds, which gives us a new framing for non-geometric rough paths on manifolds. We especially show that our techniques generalize the existing approaches in the following ways:
* We extend the flow techniques presented in [1] to non-geometric rough paths. This requires us to replace the higher-order differential operator \(\mathbb{X}^{ij}_{s,t}V_{i}V_{j}\) with appropriate differential operators \(\mathbb{X}^{\tau}_{s,t}D^{\tau}\), where the \(\tau\) form a basis of the more general Hopf algebra.
* By using vector fields \(V_{i},i=1,\ldots,n\) living on a manifold \(M\), we can extend this machinery to solve RDEs on manifolds. If \(\mathbb{X}\) is geometric, this approach works for all manifolds \(M\), whereas a branched rough path requires a connection \(\nabla\) on \(M\) (see [1]). We also analyze the case in which \(\mathbb{X}\) lives in any connected, graded, cocommutative Hopf algebra, which leads us to the previously mentioned pseudo bialgebra maps.
* This perspective allows us to reframe existing results, focusing on non-geometric RDEs on manifolds. We explain this new approach and show that it generalizes [1], [1] and [1].

Before we explain our approach, let us take a look at already existing results: While flow techniques for rough paths have been known since the nineties [1], they got their most concise presentation somewhat recently in [1]. In this paper, Bailleul presents a simple machinery to derive the operator (1.3) with the use of a log-ODE method [1]. If \(\mathbb{L}_{s,t}\) is the logarithm (in the Lie sense) of \(\mathbb{X}_{s,t}\), it is possible to associate \(\mathbb{L}_{s,t}\) with a vector field \(\mathcal{F}(\mathbb{L}_{s,t})\) in a canonical way. For small \(|t-s|\), one can then find the solution to the starting value problem
\[\begin{cases}dZ_{r}=\mathcal{F}(\mathbb{L}_{s,t})(Z_{r})\\ Z_{0}=z\end{cases}\]
and set \(\tilde{\mu}_{s,t}z=Z_{1}\). [1] then introduces a sewing lemma for almost-flows to "sew" together \(\tilde{\mu}\) to get the solution flow \(\mu\), leading to the actual solution of the RDE (1.1). He later extended this technique in the follow-up paper [1], where the authors introduced the notion of a rough flow.
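To make the log-ODE step concrete, the following is a minimal Python sketch for a level-2 geometric rough path: since the symmetric part of the second level is determined by the first, only the antisymmetric part of \(\mathbb{X}^{2}_{s,t}\) enters \(\mathbb{L}_{s,t}\), and \(\mathcal{F}(\mathbb{L}_{s,t})\) is built from the \(V_{i}\) and their Lie brackets. The RK4 integrator, the number of steps, and all names are our choices, not Bailleul's implementation.

```python
import numpy as np

def log_ode_step(z, X1, X2, V, JV, n_rk=8):
    """One log-ODE step for a level-2 geometric rough path (a sketch).

    X1: increment X_{s,t} in R^d; X2: second level as a d x d array;
    V:  list of vector fields R^n -> R^n; JV: their Jacobians.
    Builds F(L_{s,t}) = sum_i X1[i] V_i + sum_{i<j} A[i,j] [V_i, V_j],
    A being the antisymmetric part of X2, then solves dZ = F(L)(Z),
    Z(0) = z over [0, 1] with RK4 and returns an approximation of Z(1).
    """
    d = len(V)
    A = 0.5 * (X2 - X2.T)          # level-2 part of the logarithm

    def bracket(i, j, y):          # Lie bracket [V_i, V_j](y)
        return JV[j](y) @ V[i](y) - JV[i](y) @ V[j](y)

    def W(y):                      # the vector field F(L_{s,t})
        out = sum(X1[i] * V[i](y) for i in range(d))
        for i in range(d):
            for j in range(i + 1, d):
                out = out + A[i, j] * bracket(i, j, y)
        return out

    y, h = np.asarray(z, dtype=float), 1.0 / n_rk
    for _ in range(n_rk):
        k1 = W(y)
        k2 = W(y + 0.5 * h * k1)
        k3 = W(y + 0.5 * h * k2)
        k4 = W(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y
```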
For non-geometric rough paths, flow techniques are not standard at the moment, although they are tremendously helpful in generalizing (1.2) to RDEs on manifolds. For branched rough paths, originally introduced in [1], the Davie's formula was shown in [1]:
\[Y_{t}\approx\sum_{\tau}X^{\tau}_{s,t}(\mathcal{F}(\tau)\mathrm{Id})(Y_{s}) \tag{1.4}\]
where the sum goes over all rooted trees up to a specific number of nodes, and for certain differential operators \(\mathcal{F}(\tau)\) depending on the vector fields \(V_{1},\ldots,V_{d}\). The differential operators \(\mathcal{F}(\tau)\) first appeared in the theory of Butcher series, where they are called elementary differentials of \(V_{1},\ldots,V_{d}\) (a term going back to Butcher [1] and Merson [1]; more recent references include [1], [1]). This theory has been extended to RDEs on manifolds in [1], by replacing \(\mathrm{Id}\) with some smooth map \(\phi:M\to\mathbb{R}^{d}\). Here, Weidner makes the observation that (1.4) does lead to coordinate-dependent solutions when one takes the classical elementary differentials. However, they also demonstrate that one can construct different elementary differentials as long as the manifold is equipped with a flat, torsion-free connection or one has non-planarly branched rough paths, see Remark 4.35 of [1]. We will analyze this map in more detail in Section 5.4.

For the level \(N=2\) case, one can get rid of the assumption that the connection is flat and torsion-free: In [1], the authors use the theory of Ito integrals on manifolds [1], [1] as a motivation to add a correction term (similar to the Ito-Stratonovich correction term) to the rough-path integral to construct a coordinate-independent integral, as long as the manifold is equipped with a connection. In the paper, they also derive a Davie's formula, allowing us to compare their results with our approach.

Similar results are well known in the Butcher-series theory: If \(X\) is simply a smooth path, all of the above still holds true, as long as we replace the rough path with its signature. The theory of Butcher series then tells us how to choose \(\mathcal{F}(\tau)\) to get the correct solution on \(M\), as long as it is a homogeneous space.
Curry et al. [3] managed to use this insight to generalize this approach to the (to our knowledge) most general setting currently known: If \(\mathbb{X}\) is a planarly branched rough path and \(M\) is a homogeneous space, we can use the post-Lie algebra structure of the vector fields of \(M\) to construct the differential operators \(\mathcal{F}(\tau)\) for each tree \(\tau\) and solve the RDE.
The main structure of these approaches can be summarized into two main components: One first needs to identify the differential operators \(\mathcal{F}(\tau)\) and then argue that the Davie's formula (1.4) indeed generates a flow \(\mu_{s,t}\). In this paper, we identify the minimal assumption on a map \(\mathcal{F}\), mapping the underlying Hopf algebra of \(\mathbb{X}\) into differential operators, such that \(\mu_{s,t}\) becomes a flow. Interestingly, there seem to be two approaches to construct \(\mu_{s,t}\) from its approximation (1.4): Curry et al. prefer to treat \(\mathbb{X}\) as an infinite series (that is, they do not truncate the underlying Hopf algebra), and show that the Taylor expansion converges. On the other hand, Bailleul et al. construct the flow \(\mu_{s,t}\) with the use of a sewing lemma and log-ODE techniques, as described above. We will follow the second approach, as it avoids assumptions on the asymptotic dimension of the Hopf algebra.
To do so, we extend the sewing lemma for flows ([1] Theorem 2.1) to a local setting, which allows us to use it on any finite-dimensional manifold. This allows us to find the minimal assumption on \(\mathcal{F}\) to make this machinery work, which we call a _pseudo bialgebra map_.
**Definition 1.1**.: \(\mathcal{F}\) is called a pseudo bialgebra map, if \(\mathcal{F}(\mathbb{1})=\mathrm{Id}\) and
\[\mathcal{F}(\tau\star\sigma) =\mathcal{F}(\tau)\circ\mathcal{F}(\sigma)\] \[\mathcal{F}(\Delta\tau)(\phi\otimes\psi) =\mathcal{F}(\tau)(\phi\cdot\psi)\,.\]
I.e. we want \(\mathcal{F}\) to be an algebra morphism as a map from \((\mathcal{H},\star)\) to the space of differential operators equipped with composition as a product, and we want the coproduct on \(\mathcal{H}\) to be "dual" to the product between functions.
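For a single letter \(e_{i}\) of the (geometric) shuffle Hopf algebra, where \(\Delta e_{i}=e_{i}\otimes\mathbb{1}+\mathbb{1}\otimes e_{i}\), the coproduct identity reduces to the Leibniz rule \(V_{i}(\phi\cdot\psi)=(V_{i}\phi)\psi+\phi(V_{i}\psi)\). The following sympy snippet verifies this for an arbitrary choice of vector field and test functions; all concrete expressions are ours, purely for illustration.

```python
import sympy as sp

# Check F(Delta e_i)(phi (x) psi) = F(e_i)(phi * psi) for the shuffle
# Hopf algebra, i.e. the Leibniz rule for the first-order operator V_i.
x, y = sp.symbols("x y")
V = (sp.sin(x * y), x**2 + y)        # an arbitrary vector field on R^2

def apply_V(f):                      # the differential operator F(e_i)
    return V[0] * sp.diff(f, x) + V[1] * sp.diff(f, y)

phi, psi = sp.exp(x) * y, x + sp.cos(y)
lhs = apply_V(phi * psi)
rhs = apply_V(phi) * psi + phi * apply_V(psi)
assert sp.simplify(lhs - rhs) == 0   # Leibniz rule holds identically
```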
Using the log-ODE method gives us two important insights into \(\mathcal{F}\): On the one hand, the log-ODE method only depends on \(\mathcal{F}\) restricted to the _primitive elements_ \(\mathcal{P}\) of the Hopf algebra \(\mathcal{H}\). This is not very surprising, as the Theorem of Milnor-Moore gives us a \(1-1\) correspondence between the Hopf algebra \(\mathcal{H}\) and the Lie algebra of its primitive elements \(\mathcal{P}\) [13]. However, this does give us an alternative perspective on the pseudo bialgebra maps \(\mathcal{F}\), as they are in a \(1-1\) correspondence to Lie algebra maps mapping \(\mathcal{P}\) to vector fields. Furthermore, the log-ODE method does not rely on the post-Lie structure of the vector fields on \(M\), which allows us to generalize the results from [3] to general manifolds, as long as they are equipped with a connection. In this case, we can write down the map \(\mathcal{F}\) explicitly, leading to the map from [14]: If our Hopf algebra is the Munthe-Kaas-Wright algebra [15] \(\mathcal{H}_{MKW}\) over ordered forests, the map \(\mathcal{F}\) mapping dual forests of \(\mathcal{H}_{MKW}\) to differential operators is given by
\[\mathcal{F}(\bullet_{i})\psi =V_{i}\psi\] \[\mathcal{F}(\tau_{1},\dots,\tau_{k})\psi =\nabla^{k}\psi(\mathcal{F}(\tau_{1}),\dots,\mathcal{F}(\tau_{k}))\] \[\mathcal{F}([\tau_{1},\dots,\tau_{k}]_{i}) =\nabla^{k}V_{i}(\mathcal{F}(\tau_{1}),\dots,\mathcal{F}(\tau_{k}))\,,\]
where \(\tau_{1},\dots,\tau_{k}\) are rooted, ordered trees and \([\tau_{1},\dots,\tau_{k}]_{i}\) denotes the rooted, ordered tree with root \(i\) and children \(\tau_{1},\dots,\tau_{k}\). \(\nabla^{k}\) denotes the \(k\)-th total covariant derivative. We show that the above map is a pseudo bialgebra map generalizing the post-Lie map used in [3].
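In local coordinates with Christoffel symbols \(\Gamma^{k}_{ij}\), the first covariant derivative appearing in this map can be sketched as follows; the index conventions and names are our choices.

```python
import numpy as np

def nabla(U, W, JW, Gamma, y):
    """(nabla_U W)^k = U^i d_i W^k + Gamma^k_{ij} U^i W^j at the point y.

    U, W: vector fields as callables; JW: Jacobian of W; Gamma(y) is the
    array of Christoffel symbols with shape (n, n, n), indexed [k, i, j].
    E.g. the one-node grafting F([dot_j]_i) = nabla_{V_j} V_i is
    nabla(V[j], V[i], JV[i], Gamma, y).
    """
    u, w = U(y), W(y)
    return JW(y) @ u + np.einsum("kij,i,j->k", Gamma(y), u, w)
```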
Last but not least, we briefly discuss if the solution \(Y\) can be seen as a rough path itself. The short answer to the question is _only in the geometric case_, which is already well known. However, we show a shorter proof than the classical one using only algebraic identities to show that in the geometric case, \(Y\) becomes a rough path on \(M\). The non-geometric case remains an interesting field for further research, although we recommend [11] for current advances in that direction.
_Remark 1.2_.: We will use the \(\alpha\)-Holder setting for rough paths with \(\alpha\in(0,1)\), as it is more common in this area and allows us to link our results to compare our results with [1], [1], [1] and more. However, we heavily suspect that our results hold just as well in the \(p\)-variation setting.
### The structure of this paper
Since the concept of an almost-flow can be developed independently of rough path theory, we start this paper by introducing the concept of _local flows_ in Section 2. This construction allows us to extend the results of [1] from flows on Banach spaces to flows on manifolds.
In Section 3, we recall some facts about Hopf algebras and rough path theory and fix our notation. We also use this chance to briefly discuss the three most commonly used Hopf algebras in rough path theory: The tensor algebra for geometric rough paths, the Grossman-Larson algebra for non-planarly branched rough paths, and the Munthe-Kaas-Wright algebra for planarly branched rough paths. Finally, we recall some notations from differential geometry and linear connections.
Afterward, we introduce the concept of a _pseudo bialgebra map_ in Section 4, which is the most general map \(\mathcal{F}\) from a Hopf algebra into the set of differential operators on \(M\), such that the log-ODE generates an almost-flow. We further show that pseudo bialgebra maps are in a 1-1 correspondence to Lie algebra maps mapping primitive elements to vector fields. Finally, we show that a rough path and a pseudo bialgebra map do indeed generate a local almost-flow on \(M\), which has the Davie's formula (1.4).
In section 5, we go over the commonly used techniques to construct elementary differentials on the tensor algebra, the GL-algebra as well as the MKW-algebra. We show that all of these techniques generate pseudo bialgebra maps on their respective algebras and further show that on the MKW-algebra, the construction of \(\mathcal{F}\) can be generalized to all smooth manifolds equipped with a connection. This also allows us to relate the results from [10] with [1], who constructed rough path solutions on manifolds without a post-Lie structure.
_Remark 1.3_.: [1] only solves the RDE (1.1) for level \(N=2\). It has been generalized to higher levels in [12] for quasi-geometric rough paths. We are convinced that there is another pseudo bialgebra map on the quasi-shuffle algebra hiding in this result, but at this moment, we are not sure what exactly this map looks like.
One advantage [1] has over our theory is that it can make sense of the solution of (1.1) as a rough path on a manifold. In general Hopf algebras, it is not even clear how to define a rough path on a manifold. However, we do briefly discuss how to construct this for the geometric case in Section 6, and present a new, mainly algebraic proof that the solution to (1.1) is a rough path on \(M\).
## 2. Local flows and almost-flows
The goal of this section is to construct flows from _almost-flows_, a concept introduced in [1]. The main idea is that for a given almost-flow \(\mu_{s,t}\), one can sew together a flow-map
\[\eta_{s,t}=\mu_{t_{N-1},t_{N}}\circ\cdots\circ\mu_{t_{1},t_{2}}\,, \tag{2.1}\]
where \(s=t_{1}<\cdots<t_{N}=t\) is a dyadic partition of \([s,t]\). Given a manifold \(M\), we can express any flow in an open subset \(U\subset\mathbb{R}^{d}\) by using coordinates. So if we can guarantee that we do not "fall off" \(U\) during the sewing procedure, we could apply the sewing lemma for flows to get a flow \(\eta_{s,t}\) for small enough \(|t-s|\). Unfortunately, we cannot give this guarantee a priori: If the control \(X\) of (1.1) is \(\alpha\)-Holder continuous, it is reasonable to assume that \(|\mu_{s,t}x-x|\lesssim|t-s|^{\alpha}\). Taking the \(N\)-th dyadic partition of \([s,t]\), it then follows that
\[\left|\bigcirc_{i=1}^{2^{N}}\mu_{t_{i-1},t_{i}}x-x\right|\lesssim\sum_{i=1}^{2^{N}}2^{-N\alpha}\left|t-s\right|^{\alpha}\to\infty\]
as \(N\to\infty\), so one really needs to invoke the almost-sewing properties of the almost-flow \(\mu\). We will show in subsection 2.1 that these assumptions indeed suffice to do the sewing in any open set \(U\), provided that \(|t-s|\) is small enough. In subsection 2.2, we will then use this to conclude that we can sew any flow on the manifold.
We should note that we will be working in a finite-dimensional setting. This has the advantage that any closed ball is compact, so by showing that \(\left|\bigcirc_{i=1}^{2^{N}}\mu_{t_{i-1},t_{i}}x-x\right|\leq C\) for some \(C\geq 0\), we get that the sewing procedure takes place in some compact set, allowing us to only assume local bounds. In an infinite-dimensional setting, we would need to assume global bounds on the flows.
Throughout this section, we will use the following notation:
* The simplex is denoted by \(\Delta_{T}:=\{(s,t)\in[0,T]^{2}\ |\ s\leq t\}\).
* The diagonal is denoted by \(\operatorname{diag}_{T}:=\{(t,t)\in[0,T]^{2}\}\).
* The open ball is denoted by \(B_{x}(r):=\{y\in\mathbb{R}^{d}\ |\ \left|x-y\right|<r\}\).
### Local flows in Banach spaces
Throughout this section, let us fix an open set \(U\subset\mathbb{R}^{d}\) and a time \(T>0\). It is unreasonable to assume that there is a \(T(U)>0\), such that the almost-flow \(\mu_{s,t}(x)\) is well-defined whenever \(|t-s|\leq T(U)\). Indeed, if \(x\) approaches the border of \(U\), we would expect \(T\) to go to zero. Thus, we will use a setting where we allow \(T\) to continuously depend on \(x\). The standard setting for this (e.g. [9]) is defined as follows:
**Definition 2.1**.: Let \(O\) be an open set, such that \(\operatorname{diag}_{T}\times U\subset O\subset\Delta_{T}\times U\). We call \(O\) an admissible domain, if for any compact set \(K\subset U\), there is a \(0<T(K)\leq T\), such that for each \(x\in K\) and \(|t-s|\leq T(K)\), \((s,t,x)\in O\), and for any \((s,t,x)\in O\) and \(s\leq u\leq t\): \((s,u,x)\in O\).
We can now define a local flow to be a map on an admissible domain, such that the classical flow property holds:
**Definition 2.2**.: Let \(O\) be an admissible domain. We say \(\eta:O\mapsto U\), \((s,t,x)\mapsto\eta_{s,t}x\) is a local flow, if for all \((s,t,x)\in O\) and \(s\leq u\leq t\) such that \((u,t,\eta_{s,u}x)\in O\), it holds that
\[\eta_{s,t}x=\eta_{u,t}\circ\eta_{s,u}x\,.\]
We want to find a notion of _almost-flows_, such that (2.1) converges for \(N\to\infty\). For the classical sewing techniques (originally introduced in [12], [10]), we can sew together two-parameter processes \(A_{s,t}\), as long as the following coherence property holds:
\[\left|A_{s,t}-(A_{s,u}+A_{u,t})\right|\lesssim|t-s|^{1+\epsilon}\]
for some \(\epsilon>0\) and all \(s\leq u\leq t\). In a more general setting, one can replace the plus with more general products, to arrive at Terry Lyons's notion of almost-multiplicative functionals [11]:
\[\left\|X_{s,t}-X_{s,u}\star X_{u,t}\right\|\leq|t-s|^{1+\epsilon}\,\]
for some appropriate norm. On the surface, this looks precisely like the result we need, if one replaces \(\star\) with the composition product \(\circ\). However, as discovered in [1], one needs to be a bit more careful with flows than with multiplicative functionals: If we are given a composition of flows \(\mu_{u,t}\circ\mu_{s,u}\) and replace \(\mu_{s,u}\) with \(\mu_{u,\bar{u}}\circ\mu_{s,\bar{u}}\), it in general does not hold true that \(\left|(\mu_{s,u}x-\mu_{\bar{u},u}\circ\mu_{s,\bar{u}}x)\right|\) being small implies that \(\left|\mu_{u,t}\circ(\mu_{s,u}x-\mu_{\bar{u},u}\circ\mu_{s,\bar{u}}x)\right|\) is small. Thus, we need additional Lipschitz assumptions on our almost-flow \(\mu\):
**Definition 2.3**.: We say \(\mu:O\mapsto U\), \((s,t,x)\mapsto\mu_{s,t}x\) is a local almost-flow, if for all compact sets \(K\subset U\) we have
* Lipschitz continuity: There exist constants \(L(K,s,t)\) with \(L(K,s,s)=\lim_{|t-s|\to 0}L(K,s,t)=1\) such that for all \(x,y\in K\), \(|t-s|\leq T(K)\): \[\left|\mu_{s,t}x-\mu_{s,t}y\right|\leq L(K,s,t)\left|x-y\right|\]
* Holder continuity: There is an \(\alpha\in(0,1)\) (independent of \(K\)) and a constant \(B(K)\) such that for all \(x\in K\), \(|t-s|\leq T(K)\): \[\left|\mu_{s,t}x-x\right|\leq B(K)\left|t-s\right|^{\alpha}\,.\]
* almost-flow property: There is an \(\epsilon>0\) (independent of \(K\)) and a constant \(C(K)\) such that for all \(x,y\in K\), \(\left|t-s\right|\leq T(K)\) and \(s\leq u\leq t\) with \((u,t,\mu_{s,u}x),(u,t,\mu_{s,u}y)\in O\): \[\left|\mu_{s,t}x-\mu_{u,t}\circ\mu_{s,u}x\right|\leq C(K)\left|t-s\right|^{1+ \epsilon}\,.\] as well as the joint Lipschitz-almost-flow property holds: \[\left|(\mu_{s,t}-\mu_{u,t}\circ\mu_{s,u})(x-y)\right|\leq C(K)\left|t-s\right| ^{1+\epsilon}\left|x-y\right|\]
We fix a local almost-flow \(\mu\). Let \(P_{k}([s,t]):=\left\{[t_{i},t_{i+1}]\ \big{|}\ t_{i}=s+2^{-k}i(t-s),i=0,\ldots,2^{k}-1\right\}\) be the \(k\)-th dyadic partition of \([s,t]\) and let
\[\mu_{s,t}^{k}:=\bigcirc_{[t_{i},t_{i+1}]\in P_{k}([s,t])}\ \mu_{t_{i},t_{i+1}}\]
be the sewn-together flow under \(P_{k}([s,t])\). As mentioned above, it is a priori not clear that \(\left|\mu_{s,t}^{k}x-x\right|\) is even bounded. The next lemma shows that the almost-flow property guarantees this:
**Lemma 2.4**.: _Let \(K\subset U\) be a compact set, and \(x\in K\) be an inner point. Choose \(R>0\) in such a way that \(B_{x}(R)\subset K\). Further let \(0<r<R\) and, for \(\hat{c}>0\), set \(T(r,\hat{c})=\min\left(\left(\frac{R-r}{\hat{c}}\right)^{\frac{1}{\alpha}},T( K)\right)\). Then, there are \(\hat{c},c_{1},c_{2},c_{3}>0\) such that for all \(y\in B_{x}(r)\), \(\left|t-s\right|\leq T(r,\hat{c})\), we have that \(\mu_{s,t}^{k}y\) is well-defined and that_
\[\left|\mu_{s,t}^{k}y-\mu_{s,t}y\right| \leq c_{1}\left|t-s\right|^{1+\epsilon} \tag{2.2}\] \[\left|\mu_{s,t}^{k}y-y\right| \leq c_{2}\left|t-s\right|^{\alpha}\] (2.3) \[\left|(\mu_{s,t}^{k}-\mu_{s,t})(x-y)\right| \leq c_{3}\left|t-s\right|^{1+\epsilon}\left|x-y\right|\,. \tag{2.4}\]
Proof.: By the definition of an almost-flow, all of these properties are obvious for \(k=1\). We prove the case \(k>1\) by induction: Set \(u=\frac{t+s}{2}\) and decompose
\[\mu_{s,t}^{k}y-\mu_{s,t}y=\underbrace{(\mu_{u,t}^{k-1}\mu_{s,u}^{k-1}y-\mu_{ u,t}\mu_{s,u}^{k-1}y)}_{(I)}+\underbrace{(\mu_{u,t}\mu_{s,u}^{k-1}y-\mu_{u,t}\mu_ {s,u}y)}_{(II)}+\underbrace{(\mu_{u,t}\mu_{s,u}y-\mu_{s,t}y)}_{(III)}\,.\]
By induction hypothesis, \(\mu_{s,u}^{k-1}y\) is well-defined and (2.3) implies \(\mu_{s,u}^{k-1}y\in B_{x}(r+c_{2}(T(r,\hat{c})/2)^{\alpha})\). For large enough \(\hat{c}\), it holds that
\[r+c_{2}(T(r,\hat{c})/2)^{\alpha}+\hat{c}(T(r,\hat{c})/2)^{\alpha}=r+\frac{c_{ 2}+\hat{c}}{2^{\alpha}}T(r,\hat{c})^{\alpha}<R\,,\]
which implies that \(\left|t-u\right|\leq T(r+c_{2}(T(r,\hat{c})/2)^{\alpha},\hat{c})\). It follows that \(\mu_{u,t}^{k-1}(\mu_{s,u}^{k-1}y)\) is well-defined by induction hypothesis, and (2.2) gives us
\[\left|(I)\right|\leq c_{1}\left|t-s\right|^{1+\epsilon}2^{-1-\epsilon}\,.\]
Furthermore, \(\mu_{s,u}^{k-1}y,\mu_{s,u}y\in B_{x}(r+c_{2}(T/2)^{\alpha})\subset K\), so we can use the Lipschitz continuity of \(\mu_{u,t}\) as well as \(\left|t-u\right|\leq T(K)\) to show
\[\left|(II)\right|\leq L(K,u,t)c_{1}\left|t-s\right|^{1+\epsilon}2^{-1-\epsilon }\,,\]
where we used \(\left|\mu_{s,u}^{k-1}y-\mu_{s,u}y\right|\leq c_{1}\left|t-s\right|^{1+\epsilon}2^{-1-\epsilon}\) by (2.2). Finally, the third term is simply bounded by the almost-flow property by
\[\left|(III)\right|\leq C(K)\left|t-s\right|^{1+\epsilon}\,.\]
Adding all three terms together, we get that
\[\left|\mu_{s,t}^{k}y-\mu_{s,t}y\right|\leq\left(c_{1}2^{-1-\epsilon}+L(K,u,t)c _{1}2^{-1-\epsilon}+C(K)\right)\left|t-s\right|^{1+\epsilon}\,.\]
As long as \(\left|t-s\right|\) is small enough that \(L(K,u,t)\) is close enough to \(1\), and for \(c_{1}\) big enough (depending on \(C(K)\)), we get that \((c_{1}2^{-1-\epsilon}+L(K,u,t)c_{1}2^{-1-\epsilon}+C(K))\leq c_{1}\), which shows that (2.2) holds for \(k\). (2.4) follows analogously if one replaces the almost-flow property with the joint Lipschitz-almost-flow property and uses the Lipschitz property from Corollary 2.5 for \(\mu_{s,u}^{k-1}\) (which at level \(k-1\) only relies on the induction hypothesis, so there is no circularity). (2.3) follows directly from
\[\left|\mu_{s,t}^{k}y-y\right|\leq\left|\mu_{s,t}^{k}y-\mu_{s,t}y\right|+\left| \mu_{s,t}y-y\right|\leq c_{2}\left|t-s\right|^{\alpha}\]
for \(c_{2}\geq c_{1}+B(K)\).
It should be noted that our \(T(r,\hat{c})\) only really depends on the distance between \(B_{x}(r)\) and \(K^{c}\). So we can get (2.2)-(2.4) for all \(y\in K\) as well as all \(\left|t-s\right|\leq\hat{T}(K)\) for a uniformly chosen \(\hat{T}(K)\) by choosing a \(\delta>0\) and a \(\delta\)-fattening of \(K\), that is, a compact set \(\tilde{K}\supset\{x\in U\ |\ \operatorname{dist}(x,K)\leq\delta\}\) with \(\tilde{K}\subset U\), and applying the above lemma to \(\tilde{K}\).
A simple corollary of Lemma 2.4 is that \(\mu_{s,t}^{k}\) is Lipschitz continuous with a Lipschitz constant independent of \(k\):
**Corollary 2.5**.: _For all \(k\geq 1\) and \(y,z\in B_{x}(r),\left|t-s\right|\leq T(r,\hat{c})\) as above, we have that \(\mu_{s,t}^{k}\) is Lipschitz continuous (on \(B_{x}(r)\)) with_
\[\left|\mu_{s,t}^{k}(z-y)\right|\leq\left(L(K,s,t)+c_{3}\left|t-s\right|^{1+ \epsilon}\right)\left|z-y\right|\,.\]
Proof.: This follows directly by using \(\mu_{s,t}^{k}=(\mu_{s,t}^{k}-\mu_{s,t})+\mu_{s,t}\).
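To make the construction concrete, here is a minimal numerical sketch (our own illustration with an assumed vector field, not taken from the text): the explicit Euler step \(\mu_{s,t}x=x+(t-s)V(x)\) of the ODE \(\dot{x}=V(x)\) is a local almost-flow with \(\epsilon=1\), and the sewn-together flows \(\mu_{s,t}^{k}\) over the dyadic partitions \(P_{k}([s,t])\) converge at a geometric rate in \(k\), as the sewing lemma below makes precise.

```python
import math

def V(x):
    return math.sin(x)  # a smooth, globally Lipschitz vector field on R

def mu(s, t, x):
    """One explicit Euler step: an almost-flow with epsilon = 1."""
    return x + (t - s) * V(x)

def mu_k(s, t, x, k):
    """Sewn-together flow of mu over the k-th dyadic partition of [s, t]."""
    h = (t - s) / 2 ** k
    for i in range(2 ** k):
        x = mu(s + i * h, s + (i + 1) * h, x)
    return x

x0, s, t = 1.0, 0.0, 0.5
ref = mu_k(s, t, x0, 20)  # very fine level, a stand-in for the limit flow
for k in range(6):
    print(f"k={k}: error ~ {abs(mu_k(s, t, x0, k) - ref):.2e}")
# The errors shrink by roughly a factor 2 per level, i.e. like 2^(-k*epsilon)
# with epsilon = 1, in line with the Cauchy estimates of the sewing argument.
```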
Let \(\Delta_{n}:=\{kT/2^{n}\ |\ k=0,\ldots,2^{n}\}\) (where we identify \(\{s=t_{1}\leq\cdots\leq t_{N}=t\}\) with \(\{[t_{i},t_{i+1}]\ \big{|}\ s=t_{1}\leq\cdots\leq t_{N}=t,i=1,\ldots,N-1\}\)) be the \(n\)-th dyadic partition of \([0,T]\) and let \(\Delta=\bigcup_{n}\Delta_{n}\) be the set of dyadic numbers. For \(s,t\in\Delta_{n}\), let
\[\mu_{s,t}^{\Delta_{n}}=\bigcirc_{\begin{subarray}{c}[u,v]\in P_{n}([0,T])\\ [u,v]\subset[s,t]\end{subarray}}\mu_{u,v}\]
be the sewn-together flow over \(P_{n}([0,T])\) restricted to \([s,t]\). We claim that the statements of Lemma 2.4 and Corollary 2.5 still hold for \(\mu_{s,t}^{\Delta_{n}}\):
**Lemma 2.6**.: _In the above setting, we have that \(\mu_{s,t}^{\Delta_{n}}y\) is well-defined for all \(n\geq 0\), \(y\in B_{x}(r)\) and \(\left|t-s\right|\leq T(r,\tilde{c})\). For these \(s,t,y\) as well as \(z\in B_{x}(r)\), it further holds that_
\[\left|\mu_{s,t}^{\Delta_{n}}y-\mu_{s,t}y\right| \leq\tilde{c}_{1}\left|t-s\right|^{1+\epsilon} \tag{2.5}\] \[\left|\mu_{s,t}^{\Delta_{n}}y-y\right| \leq\tilde{c}_{2}\left|t-s\right|^{\alpha}\] (2.6) \[\left|\mu_{s,t}^{\Delta_{n}}(z-y)\right| \leq\left(L(K,s,t)+\tilde{c}_{3}\left|t-s\right|^{1+\epsilon}\right)\left|z-y\right|\,, \tag{2.7}\]
_for a new set of constants \(\tilde{c}_{1},\tilde{c}_{2},\tilde{c}_{3},\tilde{c}>0\)._
Proof.: Either \(\mu_{s,t}^{\Delta_{n}}\) is the sewn-together flow over a dyadic partition of \([s,t]\), or we can write it as \(\mu_{s,t}^{\Delta_{n}}=\bigcirc_{k=1}^{K}\tilde{\mu}_{u_{k},v_{k}}\), where \(\tilde{\mu}_{u_{k},v_{k}}\) is the sewn-together flow over a dyadic partition of \([u_{k},v_{k}]\) and \(\left|u_{k}-v_{k}\right|\leq\left|t-s\right|2^{-(K-k)}\). In the first case, there is nothing to prove, so we assume the second. It then holds that
\[\bigcirc_{k=1}^{K}\tilde{\mu}_{u_{k},v_{k}}y-\mu_{s,t}y=\underbrace{(\tilde{\mu}_{u_{K},t}-\mu_{u_{K},t})(\bigcirc_{k=1}^{K-1}\tilde{\mu}_{u_{k},v_{k}}y)}_{(I)}+\underbrace{\mu_{u_{K},t}(\bigcirc_{k=1}^{K-1}\tilde{\mu}_{u_{k},v_{k}}-\mu_{s,v_{K-1}})}_{(II)}y\] \[+\underbrace{(\mu_{u_{K},t}\circ\mu_{s,v_{K-1}}-\mu_{s,t})}_{(III)}y\,.\]
We use induction over \(K\). By the induction hypothesis and (2.6), we have that \(\bigcirc_{k=1}^{K-1}\tilde{\mu}_{u_{k},v_{k}}y\in B_{x}\left(r+\tilde{c}_{2}\left(\frac{T(r,\tilde{c})}{2}\right)^{\alpha}\right)\). Assume that \(\tilde{c}\) is large enough that
\[r+\tilde{c}T(r,\tilde{c})^{\alpha}+\tilde{c}_{2}\left(\frac{T(r,\tilde{c})}{2} \right)^{\alpha}<R\,.\]
Then Lemma 2.4 (recall that \(\tilde{\mu}_{u_{K},t}\) is the sewn-together flow over a dyadic partition) gives us that \(\bigcirc_{k=1}^{K}\tilde{\mu}_{u_{k},v_{k}}y\) is well-defined, and that
\[\left|(I)\right|\leq c_{1}\left|t-s\right|^{1+\epsilon}\,.\]
For \((II)\), we note that one can use the induction hypothesis and the Lipschitz continuity of \(\mu\) to get
\[\left|(II)\right|\leq L(K,u_{K},t)\tilde{c}_{1}\left(\frac{\left|t-s\right|}{2 }\right)^{1+\epsilon}\,.\]
We further get
\[\left|(III)\right|\leq C(K)\left|t-s\right|^{1+\epsilon}\,,\]
as before. Gathering all three of the inequalities, we get
\[\left|\bigcirc_{k=1}^{K}\tilde{\mu}_{u_{k},v_{k}}y-\mu_{s,t}y\right|\leq\left( c_{1}+L(K,s,t)\tilde{c}_{1}2^{-1-\epsilon}+C(K)\right)\left|t-s\right|^{1+\epsilon}\]
For large enough \(\tilde{c}_{1}\), it follows that \(\left(c_{1}+L(K,s,t)\tilde{c}_{1}2^{-1-\epsilon}+C(K)\right)\leq\tilde{c}_{1}\), showing (2.5). (2.6) and (2.7) follow as before.
With these technical lemmas, we can now show that local almost-flows generate flows:
**Lemma 2.7** (sewing lemma for local almost-flows).: _Let \(\mu_{s,t}\) be a local almost-flow. Then there exists an admissible domain \(\hat{O}\subset O\) and a unique flow \(\eta\) on \(\hat{O}\), such that for all \(x\in K\), \(\left|t-s\right|\leq\hat{T}(K)\), we have_
\[\left|\mu_{s,t}x-\eta_{s,t}x\right|\leq\hat{C}(K)\left|t-s\right|^{1+\epsilon}\,, \tag{2.8}\]
_as well as_
\[\left|\eta_{s,t}x-x\right| \leq\hat{B}(K)\left|t-s\right|^{\alpha} \tag{2.9}\] \[\left|\eta_{s,t}(x-y)\right| \leq\hat{L}(K,s,t)\left|x-y\right|\,. \tag{2.10}\]
Proof.: **Existence:** Let \(K\) be some compact set with an inner point \(x\) and let \(y\in B_{x}(r)\). For \(\left|t-s\right|\leq T(r,\tilde{c})\), we claim that \(\mu_{s,t}^{\Delta_{n}}y\) is a Cauchy sequence in \(n\) as long as \(s,t\in\Delta_{m}\) for some \(m\geq 0\). Once we have shown this, we will set \(\eta_{s,t}y=\lim_{n\to\infty}\mu_{s,t}^{\Delta_{n}}y\).
It holds that \(\mu_{s,t}^{\Delta_{n}}=\bigcirc_{k=0}^{K}\mu_{u_{k}^{n},v_{k}^{n}}\). Set \(f_{u_{k}^{n},v_{k}^{n}}:=\mu_{u_{2k+1}^{n+1},v_{2k+1}^{n+1}}\mu_{u_{2k}^{n+1},v_{2k}^{n+1}}\). It follows that \(\mu_{s,t}^{\Delta_{n+1}}=\bigcirc_{k=0}^{K}f_{u_{k}^{n},v_{k}^{n}}\). We write
\[\mu_{s,t}^{\Delta_{n+1}}-\mu_{s,t}^{\Delta_{n}}=\sum_{k=0}^{K}\bigcirc_{l=k+1}^ {K}f_{u_{l}^{n},v_{l}^{n}}\circ(f_{u_{k}^{n},v_{k}^{n}}-\mu_{u_{k}^{n},v_{k}^{ n}})\circ\bigcirc_{l=0}^{k-1}\mu_{u_{l}^{n},v_{l}^{n}}\,.\]
We choose \(m\) in such a way that \(2^{-m-1}T<\left|t-s\right|\leq 2^{-m}T\). It follows that the above sum has at most \(2^{n-m}\) summands. By enlarging \(\tilde{c}\) (or shrinking \(T(r,\tilde{c})\)) a final time, we conclude that the above term is well-defined, and the almost-flow property together with (2.7) gives us
\[\left|\mu_{s,t}^{\Delta_{n+1}}y-\mu_{s,t}^{\Delta_{n}}y\right| \leq 2^{n-m}(\tilde{L}(K,s,t)C(K)2^{-n(1+\epsilon)}\left|T \right|^{1+\epsilon})\] \[\leq 2^{-(n-m)\epsilon}2^{-m(1+\epsilon)}\left|T\right|^{1+\epsilon }\tilde{L}(K,s,t)C(K)\,,\]
where \(\tilde{L}(K,s,t)=(L(K,s,t)+\tilde{c}_{3}\left|t-s\right|^{1+\epsilon})\) is the Lipschitz constant from (2.7). It follows that the sequence is Cauchy and thus convergent in \(\mathbb{R}^{d}\). Further, we immediately get from \(\eta-\mu=\sum_{n=m}^{\infty}(\mu^{\Delta_{n+1}}-\mu^{\Delta_{n}})+(\mu^{ \Delta_{m}}-\mu)\), that
\[\left|\eta_{s,t}x-x\right|\leq\hat{B}(K)\left|t-s\right|^{\alpha} \tag{2.11}\] \[\left|\eta_{s,t}(x-y)\right|\leq\hat{L}(s,t,K)\left|x-y\right|\] (2.12) \[\left|\eta_{s,t}x-\mu_{s,t}x\right|\leq\hat{C}(K)\left|t-s\right| ^{1+\epsilon} \tag{2.13}\]
for all \(x,y\in K\subset U\) compact, \(\left|t-s\right|\leq\hat{T}(K)\) and \(\hat{C}(K),\hat{L}(s,t,K),\hat{B}(K)\) a new set of constants. Let us now check that \(\eta\) is indeed a flow on \(\Delta\). For all \(s,u,t\in\Delta_{n}\), \(s\leq u\leq t\), it holds that
\(\mu_{s,t}^{\Delta_{n}}=\mu_{u,t}^{\Delta_{n}}\circ\mu_{s,u}^{\Delta_{n}}\) by construction. Thus, we can use the continuity of \(\eta\) to show that for all \(s,u,t\in\Delta\), \(s\leq u\leq t\) with \(|t-s|\leq\hat{T}(K)\):
\[|\eta_{s,t}-\eta_{u,t}\eta_{s,u}| \leq\left|\eta_{s,t}-\mu_{s,t}^{\Delta_{n}}\right|+\left|\eta_{u, t}\eta_{s,u}-\mu_{u,t}^{\Delta_{n}}\mu_{s,u}^{\Delta_{n}}\right|+\left|\mu_{s,t} ^{\Delta_{n}}-\mu_{u,t}^{\Delta_{n}}\mu_{s,u}^{\Delta_{n}}\right|\] \[\to 0\]
as \(n\to\infty\), where we used that
\[\left|\eta_{u,t}\eta_{s,u}-\mu_{u,t}^{n}\mu_{s,u}^{n}\right|\leq\left|\eta_{u,t}(\eta_{s,u}-\mu_{s,u}^{n})\right|+\left|(\eta_{u,t}-\mu_{u,t}^{n})\mu_{s,u} ^{n}\right|\to 0\]
as \(n\to\infty\). It follows that \(\eta\) is a flow on \(\Delta\), and by continuity we can extend \(\eta\) uniquely to a flow on \(\Delta_{T}\).
**Uniqueness:** Assume \(\eta\) is a flow on \(\hat{O}\) fulfilling (2.8)-(2.10). Then we can calculate for \(s,t\in\Delta\):
\[\left|\eta_{s,t}-\mu_{s,t}^{\Delta_{n}}\right| \leq\sum_{k=0}^{K}\left|\mu_{u_{k+1},t}^{\Delta_{n}}\circ(\eta_{u _{k},v_{k}}-\mu_{u_{k},v_{k}})\circ\eta_{s,v_{k-1}}\right|\] \[\leq 2^{-n\epsilon-m}\tilde{L}(s,t,K)\hat{C}(K)T^{1+\epsilon}\to 0\]
as \(n\to\infty\). This finishes the proof.
_Remark 2.8_.: It should be noted that we never used the Lipschitz property (2.10) in the proof of uniqueness. (2.8) and (2.9) are enough to uniquely characterize \(\eta\).
### Almost-flows on manifolds
We want to use our notion of local almost-flows to generate flows on a manifold \(M\). To this end, we say that an almost-flow on \(M\) is a map \(\mu\) mapping some admissible domain in \(\Delta_{T}\times M\) to \(M\), such that for each coordinate chart \(\phi:M\supset V\to U\subset\mathbb{R}^{d}\), \(\mu_{s,t}^{\phi}:=\phi\circ\mu_{s,t}\circ\phi^{-1}\) is a local almost-flow on some admissible domain over \(U\). The following proposition shows that this indeed generates a flow:
**Proposition 2.9**.: _Let \(\mu_{s,t}x\) be defined on an admissible domain \(\operatorname{diag}_{T}\times M\subset O\subset\Delta_{T}\times M\). Assume that for each coordinate chart \(\phi:M\supset V\to U\subset\mathbb{R}^{d}\), \(\mu_{s,t}^{\phi}\) is an almost-flow on some open set \(\operatorname{diag}_{T}\times U\subset O^{\phi}\subset\Delta_{T}\times U\). Let \(\eta_{s,t}^{\phi}\) be the flow associated with \(\mu_{s,t}^{\phi}\). Then \(\eta_{s,t}:=\phi^{-1}\circ\eta_{s,t}^{\phi}\circ\phi\) is coordinate independent (up to the maximal time \(|t-s|\leq T^{\phi}\), which might depend on the coordinate chart)._
Proof.: This follows immediately from the fact that \(\mu_{s,t}^{n}x\in M\) is coordinate independent, as long as it is well-defined. Given two coordinate charts \(\phi,\tilde{\phi}\), and a point \(x\in V\cap\tilde{V}\), we can always choose a small enough \(T\) such that Lemma 2.7 can be applied both to \((\mu_{s,t}^{\phi,\Delta_{k}}x)_{k\geq 0}\) and \((\mu_{s,t}^{\tilde{\phi},\Delta_{k}}x)_{k\geq 0}\) for all \(|s-t|\leq T\). By construction, \(\mu_{s,t}^{\phi,\Delta_{k}}x=\mu_{s,t}^{\tilde{\phi},\Delta_{k}}x\) for all \(k\geq 0\). Since this sequence converges in the topology of \(M\), we get a unique limit and conclude that \(\eta_{s,t}^{\phi}x=\eta_{s,t}^{\tilde{\phi}}x\). Thus, \(\eta\) is a coordinate independent flow.
## 3. Preliminaries
### Hopf and Lie algebras
We use this section to recall the notion of Hopf algebras and the most commonly used Hopf algebras in rough path theory. We mainly follow the setting of [10] and [10], but also recommend [12],[13] as well as classical references like [21], [14] or [1].
In a nutshell, rough paths live in the dual of a graded, connected, and commutative Hopf algebra, making the space the rough paths live in a graded, connected, cocommutative Hopf algebra. We start by recalling the notion of a graded vector space:
**Definition 3.1**.: A vector space \(V\) is called a graded vector space, if there are finite-dimensional vector spaces \(V^{(0)},V^{(1)},\dots\), such that \(V=\bigoplus_{i=0}^{\infty}V^{(i)}\) is the set of finite linear combinations of vectors in \(V^{(i)}\). For all \(v\in V^{(i)}\), we write \(|v|=i\) and call \(i\) the grade of \(v\).
Note that we especially assume that all \(V^{(i)}\) are finite-dimensional. This immediately gives us that all truncated spaces \(\bigoplus_{i\leq N}V^{(i)}\) are finite-dimensional.
We can now define a Hopf algebra. The following definition comes from [10]:
**Definition 3.2**.: A graded connected Hopf algebra over \(\mathbb{R}\) is a graded vector space \(\mathcal{H}=\bigoplus_{i=0}^{\infty}\mathcal{H}^{(i)}\) equipped with linear maps \(m:\mathcal{H}\otimes\mathcal{H}\to\mathcal{H},m(x,y)=:x\cdot y\) (product), \(\Delta:\mathcal{H}\to\mathcal{H}\otimes\mathcal{H}\) (coproduct), \(1:\mathbb{R}\to\mathcal{H}\) (unit), \(\epsilon:\mathcal{H}\to\mathbb{R}\) (counit) and \(S:\mathcal{H}\to\mathcal{H}\) (antipode), such that the following holds for all \(x,y,z\in\mathcal{H}\):
* **Connectedness:**\(\mathcal{H}^{(0)}=\mathbb{R}\).
* **Unit:**\(1:\mathbb{R}\to\mathcal{H}\) has the properties \(1(\lambda)\cdot x=\lambda x=x\cdot 1(\lambda)\), as well as \(\Delta\circ 1=1\otimes 1\). With a slight abuse of notation, we also write \(1=1(1)\in\mathcal{H}\).
* **Counit:**\(m\circ(\epsilon\otimes\mathrm{Id})\circ\Delta=\mathrm{Id}=m\circ(\mathrm{Id}\otimes\epsilon)\circ\Delta\) and \(\epsilon(x\cdot y)=\epsilon(x)\epsilon(y)\).
* **Associativity:**\((x\cdot y)\cdot z=x\cdot(y\cdot z)\).
* **Coassociativity:**\((\Delta\otimes\mathrm{Id})\Delta x=(\mathrm{Id}\otimes\Delta)\Delta x\).
* **Compatibility:**\(\Delta(x\cdot y)=\Delta x\cdot\Delta y\) and \(\epsilon\circ 1=\mathrm{Id}_{\mathbb{R}}\).
* **Antipode:**\(m\circ(S\otimes\mathrm{Id})\Delta=m\circ(\mathrm{Id}\otimes S)\Delta=1\circ\epsilon\).
* **Grading:**\(\mathcal{H}^{(i)}\cdot\mathcal{H}^{(j)}\subset\mathcal{H}^{(i+j)}\) and \(\Delta\mathcal{H}^{(i)}\subset\bigoplus_{(k+l=i)}\mathcal{H}^{(k)}\otimes \mathcal{H}^{(l)}\). The antipode fulfills \(S(\mathcal{H}^{(i)})\subset\mathcal{H}^{(i)}\).
Let us recall some basic results about Hopf algebras, which can all be found in [21]: It always holds that \(S\) is an anti-homomorphism, that is, for all \(u,v\in\mathcal{H}\), \(S(u\cdot v)=S(v)\cdot S(u)\). If \(\mathcal{H}\) is further graded, we know that \(1\in\mathcal{H}^{(0)}=\mathbb{R}\) and can thus identify it with \(1\in\mathbb{R}\). Furthermore, the grading implies that \(\epsilon(v)=0\) for all \(v\in\mathcal{H}^{(i)}\) for \(i\geq 1\). Thus, the compatibility condition implies that \(\epsilon\) has precisely the following form:
\[\epsilon(\lambda 1+R)=\lambda\,,\]
where \(R\in\bigoplus_{i\geq 1}\mathcal{H}^{(i)}\).
The next concept we need is commutativity and its dual counterpart, cocommutativity. To this end, we define the switch operator \(\tau:\mathcal{H}\otimes\mathcal{H}\to\mathcal{H}\otimes\mathcal{H}\) to be \(v\otimes u\mapsto u\otimes v\) for all \(u,v\in\mathcal{H}\). We then call \(\mathcal{H}\):
* **Commutative**, if \(m\circ\tau=m\).
* **Cocommutative**, if \(\tau\circ\Delta=\Delta\).
The level \(n\) truncation of \(\mathcal{H}\) is given by
\[\mathcal{H}^{n}=\mathcal{H}/\bigoplus_{k=n+1}^{\infty}\mathcal{H}^{(k)}\,,\]
where we use that \(\bigoplus_{k=n+1}^{\infty}\mathcal{H}^{(k)}\) is a two-sided ideal of \(\mathcal{H}\). We identify the truncation with \(\mathcal{H}^{n}=\bigoplus_{k=0}^{n}\mathcal{H}^{(k)}\).
#### 3.1.1. Graded dual of a graded Hopf algebra
Let \(\mathcal{H}\) be a graded Hopf algebra. For each \(i\in\mathbb{N}\), we choose a basis \(\{b_{1}^{i},\ldots,b_{k_{i}}^{i}\}\) of \(\mathcal{H}^{(i)}\). Since \(\{b_{k}^{i}\ |\ i\in\mathbb{N},k\leq k_{i}\}\) is a countable Hamel-basis of \(\mathcal{H}\), its dual \(\mathcal{H}^{*}\) is given by all formal series
\[\sum_{i\in\mathbb{N},k\leq k_{i}}\beta(i,k)b_{k}^{i*}\,,\]
where \(b_{k}^{i*}(b_{l}^{j})=\delta_{(i,k),(j,l)}\). We denote the dual action of \(\mathcal{H}^{*}\) on \(\mathcal{H}\) by \(\langle\cdot,\cdot\rangle:\mathcal{H}^{*}\times\mathcal{H}\to\mathbb{R}\). Since \(\mathcal{H}\) is infinite-dimensional, \(\mathcal{H}^{*}\) does in general not have a Hopf algebra structure. However, the _graded dual_ of \(\mathcal{H}\) ([21], page 231), given by \((\mathcal{H}^{g},\star,\delta,\epsilon^{*},1^{*},S^{*})\) with
\[\mathcal{H}^{g}:=\bigoplus_{k=0}^{\infty}\mathcal{H}^{(i)*}\subset\mathcal{H} ^{*}\]
is a graded Hopf algebra with Hamel-basis \(\{b_{k}^{i*}\ |\ i\in\mathbb{N},k\leq k_{i}\}\) and dual operations
\[\langle x\star y,z\rangle :=\langle x\otimes y,\Delta z\rangle\] \[\langle\mathbbm{1}^{*},z\rangle :=\epsilon(z)\] \[\langle\delta x,z\otimes y\rangle :=\langle x,z\cdot y\rangle\] \[\epsilon^{*}(x) :=\langle x,\mathbbm{1}\rangle\] \[\langle S^{*}(x),y\rangle :=\langle x,S(y)\rangle\.\]
If \(\mathcal{H}\) is connected, it immediately follows that \(\mathcal{H}^{g}\) is connected as well. \(\mathcal{H}^{g}\) is cocommutative if \(\mathcal{H}\) is commutative, and \(\mathcal{H}^{g}\) is commutative if \(\mathcal{H}\) is cocommutative.
#### 3.1.2. Group-like and primitive elements
Rough paths live in a Lie group inside \(\mathcal{H}^{*}\), called the group-like elements or characters. These are given by all non-zero \(g\in\mathcal{H}^{*}\), such that \(\langle g,\cdot\rangle:\mathcal{H}\to\mathbb{R}\) is a homomorphism. If \(g\in\mathcal{H}^{g}\), this would imply that \(\delta g=g\otimes g\), as \(\delta\) is dual to \(m\). However, in practice, this is only fulfilled by infinite series \(g=\sum_{i,k}\beta(i,k)b_{k}^{i*}\), whereas \(\mathcal{H}^{g}\) only contains finite sums. Thus, for the three Hopf algebras introduced in Section 3.4, the only group-like element would be given by \(\mathbbm{1}\).
We get around this issue by extending \(\delta\) to a map
\[\delta:\overline{\bigoplus_{i=0}^{\infty}\mathcal{H}^{(i)*}}\to \overline{\bigoplus_{i,j=0}^{\infty}\mathcal{H}^{(i)*}\otimes\mathcal{H}^{(j)*}}\]
by setting
\[\delta\left(\sum_{i,k}\beta(i,k)b_{k}^{i*}\right):=\sum_{i,k}\beta(i,k)\delta b _{k}^{i*}\,.\]
Here, \(\overline{\bigoplus_{i=0}^{\infty}\mathcal{H}^{(i)*}}\) denotes the space of infinite series \(\sum_{i,k}\beta(i,k)b_{k}^{i*}\). Thus, we say that the set of group-like elements is defined by
\[\mathcal{G}:=\left\{x\in\mathcal{H}^{*}\ |\ \delta x=x\otimes x,x\neq 0\right\}.\]
While \(\mathcal{G}\) is not a subset of the Hopf algebra \(\mathcal{H}^{g}\), we can always truncate its elements to elements of \(\mathcal{H}^{n*}\) for some \(n\geq 0\) to regain the Hopf algebra structure. This leads to the following definition:
**Definition 3.3**.: The set of level \(n\) group-like elements is given by
\[\mathcal{G}^{n}:=\left\{x\in\mathcal{H}^{n*}\ |\ \delta x=x\otimes x,x\neq 0 \right\},\]
where the identity \(\delta x=x\otimes x\) holds in the truncated tensor product \((\mathcal{H}^{g}\otimes\mathcal{H}^{g})/(\bigoplus_{i+j\geq n+1}\mathcal{H}^{( i)*}\otimes\mathcal{H}^{(j)*})\).
\(\mathcal{G}^{n}\) is a group with the product \(\star\) inherited from \(\mathcal{H}^{n*}\) and the inverse given by \(x^{-1}=S^{*}(x)\). As it turns out, it is a Lie group with its Lie algebra being given by the set of _primitive elements_, which is defined as follows:
**Definition 3.4**.: The set of primitive elements is given by
\[\mathcal{P}:=\left\{x\in\mathcal{H}^{*}\ |\ \delta x=x\otimes 1+1\otimes x \right\}.\]
The level \(n\) primitives are given by
\[\mathcal{P}^{n}:=\left\{x\in\mathcal{H}^{n*}\ |\ \delta x=x\otimes 1+1\otimes x \right\},\]
where \(\delta x=x\otimes 1+1\otimes x\) is again considered in \((\mathcal{H}^{g}\otimes\mathcal{H}^{g})/(\bigoplus_{i+j\geq n+1}\mathcal{H}^{( i)*}\otimes\mathcal{H}^{(j)*})\).
One can easily calculate that \(\mathcal{P}\) as well as \(\mathcal{P}^{n}\) are Lie algebras if we equip them with the commutator \([x,y]=x\star y-y\star x\) as a Lie bracket, see Section 3.2 for details.
We will treat \(\mathcal{G}^{n},\mathcal{P}^{n}\) as subsets of \(\bigoplus_{i=0}^{n}\mathcal{H}^{(i)*}\), where they have a unique representative. For \(g\in\mathcal{G}^{n},p\in\mathcal{P}^{n}\) and an \(m\leq n\), it holds that \(\pi^{m}g\in\mathcal{G}^{m}\) and \(\pi^{m}p\in\mathcal{P}^{m}\), where \(\pi^{m}\) denotes the
truncation map \(\mathcal{H}^{n*}\to\mathcal{H}^{m*}\), \(\sum_{i=0}^{n}h_{i}\mapsto\sum_{i=0}^{m}h_{i}\). For the primitive elements, the converse is also true: Given a \(p\in\mathcal{P}^{m}\subset\mathcal{H}^{m*}\), \(p=\sum_{i=0}^{m}p_{i}\), the same sum \(\sum_{i=0}^{m}p_{i}\), viewed as an element of \(\mathcal{H}^{n*}\), is also a level \(n\) primitive. This is in general not true for the group-like elements.
In fact, for primitive elements, an even stronger statement holds:
**Lemma 3.5**.: _Let \(p\in\mathcal{P}\) be a primitive element. For any \(k\geq 1\), denote by \(p^{k}\) the projection of \(p\) onto \(\mathcal{H}^{(k)*}\). Then \(p^{k}\) is a primitive element._
Proof.: Fix a \(k\geq 1\) and let \(R^{k}\) be defined by
\[\delta p^{k}=\mathbb{1}\otimes p^{k}+p^{k}\otimes\mathbb{1}+R^{k}\,.\]
We claim that \(R^{k}=0\). Since \(\mathcal{H}^{g}\) is graded, we see that \(R^{k}\in\bigoplus_{i+j=k}\mathcal{H}^{(i)*}\otimes\mathcal{H}^{(j)*}\). If we define \(R^{\tilde{k}}\) analogously to \(R^{k}\) for each \(\tilde{k}\neq k\),
\[\delta p=\sum_{m\geq 1}\delta p^{m}=\mathbb{1}\otimes p+p\otimes\mathbb{1}+R^{k}+\sum_{\tilde{k}\neq k}R^{\tilde{k}}\]
gives us that \(R^{k}=-\sum_{\tilde{k}\neq k}R^{\tilde{k}}\in\bigoplus_{i+j\neq k}\mathcal{H}^{( i)*}\otimes\mathcal{H}^{(j)*}\). Since
\(\left(\bigoplus_{i+j\neq k}\mathcal{H}^{(i)*}\otimes\mathcal{H}^{(j)*}\right) \cap\left(\bigoplus_{i+j=k}\mathcal{H}^{(i)*}\otimes\mathcal{H}^{(j)*}\right) =\{0\}\), we get that \(R^{k}=0\), showing the claim.
#### 3.1.3. Exponential and Logarithmic map
Since \(\mathcal{G}^{n}\) and \(\mathcal{P}^{n}\) are a Lie group and its associated Lie algebra, there exists at least a local diffeomorphism \(\exp:\mathcal{P}^{n}\to\mathcal{G}^{n}\) around the \(0\)-element in \(\mathcal{P}^{n}\). As it turns out, this is a global map, simply given by the usual formula for the exponential map. In this subsection, we want to construct this map as well as its inverse \(\log\).
A standard result gives us that for all \(x\in\mathcal{G}^{n},y\in\mathcal{P}^{n}\), \(\epsilon(x)=1\) and \(\epsilon(y)=0\). Recalling the form of \(\epsilon\), we conclude that \(\mathcal{G}^{n}\subset\mathcal{H}^{n*}_{1}\) and \(\mathcal{P}^{n}\subset\mathcal{H}^{n*}_{0}\), where
\[\mathcal{H}^{n*}_{a}:=\left\{a\mathbb{1}+\sum_{i=1}^{n}h_{i}\ \middle|\ h_{i}\in\mathcal{H}^{(i)*}\right\}\,.\]
Note that thanks to the truncation, \(\mathcal{H}^{n*}_{0}\) is nilpotent under the product \(\star\), implying that all power series \(\sum_{k\geq 0}\beta(k)h^{\star k}\) converge for \(h\in\mathcal{H}^{n*}_{0}\). We conclude that the following maps are well-defined:
\[\exp_{n}:\mathcal{H}^{n*}_{0} \to\mathcal{H}^{n*}_{1}\] \[h \mapsto\sum_{k\geq 0}\frac{h^{*k}}{k!}\] \[\log_{n}:\mathcal{H}^{n*}_{1} \to\mathcal{H}^{n*}_{0}\] \[\mathbb{1}+h \mapsto\sum_{k\geq 1}(-1)^{k+1}\frac{h^{*k}}{k}\,.\]
One easily checks that \(\exp_{n},\log_{n}\) are inverse to each other, turning them into diffeomorphisms with \(\exp(0)=\mathbb{1}\). Further, the following result gives us that the Lie algebra \(\mathcal{P}^{n}\) is indeed associated with the Lie group \(\mathcal{G}^{n}\):
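The following sketch (our own illustration, assuming two letters and truncation level \(N=3\); elements are stored as dictionaries mapping words to coefficients) implements the truncated product \(\star\) together with \(\exp_{N}\) and \(\log_{N}\) in the truncated tensor algebra, and checks that they are mutually inverse. The nilpotency of \(\mathcal{H}^{n*}_{0}\) appears as the truncation inside the product.

```python
N = 3  # truncation level

def star(a, b):
    """Truncated concatenation product of dicts {word (tuple): coefficient}."""
    out = {}
    for w1, c1 in a.items():
        for w2, c2 in b.items():
            w = w1 + w2
            if len(w) <= N:  # nilpotency: everything beyond level N is cut
                out[w] = out.get(w, 0.0) + c1 * c2
    return out

def add(a, b, sign=1.0):
    out = dict(a)
    for w, c in b.items():
        out[w] = out.get(w, 0.0) + sign * c
    return out

def exp_N(h):  # h in H_0^{N*}: no empty-word component
    out, power, fact = {(): 1.0}, {(): 1.0}, 1.0
    for k in range(1, N + 1):
        power, fact = star(power, h), fact * k
        out = add(out, {w: c / fact for w, c in power.items()})
    return out

def log_N(g):  # g = 1 + h in H_1^{N*}
    h = add(g, {(): 1.0}, sign=-1.0)
    out, power = {}, {(): 1.0}
    for k in range(1, N + 1):
        power = star(power, h)
        out = add(out, {w: (-1) ** (k + 1) * c / k for w, c in power.items()})
    return out

h = {(1,): 0.5, (2,): -1.0, (1, 2): 2.0}
roundtrip = log_N(exp_N(h))
print(all(abs(roundtrip.get(w, 0.0) - c) < 1e-12 for w, c in h.items()))  # True
```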
**Proposition 3.6**.: _It holds that \(\exp_{n}|_{\mathcal{P}^{n}}:\mathcal{P}^{n}\to\mathcal{G}^{n}\) is a diffeomorphism with inverse \(\log_{n}|_{\mathcal{G}^{n}}\)._
Proof.: The proof is the same as in the tensor algebra case, given in [11], Theorem 3.2.
### Lie algebras and universal enveloping algebra
The goal of this section is to recall the correspondence between Lie algebras and Hopf algebras. For more details on this topic, we refer to [13] for a short introduction as well as [11] and [12] for a deeper introduction. Let us recall the definition of a Lie algebra:
**Definition 3.7**.: A Lie algebra is a vector space L equipped with a bilinear _Lie bracket_\([\cdot,\cdot]:L\times L\to L\), such that
* \([\cdot,\cdot]\) is anti-symmetric.
* The Jacobi-identity holds: For all \(x,y,z\in L\), we have that \[[x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0\,.\]
We say \((L,[\cdot,\cdot])\) is graded if one can decompose it into finite-dimensional vector spaces
\[L=\bigoplus_{n\geq 1}L^{(n)}\,,\]
such that for all \(x\in L^{(k)},y\in L^{(l)}\), we have that \([x,y]\in L^{(l+k)}\).
Note that, unlike the Hopf-algebra case, we start with \(n=1\) and will not have a space \(L^{(0)}\)! We say that \(L\) is nilpotent if there is an \(N\geq 1\) such that for all \(k\geq N\), \(L^{(k)}=\{0\}\). For any graded \(L\), its truncation \(L^{n}=L/\bigoplus_{i\geq n+1}L^{(i)}=\bigoplus_{i=1}^{n}L^{(i)}\) is automatically nilpotent.
**Example 3.8**.: _Let \((A,\circ)\) be an associative algebra with commutator \([x,y]=x\circ y-y\circ x\). Then \((A,[\cdot,\cdot])\) is a Lie algebra._
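As a quick numerical sanity check of Example 3.8 (an illustration of ours, not part of the text), one can verify antisymmetry and the Jacobi identity for the matrix commutator:

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

def br(a, b):
    return a @ b - b @ a  # the commutator Lie bracket

jacobi = br(X, br(Y, Z)) + br(Y, br(Z, X)) + br(Z, br(X, Y))
print(np.allclose(br(X, Y), -br(Y, X)), np.allclose(jacobi, 0))  # True True
```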
Since any Hopf algebra is an associative algebra, we can always equip it with its commutator to turn it into a Lie algebra. However, we are more interested in the Lie subalgebra given by the primitive elements \(\mathcal{P}\): For two primitive elements \(p,q\in\mathcal{P}\), it holds that
\[\Delta[p,q]=\Delta p\circ\Delta q-\Delta q\circ\Delta p=[p,q]\otimes 1+1\otimes[p,q]\,.\]
Thus, \(\mathcal{P}\) is closed under \([\cdot,\cdot]\) and thus a Lie subalgebra. It especially follows that the truncation \(\mathcal{P}^{n}\) is a nilpotent Lie algebra.
If one starts with a Hopf algebra \(\mathcal{H}\), it is in general not possible to recover \(\mathcal{H}\) just from the Lie algebra \(\mathcal{P}\) of primitive elements. However, one can recover a smaller Hopf algebra \(U(\mathcal{P})\), called the universal enveloping algebra of \(\mathcal{P}\), such that \([x,y]=x\star y-y\star x\) holds in \(U(\mathcal{P})\) and \(\mathcal{P}\subset U(\mathcal{P})\) is the set of primitive elements in \(U(\mathcal{P})\). It is constructed as follows: Let \(T(\mathcal{P})\) be the tensor algebra over \(\mathcal{P}\) with grading inherited from \(\mathcal{P}\), i.e. \(p_{1}\otimes p_{2}\otimes\cdots\otimes p_{n}\) has grading \(\sum_{k=1}^{n}|p_{k}|\). Then \(T(\mathcal{P})\) is an associative algebra with the product \(\otimes\). To make sure that \(x\otimes y-y\otimes x\) is the same as the Lie bracket \([x,y]\), we set \(\mathcal{I}\) to be the two-sided ideal generated by \(\tilde{\mathcal{I}}:=\{x\otimes y-y\otimes x-[x,y]\,|\,x,y\in\mathcal{P}\}\). By quotienting out this ideal, we guarantee the desired identity.
**Definition 3.9**.: The space
\[U(\mathcal{P}):=T(\mathcal{P})/\mathcal{I}\]
is called the universal enveloping algebra.
\(U(\mathcal{P})\) becomes a graded Hopf algebra by setting
* the multiplication to be the natural product in \(U(\mathcal{P})\).
* For all \(x\in\mathcal{P}\), \(\Delta x=1\otimes x+x\otimes 1\).
* \(\mathbbm{1}(\alpha)=\alpha\mathbbm{1}\).
* \(\epsilon(\mathbbm{1})=1\) and \(\epsilon(x)=0\) for all \(x\) of degree higher than \(0\).
* \(S(x)=-x\) for all \(x\in\mathcal{P}\).
Here, the coproduct, \(\epsilon\) and \(S\) will be extended to all of \(U(\mathcal{P})\) by demanding that \(\Delta,\epsilon\) are algebra morphisms and \(S\) is an anti-homomorphism. It is a classical result (e.g. [10], [12], [13]) that this turns \(U(\mathcal{P})\) into a graded Hopf algebra.
Furthermore, \(U(\mathcal{P})\) inherits a grading from \(\mathcal{P}\) by setting \(|p|\) to be the grade of \(p\) as an element of the graded Lie algebra \(\mathcal{P}\) and requiring \(|x\ast y|=|x|+|y|\), which extends the grading to all of \(U(\mathcal{P})\).
**Proposition 3.10**.: \(U(\mathcal{P})=\mathcal{H}_{Sh}/\mathcal{I}\)_, where \(\mathcal{H}_{Sh}\) denotes the shuffle-Hopf algebra._
Proof.: It is clear that \(\mathcal{H}_{Sh}/\mathcal{I}\) has all the above properties, as long as it is well-defined. One just needs to prove that \(\epsilon,S\) and \(\Delta\) are well-defined in \(\mathcal{H}_{Sh}/\mathcal{I}\). Note that for all \(z\in\tilde{\mathcal{I}}\), it holds that \(S(z)=-z\in\tilde{\mathcal{I}}\). Thus, \(S\) is well-defined as an anti-homomorphism \(\mathcal{H}_{Sh}/\mathcal{I}\to\mathcal{H}_{Sh}/\mathcal{I}\). Furthermore, \(\epsilon(\tilde{\mathcal{I}})=0\) and \(\Delta z=1\otimes z+z\otimes 1\), implying \(\Delta\tilde{\mathcal{I}}=1\otimes\tilde{\mathcal{I}}+\tilde{\mathcal{I}}\otimes 1\). Thus, all operations are well-defined, showing the claim.
The following statement can be found in [10] as Theorem 2.1:
**Theorem 3.11**.: _Let \(\mathcal{P}\) be a graded Lie-algebra with universal enveloping algebra \(U(\mathcal{P})\). Then \(\mathcal{P}\) is the set of primitive elements in \(U(\mathcal{P})\)._
While in general \(\mathcal{H}\) can be bigger than \(U(\mathcal{P})\), it turns out that they are isomorphic to each other if \(\mathcal{H}\) is cocommutative. This is called the Theorem of Milnor-Moore, and gives a 1-1 correspondence between Lie algebras and Hopf algebras in the cocommutative setting:
**Theorem 3.12** (Milnor-Moore, [11]).: _Let \(\mathcal{H}\) be a connected, graded, cocommutative Hopf algebra. Then \(U(\mathcal{P})\cong\mathcal{H}\)._
This is of special interest in rough path theory, since all of the classically considered Hopf algebras of rough paths are cocommutative, implying that rough paths actually live in universal enveloping algebras. In Section 4.2, we will use this observation to construct elementary differentials from Lie algebra maps.
### Rough paths in general Hopf algebras
Let us fix the notation \(\Delta_{T}:=\{(s,t)\in[0,T]^{2}\,|\,s\leq t\}\) for the 2-dimensional simplex. A rough path over a general Hopf algebra \(\mathcal{H}\) is a two-parameter process \((\mathbb{X}_{s,t})_{(s,t)\in\Delta_{T}}\) in its dual space \(\mathcal{H}^{*}\), which fulfills three properties, namely a Hölder continuity assumption, Chen's identity and that \(\mathbb{X}\) is a character on \(\mathcal{H}\). Recall that the last assumption translates to \(\mathbb{X}_{s,t}\) being a group-like element. We make a further assumption, namely that for all rough paths, \(\mathcal{H}\) is a commutative Hopf algebra. This especially implies that \(\mathbb{X}\) lives in the cocommutative Hopf algebra \(\mathcal{H}^{g}\), which can therefore be seen as a universal enveloping algebra by Theorem 3.12. Note that all three Hopf algebras introduced in Section 3.4 fulfill this assumption.
Thus, our definition of a rough path reads as follows:
**Definition 3.13**.: Let \(\mathcal{H}\) be a graded, connected, commutative Hopf algebra with graded dual \(\mathcal{H}^{g}\). Let \(\alpha\in(0,1)\) and set \(N\) such that \(N\alpha\leq 1\), \((N+1)\alpha>1\). Then \(\mathbb{X}:\Delta_{T}\to\mathcal{G}^{N}\subset\mathcal{H}^{N*}\) is called an \(\alpha\)-rough path, if:
* Chen's identity: \[\mathbb{X}_{s,u}\star\mathbb{X}_{u,t}=\mathbb{X}_{s,t}\,,\] where \(\star\) refers to the truncated product \(\mathcal{G}^{N}\times\mathcal{G}^{N}\to\mathcal{G}^{N}\).
* Hölder continuity: There exists a constant \(C>0\), such that for each \(\tau\in\mathcal{H}^{(i)}\) for some \(0\leq i\leq N\): \[|\langle\mathbb{X}_{s,t},\tau\rangle|\leq C^{|\tau|}\,|t-s|^{|\tau|\alpha}\;.\] We call the infimum over all such constants \(C\) the norm \(\|\mathbb{X}\|\).
_Remark 3.14_.: Note that at this point, we made the choice to think of \(\mathbb{X}_{s,t}\) not as an infinite sum in \(\mathcal{H}^{*}\), but only as a finite sum in \(\mathcal{H}^{N*}\). Thus, this is the point where we choose to follow the sewing approach of [1] instead of the algebraic approach of [10].
Our choice of \(\|\mathbb{X}\|\) allows for the following interaction with the multiplication: For any \(k\geq 1\) and \(\tau\in\mathcal{H}^{(i)}\) for some \(0\leq i\leq N\), one calculates that
\[\left|\left\langle\mathbb{X}_{s,t}^{\star k},\tau\right\rangle\right|=\left|\sum_{\tau_{1},\ldots,\tau_{k}}\left\langle\mathbb{X}_{s,t},\tau_{1}\right\rangle\ldots\left\langle\mathbb{X}_{s,t},\tau_{k}\right\rangle\left\langle\tau_{1}\star\cdots\star\tau_{k},\tau\right\rangle\right|\leq C\,\|\mathbb{X}\|^{|\tau|}\,|t-s|^{\alpha|\tau|}\;, \tag{3.1}\]
for some \(C>0\) depending only on \(\mathcal{H}\), and where we used that \(\left\langle\tau_{1}\star\cdots\star\tau_{k},\tau\right\rangle\neq 0\) only holds for \(|\tau_{1}|+\cdots+|\tau_{k}|=|\tau|\). This immediately allows us to compare the norm of the logarithm of \(\mathbb{X}\) with \(\|\mathbb{X}\|\):
**Lemma 3.15**.: _Let \(\mathbb{L}_{s,t}=\log_{N}(\mathbb{X}_{s,t})\in\mathcal{P}^{N}\) and define \(\left\|\mathbb{L}\right\|\) in the same way as \(\left\|\mathbb{X}\right\|\). Then there are constants \(c,C>0\), depending only on the space \(\mathcal{H}^{N}\), such that_
\[c\left\|\mathbb{X}\right\|\leq\left\|\mathbb{L}\right\|\leq C\left\|\mathbb{X }\right\|\,.\]
Proof.: Let \(\tilde{\mathbb{X}}_{s,t}=\mathbb{X}_{s,t}-\mathbb{1}\). Then the formula
\[\mathbb{L}_{s,t}=\sum_{k=1}^{N}(-1)^{k+1}\frac{\tilde{\mathbb{X}}_{s,t}^{\star k}}{k}\]
together with (3.1) immediately gives us, that there is a \(C>0\) such that
\[\left|\left\langle\mathbb{L}_{s,t},\tau\right\rangle\right|\leq C^{\left|\tau\right|}\left\|\mathbb{X}\right\|^{\left|\tau\right|}\left|t-s\right|^{\alpha\left|\tau\right|}\,,\]
for all \((s,t)\in\Delta_{T}\) and \(\tau\in\mathcal{H}^{(i)},0\leq i\leq N\). This immediately gives us \(\left\|\mathbb{L}\right\|\leq C\left\|\mathbb{X}\right\|\). \(\left\|\mathbb{X}\right\|\leq\frac{1}{c}\left\|\mathbb{L}\right\|\) follows analogously from \(\mathbb{X}_{s,t}=\exp_{N}(\mathbb{L}_{s,t})\).
### Hopf algebras relevant to Rough path theory
In this section, we want to review the classical Hopf algebras used in rough path theory:
#### 3.4.1. Shuffle algebra and tensor algebra
The classical Hopf algebra considered in rough path theory ([10], [11], [12]) is the tensor algebra acting on the shuffle algebra. An in-depth look at the algebra itself can be found in [14].
Let \(V\) be a (finite-dimensional) vector space and consider the set of tensor polynomials
\[T(V):=\bigoplus_{k\geq 0}V^{\otimes k}\,.\]
We will denote the product \(m:T(V)\otimes T(V)\to T(V)\) with \(\cdot\), so that \(\otimes\) is reserved for elements \(x\otimes y\in T(V)\otimes T(V)\). Thus, \(T(V)\) consists of words \(e_{i_{1}}\dots e_{i_{n}}\) with \(i_{1},\dots,i_{n}\in\{1,\dots,d\}\) and \(\{e_{1},\dots,e_{d}\}\) being a basis of \(V\), if it is finite-dimensional. \(T(V)\) is naturally equipped with the product \(\cdot\), which concatenates two words \(u\cdot v=uv\).
We can also equip it with the shuffle product \(\shuffle\) as follows: Given two words \(w=w_{1}\otimes\dots\otimes w_{n}\), \(u=u_{1}\otimes\dots\otimes u_{k}\), \(w\shuffle u\) is the sum over all words with the letters \(w_{1},\dots,w_{n},u_{1},\dots,u_{k}\), such that the original order of letters in \(w\) and \(u\) gets preserved. It is formally defined recursively by
\[w\shuffle 1 =w\] \[w\shuffle u =(w\shuffle(u_{1}\dots u_{k-1}))\otimes u_{k}+((w_{1}\dots w_{n-1 })\shuffle u)\otimes w_{n}\]
where \(\mathbb{1}\) is the empty word and \(w=(w_{1}\dots w_{n}),u=(u_{1}\dots u_{k})\). Note that \(\shuffle\) is a commutative product. We denote the dual of the multiplication \(\cdot\) with \(\Delta\), which is just given by
\[\Delta w=\sum_{w_{1}\cdot w_{2}=w}w_{1}\otimes w_{2}\,.\]
On the other hand, we denote the dual of the shuffle product by the coproduct \(\Delta_{\shuffle}\), also called the _deshuffle_. We then set \(\mathcal{H}=(T(V),\shuffle,\Delta)\) to be the shuffle-algebra (where \(\mathbb{1}\) is the empty word and \(\epsilon(\mathbb{1})=1\), \(\epsilon(w)=0\) for all non-empty \(w\)), turning \(\mathcal{H}\) into a bialgebra.
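The recursive definition of \(\shuffle\) translates directly into code. The following sketch (our own illustration; words are tuples of letters) implements the shuffle product and the deconcatenation coproduct, and checks commutativity as well as the count \(\binom{m+n}{m}\) of shuffles of words of lengths \(m\) and \(n\):

```python
from math import comb

def shuffle(w, u):
    """The shuffle of w and u as a dict {word: multiplicity}, via the recursion above."""
    if not w:
        return {u: 1}
    if not u:
        return {w: 1}
    out = {}
    for word, m in shuffle(w, u[:-1]).items():   # (w sh (u_1...u_{k-1})) x u_k
        out[word + u[-1:]] = out.get(word + u[-1:], 0) + m
    for word, m in shuffle(w[:-1], u).items():   # ((w_1...w_{n-1}) sh u) x w_n
        out[word + w[-1:]] = out.get(word + w[-1:], 0) + m
    return out

def delta(w):
    """Deconcatenation: all splittings w = w1 . w2."""
    return [(w[:i], w[i:]) for i in range(len(w) + 1)]

a, b = (1, 2), (3, 4, 5)
print(sum(shuffle(a, b).values()) == comb(len(a) + len(b), len(a)))  # True
print(shuffle(a, b) == shuffle(b, a))                                # True
print(delta((1, 2, 3)))
```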
We can introduce a grading on \(\mathcal{H}\) whenever there is a grading on \(V\): Set \(\left|(w_{1}\dots w_{n})\right|=\left|w_{1}\right|+\dots+\left|w_{n}\right|\) for all words \(w\) such that each letter \(w_{i}\in V^{(j)}\) for some \(j\in\mathbb{N}\) and all \(i=1,\dots,n\). If \(V=V^{(1)}\), the grade of each word is simply its length. In this case, we say that the grading of \(T(V)\) is homogeneous. Otherwise, we call the grading inhomogeneous. \(\mathcal{H}\) is a connected graded bialgebra and thus a Hopf algebra by [11].
Its graded dual is given by the tensor algebra \(\mathcal{H}^{g}=(T(V),\cdot,\Delta_{\shuffle})\) with the same \(\mathbb{1}\) and \(\epsilon\). In this case, we can directly identify the primitive elements as the single letter words: \(\mathcal{P}=V\subset\mathcal{H}^{g}\).
Rough paths in \((T(V),\cdot,\Delta_{\shuffle})\) are called (weakly) geometric rough paths. Since we never use non-weakly geometric rough paths in this paper, we drop the word weakly and simply refer to rough paths in \(T(V)\) as geometric rough paths.
#### 3.4.2. Hopf algebras of rooted trees
The classical Hopf algebras for branched rough path theory are considered over rooted trees [14]. In this context, we need to differentiate between _planarly_ and _non-planarly_ branched rough paths, which are constructed over ordered and unordered rooted trees, respectively.
To get started, we call an acyclic connected graph with finitely many vertices a tree. If there is a preferred vertex, we call that vertex the root and the tree rooted. When drawing a rooted tree, we will always draw the root as the bottommost vertex. Given an alphabet \(A\), a tree is called an \(A\)-decorated tree if each vertex is equipped with a decoration \(a\in A\). For a tree with root decorated by \(i\) and children \(\tau_{1},\ldots,\tau_{n}\), we use the notation
\([\tau_{1},\ldots,\tau_{n}]_{i}\), pictured as the trees \(\tau_{1},\ldots,\tau_{n}\) attached from above to a common root decorated by \(i\) (figure omitted).
We also introduce the operators \(B_{a}^{+}(\tau_{1}\ldots\tau_{n})=[\tau_{1},\ldots,\tau_{n}]_{a}\), which map forests to trees by adding a root decorated with \(a\), as well as the root-cutting operator \(B^{-}([\tau_{1},\ldots,\tau_{n}]_{i})=\tau_{1}\ldots\tau_{n}\), which maps trees to forests.
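A possible computer encoding (an illustrative choice of ours, not the paper's) represents a decorated tree as the pair of its root decoration and the tuple of its children; sorting the children makes the encoding insensitive to their order, which matches the unordered trees discussed next.

```python
def B_plus(a, forest):
    """Add a root decorated with `a` below the trees of `forest`."""
    return (a, tuple(sorted(forest)))

def B_minus(tree):
    """Cut the root, returning the forest of its children."""
    return tree[1]

leaf = lambda a: B_plus(a, ())
t = B_plus(1, (leaf(2), leaf(3)))        # the tree [ o_2, o_3 ]_1
print(t)                                  # (1, ((2, ()), (3, ())))
print(B_minus(t) == (leaf(2), leaf(3)))   # True
```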
We say a tree is _ordered_ (or _planar_) if the children of each vertex are equipped with an ordering. If none of them have an ordering, we call the tree _unordered_ (or _non-planar_). We denote the set of unordered trees with \(\mathcal{UT}\) and the set of ordered trees with \(\mathcal{OT}\). For example, in \(\mathcal{UT}\) two trees that differ only in the ordering of the children of their vertices are equal, while they are distinct in \(\mathcal{OT}\) (the pictorial example is omitted here). Monomials of trees are called forests; we denote the set of unordered forests by \(\mathcal{UF}\) and the set of ordered forests by \(\mathcal{OF}\).

**Non-planar case: The Connes-Kreimer and Grossman-Larson Hopf algebras:** The coproduct on unordered forests is built from _admissible cuts_: a set of edges of a tree \(\tau\) is called an admissible cut if every path from the root to a leaf contains at most one cut edge. Each admissible cut splits \(\tau\) into a pair \((f^{(1)},f^{(2)})\), where \(f^{(1)}\) is given by the "cut off" trees gathered as a forest, while \(f^{(2)}\) is the left over tree \(\tau\) after cutting off \(f^{(1)}\). For example
(an admissible cut of a tree and the corresponding pair \(f^{(1)}\otimes f^{(2)}\); figure omitted).
We can then set \(\Delta\tau:=\tau\otimes 1+\sum_{\text{adm. cuts}}f^{(1)}\otimes f^{(2)}\). For a forest \(f=\tau_{1}\ldots\tau_{n}\), we set \(\Delta f:=\Delta\tau_{1}\ldots\Delta\tau_{n}\). With these operations, \(\mathcal{H}_{CK}=(\operatorname{span}(\mathcal{UF}),\cdot,\Delta,\mathds{1},\epsilon)\) forms a bialgebra. We can equip it with a grading by setting \(|f|\) to be the number of vertices in the forest \(f\). Hence, \(\mathcal{H}_{CK}\) is a graded bialgebra and can be equipped with an antipode to become a Hopf algebra. It is called the Connes-Kreimer Hopf algebra.
We denote its graded dual by \(\mathcal{H}_{GL}\) since it forms the Grossman-Larson algebra. We will not associate any basis element in \(\mathcal{UF}\) with its dual basis, but rather denote by any forest \(f\in\mathcal{H}_{GL}\) the element of \(\mathcal{H}_{CK}^{*}\) such that
\[\langle f,g\rangle=sg(f)\delta_{f,g}\]
holds for all \(g\in\mathcal{U}\mathcal{F}\), where \(sg(f)\) is the _symmetry factor_ of \(f\), which is recursively given by
\[sg(\mathds{1})=1\qquad sg(\tau_{1}\ldots\tau_{n})=n!\prod_{i=1}^{n}sg(\tau_{i})\qquad sg([\tau_{1},\ldots,\tau_{n}]_{a})=sg(\tau_{1}\ldots\tau_{n})\,,\]
for all trees \(\tau_{1},\ldots,\tau_{n}\in\mathcal{UT}\). This construction causes the dual product \(\star\) in \(\mathcal{H}_{GL}\) defined by \(\langle f\star g,h\rangle=\langle f\otimes g,\Delta h\rangle\) to have the following form: For \(f=\tau_{1}\ldots\tau_{n}\), \(\tau_{i}\in\mathcal{UT}\), \(i=1,\ldots,n\), as well as a \(\sigma\in\mathcal{UT}\), we define the grafting product \(f\curvearrowright\sigma\) to be the sum over all possibilities to grow the trees \(\tau_{1},\ldots,\tau_{n}\) out of vertices of \(\sigma\). Then, \(\star\) can be defined by adding a root to the forest \(g\) and grafting \(f\) onto the new tree before removing the root again: \(f\star g:=B^{-}(f\curvearrowright B^{+}_{1}(g))\). Note that for two trees \(\tau,\sigma\in\mathcal{UT}\), it holds that \(\tau\star\sigma=\tau\sigma+\tau\curvearrowright\sigma\). Let us give an example of this operation:
(an example of the product \(\star\) of two trees; figure omitted).
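Continuing the tree encoding from above (again an illustration of ours), the grafting \(\tau\curvearrowright\sigma\) of a single tree can be computed by attaching \(\tau\) at every vertex of \(\sigma\) and collecting the results with multiplicities; together with \(\tau\star\sigma=\tau\sigma+\tau\curvearrowright\sigma\), this already determines \(\star\) on trees.

```python
def B_plus(a, forest):
    return (a, tuple(sorted(forest)))  # encoding from the previous sketch

leaf = lambda a: B_plus(a, ())

def graft(t, sigma):
    """All ways to grow the tree t out of one vertex of the tree sigma.

    Returns a dict {tree: multiplicity} in the nested-tuple encoding."""
    a, children = sigma
    out = {B_plus(a, children + (t,)): 1}   # attach t at the root of sigma
    for i, child in enumerate(children):    # or attach t inside one child
        for grown, m in graft(t, child).items():
            new = B_plus(a, children[:i] + (grown,) + children[i + 1:])
            out[new] = out.get(new, 0) + m
    return out

cherry = B_plus(1, (leaf(2),))              # the tree [ o_2 ]_1
for tree, mult in graft(leaf(3), cherry).items():
    print(mult, tree)
# Two terms: o_3 grafted onto the root, and o_3 grafted onto the leaf o_2.
```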
The dual coproduct \(\delta\) defined by \(\langle\delta f,h\otimes g\rangle=\langle f,hg\rangle\) is given by the deconcatenation, corrected with the appropriate symmetry factors:
\[\delta f=\sum_{gh=f}\frac{sg(f)}{sg(g)sg(h)}g\otimes h\,.\]
\(\mathds{1},\epsilon\) and the grading are given as in \(\mathcal{H}_{CK}\), and the antipode is simply the dual operator of the antipode \(S\) in \(\mathcal{H}_{CK}\).
In \(\mathcal{H}_{GL}\), the primitive elements are given by the vector space spanned by the set \(\mathcal{UT}\) of trees, as Proposition 2.11 from [15] shows.
A rough path in \(\mathcal{H}_{GL}\) is called a non-planarly branched rough path.
**Planar case: The Munthe-Kaas-Wright Hopf algebra:** Planarly branched rough paths [1] need to be constructed over a Hopf algebra built from the ordered forests \(\mathcal{O}\mathcal{F}\). This Hopf algebra was constructed in [11] and played a vital role in the analysis of Lie-group integrators [12], [13]. We will denote it with \(\mathcal{H}_{MKW}\) and its graded dual with \(\mathcal{H}_{MKW}^{g}\). Note that in the 1989 paper of Grossman-Larson [1], they discuss both unordered trees and ordered trees, which makes it sensible to also call \(\mathcal{H}_{MKW}^{g}\) a Grossman-Larson Hopf algebra. To avoid confusion, we will reserve this name for the algebra \(\mathcal{H}_{GL}\) over unordered trees and strictly speak of the Munthe-Kaas-Wright algebra and its graded dual for ordered trees.
As mentioned before, \(\mathcal{H}_{MKW}\) is given by the tensor algebra (or more precisely, the shuffle algebra) \(T(\mathcal{O}\mathcal{T})\) over the ordered trees, so that its basis is given by the ordered forests \(\mathcal{O}\mathcal{F}\). We equip it with the empty forest as unit element \(\mathds{1}=\emptyset\) and the shuffle product \(\shuffle\) to form an associative, commutative algebra. The counit is given by \(\epsilon(\mathds{1})=1\) and \(\epsilon(f)=0\) for all non-empty forests \(f\), as before. The coproduct, denoted by \(\Delta\), can be constructed as the sum over all _full left admissible cuts_ ([11], Def. 6), but it is easier to construct a product
\(\star\) on \(\mathcal{H}^{g}_{MKW}\) and define \(\Delta\) to be its dual. The grading on \(\mathcal{H}_{MKW}\) is given by the number of vertices, as before.
The graded dual \(\mathcal{H}^{g}_{MKW}\) can be equipped with the dual operation \(\delta\) (making it the deshuffle) and uses the same unit and counit as \(\mathcal{H}_{MKW}\). In the ordered case, we do not need symmetry factors and can simply identify the elements in \(\mathcal{H}^{g}_{MKW}\) with linear combinations of \(\mathcal{O}\mathcal{F}\) by requiring
\[\langle f,g\rangle=\delta_{f,g}\]
for all ordered forests \(f,g\in\mathcal{O}\mathcal{F}\). The grading on \(\mathcal{O}\mathcal{F}\) is again given by the number of vertices in a forest. To make \(\mathcal{H}_{MKW},\mathcal{H}^{g}_{MKW}\) into graded bialgebras and thus Hopf algebras, it remains to construct the product \(\star\) on \(\mathcal{H}^{g}_{MKW}\): For two trees \(\tau,\sigma\), we set the _left grafting_ \(\tau\curvearrowright_{l}\sigma\) to be the sum over all possibilities to grow \(\tau\) out of a vertex of \(\sigma\) _as the left-most child_ of said vertex. For a forest \(f=\tau_{1}\ldots\tau_{n}\), we set \(f\curvearrowright_{l}\sigma\) to be the sum over all possibilities to grow the trees \(\tau_{1},\ldots,\tau_{n}\) out of vertices of \(\sigma\) as the left-most child, with the extra condition that if two or more trees \(\tau_{1},\ldots,\tau_{k}\), \(k\geq 2\), grow out of the same vertex \(p\) in \(\sigma\), then they need to have the same order as children of \(p\) as they had as trees in \(f\) (the pictorial example is omitted here).
As before, we can construct \(\star\) by adding and removing a root in a smart way: \(f\star g:=B^{-}(f\curvearrowright_{l}B^{+}_{1}(g))\). Since \(\mathcal{H}^{g}_{MKW}\) as a coalgebra is the shuffle coalgebra over \(T(\mathcal{O}\mathcal{T})\), we immediately get from [10] that the primitive elements are given by the free Lie algebra over \(\mathcal{O}\mathcal{T}\) with respect to the commutator \([f,g]_{\otimes}=fg-gf\). It should be noted that this is not the commutator of \(\mathcal{H}^{g}_{MKW}\) as a Hopf algebra, but since \(\star\) is associative and \(\delta\) is a \(\star\)-homomorphism, \(\mathcal{P}\) is also a Lie algebra with respect to the \(\star\)-commutator \([f,g]_{\star}=f\star g-g\star f\).
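For comparison, here is the analogous sketch (ours) of the left grafting of a single tree in the ordered setting: children tuples are never sorted, and the tree is inserted as the left-most child of each vertex in turn.

```python
def left_graft(t, sigma):
    """All ways to grow t as the left-most child of one vertex of sigma."""
    a, children = sigma                      # ordered trees: no sorting
    results = [(a, (t,) + children)]         # t as left-most child of the root
    for i, child in enumerate(children):     # or recursively inside a child
        for grown in left_graft(t, child):
            results.append((a, children[:i] + (grown,) + children[i + 1:]))
    return results

leaf = lambda a: (a, ())
for tree in left_graft(leaf(3), (1, (leaf(2),))):
    print(tree)
# (1, ((3, ()), (2, ())))  -- left-most child of the root
# (1, ((2, ((3, ()),)),))  -- left-most child of the vertex decorated 2
```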
The rough paths in \(\mathcal{H}^{g}_{MKW}\) are called planarly branched rough paths.
_Remark 3.16_.: With the use of so-called arborification maps, one can show that any geometric rough path gives rise to a canonical non-planarly branched rough path as well as a planarly branched rough path. However, it is in general not true that a non-planarly branched rough path gives rise to a planarly branched rough path or vice versa. For more information, we refer to [10].
### Vector fields, differential operators and linear connections
Let us use this section to recall some notions from differential geometry: Let \(M\) be a smooth manifold. A smooth vector field on \(M\) is a map \(V:M\to TM\), such that \(m\mapsto V(m)\in T_{m}M\). In coordinates, every vector field can be expressed in the form
\[V^{\phi}(m)=\sum_{n=1}^{d}c_{n}(m)\partial_{n}\,,\]
where \(\partial_{n}\) denotes the partial derivative with respect to the \(n\)-th coordinate of the chart \(\phi\). \(V\) acts on \(C^{\infty}(M)\) by
\[V\psi(m)=\sum_{n=1}^{d}c_{n}(m)\partial_{n}\psi(m)\,,\]
which gives us a map mapping smooth vector fields into smooth differential operators on \(M\). Let us recall the definition of a smooth differential operator:
**Definition 3.17**.: A smooth map \(F:C^{\infty}(M)\to C^{\infty}(M)\) is called a smooth differential operator if there is an \(N>0\) such that \(F\) has the following expression in each coordinate chart:
\[F\psi(x)=\sum_{|w|\leq N}c_{w}(x)\partial_{w}\psi(x)\,,\]
where \(\partial_{w}\psi=\partial_{w_{1}}\ldots\partial_{w_{|w|}}\psi\) for each word \(w=w_{1}\ldots w_{n}\) over the alphabet \(A=\{1,\ldots,d\}\), and \(c_{w}\in C^{\infty}(U)\) for the open set \(U\subset M\) on which the coordinate function lives. If at least one \(c_{w}\neq 0\) for a \(|w|=N\), we call \(N\) the order of \(F\). If \(F\) is of order \(1\), we call it a vector field. We denote by \(\mathcal{D}\) the space of differential operators and by \(\mathcal{V}\) the space of vector fields.
It should be noted that the \(c_{w}\) are coordinate dependent, but the order of \(F\) is not. If we have a vector field \(F\), one can regain the map \(V:M\to TM\) by applying \(F\) to the coordinate functions to get \(F\phi(m)\in\mathbb{R}^{d}\cong T_{m}M\). It is further well known that the vector fields are uniquely characterized as the differential operators \(F\in\mathcal{D}\) such that the Leibniz rule holds: For all \(\phi,\psi\in C^{\infty}(M)\),
\[F(\phi\cdot\psi)=F(\phi)\cdot\psi+\phi\cdot F(\psi)\,.\]
\((\mathcal{D},\circ,\mathrm{Id})\) forms an associative algebra, where \(\mathrm{Id}\) is the identity map and \(\circ\) is composition. The set of vector fields \(\mathcal{V}\) equipped with the commutator \([V,U]=V\circ U-U\circ V\) form a Lie algebra.
Given a smooth vector field \(V\) on \(M\), the initial value problem
\[\begin{cases}dZ_{t}=V(Z_{t})\,dt\\ Z_{0}=z\in M\end{cases}\]
has a unique solution, which we denote by \(\mu_{t}z=Z_{t}\), for all \(t\) up to some explosion time \(T(z)\). Since everything is smooth, we can choose \(T:M\to\mathbb{R}_{+}\) to be a smooth map on \(M\). We assume that \(M\) does not have a boundary, so that \(T(z)>0\) for all \(z\in M\). It follows that we can assign each compact set \(K\subset M\) an explosion time \(T(K)=\inf_{z\in K}T(z)>0\). Furthermore, it is well known that \(\mu_{t}z\) is Lipschitz-continuous in the starting value \(z\): If we have \(x,y\in K\subset O\) for some compact set \(K\) and open neighbourhood \(O\subset M\), such that \(\mu_{t}x,\mu_{t}y\in O\), and we have a coordinate chart \(\phi:O\to U\subset\mathbb{R}^{d}\), it holds that
\[\left|\phi(\mu_{t}(x))-\phi(\mu_{t}(y))\right|\leq L(\phi,K,t)\left|\phi(x)- \phi(y)\right|\,,\]
for \(x,y\in K\), \(t\leq T(K)\), with \(L(\phi,K,t)\to 1\) as \(t\to 0\). If \(M=\mathbb{R}^{d}\) is just the flat space, we can find \(T\) explicitly: For any \(K\subset\mathbb{R}^{d}\) compact, we can equip \(V\) with the norm
\[\left\|V\right\|_{K}=\left\|V\right\|_{\infty,K}+\left\|V\right\|_{\text{Lip},K}\,,\]
where \(\left\|V\right\|_{\infty,K}=\sup_{x\in K}\left|V(x)\right|\) and \(\left\|V\right\|_{\text{Lip},K}=\sup_{x\neq y\in K}\frac{\left|V(x)-V(y)\right|}{\left|x-y\right|}\) are just the usual supremum norm and Lipschitz norm. Let \(\hat{K}\) be the interior of \(K\). Using \(Z_{t}=z+\int_{0}^{t}V(Z_{s})ds\), we get that \(Z_{t}\) cannot leave \(K\) as long as \(T\left\|V\right\|_{K}\leq\operatorname{dist}(z,\hat{K}^{C})\). By a standard fixed-point argument, it follows that we get a solution up to time \(T(z)\) fulfilling
\[T(z)\left\|V\right\|_{K}\leq\min(1,\operatorname{dist}(z,\hat{K}^{C}))\,. \tag{3.2}\]
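As a small numerical illustration of the bound (3.2) (ours, not from the text): for \(V(x)=x^{2}\), \(K=[-2,2]\) and \(z=1\), the guaranteed existence time is short, while the true solution \(x(t)=1/(1-t)\) exists up to \(t=1\). The bound is conservative because it certifies existence only while the solution stays inside \(K\).

```python
K_lo, K_hi, z = -2.0, 2.0, 1.0
V = lambda x: x * x

sup_norm = max(abs(V(K_lo)), abs(V(K_hi)))  # sup of |V| on K (attained at the ends)
lip_norm = 2 * max(abs(K_lo), abs(K_hi))    # Lipschitz constant of x^2 on K
norm_K = sup_norm + lip_norm                # the norm ||V||_K from above
dist_to_boundary = min(z - K_lo, K_hi - z)

T_guaranteed = min(1.0, dist_to_boundary) / norm_K
print(f"existence time guaranteed by (3.2): {T_guaranteed:.3f}")  # 0.125
print("true blow-up time of dx/dt = x^2, x(0) = 1: 1.000")
```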
### Linear connections
For non-geometric rough paths, one requires more structure on a manifold to solve an RDE, as demonstrated by [1]. This extra structure is given by a linear connection ([10], [11], [12]). Note that from this chapter onwards, we will use Einstein summation notation.
Let \(\mathcal{V}(M)\) denote the set of smooth vector fields on \(M\) (a smooth \(d\)-dimensional manifold). A linear connection is a form to differentiate a vector field in the direction of another vector field:
**Definition 3.18**.: A smooth, linear connection (or covariant derivative) is a smooth map \(\nabla:\mathcal{V}(M)\times\mathcal{V}(M)\to\mathcal{V}(M)\), which is linear over \(C^{\infty}(M)\) in the first component and fulfills the Leibniz rule in the second component:
\[\nabla_{fU}V =f\nabla_{U}V\] \[\nabla_{U}fV =U(f)V+f\nabla_{U}V\,.\]
Here, \(U,V\in\mathcal{V}(M)\) and \(f\in C^{\infty}(M)\) are arbitrary. We call \(\nabla_{U}V\) the covariant derivative of \(V\) in direction \(U\). For each fixed \(V\), the map \(\nabla V:\mathcal{V}(M)\to\mathcal{V}(M)\), \(U\mapsto\nabla_{U}V\), is called the total covariant derivative of \(V\).
One can think of \(\nabla_{U}V\) as a directional derivative of \(V\) in direction \(U\). In coordinates, the easiest way to deal with connections is to denote by \(\partial_{i}\) the (local) vector field generated by the coordinates and denote the Christoffel symbols as the smooth functions \(\Gamma_{i,j}^{k}\) in \(C^{\infty}(O)\) for some open set \(O\subset M\) given by
\[\nabla_{\partial_{i}}\partial_{j}=\Gamma_{i,j}^{k}\partial_{k}\,.\]
Since \(\nabla\) is smooth, the Christoffel symbols will be smooth functions. We introduce higher order covariant derivatives as in [10], pages 53-54: We denote the tangent space of \(M\) at a point \(x\in M\) by \(T_{x}M\) and its dual space by \(T_{x}M^{*}\). A smooth map from \(M\) into the tangent bundle is called a vector field and a map into the cotangent bundle \(TM^{*}\) is called a covector field. We denote \(T_{n}M:=TM^{\otimes n}\), \(T^{k}M:=(TM^{*})^{\otimes k}\) and, last but not least, \(T_{n}^{k}M:=T_{n}M\otimes T^{k}M\). We can identify \(T_{n}^{k}M\) with smooth sections of linear maps
\[F(x):(T_{x}M^{*})^{\otimes n}\otimes(T_{x}M)^{\otimes k}\to\mathbb{R}\]
and thus, we can always find smooth maps \(F_{j_{1},\dots,j_{k}}^{i_{1},\dots,i_{n}}:M\to\mathbb{R}\) such that
\[F(x)=F_{j_{1},\dots,j_{k}}^{i_{1},\dots,i_{n}}(x)\partial_{i_{1}}\otimes \dots\otimes\partial_{i_{n}}\otimes d^{j_{1}}\otimes\dots\otimes d^{j_{k}}.\]
With these spaces, we can define the n-th covariant derivative quite easily. Let \(V\) be a vector field. Then the covariant derivative in the direction of \(V\) is defined by:
* For a function \(\phi\in T_{0}^{0}M=C^{\infty}(M)\), we have \(\nabla_{V}\phi=V\phi\). If \(U\) is another vector field, \(\nabla_{V}U\) is given by the covariant derivative from Definition 3.18.
* For an \(F\in T_{n}^{k}M\), we define \[\nabla_{V}F(\omega_{1},\dots,\omega_{n},U_{1},\dots,U_{k})=V(F(\omega_{1},\dots,\omega_{n},U_{1},\dots,U_{k}))\] (3.3) \[-\sum_{j=1}^{n}F(\omega_{1},\dots,\nabla_{V}\omega_{j},\dots,\omega_{n},U_{1},\dots,U_{k})\] \[-\sum_{j=1}^{k}F(\omega_{1},\dots,\omega_{n},U_{1},\dots,\nabla_{V}U_{j},\dots,U_{k})\]
Note that this definition seems to require us to first define \(\nabla_{V}\omega\) for a vector field \(V\) and a one-form \(\omega\). Since a one-form \(\omega(U)\) only has a vector field as argument, (3.3) directly gives us \(\nabla_{V}\omega(U)=V(\omega(U))-\omega(\nabla_{V}U)\), which is well-defined. In coordinates, it reads
\[\nabla_{V}\omega=(V^{\alpha}\partial_{\alpha}\omega_{i}-V^{\alpha}\omega_{k} \Gamma_{\alpha,i}^{k})d^{i}\,. \tag{3.4}\]
Thus, (3.3) is well-defined for all \(F\in T_{n}^{k}M\).
We define the total covariant derivative of \(F\) by \(\nabla F\in T_{n}^{k+1}M\),
\[\nabla F(\omega_{1}\dots,\omega_{n},V,U_{1},\dots,U_{k}):=\nabla_{V}F(\omega_ {1},\dots,\omega_{n},U_{1},\dots,U_{k}).\]
The \(m\)-th covariant derivative of \(F\) is then simply given inductively as \(\nabla^{m}F\in T_{n}^{k+m}M\), \(\nabla^{m}F:=\nabla(\nabla^{m-1}F)\).
In practice, we are only interested in the \(m\)-th covariant derivative of functions and vector fields, for which the above calculations can be somewhat simplified. Let us pick a coordinate chart, such that we can express all vector fields via \(V=V^{i}\partial_{i}\) and the one-forms as \(\omega=\omega_{j}d^{j}\). As discussed above, the covariant derivative of a vector field is then given via
\[\nabla_{V}U=(V^{i}\partial_{i}U^{k}+V^{i}U^{j}\Gamma_{i,j}^{k})\partial_{k},\]
and the covariant derivative of a function is simply \(\nabla_{V}\phi=V^{i}\partial_{i}\phi\). Using (3.4) on the derivative of a function \(\phi\in C^{\infty}(M)\) given by \(\omega=d\phi=\partial_{i}\phi d^{i}\), we get \(\nabla d\phi=\nabla^{2}\phi=(\partial_{i}\partial_{j}\phi-\Gamma_{i,j}^{k}\partial_{k}\phi)d^{i}\otimes d^{j}\). More generally, we get the \(n\)-th covariant derivative of \(\phi\) inductively via
\[\nabla^{n}\phi(U_{1},\dots,U_{n})=U_{1}(\nabla^{n-1}\phi(U_{2},\dots,U_{n}))- \sum_{k=2}^{n}\nabla^{n-1}\phi(U_{2},\dots,\nabla_{U_{1}}U_{k},\dots,U_{n})\,,\]
which is again a function in \(C^{\infty}(M)\). For vector fields, we can use the relation \(V\phi=V(d\phi)\) to get a inductive formula with one more term:
\[\nabla^{n}V(U_{1},\dots,U_{n})\phi=U_{1} (\nabla^{n-1}V(U_{2},\dots,U_{n})\phi)\] \[-\sum_{k=2}^{n}\nabla^{n-1}V(U_{2},\dots,\nabla_{U_{1}}U_{k}, \dots,U_{n})(\phi)\] \[-\nabla^{n-1}V(\nabla_{U_{1}}d\phi,U_{2},\dots,U_{n})\,.\]
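In a chart, the formulas above are mechanical enough to check symbolically. The following sketch (assuming the sympy library; the Christoffel symbols are those of the flat metric \(dr^{2}+r^{2}d\theta^{2}\) in polar coordinates, an example of ours) evaluates \((\nabla_{V}U)^{k}=V^{i}\partial_{i}U^{k}+V^{i}U^{j}\Gamma_{i,j}^{k}\):

```python
import sympy as sp

r, th = sp.symbols("r theta", positive=True)
coords = (r, th)

# Christoffel symbols of flat R^2 in polar coordinates, as {(k, i, j): value}
Gamma = {(0, 1, 1): -r, (1, 0, 1): 1 / r, (1, 1, 0): 1 / r}
G = lambda k, i, j: Gamma.get((k, i, j), 0)

V = (sp.Integer(1), sp.Integer(0))  # the radial field d_r
U = (sp.Integer(0), sp.Integer(1))  # the angular field d_theta

nabla_VU = [
    sum(V[i] * sp.diff(U[k], coords[i]) for i in range(2))
    + sum(V[i] * U[j] * G(k, i, j) for i in range(2) for j in range(2))
    for k in range(2)
]
print(nabla_VU)  # [0, 1/r], i.e. nabla_{d_r} d_theta = (1/r) d_theta
```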
From an algebraic point of view, a connection gives us a non-commutative, non-associative multiplication \(\mathcal{V}\otimes\mathcal{V}\to\mathcal{V}\), \(U\otimes V\mapsto U\triangleright V:=\nabla_{U}V\) on the space of vector fields. In this case, it is common to control this multiplication with the use of the _torsion_ and _curvature_ of the connection, which are given by
\[T(X,Y) =\nabla_{X}Y-\nabla_{Y}X-[X,Y]\] \[R(X,Y)Z =\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z\]
for vector fields \(X,Y,Z\), where \([X,Y]=X\circ Y-Y\circ X\) is the commutator of the two vector fields. While \((\mathcal{V},\triangleright)\) is not an associative algebra, one can put extra conditions on \(T\) and \(R\) to turn it into a pre-Lie or post-Lie algebra, see Section 5 for details.
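As a minimal example, take \(M=\mathbb{R}^{d}\) with the flat connection \(\nabla_{X}Y=X^{\alpha}(\partial_{\alpha}Y^{\beta})\partial_{\beta}\), i.e. all Christoffel symbols vanish. Then
\[\nabla_{X}Y-\nabla_{Y}X=(X(Y^{\beta})-Y(X^{\beta}))\partial_{\beta}=[X,Y]\qquad\text{and}\qquad\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z=[X,Y](Z^{\beta})\partial_{\beta}=\nabla_{[X,Y]}Z\,,\]
so \(T=0\) and \(R=0\); this is the setting of the classical, flat-space theory.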
## 4. Pseudo bialgebra maps and constructing local flows from rough paths
### Hopf algebraic structures on \(\mathcal{D}\) and pseudo bialgebra maps
As mentioned above, \(\mathcal{D}\) forms an associative algebra with composition as a product. If one chooses a reference point \(m\in M\), it further acts on the function space \(C^{\infty}(M)\) via the pairing
\[\langle F,\phi\rangle=F\phi(m)\,.\]
Since \(C^{\infty}(M)\) is itself an associative algebra with pointwise multiplication as a product, this raises the question of whether we can turn \(\mathcal{D}\) into a bialgebra or even a Hopf algebra by finding a coproduct such that \(\langle\Delta F,\phi\otimes\psi\rangle=\langle F,\phi\cdot\psi\rangle\). The short answer is no: since \(C^{\infty}(M)\) is infinite-dimensional, the dual operation of \(\cdot\) is not a coproduct, and \(\mathcal{D}\) with the pairing described above is clearly not the whole dual space of \(C^{\infty}(M)\).
It is however possible to construct Hopf algebras on subsets of \(\mathcal{D}\): The simplest example is given by Example 2.2 of [10], but more advanced structures have been found for example in [11] or [12].
However, in general, it is not necessary to precisely describe the Hopf algebraic structures of subsets of \(\mathcal{D}\): The approach generally taken in Butcher series theory, as well as in rough path theory, is to construct abstract Hopf algebras \(\mathcal{H},\mathcal{H}^{g}\) beforehand and to find suitable mappings, also called _elementary differentials_ [11], to map \(\mathcal{H}^{g}\) into \(\mathcal{D}\).
In our paper, this class of elementary differentials will be given by what we call _pseudo bialgebra maps_, which are supposed to connect \(\mathcal{H}\) and \(\mathcal{D}\) in such a way that
* \(\star\) in \(\mathcal{H}\) corresponds to the composition \(\circ\) in \(\mathcal{D}\) and
* \(\Delta\) is "dual" to the pointwise product between functions.
This leads to the following definition:
**Definition 4.1**.: Let \(\mathcal{H}\) be a bialgebra. We say that a linear map \(\mathcal{F}:\mathcal{H}\to\mathcal{D}\) is a pseudo bialgebra map, if
* \(\mathcal{F}:(\mathcal{H},\star,1)\to(\mathcal{D},\circ,\mathrm{Id})\) is an associative algebra homomorphism and
* For all \(x\in\mathcal{H}\) and \(\phi,\psi\in C^{\infty}(M)\), we have \[\mathcal{F}(\Delta x)(\phi\otimes\psi)=\mathcal{F}(x)(\phi\cdot\psi)\]
_Remark 4.2_.: Since \(\cdot\) is a commutative product on \(C^{\infty}(M)\), this gives an intuitive meaning to the requirement that our rough paths need to live in a cocommutative Hopf algebra.
### Pseudo bialgebra maps and Lie algebra maps
From the definition of a pseudo bialgebra map, one can easily show the following:
**Proposition 4.3**.: _Let \(p\in\mathcal{P}\subset\mathcal{H}\) be a primitive object. If \(\mathcal{F}\) is a pseudo bialgebra map, then \(\mathcal{F}(p)\) is a vector field._
Proof.: Let \(\mathcal{F}\) be a pseudo bialgebra map and \(p\in\mathcal{P}\). Then:
\[\mathcal{F}(p)(\phi\cdot\psi)=\mathcal{F}(p\otimes 1+1\otimes p)(\phi\otimes \psi)=\mathcal{F}(p)\phi\cdot\psi+\phi\cdot\mathcal{F}(p)\psi\]
holds for all \(\phi,\psi\in C^{\infty}(M)\). Thus, \(\mathcal{F}(p)\) fulfills the Leibniz rule and is a vector field.
Recall that \(\mathcal{P}\) equipped with the commutator \([x,y]=x\star y-y\star x\) forms a Lie algebra, and since \(\mathcal{F}\) is an algebra-homomorphism, we immediately get that \(\mathcal{F}|_{\mathcal{P}}:\mathcal{P}\to\mathcal{V}\) is a Lie algebra-morphism.
An interesting observation from Section 4.3 is that we only use \(\mathcal{F}|_{\mathcal{P}}\) to construct the solution flow to a rough differential equation. This raises the question of whether a Lie algebra map \(\mathcal{P}\to\mathcal{V}\) already generates a pseudo bialgebra map on all of \(\mathcal{H}\).
The short answer is no: it generates a pseudo bialgebra map on the universal enveloping algebra \(U(\mathcal{P})\). However, all relevant Hopf algebras for rough paths are cocommutative, so the theorem of Milnor-Moore gives us that \(\mathcal{H}\cong U(\mathcal{P})\). That implies that any Lie algebra map can be extended to a Hopf algebra map: Let \(\tilde{\mathcal{H}}\) be another Hopf algebra and let \(\mathcal{F}:\mathcal{P}(\mathcal{H})\to\mathcal{P}(\tilde{\mathcal{H}})\) be a Lie map. \(\mathcal{F}\) extends to an algebra map \(\mathcal{F}:\mathcal{H}\to\tilde{\mathcal{H}}\) by the universal property of \(\mathcal{U}(\mathcal{P}(\mathcal{H}))\). With a bit of work, one can see that it suffices that \(\mathcal{F}\) maps primitive elements into primitive elements to show that it is a Hopf algebra map (here we just use that \(\Delta,\epsilon\) are homomorphisms and \(S\) is an anti-homomorphism). As it turns out, this result extends to pseudo bialgebra maps:
**Theorem 4.4**.: _Let \(\mathcal{F}:\mathcal{P}\to\mathcal{V}\) be a Lie map and let \(\tilde{\mathcal{F}}:\mathcal{U}(\mathcal{P})\to\mathcal{D}\) be the algebra map generated by \(\mathcal{F}\). Then \(\tilde{\mathcal{F}}\) is a pseudo bialgebra map._
Proof.: We already know that \(\tilde{\mathcal{F}}\) is an algebra map, so it suffices to show that
\[\mathcal{F}(\Delta x)(\phi\otimes\psi)=\mathcal{F}(x)(\phi\cdot\psi)\,.\]
For a primitive element \(x\in\mathcal{P}\), this holds since \(\mathcal{F}(x)\) is a vector field and thus obeys the Leibniz rule. We use that every element \(w\in\mathcal{U}(\mathcal{P})\) can be written as a word \(w=(w_{1}\ldots w_{n})\) with letters being primitive elements \(w_{i}\in\mathcal{P}\), \(i=1,\ldots,n\). For a general word \(w\in\mathcal{U}(\mathcal{P})\), it holds that
\[\mathcal{F}(w)(\phi\cdot\psi) =\mathcal{F}(w_{1})\circ\cdots\circ\mathcal{F}(w_{|w|})(\phi\cdot\psi)\] \[=(\mathrm{Id}\otimes\mathcal{F}(w_{1})+\mathcal{F}(w_{1})\otimes \mathrm{Id})\circ\cdots\circ(\mathrm{Id}\otimes\mathcal{F}(w_{|w|})+ \mathcal{F}(w_{|w|})\otimes\mathrm{Id})(\phi\otimes\psi)\] \[=\mathcal{F}(\Delta w)(\phi\otimes\psi)\,,\]
where we use that \(\Delta w\) is just the deshuffle of \(w\) in \(\mathcal{U}(\mathcal{P})\).
### Constructing almost-flows from rough paths
The goal of this subsection is to show that any rough path \(\mathbb{X}\) together with a pseudo bialgebra map generates an almost-flow on \(M\), and thus a flow on \(M\) by Section 2. The only missing ingredient to solve
\[dY_{t}=V_{i}(Y_{t})d\mathbb{X}_{t}^{i}\,.\]
is the construction of pseudo bialgebra maps \(\mathcal{F}(V_{1},\ldots,V_{n})\) for the vector fields \(V_{1},\ldots,V_{n}\). This will be the topic of Section 5.
Let us start by fixing a connected, graded, commutative Hopf algebra \(\mathcal{H}\) such that its graded dual \(\mathcal{H}^{g}\) is a connected, graded, cocommutative Hopf algebra, and a rough path \(\mathbb{X}\) in \(\mathcal{H}^{N*}\). Let \(\alpha\in(0,1)\) be the Hölder regularity of \(\mathbb{X}\) and \(N\) be such that \(N\alpha\leq 1<(N+1)\alpha\). Further, let \(\mathcal{F}:\mathcal{H}^{g}\to\mathcal{D}\) be a pseudo bialgebra map. Note that we consider \(\mathcal{F}\) and \(\mathbb{X}\) fixed, and will not point out if constants explicitly depend on \(\mathcal{F}\) or \(\|\mathbb{X}\|\).
We construct the almost-flow \(\mu_{s,t}\) with the _log-ODE_ method, also used in [1] to construct rough flows. For more general information about the log-ODE method, we recommend [10]. \(\mathbb{L}_{s,t}:=\log_{N}(\mathbb{X}_{s,t})\in\mathcal{P}^{N}\subset\mathcal{P}\) is a primitive element of \(\mathcal{H}^{g}\) for all \((s,t)\in\Delta_{T}\), and thus \(\mathcal{F}(\mathbb{L}_{s,t})\in\mathcal{V}\) is a vector field. We can therefore consider the initial value problem
\[\begin{cases}dZ_{u}=\mathcal{F}(\mathbb{L}_{s,t})(Z_{u})\,du\\ Z_{0}=z\in M\,.\end{cases} \tag{4.1}\]
Since \(\mathcal{F}\) maps into smooth vector fields, this has a unique solution up to some explosion time \(T(z)\). We claim that if \(z\) is an inner point of \(M\) and \(|t-s|\) is small enough, then \(T(z)\geq 1\). In this case, we set \(\mu_{s,t}(z):=Z_{1}\). To see that the claim holds, let \(z\) be an inner point of \(M\) and fix some coordinate function \(\phi:M\supset O\to U\subset\mathbb{R}^{d}\) such that \(z\in O\). We denote by \(\mathcal{F}(\mathbb{L}_{s,t})^{\phi}\) the vector field over \(U\) given by \(x\mapsto\mathcal{F}(\mathbb{L}_{s,t})\phi(x)\). We further assume that \(U\subset K\subset\mathbb{R}^{d}\) is a subset of some compact set; otherwise, we restrict it to \(U\cap\bar{B}_{1}(\phi(z))\), where \(\bar{B}_{1}(\phi(z))\) is the closed ball of radius \(1\) around \(\phi(z)\). It follows that \(\left\|\mathcal{F}(\mathbb{L}_{s,t})^{\phi}\right\|_{U}\) is finite. Furthermore, we can use Lemma 3.15 together with \(\mathcal{F}(\mathbb{L}_{s,t})=\sum_{|\tau|\leq N}\langle\mathbb{L}_{s,t},\tau\rangle\mathcal{F}(\tau)\) to bound it with
\[\left\|\mathcal{F}(\mathbb{L}_{s,t})\right\|_{U} \leq\sum_{|\tau|\leq N}\left|\langle\mathbb{L}_{s,t},\tau\rangle \right|\left\|\mathcal{F}(\tau)\right\|_{U}\] \[\leq C\sup_{k=1,\ldots,N}\left\|X\right\|^{k}\sum_{|\tau|\leq N} \left\|\mathcal{F}(\tau)\right\|_{U}|t-s|^{\alpha}\]
for all \(|t-s|\leq 1\), where the \(\tau\) form a basis of \(\mathcal{H}^{N*}=\bigoplus_{n\leq N}\mathcal{H}^{(n)*}\). Thus, by choosing \(T\) small enough such that \(\left\|\mathcal{F}(\mathbb{L}_{s,t})\right\|_{U}\leq\min(1,\operatorname{dist}(z,U^{c}))\), (3.2) gives us that the explosion time is larger than or equal to \(1\). Since the \(z\)-dependence of \(T(z)\) only depends on \(\operatorname{dist}(z,U^{c})\), we further see that we can choose \(T\) continuously in \(z\). Thus, as long as \(M\) does not have a boundary, \((s,t,z)\mapsto\mu_{s,t}(z)\) exists on some admissible domain \(\operatorname{diag}_{T}\times M\subset O\subset\Delta_{T}\times M\).
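To illustrate the scheme in the simplest case: for \(N=1\), i.e. \(\alpha>1/2\), and in the tensor-algebra setting of Section 5.1, we have \(\mathbb{L}_{s,t}=\log_{1}(\mathbb{X}_{s,t})=X_{s,t}^{i}e_{i}\) and hence \(\mathcal{F}(\mathbb{L}_{s,t})=X_{s,t}^{i}V_{i}\), so that (4.1) simply follows the frozen vector field \(X_{s,t}^{i}V_{i}\) for unit time:
\[\mu_{s,t}(z)=\exp\big(X_{s,t}^{i}V_{i}\big)(z)\,,\]
where \(\exp(W)\) denotes the time-one flow of a vector field \(W\); this is the exponential Euler step familiar from Lie group integrators.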
We can now present the main result of this section:
**Theorem 4.5**.: _Let \(\mu:O\to M\) be the solution to (4.1) evaluated at time \(1\). Then \(\mu\) is an almost-flow on \(M\)._
While it is rather straightforward to show the Lipschitz and Hölder conditions of \(\mu\), the almost-flow property needs a bit more work. Our main strategy is to establish a Taylor formula for \(\mu_{s,t}z\) and to show that the Taylor approximation fulfills the almost-flow property. To get started, let us show a general result for smooth differential operators, which we need for the joint Lipschitz-almost-flow property:
**Lemma 4.6**.: _Let \(F\in\mathcal{D}\) be a smooth differential operator of order \(m\) and \(\phi\in C^{m+1}(M,\mathbb{R}^{d})\). If \(\phi\) is additionally a diffeomorphism, we have for all \(x,y\in K\) for some compact set \(K\subset M\):_
\[\left|F\phi(x)-F\phi(y)\right|\leq C\left|\phi(x)-\phi(y)\right|\,,\]
_where \(C\) only depends on \(F\), \(K\) and \(\phi\)._
Proof.: Without loss of generality, assume that \(x,y\) are in some coordinate chart \(\psi\), with \(F=\sum_{|w|\leq m}f^{w}\partial_{w}\) in the said chart. It follows that
\[F\phi(x)=\sum_{|w|\leq m}(f^{w}\partial_{w}\phi)(\phi^{-1}\circ\phi(x))\,.\]
Since \((f^{w}\partial_{w}\phi)\circ\phi^{-1}\) is continuously differentiable for each word \(|w|\leq m\), we get
\[\left|F\phi(x)-F\phi(y)\right|\leq C\left\|\phi\right\|_{C^{m+1}}\left\|\phi^{ -1}\right\|_{C^{1}}\left|\phi(x)-\phi(y)\right|\,,\]
where \(\left\|\phi\right\|_{C^{m+1}},\left\|\phi^{-1}\right\|_{C^{1}}\) are the respective norms with respect to the coordinate chart \(\psi\).
Any map \(\nu:O\to M\) has an associated operator \(\nu_{*}\) mapping any function \(\phi\in C^{\infty}(M)\) into a function \(\nu_{*}\phi:O\to\mathbb{R}\) via \(\nu_{s,t*}\phi(x)=\phi(\nu_{s,t}x)\). We call \(\nu_{*}\) the push-forward of \(\nu\). Note that one can easily recover \(\nu\) from its push-forward by applying \(\nu_{*}\) to some coordinate function \(\phi:M\supset V\to U\subset\mathbb{R}^{d}\).
Furthermore, the map \(\nu\mapsto\nu_{*}\) is an anti-homomorphism in the following sense: For two maps \(\nu,\eta:M\to M\), we have \((\nu\circ\eta)_{*}\phi=(\eta_{*}\circ\nu_{*})\phi\).
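This is immediate from unwinding the definitions: for \(\phi\in C^{\infty}(M)\) and \(x\in M\),
\[(\nu\circ\eta)_{*}\phi(x)=\phi(\nu(\eta(x)))=(\nu_{*}\phi)(\eta(x))=(\eta_{*}\circ\nu_{*})\phi(x)\,.\]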
With this notation, we can show that the push-forward of the map \(\mu\) from Theorem 4.5 has the following Taylor decomposition:
**Lemma 4.7** (Taylor decomposition).: _Let \(\phi\in C^{\infty}(M)\). Let \(K\subset M\) be a compact set and \(|t-s|\leq T(K)\), such that \((s,t,x)\in O\) for all \(x\in K\). Then there is a constant \(C\), only depending on \(K\), \(\phi\) and \(\|\mathbb{X}\|\), such that_
\[|(\mu_{s,t*}-\mathcal{F}(\mathbb{X}_{s,t}))\phi(x)|\leq C\,|t-s|^{(N+1)\alpha}. \tag{4.2}\]
_If additionally, \(\phi:\tilde{U}\to U\) is a diffeomorphism from some open set \(\tilde{U}\subset M\) to \(U\subset\mathbb{R}^{d}\), we have that for all \(x,y\in K\cap\tilde{U}\):_
\[|(\mu_{s,t*}-\mathcal{F}(\mathbb{X}_{s,t}))(\phi(x)-\phi(y))|\leq\tilde{C}\, |t-s|^{(N+1)\alpha}\,|\phi(x)-\phi(y)|. \tag{4.3}\]
_for another constant \(\tilde{C}\) only depending on \(K\), \(\phi\) and \(\|\mathbb{X}\|\)._
Proof.: We start by showing (4.2), using the same proof as in [1]: We write \(\mathbb{L}_{s,t}=\sum_{k=1}^{N}\mathbb{L}_{s,t}^{k}\), where \(\mathbb{L}_{s,t}^{k}\) is the projection of \(\mathbb{L}_{s,t}\) onto \(\mathcal{H}^{(k)}\). Note that by Lemma 3.5, \(\mathbb{L}_{s,t}^{k}\) is a primitive element, which implies that \(\mathcal{F}(\mathbb{L}_{s,t}^{k})\) is a vector field. Since \(\phi(Z_{u})\) solves the initial value problem
\[\begin{cases}\frac{\partial}{\partial u}\phi(Z_{u})=\mathcal{F}(\mathbb{L}_{ s,t})\phi(Z_{u})\\ \phi(Z_{0})=\phi(x)\,,\end{cases}\]
it follows that we get the integral identity
\[\phi(Z_{u})=\phi(x)+\int_{0}^{u}\mathcal{F}(\mathbb{L}_{s,t})\phi(Z_{r})dr\,.\]
By iterating that identity and decomposing \(\mathbb{L}_{s,t}\), we get
\[\phi(\mu_{s,t}x) =\sum_{k=0}^{N}\frac{\mathcal{F}(\mathbb{L}_{s,t})^{k}}{k!}\phi( x)+\underbrace{\int_{0}^{1}\cdots\int_{0}^{r_{N-1}}\mathcal{F}(\mathbb{L}_{s,t} )^{N+1}\phi(Z_{r_{N}})dr_{N}\ldots dr_{1}}_{R_{1}}\] \[=\mathcal{F}(\exp_{N}(\mathbb{L}_{s,t}))\phi(x)+\underbrace{\sum_ {m=2}^{N}\sum_{k_{1}+\cdots+k_{m}>N}\prod_{j=1}^{m}\frac{\mathcal{F}( \mathbb{L}_{s,t}^{k_{j}})}{k_{j}!}\phi(x)}_{R_{2}}+R_{1}\] \[=\mathcal{F}(\mathbb{X}_{s,t})\phi(x)+R_{1}+R_{2}\,.\]
It remains to show that \(|R_{i}|\lesssim|t-s|^{(N+1)\alpha}\) holds for \(i=1,2\). To do so, let \((p_{1},\ldots,p_{M_{k}})\) be a basis of the primitive elements of order \(k\) for each \(k=1,\ldots,N\). It then holds that
\[\begin{split}\left|\mathcal{F}(\mathbb{L}_{s,t}^{k})\phi(x)\right| &\leq C\,\|X\|^{k}\sup_{i=1,\ldots M_{k}}\left|\mathcal{F}(p_{i}) \phi(x)\right|\left|t-s\right|^{k\alpha}\\ &\leq C\,\|X\|^{k}\,|t-s|^{k\alpha}\,\end{split} \tag{4.4}\]
where we allow the constant \(C\) to change between the lines and to depend on \(\mathcal{F}\) and \(\phi\). By iterating this, we can calculate
\[\left|\prod_{j=1}^{m}\frac{\mathcal{F}(\mathbb{L}_{s,t}^{k_{j}})}{k_{j}!}\phi( x)\right|\leq C\,\|X\|^{k_{1}+\cdots+k_{m}}\,|t-s|^{\alpha(k_{1}+\cdots+k_{m})}. \tag{4.5}\]
This shows the desired inequality for \(R_{2}\). The one from \(R_{1}\) follows analogously by decomposing \(\mathbb{L}_{s,t}=\sum_{k=1}^{N}\mathbb{L}_{s,t}^{k}\).
The proof of (4.3) is similar. Note that by Lemma 4.6, (4.4) becomes
\[\left|\mathcal{F}(\mathbb{L}_{s,t}^{k})(\phi(x)-\phi(y))\right|\leq C\left\|X \right\|^{k}\left|t-s\right|^{k\alpha}\left|\phi(x)-\phi(y)\right|\,.\]
Plugging this into (4.5) gives the desired result for \(R_{2}\). \(R_{1}\) follows as before.
We now know that we can approximate \(\mu_{s,t*}\) with its Taylor decomposition \(\mathcal{F}(\mathbb{X}_{s,t})\). To show that \(\mu_{s,t}\) is an almost-flow, it is easier to show that \(\mathcal{F}(\mathbb{X}_{s,t})\) has the almost-flow property and use the above lemma to extend this result to \(\mu_{s,t}\). The next lemma shows the almost-flow property of \(\mathcal{F}(\mathbb{X}_{s,t})\):
**Lemma 4.8**.: _Let \(\phi\in C^{\infty}(M)\) and \(x\in M\). Let \(K\subset M\) be a compact set and \(\left|t-s\right|\leq T(K)\). Further let \(s\leq u\leq t\) and assume that \(\mu_{s,u}x\in K\). It then holds that_
\[\left|(\mathcal{F}(\mathbb{X}_{s,u})\circ\mathcal{F}(\mathbb{X}_{u,t})- \mathcal{F}(\mathbb{X}_{s,t}))\phi(x)\right|\leq C\left|t-s\right|^{(N+1) \alpha}\,. \tag{4.6}\]
_for some constant \(C\) depending on \(K,\phi,\mathcal{F}\) and \(\left\|\mathbb{X}\right\|\). If \(\phi\) is additionally a diffeomorphism mapping some open set \(\tilde{U}\subset M\) into \(U\subset\mathbb{R}^{d}\), we have that_
\[\left|(\mathcal{F}(\mathbb{X}_{s,u})\circ\mathcal{F}(\mathbb{X}_{u,t})- \mathcal{F}(\mathbb{X}_{s,t}))(\phi(x)-\phi(y))\right|\leq C(K,\phi)\left|t-s \right|^{(N+1)\alpha}\left|\phi(x)-\phi(y)\right|\,. \tag{4.7}\]
Proof.: We again begin by proving (4.6) first. Let \(\star_{N}\) be the truncated product in \(\mathcal{H}^{N*}\). It holds that
\[\mathbb{X}_{s,u}\star\mathbb{X}_{u,t} =\mathbb{X}_{s,u}\star_{N}\mathbb{X}_{u,t}+\underbrace{\sum_{k+l> N}\mathbb{X}_{s,u}^{k}\star\mathbb{X}_{u,t}^{l}}_{R_{N}}\] \[=\mathbb{X}_{s,t}+R_{N}\,.\]
By choosing a basis \(\{h_{m}^{k}\ |\ m=1\ldots,M_{k}\}\) for each \(\mathcal{H}^{(k)}\), we get that
\[R_{N}=\sum_{h_{m}^{k},h_{q}^{p},k+p>N}\left\langle\mathbb{X}_{s,u},h_{m}^{k} \right\rangle\left\langle\mathbb{X}_{u,t},h_{q}^{p}\right\rangle h_{m}^{k} \star h_{q}^{p}\,.\]
It follows that
\[\left|\mathcal{F}(R_{N})\phi(x)\right| \leq\sum_{h_{m}^{k},h_{q}^{p},k+p>N}\left|\left\langle\mathbb{X} _{s,u},h_{m}^{k}\right\rangle\left\langle\mathbb{X}_{u,t},h_{q}^{p}\right\rangle \right|\left|\mathcal{F}(h_{m}^{k}\star h_{q}^{p})\phi(x)\right|\] \[\leq C\sum_{N<k+p\leq 2N}\left|u-s\right|^{k\alpha}\left|t-u \right|^{p\alpha} \tag{4.8}\] \[\leq C\left|t-s\right|^{(N+1)\alpha}\,,\]
where we again allow \(C\) to change between lines. Thus, we use the fact that \(\mathcal{F}\) is a pseudo bialgebra map to calculate
\[\left|\mathcal{F}(\mathbb{X}_{s,u})\circ\mathcal{F}(\mathbb{X}_{u,t})\phi(x)-\mathcal{F}(\mathbb{X}_{s,t})\phi(x))\right| =\left|\mathcal{F}(\mathbb{X}_{s,u}\star\mathbb{X}_{u,t})\phi(x) -\mathcal{F}(\mathbb{X}_{s,t})\phi(x)\right|\] \[=\left|\mathcal{F}(R_{N})\phi(x)\right|\leq C\left|t-s\right|^{(N +1)\alpha}\,.\]
(4.7) follows analogously, by using Lemma 4.6 in (4.8) to get \(\left|\mathcal{F}(h_{m}^{k}\star h_{q}^{p})(\phi(x)-\phi(y))\right|\leq C \left|\phi(x)-\phi(y)\right|\).
With these tools at hand, we can now prove our main result:
Proof of Theorem 4.5.: Fix a smooth coordinate function \(\phi:M\supset\tilde{U}\to U\subset\mathbb{R}^{d}\) as well as a compact set \(K\subset U\). We need to show the Lipschitz and Hölder continuity and the almost-flow property of \(\mu\). Let us start with the Lipschitz property: We set \(\mathcal{F}(\mathbb{L}_{s,t})^{\phi}\) to be the vector field on \(U\) given by \(\mathcal{F}(\mathbb{L}_{s,t})\phi\). We get the integral identity
\[Z_{u}=Z_{0}+\int_{0}^{u}\mathcal{F}(\mathbb{L}_{s,t})^{\phi}(Z_{r})dr\,.\]
Thus, it follows that if \(Z\) has starting value \(x\in K\) and \(Z^{\prime}\) has starting value \(y\in K\):
\[\left|Z_{u}-Z^{\prime}_{u}\right|\leq\left|x-y\right|+\int_{0}^{u}\left\|\mathcal{ F}(\mathbb{L}_{s,t})^{\phi}\right\|_{Lip,U}\left|Z_{r}-Z^{\prime}_{r}\right|dr\,.\]
So by Gronwall's inequality, we conclude
\[\left|Z_{1}-Z^{\prime}_{1}\right|\leq e^{\left\|\mathcal{F}(\mathbb{L}_{s,t})^ {\phi}\right\|_{Lip,U}}\left|x-y\right|\,.\]
It follows that \(\mu\) expressed in the coordinate chart \(\phi\), given by \(\mu_{s,t}^{\phi}:=\phi\circ\mu_{s,t}\circ\phi^{-1}\), is Lipschitz with Lipschitz constant \(L(K,s,t)=e^{\left\|\mathcal{F}(\mathbb{L}_{s,t})^{\phi}\right\|_{Lip,U}}\), which fulfills \(L(K,s,s)=1=\lim_{\left|t-s\right|\to 0}L(K,s,t)\). For the Hölder continuity, it suffices to recall that \(\left\|\mathcal{F}(\mathbb{L}_{s,t})^{\phi}\right\|_{\infty,U}\leq C\left|t- s\right|^{\alpha}\) to calculate
\[\left|\mu_{s,t}^{\phi}x-x\right|=\left|\int_{0}^{1}\mathcal{F}(\mathbb{L}_{s, t})^{\phi}(Z_{r})dr\right|\leq C\left|t-s\right|^{\alpha}\,.\]
For the almost-flow property, let \(m=\phi^{-1}(x)\in M\). We write
\[\mu_{u,t}^{\phi}\circ\mu_{s,u}^{\phi}x =\mathcal{F}(\mathbb{X}_{u,t})\phi(\mu_{s,u}m)+\epsilon_{u,t}(\mu _{s,u}m)\] \[=\mathcal{F}(\mathbb{X}_{s,u})\circ\mathcal{F}(\mathbb{X}_{u,t}) \phi(m)+\epsilon_{u,t}(\mu_{s,u}m)+\epsilon_{s,u}(m)\]
by the Taylor decomposition, where \(\left|\epsilon_{s,u}(m)\right|\leq C\left|s-u\right|^{1+\epsilon}\) and \(\left|\epsilon_{u,t}(\mu_{s,u}m)\right|\leq C\left|u-t\right|^{1+\epsilon}\) holds for \(\epsilon=(N+1)\alpha-1\). Lemma 4.8 together with the Taylor decomposition of \(\mu_{s,t}^{\phi}\) gives us
\[\left|\mu_{u,t}^{\phi}\circ\mu_{s,u}^{\phi}x-\mu_{s,t}^{\phi}x\right|\leq C \left|t-s\right|^{1+\epsilon}\,.\]
For the joint Lipschitz-almost-flow property, we use the second part of the Taylor decomposition to calculate
\[\mu_{u,t}^{\phi}\circ\mu_{s,u}^{\phi}(x-y)=\mathcal{F}(\mathbb{X}_{s,u}) \circ\mathcal{F}(\mathbb{X}_{u,t})(\phi(m)-\phi(n))+\epsilon_{u,t}(\mu_{s,u}m,\mu_{s,u}n)+\epsilon_{s,u}(m,n)\,,\]
where \(n=\phi^{-1}(y)\). We get that
\[\left|\epsilon_{u,t}(\mu_{s,u}m,\mu_{s,u}n)\right|\leq C\left|t-u\right|^{1+ \epsilon}\left|\mu_{s,u}^{\phi}x-\mu_{s,u}^{\phi}y\right|\leq C\left|t-u\right| ^{1+\epsilon}\left|x-y\right|\,,\]
and \(\left|\epsilon_{s,u}(m,n)\right|\leq C\left|u-s\right|^{1+\epsilon}\left|x-y\right|\). The joint Lipschitz-almost-flow property follows again from Lemma 4.8.
Thanks to Proposition 2.9, we see that \(\mu\) generates a flow on \(M\), which we denote by \(\eta\). Our Taylor decomposition of \(\mu\) immediately gives us a Davie's formula for \(\eta\):
**Corollary 4.9** (Davie's formula).: _Let \(K\subset M\) be compact and \(\phi\in C^{\infty}(M)\). It holds that for each \(m\in K\) and \(\left|t-s\right|\leq T(K)\):_
\[\left|(\eta_{s,t*}-\mathcal{F}(\mathbb{X}_{s,t}))\phi(m)\right|\leq C\left|t- s\right|^{(N+1)\alpha}\,, \tag{4.9}\]
_where \(C\) depends on \(K,\phi,\mathcal{F}\) and \(\left\|\mathbb{X}\right\|\). Furthermore, \(\eta\) is the unique \(\alpha\)-Hölder continuous flow in \(M\) (up to time \(\left|s-t\right|\leq T(K)\)) with this property._
Proof.: The uniqueness immediately follows from the fact that every other \(\alpha\)-Hölder continuous flow \(\tilde{\eta}\) in \(M\) fulfilling (4.9) would be a flow with
\[\left|(\mu_{s,t*}-\tilde{\eta}_{s,t*})\phi(m)\right|\leq C\left|t-s\right|^{(N +1)\alpha}\,,\]
which contradicts the sewing lemma by Remark 2.8. To see that the above inequality holds, observe that
\[\left|(\eta_{s,t*}-\mathcal{F}(\mathbb{X}_{s,t}))\phi(m)\right| \leq\left|(\eta_{s,t*}-\mu_{s,t*})\phi(m)\right|+\left|(\mu_{s,t *}-\mathcal{F}(\mathbb{X}_{s,t}))\phi(m)\right|\] \[\leq C\left|t-s\right|^{(N+1)\alpha}\,.\]
## 5. Constructing elementary differentials
The goal of this section is to provide an overview of the classical ways to construct elementary differentials and show that they indeed generate pseudo bialgebra maps. We also present a concrete form of \(\mathcal{F}\) on the Hopf algebras over trees \(\mathcal{H}_{GL}\) and \(\mathcal{H}^{g}_{MKW}\). This will especially allow us to generalize the construction of \(\mathcal{F}\) on \(\mathcal{H}^{g}_{MKW}\) to any manifold equipped with a connection, whereas the standard construction requires a flat connection with constant torsion.
The main idea behind the constructions on \(T(\mathbb{R}^{n})\), \(\mathcal{H}_{GL}\) and \(\mathcal{H}^{g}_{MKW}\) is that all of these algebras (or the Lie algebras of their primitive elements) are free in some sense. More precisely,
* The tensor algebra \(T(V)\) over \(V=\operatorname{span}\{e_{1},\ldots,e_{n}\}\) is the free associative algebra generated by linearly independent \(e_{1},\ldots,e_{n}\).
* The primitive elements of the Grossman-Larson algebra form the free pre-Lie algebra.
* The primitive elements of \(\mathcal{H}^{g}_{MKW}\) form the free post-Lie algebra.
Thus, the general strategy to generate \(\mathcal{F}\) is as follows: For the generator set \(\{e_{1},\ldots,e_{n}\}\) of \(T(\mathbb{R}^{n})\), respectively \(\{\bullet_{1},\ldots,\bullet_{n}\}\) in the branched case, we set \(\mathcal{F}(e_{i})=V_{i}=\mathcal{F}(\bullet_{i})\), where \(V_{i}\) is given by (1.1). Then, the universal properties of the respective algebras will generate maps \(\mathcal{F}\) on \(T(\mathbb{R}^{n})\) as well as \(\mathcal{P}(\mathcal{H}_{GL})\) and \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\).
### Tensor algebra
The elementary differentials of the tensor algebra are well known, but we still want to spend a page to show that it does fit into our theory: Considering the RDE (1.1) for a smooth path \(X\) lifted to a rough path \(\mathbb{X}\), a simple Taylor expansion of \(\phi(Y_{t})\) for some \(\phi\in C^{\infty}(M)\) suggests ([10], page 128):
\[\phi(Y_{t})\approx\sum_{|w|\leq N}V_{w_{1}}\circ\cdots\circ V_{w_{n}}\phi(Y_{ s})\mathbb{X}^{w}_{s,t}\,,\]
implying that we should expect \(\mathcal{F}(w)=V_{w_{1}}\circ\cdots\circ V_{w_{n}}\) for any word \(w=(w_{1},\ldots,w_{n})\). This is precisely the map one gets from the freeness property of \(T(\mathbb{R}^{n})\): \(T(\mathbb{R}^{n})\) is the free associative algebra ([11], [12]), meaning that it has the following universal property: Every linear map \(F:\mathbb{R}^{n}\to\mathcal{A}\) for some associative algebra \((\mathcal{A},\circ,\mathbb{1})\) uniquely extends to an associative algebra map \(\mathcal{F}:T(\mathbb{R}^{n})\to\mathcal{A}\). Note that \((\mathcal{D},\circ,\operatorname{Id})\) is an associative algebra, and we assumed that \(\mathcal{F}(e_{i})=V_{i}\) for \(i=1,\ldots,n\), which gives us a linear map \(\mathcal{F}:\mathbb{R}^{n}\to\mathcal{V}\subset\mathcal{D}\). Thus, we can extend it to an associative algebra map \(\mathcal{F}:T(\mathbb{R}^{n})\to\mathcal{D}\). Furthermore, since every word \(w=(w_{1}\ldots w_{n})=w_{1}\cdot w_{2}\cdot\ldots\cdot w_{n}\), it immediately follows that
\[\mathcal{F}(w)=\mathcal{F}(w_{1})\circ\cdots\circ\mathcal{F}(w_{n})=V_{w_{1} }\circ\cdots\circ V_{w_{n}}\]
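For instance, for a two-letter word \(w=(ij)\) we get, in coordinates,
\[\mathcal{F}((ij))\phi=V_{i}\circ V_{j}\phi=V_{i}^{\alpha}(\partial_{\alpha}V_{j}^{\beta})\partial_{\beta}\phi+V_{i}^{\alpha}V_{j}^{\beta}\partial_{\alpha}\partial_{\beta}\phi\,,\]
a genuinely second-order differential operator, so \(\mathcal{F}\) leaves \(\mathcal{V}\) already at length two, as expected for a map into \(\mathcal{D}\).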
**Theorem 5.1**.: \(\mathcal{F}\) _is a pseudo bialgebra map._
Proof.: By its definition, \(\mathcal{F}:T(\mathbb{R}^{n})\to\mathcal{D}\) is an algebra map. Thus, we only need to check that
\[\mathcal{F}(\Delta x)(\phi\otimes\psi)=\mathcal{F}(x)(\phi\cdot\psi)\,.\]
Note that for letters \(i=1,\ldots,n\), the Leibniz rule together with the fact that \(i\) is primitive gives us
\[\mathcal{F}(i)(\phi\cdot\psi) =V_{i}(\phi\cdot\psi)\] \[=(V_{i}\otimes\operatorname{Id}+\operatorname{Id}\otimes V_{i})( \phi\otimes\psi)\] \[=\mathcal{F}(\Delta i)(\phi\otimes\psi)\,.\]
For general words \(x=(x_{1}\ldots x_{m})\), we set \(\bar{x}=(x_{2}\ldots x_{m})\) and inductively conclude that
\[\mathcal{F}(\Delta x)(\phi\otimes\psi) =\mathcal{F}((x_{1}\otimes 1+1\otimes x_{1})\Delta\bar{x})(\phi \otimes\psi)\] \[=\sum_{(\bar{x})}\mathcal{F}(x_{1}\bar{x}^{(1)}\otimes\bar{x}^{( 2)}+\bar{x}^{(1)}\otimes x_{1}\bar{x}^{(2)})(\phi\otimes\psi)\] \[=\sum_{(\bar{x})}\left[\mathcal{F}(x_{1})\circ\mathcal{F}(\bar{x }^{(1)})\phi\cdot\mathcal{F}(\bar{x}^{(2)})\psi+\mathcal{F}(\bar{x}^{(1)}) \phi\cdot\mathcal{F}(x_{1})\circ\mathcal{F}(\bar{x}^{(2)})\psi\right]\] \[=\mathcal{F}(x_{1})\left[\mathcal{F}(\Delta\bar{x})(\phi\otimes \psi)\right]\] \[=\mathcal{F}(x_{1})\circ\mathcal{F}(\bar{x})(\phi\cdot\psi)\] \[=\mathcal{F}(x)(\phi\cdot\psi)\,.\]
_Remark 5.2_.: The set of primitive elements of \(T(V)\) is given by the free Lie algebra of \(V\) ([10],[11]). Thus, one can also construct \(\mathcal{F}\) as the unique Lie algebra map mapping \(\mathcal{P}\to\mathcal{V}\), extending \(\mathcal{F}(i)=V_{i}\). One can easily check that the pseudo bialgebra map generated by this Lie map is the same as we described above.
### Grossman-Larson algebra
Before we start with the proper construction of \(\mathcal{F}\) over \(\mathcal{H}_{GL}\), let us recall the elementary differentials for the Grossman-Larson algebra: Taking again a smooth path \(X\) in (1.1) and Taylor expanding \(\phi(Y)\), we see that ([11], calculation (1.6))
\[\phi(Y_{t})\approx\phi(Y_{s})+V_{i}\phi(Y_{s})X_{s,t}^{i}+\frac{1}{2}V_{i}^{ \alpha}V_{j}^{\beta}\partial_{\alpha,\beta}\phi(Y_{s})X_{s,t}^{i}X_{s,t}^{j}+V _{i}^{\alpha}\partial_{\alpha}V_{j}^{\beta}\partial_{\beta}\phi(Y_{s})\int_{s} ^{t}X_{s,r}^{i}dX_{r}^{j}+\ldots\]
for any smooth function \(\phi\). If we compare that to our expected series \(\sum_{|\tau|\leq N}\frac{1}{sg(\tau)}\mathcal{F}(\tau)\phi(Y_{s})\mathbb{X}_{s,t}^{\tau}\) (recall that \(\tau\in\mathcal{H}_{GL}\) is not the dual element of \(\tau\in\mathcal{H}_{CK}\), but is connected to it via \(\langle\tau,\tau\rangle=sg(\tau)\); thus, we need to add a factor \(\frac{1}{sg(\tau)}\) to the formula), we see a pattern arise for \(\mathcal{F}\): For \(n\) vector fields \(U_{i}^{\alpha}\partial_{\alpha}\), \(i=1,\ldots,n\), we say that we apply all \(U_{i}\) to \(\phi\) by applying the differential operator \(U_{1}^{\alpha_{1}}\ldots U_{n}^{\alpha_{n}}\partial_{\alpha_{1},\ldots,\alpha_{n}}\) to \(\phi\). For a vector field \(U=U^{\beta}\partial_{\beta}\), we say that \(U_{1},\ldots,U_{n}\) applied to \(U\) is the vector field \((U_{1}^{\alpha_{1}}\ldots U_{n}^{\alpha_{n}})\partial_{\alpha_{1},\ldots,\alpha_{n}}U^{\beta}\partial_{\beta}\). Then for any tree \(\tau\), \(\mathcal{F}(\tau)\) is the vector field one gets by setting \(\mathcal{F}(\bullet_{i})=V_{i}^{\alpha}\partial_{\alpha}\) and inductively applying all children to their parent. For example,
\[\mathcal{F}([[\bullet_{4}]_{3},\bullet_{2}]_{1})=(V_{4}^{\alpha}\partial_{\alpha }V_{3}^{\beta})V_{2}^{\gamma}\partial_{\beta\gamma}V_{1}^{\delta}\partial_{ \delta}\,.\]
Finally, one gets \(\mathcal{F}(\tau_{1}\ldots\tau_{n})\phi\) for any forest \(\tau_{1}\ldots\tau_{n}\) by applying all vector fields \(\mathcal{F}(\tau_{i}),i=1\ldots,n\) to \(\phi\).
If our vector fields do not live in flat space but on the manifold \(M\), we do not have partial derivatives but only covariant derivatives. Replacing the partial derivatives with covariant derivatives gives us the following recursive formula for \(\mathcal{F}\): As before, we set \(\mathcal{F}(\bullet_{i})=V_{i}\). Inductively, we then assign to the tree \([\tau_{1},\ldots,\tau_{n}]_{i}\) the \(n\)-th covariant derivative of \(V_{i}\) in the directions \(\mathcal{F}(\tau_{1}),\ldots,\mathcal{F}(\tau_{n})\):
\[\mathcal{F}([\tau_{1},\ldots,\tau_{n}]_{i})=\nabla^{n}V_{i}(\mathcal{F}(\tau_{ 1}),\ldots,\mathcal{F}(\tau_{n}))\,.\]
Finally, we set \(\mathcal{F}(\tau_{1},\ldots,\tau_{n})\phi=\nabla^{n}\phi(\mathcal{F}(\tau_{1}),\ldots,\mathcal{F}(\tau_{n}))\).
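Unwinding this recursion for the two smallest non-trivial cases gives, for instance,
\[\mathcal{F}([\bullet_{j}]_{i})=\nabla_{V_{j}}V_{i}\qquad\text{and}\qquad\mathcal{F}(\bullet_{i}\bullet_{j})\phi=\nabla^{2}\phi(V_{i},V_{j})\,;\]
the second expression already hints at the well-definedness issue addressed in the following remark.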
_Remark 5.3_.: Note that \(\mathcal{F}\) is in general not well-defined (see [11], Remark 4.35): In general, \(\nabla^{2}\phi(V_{1},V_{2})\neq\nabla^{2}\phi(V_{2},V_{1})\). However, the Grossman-Larson algebra does not differentiate between the order of trees in a forest or the order of children of a vertex in a tree. Thus, we will need an additional assumption to construct \(\mathcal{F}\), namely that \(\nabla\) is flat and torsion-free. In this case, \(\nabla^{n}\phi(V_{1},\ldots,V_{n})\) does not depend on the order of \(V_{1},\ldots,V_{n}\). We discuss this map in more detail in Section 5.4.
In the flat, torsion-free case it actually suffices to require
* \(\mathcal{F}(\bullet_{i})=V_{i}\) for \(i=1,\ldots,n\) and
* \(\mathcal{F}(\tau\curvearrowright\sigma)=\nabla_{\mathcal{F}(\tau)}\mathcal{F}(\sigma)\) for all trees \(\tau,\sigma\in\mathcal{U}\mathcal{T}\)
to construct \(\mathcal{F}\) on all trees (see Thm. 5.10), where \(\curvearrowright\) is the grafting product defined in Section 3.4.2. This approach has a deep connection to the algebraic structure of \(\operatorname{span}(\mathcal{U}\mathcal{T})\subset\mathcal{H}_{GL}\): As shown in, for example, [1] or [10], \((\operatorname{span}(\mathcal{U}\mathcal{T}),\curvearrowright)\) is the _free pre-Lie algebra_ generated by \(\{\bullet_{1},\dots,\bullet_{n}\}\). Pre-Lie algebras are an important concept for the analysis of Butcher series [11], and are defined as follows:
**Definition 5.4**.: A pre-Lie algebra is a vector space \(\mathcal{A}\) equipped with a (not necessarily associative) product \(\triangleright\), which fulfills the pre-Lie identity:
\[(x\triangleright y)\triangleright z-x\triangleright(y\triangleright z)=(y \triangleright x)\triangleright z-y\triangleright(x\triangleright z)\]
for all \(x,y,z\in\mathcal{A}\).
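A minimal sketch of the example most relevant to us: on \(\mathbb{R}^{d}\) with the flat connection, \(U\triangleright V=\nabla_{U}V=U^{\alpha}(\partial_{\alpha}V^{\beta})\partial_{\beta}\), and the associator \(a_{\triangleright}(U,V,W):=U\triangleright(V\triangleright W)-(U\triangleright V)\triangleright W\) computes to
\[a_{\triangleright}(U,V,W)=U^{\alpha}V^{\gamma}(\partial_{\alpha}\partial_{\gamma}W^{\beta})\partial_{\beta}\,,\]
which is symmetric in \(U\) and \(V\) by Schwarz's theorem, so the pre-Lie identity holds.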
Pre-Lie algebras are _Lie admissible_, meaning that the commutator \([x,y]=x\triangleright y-y\triangleright x\) forms a Lie bracket on \(\mathcal{A}\). As mentioned above, the free pre-Lie algebra generated by the set \(\{\bullet_{1},\dots,\bullet_{n}\}\) is well known to be the span of (unordered) trees \(\operatorname{span}(\mathcal{U}\mathcal{T})\) equipped with the grafting product
\[\tau\triangleright\sigma=\tau\curvearrowright\sigma\,,\]
where \(\curvearrowright\) is defined in Section 3.4.2. Recall that \(\operatorname{span}(\mathcal{U}\mathcal{T})=\mathcal{P}(\mathcal{H}_{GL})\), which immediately gives us the free structure of the primitives of \(\mathcal{H}_{GL}\).
We want to connect \(\curvearrowright\) with the non-associative product \(V\triangleright U=\nabla_{V}U\) on \(\mathcal{V}\). Assume for the moment that \((\mathcal{V},\triangleright)\) forms a pre-Lie algebra, such that the classical Lie bracket on \(\mathcal{V}\) is equal to the commutator \([U,V]=U\triangleright V-V\triangleright U\). By assumption, we have a linear map \(\mathcal{F}:\operatorname{span}\{\bullet_{1},\dots,\bullet_{n}\}\to\mathcal{V}\), \(\mathcal{F}(\bullet_{i})=V_{i}\). The free property of \(\mathcal{P}(\mathcal{H}_{GL})\) then automatically extends this to a unique pre-Lie map \(\mathcal{F}:\mathcal{P}(\mathcal{H}_{GL})\to\mathcal{V}\). Note that for all trees \(\tau,\sigma\in\mathcal{U}\mathcal{T}\), we have
\[\tau\curvearrowright\sigma=\tau\star\sigma-\tau\sigma\,.\]
Since \(\tau\sigma=\sigma\tau\), we get that \([\tau,\sigma]=\tau\star\sigma-\sigma\star\tau=\tau\curvearrowright\sigma-\sigma\curvearrowright\tau\). Thus, the pre-Lie property of \(\mathcal{F}\) immediately gives us
\[\mathcal{F}([\tau,\sigma])=\mathcal{F}(\tau)\triangleright\mathcal{F}(\sigma)- \mathcal{F}(\sigma)\triangleright\mathcal{F}(\tau)=[\mathcal{F}(\tau),\mathcal{ F}(\sigma)]\,,\]
turning \(\mathcal{F}\) into a Lie algebra map. We can summarize this result in the following proposition:
**Proposition 5.5**.: _Let \(\nabla\) be flat and torsion-free. Then the pre-Lie map \(\mathcal{F}:\mathcal{P}(\mathcal{H}_{GL})\to\mathcal{V}\) generated by \(\mathcal{F}(\bullet_{i})=V_{i}\), \(i=1,\dots,n\), is a Lie algebra map. Thus, it generates a pseudo bialgebra map on \(\mathcal{H}_{GL}\)._
Proof.: The only thing left to show is that \((\mathcal{V},\triangleright)\) is a pre-Lie algebra with \([V,U]=V\triangleright U-U\triangleright V\) for all \(U,V\in\mathcal{V}\). The fact that \((\mathcal{V},\triangleright)\) is pre-Lie if and only if \((M,\nabla)\) is flat and torsion-free has been proven in Thm 2.23 in [11]. Writing down the torsion tensor gives us for all \(V,U\in\mathcal{V}\)
\[0=T(V,U)=V\triangleright U-U\triangleright V-[V,U]\,,\]
which gives us the second property.
### The Munthe-Kaas Wright algebra
Note that the map \(\mathcal{F}\), given by \(\mathcal{F}(\bullet_{i})=V_{i},\mathcal{F}([\tau_{1},\dots,\tau_{n}]_{i})= \nabla^{n}V_{i}(\mathcal{F}(\tau_{1}),\dots,\mathcal{F}(\tau_{n}))\), is well-defined as a map \(\mathcal{F}:\mathcal{H}^{g}_{MKW}\to\mathcal{D}\), as the forests are now ordered. This map naturally arises on \(\mathcal{H}^{g}_{MKW}\) if we loosen the assumptions on \((M,\nabla)\), which gives rise to the structure of _post-Lie algebras_. Post-Lie algebras were independently introduced in [11] and [12] (for an introduction, we recommend [10]) and are based on the observation that, without the assumption of flatness and torsion-freeness, a single operation \(\triangleright\) no longer suffices to generate all the relevant vector fields from \(V_{1},\dots,V_{n}\). The solution to this is to introduce a second operation, which is given by another Lie bracket \(\llbracket\cdot,\cdot\rrbracket\). In this section, we will show the construction of \(\mathcal{F}\) with the use of the post-Lie structure of the primitive elements \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\) (mainly following [10]), which will still require rather strong assumptions on \((M,\nabla)\). In Section 5.4, we show that this leads to the same map as directly constructing \(\mathcal{F}\) on each forest as above and that the direct approach allows us to loosen the assumptions on \((M,\nabla)\) even further.
Let us first recall the definition of a post-Lie algebra:
**Definition 5.6**.: A post-Lie algebra \((\mathfrak{g},\llbracket\cdot,\cdot\rrbracket,\triangleright)\) is a Lie algebra \((\mathfrak{g},\llbracket\cdot,\cdot\rrbracket)\) equipped with a product \(\triangleright:\mathfrak{g}\otimes\mathfrak{g}\rightarrow\mathfrak{g}\), such that for all \(x,y,z\in\mathfrak{g}\)
\[x\triangleright\llbracket y,z\rrbracket =\llbracket x\triangleright y,z\rrbracket+\llbracket y,x \triangleright z\rrbracket\,,\] \[\llbracket x,y\rrbracket\triangleright z =a_{\triangleright}(x,y,z)-a_{\triangleright}(y,x,z)\,,\]
where \(a_{\triangleright}(x,y,z):=x\triangleright(y\triangleright z)-(x \triangleright y)\triangleright z\) is the associator.
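Two degenerate cases help to place this definition: if the Lie bracket is abelian, \(\llbracket\cdot,\cdot\rrbracket=0\), the second axiom reduces to
\[0=a_{\triangleright}(x,y,z)-a_{\triangleright}(y,x,z)\,,\]
which is exactly the pre-Lie identity of Definition 5.4, so pre-Lie algebras are precisely post-Lie algebras with abelian bracket; conversely, any Lie algebra \((\mathfrak{g},\llbracket\cdot,\cdot\rrbracket)\) with the trivial product \(x\triangleright y=0\) is post-Lie.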
As shown in [10], [11], the free post-Lie algebra is given as the free Lie algebra over the space of trees \(\mathcal{OT}\), equipped with the left grafting product \(\curvearrowright_{l}\) and the Lie bracket \(\llbracket\tau,\sigma\rrbracket=\tau\sigma-\sigma\tau\) being the commutator with respect to the tensor product. A thorough discussion of the free post-Lie algebra can be found in [11]. Note that \(\mathcal{H}^{g}_{MKW}\) as a space is given by the tensor algebra \(T(\mathcal{OT})\) with the coproduct being the coshuffle, which immediately gives us that the primitive elements of \(\mathcal{H}^{g}_{MKW}\) are given by the free Lie algebra over \(\mathcal{OT}\) with respect to the Lie bracket \(\llbracket\cdot,\cdot\rrbracket\). Thus, \((\mathcal{P}(\mathcal{H}^{g}_{MKW}),\llbracket\cdot,\cdot\rrbracket, \curvearrowright_{l})\) is the free post-Lie algebra.
It remains to find the post-Lie structure on \(\mathcal{V}\): We equip \(\mathcal{V}\) with the product \(V\triangleright U=\nabla_{V}U\) and the operation \(\llbracket U,V\rrbracket:=-T(U,V)\), where \(T\) is the torsion (note that this is in general not a Lie bracket). Thm 2.23 from [10] then states that \((\mathcal{V},\llbracket\cdot,\cdot\rrbracket,\triangleright)\) is a post-Lie algebra if and only if \((M,\nabla)\) is flat and \(\llbracket\cdot,\cdot\rrbracket\) is a Lie bracket, which Munthe-Kaas et al call "\((M,\nabla)\) is parallel". [10] shows that a sufficient condition for \(\llbracket\cdot,\cdot\rrbracket\) to be a Lie bracket is that \((M,\nabla)\) is flat and has _constant torsion_, that is, its covariant derivative vanishes: \(\nabla T=0\). This is the condition we will use in the following to ensure that \((\mathcal{V},\llbracket\cdot,\cdot\rrbracket,\triangleright)\) forms a post-Lie algebra.
Since \(\mathcal{V}\) forms a post-Lie algebra, the universal property immediately gives us a unique post-Lie map \(\mathcal{F}:\mathcal{P}(\mathcal{H}^{g}_{MKW})\rightarrow\mathcal{V}\) with \(\mathcal{F}(\bullet_{i})=V_{i}\) for \(i=1,\ldots,n\). And while this immediately implies that \(\mathcal{F}\) is a Lie homomorphism with respect to the Lie brackets \(\llbracket\cdot,\cdot\rrbracket\) on \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\) and \(\mathcal{V}\) respectively, we will only get a pseudo bialgebra map if it is a Lie homomorphism with respect to the Lie bracket \([\tau,\sigma]=\tau\star\sigma-\sigma\star\tau\) on \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\) and \([U,V]=U\circ V-V\circ U\) on \(\mathcal{V}\). This follows easily, as the main result of this section shows:
**Proposition 5.7**.: _Let \((M,\nabla)\) be flat with constant torsion. Then \(\mathcal{F}:(\mathcal{P}(\mathcal{H}^{g}_{MKW}),[\cdot,\cdot])\rightarrow( \mathcal{V},[\cdot,\cdot])\) is a Lie map and thus generates a pseudo bialgebra map \(\mathcal{F}:\mathcal{H}^{g}_{MKW}\rightarrow\mathcal{D}\)._
Before we show this proposition, let us present a helpful Lemma:
**Lemma 5.8**.: _Let \(f\in\mathcal{P}(\mathcal{H}^{g}_{MKW})\) and \(g\in\mathcal{O}\mathcal{F}\). Then_
\[f\star g=fg+f\curvearrowright_{l}g\,. \tag{5.1}\]
Proof.: We show the claim by induction, where we use that \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\) gets generated by \(\mathcal{OT}\) and the Lie bracket \(\llbracket\cdot,\cdot\rrbracket\). It is clear by the definition of \(\star\) that (5.1) holds whenever \(f\in\mathcal{OT}\) is a tree. Thus, let us assume that the claim holds for two \(f,g\in\mathcal{P}(\mathcal{H}^{g}_{MKW})\), i.e. \(f\star h=fh+f\curvearrowright_{l}h\) and \(g\star h=gh+g\curvearrowright_{l}h\) for all \(h\in\mathcal{O}\mathcal{F}\). We only need to show that it holds for \(\llbracket f,g\rrbracket=fg-gf\). That is, we show that for any \(h\in\mathcal{O}\mathcal{F}\), we have \(\llbracket f,g\rrbracket\star h=\llbracket f,g\rrbracket h+\llbracket f,g \rrbracket\curvearrowright_{l}h\). To do so, we introduce a bit more notation: Let \(g\in\mathcal{O}\mathcal{F}\) with a subforest \(h\subset g\). We denote by \(f\curvearrowright_{l}^{h}g\) the sum over all possibilities to let the trees of \(f\) grow out of nodes of \(h\) as the left-most child of said node. Analogously, we introduce the product \(f\otimes^{h}g\), which puts the forest \(f\) left of the leftmost tree of \(g\) that contains at least one node from \(h\). (The tree diagrams illustrating \(\curvearrowright_{l}^{h}\) and \(\otimes^{h}\) are omitted.)
The last notation we need for the proof is the following: For a forest \(f=f_{1}\dots f_{n}\), where \(f_{1},\dots,f_{n}\in\mathcal{OT}\), and \(J\subset\{1,\dots,n\}\), we denote \(f_{J}:=f_{j_{1}}\dots f_{j_{m}}\), where \(J=\{j_{1}<\dots<j_{m}\}\). One can now check that for all forests \(f,g,h\), with \(f=f_{1}\dots f_{n}\) and \(g=g_{1}\dots g_{m}\), we have that
\[(fg)\star h =\sum_{\begin{subarray}{c}J\subset\{1,\dots,n\}\\ G\subset\{1,\dots,m\}\end{subarray}}f_{J}g_{G}(f_{J^{c}}\curvearrowright_{l}^{ h}(g_{G^{c}}\curvearrowright_{l}h))\] \[=\sum_{J\subset\{1,\dots,n\}}f_{J}(f_{J^{c}}\curvearrowright_{l}^ {h}(g\star h))\,.\]
Assume that (5.1) holds for \(f,g\in\mathcal{P}(\mathcal{H}^{g}_{MKW})\) and \(h\in\mathcal{OF}\). Then the above becomes
\[(fg)\star h =\sum_{J\subset\{1,\dots,n\}}f_{J}(f_{J^{c}}\curvearrowright_{l} ^{h}(gh+g\curvearrowright_{l}h))\] \[=g\otimes^{h}(f\star h)+g\curvearrowright_{l}^{h}(f\star h)\] \[=fgh+g(f\curvearrowright_{l}h)+f(g\curvearrowright_{l}h)+(fg) \curvearrowright_{l}h\,.\]
We conclude that
\[\llbracket f,g\rrbracket\star h =fgh-gfh+(fg)\curvearrowright_{l}h-(gf)\curvearrowright_{l}h\] \[=\llbracket f,g\rrbracket h+\llbracket f,g\rrbracket\curvearrowright_{l}h\,,\]
which shows (5.1) for \(\llbracket f,g\rrbracket\).
Proof of Proposition 5.7.: By the definition of the torsion, we have
\[[U,V]=U\triangleright V-V\triangleright U-T(U,V)=U\triangleright V-V\triangleright U +\llbracket U,V\rrbracket\]
for all \(U,V\in\mathcal{V}\). On the other side, Lemma 5.8gives us for all \(f,g\in\mathcal{P}(\mathcal{H}^{g}_{MKW})\)
\[[f,g]=f\star g-g\star f=g\curvearrowright_{l}f-f\curvearrowright_{l}g+ \llbracket f,g\rrbracket\.\]
It now follows from the post-Lie properties of \(\mathcal{F}\) that
\[\mathcal{F}([f,g]) =\mathcal{F}(f)\triangleright\mathcal{F}(g)-\mathcal{F}(g) \triangleright\mathcal{F}(f)+\llbracket\mathcal{F}(f),\mathcal{F}(g)\rrbracket\] \[=\left[\mathcal{F}(f),\mathcal{F}(g)\right],\]
showing that \(\mathcal{F}\) is a Lie map with respect to the Lie brackets \([\cdot,\cdot]\) on both spaces.
### A pseudo bialgebra map on \(\mathcal{H}^{g}_{MKW}\)
In this section, we take a closer look at the linear map \(\mathcal{F}:\mathcal{H}^{g}_{MKW}\to\mathcal{D}\) constructed in the previous section via
\[\mathcal{F}(\bullet_{i})\psi =V_{i}\psi \tag{5.2}\] \[\mathcal{F}(\tau_{1},\dots,\tau_{k})\psi =\nabla^{k}\psi(\mathcal{F}(\tau_{1}),\dots,\mathcal{F}(\tau_{k}))\] \[\mathcal{F}([\tau_{1},\dots,\tau_{k}]_{i}) =\nabla^{k}V_{i}(\mathcal{F}(\tau_{1}),\dots,\mathcal{F}(\tau_{ k}))\,,\]
for \(i=1,\dots,n\) and \(\tau_{1},\dots,\tau_{k}\in\mathcal{OT}\) being ordered trees. To our knowledge, this map was first explicitly mentioned in [10], where it is only briefly discussed for non-planarly branched rough paths. We show that as long as \((M,\nabla)\) is a manifold with a smooth connection, \(\mathcal{F}\) is a pseudo bialgebra map, thus getting rid of the assumption that \(\nabla\) is flat and has constant torsion. We further show that if \(\nabla\) is flat and has constant torsion, \(\mathcal{F}\) restricted to \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\) is a post-Lie algebra map, making it a generalization of the previously discussed post-Lie map.
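To see why the ordering of the forests matters here, note that by the definition of \(\nabla^{2}\phi\),
\[\mathcal{F}(\bullet_{i}\bullet_{j})\phi-\mathcal{F}(\bullet_{j}\bullet_{i})\phi=\nabla^{2}\phi(V_{i},V_{j})-\nabla^{2}\phi(V_{j},V_{i})=[V_{i},V_{j}]\phi-(\nabla_{V_{i}}V_{j}-\nabla_{V_{j}}V_{i})\phi=-T(V_{i},V_{j})\phi\,,\]
so the two orderings of a forest differ exactly by torsion terms, and only for torsion-free connections may they be identified.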
While \(\mathcal{F}\) is in general ill-defined on \(\mathcal{H}_{GL}\), it is well-defined for a flat and torsion-free connection \(\nabla\). In this case, we also show that \(\mathcal{F}\) restricted to \(\mathcal{P}(\mathcal{H}_{GL})\) is a pre-Lie map, and thus becomes the map discussed in Proposition 5.5. Finally, we compare it to the map constructed in [1], where the authors managed to solve non-geometric RDEs on manifolds for level \(N=2\) for any connection \(\nabla\).
This leads us to the main result of this section:
**Theorem 5.9**.: _Assume that \((M,\nabla)\) is a manifold equipped with a smooth connection. Then the map \(\mathcal{F}:\mathcal{H}^{g}_{MKW}\to\mathcal{D}\) constructed above is a pseudo bialgebra map. Furthermore, given two non-empty trees \(\tau,\sigma\in\mathcal{OT}\), \(\mathcal{F}\) fulfills:_
\[\mathcal{F}(\tau\curvearrowright_{l}\sigma)=\mathcal{F}(\tau)\triangleright \mathcal{F}(\sigma)\,. \tag{5.3}\]
To ease notation in the following proofs, we use the notation \(V_{f}:=\mathcal{F}(f)\) for all forests \(f\in\mathcal{OF}\).
Proof.: We start by showing the relations \(V_{\tau\star\sigma}=V_{\tau}\circ V_{\sigma}\) and \(V_{\tau\curvearrowright_{l}\sigma}=\nabla_{V_{\tau}}V_{\sigma}\) for two trees \(\tau,\sigma\in\mathcal{OT}\). Since the identity operator is not a vector field, \(\nabla_{V_{\tau}}V_{\sigma}\) is not a well-defined object for empty trees. On the other hand, \(V_{\tau}\circ V_{\sigma}=V_{\tau\star\sigma}\) is obvious if \(\tau\) or \(\sigma\) is empty.
We show the claim by induction over the number of vertices in \(\sigma\). For the induction start, let \(\sigma=\bullet_{i}\) for some \(i\in\{1,\ldots,n\}\). Then we immediately have
\[\nabla_{V_{\tau}}V_{i}=V_{[\tau]_{i}}=V_{\tau\curvearrowright_{l}\bullet_{i}}.\]
On the other hand, (3.3) gives us for any vector fields \(U,W\) the formula \(U\circ W=\nabla_{U}W+\nabla^{2}(\cdot)(U,W)\). This leads to:
\[V_{\tau}\circ V_{i}=\nabla_{V_{\tau}}V_{i}+\nabla^{2}(\cdot)(V_{\tau},V_{i})= V_{[\tau]_{i}}+V_{\tau\bullet_{i}}=V_{\tau\star\bullet_{i}}\,,\]
showing the induction start. Now let \(\sigma=[\sigma_{1},\ldots,\sigma_{k}]_{i}\) have \(k\geq 1\) children. We consider the tree \([\tau,\sigma_{1},\ldots,\sigma_{k}]_{i}\) and calculate
\[V_{[\tau,\sigma_{1},\ldots,\sigma_{k}]_{i}} =\nabla_{V_{\tau}}(\nabla^{k}V_{i})(V_{\sigma_{1}},\ldots,V_{ \sigma_{k}})\] \[=V_{\tau}\circ V_{\sigma}-\nabla^{2}(\cdot)(V_{\tau},V_{\sigma})- \sum_{j=1}^{k}\nabla^{k}V_{i}(V_{\sigma_{1}},\ldots,\nabla_{V_{\tau}}V_{ \sigma_{j}},\ldots,V_{\sigma_{k}})\,.\]
Rearranging this expression and using the induction hypothesis on \(\tau\curvearrowright_{l}\sigma_{j}\) leads to:
\[V_{\tau}\circ V_{\sigma}=V_{[\tau,\sigma_{1},\ldots,\sigma_{k}]_{i}+\tau \sigma+\sum_{j=1}^{k}[\sigma_{1},\ldots,\tau\curvearrowright_{l}\sigma_{j}, \ldots,\sigma_{k}]_{i}}=V_{\tau\star\sigma}\,.\]
It now easily follows that
\[\nabla_{V_{\tau}}V_{\sigma}=V_{\tau}\circ V_{\sigma}-\nabla^{2}(\cdot)(V_{ \tau},V_{\sigma})=V_{\tau\star\sigma-\tau\sigma}=V_{\tau\curvearrowright_{l} \sigma}\,.\]
We continue by proving that \(V_{f_{1}}\circ V_{f_{2}}=V_{f_{1}\star f_{2}}\) holds for all forests \(f_{1},f_{2}\in\mathcal{OF}\). To get started, let \(\tau\) be a tree and \(f_{2}=\sigma_{1}\ldots\sigma_{k}\) be a forest. In that case, we get that for any smooth \(\phi\):
\[V_{\tau f_{2}}\phi =\nabla_{V_{\tau}}(\nabla^{k}\phi)(V_{\sigma_{1}},\ldots,V_{ \sigma_{k}})\] \[=V_{\tau}\circ V_{f_{2}}\phi-\sum_{i=1}^{k}\nabla^{k}\phi(V_{ \sigma_{1}},\ldots,\nabla_{V_{\tau}}V_{\sigma_{i}},\ldots,V_{\sigma_{k}})\] \[=V_{\tau}\circ V_{f_{2}}\phi-\sum_{i=1}^{k}V_{\sigma_{1}\ldots( \tau\curvearrowright_{l}\sigma_{i})\ldots\sigma_{k}}\phi\,.\]
Rearranging the terms again leads to \(V_{\tau}\circ V_{f_{2}}=V_{\tau\star f_{2}}\). Finally, let \(f_{1}=\tau_{1}\ldots\tau_{m}\) also be a forest. In that case,
\[V_{f_{1}}\circ V_{f_{2}}\phi =\nabla^{m}(V_{f_{2}}\phi)(V_{\tau_{1}},\ldots,V_{\tau_{m}})\] \[=V_{\tau_{1}}\circ(\nabla^{m-1}V_{f_{2}}\phi)(V_{\tau_{2}},\ldots, V_{\tau_{m}})-\sum_{i=2}^{m}\nabla^{m-1}(V_{f_{2}}\phi)(V_{\tau_{2}},\ldots, \nabla_{V_{\tau_{1}}}V_{\tau_{i}},\ldots,V_{\tau_{m}})\] \[=V_{\tau_{1}\star(\tau_{2}\ldots\tau_{m})\star f_{2}-\sum_{i=2}^{ m}(\tau_{2}\ldots(\tau_{1}\curvearrowright_{l}\tau_{i})\ldots\tau_{m})\star f_{2}} \phi=V_{f_{1}\star f_{2}}\phi\,,\]
where we again use an inductive argument over the number of trees in \(f_{1}\).
All that remains to prove is that for two smooth functions \(\phi,\psi\) and any forest \(f\in\mathcal{OF}\):
\[V_{\Delta f}(\phi\otimes\psi)=V_{f}(\phi\cdot\psi)\,,\]
where \(\Delta\) is the coshuffle, which we prove by yet another induction over the number \(m\) of trees in \(f=\tau_{1}\dots\tau_{m}\). For \(m=1\), \(V_{f}=V_{\tau_{1}}\) is just a vector field, so the result follows from the Leibniz rule \(V_{f}(\psi\phi)=(V_{f}\psi)\phi+\psi(V_{f}\phi)=(V_{f}\otimes\mathrm{Id}+\mathrm{Id}\otimes V_{f})(\psi\otimes\phi)\).
Now consider \(m\geq 2\). We use Sweedler's notation \(\Delta(\tau_{2}\dots\tau_{m})=\sum_{(\bar{f})}\bar{f}^{(1)}\otimes\bar{f}^{(2)}\) for \(\bar{f}=\tau_{2}\dots\tau_{m}\). Observe that \(\sum_{i=2}^{m}\Delta(\tau_{2}\dots(\tau_{1}\curvearrowright_{l}\tau_{i})\dots \tau_{m})=\sum_{(\bar{f})}(\tau_{1}\curvearrowright_{l}\bar{f}^{(1)}\otimes \bar{f}^{(2)}+\bar{f}^{(1)}\otimes\tau_{1}\curvearrowright_{l}\bar{f}^{(2)})\) holds. With this in mind, we see that the following holds:
\[V_{f}(\phi\psi) =V_{\tau_{1}}(\nabla^{m-1}(\phi\psi)(V_{\tau_{2}},\dots,V_{\tau_ {m}}))-\sum_{i=2}^{m}\nabla^{m-1}(\phi\psi)(V_{\tau_{2}},\dots,\nabla_{V_{ \tau_{1}}}V_{\tau_{i}},\dots,V_{\tau_{m}})\] \[=V_{\tau_{1}}(V_{\Delta(\tau_{2}\dots\tau_{m})}(\phi\otimes\psi) )-V_{\sum_{(\bar{f})}(\tau_{1}\curvearrowright_{l}\bar{f}^{(1)}\otimes\bar{f}^{(2)}+\bar{f}^{(1)}\otimes \tau_{1}\curvearrowright_{l}\bar{f}^{(2)})}(\phi\otimes\psi)\] \[=V_{\sum_{(\bar{f})}(\tau_{1}\star\bar{f}^{(1)}\otimes\bar{f}^{(2)}+\bar{f}^{(1)}\otimes \tau_{1}\star\bar{f}^{(2)}-\tau_{1}\curvearrowright_{l}\bar{f}^{(1)}\otimes\bar{f}^{(2)}-\bar{f}^{(1)} \otimes\tau_{1}\curvearrowright_{l}\bar{f}^{(2)})}(\phi\otimes\psi)\] \[=V_{\sum_{(\bar{f})}(\tau_{1}\bar{f}^{(1)}\otimes\bar{f}^{(2)}+\bar{f}^{(1)}\otimes\tau_{1}\bar{f}^{(2)})}(\phi \otimes\psi)\] \[=V_{\Delta f}(\phi\otimes\psi).\]
#### 5.4.1. Consistency
We still need to check that \(\mathcal{F}\) coincides with the elementary differentials constructed in Sections 5.1-5.3. To do so, we show that
* If \((M,\nabla)\) is flat and torsion-free, \(\mathcal{F}\) is well-defined on the Grossman-Larson algebra. In this case, \(\mathcal{F}|_{\mathcal{P}(\mathcal{H}_{GL})}\) is equal to the unique pre-Lie map \(\tilde{\mathcal{F}}\) generated by \(\tilde{\mathcal{F}}(\bullet_{i})=V_{i}\).
* If \((M,\nabla)\) is flat and has constant torsion, \(\mathcal{F}|_{\mathcal{P}(\mathcal{H}_{MKW}^{g})}\) is equal to the free post-Lie algebra map \(\hat{\mathcal{F}}\) generated by \(\hat{\mathcal{F}}(\bullet_{i})=V_{i}\).
* In [1], Armstrong et al solve (1.1) for level \(N=2\) in the most general setting so far. We show that even in this case, their solution agrees with our approach with the above-constructed map \(\mathcal{F}\).
Let us start with the flat and torsion-free case, so that \((\mathcal{V},\triangleright)\) is a pre-Lie algebra. By standard results, we know that there is a set of coordinates such that the Christoffel symbols vanish locally. This immediately implies that for all \(k\geq 1\) and any permutation \(\sigma:\{1,\dots,k\}\to\{1,\dots,k\}\), we have
\[\nabla^{k}\cdot(U_{1},\dots,U_{k})=\nabla^{k}\cdot(U_{\sigma_{1}},\dots,U_{ \sigma_{k}}) \tag{5.4}\]
for any vector fields \(U_{1},\dots,U_{k}\in\mathcal{V}\). Thus, \(\mathcal{F}:\mathcal{H}_{GL}\to\mathcal{D}\) is well-defined.
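For \(k=2\) and a function \(\phi\), this is the familiar Schwarz argument: in such a chart,
\[\nabla^{2}\phi(U_{1},U_{2})=U_{1}(U_{2}\phi)-(\nabla_{U_{1}}U_{2})\phi=U_{1}^{\alpha}U_{2}^{\beta}\partial_{\alpha}\partial_{\beta}\phi\,,\]
which is symmetric in \(U_{1},U_{2}\); the higher orders and the vector field case follow inductively in the same way. We can now show the following: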
**Theorem 5.10**.: _Assume \((M,\nabla)\) is flat and torsion-free. Then \(\mathcal{F}|_{\mathcal{P}(\mathcal{H}_{GL})}:(\mathcal{P}(\mathcal{H}_{GL}), \curvearrowright)\to(\mathcal{V},\triangleright)\) is a pre-Lie algebra map. Thus, \(\mathcal{F}=\tilde{\mathcal{F}}\) on \(\mathcal{P}(\mathcal{H}_{GL})\)._
Proof.: We only need to show that for any trees \(\tau,\sigma\in\mathcal{P}(\mathcal{H}_{GL})=\operatorname{span}(\mathcal{U} \mathcal{T})\), we have \(\mathcal{F}(\tau\curvearrowright\sigma)=\mathcal{F}(\tau)\triangleright\mathcal{F}(\sigma)\). Note that (5.3) holds for unordered trees (if one replaces \(\curvearrowright_{l}\) with \(\curvearrowright\)) by the same proof as for ordered trees, showing the claim.
Let us now consider the case in that \((M,\nabla)\) is flat and has constant torsion, i.e. \(\nabla_{Z}T=0\) for all \(Z\in\mathcal{V}\). Then \((\mathcal{V},\llbracket\cdot,\cdot\rrbracket,\triangleright)\), where \(\llbracket U,V\rrbracket=-T(U,V)\), forms a post-Lie algebra. We claim that in this case, \(\mathcal{F}\) restricted to the primitive elements of \(\mathcal{H}_{MKW}^{g}\) is a post-Lie algebra map and thus equal to \(\hat{\mathcal{F}}\). To do so, we need to show that it preserves \(\llbracket\cdot,\cdot\rrbracket\) as well as the product \(\curvearrowright_{l}\). Using Lemma 5.8, we see that for \(f,g\in\mathcal{P}(\mathcal{H}_{MKW}^{g})\), we have
\[[f,g]=f\curvearrowright_{l}g-g\curvearrowright_{l}f+\llbracket f,g\rrbracket\,\]
while for two vector fields \(U,V\in\mathcal{V}\), we have
\[[U,V]=U\triangleright V-V\triangleright U+\llbracket U,V\rrbracket\.\]
Thus, showing that \(\mathcal{F}|_{\mathcal{P}(\mathcal{H}^{g}_{MKW})}\) is a post-Lie algebra map is equivalent to showing that for all \(\tau,\sigma\in\mathcal{P}(\mathcal{H}^{g}_{MKW})\), we have
\[\mathcal{F}(\tau\curvearrowright_{l}\sigma) =\mathcal{F}(\tau)\triangleright\mathcal{F}(\sigma)\] \[\mathcal{F}([\tau,\sigma]) =\left[\mathcal{F}(\tau),\mathcal{F}(\sigma)\right].\]
We show this in the following theorem:
**Theorem 5.11**.: _Let \(\nabla\) be flat with constant torsion. Then \(\mathcal{F}|_{\mathcal{P}(\mathcal{H}^{g}_{MKW})}:\mathcal{P}(\mathcal{H}^{g }_{MKW})\rightarrow\mathcal{V}\) is a post-Lie algebra map. It follows that \(\mathcal{F}=\hat{\mathcal{F}}\)._
Proof.: Theorem 5.9 already gives us that \(\mathcal{F}\) is a Lie algebra map with respect to the Lie brackets \([\cdot,\cdot]\) on \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\) and \(\mathcal{V}\), so we only need to show that \(\mathcal{F}(\tau\curvearrowright_{l}\sigma)=\mathcal{F}(\tau)\triangleright \mathcal{F}(\sigma)\). Furthermore, Theorem 5.9 already shows this for trees \(\tau,\sigma\in\mathcal{OT}\). Since any \(f\in\mathcal{P}(\mathcal{H}^{g}_{MKW})\) is a finite sum over forests, we can set \(gr(f)\) to be the highest number of trees in any summand of \(f\) and use induction over this number. So assume that the claim has already been shown for any \(f,g\) such that \(gr(f),gr(g)\leq n\) for some \(n\in\mathbb{N}\). Since \(\mathcal{P}(\mathcal{H}^{g}_{MKW})\) is the free Lie algebra with respect to the Lie bracket \([\![\cdot,\cdot]\!]\), it suffices to show for any \(f,g,h\) with \(gr(f),gr(g),gr(h)\leq n\) that
\[\mathcal{F}([\![f,g]\!]\curvearrowright_{l}h) =\mathcal{F}([\![f,g]\!])\triangleright\mathcal{F}(h) \tag{5.5}\] \[\mathcal{F}(h\curvearrowright_{l}[\![f,g]\!]) =\mathcal{F}(h)\triangleright\mathcal{F}([\![f,g]\!])\,. \tag{5.6}\]
Furthermore, since \([\![f,g]\!]=[f,g]-f\curvearrowright_{l}g-g\curvearrowright_{l}f\), we can switch the Lie brackets and show the following instead of (5.5):
\[\mathcal{F}([f,g]\curvearrowright_{l}h)=\mathcal{F}([f,g])\triangleright \mathcal{F}(h). \tag{5.7}\]
To do so, we recall the definitions of flatness and constant torsion:
* \((M,\nabla)\) is flat, if and only if \[[X,Y]\triangleright Z=X\triangleright(Y\triangleright Z)-Y\triangleright(X \triangleright Z)\] holds for all \(X,Y,Z\in\mathcal{V}\,\).
* \((M,\nabla)\) has constant torsion, if and only if \(\nabla_{Z}T=0\) for all \(Z\in\mathcal{V}\). One easily calculates that this is equivalent to \[Z\triangleright T(X,Y)=T(Z\triangleright X,Y)+T(X,Z\triangleright Y),\] for all \(X,Y,Z\in\mathcal{V}\), where \(T(X,Y)=\nabla_{X}Y-\nabla_{Y}X-[X,Y]\).
Let us start by showing (5.7). The flatness gives us:
\[[\mathcal{F}(f),\mathcal{F}(g)]\triangleright\mathcal{F}(h) =\mathcal{F}(f)\triangleright(\mathcal{F}(g)\triangleright\mathcal{F} (h))-\mathcal{F}(g)\triangleright(\mathcal{F}(f)\triangleright\mathcal{F}(h))\] \[=\mathcal{F}(f\curvearrowright_{l}(g\curvearrowright_{l}h)-g \curvearrowright_{l}(f\curvearrowright_{l}h))\] \[=\mathcal{F}((f\star g-g\star f)\curvearrowright_{l}h)\] \[=\mathcal{F}([f,g]\curvearrowright_{l}h)\,,\]
where we have used the known connection between grafting and Grossman-Larson products \(f\curvearrowright_{l}(g\curvearrowright_{l}h)=(f\star g)\curvearrowright_{l}h\) in \(\mathcal{H}^{g}_{MKW}\), see [3] for reference. To show (5.6), we use the constant torsion as well as \(\mathcal{F}([\![f,g]\!])=[\![\mathcal{F}(f),\mathcal{F}(g)]\!]\) by induction hypothesis to see:
\[\mathcal{F}(h)\triangleright\mathcal{F}([\![f,g]\!]) =\mathcal{F}(h)\triangleright[\![\mathcal{F}(f),\mathcal{F}(g)]\!]\] \[=[\![\mathcal{F}(h)\triangleright\mathcal{F}(f),\mathcal{F}(g)]\!] +[\![\mathcal{F}(f),\mathcal{F}(h)\triangleright\mathcal{F}(g)]\!]\] \[=\mathcal{F}([\![h\curvearrowright_{l}f,g]\!]+[\![f,h\curvearrowright_ {l}g]\!])\] \[=\mathcal{F}(h\curvearrowright_{l}[\![f,g]\!])\,,\]
which finishes the proof.
Finally, let us discuss the relation to the results from [1]: In that paper, the authors solve the RDE (1.1) for non-geometric rough paths on manifolds in the case \(N=2\) by constructing rough integrals in each coordinate chart in a coordinate-independent way. They manage to solve the equation in this case for the \(\mathcal{H}_{GL}\) algebra, any manifold \(M\), and any connection \(\nabla\). Note that they take their analysis a step further and also discuss the case in which \(X\) lives on a manifold. Since we restrict ourselves to the case in which \(X\) lives in flat space, we only show that our approach gives the same solution in that case, in which their formula (3.29) simplifies to
\[Y^{k}_{s,t} \approx V^{k}_{i}(Y_{s})\mathbb{X}^{i}_{s,t}+V^{\alpha}_{i} \partial_{\alpha}V^{k}_{j}(Y_{s})\mathbb{X}^{ij}_{s,t}-\frac{1}{2}V^{i}_{ \alpha}V^{j}_{\beta}\Gamma^{k}_{ij}(Y_{s})(\mathbb{X}^{\alpha}_{s,t}\mathbb{X }^{\beta}_{s,t}-\mathbb{X}^{\alpha\beta}_{s,t}-\mathbb{X}^{\beta\alpha}_{s,t})\] \[=V^{k}_{i}(Y_{s})\mathbb{X}^{\bullet{\rm i}}_{s,t}+(\tilde{ \nabla}_{V_{i}}V_{j})^{k}(Y_{s})\mathbb{X}^{\bullet{\rm j}}_{s,t}+\frac{1}{2} \tilde{\nabla}^{2}\phi^{k}(V_{i},V_{j})\mathbb{X}^{\bullet{\rm i}\bullet{\rm j }}\,\]
where \(\phi^{k}\) is the \(k\)-th coordinate function and \(\tilde{\nabla}\) is the torsion-free version of \(\nabla\) (i.e. \(\tilde{\Gamma}^{k}_{ij}=\frac{1}{2}(\Gamma^{k}_{ij}+\Gamma^{k}_{ji})\)). The \(\approx\) means that the two sides agree up to an error of order \(o(|t-s|)\). It follows that our Davie's formula (4.9)
\[\phi(Y_{t})\approx\sum_{|\tau|\leq 2}\frac{1}{sg(\tau)}\mathcal{F}(\tau) \phi(Y_{s})\mathbb{X}^{\tau}_{s,t}=\mathcal{F}(\mathbb{X}_{s,t})\phi(Y_{s})\]
holds, as long as we replace the connection \(\nabla\) with \(\tilde{\nabla}\). (Recall that the symbols in \(\mathcal{H}_{GL}\) were chosen in such a way that \(\langle\tau,\sigma\rangle=sg(\tau)\delta_{\tau,\sigma}\). Thus, one needs to divide by \(sg(\tau)\) in the sum.) This mirrors our analysis of \(\mathcal{F}\): While it always works on \(\mathcal{H}^{*}_{MKW}\), we can only solve the RDE on a manifold with a rough path in \(\mathcal{H}_{GL}\) if the commuting property (5.4) holds. Having a torsion-free connection \(\tilde{\nabla}\) ensures this property for \(k\leq 2\), whereas more general \(k\) also require a flat connection.
_Remark 5.12_.: The most interesting observation in the level \(N=2\) case is that one can start with a general connection and construct a pseudo bialgebra map \(\mathcal{F}\) by replacing \(\nabla\) with \(\tilde{\nabla}\). This raises the question if it is possible to construct a pseudo bialgebra map for higher levels with weaker conditions than flatness and torsion-freeness. It should be noted that such a map could no longer fulfill the condition \(\mathcal{F}(\tau\curvearrowright\sigma)=\mathcal{F}(\tau)\triangleright\mathcal{F}(\sigma)\) for any trees \(\tau,\sigma\in\mathcal{U}\mathcal{T}\), since \(\mathcal{P}(\mathcal{H}_{GL})\) is the free pre-Lie algebra, so the above condition forces \(\mathcal{F}\) to be as in (5.2). However, as long as one does not mind losing that condition, it is an open problem whether one can construct a pseudo bialgebra map from a general connection \(\nabla\). In fact, [10] succeeded at constructing rough integrals on manifolds, giving us high hope that it is possible.
## 6. Discussion of rough paths on manifolds
One of the critical aspects of geometric rough path theory is that the solution to an RDE can be seen as a rough path itself. For an RDE on a manifold \(M\), it follows that the solution to (1.1) should give us a rough path on a manifold, a concept which was introduced in [12] and refined in [13]. In this section, we want to analyze whether there is a canonical rough path living over our solution \(Y_{t}\).
For a non-geometric rough path, we can almost immediately answer this in the negative: At the current state, it is unclear how to define a rough path over a general Hopf algebra on a manifold. Even in the branched case, one needs an additional structure called a bracket extension [12].
_Remark 6.1_.: It should be noted that [10] successfully constructed branched rough paths on manifolds with said bracket extension. It should also be noted that a canonical way to choose a basis of trees for \(\mathcal{H}_{GL}\) in the sense of [14] is currently being developed by Carlo Bellingeri, Emilio Ferrucci, and Nikolas Tapia. This would give one a canonical bracket extension and thus a canonical branched rough path on a manifold. It would be an interesting follow-up project to see how the results of this section can be transferred to branched rough paths, and how this connects to pseudo bialgebra maps.
However, in this paper, we will restrict ourselves to geometric rough paths on manifolds. Let us start by recalling the notion of geometric rough paths on manifolds, before showing that there is indeed a rough path \(\mathbb{Y}\) on \(M\) whose trace fulfills \(\mathbb{Y}_{t}^{i}=Y_{t}^{i}\) with \(\phi^{i}(Y_{t})\approx\mathcal{F}(\mathbb{X}_{s,t})\phi^{i}(Y_{s})\), where \(\phi^{i}\) is the \(i\)-th component of some coordinate function \(\phi\).
_Remark 6.2_.: It should be noted that geometric rough path theory on manifolds is well understood. It is further clear that the classical solution to an RDE on a manifold should have \(Y_{t}\) as a trace, which fulfills our Davie's formula \(\phi(Y_{t})\approx\mathcal{F}(\mathbb{X}_{s,t})\phi(Y_{s})\) for any smooth function \(\phi\in C^{\infty}(M)\), since our solution agrees with the classical solution in the geometric rough path case. The only real new proof in this section is a new proof of Corollary 6.7, using mainly algebraic arguments.
However, we still want to present this section as a possible starting point towards understanding general rough path solutions on manifolds as rough paths themselves, especially considering Remark 6.1.
### Recall: Geometric rough paths on manifolds
We use the viewpoint of [1] (also used in [1] and [13] for non-geometric rough paths), in which a rough path on a manifold is a collection of rough paths in coordinate sheets \((\mathbb{X}_{i},\phi_{i})\) that is consistent under the push-forward of the coordinate change functions: \((\phi_{j}\circ\phi_{i}^{-1})_{*}\mathbb{X}_{j}=\mathbb{X}_{i}\) up to some restriction on the times \(s,t\). To define this properly, we need to introduce the notion of the push-forward, which requires the integration of a rough path against a one-form, which in turn requires the notion of half shuffles. Let us start by introducing half shuffles:
Given two words \(w,u\), the ordered shuffle \(u\bar{\sqcup}w\) is given as the sum over all words with the same letters as \(uw\), such that the order inside \(u\) and \(w\) is preserved and the order of the last letters \(u_{|u|}\) and \(w_{|w|}\) is preserved. Formally, that is given by
\[u\bar{\sqcup}w=(u\sqcup\bar{w})w_{|w|}\,,\]
where \(w=(\bar{w},w_{|w|})\) for a word \(\bar{w}\) and a letter \(w_{|w|}\). Conversely, we define the set of ordered deshuffles as follows: \(\tilde{\Delta}^{n}(w)\) is the set of all splittings of \(w\) into \(n\) many non-empty words \((u_{1},\dots,u_{n})\), such that
* \(w\in Sh(u_{1},\dots,u_{n})\), where \(Sh(u_{1},\dots,u_{n})\) is the set of all words with the same letters as \(u_{1}\dots u_{n}\), such that the order of letters in all \(u_{i},i=1,\dots,n\) is preserved. It especially holds that \(u_{1}\sqcup\dots\sqcup u_{n}=\sum_{w\in Sh(u_{1},\dots,u_{n})}w\), as long as no letter in \(u_{1},\dots,u_{n}\) appears more than once.
* The last letters of \(u_{1},\dots,u_{n}\) are ordered as they are in the word \(w\).
For example, we have that
\[\tilde{\Delta}^{2}(123)=\{(1,23),(2,13),(12,3)\}\,.\]
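To make the ordered deshuffles concrete, the following Python sketch (the helper name `ordered_deshuffles` is our own choice) enumerates all splittings of a word into \(n\) non-empty subwords whose last letters appear in the order of \(w\); letters are treated as distinguishable ("colored"). It reproduces the example above.

```python
from itertools import product

def ordered_deshuffles(w, n):
    """All ordered deshuffles (u_1, ..., u_n) of the word w into n
    non-empty subwords, treating the letters of w as distinguishable."""
    results = []
    for assignment in product(range(n), repeat=len(w)):
        parts = [[] for _ in range(n)]
        for pos, part in enumerate(assignment):
            parts[part].append(pos)
        if any(not p for p in parts):
            continue                      # every u_i must be non-empty
        last = [p[-1] for p in parts]     # positions of the last letters
        if last == sorted(last):          # last letters ordered as in w
            results.append(tuple(''.join(w[i] for i in p) for p in parts))
    return results

print(ordered_deshuffles('123', 2))
# [('12', '3'), ('1', '23'), ('2', '13')]
```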
Using this notation, we can now introduce the integral against a one-form \(\nu\): In the original paper [10], Terry Lyons shows that the integral of a one-form against a rough path can be seen as another rough path. To do so, he constructs so-called _almost multiplicative functionals_ out of \(\nu\) and \(\mathbb{X}\). Almost multiplicative functionals uniquely give rise to rough paths, which are characterized by the fact that \(|\mathbb{X}^{w}-\mathbb{Y}^{w}|\in o(|t-s|)\). We write \(\mathbb{X}_{s,t}^{w}\approx\mathbb{Y}_{s,t}^{w}\).
Using the ordered shuffle, we split Definition 3.2.2 from [10] into each word to get the following definition:
**Definition 6.3**.: Let \(\nu:\mathbb{R}^{n}\to L(\mathbb{R}^{n},\mathbb{R}^{d})\) be a smooth one-form and let \(\mathbb{X}\) be a geometric rough path over \(\mathbb{R}^{n}\). The integral \(\int\nu(d\mathbb{X})\) is given by the unique rough path \(\mathbb{Y}\), such that for each word \(w\), we have
\[\mathbb{Y}_{s,t}^{w}\approx\sum_{|u|\geq|w|}\left(\sum_{(s_{1},\dots,s_{|w|}) \in\tilde{\Delta}^{|w|}(u)}\nu_{s_{1}}^{w_{1}}\dots\nu_{s_{|w|}}^{w_{|w|}}(X_{s })\right)\mathbb{X}_{s,t}^{u}\,. \tag{6.1}\]
We denote \(\mathbb{X}^{\nu}:=\mathbb{Y}\).
Here \(\nu_{w}\) is defined as follows: Note that \(\nu\) is an \(\mathbb{R}^{d}\)-valued one-form on \(\mathbb{R}^{n}\) (i.e. \(\nu:\mathbb{R}^{n}\to L(\mathbb{R}^{n},\mathbb{R}^{d})\)). Thus, it can be written as \(\nu(x)=\sum_{i=1}^{n}\nu_{i}(x)d^{i}\), where \(\nu_{i}:\mathbb{R}^{n}\to\mathbb{R}^{d}\) are smooth functions. We set \(\nu_{w}:=\partial_{w_{1}}\dots\partial_{w_{|w|-1}}\nu_{w_{|w|}}\).
Note that any smooth function \(\phi:\mathbb{R}^{n}\to\mathbb{R}^{d}\) immediately generates a one-form \(d\phi=\sum_{i=1}^{n}\partial_{i}\phi d^{i}\) by differentiating \(\phi\). With this, one can set the push-forward of \(\phi\) to be:
**Definition 6.4**.: The push-forward of \(\mathbb{X}\) under \(\phi\) is the rough path given by \(\mathbb{X}^{d\phi}\).
This leads to the following definition of a geometric rough path on a manifold:
**Definition 6.5**.: [10] A (geometric) rough path on a manifold over the interval \(J\) is a finite collection \((x_{i},\mathbb{X}_{i},J_{i},(\phi_{i},U_{i}))\), such that
* \(U_{i}\subset\mathbb{R}^{n}\) are open sets and \(\phi_{i}:M\supset V_{i}\to U_{i}\) are diffeomorphisms.
* \((J_{i})_{i}\) are intervals forming a compact cover of the interval \(J\).
* \(x_{i}\) is a path in \(U_{i}\) with \(x_{i}^{j}(t)-x_{i}^{j}(s)=\mathbb{X}_{i;s,t}^{j}\) for each letter \(j=1,\dots,n\). (That is, \(x_{i}\) is a trace of \(\mathbb{X}_{i}\)).
* \(\mathbb{X}_{i}\) is a geometric rough path over the interval \(J_{i}\) over \(\mathbb{R}^{n}\).
* **Consistency condition:** It holds that \(\mathbb{X}_{i}=(\phi_{j}\circ\phi_{i}^{-1})_{*}\mathbb{X}_{j}\) on \(J_{i}\cap J_{j}\), as long as this intersection is not empty.
### Solutions to RDEs are rough paths
In this section, we recall the construction of rough path solutions on manifolds and present a short, new proof that the solution to (1.1) is indeed a rough path on \(M\). We furthermore briefly discuss the connection to the pseudo bialgebra map \(\mathcal{F}\). The main idea towards solving an RDE on \(M\) is to express the vector fields \(V_{i}\) in some coordinate chart as \(V_{i}^{\phi}=V_{i}\phi^{k}\partial_{k}\) for each \(i=1,\dots,n\). One can then solve the RDE
\[d(\phi(Y_{t}))=V_{i}^{\phi}(Y_{t})d\mathbb{X}_{t}^{i}\]
as a classical RDE over flat space for each coordinate function \(\phi\). Doing so leads to the following terms for each word \(w\in T(\mathbb{R}^{d}),s\leq t\) and coordinate function \(\phi\):
\[\mathbb{Y}_{s,t}^{\phi,w}\approx\sum_{|u|\geq|w|}\sum_{(s_{1},\dots,s_{|w|}) \in\tilde{\Delta}^{|w|}(u)}V_{s_{1}}\phi^{w_{1}}(Y_{s})\dots V_{s_{|w|}}\phi^{ w_{|w|}}(Y_{s})\mathbb{X}_{s,t}^{u}\,, \tag{6.2}\]
where we sum over words \(u\) of length less than or equal to \(N\) as well as all ordered splittings \((s_{1},\dots,s_{|w|})\) of \(u\) into \(|w|\) many non-empty words. This gives a unique rough path in each coordinate chart, as long as we also choose a starting point \(y_{0}\in M\) (becoming \(\phi(y_{0})\) in the coordinate chart). See [10] for reference. Note that, as expected, (6.2) for the \(1\)-letter words \(i=1,\dots,d\) gives us:
\[\phi(Y_{t}) \approx\sum_{0\leq|w|\leq N}V_{w_{1}}\circ\dots\circ V_{w_{|w|}} \phi(Y_{s})\mathbb{X}_{s,t}^{w}\] \[=\mathcal{F}(\mathbb{X}_{s,t})\phi(Y_{s})\,,\]
showing that the trace on \(M\) is indeed the same path we get with our notion of solution. We also see that \(\mathbb{Y}\) does indeed seem to fulfill some form of Davie's formula:
\[\mathbb{Y}_{s,t}^{\phi,w}\approx\tilde{\mathcal{F}}(\mathbb{X}_{s,t})\phi(Y_{s })\,,\]
where for each word \(u\), \(\tilde{\mathcal{F}}(u)\) maps functions \(\phi\in C^{\infty}(M,\mathbb{R}^{d})\) onto functions in \(C^{\infty}(M,T(\mathbb{R}^{d}))\) via
\[\Big{\langle}\tilde{\mathcal{F}}(u)\phi,w\Big{\rangle}=\sum_{(s_{1},\dots,s_{| w|})\in\tilde{\Delta}^{|w|}(u)}V_{s_{1}}\phi^{w_{1}}(Y_{s})\dots V_{s_{|w|}} \phi^{w_{|w|}}(Y_{s})\,.\]
This indicates that, for more general algebras, one should try to find maps \(\tilde{\mathcal{F}}:C^{\infty}(M,\mathbb{R}^{d})\to C^{\infty}(M,\mathcal{H})\) for some Hopf algebra \(\mathcal{H}\), such that the above Davie's formula has the correct trace and is "coordinate independent". However, as stated above, it is at the moment not clear how to push forward a group-like element for a general Hopf algebra from one coordinate chart to another, so "coordinate independent" is ambiguous here. For the geometric case, everything is well-defined and
we will spend the rest of this section presenting a new proof, showing the coordinate independence: To do so, we show the following preliminary result:
**Lemma 6.6**.: _For two given words \(u,w\) with \(\left|w\right|\leq\left|u\right|\), consider the two sets_
\[A :=\{(t,s,z)\ |\ (t_{1},\ldots,t_{\left|w\right|})\in\tilde{\Delta}^{ \left|w\right|}(u),1\leq\left|s_{i}\right|\leq\left|t_{i}\right|,(z_{1}^{i}, \ldots,z_{\left|s_{i}\right|}^{i})\in\tilde{\Delta}^{\left|s_{i}\right|}(t_{i}) \text{ for }i=1,\ldots,\left|w\right|\}\] \[B :=\{(v,s,z)\ |\ \left|u\right|\geq\left|v\right|\geq\left|w\right|,(s_{1},\ldots,s_{\left|w\right|})\in\tilde{\Delta}^{\left|w\right|}(v),(z_{1},\ldots,z_{\left|v\right|})\in\tilde{\Delta}^{\left|v\right|}(u)\}\,.\]
_Then there are maps \(v(t,s,z)\) and \(t(v,s,z)\), such that \((t,s,z)\mapsto(v(t,s,z),s,z)\) as well as \((v,s,z)\mapsto(t(v,s,z),s,z)\) are inverse to each other and thus bijections._
Proof.: We start by constructing \(v(t,s,z)\): Let \((t,s,z)\in A\) and denote the index set of the \(z_{k}^{i}\) by \(I:=\{(i,k_{i})\ |\ i=1,\ldots,\left|w\right|,k_{i}=1,\ldots,\left|s_{i}\right|\}\). Let \(\sigma:\{1,\ldots,m\}\to I\) be the unique, bijective map such that \(z_{\sigma(1)},\ldots,z_{\sigma(m)}\) are ordered in such a way, that their last letters have the same order as in \(u\). (Here we assume that the letters of u are colored in the sense that we can differentiate all letters in \(u\), even if \(u\) contains the same letter several times.)
We then define \(v:=s_{\sigma(1)}\ldots s_{\sigma(m)}\), where \(s_{i,k_{i}}\) is the \(k_{i}\)-th letter of the word \(s_{i}\). It follows that \((s_{1},\ldots,s_{\left|w\right|})\) is a splitting of \(v\). Furthermore, note that the last letters of \(z_{\left|s_{i}\right|}^{i}\) are just the last letters of \(t_{i}\) for \(i=1,\ldots,\left|w\right|\), which are ordered by the order of \(u\). Thus, \(\sigma\) does not change the ordering of \((z_{\left|s_{i}\right|}^{i})_{i=1,\ldots,\left|w\right|}\), which means that the last letters of \(s_{1},\ldots,s_{\left|w\right|}\) are ordered as in \(v\). It follows that \((s_{1},\ldots,s_{\left|w\right|})\in\tilde{\Delta}^{\left|w\right|}(v)\). Furthermore, one easily sees that \(\left|u\right|\geq\left|v\right|\geq\left|w\right|\), since \(1\leq\left|s_{i}\right|\leq\left|t_{i}\right|\). It follows that \(\zeta(t,s,z):=(v(t,s,z),s,z)\in B\).
We show that it is bijective by constructing \(t(v,s,z)\) such that \(\zeta^{-1}(v,s,z)=(t(v,s,z),s,z)\). Given \((v,s,z)\), let \(\sigma:I\rightarrow\{1,\ldots,\left|v\right|\}\) be the unique bijection, such that \(s(i,k_{i})=v_{\sigma(i,k_{i})}\) for all \((i,k_{i})\in I\). We then set \(z_{k_{i}}^{i}:=z_{\sigma(i,k_{i})}\). For each \(i\), we then set \(t_{i}\) to be the word containing the same letters as \(z_{1}^{i},\ldots,z_{\left|s_{i}\right|}^{i}\), where we reorder the letters such that they have the same order as in \(u\). It is straightforward to see that \(v(t(v,s,z),s,z)=v\), so \(\zeta\) is bijective given that \((t(v,s,z),s,z)\in A\). To show this, note that \((t_{1},\ldots,t_{\left|w\right|})\) is a splitting of \(u\) since \((z_{1},\ldots,z_{\left|v\right|})\) is one. Further, since \((s_{1},\ldots,s_{\left|w\right|})\) respected the order of last letters in \(v\), \(\sigma\) does not change the order of \(z_{\left|s_{i}\right|}^{i}\), which thus still have last letters respecting the order of \(u\). It follows that \((t_{1},\ldots,t_{\left|w\right|})\) respects this order and is thus in \(\tilde{\Delta}(u)\). Furthermore, \(1\leq\left|s_{i}\right|\leq\left|t_{i}\right|\) and since \((z_{1},\ldots,z_{\left|v\right|})\in\tilde{\Delta}^{\left|v\right|}(u)\) respect the ordering of \(u\), we get that they respect the ordering of the subwords \(t_{i}\) (\((z_{1}^{i},\ldots,z_{\left|s_{i}\right|}^{i})\) is obviously a splitting of \(t_{i}\)). Thus, \((t(v,s,z),s,z)\in A\), which finishes the proof.
A simple corollary of this is that a rough path in a single coordinate sheet gives rise to a rough path on \(M\): Given an atlas \((\phi_{i},U_{i})\) and a rough path \(\mathbb{X}\) in \(U_{1}\), we set \(\mathbb{X}^{\phi_{i}}:=(\phi_{i}\circ\phi_{1}^{-1})_{*}\mathbb{X}\) on the respective interval. The following holds:
**Corollary 6.7**.: \((\mathbb{X}^{\phi_{i}},\phi_{i},U_{i})\) _is a rough path on \(M\)._
Proof.: The only thing one needs to check is the consistency property. To do so, it suffices to check that the push-forward factorizes. Since this result is not the main focus of this section, but simply a nice observation, we move the proof of the factorization of the push-forward to the appendix and show it in Proposition A.1.
To show that \(\mathbb{Y}\) is indeed a rough path on a manifold, we need one more technical result: We will need to calculate the general derivative of compositions \(\psi\circ\phi\) of smooth functions \(\phi,\psi\).
**Lemma 6.8**.: _Let \(w\) be any word and \(\phi,\psi\) be smooth functions. It holds that_
\[\partial_{w}(\psi\circ\phi)(x)=\sum_{1\leq\left|v\right|\leq\left|w\right|}( \partial_{v}\psi)(\phi(x))\left(\sum_{(s_{1},\ldots,s_{\left|v\right|})\in\tilde{\Delta}^{ \left|v\right|}(w)}\partial_{s_{1}}\phi^{v_{1}}(x)\ldots\partial_{s_{\left|v \right|}}\phi^{v_{\left|v\right|}}(x)\right)\,,\]
_where \(\partial_{w}=\partial_{w_{1}}\ldots\partial_{w_{\left|w\right|}}\)._
Proof.: For one-letter words \(w=1,\ldots,n\), this is just the chain rule. For longer words, we inductively get for \(i=1,\ldots,n\) and any word \(w\):
\[\begin{split}\partial_{iw}(\psi\circ\phi)(x)&=\sum_{1 \leq|v|\leq|w|}\sum_{j=1}^{n}(\partial_{jv}\psi)(\phi(x))\left(\sum_{(s_{1}, \ldots,s_{|v|})\in\tilde{\Delta}^{|v|}(w)}\partial_{i}\phi^{j}(x)\partial_{s_{1} }\phi^{v_{1}}(x)\ldots\partial_{s_{|v|}}\phi^{v_{|v|}}(x)\right)\\ &\quad+\sum_{1\leq|v|\leq|w|}(\partial_{v}\psi)(\phi(x))\left( \sum_{(s_{1},\ldots,s_{|v|})\in\tilde{\Delta}^{|v|}(w)}\sum_{j=1}^{|v|}\partial_ {s_{1}}\phi^{v_{1}}(x)\ldots\partial_{is_{j}}\phi^{v_{j}}(x)\ldots\partial_{s_ {|v|}}\phi^{v_{|v|}}(x)\right)\\ &=\sum_{1\leq|v|\leq|iw|}(\partial_{v}\psi)(\phi(x))\left(\sum_{( s_{1},\ldots,s_{|v|})\in\tilde{\Delta}^{|v|}(iw)}\partial_{s_{1}}\phi^{v_{1}}(x) \ldots\partial_{s_{|v|}}\phi^{v_{|v|}}(x)\right)\,.\end{split}\]
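As a sanity check of Lemma 6.8, the following sympy sketch (an illustrative script of our own; the choices of \(\phi\) and \(\psi\) are arbitrary) verifies the formula for the two-letter word \(w=(1,2)\), where the relevant ordered deshuffles are \(\tilde{\Delta}^{1}(12)=\{(12)\}\) and \(\tilde{\Delta}^{2}(12)=\{((1),(2))\}\).

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
phi = [sp.sin(x1) * x2, x1 + x2**2]   # phi : R^2 -> R^2 (arbitrary smooth choice)
psi = sp.exp(y1) * y2                 # psi : R^2 -> R   (arbitrary smooth choice)
sub = {y1: phi[0], y2: phi[1]}

# left-hand side: partial_w (psi o phi) for the word w = (1, 2)
lhs = sp.diff(psi.subs(sub), x1, x2)

# |v| = 1 terms of the right-hand side, splitting (s_1) = (12):
rhs = sum(sp.diff(psi, ya).subs(sub) * sp.diff(phi[a], x1, x2)
          for a, ya in enumerate((y1, y2)))
# |v| = 2 terms, splitting (s_1, s_2) = ((1), (2)):
rhs += sum(sp.diff(psi, ya, yb).subs(sub)
           * sp.diff(phi[a], x1) * sp.diff(phi[b], x2)
           for a, ya in enumerate((y1, y2))
           for b, yb in enumerate((y1, y2)))

assert sp.simplify(lhs - rhs) == 0
```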
We can now show our main result: That \(\mathbb{Y}\) is indeed a rough path on \(M\). The only non-trivial part of that is the consistency condition: Let \(\phi,\psi\) be two coordinate charts such that their supports on \(M\) overlap and let \(\nu:=\psi\circ\phi^{-1}\) be the coordinate change map. We claim that \(\mathbb{Y}_{s,t}^{\psi}=\nu_{*}\mathbb{Y}_{s,t}^{\phi}\) holds, which shows the following theorem:
**Theorem 6.9**.: _Given an atlas \(\Phi\), the collection \((\mathbb{Y}^{\phi})_{\phi\in\Phi}\) from (6.2) is a geometric rough path on \(M\)._
Proof.: Let us first calculate the push-forward \(\nu_{*}\mathbb{Y}^{\phi}:\)
\[\begin{split}\nu_{*}\mathbb{Y}^{\phi,w}&\approx \sum_{|u|\geq|w|}\sum_{(s_{1},\ldots,s_{|w|})\in\tilde{\Delta}^{|w|}(u)} \partial_{s_{1}}\nu^{w_{1}}\ldots\partial_{s_{|w|}}\nu^{w_{|w|}}\mathbb{Y}^{ \phi,u}\\ &\approx\sum_{|v|\geq|u|\geq|w|}\left(\sum_{(s_{1},\ldots,s_{|w|} )\in\tilde{\Delta}^{|w|}(u)}\partial_{s_{1}}\nu^{w_{1}}\ldots\partial_{s_{|w| }}\nu^{w_{|w|}}\right)\\ &\quad\left(\sum_{(z_{1},\ldots,z_{|u|})\in\tilde{\Delta}^{|u|}( v)}V_{z_{1}}\phi^{u_{1}}\ldots V_{z_{|u|}}\phi^{u_{|u|}}\right)\mathbb{X}^{v} \end{split} \tag{6.3}\]
On the flip side, we can express \(\psi\) as \(\psi=\nu\circ\phi\). Together with Lemma 6.8, this gives us
\[\begin{split}\mathbb{Y}^{\psi,w}&\approx\sum_{|v| \geq|w|}\sum_{(t_{1},\ldots,t_{|w|})\in\tilde{\Delta}^{|w|}(v)}V_{t_{1}}(\nu \circ\phi)^{w_{1}}\ldots V_{t_{|w|}}(\nu\circ\phi)^{w_{|w|}}\mathbb{X}^{v}\\ &\approx\sum_{|v|\geq|w|}\sum_{(t_{1},\ldots,t_{|w|})\in\tilde{ \Delta}^{|w|}(v)}\left(\sum_{|s_{1}|\leq|t_{1}|,\ldots,|s_{|w|}|\leq|t_{|w|}|} \partial_{s_{1}}\nu^{w_{1}}\ldots\partial_{s_{|w|}}\nu^{w_{|w|}}\right)\\ &\quad\left(\sum_{(z_{1}^{1},\ldots,z_{|s_{1}|}^{1})\in\tilde{ \Delta}^{|s_{1}|}(t_{1}),\ldots,(z_{1}^{|w|},\ldots,z_{|s_{|w|}|}^{|w|})\in \tilde{\Delta}^{|s_{|w|}|}(t_{|w|})}V_{z_{1}^{1}}\phi^{s_{1,1}}\ldots V_{z_{|s_{|w|}|}^{|w|}}\phi^{s_{|w|,|s_{|w|}|}}\right)\mathbb{X}^{v}\,.\end{split} \tag{6.4}\]
For fixed \(w,v\), (6.3) can be written as
\[\nu_{*}\mathbb{Y}^{\phi,w}=\sum_{|v|\geq|w|}\left(\sum_{(u,s,z)\in B}\partial_{s_{1} }\nu^{w_{1}}\ldots\partial_{s_{|w|}}\nu^{w_{|w|}}V_{z_{1}}\phi^{u_{1}}\ldots V_{ z_{|u|}}\phi^{u_{|u|}}\right)\mathbb{X}^{v}\]
and (6.4) can be written as
\[\mathbb{Y}^{\psi,w}=\sum_{|v|\geq|w|}\left(\sum_{(t,s,z)\in A}\partial_{s_{1}} \nu^{w_{1}}\dots\partial_{s_{|w|}}\nu^{w_{|w|}}V_{z_{1}^{1}}\phi^{s_{1},1}\dots V _{z_{|v|}^{|w|}}\phi^{s_{|w|},|s_{|w|}|}\right)\mathbb{X}^{v}\]
Thus, Lemma 6.6 gives us that (6.4) and (6.3) are the same, showing the claim.
### Discussion: General rough paths are geometric rough paths
Something an attentive reader might have noticed is that any rough path can be lifted to a geometric rough path over a larger index set: The theorem of Milnor-Moore gives us that any rough path lives in a universal enveloping algebra, which is given by \(U(P)=T(P)/I\) by Proposition 3.10. By the extension theorem [10], we can thus find a rough path lift \(\tilde{\mathbb{X}}\) into \(T(P)\), making it a non-homogeneous geometric rough path over \(P\).
_Remark 6.10_.: It should be noted that the extension theorem was only proven in [10] for the homogeneous case. However, the proof that any inhomogeneous rough path in \(U(P)\) can be extended to a rough path in \(T(P)\) works exactly the same.
Thus, using the last section, we can see the solution to any rough differential equation as a geometric rough path \(\mathbb{Y}\) simply by replacing
\[dY_{s}=V_{i}(Y_{s})d\mathbb{X}_{s}^{i}\]
with
\[dY_{s}=\sum_{p\in\tilde{P}}\mathcal{F}(p)(Y_{s})d\tilde{\mathbb{X}}_{s}^{p}\,,\]
where \(\tilde{P}\) is a basis of \(P^{N}\). However, the problem with this construction is that in general, \(\tilde{\mathbb{X}}\) is not unique, and \(\mathbb{Y}\) fundamentally relies on the lift. This can easily be seen by using a geometric rough path: In this case, \(\mathbb{X}\) should live in \(T(\mathbb{R}^{n})\), a much smaller space than \(T(P)\).
_Remark 6.11_.: We should note that in the special case of branched rough paths, this approach should be quite successful: For non-planarly branched rough paths [1], it is shown that \(\mathcal{H}_{GL}\) is isomorphic to \(T(B)\), where \(B\) is a subspace of \(\operatorname{span}(\mathcal{UT})=\mathcal{P}(\mathcal{H}_{GL})\). This shows that any non-planarly branched rough path can be seen as a geometric rough path over \(B\). And [1] shows in Section 6 that any planarly branched rough path is given by a geometric rough path over \(\operatorname{span}(\mathcal{OT})\). Thus, one can use the above results to make sense of solutions to branched RDEs as (geometric) rough paths on manifolds.
However, for general \(\mathcal{H}\) it does not seem likely that this approach should lead to a unique rough path \(\mathbb{Y}\) over \(Y\). Indeed, if \(\mathbb{X}\) is not unique, one easily sees that \(\mathbb{Y}\) can not be unique unless the vector fields \(V_{i},i=1,\dots,n\) interact with \(\mathbb{X}\) in such a way that all the additional information in \(\tilde{\mathbb{X}}\) vanishes. Indeed, if we have that \(\tilde{\mathbb{X}}_{s,t}^{u}\) is not unique for some word with grade \(|u|=N\) equal to the level of the rough path, (6.2) becomes for any word \(w\) with grade \(|w|=N\):
\[\mathbb{Y}_{s,t}^{\phi,w}=\sum_{|u|=|w|}V_{u_{1}}\phi^{w_{1}}\dots V_{u_{|u|} }\phi^{w_{|w|}}(Y_{s})\tilde{\mathbb{X}}_{s,t}^{u}\,.\]
By choosing appropriate vector fields \(V_{i}\), we can always ensure that \(\mathbb{Y}^{w}\) depends on the non-unique \(\tilde{\mathbb{X}}^{u}\).
## Appendix A The push-forward factorizes
The goal of this section is to show that, given a geometric rough path \(\mathbb{X}\) over \(\mathbb{R}^{d}\) and two smooth functions \(\phi:\mathbb{R}^{d}\to\mathbb{R}^{n}\), \(\psi:\mathbb{R}^{n}\to\mathbb{R}^{m}\), we have
\[\psi_{*}(\phi_{*}\mathbb{X})=(\psi\circ\phi)_{*}\mathbb{X}\,.\]
Before we start the proof, let us expand both sides of the equation: Let \(\mathbb{Y}=\psi_{*}(\phi_{*}\mathbb{X})\). By applying (6.1) twice, we get that
\[\begin{split}\mathbb{Y}_{s,t}^{w}&\approx\sum_{|v|\geq|w |}\left(\sum_{(s_{1},\ldots,s_{|w|})\in\tilde{\Delta}^{|w|}(v)}\partial_{s_{1} }\psi^{w_{1}}\ldots\partial_{s_{|w|}}\psi^{w_{|w|}}(\phi(X_{s}))\right)\phi_{*} \mathbb{X}_{s,t}^{v}\\ &\approx\sum_{|u|\geq|v|\geq|w|}\left(\sum_{(s_{1},\ldots,s_{|w|} )\in\tilde{\Delta}^{|w|}(v)}\partial_{s_{1}}\psi^{w_{1}}\ldots\partial_{s_{|w|} }\psi^{w_{|w|}}(\phi(X_{s}))\right)\\ &\qquad\left(\sum_{(z_{1},\ldots,z_{|v|})\in\tilde{\Delta}^{|v|}( u)}\partial_{z_{1}}\phi^{v_{1}}\ldots\partial_{z_{|v|}}\phi^{v_{|v|}}(X_{s}) \right)\mathbb{X}^{u}\end{split}\] (A.1)
holds for all words \(w\) and \(s\leq t\). On the other side, let \(\mathbb{Z}:=(\psi\circ\phi)_{*}\mathbb{X}\). One can then calculate that
\[\begin{split}\mathbb{Z}^{w}&\approx\sum_{|u|\geq|w |}\left(\sum_{(s_{1},\ldots,s_{|w|})\in\tilde{\Delta}^{|w|}(u)}\partial_{s_{1} }(\psi\circ\phi)^{w_{1}}\ldots\partial_{s_{|w|}}(\psi\circ\phi)^{w_{|w|}}(X_{s })\right)\mathbb{X}^{u}\\ &=\sum_{|u|\geq|w|}\bigg{(}\sum_{(t_{1},\ldots,t_{|w|})\in\tilde{ \Delta}^{|w|}(u)}\sum_{|s_{1}|\leq|t_{1}|,\ldots,|s_{|w|}|\leq|t_{|w|}|} \partial_{s_{1}}\psi^{w_{1}}\ldots\partial_{s_{|w|}}\psi^{w_{|w|}}(\phi(X_{s}) )\\ &\qquad\qquad\sum_{(z_{1}^{1},\ldots,z_{|s_{1}|}^{1})\in\tilde{ \Delta}^{|s_{1}|}(t_{1}),\ldots,(z_{1}^{|w|},\ldots,z_{|s_{|w|}|}^{|w|})\in \tilde{\Delta}^{|s_{|w|}|}(t_{|w|})}\partial_{z_{1}^{1}}\phi^{s_{1,1}}\ldots \partial_{z_{|s_{|w|}|}^{|w|}}\phi^{s_{|w|},|s_{|w|}|}(X_{s})\bigg{)}\mathbb{X} ^{u}\,.\end{split}\] (A.2)
where we used Lemma 6.8. We claim that both are the same:
**Proposition A.1**.: _The push-forward factorizes. That is, for any geometric rough path and \(\psi,\phi\) as above, we have that \(\psi_{*}(\phi_{*}\mathbb{X})=(\psi\circ\phi)_{*}\mathbb{X}\)._
Proof.: We need to show that \(\mathbb{Z}=\mathbb{Y}\). To do so, it suffices to show that the right-hand sides of (A.2) and (A.1) coincide. Thus, we need to show that for any words \(w,u\), we have

\[\begin{split}&\sum_{(t_{1},\ldots,t_{|w|})\in\tilde{\Delta}^{|w|}(u)}\sum_{|s_{1}|\leq|t_{1}|,\ldots,|s_{|w|}|\leq|t_{|w|}|}\partial_{s_{1}}\psi^{w_{1}}\ldots\partial_{s_{|w|}}\psi^{w_{|w|}}(\phi(X_{s}))\\ &\qquad\sum_{(z_{1}^{1},\ldots,z_{|s_{1}|}^{1})\in\tilde{\Delta}^{|s_{1}|}(t_{1}),\ldots,(z_{1}^{|w|},\ldots,z_{|s_{|w|}|}^{|w|})\in\tilde{\Delta}^{|s_{|w|}|}(t_{|w|})}\partial_{z_{1}^{1}}\phi^{s_{1,1}}\ldots\partial_{z_{|s_{|w|}|}^{|w|}}\phi^{s_{|w|,|s_{|w|}|}}(X_{s})\\ &=\sum_{|u|\geq|v|\geq|w|}\sum_{(s_{1},\ldots,s_{|w|})\in\tilde{\Delta}^{|w|}(v)}\partial_{s_{1}}\psi^{w_{1}}\ldots\partial_{s_{|w|}}\psi^{w_{|w|}}(\phi(X_{s}))\sum_{(z_{1},\ldots,z_{|v|})\in\tilde{\Delta}^{|v|}(u)}\partial_{z_{1}}\phi^{v_{1}}\ldots\partial_{z_{|v|}}\phi^{v_{|v|}}(X_{s})\,.\end{split}\]

By Lemma 6.6, the index sets appearing on the left- and right-hand side are exactly the sets \(A\) and \(B\), and the bijection \((t,s,z)\mapsto(v(t,s,z),s,z)\) matches the summands termwise, which finishes the proof.
\[\begin{split}&\sum_{(t_{1},\ldots,t_{|w|})\in\tilde{\Delta}^{|w|}(u )}\sum_{|s_{1}|\leq|t_{1}|,\ldots,|s_{|w|}|\leq|t_{|w|}|}\partial_{s_{1}}\psi^{ w_{1}}\ldots\partial_{s_{|w|}}\psi^{w_{|w|}}(\phi(X_{s}))\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad |
2308.12402 | Skew-convex function rings and evaluation of skew rational functions | The product formula for evaluating products of skew polynomials is used to
construct a class of rings. As an application, we present a method of
evaluating quotients of skew polynomials. | Masood Aryapoor | 2023-08-23T19:49:41Z | http://arxiv.org/abs/2308.12402v1 | # Skew-convex function rings and evaluation of skew rational functions
###### Abstract
The product formula for evaluating products of skew polynomials is used to construct a class of rings. As an application, we present a method of evaluating quotients of skew polynomials.
## 1 Introduction
We present the notion of the "skew product" which is a particular binary operation on sets of functions with values in a fixed skew field (see Definition 2.1). The concept of the skew product is motivated by the "product formula" for evaluating skew polynomials introduced by Lam and Leroy [6, Theorem 2.7]. We show that the skew product gives rise to near-ring structures. Restricting our attention to the class of "skew-convex" functions, we arrive at the notion of skew-convex function rings (see Definition 2.2 and Theorem 2.4). It turns out that skew-convex function rings are closely related to endomorphism rings (see Section 2.2). The method of evaluating skew polynomials has found interesting applications (see, for example, [2, 6, 8, 9]). Using skew-convex function rings, we extend the method of evaluating skew polynomials to a method of evaluating quotients of skew polynomials.
The paper is organized as follows. Subsection 2.1 introduces the notions of the skew product and skew-convex functions, and gives their basic properties. Subsection 2.2 deals with some general structural results. In Subsection 2.3, we study skew-invertible functions, that is, functions that are invertible with respect to the skew product. Section 3 deals with evaluating quotients of skew polynomials.
## 2 Skew-convex function rings
In [6], the authors presented a method of evaluation of skew polynomials over a skew field using which skew polynomials can naturally be considered as
functions on the ground skew field. It turns out that the value of the product of two skew polynomials at a given point may not be equal to the product of the values of the skew polynomials at the same point. The correct formula for evaluating products of skew polynomials, called the product formula, is given in [6, Theorem 2.7]. The product formula can be regarded as a binary operation on functions which we shall call the _skew product_ (see Formula 2.1). This section introduces the notion of the skew product, and gives some general facts regarding the skew product.
### Skew-convex function rings
Let \(K\) be a skew field and \(X\) be a (nonempty) set on which the multiplicative group \(K^{*}:=K\setminus\{0\}\) acts (on the left). The action of \(a\in K^{*}\) on \(x\in X\) is denoted by \({}^{a}x\). We will freely use the standard terminology of Group Theory. In particular, we use the following notions: An _invariant_ subset of \(X\) is a set \(Y\subset X\) such that \({}^{a}y\in Y\) for all \(a\in K^{*}\) and \(y\in Y\); An _orbit_ is a nonempty invariant subset of \(X\) which is minimal with respect to inclusion. We denote the set of all functions \(f\colon X\to K\) by \(\mathcal{F}(X)\). By abuse of notation, a constant function in \(\mathcal{F}(X)\), whose value is \(a\in K\), is simply denoted by \(a\). Given functions \(f,g\colon X\to K\), we let \(f+g\) denote the pointwise addition of the functions \(f\) and \(g\).
**Definition 2.1**.: _The left skew product of functions \(f,g\in\mathcal{F}(X)\) is a function \(f\diamond g:X\to K\) defined as follows_
\[(f\diamond g)(x)=\begin{cases}f\left(\,{}^{g(x)}x\,\right)g(x)&\text{if }g(x) \neq 0,\\ 0&\text{if }g(x)=0.\end{cases} \tag{2.1}\]
The _right skew product_ is defined using the formula
\[(f\diamond_{r}g)(x)=\begin{cases}f(x)g\left(\,{}^{f(x)^{-1}}x\,\right)&\text{ if }f(x)\neq 0,\\ 0&\text{if }f(x)=0.\end{cases} \tag{2.2}\]
In this paper, we will exclusively work with the left skew product. Therefore, we shall drop the adjective "left". We leave it to the reader to formulate and prove similar results for the right skew product.
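For concreteness, the following minimal Python sketch (the helper names `act` and `skew_prod` are our own choices) implements Definition 2.1 for the left regular action \({}^{b}x=bx\) of \(K^{*}\) on \(X=K^{*}\), taking the commutative skew field \(K=\mathbb{Q}\), and checks the associativity of \(\diamond\) (part (3) of Lemma 2.1 below) at a sample point.

```python
from fractions import Fraction

def act(b, x):
    # left regular action of K* on K*:  ^b x = b * x
    return b * x

def skew_prod(f, g):
    """Left skew product (f <> g)(x) = f(^{g(x)} x) g(x), with
    (f <> g)(x) = 0 whenever g(x) = 0  (Definition 2.1)."""
    def fg(x):
        gx = g(x)
        return f(act(gx, x)) * gx if gx != 0 else Fraction(0)
    return fg

f = lambda x: x + 1
g = lambda x: 2 * x
h = lambda x: x - 3

x0 = Fraction(5)
# associativity (f <> g) <> h = f <> (g <> h) at the point x0
assert skew_prod(skew_prod(f, g), h)(x0) == skew_prod(f, skew_prod(g, h))(x0)
```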
It is easy to see that \((a\diamond f)(x)=af(x)\), for every \(a\in K,x\in X\). We shall henceforth denote \(a\diamond f\) by \(af\). Note that \((a,f)\mapsto af\) turns \(\mathcal{F}(X)\) into a left \(K\)-vector space. In the following lemma, the proof of which is straightforward, we collect some properties of the skew product.
**Lemma 2.1**.: _Let \(f,h,g\colon X\to K\) be arbitrary functions. Then: (1) The constant function \(1\) is a unit for \(\diamond\), that is, \(f=f\diamond 1=1\diamond f\). (2) \((f+g)\diamond h=f\diamond h+g\diamond h\), that is, the right distributive law (with respect to pointwise addition) holds for the skew product. (3) \((f\diamond g)\diamond h=f\diamond(g\diamond h)\), that is, \(\diamond\) is associative._
It follows from this lemma that the set \(\mathcal{F}(X)\) equipped with pointwise addition and the skew product is a structure known as "right near-ring" in the literature (see [12]). We note that the skew product may not be left distributive with respect to pointwise addition. However, the left distributive law holds for a class of functions described below.
**Definition 2.2**.: _A function \(f\colon X\to K\) is called skew convex if_
\[f\diamond(a+b)=f\diamond a+f\diamond b,\text{ for all }a,b\in K.\]
The set of all skew-convex functions \(f\colon X\to K\) is denoted by \(\mathcal{S}(X)\). Any constant function belongs to \(\mathcal{S}(X)\) since \(a\diamond b=ab\) for all \(a,b\in K\). In particular, \(K\) is a subring of \(\mathcal{S}(X)\). More generally, we have the following result. The easy proof is left to the reader.
**Proposition 2.2**.: _A function \(f\colon X\to K\) which is constant on every orbit in \(X\), is skew convex._
The following lemma gives an important property of skew-convex functions.
**Lemma 2.3**.: _Let \(h\colon X\to K\) be given. The condition_
\[h\diamond(f+g)=h\diamond f+h\diamond g,\]
_holds for all functions \(f,g\colon X\to K\) if and only if \(h\) is skew convex._
Proof.: The result follows from the identity
\[(h\diamond f)(x)=(h\diamond f(x))(x),\text{ for all }f\in\mathcal{F}(X)\text{ and }x\in X.\]
As a consequence of this lemma, we have the following result.
**Theorem 2.4**.: _Equipped with the left skew product, the additive group \(\mathcal{S}(X)\) is a ring with identity._
Proof.: The result follows from Lemma 2.3 and the general fact that in any right near-ring \(R\), the set
\[\{r\in R\,|\,r(s_{1}+s_{2})=rs_{1}+rs_{2}\text{ for all }s_{1},s_{2}\in R\},\]
is a ring.
We call \(\mathcal{S}(X)\), equipped with pointwise addition and the skew product, _the ring of skew-convex functions_ on \(X\) determined by the action of \(K^{*}\) on \(X\). We now give some examples of skew-convex function rings.
**Example 2.1**.: _If the action of \(K^{*}\) on \(X\) is trivial, the ring \(\mathcal{S}(X)\) is just the familiar ring of all functions \(f\colon X\to K\) equipped with pointwise addition and pointwise multiplication._
The following example justifies the terminology and explains the link between skew-convex function rings and skew polynomial rings. For an introduction to skew polynomial rings, we refer the reader to [3].
**Example 2.2**.: _Let \(\sigma\colon K\to K\) be an endomorphism and \(\delta\colon K\to K\) be a \(\sigma\)-derivation. Let \(K[T;\sigma,\delta]\) denote the ring of skew polynomials determined by \(\sigma\) and \(\delta\). Every nonzero element of \(K[T;\sigma,\delta]\) can uniquely be written as \(\sum_{m=0}^{n}a_{m}T^{m}\) where \(a_{m}\in K\) with \(a_{n}\neq 0\). The identity \(Ta=\sigma(a)T+\delta(a)\), where \(a\in K\), holds in \(K[T;\sigma,\delta]\). Following [6], we consider the \((\sigma,\delta)\)-action of \(K^{*}\) on \(K\), that is,_
\[{}^{b}a=\sigma(b)ab^{-1}+\delta(b)b^{-1}. \tag{2.3}\]
_The ring of skew-convex functions determined by the \((\sigma,\delta)\)-action is denoted by \(K[\sigma,\delta]\). One can verify that there exists a unique ring homomorphism_
\[K[T;\sigma,\delta]\to K[\sigma,\delta],\]
_which sends each \(a\in K\) to the constant function \(a\), and \(T\) to the identity function \(id\colon K\to K\). In particular, every skew polynomial \(P(T)\in K[T;\sigma,\delta]\) can, under this homomorphism, be considered as a skew-convex function \(P\colon K\to K\). The reader can verify that the value \(P(a)\) of \(P\) at \(a\in K\) coincides with the evaluation map introduced in [6], that is, \(P(a)\) is the unique element of \(K\) for which we have_
\[P(T)-P(a)\in K[T;\sigma,\delta](T-a).\]
_Let us remark that the product formula for evaluating products of skew polynomials is an important consequence of the existence of the above ring homomorphism._
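To make Example 2.2 concrete, here is a small numerical sketch over \(K=\mathbb{C}\) with \(\sigma\) the complex conjugation and \(\delta=0\); the helper names `ev`, `conj` and `mul` are our own. The evaluation uses the standard recursion \(N_{0}=1\), \(N_{m+1}(a)=\sigma(N_{m}(a))a+\delta(N_{m}(a))\), so that \(P(a)=\sum_{m}a_{m}N_{m}(a)\), and the script checks the product formula \((PQ)(a)=P\left({}^{Q(a)}a\right)Q(a)\) at a sample point.

```python
# Skew polynomial evaluation over K = C with sigma = complex conjugation
# and delta = 0; a polynomial sum_m a_m T^m is stored as [a_0, a_1, ...].
sigma = lambda z: z.conjugate()

def ev(P, a):
    # P(a) = sum_m a_m N_m(a),  N_0 = 1,  N_{m+1}(a) = sigma(N_m(a)) * a
    N, val = 1, 0
    for coeff in P:
        val += coeff * N
        N = sigma(N) * a
    return val

def conj(b, a):
    # the (sigma, delta)-action (2.3) with delta = 0:  ^b a = sigma(b) a b^{-1}
    return sigma(b) * a / b

def mul(P, Q):
    # multiplication in C[T; sigma], using  T z = sigma(z) T
    R = [0] * (len(P) + len(Q) - 1)
    for i, p in enumerate(P):
        for j, q in enumerate(Q):
            s = q
            for _ in range(i):   # move q past T^i
                s = sigma(s)
            R[i + j] += p * s
    return R

P, Q = [1, 2, 1], [3, 1]         # P = 1 + 2T + T^2,  Q = 3 + T
a = 1 + 2j
Qa = ev(Q, a)                    # Q(a) != 0 here
# product formula:  (PQ)(a) = P(^{Q(a)} a) * Q(a)
assert abs(ev(mul(P, Q), a) - ev(P, conj(Qa, a)) * Qa) < 1e-12
```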
**Example 2.3**.: _Consider the left regular action of \(K^{*}\) on \(K^{*}\), i.e., \({}^{b}a=ba\). The reader can easily verify that a function \(f:K^{*}\to K\) is skew-convex (with respect to the left regular action) iff there exists a group homomorphism \(\phi_{f}:(K,+)\to(K,+)\) such that \(f(x)=\phi_{f}(x)x^{-1}\) for all \(x\in K^{*}\). It is straightforward to check that the assignment \(f\mapsto\phi_{f}\) establishes an isomorphism between the ring of skew-convex functions on \(K^{*}\) and the endomorphism ring \(End(K,+)\). For a more general result, see Proposition 2.7._
### Structure of skew-convex function rings
We begin with a result regarding homomorphisms between skew-convex function rings for which we need some preliminaries. Let \(K\) be a skew field,
and \(X,Y\) be (nonempty) sets on which \(K^{*}\) acts (on the left). For a map \(\phi\colon X\to Y\), let \(\phi^{*}\colon\mathcal{F}(Y)\to\mathcal{F}(X)\) denote the pullback map \(\phi^{*}(f)=f\circ\phi\). Recall that a map \(\phi\colon X\to Y\) is called _action-preserving_ if \(\phi(^{a}x)={}^{a}\phi(x)\) for all \(a\in K^{*}\) and \(x\in X\).
**Proposition 2.5**.: _Assume that \(\phi\colon X\to Y\) is action-preserving. Then, for any function \(f\in\mathcal{S}(Y)\), we have \(\phi^{*}(f)\in\mathcal{S}(X)\). Moreover, the map \(\phi^{*}\colon\mathcal{S}(Y)\to\mathcal{S}(X)\) is a homomorphism of rings._
Proof.: The fact that \(\phi^{*}(f)\in\mathcal{S}(X)\), for any \(f\in\mathcal{S}(Y)\), follows from the identity
\[(f\circ\phi)\diamond a=(f\diamond a)\circ\phi,\text{ where }a\in K.\]
The rest of the proof is straightforward.
The following result reduces the problem of classifying skew-convex function rings to the case of transitive actions, i.e., actions with a single orbit.
**Proposition 2.6**.: _Let \(X_{i},i\in I,\) be the family of all orbits in \(X\). Then, the ring \(\mathcal{S}(X)\) is isomorphic to the direct product of the rings \(\mathcal{S}(X_{i}),i\in I\)._
Proof.: For any function \(f:X\to K\), let \(f_{i}\) denote the restriction of \(f\) to the set \(X_{i}\). It is easy to check that the assignment \(f\mapsto(f_{i})_{i\in I}\) gives an isomorphism between \(\mathcal{S}(X)\) and the direct product \(\prod_{i\in I}\mathcal{S}(X_{i})\).
Any transitive action of \(K^{*}\) is isomorphic to a left regular action in the sense that it is of the form
\[{}^{a}(bG)=(ab)G,\]
where \(G\) is a subgroup of \(K^{*}\) and \(K^{*}/G=\{bG|\,b\in K^{*}\}\) is the set of all left cosets of \(G\) in \(K^{*}\). A map \(\phi:K\to K\) is called _right \(G\)-linear_ if \(\phi(a-b)=\phi(a)-\phi(b)\) and \(\phi(ac)=\phi(a)c\), for all \(a,b\in K\) and \(c\in G\). The following proposition gives a characterization of skew-convex function rings for the case of transitive actions.
**Proposition 2.7**.: _Let \(G\) be a subgroup of \(K^{*}\) and consider the left regular action of \(K^{*}\) on \(K^{*}/G\). Then: (1) A function \(f:K^{*}/G\to K\) is skew convex iff there exists a (unique) right \(G\)-linear map \(\phi_{f}:K\to K\) such that \(f(xG)=\phi_{f}(x)x^{-1}\) for all \(x\in K^{*}\). (2) The assignment \(f\mapsto\phi_{f}\) establishes an isomorphism between \(\mathcal{S}(K^{*}/G)\) and the endomorphism ring \(End(K_{G})\) of right \(G\)-linear operators on \(K\)._
Proof.: Given a function \(f:K^{*}/G\to K\), we define \(\phi_{f}:K\to K\) as follows
\[\phi_{f}(x)=\begin{cases}f(xG)x&\text{if }x\neq 0,\\ 0&\text{if }x=0.\end{cases}\]
It is straightforward to check that \(f\) is skew-convex iff \(\phi_{f}\) is right \(G\)-linear. The easy proof of (2) is left to the reader.
We end this section with a remark.
**Remark 2.1**.: _Keeping the notations as in Example 2.2, let \(a\in K\) be fixed. The \((\sigma,\delta)\)-conjugacy class of \(a\), that is, the set_
\[\Delta^{\sigma,\delta}(a):=\{\,{}^{b}a\,|\,b\in K^{*}\},\]
_is an invariant subset of \(K\), and the \((\sigma,\delta)\)-action is transitive on \(\Delta^{\sigma,\delta}(a)\). One can show that the set_
\[C^{\sigma,\delta}(a):=\{b\in K^{*}|\,{}^{b}a=a\}\cup\{0\},\]
_is a skew subfield of \(K\) (see Lemma 3.2 in [6]). An application of Proposition 2.7 reveals that the ring of skew-convex functions on \(\Delta^{\sigma,\delta}(a)\) is isomorphic to the ring of right \(C^{\sigma,\delta}(a)\)-linear operators on \(K\). This isomorphism has implicitly been used in the proof of [6, Proposition 3.16]. In particular, we obtain a ring homomorphism_
\[\lambda:K[T;\sigma,\delta]\to End(K_{C^{\sigma,\delta}(a)}).\]
_This ring homomorphism coincides with the homomorphism \(\Lambda_{a}\) introduced and studied in [11] (see Corollary 1.13 in the reference). We also note that \(\lambda(P(T))=\lambda_{P,a}\) where \(\lambda_{P,a}\) is the so-called \(\lambda\)-transform defined in Definition 4.10 of [8]._
### Skew-invertible functions
As before, let \(K\) be a skew field and \(X\) be a set on which \(K^{*}\) acts. A function \(f\in\mathcal{F}(X)\) is called _skew invertible_ if it is invertible with respect to the skew product, in which case, the inverse of \(f\) with respect to the skew product is called its _skew inverse_ and denoted by \(f^{\langle-1\rangle}\).
**Lemma 2.8**.: _If \(f\in\mathcal{S}(X)\) is skew invertible, then the skew inverse of \(f\) belongs to \(\mathcal{S}(X)\)._
Proof.: By Lemma 2.3, we have
\[\mathcal{S}(X)=\{f\in\mathcal{F}(X)\,|\,f\diamond(g+h)=f\diamond g+f\diamond h \text{ for all }g,h\in\mathcal{F}(X)\}.\]
The result follows from this identification and the following general fact whose proof is left to the reader: Let \(R\) be a right near-ring and consider the ring
\[R^{\prime}=\{r\in R\,|\,r(s_{1}+s_{2})=rs_{1}+rs_{2}\text{ for all }s_{1},s_{2}\in R\}.\]
If \(r\in R^{\prime}\) is invertible in \(R\), then its inverse belongs to \(R^{\prime}\).
Next, we give a characterization of skew-invertible functions.
**Lemma 2.9**.: _(1) Let \(f\in\mathcal{F}(X)\). There exists \(g\in\mathcal{F}(X)\) such that \(f\diamond g=1\) if and only if for every \(x\in X\), there exists some \(a\in K^{*}\) such that \(f({}^{a}x)=a^{-1}\). (2) Let \(g\in\mathcal{F}(X)\). There exists \(f\in\mathcal{F}(X)\) such that \(f\diamond g=1\) if and only if \(g(X)\subset K^{*}\), and the map \(x\mapsto{}^{g(x)}x\) is 1-1. (3) A function \(f\in\mathcal{F}(X)\) is skew invertible if and only if \(f(X)\subset K^{*}\) and the assignment \(x\mapsto{}^{f(x)}x\) establishes a bijection from \(X\) onto \(X\)._
Proof.: (1) The proof is straightforward.
(2) The proof of the "only if" direction is left to the reader. To prove the "if" direction, let \(g\) satisfy the stated properties. We define a map \(f\colon X\to K\) as follows: If \(y={}^{g(x)}x\) for some \(x\in X\), we set \(f(y)=g(x)^{-1}\). Otherwise, we set \(f(y)=a_{0}\), where \(a_{0}\in K\) is a fixed element. It is easy to see that \(f\) is well-defined and \(f\diamond g=1\).
(3) Assume that \(f\) is skew invertible and let \(g\) be its skew inverse. By (2), \(f\) is nonzero on \(X\) and \(x\mapsto{}^{f(x)}x\) is 1-1. The fact that \(x\mapsto{}^{f(x)}x\) is onto follows from the following identity
\[{}^{f({}^{g(y)}y)}\left({}^{g(y)}y\right)=y.\]
Conversely, assume that \(f\) satisfies the stated properties. By Part (2), there exists \(g\in\mathcal{F}(X)\) such that \(g\diamond f=1\). We need only show that \(f\diamond g=1\). Given an arbitrary element \(x\in X\), we can choose \(y\in X\) such that \(x={}^{f(y)}y\). We have
\[(g\diamond f)(y)=1\implies g({}^{f(y)}y)f(y)=1\implies g(x)f(y)=1.\]
Therefore, we have \({}^{g(x)}x={}^{g(x)f(y)}y=y\), from which it follows that
\[g(x)f(y)=1\implies f(y)g(x)=1\implies f({}^{g(x)}x)g(x)=1\implies(f\diamond g )(x)=1.\]
Since \(x\in X\) was arbitrary, we conclude that \(f\diamond g=1\).
**Remark 2.2**.: _The map \(x\mapsto{}^{f(x)}x\) has also been used in the context of W-polynomials (see the definition of the \(\Phi\)-transform in [8, Definition 4.5])._
Regarding skew-invertible elements of \(\mathcal{S}(X)\), we have the following characterization.
**Proposition 2.10**.: _A skew-convex function \(f:X\to K\) is skew invertible if and only if \(f(X)\subset K^{*}\) and for any \(x\in X\), there exists some \(a\in K^{*}\) such that \(f({}^{a}x)=a^{-1}\)._
Proof.: The "only if" direction follows from Part 1 of Lemma 2.9. To prove the other direction, we use Part (2) of Lemma 2.9. Therefore, we need only show that if \({}^{f(x)}x={}^{f(y)}y\), then \(x=y\). Let \({}^{f(x)}x={}^{f(y)}y\) for some \(x,y\in X\). Then \(y\) and \(x\) are in the same orbit, implying that there exists \(a\in K^{*}\) such that \(y={}^{a}x\). So,
\[{}^{f(x)}x={}^{f(y)}y={}^{f\left({}^{a}x\right)a}x.\]
It follows that \(f(x)b=f(^{a}x)a\) for some \(b\in K^{*}\) satisfying \({}^{b}x=x\). Note that \(f(^{-a}x)=f(^{a}x)\) because
\[0=f\diamond(a+(-a))=f\diamond a+f\diamond(-a)\implies f\diamond(-a)=-f\diamond a.\]
Therefore, we can write
\[0=f(^{b}x)b-f(^{-a}x)a=(f\diamond b+f\diamond(-a))(x)=(f\diamond(b-a))\left(x \right).\]
It follows that \(b=a\), since \(f(X)\subset K^{*}\). Thus \(x={}^{b}x={}^{a}x=y\).
## 3 Evaluation of skew rational functions
The method of evaluating skew polynomials introduced in [6] is a natural generalization of evaluation of polynomials in the commutative setting. Therefore, it is of importance to extend the method to skew rational functions, that is, quotients of skew polynomials. In this section, using the material developed in the previous section, we present a method of evaluating skew rational functions. Let us fix some notations. Throughout this section, \(K\) is a skew field, \(\sigma\colon K\to K\) is an endomorphism and \(\delta\colon K\to K\) is a \(\sigma\)-derivation. We will work with the \((\sigma,\delta)\)-action (see Example 2.2). The \((\sigma,\delta)\)-conjugacy class of an element \(a\in K\) is denoted by \(\Delta^{\sigma,\delta}(a)\). If \(a,b\in K\) are \((\sigma,\delta)\)-conjugate, we write \(a\sim b\). The notation \(a\nsim b\) means \(b\notin\Delta^{\sigma,\delta}(a)\).
### Basic definitions
We begin with some general facts. For more details, see [3]. It is known that \(K[T;\sigma,\delta]\) is a left PID, and therefore, a left Ore domain. Its field of fractions (called the skew rational function field) is denoted by \(K(T;\sigma,\delta)\). Every skew rational function can be represented as a left quotient \(P(T)^{-1}Q(T)\) for some skew polynomials \(0\neq P(T),Q(T)\), where \(P(T)\) is called the denominator and \(Q(T)\) is called the numerator of the quotient. Although such a representation is not unique for a given \(f(T)\in K(T;\sigma,\delta)\), there exists a unique representation \(P(T)^{-1}Q(T)\) of \(f(T)\) such that \(P(T)\) is monic and has the least possible degree among all representations of \(f(T)\). This representation will be called the _minimal representation_ of \(f(T)\).
After the above preliminaries, we shall describe a method of evaluating skew rational functions. Since in the commutative setting, a rational function is not defined at the roots of its denominator, some care is needed in evaluating skew rational functions. Therefore, we introduce the following definition.
**Definition 3.1**.: _A skew rational function \(f(T)\in K(T;\sigma,\delta)\) is said to be defined at \(a\in K\) if the denominator of the minimal representation of \(f(T)\) is skew-invertible as a function on the \((\sigma,\delta)\)-conjugacy class \(\Delta^{\sigma,\delta}(a)\) of \(a\)._
We now define the evaluation of a skew rational function at elements of \(K\). Recall that every skew polynomial \(P(T)\) can be regarded as a skew-convex function on any invariant set \(A\subset K\) (see Example 2.2). By abuse of notation, the skew-convex function associated to \(P(T)\) is denoted by \(P:A\to K\).
**Definition 3.2**.: _Let \(f(T)\in K(T;\sigma,\delta)\) have the minimal representation \(P(T)^{-1}Q(T)\). Assume that \(f(T)\) is defined at \(a\in K\). The value of \(f(T)\) at \(a\) (denoted by \(f(a)\)) is defined to be_
\[f(a):=\left(P^{\langle-1\rangle}\diamond Q\right)(a),\]
_where \(P^{\langle-1\rangle}:\Delta^{\sigma,\delta}(a)\to K\) is the skew-inverse of \(P:\Delta^{\sigma,\delta}(a)\to K\)._
It is convenient to introduce one more definition.
**Definition 3.3**.: _Let \(f(T)\in K(T;\sigma,\delta)\). The set of all \(a\in K\), at which \(f(T)\) is defined, is denoted by \(dom(f)\), and called the domain of \(f(T)\). The function sending \(a\in dom(f)\) to \(f(a)\) is denoted by \(f:dom(f)\to K\)._
It follows from the definitions that if \(f(T)\in K(T;\sigma,\delta)\) is defined at some \(a\in K\), then it is also defined at all \(c\in\Delta^{\sigma,\delta}(a)\). Therefore, \(dom(f)\) is a \((\sigma,\delta)\)-invariant subset of \(K\), for all \(f(T)\in K(T;\sigma,\delta)\).
In general, it is difficult to find the domain of an arbitrary skew rational function. Here, we present a partial result. Recall that a skew polynomial \(P(T)\in K[T;\sigma,\delta]\) is called semi-invariant if \(P(T)K\subset KP(T)\). For a detailed account of semi-invariant skew polynomials, we refer the reader to [10].
**Proposition 3.1**.: _Assume that \(\sigma:K\to K\) is an automorphism. Let \(f(T)\in K(T;\sigma,\delta)\) have the minimal representation \(P(T)^{-1}Q(T)\) such that \(P(T)\) is semi-invariant and of degree \(n\). Then, for any \(a\in K\), \(f(T)\) is defined at \(a\) iff \(P(a)\neq 0\). Furthermore, we have_
\[f(a)=\sigma^{-n}\left(P\left({}^{Q(a)}a\right)\right)^{-1}Q(a),\ \ \text{for all}\ \,a\in K\,\ \text{such that}\ \,P(a)\neq 0.\]
Proof.: Let \(n\) be the degree of \(P(T)\) and fix \(a\in K\). For every \(x\in K^{*}\), we have \(P(T)x=\sigma^{n}(x)P(T)\) (see [10, Lemma 2.2]). Evaluating at \(a\), we obtain \(P({}^{x}a)x=\sigma^{n}(x)P(a)\), using which one can prove the proposition. The details are left to the reader.
Let us give an example illustrating the proposition.
**Example 3.1**.: _Let \(\sigma\) be an involution and \(\delta=0\). Let \(b\in K\) belong to the center of \(K\). Then, the polynomial \(P(T)=T^{2}+b\) is semi-invariant (see also Example 2.5.(a) in [10]). By Proposition 3.1, \(f(T)=P(T)^{-1}\) is defined at \(a\in K\) iff \(\sigma(a)a+b\neq 0\). Moreover, we have_
\[f(a)=(\sigma(a)a+b)^{-1},\ \ \text{for all}\ \,a\in K\,\ \text{with}\ \,\sigma(a)a+b\neq 0.\]
We end this part with some remarks.
**Remark 3.1**.: _Our method of evaluation of skew rational functions has some features not present in the commutative setting. One such feature is that it may happen that \(dom(f)\) is empty for every skew rational function not in \(K[T;\sigma,\delta]\). There exist examples of \(K[T;\sigma,\delta]\) in which every irreducible skew polynomial is of degree one, and \(\Delta^{\sigma,\delta}(a)=K\) for all \(a\in K\). Examples are universal differential fields discovered by Kolchin ([5]). For such examples, we have \(dom(f)=\emptyset\), for every skew rational function \(f\) not in \(K[T;\sigma,\delta]\)._
The following remarks deal with some equivalent formulations of the above definitions.
**Remark 3.2**.: _Working with the notations of Corollary 1.13 in [11], we can see that \(f(T)=P(T)^{-1}Q(T)\) is defined at \(a\in K\) iff \(P(T_{a}):V\to V\) is a bijection. Moreover, the value of \(f(T)\) at \(a\in K\) is_
\[f(a)=\left(P(T_{a})^{-1}\circ Q(T_{a})\right)(1)=P(T_{a})^{-1}(Q(a)).\]
_This approach has the advantage that one can define evaluation of skew rational functions over arbitrary rings._
**Remark 3.3**.: _Using the \(\lambda\)-transform (see Definition 4.10 in [8]), we can see that \(f(T)=P(T)^{-1}Q(T)\) is defined at \(a\in K\) iff \(\lambda_{P,a}:K\to K\) is a bijection._
### Evaluation of the skew rational function \((T-b)^{-1}\)
This part deals with evaluation of skew rational functions of the form \((T-b)^{-1}\), where \(b\in K\).
**Proposition 3.2**.: _Let \(a,b\in K\). Then, (1) \((T-b)^{-1}\) is defined at \(a\) iff \(b\nsim a\) and the \((\sigma,\delta)\)-metro equation_
\[\sigma(x)c+\delta(x)-bx=1,\]
_has a solution \(x\in K\), for all \(c\in\Delta^{\sigma,\delta}(a)\). (2) If \((T-b)^{-1}\) is defined at \(a\), then the value of \((T-b)^{-1}\) at \(a\) is the (unique) solution \(x\) of the \((\sigma,\delta)\)-metro equation_
\[\sigma(x)a+\delta(x)-bx=1.\]
Proof.: (1) Using Lemma 2.9 and Proposition 2.10, we see that \(T-b\) is skew invertible as an element of \(\mathcal{S}(\Delta^{\sigma,\delta}(a))\) if and only if (1) \(T-b\) does not have a root in \(\Delta^{\sigma,\delta}(a)\), and (2) for any \(c\in\Delta^{\sigma,\delta}(a)\), there exists \(x\in K^{*}\) such that \((T-b)(^{x}c)=x^{-1}\). The first condition is equivalent to \(b\notin\Delta^{\sigma,\delta}(a)\). The second condition is equivalent to saying that for any \(c\in\Delta^{\sigma,\delta}(a)\), there exists
\(x\in K^{*}\) satisfying the equation \({}^{x}c-b=x^{-1}\), or equivalently, the \((\sigma,\delta)\)-metro equation
\[\sigma(x)c+\delta(x)-bx=1.\]
(2) This is a direct consequence of (1) and the definition of the skew product.
In [8], the notion of a \((\sigma,\delta)\)-metro equation is studied in the context of Wedderburn polynomials (also called W-polynomials). Using the results of [8], we obtain the following criterion.
**Corollary 3.3**.: _Let \(a,b\in K\). Then, \((T-b)^{-1}\) is defined at \(a\) iff \(b\nsim a\) and \((T-c)(T-b)\) is a W-polynomial for every \(c\in\Delta^{\sigma,\delta}(a)\). In particular, if \(\Delta^{\sigma,\delta}(a)\) is an algebraic \((\sigma,\delta)\)-conjugacy class, then \((T-b)^{-1}\) is defined at \(a\), for all \(b\nsim a\)._
Proof.: The first statement follows from Proposition 3.2 and [8, Theorem 6.6]. The second statement follows from [8, Corollary 6.7].
The following result sheds light on evaluating skew rational functions of the type \((T-b)^{-1}\).
**Proposition 3.4**.: _Let \(a,b,c,d\in K\) satisfy \(a\sim c\) and \(b\sim d\). If \((T-b)^{-1}\) is defined at \(a\), then \((T-d)^{-1}\) is defined at \(c\)._
Proof.: The result follows from the definition and the identity
\[T-d=\sigma(x)^{-1}(T-b)x,\]
where \(x\in K^{*}\) satisfies \(b={}^{x}d\).
We now give some examples.
**Example 3.2**.: _Let \(K[T;\sigma]\) be a skew polynomial ring of endomorphism type, that is, \(\delta=0\). It follows from Proposition 3.2 that the skew rational function \(T^{-1}\) is defined at \(a\in K\) if and only if \(a\neq 0\) and \(\Delta^{\sigma,\delta}(a)\subset\sigma(K)\). Furthermore, the value of \(T^{-1}\) at such an element \(a\) is \(\sigma^{-1}\left(a^{-1}\right)\). In particular, if \(\sigma\) is an automorphism, then \(dom(T^{-1})=K^{*}\)._
**Example 3.3**.: _Consider the skew polynomial ring \(\mathbb{C}[T;\overline{\cdot}\,]\) where \({}^{-}\) is the complex conjugation map and \(\delta=0\). Using Proposition 3.2, one can show that the domain of \((T-b)^{-1}\), where \(b\in\mathbb{C}\), is the set \(\{z\in\mathbb{C}\,|\,|z|\neq|b|\}\). Furthermore, the value of \(f(T)=(T-b)^{-1}\) at \(z\in dom((T-b)^{-1})\) is_
\[f(z)=\frac{z+\overline{b}}{|z|^{2}-|b|^{2}}.\]
_We remark that similar results hold more generally for the case when \(\sigma:K\to K\) is an involution of a commutative field \(K\)._
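As a quick sanity check of this closed form, one can verify numerically that \(x=f(z)\) solves the \((\sigma,\delta)\)-metro equation of Proposition 3.2, which here reads \(\overline{x}z-bx=1\). The following sketch does this for random inputs (plain Python; the helper name `f_val` and the sampled points are ours):

```python
import random

def f_val(z, b):
    """Value of (T - b)^{-1} at z in C[T; conj], per Example 3.3."""
    return (z + b.conjugate()) / (abs(z) ** 2 - abs(b) ** 2)

random.seed(0)
for _ in range(5):
    b = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    if abs(abs(z) - abs(b)) < 1e-6:         # z outside dom((T-b)^{-1})
        continue
    x = f_val(z, b)
    # (sigma, delta)-metro equation with sigma = conjugation, delta = 0:
    assert abs(x.conjugate() * z - b * x - 1) < 1e-9
print("metro equation verified")
```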
**Example 3.4**.: _Consider the ring \(\mathbb{H}[T]\) of polynomials over the skew field of quaternions in a central indeterminate \(T\). Using Proposition 3.2, one can show that the domain of \((T-q_{0})^{-1}\), where \(q_{0}\in\mathbb{H}\), is the set of all \(q\in\mathbb{H}\) not conjugate to \(q_{0}\). Moreover, the value of \(f(T)=(T-q_{0})^{-1}\) at \(q\in dom((T-q_{0})^{-1})\) is_
\[f(q)=(q-\overline{q}_{0})\left(q^{2}-2\text{Re}(q_{0})q+|q_{0}|^{2}\right)^{-1 }.\]
_This function plays a central role in the version of "quaternionic analysis" introduced in [1]. It has also been used as a Cauchy kernel, see [4]._
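Here \(\sigma=\mathrm{id}\) and \(\delta=0\), so by Proposition 3.2 the value \(x=f(q)\) must be the solution of the metro equation \(xq-q_{0}x=1\). The sketch below checks this numerically with a minimal quaternion arithmetic of our own (not taken from [1] or [4]):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qinv(q):
    return qconj(q) / np.dot(q, q)

rng = np.random.default_rng(1)
q0, q = rng.normal(size=4), rng.normal(size=4)
one = np.array([1.0, 0.0, 0.0, 0.0])
# f(q) = (q - conj(q0)) (q^2 - 2 Re(q0) q + |q0|^2)^{-1}, as in the example
N = qmul(q, q) - 2.0 * q0[0] * q + np.dot(q0, q0) * one
x = qmul(q - qconj(q0), qinv(N))
# metro equation with sigma = id, delta = 0:  x q - q0 x = 1
assert np.allclose(qmul(x, q) - qmul(q0, x), one)
```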
### Evaluation of skew rational functions over centrally finite skew fields
For \(a\in K\), we set
\[C^{\sigma,\delta}(a):=\{b\in K^{*}\,|\,^{b}a=a\}\cup\{0\}.\]
Note that \(C^{\sigma,\delta}(a)\) is a skew subfield of \(K\) (see [6, Lemma 3.1]).
**Proposition 3.5**.: _Let \(a\in K\) such that \(K\) is finite-dimensional as a right \(C^{\sigma,\delta}(a)\)-vector space. Then, a skew rational function with minimal representation \(P(T)^{-1}Q(T)\) is defined at \(a\) iff \(P(c)\neq 0\) for all \(c\in\Delta^{\sigma,\delta}(a)\)._
Proof.: The "only if" direction is trivial. To prove the other direction, one can use Proposition 2.7 (see also Remark 2.1) and the fact that \(K\) is finite-dimensional as a right \(C^{\sigma,\delta}(a)\)-vector space. Alternatively, one can use Part (a) of Theorem 6.2 in [8].
**Remark 3.4**.: _It is known that \(K\) is finite-dimensional as a right \(C^{\sigma,\delta}(a)\)-vector space iff the conjugacy class of \(a\) is \((\sigma,\delta)\)-algebraic (see Proposition 4.2 in [10])._
As special cases of this proposition, we give the following results.
**Corollary 3.6**.: _Assume that every conjugacy class of \(K\) is \((\sigma,\delta)\)-algebraic. Then, for any irreducible element \(P(T)\in K[T;\sigma,\delta]\) of degree \(>1\), the skew rational function \(P(T)^{-1}\) is defined everywhere, i.e., \(dom(P(T)^{-1})=K\)._
**Corollary 3.7**.: _Let \(K\) be a centrally finite skew field. Assume that \(\sigma\) is the identity homomorphism and \(\delta=0\). Then, a skew rational function in \(K[T]=K[T;id,0]\) is defined at \(a\in K\) iff the denominator of its minimal representation does not have a root in the conjugacy class \(\{bab^{-1}\,|\,b\in K^{*}\}\)._
Let us now give some examples.
**Example 3.5**.: _The skew field \(\mathbb{H}\) of quaternions is centrally finite. A classical result of Niven states that \(\mathbb{H}\) is left algebraically closed. Therefore, every skew rational function \(f(T)\) over \(\mathbb{H}\) has a minimal representation of the form_
\[f(T)=\left((T-q_{1})(T-q_{2})\cdots(T-q_{n})\right)^{-1}Q(T).\]
_The domain of \(f(T)\) consists of all \(q\in\mathbb{H}\) which are not conjugate to any of the \(q_{i}\)'s. Note that every conjugacy class in \(\mathbb{H}\) is either a singleton or a 2-dimensional sphere._
More generally, we have the following example.
**Example 3.6**.: _Assume that every \((\sigma,\delta)\)-conjugacy class in \(K\) is algebraic. In the light of Remark 3.4, we see that for every skew rational function \(f\), the complement of \(dom(f)\) in \(K\) is a finite union of conjugacy classes._
In the light of Proposition 3.5, the following example completes the discussion of evaluating skew rational functions in \(\mathbb{C}(T;\overline{\cdot}\,)\).
**Example 3.7**.: _Consider the skew polynomial ring \(\mathbb{C}[T;\overline{\cdot}\,]\) where \({}^{-}\) is the complex conjugation map and \(\delta=0\). It is known that every irreducible monic element of \(\mathbb{C}[T;\overline{\cdot}\,]\) is either of the form \(T-a\) or of the form \(T^{2}+bT+c\), where \(|z|^{2}+bz+c\neq 0\) for all \(z\in\mathbb{C}\) (see Example 1.15.4 in [11]). The case \(T-b\) was treated in Example 3.3. For an irreducible element \(P(T)=T^{2}+bT+c\), the skew rational function \(P(T)^{-1}\) is defined everywhere, and moreover, the value of \(f(T)=P(T)^{-1}\) at \(z\in\mathbb{C}\) is equal to_

\[f(z)=\frac{|z|^{2}-bz+\overline{c}}{||z|^{2}+c|^{2}-|zb|^{2}}.\]
### The product formula for skew rational functions
For \(a\in K\), we let \(Def(a)\) denote the set of all skew rational functions that are defined at \(a\). The assignment \(a\mapsto f(a)\) gives rise to the map
\[ev_{a}:Def(a)\to\mathcal{S}(\Delta^{\sigma,\delta}(a)),\]
where \(\mathcal{S}(\Delta^{\sigma,\delta}(a))\) is the ring of skew-convex functions on the conjugacy class of \(a\). This part deals with \(Def(a)\) and \(ev_{a}\), and their properties.
Fix \(a\in K\). Using the observation that the union of any family of left Ore subsets of \(K[T;\sigma,\delta]\) is a left Ore set, we see that there exists a unique left Ore subset \(S(a)\) of \(K[T;\sigma,\delta]\) which is maximal with respect to inclusion among all left Ore subsets \(S\) of \(K[T;\sigma,\delta]\) satisfying the property
\[P(T)\in S\implies Q(T)^{-1}\in Def(a),\forall\mbox{ monic irreducible factor }Q(T)\mbox{ of }P(T).\]
Here, the word "factor" means that \(P(T)=Q_{1}(T)Q(T)Q_{2}(T)\) for some skew polynomials \(Q_{1}(T),Q_{2}(T)\). It is clear that the Ore localization \(S(a)^{-1}K[T;\sigma,\delta]\) is a subset of \(Def(a)\). We denote \(S(a)^{-1}K[T;\sigma,\delta]\) by \(Def_{o}(a)\).
**Proposition 3.8**.: _The evaluation map \(ev_{a}:Def_{o}(a)\to\mathcal{S}(\Delta^{\sigma,\delta}(a))\) is a ring homomorphism._
Proof.: This follows from the above discussion and the universal property of the Ore localization.
As an application of this proposition, we present the product formula for skew rational functions.
**Corollary 3.9**.: _Let \(f(T),g(T)\in Def_{o}(a)\) and set \(h(T)=f(T)g(T)\). Then_
\[h(a)=\begin{cases}f\left(\,{}^{g(a)}a\,\right)g(a)&\text{if }g(a)\neq 0,\\ 0&\text{if }g(a)=0.\end{cases}\]
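For skew polynomials the product formula can be tested directly. The sketch below (all helper names are ours) implements evaluation in \(\mathbb{C}[T;\overline{\cdot}\,]\) via \(N_{0}(a)=1\), \(N_{k+1}(a)=\sigma(N_{k}(a))a\), multiplication via the twist \(Tx=\sigma(x)T\), and then checks \(h(a)=f\left({}^{g(a)}a\right)g(a)\) on random inputs:

```python
import random

def sigma(z, k=1):                      # sigma = complex conjugation
    return z.conjugate() if k % 2 else z

def skew_eval(p, a):
    """Evaluate sum_k p[k] T^k at a: P(a) = sum_k p[k] N_k(a),
    with N_0 = 1 and N_{k+1}(a) = sigma(N_k(a)) * a."""
    val, N = 0, 1
    for c in p:
        val += c * N
        N = sigma(N) * a
    return val

def skew_mul(p, q):
    """Product in C[T; conj]: T^i q_j = sigma^i(q_j) T^i."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * sigma(qj, i)
    return r

random.seed(2)
rc = lambda: complex(random.uniform(-1, 1), random.uniform(-1, 1))
f, g, a = [rc() for _ in range(3)], [rc() for _ in range(4)], rc()
ga = skew_eval(g, a)
conj_a = sigma(ga) * a / ga             # the conjugate  ^{g(a)} a
lhs = skew_eval(skew_mul(f, g), a)
rhs = skew_eval(f, conj_a) * ga         # product formula (Corollary 3.9)
assert abs(lhs - rhs) < 1e-9
```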
In general, the problem of determining \(S(a)\) seems to be difficult. Here, we present two partial results. The first result concerns skew polynomials of degree \(1\).
**Proposition 3.10**.: _A polynomial \(T-b\) belongs to \(S(a)\) iff \((T-b)^{-1}\in Def(a)\)._
Proof.: The "only if" direction is trivial. To prove the other direction, let \((T-b)^{-1}\in Def(a)\). Consider the set \(S\) of all skew polynomials \(T-c\) where \(c\) is a conjugate of \(b\). It is enough to show that \(S\) is a left ore set since, by Proposition 3.4, \((T-c)^{-1}\in Def(a)\), for all \(T-c\in S\). The fact that \(S\) is a left Ore set is a consequence of the fact that
\[(T-^{P(c)}c)P(T)\in K[T;\sigma,\delta](T-c),\]
for any \(P(T)\in K[T;\sigma,\delta]\) with \(P(c)\neq 0\).
Our second result solves the problem in the case of algebraic conjugacy classes.
**Theorem 3.11**.: _Assume that \(\Delta^{\sigma,\delta}(a)\) is \((\sigma,\delta)\)-algebraic. Then, a skew polynomial \(P(T)\) belongs to \(S(a)\) iff \(P(T)\) has no (right) roots in the conjugacy class of \(a\). In particular, we have \(Def_{o}(a)=Def(a)\)._
Proof.: The "only if" direction is proved in Proposition 3.5 (note that if a factor of \(P(T)\) has a root in \(\Delta^{\sigma,\delta}(a)\), then \(P(T)\) has a root in \(\Delta^{\sigma,\delta}(a)\), as proved in [8, Coroallry 6.3] ). To prove the reverse direction, let \(S\) be the set of all skew polynomials \(P(T)\) with (right) roots in the conjugacy class of \(a\). It is enough to show that \(S\) is a left Ore set. We need to show that the set \(SQ(T)\cap K[T;\sigma,\delta]P(T)\) is nonempty for every \(P(T)\in S\) and \(Q(T)\in K[T;\sigma,\delta]\). Without loss of generality, we may assume that \(P(T)\) is irreducible. If \(Q(T)\in K[T;\sigma,\delta]P(T)\), the proof is trivial. So, assume that \(Q(T)\notin K[T;\sigma,\delta]P(T)\). Let \(L(T)\) be the least left common multiple
of \(P(T)\) and \(Q(T)\). Then, \(L(T)=P_{1}(T)Q(T)=Q_{1}(T)P(T)\) for some \(P_{1}(T),Q_{1}(T)\in K[T;\sigma,\delta]\). Since \(K[T;\sigma,\delta]\) is a UFD, we see that \(P_{1}(T)\) and \(P(T)\) must be similar, i.e., the \(K[T;\sigma,\delta]\)-modules \(K[T;\sigma,\delta]/K[T;\sigma,\delta]P_{1}(T)\) and \(K[T;\sigma,\delta]/K[T;\sigma,\delta]P(T)\) are isomorphic. It follows from Lemma 6.4 in [7] that every right root of \(P_{1}(T)\) is conjugate to a right root of \(P(T)\). Therefore, \(P_{1}(T)\) has no right root in \(\Delta^{\sigma,\delta}(a)\), and consequently \(P_{1}(T)\in S\). This completes the proof.
We conclude the paper with the following example as an application of the above theorem.
**Example 3.8**.: _Consider the ring \(\mathbb{H}[T]\) of polynomials over the skew field of quaternions in a central indeterminate \(T\). For any \(q_{0}\in\mathbb{H}\), \(Def(q_{0})\) is a ring consisting of all \(P(T)^{-1}Q(T)\in\mathbb{H}(T)\) such that \(P(q)\neq 0\) when \(q\) is conjugate to \(q_{0}\)._
|
2306.04914 | Unlocking the Potential of GeS Monolayer: Strain-Enabled Control of
Electronic Transports and Exciton Radiative Lifetimes | Monolayer germanium sulfide is gaining significant attention for its
exceptional anisotropic electronic conductance, notable excitonic effects, and
wide range of potential applications. In our study, we used density functional
theory, many-body perturbation theory, and non-equilibrium Green function to
investigate electronic transport properties and exciton radiative lifetime of
single-layer germanium sulfide. Our theoretical findings showed that applying
up to 8 percent compressive strain increased carrier mobility by nearly
threefold, and thus, dramatically enhance the device's current intensity.
Moreover, we observed that strain engineering allowed fine-tuning of the
electron-hole recombination time. At 6 percent tensile strain, the effective
radiative lifetime was as short as 19 picoseconds, which is 4.5 times faster
than the intrinsic state and 80 times faster than at 8 percent compressive
strain. These results highlight the potential of strain engineering to
customize the electronic and optical properties of GeS monolayer for specific
electronic, optoelectronic, and photovoltaic device requirements | Vo Khuong Dien, Pham Thi Bich Thao, Nguyen Thi Han, Nguyen Duy Khanh, Le Vo Phuong Thuan, Ming-Fa Lin, Nguyen Thanh Tien | 2023-06-08T03:27:12Z | http://arxiv.org/abs/2306.04914v2 | Unlocking the Potential of GeS Monolayer: Strain-Enabled Control of Electronic Transports and Exciton Radiative Lifetimes
###### Abstract
Monolayer germanium sulfide (GeS) is gaining significant attention for its exceptional anisotropic electronic conductance, notable excitonic effects, and wide range of potential applications. In our study, we used density functional theory (DFT), many-body perturbation theory (MBPT), and non-equilibrium Green's function (NEGF) to investigate electronic transport properties and exciton radiative lifetime of single-layer germanium sulfide. Our theoretical findings showed that applying up to 8% compressive strain increased carrier mobility by nearly threefold, and thus, dramatically enhance the device's current intensity. Moreover, we observed that strain engineering allowed fine-tuning of the electron-hole recombination time. At 6% tensile strain, the effective radiative lifetime was as short as 19 picoseconds, which is 4.5 times faster than the intrinsic state and 80 times faster than at 8% compressive strain. These results highlight the potential of strain engineering to customize the electronic and optical properties of GeS monolayer for specific electronic, optoelectronic, and photovoltaic device requirements.
GeS monolayer, electronic transports, exciton radiative lifetime, strain engineering, and first-principles calculations.
## 1 Introduction
The successful fabrication of monolayer graphene [1] has sparked significant interest in exploring other two-dimensional (2D) materials, including transition metal chalcogenides (TMDCs) [2], group III mono-chalcogenides [3, 4], hexagonal boron nitride (h-BN) [5, 6], phosphorene [7], and others. Recently, 2D germanium sulfide (GeS) has emerged as a highly researched material [8-10]. Like black phosphorus, bulk GeS adopts a layered structure with weak Van-der-Waals (vdWs) interactions between layers and strong covalent bonding within layers. Few-layer GeS has been successfully fabricated via either a vapor transport process [9] or mechanical exfoliation [10], while monolayer GeS is predicted to be dynamically stable, suggesting that it can be exfoliated from its bulk counterpart. In contrast to semimetallic graphene, monolayer GeS has a sizable electronic band gap (\(\sim\)2.3 eV) [11], making it suitable for semiconductor applications. Additionally, the monolayer form of GeS is predicted to have a larger free carrier mobility (\(\sim\)10\({}^{3}\) cm\({}^{2}\).V\({}^{\text{-1}}\).s\({}^{\text{-1}}\)) [11] compared to MoS\({}_{2}\) (\(\sim\)200 cm\({}^{2}\).V\({}^{\text{-1}}\).s\({}^{\text{-1}}\)) [12]. As a result of the ultrathin monolayer and the significant reduction of dielectric screening, the excitonic effects
are predicted to be very strong in the GeS single layer [13, 14]. One notable feature of GeS is its anisotropic electric conductance and optical responses [15], which distinguish it from isotropic 2D crystals such as graphene and MoS\({}_{2}\). Therefore, it would be exciting to explore ways to manipulate these anisotropies further.
The current research focus in the field of condensed matter physics involves modifying the electronic and optical properties of layered materials [16, 17]. This can be achieved through the introduction of ad-atoms [18, 19], the application of electric and magnetic fields [20], the adsorption of molecule clusters [16], and the creation of defects [21, 22]. Another effective method for altering the properties of materials is strain engineering, which is particularly useful for one-dimensional [23] and two-dimensional [24] crystals due to their ability to withstand larger strains compared to bulk crystals. For instance, monolayer graphene [25] and MoS\({}_{2}\)[26] can sustain strains up to their intrinsic limit (approximately 15% for graphene and 11% for MoS\({}_{2}\)) without causing significant damage to their crystal structures. This provides a wide range of opportunities for tuning their mechanical and electronic performances.
Herein, by combining density functional theory (DFT) [27], many-body perturbation theory (MBPT) [28], and the non-equilibrium Green's function (NEGF) method [29], we illustrate that strain engineering can serve as an effective tool to tailor the electronic transport properties and the recombination time scale of exciton states. Our theoretical calculations and analytical analyses indicate that the electron mobility can be significantly enhanced under compressive strain; additionally, the current-voltage (I-V) characteristic of the device shows an extremely high current intensity. Moreover, the excitonic effects, especially the exciton radiative lifetime, can be fine-tuned upon applying external strain. The theoretical results achieved in the current research are valuable not only for basic science but also for high-tech applications, such as ultrafast field-effect transistors (FETs), light-emitting diodes (LEDs), and photovoltaic (PV) applications.
## 2 Computational details
In this work, the Vienna Ab-initio Simulation Package (VASP) [30] was utilized to perform the ground state and excited state calculations, while Quantum ATK [31] was used to investigate the electronic transport properties of the GeS monolayer. The Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation [32] was adopted for the exchange-correlation functional. Projector-augmented wave (PAW) pseudopotentials were utilized to describe the electronic wave functions in the core region [33]. The cutoff energy for the plane wave basis expansion was 500 eV. Geometric optimization was performed using the Monkhorst-Pack sampling technique [34] with a special k-point mesh of 32\(\times\)24\(\times\)1. The full relaxation of all atoms was allowed until the Hellmann-Feynman force acting on each atom was smaller than 0.01 eV/Å. The single-shot GW (G0W0) approach [35] was employed to calculate the quasi-particle band structure. We described the screening effects using the plasmon-mode model proposed by Hybertsen and Louie [35]. To ensure the accuracy of our calculations, we performed convergence tests using various k-meshes and cutoff energies for the response functions, as well as various numbers of included empty conduction bands. Our results (**Figure S1**) demonstrated that the electronic properties are very sensitive to
the input parameters, the KPOINTS of 36\(\times\)27\(\times\)1, response functions with a cutoff energy of 120 eV, and 120 empty conduction bands were sufficient to achieve convergence for the quasi-particle band gap. Regarding the optical response and the excitonic effects, the dielectric functions were achieved by solving the Bethe-Salpeter equation (BSE) [36] on top of the G0W0 calculations. In this calculation, the 6 highest occupied valence bands (VBs) and 4 lowest unoccupied conduction bands (CBs) are included as a basis for the excitonic states with a photon energy region from 0 eV to 5 eV. In addition, the Lorentz broadening parameter \(\gamma\) was set at 40 meV to replace the delta function.
## 3 Results and discussions
### Electronic Transport Properties
As a typical benchmark to investigate the impact of strain effects on the electronic and optical properties, the geometric structure of pristine monolayer GeS is considered and shown in **Figure 1(a)** and **Table 1**. Similar to black phosphorene, the monolayer GeS exhibits a buckled structure with each germanium atom covalently bonded to three adjacent sulfur atoms. The optimized lattice constants are a = 4.471 Å (armchair direction) and b = 3.665 Å (zigzag direction). The calculated parameters are in good agreement with previous theoretical calculations [37-40] and experimental measurements [41].
The electronic band structure along the high-symmetry points within the DFT and G0W0 levels of theory is shown in **Figure 1(b)**; since spin-orbit coupling (SOC) has only a minor influence on the electronic band gap of GeS (**Figure S2**), relativistic effects are ignored in our calculations for the sake of reducing the computational cost. GeS exhibits anisotropic electronic properties: the dispersion of the occupied hole band along the \(\Gamma-Y\) direction is significant, indicating a small effective mass. The opposite behavior holds for hole transport along the \(\Gamma-X\) direction, with a relatively flat energy dispersion related to a large effective mass. Similar characteristics are also found for the electron in the unoccupied states. The anisotropy of these bands can be easily detected in the 3D contour plot in **Figure 1(c)**, while the spatially dependent electron and hole effective masses of the GeS monolayer, which exhibit "heart" and "peanut" shapes, are illustrated in **Figure S3**.
Figure 1: (a) Top and side views of geometric structure, (b) the electronic band structure with DFT and G0W0 levels of theory, and (c) the 3D band structure of GeS. (d) The DFT electronic band structure of the GeS monolayer as a function of strain, and (e) the band-decomposed charge density of the corresponding critical points marked by the red dots in (d).
The GeS monolayer is an indirect-gap semiconductor with a band gap of 1.735 eV, with the highest occupied state and the lowest unoccupied state located between the \(\Gamma\) and \(Y\) and the \(\Gamma\) and \(X\) symmetry points, respectively. The electronic band gap is enhanced to 2.665 eV when electron-electron interactions (GW approximation) are taken into account. This theoretical prediction is in good agreement with previous works (**Table 1**).
Figure 1(d) depicts the electronic properties of a strained GeS monolayer, with critical points A, B, C, D, and E marked by red dots indicating the band edge states that drive the band gap evolution. The corresponding orbital characters for these critical points are shown in Figure 1(e) and Figure S4 and are organized into five categories: (A) out-of-plane interactions of Ge-4p\({}_{z}\) and S-3p\({}_{z}\) orbitals, (B) out-of-plane hybridizations of Ge-4p\({}_{z}\) and S-(3p\({}_{x}\), 3p\({}_{y}\)) orbitals, (C) in-plane couplings of Ge-4p\({}_{y}\) and S-3s orbitals, (D) interactions of in-plane Ge-4s and S-3p\({}_{y}\) orbitals, and (E) in-plane interactions of Ge-4p\({}_{x}\) and S-3p\({}_{x}\) orbitals. As indicated in Figure S5 and Table S1, compressive strain causes the Ge-S bond length d\({}_{1}\) and the vertical height h to increase, significantly reducing the out-of-plane Ge-4p\({}_{z}\)-S-3p\({}_{z}\) and Ge-4p\({}_{z}\)-S-(3p\({}_{x}\), 3p\({}_{y}\)) orbital interactions and leading to a significant downshift of the energy levels of the edge states at A and B. On the other hand, the in-plane couplings of Ge-4p\({}_{y}\) and S-3s, and of Ge-4s and S-3p\({}_{y}\) orbitals increase due to the reduction of the d\({}_{2}\) bond length, resulting in an increase in energy for the edge states C and D. An increase in the \(\alpha\) (Ge-S-Ge) angle reduces the in-plane interactions of Ge-4p\({}_{x}\) and S-3p\({}_{x}\) orbitals, causing a downshift in the energy of the E critical point. Conversely, the opposite evolution takes place for tensile strains. Figure S5(b) displays the evolution of the band gap of the strained GeS monolayer. The electronic band gap decreases linearly with increasing compressive strain. However, the band gap evolution under tensile strain is more complex: the electronic band gap initially increases to 1.9 eV, accompanied by an indirect-direct transition, but at higher tensile strains the gap value dramatically decreases. The same trend is observed in the band gap evolution with the GW corrections.
To connect the anisotropic band dispersion with the electronic conductance, we further estimated the carrier mobility along the zigzag and armchair directions according to deformation potential theory [42, 43]:
\[\mu_{2D}=\frac{e\hbar^{3}C_{2D}^{i}}{k_{B}Tm_{i}^{*}m_{d}{E^{i}}^{2}}\]
Here, \(m_{i}^{*}\) is the effective mass along the transport direction, \(m_{d}=\sqrt{m_{x}^{*}m_{y}^{*}}\) is the average effective mass, and \(C_{2D}^{i}\) is the elastic modulus, obtained by a quadratic fit of the total energy E with respect to the variation of the lattice constant \(\Delta l/l_{0}\) as \((C_{2D}/2)(\Delta l/l_{0})^{2}=(E-E_{0})/S_{0}\), where \(S_{0}\) is the area of the 2D lattice at equilibrium. The deformation potential constant \(E^{i}=\partial E_{edge}/\partial\varepsilon\) is obtained by tracking the shift of the valence band maximum (VBM) or conduction band minimum (CBM) upon small lattice compression or expansion along the transport direction. The analysis was carried out at room temperature T = 300 K. Admittedly, this estimate captures only the simplest picture of electron-phonon interactions and thus may overestimate the realistic carrier mobility. However, the prediction is
accurate enough to capture the anisotropy as well as the trend of the conductance of the system under strain.
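As a concrete illustration of this formula, the following sketch evaluates \(\mu_{2D}\) in SI units; the input numbers are illustrative stand-ins of roughly the right magnitude for GeS, not the fitted values of Table S2:

```python
from scipy import constants as c

def mobility_2d(m_eff, m_avg, C2d, E1, T=300.0):
    """Deformation-potential mobility of a 2D crystal (formula above).
    SI inputs: m_eff, m_avg in kg, C2d in N/m, E1 in J; returns m^2/(V s)."""
    return (c.e * c.hbar**3 * C2d) / (c.k * T * m_eff * m_avg * E1**2)

# Illustrative stand-in numbers (NOT the fitted values of Table S2):
# a light electron of 0.2 m_e, an elastic modulus of 80 N/m, and a
# deformation potential of 2 eV give the right order of magnitude.
m_star = 0.2 * c.m_e
mu = mobility_2d(m_star, m_star, 80.0, 2.0 * c.e)
print(f"mu ~ {mu * 1e4:.0f} cm^2 V^-1 s^-1")   # ~1e4 cm^2/(V s)
```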
Since the effective mass of carriers in the zigzag and armchair directions behaves differently and exhibits two extreme values in these directions (**Figure S3**), we focus on calculating the effective mass and mobility for carriers along the \(\Gamma-X\), and \(\Gamma-Y\) paths. **Figure 2**(a) illustrates the strain-dependent effective mass of the highest valence hole and lowest conduction electron, which demonstrates a linear decrease under compressive strain. This decrease is reflected in the band curvature shown in **Figure 1**(d) and contributes to enhanced carrier mobility. The carrier effective mass increases significantly under an elongation of the lattice constant, and the anisotropy of carriers along the calculated directions becomes more pronounced. Additionally, the evolution of effective mass is complex. For example, there is a notable jump in \(\mathrm{m_{h}^{*}}\)(zigzag) at 2% tensile strain due to the transition of the VBM from \(\Gamma-Y\) to \(\Gamma\) band edge states, resulting in a shift from light holes to heavy holes.
The calculated carrier mobility of the GeS monolayer at room temperature (T = 300K) according to the compression and elongation of the lattice constant is shown in **Figure 2**(b). In the intrinsic case, the relatively small effective mass of electrons and the significant \(C_{2D}^{i}/{E^{i}}^{2}\) ratio (**Table S2**) contribute to the high electron carrier mobility of GeS monolayer, with a typical value of \(13\times 10^{3}\) cm\({}^{2}\).V\({}^{\text{-1}}\).s\({}^{\text{-1}}\) for electrons in the zigzag-direction and about \(0.35\times 10^{3}\) cm\({}^{2}\).V\({}^{\text{-1}}\).s\({}^{\text{-1}}\) for electrons in the armchair direction. The carrier mobility values for valence holes are lower, with about \(0.061\times 10^{3}\) cm\({}^{2}\).V\({}^{\text{-1}}\).s\({}^{\text{-1}}\), and \(0.036\times 10^{3}\) cm\({}^{2}\).V\({}^{\text{-1}}\).s\({}^{\text{-1}}\) for zigzag and armchair directions, respectively. The high electron mobility of 2D GeS is consistent with previous reports [11, 44] and compatible with that of phosphorene [45], but much higher than that of MoS\({}_{2}\)[2], indicating its potential for high-speed electronic applications. The carrier mobility can be efficiently modulated by strain-induced deformations. As expected, carrier mobility decreases upon lattice expansion due to the increasing carrier effective mass and the decreasing of \(C_{2D}^{i}/{E^{i}}^{2}\) ratio. Conversely, the mobility of carriers significantly increases with lattice compression. Although the mobility of holes can be controlled via applying external strain, it cannot surpass that of the electrons. Interestingly, under -8% compression, the mobility of electrons of GeS monolayer along a zigzag direction reaches approximately \(35\times 10^{3}\) cm\({}^{2}\).V\({}^{\text{-1}}\).s\({}^{\text{-1}}\), more than 2.5 times and 400 times larger than the values under free-strain and +8% tensile strain, respectively.
Due to its exceptionally high carrier mobility, we opted for strained GeS monolayers as the channel material in our device construction. The transport properties have been calculated via the NEGF method as implemented in the Quantum ATK package [31]. 50\(\times\)50\(\times\)1 k-points were used for the central region and the electrodes. When a given voltage is applied, the current is allowed to flow across the system. The electric current (I) is further calculated using the Landauer approach [46], and this can be obtained from the integration of the transmission curve as
Figure 2: (a) The evolutions of the effective mass, and (b) the carrier mobility under biaxial strains. I-V characteristics for (c) zigzag model and (d) armchair model at different biaxial strains. The device model for electron transport along the (e) armchair (armchair model) and (f) zigzag (zigzag model) directions, in which, two- and three-unit cells were used to construct the electrodes, and the active region, respectively.
\[I(V_{b})=\frac{2e}{h}\int_{-\infty}^{+\infty}T(E,V_{b})[f(E-\mu_{L})-f(E-\mu_{R})] \,dE,\]
where \(f\big{(}E-\mu_{L/R}\big{)}\)is the Fermi-Dirac distribution function of the \(L\) and \(R\) electrodes, \(\mu_{L/R}\) is the chemical potential, which can move up and down according to the Fermi energy, and \(T(E,V_{b})\) is the transmission function at energy \(E\) and bias \(V_{b}\). The expression of \(T(E,V_{b})\) is as follows:
\[T(E,V_{b})=Tr[\Gamma_{L}(E,V_{b})G(E,V_{b})\Gamma_{R}(E,V_{b})G^{\dagger}(E,V_{ b})],\]
in which the coupling matrices are given as \(\Gamma_{L/R}\), and the retarded and advanced Green's functions of the scattering region are denoted \(G\) and \(G^{\dagger}\), respectively.
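A minimal numerical sketch of this Landauer integration is given below. The transmission function is a toy step model rather than the NEGF transmission of the actual device, and the symmetric splitting of the bias window across the two electrodes is our assumption:

```python
import numpy as np

def landauer_current(transmission, Vb, T=300.0):
    """I(Vb) = (2e/h) * int T(E) [f(E - mu_L) - f(E - mu_R)] dE, with the
    bias window split symmetrically around E_F = 0 (an assumption)."""
    e, h = 1.602176634e-19, 6.62607015e-34
    kT = 8.617333e-5 * T                          # k_B T in eV
    muL, muR = Vb / 2.0, -Vb / 2.0
    fermi = lambda E, mu: 1.0 / (1.0 + np.exp((E - mu) / kT))
    E = np.linspace(-abs(Vb) - 2.0, abs(Vb) + 2.0, 8001)   # energy grid (eV)
    integrand = transmission(E) * (fermi(E, muL) - fermi(E, muR))
    dE = (E[1] - E[0]) * e                        # eV -> J
    return 2.0 * e / h * np.sum(integrand) * dE   # current in A

# Toy transmission: one perfectly transmitting channel above a 0.5 eV edge.
step_T = lambda E: (E > 0.5).astype(float)
print(f"{landauer_current(step_T, 2.0) * 1e9:.1f} nA")
```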
**Figure 2(e)** and **Figure 2(f)** illustrate the fundamental architecture of the GeS device, highlighting its transport characteristics along the zigzag and armchair directions. These transport properties are further depicted in **Figure 2(c)** and **Figure 2(d)**. Given the negligible carrier mobility of holes, our primary focus lies on the transport properties of electrons. To achieve efficient carrier injection and attain optimal device performance, we employ left and right electrodes with an n-type doping concentration of 3\(\times\)10\({}^{19}\) e/cm\({}^{2}\). The intrinsic monolayer GeS exhibits remarkable anisotropic behavior in its transport properties. The I-V curve, when biased along the zigzag direction, resembles that of a characteristic semiconductor, with a peak current of approximately 1200 nA at V\({}_{\rm bias}\) = 2 V. Conversely, carrier transport along the armchair direction is negligible, with the highest current reaching only 0.012 nA at 0.2 V, followed by slight fluctuations at higher applied voltages. These anisotropic transport characteristics of the intrinsic GeS monolayer align well with previous findings [47] and reflect the primary trend in carrier mobility in the respective directions. Both models demonstrate a high sensitivity of the I-V curves to external strain. As the lattice elongation increases, the maximum current intensity of the zigzag and armchair models experiences a sharp decline. Both models exhibit strong negative differential resistance, i.e., a decrease in current intensity with increasing bias voltage, particularly evident under +8% lattice elongation. Interestingly, the I-V curves of the GeS monolayer device under compression consistently exhibit semiconductor characteristics, even for the armchair model. The current intensity of the compressively strained GeS device is significantly enhanced. For the zigzag model, the highest current intensity exceeds 2000 nA and 10000 nA under -4% and -8% compression, respectively, whereas the corresponding values for the armchair model are approximately 170 nA and 5200 nA. These findings indicate that devices based on the compressively strained GeS monolayer possess extremely high sensitivity, making them well-suited for high-speed electronic applications.
| \(a\) (Å) | \(b\) (Å) | Band gap, DFT (eV) | Band gap, G0W0 (eV) |
| --- | --- | --- | --- |
| 4.471\({}^{a}\) | 3.665\({}^{a}\) | 1.728\({}^{a}\) | 2.661\({}^{a}\) |
| 4.470\({}^{b}\) | 3.666\({}^{b}\) | – | 2.74\({}^{b}\) |
| 4.459\({}^{c}\) | 3.662\({}^{c}\) | 1.90\({}^{c}\) | – |
| 4.33\({}^{d}\) | 3.67\({}^{d}\) | – | – |
| 4.492\({}^{e}\) | 3.62\({}^{e}\) | 1.713\({}^{e}\) | – |
| 4.467\({}^{f}\) | 3.666\({}^{f}\) | 1.722\({}^{f}\) | – |
| 4.474\({}^{g}\) | 3.675\({}^{g}\) | 1.82\({}^{g}\) | – |
| 4.29\({}^{h}\) | 3.64\({}^{h}\) | – | – |

* The theoretical approach in this work
* The theoretical approach in Ref [15]
* The theoretical approach in Ref [48]
* The theoretical approach in Ref [11]
* The theoretical approach in Ref [38]
* The theoretical approach in Ref [40]
* Experimental data for bulk GeS in Ref [41]

Table 1: The optimized geometric parameters and the electronic band gap of monolayer Germanium Sulfide. Previous theoretical calculations and experimental measurements are also shown for comparison.
### Optical Properties and Excitonic Effects
Figure 3: The imaginary part of the dielectric functions of the GeS monolayer along (a) armchair and (b) zigzag polarizations. The red curve indicates the optical spectrum including the excitonic effects, while the blue-filled curve excludes these effects. The exciton wave functions are projected onto the electronic band structure for the (c) I\({}^{1}\), (d) I\({}^{2}\), and (e) I\({}^{3}\) excitons, showing the vertical excitation from the VBM to the CBM. The radii of the circles represent the contribution of the electron–hole pair at that k-point to the \(i\)th exciton wave function; the dots in the background are the corresponding G0W0 quasi-particle band structures. (f) The exciton energy spectrum of the GeS monolayer and the corresponding k-space distribution of the first eight exciton envelope functions.
**Figure 3** shows the optical properties of the GeS monolayer with and without excitonic effects. The absorbance spectra exhibit significant anisotropy due to the non-uniform environment along the armchair and zigzag directions. The low-frequency optical properties are primarily influenced by the armchair polarization. In the absence of strain, the optical properties of GeS along the armchair and zigzag directions are characterized by three exciton states, denoted as I\({}^{1}\), I\({}^{3}\), and I\({}^{2}\) in **Figures 3(a)** and 3(b), respectively. These excitonic states originate, respectively, from the coupling of excited holes and electrons at the \(\Gamma\) valley \(\left(S_{3p_{Z}}\to Ge_{4p_{Z}}\right)\), the band edge state along the \(\Gamma-Y\) path \(\left(S_{3p_{xy}}\to Ge_{4p_{xy}}\right)\), and the critical point along the \(\Gamma-X\) direction \(\left(Ge_{4p_{Z}}\to S_{3p_{xy}}\right)\), as indicated by the fat bands in **Figures 3(c)** to 3(e). The I\({}^{2}\) exciton state is rather weak since the
Figure 4: (a) The imaginary part of the dielectric function as a function of external strain for the GeS monolayer. (b) The evolution of the optical gap, the direct energy band gap at the \(\Gamma\) point, and the exciton binding energy under strain. (c) The effective radiative lifetime \(\left\langle\tau_{eff}\right\rangle\), the oscillator strength ratio \(\left(\mu_{S}/\mu_{S_{0}}\right)\), and the exciton effective mass ratio \(\left(M_{S}/M_{S_{0}}\right)\) of the first bright exciton state as a function of external strain.
transition from the occupied state to the unoccupied one of this exciton is approximately forbidden, which arises from the different parity of its excited hole and excited electron [49]; the opposite behavior holds for the I\({}^{1}\) and I\({}^{3}\) excitons. To assess the strength of excitonic effects, we calculated the exciton binding energy as the energy difference between the optical gap and the fundamental direct GW band gap at \(\Gamma\). The exciton binding energy of the GeS monolayer is about 0.706 eV (Table S4), consistent with previous reports [13]. The large binding energies and the significant modifications of the absorption spectra (compared with the GW-RPA spectra) indicate that these exciton states are strong and potentially stable at high temperatures. The dissociation temperature T\({}_{\rm d}\) (T\({}_{\rm d}\approx 0.1\)E\({}_{\rm b}\)/k\({}_{\rm B}\)) for the I\({}^{1}\) exciton is around 820 K, which is well above room temperature.
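This estimate is a one-line computation:

```python
kB = 8.617333e-5            # Boltzmann constant in eV/K
E_b = 0.706                 # binding energy of the I^1 exciton, in eV
T_d = 0.1 * E_b / kB        # rule-of-thumb dissociation temperature
print(f"T_d ~ {T_d:.0f} K") # ~819 K, consistent with the quoted ~820 K
```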
To better understand the character of specific exciton states, Figure 3(f) illustrates the energy diagram of the bound exciton states in the GeS monolayer, as well as the k-space distribution of the squared amplitude of the free electron-hole pairs that constitute the exciton wave functions in the Brillouin zone. In addition to the bright exciton states, the peculiar parity symmetries of the band edges render the lowest optical transition dipole forbidden, leading to numerous dark states that are visible as blue-grey colors in Figure 3(f). Although not detectable in the optical absorbance spectra, the dark exciton states are important as they provide fingerprints for the optical properties of typical materials. The nodal structures of these excitonic wave functions reveal a hydrogen-like series of states with clear angular momentum assignments. Interestingly, the excitonic energy diagram does not follow the Rydberg series of the 2D hydrogenic model: excitons with higher azimuthal quantum numbers have lower energies than those with smaller azimuthal quantum numbers. For instance, the energy of the 4d exciton is smaller than that of the 3d and 4p ones; this behavior is universal in 2D materials due to their unique screening [50]. Another noteworthy feature is the lifted degeneracy of the 2p states resulting from the in-plane anisotropy, which is a characteristic of the GeS monolayer and other anisotropic 2D materials [51, 52].
Figure 4(a) illustrates the optical excitation of GeS when subjected to biaxial strain, while Figure 4(b) summarizes the changes in the optical gap and the direct GW electronic band gap at \(\Gamma\), as well as the exciton binding energy. For simplicity, we focus here only on the features of the first bright exciton I\({}^{1}\). When compressed, the optical gap decreases due to the reduction of the direct electronic band gap at the \(\Gamma\) point. The exciton binding energies of the prominent excitations and their intensities exhibit significant alterations. The most notable feature is that the exciton binding energy of GeS under compressive strain is weaker than that of the strain-free and lattice-expanded cases. This is primarily due to the enhanced screening ability (increase in the static dielectric constant \(\varepsilon(0)\), as shown in Table S3) or the reduction of the electronic band gap. Additionally, the anisotropy of the optical absorbance spectrum along the armchair and zigzag polarizations at P = -6% gradually decreases compared to that of the intrinsic case, but the two polarizations start to differ again at higher applied strains. This is due to the evolution of the electronic anisotropy of the GeS monolayer under compression, as indicated by the contour plots of the direct valence-to-conduction band transitions in Figure 5.
Conversely, when subjected to lattice expansion, the optical gap increases, and the exciton binding energy and the anisotropy of the optical spectrum exhibit opposite changes to those
observed under compression. The changes in the optical properties of the GeS monolayer upon elongation are rather complicated, as the optical gap gradually increases but then slowly decreases at higher applied strain. The distinction between the optical properties along the armchair and zigzag directions becomes more obvious due to the anisotropic electronic wave functions of the tensile-strained GeS monolayer (**Figure 5**). Moreover, the exciton binding energy and the intensity of the first bright state I\({}^{1}\) significantly increase. The evolution of the binding energy of the first bright state can be deduced from the increase of the electronic band gap or the decrease of the dielectric screening (**Table S3**). On the other hand, the enhancement of its intensity can be explained as follows: the excited hole is mostly localized around the germanium atom, while the excited electron resides around the sulfur atom of the opposite plane, as shown in the first two panels of **Figure 1(e)**, indicated in reciprocal space in **Figure 3(c)**, and replotted in **Figure S6**. The significant reduction of the monolayer thickness of GeS upon lattice expansion brings the hole and electron closer together, thereby enhancing the electron-hole overlap, the transition probabilities, and the exciton binding energy. However, the impact of the electron-hole physical distance does not always follow a linear relation: the binding energy of the excitons and their oscillator strength begin to decrease when the strain reaches +8%. This phenomenon occurs because the nature of the first bright state gradually shifts from the 1s state to the 5d state as the GeS monolayer undergoes elongation. This information is illustrated in **Figure 5**.
| Materials | \(\tau_{eff}^{RT}\) |
| --- | --- |
| GeS | 84.86 ps\({}^{a}\) |
| Phosphorene | 179.05 ps\({}^{a}\), 194.21 ps\({}^{b}\), 221.35 ps\({}^{c}\) |
| MoS\({}_{2}\) | 0.83 ns\({}^{a}\), 0.82 ns\({}^{d}\), 0.85 ns\({}^{e}\) |
| MoSe\({}_{2}\) | 0.87 ns\({}^{a}\), 0.80 ns\({}^{d}\), 0.90 ns\({}^{f}\) |

* \({}^{a}\) Current theoretical prediction using the Fermi golden rule
* \({}^{b}\) Theoretical prediction using the Hefei-NAMD code in Ref [53]
* \({}^{c}\) Experimental measurement in Ref [54]
* \({}^{d}\) Theoretical prediction using the Fermi golden rule in Ref [55]
* \({}^{e}\) Experimental measurement from Ref [56]
* \({}^{f}\) Experimental measurement from Ref [57]

Table 2: The effective exciton lifetime calculated at room temperature (\(\tau_{eff}^{RT}\)) of the intrinsic GeS monolayer. Previous calculated and experimental values are also listed for comparison.
Figure 5: The direct valence-to-conduction band transition energies in the first Brillouin zone of the strained GeS monolayer, plotted as a color map. The red areas indicate the low-energy regime, while the green ones illustrate the high-energy regime. The wave functions of the first exciton state are visualized as blue circles. As the lattice constant is compressed, the electronic functions become more isotropic, and the exciton state I\({}^{1}\) corresponds to a 1s orbital. Conversely, with an elongated lattice constant, the wave functions along the zigzag and armchair directions exhibit noticeable differences. The energies of the 1s and 3d states become closer to each other, and the latter surpasses the former at +8% elongation of the lattice constant, resulting in a switch of their nature.
Based on the above discussion, it was found that strain plays a vital role in modifying the probability of recombination for excited states. Specifically, we conducted additional analysis on the radiative lifetimes of excitons in monolayer GeS. It is worth noting that the short lifetime of excitons can be advantageous for internal quantum efficiency and telecommunications applications. Conversely, the ultra-long timescale of electron-hole recombination is highly beneficial for advanced optoelectronic and thin-film photovoltaic cells. Using the methodology developed for assessing radiative exciton lifetimes in two-dimensional materials [58], the radiative lifetime \(\langle\tau_{S}\rangle\) at room temperature (T = 300 K) of exciton states S is defined as follows:
\[\langle\tau_{S}\rangle=\left(\frac{8\pi e^{2}E_{S}(0)}{\hbar^{2}c}\frac{\mu_{ S}^{2}}{A_{uc}}\right)^{-1}\frac{3}{4}\left(\frac{E_{S}(0)^{2}}{2M_{S}c^{2}} \right)^{-1}k_{B}T,\]
where \(A_{uc}\) is the area of the unit cell, \(\mu_{S}^{2}\) is the square modulus of the BSE exciton transition dipole divided by the number of 2D k-points, \(E_{S}(0)\) is the exciton energy calculated using the BSE method, and \(M_{S}=m_{e}^{*}+m_{h}^{*}\) is the effective mass of the exciton. It is important to note that \(m_{e(h)}^{*}\) here denotes the effective mass of the excited electron (hole) associated with the exciton bound state, not the effective mass of the CBM (VBM) discussed in the electronic transport section. At zero strain, the lifetimes of the I\({}^{1}\), I\({}^{2}\), and I\({}^{3}\) excitons are about 84.86 ps, 137.63 ns, and 21.17 ps, respectively. The ultra-long lifetime of the dark I\({}^{2}\) exciton arises from its extremely small dipole strength.
Assuming perfect thermalization of the exciton states, we further define an effective radiative lifetime \(\langle\tau_{eff}\rangle\) by averaging the decay rates over the lowest-energy bright and dark excitons:
\[\langle\tau_{eff}\rangle^{-1}=\frac{\sum_{S}\langle\tau_{S}\rangle^{-1}e^{-E_{ S}(0)/k_{B}T}}{\sum_{S}e^{-E_{S}(0)/k_{B}T}}.\]
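A small sketch of this Boltzmann average is shown below. Only the three lifetimes are quoted in the text; the exciton energies used here are assumed placeholders, so the printed value only approximates the reported effective lifetime:

```python
import numpy as np

def tau_eff(E_eV, tau_s, T=300.0):
    """<tau_eff>^{-1} = sum_S tau_S^{-1} e^{-E_S/kT} / sum_S e^{-E_S/kT}."""
    kT = 8.617333e-5 * T
    E = np.asarray(E_eV) - np.min(E_eV)        # shift for numerical stability
    w = np.exp(-E / kT)
    return np.sum(w) / np.sum(w / np.asarray(tau_s))

# Lifetimes of I^1, I^2, I^3 as quoted in the text; their energies are NOT
# given there, so the values below are assumed placeholders (in eV).
print(tau_eff([1.96, 1.99, 2.05], [84.86e-12, 137.63e-9, 21.17e-12]))
```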
The calculated effective exciton lifetime of the intrinsic GeS monolayer is shown in Table 2. We also include results for other 2D materials, obtained with the same and different schemes in the current and previous works, for comparison. The overall agreement between this approach and other theoretical predictions and experimental measurements is quite good, and thus our calculations are reliable. The GeS monolayer exhibits an effective exciton lifetime of approximately 84.86 ps, which is close to that of I\({}^{1}\) and comparable to blue phosphorene (179.05 ps), but faster than MoS\({}_{2}\) (0.83 ns) and MoSe\({}_{2}\) (0.87 ns). The dependence of the effective exciton lifetime on strain is shown in Figure 4(c); we also include the evolution of the effective mass and the relative oscillator strength of the bright exciton state I\({}^{1}\) for comparison, since it contributes significantly to the effective exciton lifetime. Generally, compressive strain leads to a rapid increase in \(\langle\tau_{eff}\rangle\) due to the decreased oscillator strength and increased effective mass. For example, at 0% strain, \(\langle\tau_{eff}\rangle\) is approximately 84.86 ps, while at -4% and -8% strains, the \(\langle\tau_{eff}\rangle\) values are approximately 280.52 ps and 1514.59 ps, respectively. In contrast, despite a dramatic enhancement of the effective exciton mass, the radiative lifetime only gradually decreases under tensile strain and starts to increase again at +8% strain, most likely due to a slight change in the electron-hole recombination intensity. The
smallest effective radiative lifetime is about 19 ps at +6% tensile strain, which is 4.5 times and 80 times faster than those of the intrinsic and the -8% compressively strained cases, respectively.
## 4 Conclusions
To summarize, our study examined the impact of biaxial strain on the electronic transport properties and exciton radiative lifetime of the GeS monolayer. The theoretical work is based on a careful combination of high-precision simulations and appropriate theoretical models: DFT, MBPT, NEGF, deformation potential theory for the carrier mobility, and the Fermi golden rule for the exciton lifetime. Our findings revealed a significant enhancement in the I-V characteristic when the lattice is compressed, owing to the improved carrier mobility. The optical gap, the anisotropic optical properties, the absorption coefficient, and the exciton binding energy depend strongly on the applied biaxial strain. Moreover, strain can also finely adjust the time scale of electron-hole recombination. With electronic and optical properties that can be flexibly modified via strain engineering, the GeS monolayer may hold great potential for high-tech applications such as ultrafast FETs, PVs, and optoelectronic devices.
## Acknowledgments
This research was funded by the Vietnam Ministry of Education and Training under grant number B2023-TCT-03.
|
2302.08401 | LinSets.zip: Compressing Linear Set Diagrams | Linear diagrams are used to visualize set systems by depicting set
memberships as horizontal line segments in a matrix, where each set is
represented as a row and each element as a column. Each such line segment of a
set is shown in a contiguous horizontal range of cells of the matrix indicating
that the corresponding elements in the columns belong to the set. As each set
occupies its own row in the matrix, the total height of the resulting
visualization is as large as the number of sets in the instance. Such a linear
diagram can be visually sparse and intersecting sets containing the same
element might be represented by distant rows. To alleviate such undesirable
effects, we present LinSets.zip, a new approach that achieves a more
space-efficient representation of linear diagrams. First, we minimize the total
number of gaps in the horizontal segments by reordering columns, a criterion
that has been shown to increase readability in linear diagrams. The main
difference of LinSets.zip to linear diagrams is that multiple non-intersecting
sets can be positioned in the same row of the matrix. Furthermore, we present
several different rendering variations for a matrix-based representation that
utilize the proposed row compression. We implemented the different steps of our
approach in a visualization pipeline using integer-linear programming, and
suitable heuristics aiming at sufficiently fast computations in practice. We
conducted both a quantitative evaluation and a small-scale user experiment to
compare the effects of compressing linear diagrams. | Markus Wallinger, Alexander Dobler, Martin Nöllenburg | 2023-02-16T16:26:27Z | http://arxiv.org/abs/2302.08401v1 | # LinSets.zip: Compressing Linear Set Diagrams
###### Abstract
Linear diagrams are used to visualize set systems by depicting set memberships as horizontal line segments in a matrix, where each set is represented as a row and each element as a column. Each such line segment of a set is shown in a contiguous horizontal range of cells of the matrix indicating that the corresponding elements in the columns belong to the set. As each set occupies its own row in the matrix, the total height of the resulting visualization is as large as the number of sets in the instance. Such a linear diagram can be visually sparse and intersecting sets containing the same element might be represented by distant rows. To alleviate such undesirable effects, we present LinSets.zip, a new approach that achieves a more space-efficient representation of linear diagrams. First, we minimize the total number of gaps in the horizontal segments by reordering columns, a criterion that has been shown to increase readability in linear diagrams. The main difference of LinSets.zip to linear diagrams is that multiple non-intersecting sets can be positioned in the same row of the matrix. Furthermore, we present several different rendering variations for a matrix-based representation that utilize the proposed row compression. We implemented the different steps of our approach in a visualization pipeline using integer-linear programming, and suitable heuristics aiming at sufficiently fast computations in practice. We conducted both a quantitative evaluation and a small-scale user experiment to compare the effects of compressing linear diagrams.
Set Visualization, Linear Diagrams, User Evaluation, Computational Experiment
## 1 Introduction
Set systems occur naturally in various use-cases, such as social networks, document analysis, biological data, or more generally, whenever categorical data can be grouped. The visualization of such set systems is crucial in understanding the relationship between elements and sets, sets and sets, or attributes and sets. Linear diagrams are a set visualization approach that has been recently proposed [1]. In such linear diagrams sets are depicted as one or more line segments in a matrix. Each row represents a single set and each column represents a single element. Each line segment of a set is shown in a contiguous range of cells of the matrix whenever the corresponding elements (columns) belong to the set (row). The focus of linear diagrams is on aggregating membership of individual elements as contiguous segments, thus, focusing on overlapping sets similar to Euler diagrams. However, it has been empirically shown in user studies that linear diagrams significantly outperform Euler diagrams on set-theoretic tasks [2, 3]. Reasons are that Euler diagrams might not be well-matched or proportional depending on the structure of the represented data and therefore are harder to read or it is even impossible to visualize all occurring relationships between sets [4].
One observation about linear diagrams is that they liberally use vertical, and to a lesser degree, horizontal space as each set occupies its own individual row in the matrix. In the case of vertical space, set-to-set relationship tasks become harder for sets that are positioned in rows that are far apart. Similarly, as labels are usually positioned above the diagram, element-to-set relationships become harder with increasing distance between sets and element labels. In case of horizontal space, sets containing only a small number of elements consequently produce lots of horizontal white space. Therefore, in cases where the available screen size is restricted linear diagrams might be a suboptimal choice as either the diagram must be scaled down or the dataset can not be shown at the same time.
Our alternative approach LinSets.zip presents space-efficient linear set diagrams while still preserving as many design principles of linear diagrams as possible; see Figure 1. As it is considered best practice in linear diagrams, our approach also optimizes the number of line segments necessary to represent each set. These line segments resemble _blocks_ in our visualization due to the necessary label placement inside them. Hence, we only use the term blocks throughout the paper even when specifically talking about linear diagrams. In LinSets.zip vertical space is used more efficiently by allowing multiple compatible sets (i.e., not sharing any elements) to occupy the same row, using different colors to distinguish them. This works especially well when a set system has many non-overlapping sets. We formulate this problem as a graph coloring problem that gives rise to multiple, more restrictive, formulations. In the base variant the goal is to maximally reduce the number of rows without violating the compatibility. Other variations restrict the possibility of blocks of different sets alternating or restrict the number of alternating sets to two. All mentioned variations can additionally be bounded in the total number of sets that can be placed in each row.
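A minimal prototype of the base variant can be obtained by building a conflict graph whose vertices are the sets and whose edges join any two sets sharing an element, and then properly coloring it so that each color class becomes one row. The sketch below uses a greedy heuristic from networkx on a toy set system of our own; it is a baseline only, not the integer-linear programs used in LinSets.zip:

```python
import networkx as nx

# Vertices are sets; an edge joins two sets that share an element and hence
# cannot occupy the same row. The toy set system is ours, and greedy coloring
# is only a baseline -- not the ILP formulations used in LinSets.zip.
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5, 6}, "D": {7}, "E": {1, 7}}
G = nx.Graph()
G.add_nodes_from(sets)
names = list(sets)
for i, s in enumerate(names):
    for t in names[i + 1:]:
        if sets[s] & sets[t]:              # incompatible sets
            G.add_edge(s, t)

rows = nx.coloring.greedy_color(G, strategy="largest_first")
print(rows)                                 # set -> row index
print("rows used:", max(rows.values()) + 1, "instead of", len(sets))
```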
We present several visualization styles that utilize the different variants while still being similar to linear diagrams. We introduce the concept of _block links_, thin lines that connect different blocks of the same set, which aid the viewer in distinguishing between different sets in the same row.
As the underlying column ordering and graph coloring problems are computationally hard, we implemented LinSets.zip using both heuristics and exact algorithms that yield optimal solutions. The source code of the implementation is available on OSF.1 Based on this, we
performed a quantitative analysis on runtime and quality criteria to give an indication of what problem sizes can be efficiently solved optimally and the trade-off of using fast heuristics compared to optimal algorithms.
Furthermore, we also conducted a small-scale user study to give an indication of whether placing several sets in the same row affects users performing typical set visualization tasks. Here, we asked the participants to perform five standard set- and element-based tasks [5] on static images generated with four variants of our framework. We measured task completion time and accuracy.
## 2 Related Work
Set visualization is a subfield of information visualization that has been extensively investigated. Often, set visualization is integrated in visualization systems to provide a supplementary view on the presented data. We focus mainly on visualizations that consider only abstract set data, where no additional attributes in the data are necessary.
Alsallakh et al. [5] published a state-of-the-art report on set visualization in 2016. They provide an overview of existing set visualization techniques and approaches and classify them by visual representation, scalability, and capacity for solving set-theoretic tasks according to a taxonomy. The taxonomy itself gives an overview of set-theoretic tasks that commonly arise in set visualization systems. The tasks are classified into three categories: _element-based tasks_ are concerned with relationships between elements and sets; _set-based tasks_ are concerned with relationships between two or more sets; _attribute-based tasks_ are concerned with the relationship between element attributes and their appearance and distribution in sets. Overall, they define a total of 26 tasks and six categories of set visualizations. In the related work we focus on the three most relevant categories.
Generally, Euler and Venn type diagrams are the most intuitive. In both variants sets are depicted as closed curves whose overlapping regions represent set intersections. In Venn diagrams all possible intersections are shown, even if they are empty; Euler diagrams show only non-empty set intersections. For Euler and Venn diagrams, automated approaches with regular shapes (e.g., [6, 7]) as well as irregular shapes (e.g., [8, 9, 10]) have been proposed. Since well-formed Euler and Venn diagrams do not always exist, the resulting visualization might not be well-matched, i.e., non-existing intersections are shown or existing intersections are shown twice. Due to this observation, Euler diagrams hardly scale for datasets beyond 4-6 sets [5].
Matrix-based techniques are another class of techniques. Here, sets and elements are depicted as rows and columns of a matrix. Set membership of elements is indicated by coloring or marking the respective cell of the matrix with a glyph. Typically, approaches in this class are designed for interactive analysis and exploration, e.g., by permuting rows or columns. Examples, such as RainBio [11] or UpSet [12], are powerful and scalable visual analytics systems that incorporate multiple views on the data; however, due to their complexity they are not necessarily intuitive. LinSets.zip is related to matrix-based techniques, as the underlying visual metaphor is in its essence a matrix. Similarly, rows and columns can be permuted; however, LinSets.zip focuses on minimizing blocks by reordering columns to increase readability rather than highlighting patterns in the data. Furthermore, multiple rows of the matrix are compressed to show a more compact representation. Also, similar to linear diagrams, a block in LinSets.zip represents an aggregation over multiple cells in a row instead of explicitly indicating the membership of individual elements.
Aggregation-based techniques, such as Radial Sets [13] or MetroSets [14], handle an increasing number of elements by not explicitly showing individual elements belonging to sets but rather aggregating multiple set elements into one visual element to indicate membership frequency. Often, they are combined with interactivity, as in UpSet [12], Dual Radial Sets [15], or RainBio [11], to show details of specific sets or elements on demand. The most relevant aggregation-based technique with respect to LinSets.zip is linear diagrams [1]. The underlying visual metaphor of linear diagrams is a matrix where each set is represented as a horizontal line in its respective row and each element as a column. Contrary to matrix-based techniques, contiguous cells of the matrix are represented by a single block to indicate that all elements in this range belong to the set. Several user studies have been conducted [2, 16, 17], showing that linear diagrams perform equally well or better than other diagram types. Similarly, different design decisions and their impact on readability and task performance have been compared [1, 18]; findings indicated that minimizing the number of blocks (called line segments there) in the linear diagram had the largest effect on readability. Reordering columns to minimize blocks is an NP-hard problem [19, 20], but heuristic [2, 16, 21] and exact [20] algorithms to compute solutions have been proposed. Lastly, interactivity [22] in linear diagrams has been investigated.
LinSets.zip builds on the main findings of linear diagrams.
Fig. 1: A linear diagram (a) of a project management dataset. While the linear diagram uses vertical space very liberally, both variants (b) and (c) of the LinSets.zip approach present a more compact representation.
Columns are reordered to reduce blocks while the visual encoding is kept similar to what has been experimentally validated. However, space is better utilized, as several sets can be compressed into the same row. Additionally, LinSets.zip gives the option to show all set elements explicitly, which we consider viable for up to 50 elements.
Compressing multiple sets into a single row is not an entirely novel concept in set visualization and has been explored previously. The rainbow boxes system [23] allows compression of variable-height rows. Moreover, timelines or Gantt charts generally allow compression, as in TimeSets [24]. In both examples the problem statement differs from that of LinSets.zip, and the proposed approaches only consider greedy heuristics. We show that finding an optimal solution is possible in reasonable time.
## 3 Design Decisions
The degrees of freedom in the design of a linear diagram can be separated into two concerns. The first concern is the layout of a linear diagram, which is determined by the mapping of sets and elements to the axes of a matrix and subsequently by the row and column order of said matrix. While the mapping only affects the orientation of the diagram, the order of rows and columns has more drastic implications. The column order impacts the number of blocks necessary to represent a set. For example, if elements \(a\) and \(c\) are in the same set, then the order \((a,b,c)\) requires two blocks while the order \((a,c,b)\) requires only one. Moreover, the row order determines the visual distance between sets. The second concern is how the data is encoded as graphical features. Here, the thickness of blocks, color of blocks, label placement for elements and sets, guide-lines to labels, and margins all impact the appearance of the diagram.
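To make the effect of the column order concrete, the following minimal Python sketch counts how many blocks a set needs under a given order (the helper name `count_blocks` is ours, not part of the implementation):

```python
def count_blocks(order, s):
    """Count the maximal runs of consecutive elements of set `s` under `order`."""
    blocks, inside = 0, False
    for e in order:
        if e in s and not inside:
            blocks += 1          # a new block starts here
        inside = e in s
    return blocks

# The example from the text: {a, c} needs two blocks under (a, b, c), one under (a, c, b).
assert count_blocks(["a", "b", "c"], {"a", "c"}) == 2
assert count_blocks(["a", "c", "b"], {"a", "c"}) == 1
```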
From a theoretical perspective on set visualization, linear diagrams scale well with an increasing number of sets as all set-to-set relationships can be visualized. However, the height of a linear diagram is proportional to the number of sets in the set system as each row represents exactly one set. We started our investigation based on this observation with the assumption in mind that vertical space is limited. For example, static diagrams in print media, multi-view visualization interfaces, or even interactive single-view visualization interfaces all have constraints on what can be shown or perceived at the same time.
Furthermore, tasks where blocks of different sets are vertically distant should be intuitively harder to solve than tasks where the blocks are vertically close. Even though this could be tackled with integrating interactivity to allow reordering of rows, this could over-complicate the interface and is a non-viable solution for static visualizations.
From these observations our initial idea was to find a more space-efficient representation of linear diagrams while improving, or at least keeping, a similar level of readability. Our approach computes a layout that reduces the vertical space necessary to draw a linear diagram by packing multiple _compatible_ sets into the same row. First, we state the three definitions of compatibility, which result in different visual encodings as seen in Figure 2. We say that two sets \(A\) and \(B\) _alternate_ if their blocks alternate under a given column order, i.e., a block of \(A\) is followed by a block of \(B\), which again is followed by a block of \(A\) (or vice versa); a small sketch of this check follows the list below.
* \(\Gamma_{1}\): Two sets are compatible if they do not intersect.
* \(\Gamma_{2}\): Two sets are compatible if they do not intersect and their blocks do not alternate within a given column order.
* \(\Gamma_{3}\): Two sets are compatible if they do not intersect and their blocks alternate only with each other within a given column order.
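As a concrete reading of the alternation condition, here is a Python sketch (helper names are illustrative, and we assume the two sets are disjoint) that extracts the blocks of a set under a column order and tests whether two sets alternate, i.e., whether the sequence of block owners switches more than once:

```python
def blocks(order, s):
    """Return the blocks of set `s` as (start, end) column-index intervals."""
    out, start = [], None
    for i, e in enumerate(order):
        if e in s and start is None:
            start = i
        elif e not in s and start is not None:
            out.append((start, i - 1))
            start = None
    if start is not None:
        out.append((start, len(order) - 1))
    return out

def alternate(order, s1, s2):
    """True if a block of one set lies strictly between two blocks of the other."""
    labeled = [(b, 1) for b in blocks(order, s1)] + [(b, 2) for b in blocks(order, s2)]
    owners = [owner for _, owner in sorted(labeled)]   # block owners in column order
    switches = sum(1 for a, b in zip(owners, owners[1:]) if a != b)
    return switches > 1
```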
Compatibility definition \(\Gamma_{1}\) is the minimum requirement, which in turn allows us to maximally compress the linear diagram. However, in this case we can only distinguish between blocks of different sets in the same row by color. This is problematic, as humans cannot reliably distinguish between many colors, and it is not inclusive for people with color vision deficiencies.
To alleviate this problem we propose the refined compatibility definitions \(\Gamma_{2}\) and \(\Gamma_{3}\), which further restrict which sets may be put in the same row based on the occurrence of alternating blocks. The reason is that we can then use additional visual elements to redundantly encode which blocks belong to the same set. We call this concept _block links_: thin straight lines connecting all blocks of a set, as seen in Figure 1(c). For \(\Gamma_{2}\) we can draw the block links in the vertical center to indicate blocks belonging to the same set, as blocks of different sets never alternate. In the case of \(\Gamma_{3}\) we draw the block links at either the top or the bottom of the row, and therefore we can only allow blocks of two sets to alternate at the same time. Furthermore, maximally compressing the rows can create diagrams that are visually too dense; we therefore propose the option to limit the number of sets that can be placed in the same row by a positive integer \(B\).
While we have introduced the idea of compressing several sets into the same row to create more compact layouts, we have not yet discussed what is considered best practice for linear diagrams and how this ties into our approach. We looked at existing design principles of linear diagrams that have been empirically evaluated. Three design principles had a statistically significant positive impact on set-theoretic tasks [1].
Fig. 2: Different visual encodings of LinSets.zip compared to linear diagrams (a). \(\Gamma_{1}\) (c) allows non-intersecting sets in the same row. \(\Gamma_{2}\) (b) additionally requires that blocks do not alternate. \(\Gamma_{3}\) (d) allows for a maximum of two sets to have alternating blocks.
* **Design Principle 1:** draw linear diagrams with a minimal number of blocks.
* **Design Principle 2:** draw linear diagrams with guide-lines at the beginning and end of blocks.
* **Design Principle 3:** draw linear diagrams with thin blocks.
The first design principle still applies to our approach; therefore, we try to minimize the number of blocks over all sets by reordering the columns. In the case of \(\Gamma_{1}\), compressing sets is independent of the column order. For \(\Gamma_{2}\) and \(\Gamma_{3}\) this is not the case, as the column order directly determines whether blocks alternate. Here, we propose to first order the columns to minimize the number of blocks before compressing the linear diagram, as this should decrease the visual complexity. The second design principle recommends using guide-lines, thin unobtrusive vertical lines, as a visual aid to indicate the beginning and end of blocks. This can be applied to our design as it does not interfere with compression. Adhering to the third design principle is more difficult, if not impossible. As our approach packs multiple sets into the same row, it is impossible to place a single set label at the beginning of each row. Therefore, we have to place labels near one of the blocks of each set. Here, we think that increasing the line height of blocks in order to embed the labels within them helps in identifying which label belongs to which set. Consequently, we cannot use thin lines to represent sets.
After implementing a first prototype we conducted an expert interview with a graphic designer. The designer provided valuable feedback and recommended using margins between blocks in the same row and between rows, adding a white background to text labels to make them stand out, and limiting the color palette.
## 4 Overview of the LinSets.zip Approach
In this section we give a high level overview of the LinSets.zip approach, which is modelled as a four-stage (I-IV) pipeline depicted in Figure 3. We provide different modules for each pipeline stage.
The input is an abstract set system modelled as a hypergraph \(\mathcal{H}=(\mathcal{V},\mathcal{S})\). The vertex set \(\mathcal{V}\) represents all elements in the set system and \(\mathcal{S}\subseteq 2^{\mathcal{V}}\) the sets themselves. A detailed description of the algorithms and techniques used in stages (I) and (II) can be found in Section 5.
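For concreteness, such an input can be represented in Python as follows (names and data are illustrative only):

```python
# A toy set system H = (V, S): elements are vertices, sets are hyperedges.
elements = ["Alice", "Bob", "Carol", "Dave", "Eve"]        # V
sets = {                                                    # S, as name -> hyperedge
    "Project 1": frozenset({"Alice", "Bob"}),
    "Project 2": frozenset({"Bob", "Carol"}),
    "Project 3": frozenset({"Dave", "Eve"}),
}
```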
**Column Ordering (I).** In the first stage we reorder the columns in LinSets.zip such that the total number of blocks is minimized. This is equivalent to reordering columns in linear diagrams, and the same techniques can be applied. Unfortunately, this problem is NP-hard, but we can reformulate finding an optimal column order as a traveling salesperson problem. We provide modules that solve this problem optimally using an efficient traveling salesperson solver or heuristically. The output of this stage is an ordering \(\pi^{c}\) of the columns.
**Row Compression (II).** Next, we are concerned with minimizing the total height of the LinSets.zip diagram by compressing multiple compatible sets into the same row. To reiterate from Section 3, we present three definitions of set compatibility that each have implications for the later rendering. The first variant \(\Gamma_{1}\) can assign sets to the same row if they do not pairwise intersect. This can be modelled via a conflict graph in which each vertex represents a set and edges represent conflicts, namely intersections between sets. By solving a graph coloring problem on the conflict graph, a mapping of sets to rows can be extracted, as each color represents a row in the matrix. In the second variant \(\Gamma_{2}\) we additionally require that blocks of sets in the same row do not alternate. Similarly, the third variant \(\Gamma_{3}\) allows blocks of at most two sets in a row to alternate at the same time. Both variants add constraints to the coloring problem. While \(\Gamma_{1}\) works independently of a given column order, a column order is required for \(\Gamma_{2}\) and \(\Gamma_{3}\), as the definition of alternation depends entirely on the order of columns.
Moreover, in some cases it might be beneficial to restrict the maximum number of sets per row, e.g., to leave space for labels or to guarantee unique colors from a given palette. This can be modelled as a bounded graph coloring problem. Graph coloring problems are NP-hard [25], but for both the unbounded and the bounded coloring problem we provide modules that compute an optimal solution with the minimum number of rows as well as a heuristic solution. The output of this stage is a mapping of sets to rows.
**Row Order and Color Assignment (III).** After computing a column order and row mapping we have to consider two more aspects that impact the LinSets.zip diagram. Firstly, the row order is not fixed. Potentially, we can apply the same procedure as for column ordering to reduce vertical blocks -- consecutive rows of a column that all contain a block. However, compared to horizontal blocks it is unclear what implications this additional step has on readability. As row order has no impact on compression and the instances in the user experiment were small, we opted to use a random order.
Secondly, the assignment of colors is non-trivial, as the total number of sets can exceed the number of colors available in our palette. The same color should not be assigned to two sets in the same row, and the perceptual distance between colors assigned within a row should be maximal. This problem is known as the maximum differential coloring problem [26] and is NP-hard. Runtime experiments on a prototype showed that, unlike for (I) and (II), solving this problem optimally would bottleneck the overall runtime of the pipeline. A heuristic solution ran faster but only marginally improved on a more simplistic approach. Therefore, we implemented a circular color assignment that assumes a fixed row order. Both problems raise interesting questions on their own; however, due to space limitations we do not give a more comprehensive description.
**Rendering (IV).** Finally, we need to render the output by using the results computed in the previous stages (I-III). In total, we provide four rendering styles that capture linear diagrams and the three compatibility definitions of LinSets.zip. All styles are built on the visual metaphor of a matrix. Rows are used to represent (multiple) sets and columns represent the individual elements. Blocks are drawn as horizontal bars covering the respective set elements. The blocks are colored with one of the available colors in the Tableau10 color palette. For all styles we place
labels for elements or the intersection cardinality above the matrix. Vertical guide-lines indicate the beginning and end of intersections. For linear diagrams we place set labels to the left of the matrix, while for the LinSets.zip styles labels are placed in the largest block of their respective set. For \(\Gamma_{1}\), only color is used to distinguish blocks of different sets in the same row. For \(\Gamma_{2}\), block links are drawn in the center from the first to the last block under the given column order; for \(\Gamma_{3}\), where blocks of two sets are allowed to alternate, they are drawn at the top or bottom of blocks. We use whitespace as margin between blocks in the same row and between rows.
## 5 Algorithms
In this section we go into detail about the different _exact_ and _heuristic_ algorithms used in the modules for stages I and II of the pipeline.
### _Column Ordering (Step I)_
We minimize the number of drawn blocks by computing a column ordering \(\pi^{c}\). This is done by formulating the problem on binary matrices and using a known traveling salesperson (TSP) formulation. For the input hypergraph \(\mathcal{H}\) we compute the binary matrix \(A\) of size \(|\mathcal{S}|\times|\mathcal{V}|\) such that \(A_{i,j}=1\) if and only if vertex \(v_{j}\) is contained in the hyperedge \(S_{i}\). We seek a column ordering of \(A\) that minimizes the number of so-called _blocks of consecutive ones_, which are maximal runs of ones in a row of the matrix. There is a one-to-one correspondence between these blocks of ones and the blocks in the linear diagram. The corresponding minimization problem is known as _Consecutive Block Minimization_ and is NP-hard [27]. However, the problem can be formulated as a TSP instance [28]: First we add an auxiliary column of ones to the matrix \(A\). Then we construct from \(A\) a graph \(G\) such that the vertices of \(G\) correspond to the columns of \(A\). The distance \(D_{i,j}\) between two vertices \(v_{i}\) and \(v_{j}\) in \(G\) is the _Hamming distance_ \(\sum_{1\leq k\leq|\mathcal{S}|}|A_{k,i}-A_{k,j}|\) between the corresponding columns \(c_{i}\) and \(c_{j}\). The auxiliary vertex \(v\) corresponding to the added column of ones serves as start and end vertex of the tour, and from a minimum-length tour in \(G\) we obtain a permutation of the columns of \(A\) with the minimum number of blocks of consecutive ones (blocks in the linear diagram), where the auxiliary column is the last one in the permutation. By removing the auxiliary column from this permutation, we obtain a permutation of the original matrix that serves as the permutation of the set \(\mathcal{V}\).
_Exact._ To find a tour of minimal length in \(G\), the exact pipeline uses the Concorde TSP solver2.
Footnote 2: [https://www.math.uwaterloo.ca/tsp/concorde.html](https://www.math.uwaterloo.ca/tsp/concorde.html)
_Heuristic._ The heuristic version applies the simulated annealing algorithm of NetworkX to find a short TSP tour, starting with an approximation based on the Christofides algorithm [29].
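The following is a minimal sketch of this heuristic module, assuming the binary matrix \(A\) is given as a numpy array. For brevity the sketch uses NetworkX's built-in greedy initial tour, whereas our implementation starts from a Christofides tour; the function name `order_columns` is illustrative:

```python
import networkx as nx
import numpy as np
from networkx.algorithms.approximation import simulated_annealing_tsp

def order_columns(A):
    """Heuristically order the columns of binary matrix A (|S| x |V|) to
    minimize blocks of consecutive ones; returns a column permutation."""
    n = A.shape[1]
    B = np.hstack([A, np.ones((A.shape[0], 1), dtype=int)])  # auxiliary all-ones column n
    G = nx.complete_graph(n + 1)
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            # Hamming distance between columns i and j
            G[i][j]["weight"] = int(np.abs(B[:, i] - B[:, j]).sum())
    cycle = simulated_annealing_tsp(G, "greedy", weight="weight")
    cycle = cycle[:-1]                 # drop the repeated start vertex of the closed tour
    k = cycle.index(n)                 # rotate so the auxiliary column comes last ...
    return cycle[k + 1:] + cycle[:k]   # ... and remove it: this is pi^c
```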
### _Compression (Step II)_
Once we have an ordering \(\pi^{c}\) of the columns (the vertices \(\mathcal{V}\) of \(\mathcal{H}\)), we perform the actual compression of the linear diagram. Our aim is to use as few rows as possible. For now, let us assume that we do not want to connect blocks of a set by block links. The main observation is that we can place two sets \(S_{1},S_{2}\in\mathcal{S}\) of \(\mathcal{H}\) into the same row if and only if \(S_{1}\cap S_{2}=\emptyset\), independently of the column ordering \(\pi^{c}\). Hence, this can be modelled as a graph coloring problem: Let \(G\) be a _conflict graph_ with \(V(G)=\mathcal{S}\), \(E(G)=\{\{S_{1},S_{2}\}\mid S_{1},S_{2}\in\mathcal{S},S_{1}\cap S_{2}\neq\emptyset\}\), and \(C\) be a set of colors. A valid coloring \(\mathsf{col}:V(G)\to C\) of \(G\), that is \(\mathsf{col}(u)\neq\mathsf{col}(v)\) for \(\{u,v\}\in E(G)\), immediately gives us a compression of the linear diagram into \(|C|\) rows. Each color \(c\in C\) corresponds to a row in the linear diagram, and a set \(S\in\mathcal{S}\) is in row \(c\) if \(\mathsf{col}(S)=c\). We also give the option to bound the number of sets per row by a positive integer \(B\). This translates to finding a coloring \(\mathsf{col}\) such that for all \(c\in C\), \(|\mathsf{col}^{-1}(c)|\leq B\). The problem of finding such a coloring is known as _Bounded Vertex Coloring_ (BVC) and has been studied in the literature (refer to [30] for a survey).
For both the unbounded case and the bounded case we apply a heuristic and an exact algorithm. Prior to that, we compute a large clique \(K\subseteq V(G)\) using the NetworkX3 implementation of an approximation by Boppana and Halldorsson [31]. Every valid coloring requires at least \(|K|\) colors, and we pre-specify a different color for each vertex \(v\in K\). We now present our algorithms for the two cases where the number of sets per row is unbounded or bounded.
Footnote 3: [https://networkx.org/](https://networkx.org/)
**Unbounded coloring.** The _heuristic_ algorithm to obtain a coloring with few colors is the NetworkX implementation of the greedy DSATUR algorithm [32]: The vertices are colored one by one in decreasing order of their _saturation_, where the saturation of a vertex \(v\) is the number of different colors assigned to the neighbors of \(v\). Ties are broken by decreasing degree. When a vertex is colored, it is assigned the first free color that none of its neighbors has.
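A minimal sketch of this module, assuming the set system is given as a dict of frozensets as above (NetworkX's `saturation_largest_first` strategy is its DSATUR implementation; the function name is illustrative):

```python
import networkx as nx

def compress_rows_gamma1(sets):
    """Gamma_1 heuristic: map each set to a row via DSATUR coloring of the conflict graph."""
    G = nx.Graph()
    G.add_nodes_from(sets)
    names = list(sets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if sets[a] & sets[b]:      # intersecting sets must not share a row
                G.add_edge(a, b)
    return nx.greedy_color(G, strategy="saturation_largest_first")  # set -> row
```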
_Exact._ For the exact unbounded coloring we use a standard ILP formulation of the coloring problem. Let \(C=\{c_{1},\ldots,c_{U}\}\) be a set of colors, where \(U\) is the number of colors required by the heuristic. The ILP has binary variables \(x_{v,c}\) for \(v\in V(G)\) and \(c\in C\), and binary variables \(y_{c}\) for \(c\in C\). Variables \(x_{v,c}\) should be one if and only if vertex \(v\)
Fig. 3: First, the LinSets.zip approach reorders elements (a) of the input before a compression variant is applied (b). Next, rows are reordered and colors are assigned to sets (c). Finally, the diagram is rendered in the LinSets.zip style (d).
has color \(c\), variables \(y_{c}\) should be one if and only if at least one vertex has color \(c\). We obtain the following formulation.
minimize: \[\sum_{c\in C}y_{c} \tag{1}\]
subject to:
\[\sum_{c\in C}x_{v,c}=1,\qquad v\in V(G) \tag{2}\]
\[x_{u,c}+x_{v,c}\leq y_{c},\qquad\{u,v\}\in E(G),\;c\in C \tag{3}\]
\[y_{c_{i+1}}\leq y_{c_{i}},\qquad i=1,\ldots,U-1 \tag{4}\]
\[x_{v,c_{v}}=1,\qquad v\in K \tag{5}\]
The objective (1) minimizes the number of required colors. (2) ensures that each vertex has exactly one color. (3) ensures that two adjacent vertices have different colors, while also forcing the \(y\)-variables to take the correct value. We assume that each vertex has at least one neighbor for this constraint to work; otherwise we have to add further constraints \(x_{v,c}\leq y_{c}\) for each vertex \(v\) without neighbors and each color \(c\in C\). (4) reduces the search space. For each \(v\in K\), \(c_{v}\) is a color in \(C\) such that \(c_{u}\neq c_{v}\) for two different \(u,v\in K\); with (5) we fix these colors for the clique \(K\). It follows that from the values \(x_{v,c}\) in an optimal solution of the ILP we obtain a vertex coloring of \(G\) with the minimum number of required colors, which gives an assignment of sets to rows that minimizes the number of rows. Namely, we color vertex \(v\) with color \(c\) if and only if \(x_{v,c}=1\), or, in other words, we put set \(v\) into row \(c\).
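A sketch of this ILP in gurobipy (model, variable, and function names are ours; the bounded variant adds constraint (6) analogously, as noted in the comment):

```python
import gurobipy as gp
from gurobipy import GRB

def exact_coloring(G, U, clique):
    """Solve ILP (1)-(5) on conflict graph G, with U colors from the heuristic
    upper bound and a pre-computed clique for symmetry breaking."""
    C = list(range(U))
    m = gp.Model("row-compression")
    x = m.addVars(list(G.nodes), C, vtype=GRB.BINARY, name="x")
    y = m.addVars(C, vtype=GRB.BINARY, name="y")
    m.setObjective(y.sum(), GRB.MINIMIZE)                               # (1)
    m.addConstrs((x.sum(v, "*") == 1 for v in G.nodes), name="onecol")  # (2)
    m.addConstrs((x[u, c] + x[v, c] <= y[c]
                  for u, v in G.edges for c in C), name="conflict")     # (3)
    m.addConstrs((y[c + 1] <= y[c] for c in C[:-1]), name="order")      # (4)
    for c, v in enumerate(clique):                                      # (5)
        m.addConstr(x[v, c] == 1)
    # Bounded variant (6): m.addConstrs(x.sum("*", c) <= B * y[c] for c in C)
    m.optimize()
    return {v: c for v in G.nodes for c in C if x[v, c].X > 0.5}
```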
**Bounded coloring.** For \(B=2\) we apply Edmonds' blossom algorithm [33] to compute a maximum matching in the complement graph \(G^{c}\) of \(G\). The graph \(G^{c}\) has the same vertices as \(G\), and \(\{u,v\}\in E(G^{c})\) if and only if \(\{u,v\}\not\in E(G)\). A matching \(M\subseteq E(G^{c})\) of maximum size immediately gives us a bounded vertex coloring of \(G\) with the minimum number of colors: for each \(\{x,y\}\in M\), we color \(x\) and \(y\) with the same color; all other vertices receive pairwise different colors. We obtain a coloring with \(|V(G)|-|M|\) colors, which is the minimum possible--if there were a coloring with fewer colors, there would be a larger matching in \(G^{c}\). As the algorithm runs in polynomial time, we do not need a heuristic for this case.
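A sketch of this module using NetworkX (the function name is illustrative):

```python
import networkx as nx

def compress_rows_b2(G):
    """B = 2: pair non-conflicting sets via a maximum matching in the complement graph."""
    M = nx.max_weight_matching(nx.complement(G), maxcardinality=True)
    col, row = {}, 0
    for u, v in M:                     # each matched pair shares one row
        col[u] = col[v] = row
        row += 1
    for v in G.nodes:                  # every unmatched set gets its own row
        if v not in col:
            col[v] = row
            row += 1
    return col                         # |V(G)| - |M| rows in total
```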
Let us consider the case \(B>2\). Our _heuristic_ is a slightly adapted version of the DSATUR vertex coloring heuristic from before. When coloring a vertex \(v\), we must ensure that at most \(B\) vertices use a given color, in addition to the color being different from those of \(v\)'s neighbors. The new saturation of a vertex \(v\) is the number of distinct colors in the union \(C_{1}\cup C_{2}\), where \(C_{1}\) are the colors of \(v\)'s neighbors and \(C_{2}\) are the colors \(c\in C\) with \(|\mathsf{col}^{-1}(c)|=B\). Vertices are again colored in decreasing order of their saturation. The augmented heuristic thus computes a bounded vertex coloring.
_Exact._ The exact algorithm is an adaptation of the ILP above. Again, the computed clique \(K\) gives a lower bound on the required number of colors, and the new upper bound \(U\) is computed by the heuristic algorithm for bounded vertex coloring explained above. The ILP consists of (1)-(5) together with the following constraint, which ensures that no more than \(B\) vertices have the same color.
\[\sum_{v\in V(G)}x_{v,c}\leq y_{c}\cdot B, c\in C \tag{6}\]
The optimal solution of the ILP is transformed into an assignment of sets to rows in the same way as before.
**Considering block links.** To be able to visualize block links in our two variants, we need to adapt the compression algorithms. Let us first specify at which column a block link of a set starts and where it ends. A block link has to cover all blocks of a set; hence, for a set \(S\in\mathcal{S}\) it starts at \(s_{S}=\min\{i\mid\pi^{c}(i)\in S\}\) and ends at \(e_{S}=\max\{i\mid\pi^{c}(i)\in S\}\). For a set \(S\in\mathcal{S}\) we define \(\text{range}(S)=[s_{S},e_{S}]\) as the _active range_ of the block link of \(S\).
_Heuristic and exact \(\Gamma_{2}\)._ Let us adapt our algorithms for compatibility \(\Gamma_{2}\). If for two sets \(S,S^{\prime}\in\mathcal{S}\) the block link ranges \(\text{range}(S)\) and \(\text{range}(S^{\prime})\) overlap, then the sets cannot be placed in the same row. In fact, this is the only additional requirement, and we can model it by adding further edges to the conflict graph \(G\) defined above: For each pair of different sets \(S,S^{\prime}\in\mathcal{S}\) with \(\text{range}(S)\cap\text{range}(S^{\prime})\neq\emptyset\), we add the edge \(\{S,S^{\prime}\}\) to \(G\) if it does not already exist. The algorithms are the same as described before, but operate on the slightly adapted conflict graph \(G\).
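A sketch of this adaptation, assuming the conflict graph `G`, the `sets` dict, and the column order `order` from the earlier sketches:

```python
def add_gamma2_edges(G, sets, order):
    """Gamma_2: additionally forbid sets with overlapping block-link ranges
    from sharing a row by adding edges to the conflict graph G."""
    pos = {e: i for i, e in enumerate(order)}               # column index per element
    rng = {s: (min(pos[e] for e in sets[s]),                # range(S) = [s_S, e_S]
               max(pos[e] for e in sets[s]))
           for s in sets}
    names = list(sets)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if rng[a][0] <= rng[b][1] and rng[b][0] <= rng[a][1]:   # intervals overlap
                G.add_edge(a, b)
    return rng
```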
If at most two block links can be drawn at once in a column of a row (model \(\Gamma_{3}\)), then adapting our algorithms is not as straightforward: Instead of pairs of sets, we have to consider triples of sets. Namely, we call a triple \((S_{1},S_{2},S_{3})\in\mathcal{S}^{3}\) of pairwise different sets a _conflicting triple_ if \(\text{range}(S_{1})\cap\text{range}(S_{2})\cap\text{range}(S_{3})\neq\emptyset\), and let \(\mathcal{T}\) be the set of conflicting triples; each triple of conflicting sets appears only once in \(\mathcal{T}\). We adapt the ILP models and heuristics such that the sets of a conflicting triple are never all assigned the same color (the same row).
_Heuristic \(\Gamma_{3}\)._ We extend the DSATUR-based greedy heuristics by first redefining the saturation of a vertex. The saturation of an uncolored vertex \(v\) is the cardinality of the set \(C_{v}\), which consists of the colors of the neighbors of \(v\) and the colors \(c\) such that there exists a triple \(t\in\mathcal{T}\) with \(v\in t\) whose other two vertices are colored with color \(c\). In the case of bounded vertex coloring, \(C_{v}\) additionally contains the colors \(c\) with \(|\mathsf{col}^{-1}(c)|=B\). The vertices are colored in decreasing order of their saturation; ties between sets \(S\) (the vertices of \(G\)) are broken by decreasing values \(\text{degree}_{G}(S)+|\{t\in\mathcal{T}\mid S\in t\}|\). When a vertex \(v\) is colored, it is assigned the first color \(c\) that is not present in \(C_{v}\).
_Exact \(\Gamma_{3}\)._ For the ILP it would be easy to model the constraints on triples directly, but we can reduce the number of constraints by encapsulating the constraints imposed by multiple triples into a single one. For each \(S\in\mathcal{S}\) let \(T_{S}=\{S^{\prime}\in\mathcal{S}\mid s_{S}\in\text{range}(S^{\prime})\}\). If \(|T_{S}|\geq 3\) we add the following constraint to the ILP formulation, which allows at most two sets of \(T_{S}\) to receive a color \(c\):
\[\sum_{S^{\prime}\in T_{S}}x_{S^{\prime},c}\leq 2, c\in C,S\in\mathcal{S} \tag{7}\]
This captures all conflicting triples, as every conflicting triple is a subset of some \(T_{S}\). In this way we have at most \(|\mathcal{S}|\) constraints instead of \(|\mathcal{S}|^{3}\). The sets \(T_{S}\) can be computed by iterating over the values \(s_{S}\) and \(e_{S}\) in increasing order, and \(T_{S}\neq T_{S^{\prime}}\) for \(S\neq S^{\prime}\). Each constraint imposed by the triples in \(\mathcal{T}\) is modelled by at least one constraint of the form (7).
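A sketch of how the groups \(T_{S}\) could be collected from the block-link ranges (an illustrative helper, reusing `rng` from the previous sketch); one constraint of form (7) per group with at least three members then replaces the cubic number of triple constraints:

```python
def conflict_groups(rng):
    """For each set S, collect T_S: all sets whose block-link range contains s_S."""
    return {s: [t for t in rng if rng[t][0] <= rng[s][0] <= rng[t][1]]
            for s in rng}

# Only groups with |T_S| >= 3 yield a constraint of form (7).
```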
## 6 Quantitative Evaluation
We conducted a computational experiment with real-world data to be able to answer the following three questions in this section.
* **A:** What is the scalability of our exact algorithms with respect to runtime?
* **B:** How do our heuristic approaches compare to the exact approaches with respect to the quality metrics number of blocks (step I) and compression (step II)?
* **C:** By how much can we compress linear diagrams for real-world instances?
### _Experimental Setup_
**Computational environment.** All experiments were performed on a cluster of three nodes. Each node is equipped with two AMD EPYC 7402, 2.80GHz 24-core processors and 1 TB of RAM. All implementations were done in Python 3.7. The ILP formulations given in Section 5 were solved with the Gurobi4 optimizer. Multithreading was disabled, so the algorithms will perform similarly on end-user hardware with similar or higher processor frequency.
Footnote 4: [https://www.gurobi.com/](https://www.gurobi.com/)
For each instance in our dataset we performed 12 experiments covering the combinations of our algorithmic pipeline, both heuristic-based and exact. Each experiment consists of three steps, corresponding to the pipeline steps column ordering, compression, and row ordering; the rendering step (IV) has no influence on this experiment. In our evaluation we prioritize the first two steps, as the row ordering algorithm is equivalent to the column ordering algorithm, and column ordering is evidently more important [1]. In a heuristic experiment all of these steps are performed by heuristics; hence, the output of one step is the input of the next. In an exact experiment, all steps are performed by exact algorithms, unless they run into the pre-specified timeout of 300s--in that case the heuristic solution is used for the remaining pipeline steps. For the compression we consider six options: First, we evaluate the three compatibilities \(\Gamma_{1}\), \(\Gamma_{2}\), and \(\Gamma_{3}\) between sets. Second, we consider the unbounded case (\(B=\infty\)), where any number of sets can be put into a single row, and the bounded case (\(B=3\)), where an upper bound of three sets per row is specified. This results in \(2\cdot 3=6\) _compression variants_, and 12 combinations overall.
**Instances.** We performed all experiments on a set of real-world instances taken from DBLP5. These instances correspond to papers from the Graph Drawing conferences 1994-2021, the PacificVis conferences 2001-2022, and the Symposium on Computational Geometry 1985-2022. Each paper corresponds to a set and each author to an element. We disregard papers that do not share any author with another paper, as those do not influence the combinatorial complexity of an instance. We generated instances by taking all papers from one conference and from year \(x\) to year \(y\) with \(\text{first}\leq x\leq y\leq\text{last}\) and \(y-x\leq 10\), where first and last are the first and last year of the conference under consideration, e.g., 2001 and 2022 for PacificVis. Overall, we have 734 instances, with up to 627 sets and up to 877 elements.
Footnote 5: [https://dblp.org/](https://dblp.org/)
**Metrics.** For question A, we record the runtime for each step of the pipeline. Further, we record the number of blocks resulting from the column order, and the _compression ratio_, which is the ratio between the number of rows required by the compressed linear diagram and the number of sets in the instance.
### _Results_
Let us now present the results of our experiments, answering each question A-C separately.
**Scalability of exact algorithms (Question A).** We start by presenting the time required for the ordering of columns by our TSP model in the exact pipeline, denoted by \(t_{\text{ord}}\). We present these values in Figure 4 by the number of unique elements--elements that are non-equivalent with regard to their set membership, and whose number is directly proportional to the size of the resulting TSP instance. Each data point represents one instance, averaged over five executions of our algorithm. Even though the maximum number of unique elements is 662 and the underlying computational problem is NP-hard, only two instances exceeded the time limit of 300s. From the figure it is not evident that the runtime increases exponentially; we suspect that in most cases the generation of the distance matrix for the TSP solver dominates the actual time required by the solver, and the outliers are instances where the TSP solver itself takes more time. Nonetheless, instances with fewer than 100 columns, which are realistically suitable for a linear-diagram-based set visualization, all require less than two seconds.
Figure 5 shows the runtime for the different compression variants in the exact pipeline (\(t_{\text{comp}}\)) by the number of sets, including a running mean. We can clearly see that \(\Gamma_{1}\) without bound takes the least time, while the asymptotic growth of the running time follows a similar function for all compression variants. The outliers of \(\Gamma_{3}\) without bound result in the jagged running mean. Instances with fewer than 100 sets all require less than 5 seconds for compression.
Let us also briefly discuss the timeouts occurring during compression; the full data can be found on OSF. There was no timeout for \(\Gamma_{1}\) without bound, and all timeouts appeared in instances that contain over 200 sets. Each compression variant had fewer than 50 timeouts out of the 734 instances, the only exception being \(\Gamma_{3}\) without bound with 70 timeouts. To summarize this evaluation, we conclude that
Fig. 4: Time required for column ordering by number of unique elements for the exact pipeline. The \(y\)-axis is scaled logarithmically.
the exact pipeline works quite fast for all instances whose size is realistic for visualization in applications.
Detailed runtimes for the heuristics can be found on OSF. Overall, the whole heuristic pipeline took at most 125 seconds for the largest instance. The heuristics are faster than the exact algorithms, but most of the time the difference is within one order of magnitude. Thus, the exact pipeline is preferable unless dealing with hard instances that time out during execution.
**Performance of the heuristics (Question B).** Next we compare the heuristic pipeline to the exact pipeline with regard to quality metrics. For this, we present for our quality metrics _blocks_ and _compression ratio_ the exact-to-heuristic ratio (EH-ratio): the value of a quality metric achieved by the exact pipeline divided by the value achieved by the heuristic pipeline on the same instance.
The EH-ratio for the number of blocks is between 1 and 1.15 for all instances but one. That means that the heuristic algorithm is no more than 15% worse than the exact algorithm with regard to the number of blocks in all instances but one. A scatter plot of the EH-ratio for the number of blocks can be found on OSF.
Figure 6 shows the EH-ratio for the compression faceted by the different compression variants. Values below one are possible here because the models \(\Gamma_{2}\) and \(\Gamma_{3}\) depend on the column order, which differs between the heuristic and exact pipelines. Note that even though our pipeline uses exact modules, overall it does not necessarily compute a linear diagram with maximal compression for \(\Gamma_{2}\) and \(\Gamma_{3}\). The heuristics perform almost the same as the exact algorithms for \(\Gamma_{1}\) in both the bounded and the unbounded case. The heuristic for the unbounded case of \(\Gamma_{3}\) also performs quite well; we suspect that this is because \(\Gamma_{3}\) is not as restrictive as \(\Gamma_{2}\), and packing only a fixed number of sets into a row makes the problem "easier" in less restricted coloring variants. The largest differences between heuristic and exact pipelines are visible in the unbounded case for \(\Gamma_{2}\) and \(\Gamma_{3}\), which could have two reasons: Either different column orders have a large impact on the performance of a compression algorithm in these configurations, or our heuristics for these combinations can be further improved. Still, the running mean shows that in expectation the heuristic and exact pipelines perform rather similarly, with the exact one being slightly better.
**Achievable compression with LinSets.zip (Question C).** To conclude this evaluation, we present the compression ratios achieved by the exact pipeline for the different compression variants as boxplots with outliers in Figure 7. We see that in the bounded case the maximum achievable compression of 1/3 is reached in almost all instances. In the unbounded case we observe the expected differences between the models, resulting from the constraints that they impose on the compression: \(\Gamma_{1}\) allows for the most compression, followed by \(\Gamma_{3}\) and \(\Gamma_{2}\). Overall, LinSets.zip allows for significant compression in any variant on a large set of real-world data.
## 7 User Experiment
To better understand the impact of compressing linear diagrams on diagram readability, we conducted a small-scale user study measuring accuracy and task completion time over multiple tasks on static images generated with four distinct rendering styles. We compared linear diagrams (\(\Gamma_{0}\)) to the three variants of LinSets.zip that represent the different set compatibility definitions (\(\Gamma_{1}\), \(\Gamma_{2}\), and \(\Gamma_{3}\)). Mainly, we were interested in whether the higher information density of LinSets.zip diagrams would lead to worse performance than linear diagrams. We carefully selected element-based and set-based tasks that span the spectrum of typical set visualization tasks. All datasets used were real-world instances. The raw
Fig. 5: Time required for the compression by the number of sets in the instance for the exact pipeline. The \(y\)-axis is scaled logarithmically. The lines correspond to a running mean of 20 data points for the corresponding compression.
Fig. 6: The EH-ratio of the compression ratio by the number of sets for the different compression variants. The lines correspond to a running mean of 20 data points.
Fig. 7: Boxplot of the compression ratio achieved by the exact pipeline with different compression variants.
datasets, stimuli, screenshots of the study as well as the evaluation code can be found on OSF.
### _Participants and Setting_
Even though conditions in a lab setting can be better controlled, we conducted the study in an online setting. We designed the experiment as a within-group experiment where every participant was shown all five tasks on the four different styles. The target completion time was set to 15-20 minutes. Participation required being at least 18 years of age, having no known color vision deficiency, and having a sufficiently large screen (at least 768px wide). In total, we gathered 52 complete responses. \(73\%\) of the participants identified as male, \(25\%\) identified as female, and the rest preferred not to answer. \(76\%\) of all participants were between 18 and 34 years old, with only a few participants in the remaining age brackets. Overall, the participants were well educated, all having a post-secondary degree or higher. The average self-rating of knowledge on the topic of set visualization was \(2.73\) out of \(5\), as most participants reported at least some degree of knowledge. After a screening question, which asked whether they could distinguish all visual elements and fully see the diagram, we required participants to agree not to change their screen size, and we logged their screen width. All participants fulfilled the requirement of having a large enough screen.
### _Datasets_
We used the same approach as in Section 6 and extracted co-authorship hypergraphs of single years of the Graph Drawing conference. From all extracted datasets we picked five that had between 36 and 56 elements and 16 to 19 sets. Next, we replaced set labels with generic labels 'Project 1' to 'Project i' and assigned random, gender-balanced names as element labels. This was necessary to ensure label readability and to minimize confounding factors of labeling and familiarity.
Each dataset was assigned to exactly one task to eliminate confounding factors of using different datasets for each style. To mitigate learning effects we randomly permuted set and element labeling to have exactly one unique task-dataset pair for each style.
### _Tasks_
We selected several tasks the participants had to complete. The selection was guided by the task taxonomy of Alsallakh et al. [5]. Below are the tasks, how the questions were stated, and the possible answers participants could give.
**T1: Find sets containing a specific element:** which projects do "Alice" and "Bob" have in common? All projects were given as possible answers.
**T2: Find/Select elements that belong to a specific set:** check all people who are in "Project i". Six people were given as possible answers.
**T3: Analyze and compare set cardinalities:** how many people does "Project i" have in total? Ten values were given as possible answers.
**T4: Analyze intersection relations:** which project(s) overlap with "Project i"? All other projects were given as possible answers.
**T5: Analyze and compare intersection cardinalities:** with which project(s) does "Project 1" have the most overlaps in project members? All other projects were given as possible answers.
T1 and T2 are element-based tasks while T3-T5 are set-based tasks. All questions were modelled as multiple-choice questions. Whenever we asked for projects in a task, we used the same order, from Project 1 to Project X, in the answers, as otherwise participants would have had to spend time locating their respective answer. For T2 we ordered the names in the answers as they appeared in the column order of the stimuli.
### _Stimuli_
We generated static images, or stimuli, for all pairs of datasets and styles. All images were generated with our own implementation and use the same color palette, font, and font size. We fixed the width of the generated images to 1270px but scaled them down to 70% of the available screen size if a participant's screen was smaller. The height was solely determined by the requirements of the style. To remove potential confounding factors and floor effects, we paid attention to positioning the visual elements representing sets at similar locations in the images. For example, we manually moved rows to be in the center instead of at the bottom or top, as this has a direct impact on the visual distance between labels and the set asked about in the task. Similarly, we used image processing software to place easily identifiable dots over the labels in T1; otherwise, variance in time could be explained mostly by the time a participant requires to find both elements. Adding such dots to other tasks was not required, as the variance of finding sets in styles with different information density is exactly what we wanted to capture in the experiment. In some cases for \(\Gamma_{2}\) and \(\Gamma_{3}\) we had overlapping labels; we ensured that no question required participants to identify sets with overlapping labels. Lastly, we mirrored the column order for \(\Gamma_{2}\) and \(\Gamma_{3}\) to further mitigate potential learning effects when participants were shown the same dataset.
### _Experimental Procedure_
For each of the four conditions we asked the participants to solve two element-based tasks and three set-based tasks, for a total of \(4\,\text{conditions}\,\times\,(2+3)\,\text{tasks}=20\,\text{trials}\). The experiment followed a five-stage template: (1) consent and screening, (2) demographic questions, (3) tutorial, (4) formal study, and (5) post-task questionnaire. In (1) participants were given the general study information, requirements, study procedure, data policy, and consent form. After giving consent and agreeing to meet the requirements, we showed the participants one image and asked whether the image was correctly displayed on their system and whether they were able to identify all visual elements. Also, we required participants not to change their screen size during the experiment. Afterwards, in (2), we asked for demographic information such as age, gender, education, and prior knowledge of set visualization. Next, we showed the participants a tutorial (3) of all five tasks. As each task was paired with a different style, the participants were familiarized with all styles and tasks in the study. We asked the participants to take their time and only
proceed when they understood the task and style. Furthermore, we only allowed participants to proceed if their selected answers were correct. Next, the formal study (4) was conducted. As this part was timed, we reminded the participants to answer correctly but as quickly as possible. The participants were given the same task on all four styles, and for each participant we permuted the order of tasks and the order of styles within a task to minimize learning effects. To reduce fatigue, we showed participants a break screen between task groups, which paused the timing until they proceeded. Finally, we collected qualitative information (5) about how confident they were (5-point Likert scale), how likely they would be to use the style again (5-point Likert scale), and general thoughts on the study and layout styles in a free-form text field.
### _Pilot_
Before the actual experiment we invited several people to take part in a pilot study and answer a questionnaire about the study design afterwards. A total of five people with various levels of expertise in the design of user studies participated. Overall, the pilot participants did not voice major concerns about the general design. All participants finished the study slightly above the predicted target time; therefore, we removed some qualitative questions at the beginning and end of the study. One participant asked us to clarify some details in the tutorial, which we implemented for the final version. We specifically asked whether the participants could identify floor or ceiling effects, which all denied. Also, we asked the participants whether they were aware that the underlying data and questions were the same for each style. Two participants noticed but thought that this did not help them answer the questions. Still, we adapted the final version by mirroring the element order in two of the four styles to mitigate this effect.
### _Hypotheses_
Before conducting the user experiment we formulated several alternative hypotheses. If we state that style A outperforms style B on task accuracy, this means that style A has a higher accuracy. Similarly, if we state that style A outperforms style B on task completion time then style A has a lower task completion time.
* **H1:** \(\Gamma_{0}\) will outperform all other styles on accuracy and task completion time for element-based tasks (T1, T2).
* **H2:** \(\Gamma_{0}\) will perform worse than \(\Gamma_{2}\) and \(\Gamma_{3}\) on accuracy and task completion time for set-based tasks (T3-T5).
* **H3:** \(\Gamma_{1}\) will perform worse on accuracy and task completion time than styles \(\Gamma_{2}\) and \(\Gamma_{3}\).
* **H4:** \(\Gamma_{2}\) and \(\Gamma_{3}\) will perform similarly with no statistically significant difference.
Hypothesis H1 was guided by the intuition that element-based tasks are easier in linear diagrams, as correctly identifying sets only requires scanning the vertical line below an element and finding the labels of the respective sets at their expected positions; for the other styles a second visual search for the project labels is necessary. Hypothesis H2 captures the intuition that set-based tasks are easier in LinSets.zip, where we assumed that the more compact representation makes it easier to identify the involved sets. Hypothesis H3 captures the intuition that block links make it easier to identify which blocks belong to the same set instead of relying solely on color. Lastly, the intuition behind H4 is that neither style with block links has an inherent advantage over the other.
### _Results and Analysis_
We describe the used statistical tests for task accuracy, task completion time and qualitative feedback below. Generally, we performed a post-hoc analysis if a statistical significance (\(\alpha=0.05\)) was given. We evaluated each task independently and the full tables containing the statistical analysis can be found on OSF. Figure 8 shows the mean task accuracy.
**Task Accuracy.** As participants' answers are binary correct/incorrect dependent variables, we used Cochran's Q test to determine whether there are statistically significant differences between the styles. If a significant difference was detected, we created a pair-wise contingency table and performed an asymptotic McNemar's test.
For element-based tasks T1 and T2 and set-based task T5 we could not find statistically significant differences between any of the styles. For T3 there were statistically significant differences (\(p=0.02\)). However, the pair-wise comparison only showed that \(\Gamma_{0}\) (\(p=0.03\)) and \(\Gamma_{2}\) (\(p=0.03\)) outperformed \(\Gamma_{1}\) while no significant difference was found for all other pairs. In T4 (\(p=0.03\)), \(\Gamma_{3}\) outperformed \(\Gamma_{0}\) (\(p=0.01\)), \(\Gamma_{1}\) (\(p=0.03\)) and \(\Gamma_{2}\) (\(p=0.03\)).
In summary, there is only marginal support for H2 and H3 and we have to reject hypotheses H1, H2 and H3 on task accuracy. H4 is supported by our findings with the exception of T4.
**Task Completion Time.** We first tested the task completion time for normal distribution. As this was not the case we applied Friedman's test for repeated measurements with the F-Test method. Whenever we detected a statistically significant difference we performed a two-sided non-parametric pair-wise test.
T1 showed a significant difference (\(p<0.01\)) where \(\Gamma_{0}\) is significantly outperformed by \(\Gamma_{1}\) (\(p=0.04\)) and \(\Gamma_{2}\) (\(p<0.01\)). \(\Gamma_{2}\) outperforms \(\Gamma_{3}\) (\(p=0.04\)). For T2 (\(p=0.02\)) \(\Gamma_{3}\) outperformed \(\Gamma_{0}\) (\(p<0.01\)), \(\Gamma_{1}\) (\(p=0.01\)) and \(\Gamma_{2}\) (\(p=0.01\)). In the case of T3 the pair-wise tests showed significant difference (\(p<0.01\)) between all styles. The ranking of performance was \(\Gamma_{0}\), \(\Gamma_{1}\), \(\Gamma_{2}\) and then \(\Gamma_{3}\). Significance (\(p<0.01\)) was also detected between the styles in T4. The pair-wise test showed that the only significant differences were between \(\Gamma_{0}\) outperforming \(\Gamma_{2}\) (\(p<0.01\)) and \(\Gamma_{1}\) outperforming \(\Gamma_{2}\) (\(p<0.01\)). For task T5 we did not detect any significant differences in completion time between the different styles.
Fig. 8: The participant’s mean accuracy on the individual tasks.
Overall, H1 is partially supported in T1 but not in T2. H2 is unsupported while there is some partial support for H4 in T5. H3 is only partially supported in T2 and T3.
**Qualitative Feedback.** We used the Kruskal-Wallis test to identify significant differences between styles on the two qualitative answers. As both questions showed significant differences, we also performed a Mann-Whitney U test with Bonferroni correction. We report the _area under the curve_ (AUC) to measure effect size; the AUC value can be interpreted as the chance that one value is larger than the other when randomly picking two samples. Finally, we computed a Spearman correlation between participants' performance and their qualitative answers, summing task accuracy and task completion time for each participant. The distribution of participants' answers can be seen in Figure 9.
There were statistically significant (\(p<0.01\)) differences between the styles regarding how confident participants were about giving a correct answer. Participants felt least confident with \(\Gamma_{1}\), which can be attributed to the high information density and the fact that blocks of the same set are only associated via color. We could not find a significant difference between \(\Gamma_{0}\) and \(\Gamma_{3}\). Participants felt most confident with \(\Gamma_{2}\) compared against \(\Gamma_{0}\) (AUC \(=0.67\)), \(\Gamma_{1}\) (AUC \(=0.78\)), and \(\Gamma_{3}\) (AUC \(=0.63\)). We could not find a statistically significant correlation between confidence and task accuracy or task completion time.
Similarly, participants' answers on whether they would use a style again showed significant (\(p<0.01\)) differences. There is no significance between \(\Gamma_{0}\) compared to \(\Gamma_{1}\) or \(\Gamma_{3}\). Again, participants were less likely to use \(\Gamma_{1}\) compared to \(\Gamma_{2}\) (AUC \(=0.77\)) and \(\Gamma_{3}\) (AUC \(=0.68\)), and favored \(\Gamma_{2}\) most, compared against \(\Gamma_{0}\) (AUC \(=0.65\)) and \(\Gamma_{3}\) (AUC \(=0.62\)). Besides \(\Gamma_{1}\) (correlation \(=0.30\)) on completion time, we could not find significant correlations to task accuracy or task completion time.
We read all answers to the free-form text question and report on three recurring sentiments. First, people generally agreed that \(\Gamma_{1}\) is confusing and stated that they had difficulties either finding the correct project labels or figuring out which blocks belonged to a project. Second, some participants saw more benefits in linear diagrams over LinSets.zip, as the visual representation is clearer and labels are always at an expected position. The third sentiment stands in contrast to the second: here, participants stated that they had problems tracing sets to labels in linear diagrams and much preferred the compact representation of \(\Gamma_{2}\) and \(\Gamma_{3}\) in LinSets.zip.
**Discussion.** The user study we conducted to compare different variants of LinSets.zip and linear diagrams has shown that there are differences between the systems. However, we could not conclude that the more predictable and clean style of linear diagrams outperformed LinSets.zip on task accuracy or task completion time. Similarly, there is also no indication that a more compact diagram has a clear advantage over linear diagrams. On the other hand, we can draw the conclusion that LinSets.zip performs on par with linear diagrams and can therefore be a viable alternative when vertical space is limited. This finding is also supported by the qualitative feedback gathered in the study. Participants felt a similar level of confidence with linear diagrams and variant \(\Gamma_{2}\) of LinSets.zip.
Similarly, we could not find clear statistical differences between \(\Gamma_{1}\), \(\Gamma_{2}\), and \(\Gamma_{3}\) of LinSets.zip. However, the qualitative feedback showed that participants appreciated the more intuitive block links instead of relying on color alone. Furthermore, there is a possibility that participants felt confident with \(\Gamma_{2}\) because no two sets alternate in a row, which leads to a less compact but therefore also less information-dense diagram. Even though we implemented the possibility to restrict the maximum number of sets per row, we did not test this in the user study.
## 8 Limitations and Future Work
The participants we recruited for the user study tended to be well-educated young men. Therefore, it is not clear how well our results generalize. Also, our study was conducted as an online experiment, and it is not clear how attentive participants were during the study. We also asked only a small number of tasks of our participants. Overall, the user study gives some indication but is not necessarily conclusive. Another limitation is the placement of labels. Currently, we place labels in the largest block. This is problematic when the largest blocks of different sets are close, which leads to overlapping labels. Furthermore, we assume that labels are short. Long labels have to be either truncated or shown on demand, as otherwise they would cover blocks and decrease readability. We also did not explore interactivity. In some cases interactivity could resolve some of the limitations of LinSets.zip.
**Future Work.** We did not explore interactivity in this work. For \(\Gamma_{1}\) it would be easy to interactively reorder columns to keep a single set together, as this style is independent of the column order; for \(\Gamma_{2}\) and \(\Gamma_{3}\) this is non-trivial, as any reordering must not invalidate block links. Furthermore, the user study in this work only gives an indication, and a broader study would be needed to reach a decisive conclusion. Lastly, a rendering style on concentric circles could be implemented. Most algorithms can easily adapt from a linear order to a circular order, and the resulting diagram could have a predictable aspect ratio.
## 9 Conclusion
In this paper we have presented LinSets.zip, a compact diagram to visualize set systems. As LinSets.zip is similar to linear diagrams we adopt concepts that have been evaluated in the context of linear diagrams. The design space we have explored focuses on creating maximally compact representations, but we also present different variants that use block links as visual aids to create clearer and more readable visualizations. We show that the presented variants can be modelled as known coloring problems for which algorithms exist that work well in practice. Furthermore, we have implemented all variants and conducted computational experiments and a small-scale user study. The computational experiments show that striving for optimality is feasible in most cases and that real-world data can be significantly compressed. The findings of the user study indicate that the task accuracy and task completion time are on par with linear diagrams.
Fig. 9: 5-point Likert scale (1 – worst, 5 – best) of qualitative answers.
## Acknowledgments
The authors would like to thank all of the participants in our experiment, and especially the experts of our pilot study. This work has been funded by the Vienna Science and Technology Fund (WWTF) [10.47379/ICT19035].
|
2307.13199 | An Investigation into Glomeruli Detection in Kidney H&E and PAS Images
using YOLO | Context: Analyzing digital pathology images is necessary to draw diagnostic
conclusions by investigating tissue patterns and cellular morphology. However,
manual evaluation can be time-consuming, expensive, and prone to inter- and
intra-observer variability. Objective: To assist pathologists using
computerized solutions, automated tissue structure detection and segmentation
must be proposed. Furthermore, generating pixel-level object annotations for
histopathology images is expensive and time-consuming. As a result, detection
models with bounding box labels may be a feasible solution. Design: This paper
studies YOLO-v4 (You-Only-Look-Once), a real-time object detector for
microscopic images. YOLO uses a single neural network to predict several
bounding boxes and class probabilities for objects of interest. YOLO can
enhance detection performance by training on whole slide images. YOLO-v4 has
been used in this paper for glomeruli detection in human kidney images.
Multiple experiments have been designed and conducted based on different
training data of two public datasets and a private dataset from the University
of Michigan for fine-tuning the model. The model was tested on the private
dataset from the University of Michigan, serving as an external validation of
two different stains, namely hematoxylin and eosin (H&E) and periodic
acid-Schiff (PAS). Results: Average specificity and sensitivity for all
experiments, and comparison of existing segmentation methods on the same
datasets are discussed. Conclusions: Automated glomeruli detection in human
kidney images is possible using modern AI models. The design and validation for
different stains still depends on variability of public multi-stain datasets. | Kimia Hemmatirad, Morteza Babaie, Jeffrey Hodgin, Liron Pantanowitz, H. R. Tizhoosh | 2023-07-25T01:35:37Z | http://arxiv.org/abs/2307.13199v1 | # An Investigation into Glomeruli Detection
###### Abstract
Context - Analyzing digital pathology images is necessary to draw diagnostic conclusions by investigating tissue patterns and cellular morphology. However, manual evaluation can be time-consuming, expensive, and prone to inter- and intra-observer variability.
Objective - To assist pathologists using computerized solutions, automated tissue structure detection and segmentation must be proposed. Furthermore, generating pixel-level object annotations for histopathology images is expensive and time-consuming. As a result, detection models with bounding box labels may be a feasible solution. Design - This paper studies YOLO-v4 (You-Only-Look-Once), a real-time object detector for microscopic images. YOLO uses a single neural network to predict several bounding boxes and class probabilities for objects of interest. YOLO can enhance detection performance by training on whole slide images. YOLO-v4 has been used in this paper for glomeruli detection in human kidney images. Multiple experiments have been designed and conducted based on different training data of two public datasets and a private dataset from the University of Michigan for fine-tuning the model. The model was tested on the private dataset from the University of Michigan, serving as an external validation of two different stains, namely hematoxylin and eosin (H&E) and periodic acid-Schiff (PAS).
Results - Average specificity and sensitivity for all experiments, and comparison of existing segmentation methods on the same datasets are discussed.
Conclusions - Automated glomeruli detection in human kidney images is possible using modern AI models. The design and validation for different stains still depends on variability of public multi-stain datasets.
## I Introduction
For investigations of tissue morphology and, as a result, for making diagnostic conclusions, computational pathology approaches may offer fast and reliable solutions compared to conventional microscopy-based workflows. On the other hand, any manual evaluation of tissue samples can be time-consuming, costly, and subject to both inter- and intra-observer variability [1]. Consequently, researchers have recently focused their attention on automated solutions to detect and segment tissue structures in digital pathology whole slide images (WSIs). Many studies, such as determining tissue types, rely on the accuracy of tissue pattern segmentation, which is regarded as the foundation of automated image analysis. However, due to the complexity of tissue clustering into types, with architectures such as glands and organelles overlapping with each other, establishing precise segmentation is not a simple operation. This makes distinguishing these patterns from the tissue background, and especially from each other, a challenge. In addition, histopathological images may contain noise and artifacts created during image acquisition, as well as low contrast between foreground and background [1]. Segmentation models have been widely used in digital pathology to segment cells and other regions of interest [2]. However, training these models requires pixel-level object annotations made by an expert. Detailed labels (pixel-level) for histopathology images are expensive, time-consuming, and hard to achieve [3]. Moreover, in some of the applications in histopathology, only detecting the position of the specific tissue pattern without precisely outlining the borders may be sufficient [1]. These techniques are called tissue pattern detection, and they are usually faster compared to segmentation methods. The main advantage of detection models is that they construct a bounding box around the tissue of interest rather than relying on pixel-level labelling, making their training much more convenient.
Deep object detectors typically consist of two parts: a backbone trained on ImageNet and a head used to forecast object classes and bounding boxes. One-stage object detectors and two-stage object detectors are the most common head types [4]. Regions with convolutional neural networks (R-CNN) [5] series, is a good example of a two-stage object detection category. YOLO (you-only-look-once) is one of the examples for one-stage object detectors [4] which has been studied and explored on two different applications for detecting specific tissue patterns in this paper.
YOLO is a simple concept with several advantages. To begin with, YOLO is very fast as it does not require a complicated pipeline for a regression problem. Furthermore, the mean average precision of YOLO is higher than that of comparable real-time systems. Therefore, the network can be considered a real-time object detector. Secondly, with YOLO, context information about object classes is encoded as well as their appearance during training and testing, unlike sliding window and region proposal-based approaches [6]. A popular object recognition approach, Fast R-CNN [7], may misidentify background patches in an image as objects. In comparison to Fast R-CNN, YOLO creates half the number of background errors [6]. Thirdly, YOLO learns to represent objects in a universally applicable way. YOLO surpasses the best detection algorithms like deformable parts models (DPM)
and R-CNN when trained on natural images and evaluated on art. YOLO is less likely to fail when applied to new domains or unexpected inputs because of its high degree of generalizability [6].
In this paper, YOLO-v4 has been employed to find particular tissue patterns in WSIs. Comparisons with segmentation approaches on the same datasets will be performed. The histological evaluation of "glomeruli" is critical for identifying whether a kidney is transplantable [8]. The Karpinski score, which includes the ratio of sclerosed glomeruli to the total number of glomeruli in a kidney segment, is critical for determining the necessity of a single or dual kidney transplant [9]. Clinical symptoms, immunopathology, and morphological abnormalities are all factors that go into classifying glomerular disorders. To classify glomerular diseases, these anatomic structures need to be detected. Automated glomeruli identification frameworks for kidney biopsies can be quite helpful for pathologists because manual examination of kidney samples is time-consuming and error-prone [9, 8]. There are several segmentation methods to detect glomeruli in kidney images [10]. However, these methods require pixel-level annotation of the images. In detection methods, only determining the location of a given tissue pattern, the glomerulus, is required, without the need to precisely delineate its borders.
In the field of histopathology, the lack of image data, annotation, and labels has always been a problem [11]. Hence, it is important to validate deep networks on their generalization capability. By training a network with public datasets, and then fine-tuning it with only limited data from a specific hospital or specific resource, we may be able to significantly improve the accuracy of the network on the validation set from the same resource.
In another application, YOLO-v4 as a detection network has been trained to recognize all glomeruli in a given kidney image. Multiple experiments were designed and carried out based on different training data from two public datasets to fine-tune the model, and tested on the private dataset from the University of Michigan as an external validation on two differently stained tissues, namely periodic acid-Schiff (PAS) staining and hematoxylin and eosin (H&E) staining.
The first dataset is a public collection of 31 tiled TIFF (SVS) WSIs. The annotation of the bounding boxes of these 31 WSIs has been performed by collaborating pathologists. This data is part of the WSI datasets generated within the European project AIDPATH (source: [http://aidpath.eu/](http://aidpath.eu/)). The second dataset has been used for the HubMap competition (source: [https://www.kaggle.com/c/hubmap-kidney-segmentation/overview](https://www.kaggle.com/c/hubmap-kidney-segmentation/overview)). TIFF files ranging in size from 500MB to 5GB make up the dataset containing 8 WSIs for training, and 5 WSIs for subsequent testing. The segmentation annotation was provided for each of the WSIs in this competition. The generalization of the network has been tested by training on these two public datasets, followed by the external validation on the private dataset from University of Michigan. Another (private) dataset that has been used for training and fine-tuning the models has 7 PAS stained WSIs which has been collected from the University of Michigan annotated by an expert pathologist. In Figure 1 three samples of the training dataset for the network are shown.
The three datasets, two public and one private, have been used to design and conduct 14 experiments. These experiments have been trained on different combinations of public and private datasets. Results have been validated on the private dataset from the University of Michigan with two stains, 20 PAS stained WSIs and 16 H&E stained WSIs. YOLO served as the detector network which will be described in details in the following sections.
In Figure 2, two samples of the private validation dataset along with the annotated bounding boxes are shown. On the top is a sample of tissue derived from an H&E stained WSI, and on the bottom is a sample of tissue from a PAS stained WSI. The results, i.e., average specificity and sensitivity for all experiments, and a comparison with existing segmentation methods on the same datasets are discussed in the results section. In general, one can observe that the average specificity and sensitivity are higher on the PAS validation set, because all of the images in the training dataset are PAS stained. Also, there is an improvement in average specificity and sensitivity when fine-tuning the network with only 7 PAS WSIs from the University of Michigan.
## II Literature Review
A significant step in determining whether a kidney is transplantable is the histological examination of renal samples by experienced pathologists [9, 8]. The histopathology evaluation of the number of globally sclerosed glomeruli in relation to the overall number of glomeruli is essential for accepting or rejecting a donor's kidneys [9]. Multiple glomerulocentric pathology classification systems are employed for native kidney diseases [12, 13, 14], emphasizing the central role of glomerular injury. In Figure 3 samples of glomeruli in kidney images are shown.
Waste and excess fluids are expelled from the human body by glomeruli, which are the clusters of capillaries responsible for this expulsion. It is possible to group glomerular disorders according to their clinical symptoms, etiology, immunopathology, or morphological changes [9, 8]. A condition known as "glomerulosclerosis" is the result of the kidney lesion changing its morphology; this sclerosis can impact the kidney in many ways, depending on whether it is global or partial [10]. The number of glomeruli detected in each kidney biopsy should be counted in daily practice. Per kidney biopsy, about 20 to 30 cuts are made [10]. Additionally, glomeruli that are completely sclerosed must be noted (the entire glomerulus). Detection of localized sclerosis will provide further information regarding the patient's condition. Each pathology report should include this information because the number of glomeruli assessed must be representative enough to determine a diagnosis [15]. On the other hand, if the sample has numerous sclerosed glomeruli, this may suggest that the patient has chronic kidney disease with dead glomeruli. As a result, the patient may not be suited for some medications, which will
help to define adequate treatment [15]. This information is also entered into the national register for glomerulonephritis [10]. Counting glomeruli is a time-consuming and tiresome process. Because of this, image processing methods that can identify and categorize glomeruli are needed.
With the emergence of deep learning networks, various options for computer vision tasks such as glomeruli object identification, semantic segmentation, and instance segmentation became available [10]. For instance, some works provide a detailed assessment of object identification and instance segmentation algorithms [16]. Others provide a complete review of semantic segmentation [17]. Several recent research efforts in digital pathology have used deep neural networks for glomeruli detection and segmentation [18, 19, 20, 21, 22, 8, 23, 24, 25, 10].
For glomeruli detection, YOLO has been applied on kidney images for the first time in this paper and compared with the existing segmentation method U-Net, using the same validation dataset. There are two different tissue stains in the validation dataset.
### _Tissue Staining_
Staining is used to emphasize essential characteristics of the tissue, as well as improve contrast. Hematoxylin is a common stain dye used in this technique that gives the nuclei a bluish hue, whereas eosin (another stain dye used in histology) gives the cell's cytoplasm a pinkish tint [26].
#### II-A1 Periodic Acid-Schiff (PAS)
A staining technique called PAS is used in histochemistry to show that carbohydrates and carbohydrate compounds like polysaccharides, mucin, glycogen, and fungal cell wall components are found in cells. PAS has been used to look for glycogen in places like the skeletal muscle, liver, and heart muscle. PAS staining works with both formalin-fixed, paraffin-embedded (FFPE), and frozen tissue sections [27]. In renal pathology PAS stain is particularly useful to highlight basement membranes.
#### II-A2 Hematoxylin and Eosin (H&E) Staining
There are two histological stain dyes that come together to make H&E: hematoxylin and eosin. Hematoxylin stains cell nuclei purple, and eosin stains the extracellular matrix as well as the cytoplasm pink. Other anatomic tissue structures take on different shades and hues of these two colors [28]. There are two parts of a cell, the nucleus and the cytoplasm. Pathologists can easily tell them apart, and the overall patterns
Fig. 1: Samples of the three training datasets: sample from the first public dataset AIDPATH (left), sample from the second public dataset HubMap (middle), and an example from the private dataset from the University of Michigan (right).
Fig. 3: Renal core biopsy showing annotated glomeruli.
Fig. 2: Two samples of the private validation dataset along with annotated bounding boxes. On the top is a WSI with H&E-stained tissue, and the bottom shows a WSI with PAS-stained tissue.
of coloration from the stain show the general layout and distribution of cells and give an overall impression of tissue morphology [27].
## III Materials & Methods
### _Method_
Predicting one or more object locations, determining their classes, and drawing a bounding box around each object is the definition of an object detection task. In many existing detection systems, classifiers are applied to an image at many locations and scales to find the high-scoring regions of the image when detecting a region of interest. In this paper, the YOLO approach (You Only Look Once) [6] has been trained for two applications of detecting tissue patterns in WSIs: one is the detection of artifacts and manual ink-markers, and the second is glomeruli detection in kidney images.
One of the essential advantages of YOLO over classifier-based systems is the speed of this model. YOLO is more than 1000 times faster than R-CNN [5] and 100 times faster than Fast R-CNN [7]. Predictions with YOLO require a single network evaluation, while R-CNN requires many network evaluations for a single image. More importantly, as an object detector, YOLO does not require detailed pixel-level annotation; labels for YOLO are just bounding boxes around the target objects.
#### III-A1 Network Architecture of YOLO
By combining separate components of other object detection networks, like the ones using a sliding window or region-based techniques, YOLO can predict all image objects for all the classes based on the information from the whole image, only by looking at the image once [6]. In other words, the network models the entire image at once along with all of its individual objects. End-to-end training and real-time speeds are made possible by the YOLO architecture while high average accuracy is maintained [6]. An \(S\times S\) grid is generated on any given image. A grid cell is responsible for identifying an object whose center lies within that grid cell. Bounding boxes and confidence scores are predicted for each grid cell. If the model is certain that a box contains an object, it will give it a high confidence score [6]. This confidence score is calculated based on
\[\mathrm{Confidence}=Pr(\mathrm{Object})*IOU^{truth}_{pred} \tag{1}\]
The confidence score should be 0 if there is no predicted object present in a cell. If there is at least one predicted object in that cell, for the confidence score to be accurate, it must be equal to the intersection over union (IOU) between the predicted box and the ground truth. The probabilities of each of the \(C\) conditional classes, \(Pr(Class_{i}|Object)\), are also predicted in each grid cell. The location of the object's grid cell determines these probabilities. No matter how many boxes \(B\) there are in a grid cell, the network only forecasts one set of class probabilities. For the evaluation of the network, the network computes the class-specific confidence scores for each box (each class \(C_{i}\) and object \(O\)) based on
\[Pr(C_{i}|O)*Pr(O)*IOU^{truth}_{pred}=Pr(C_{i})*IOU^{truth}_{pred} \tag{2}\]
Both the likelihood that a certain class will be found in the box and how well the predicted box will fit the item are represented by these scores [6].
YOLO was inspired by the GoogLeNet model for image classification [29]. The detection network consists of 24 convolutional layers followed by two fully connected layers. The full network is shown in Figure 4.
#### III-A2 YOLO-v4
The YOLO-v4 [4] model has been trained for both applications in this paper. The code of all steps involved is available on GitHub (source: [https://github.com/AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)); in this experiment, the code has been modified to train on a custom dataset. The implementation of a new backbone architecture in YOLO-v4 compared to YOLO-v3 improved the mAP (mean Average Precision) by 10% and the FPS (Frames per Second) by 12% when trained and tested on the COCO dataset (source: [https://cocodataset.org/](https://cocodataset.org/)). The backbone is a deep neural network composed mainly of convolution layers, and its main objective is to extract features. The backbone selection is a key step that can improve object detection performance; mostly pre-trained neural networks are used for the backbone [4].
### _Dataset_
Kimia Lab and the pathology department of the University of Michigan are collaborating on a project for developing a computational kidney disease diagnosis model. As a part of this project, Kimia Lab has received a glomeruli dataset with bounding box annotations created by nephropathologists. To expand the training data, two different public datasets plus a private dataset from the University of Michigan have been used in this study.
Public Dataset 1The first public dataset consists of 31 WSIs in SVS format, with sizes ranging between \(21651\times 10498\) pixels and \(49799\times 32359\) pixels, acquired at 20x magnification to preserve image quality and information while requiring significantly less computational time than images taken at higher magnifications [30]. A glomerulus may lose structural information due to lower resolution and poor image quality. It is also important to note that employing magnifications such as 40x would increase the model size, slowing down the training process [10]. This data is part of the WSI datasets generated within the _European project AIDPATH_ (source: [http://aidpath.eu/](http://aidpath.eu/)). A biopsy needle with an outside diameter of between 100 nm and 300 nm was used to obtain tissue samples. Once the paraffin blocks were ready, the tissue portions were cut into 4 um sections and stained through PAS staining [30]. It is common to employ the PAS stain to color polysaccharides found in kidney tissue and to highlight glomerular basement membranes because of its effectiveness [31]. These images contain different types of glomeruli labeled by the approach of Bueno et al. [10]. This
dataset has two parts: DATASET A, which contains the raw 31 WSIs, and DATASET B, which contains 2340 glomeruli images, 1170 normal glomeruli and 1170 sclerosed glomeruli. Because the exact coordinates of the extracted glomeruli were not provided, the exact coordinates of the glomeruli bounding boxes were annotated by a pathologist. An annotated WSI sample of the first public dataset is shown in Figure 5.
Public Dataset 2This dataset has been used for the _HubMap competition_ (source: [https://www.kaggle.com/c/hubmap-kidney-segmentation/overview](https://www.kaggle.com/c/hubmap-kidney-segmentation/overview)). TIFF files ranging in size from 500MB to 5GB make up the dataset, containing 8 images for training and 5 images for testing. RLE-coded and uncoded (JSON) annotations are included in the training set. The annotations identify glomeruli that have been divided into sections. Also, anatomical structure segmentations are included in both the training and public test sets. To use these annotations for the YOLO object detector, bounding boxes of the anatomical structures have been created from the manual contours. Figure 6 shows an example of the procedure to generate a bounding box from a manual delineation. The bounding box is found by calculating the upper-left-most and lower-right-most coordinates in the delineation. An annotated WSI of the second dataset is shown in Figure 7.
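A minimal sketch of this conversion is given below; the helper names are ours and not part of the published code, and the second function assumes the normalized center-based label format that YOLO expects:

```python
# Minimal sketch: bounding box from a manual delineation, as in Figure 6.
def contour_to_bbox(points):
    # points: list of (x, y) vertices of the manual contour.
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)

def bbox_to_yolo(bbox, img_w, img_h):
    # Normalized YOLO label format: (x_center, y_center, width, height).
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) / 2 / img_w, (y_min + y_max) / 2 / img_h,
            (x_max - x_min) / img_w, (y_max - y_min) / img_h)
```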
models. The 7 training datasets have been evaluated on two different validation datasets with different stains from the University of Michigan: One contains 20 PAS stained images and the other one contains 16 H&E stained images. All experiments, along with the explanation of the training and the validation dataset, have been reported in Table 2. And the configurations of the network for all 7 different training datasets have been described in Table 1. In this table,
* **Batch** stands for how many images are used in the forward pass to compute a gradient and update the weights via back-propagation,
* **Subdivisions** stands for the number of blocks in which the batch is subdivided,
* **Policy** means using the steps and scales parameters below to adjust the learning rate during training,
* **Steps** means adjust the learning rate after 3200 and 3600 batches,
* **Scales** means re-scale the current learning rate by the corresponding factor once the number of steps is reached,
* **Max batches** is the maximum number of iterations,
* **Filters** stands for how many convolutional kernels there are in a layer, and
* **Activation** defines the activation function.
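To illustrate how these parameters fit together, the following sketch collects them as they would appear in a darknet-style configuration. Except for the steps named above, the values are placeholders for illustration and not necessarily those used in the experiments; the filters count assumes the usual darknet rule \((\text{classes}+5)\times 3\) for a single class:

```python
# Illustrative sketch (assumed values) of the training configuration above.
yolo_config = {
    "batch": 64,            # images per forward pass / weight update
    "subdivisions": 16,     # blocks the batch is split into
    "policy": "steps",      # adjust the learning rate via steps/scales
    "steps": (3200, 3600),  # iterations at which the learning rate changes
    "scales": (0.1, 0.1),   # factors rescaling the learning rate at each step
    "max_batches": 4000,    # maximum number of training iterations
    "filters": 18,          # (classes + 5) * 3 for the single class "glomerulus"
    "activation": "leaky",  # activation function of the convolutional layers
}
```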
Many studies have been performed to identify glomeruli functional tissue units in human kidneys. Recently, there was a Kaggle competition, _Hacking the Kidney_, launched to segment glomeruli in kidney images (source: [https://www.kaggle.com/competitions/hubmap-kidney-segmentation](https://www.kaggle.com/competitions/hubmap-kidney-segmentation)). The dataset provided for the competition was public dataset 2, discussed in the dataset section. TIFF files, ranging in size from 500MB to 5GB, make up the dataset containing eight images for training and five images for testing. RLE-coded and uncoded (JSON) annotations are included in the training and validation sets. The authors of a study [32] compare the five winning algorithms among the more than a thousand teams that
Fig. 8: Annotated WSI sample from the University of Michigan’s private dataset.
Fig. 6: Extracted bounding boxes (right) from manual delineations of a glomerulus (left).
Fig. 7: Annotated WSI sample from public dataset 2.
participated in the above competition. They assess the accuracy and performance of the five top algorithms, and the codes are available online (source: [https://github.com/cns-iu/ccf-research-kaggle-2021/](https://github.com/cns-iu/ccf-research-kaggle-2021/)). To compare a segmentation model with the detection model in this paper, the first team's algorithm has been chosen as the benchmark. The accuracy on the same validation dataset, i.e., 20 PAS stained images and 16 H&E stained images from the University of Michigan, has been calculated based on the explanation for the winning proposal (source: [https://www.kaggle.com/c/hubmap-kidney-segmentation/discussion/238198](https://www.kaggle.com/c/hubmap-kidney-segmentation/discussion/238198)). They have used a single U-Net SeResNext101 architecture with a Convolutional Block Attention Module (CBAM), hypercolumns, and deep supervision. Their network reads 1024\(\times\) 1024 pixel patches and then downsamples them to 320\(\times\) 320 patches. SGD is the optimizer for their model, trained using binary cross-entropy.
Training is performed for 20 epochs, with a learning rate of \(10^{-4}\) to \(10^{-6}\) and a batch size of 8 images. Their final weights trained on the whole training dataset have been used to validate and compare their network on the University of Michigan dataset, which contains 20 PAS stained images and 16 H&E stained images. The results are provided in Section IV. Note that it is not possible to fine-tune the mentioned segmentation model with the external validation set (University of Michigan WSIs), as the external WSIs do not contain pixel-level annotations. For comparing the segmentation model with YOLO, the segmentation area is enclosed with the smallest possible rectangle (using the upper-left-most and lower-right-most coordinates), and these rectangles are used as the segmentation model output. Figure 6 depicts the process visually.
## IV Experiments & Results
Immunopathology, clinical symptoms, and morphological abnormalities are all factors that go into classifying glomerular disorders [33]. To classify glomerular diseases, these objects need to be detected first. Therefore, the average sensitivity and specificity of the detection matter.
Denoting true positives by \(TP\), false positives by \(FP\), false negatives by \(FN\), and true negatives by \(TN\), the sensitivity and specificity metrics [34] are defined as follows:
\[\text{Sensitivity} = \frac{TP}{TP+FN} \tag{3}\] \[\text{Specificity} = \frac{TN}{TN+FP} \tag{4}\]
For computing true positives, false positives, false negatives, and true negatives, the IoU (intersection over union) measure has been used, i.e., the overlap between two boundaries divided by their union. We pre-defined an IoU threshold (0.5) for classifying whether a prediction is a true positive or a false positive. False negatives are those glomeruli objects that are not covered by any predicted bounding box. Moreover, the true negatives were calculated based on the area of the whole slide tissue minus those predicted areas that did not contain any glomeruli.
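A minimal sketch of this matching logic (ours, not the evaluation code of the paper), with boxes given as \((x_{min},y_{min},x_{max},y_{max})\):

```python
# Minimal sketch: IoU-based matching of predicted and ground-truth boxes.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def count_detections(predictions, ground_truth, threshold=0.5):
    # TP: prediction overlapping some ground-truth box with IoU >= threshold;
    # FN: ground-truth box not covered by any prediction at the threshold.
    tp = sum(any(iou(p, g) >= threshold for g in ground_truth) for p in predictions)
    fn = sum(not any(iou(p, g) >= threshold for p in predictions) for g in ground_truth)
    fp = len(predictions) - tp
    return tp, fp, fn  # sensitivity = tp / (tp + fn)
```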
As mentioned in Section III, 7 training datasets based on two public datasets and one private dataset have been created and validated on two datasets with different stains from the University of Michigan. In this section, the average sensitivity and specificity have been calculated for all these experiments on all images, along with a comparison with the existing segmentation method.
### _PAS Validation Set_
Two public datasets and a private dataset from the University of Michigan, all PAS stained, were used to train YOLO and validated on 20 PAS stained images. Different experiments were designed and evaluated on these images. Average sensitivity and specificity values for each experiment can be seen in Table 3 along with the comparison with the segmentation method explained in Section III (used for Hacking the Kidney Competition).
The ROC (receiver operating characteristic) curves [35] for all the experiments on these 20 PAS stained images are shown in Figure 9. As reported in Table 3, the segmentation results have a high average specificity with lower sensitivity, which means the network produces a low number of false positives but detects only about half of the ground truth glomeruli objects. Furthermore, using the external validation set (University of Michigan WSIs) to fine-tune this segmentation model is not feasible since the external WSIs do not contain pixel-level annotations.
Among the YOLO experiments, one experiment was done with a training set containing only 7 PAS stained images from the University of Michigan, with average sensitivity and specificity equal to 85% and 80%, respectively, which shows that the network performs well on validation data from the same source even with limited training data. Another three experiments have been performed only on the public datasets. They have been evaluated on an external validation dataset, namely the data from the University of Michigan.
By examining the network on an external dataset, the generalization of the network can be assessed. Also, it is evident that after fine-tuning the network with only 7 PAS stained images from the University of Michigan, the average sensitivity on the same dataset improves considerably. For example, the average sensitivity and specificity changed from 45% and 98% to 74% and 94%, respectively. The results may improve significantly if more data from the same source as the validation dataset is available for fine-tuning the network.
Another important point is the difference between the results of experiments trained on the first public dataset and the second one. It has been shown that by combining both datasets, the accuracy can drop compared to only training on the first public dataset; the reason may be related to the difference between the images from the second public dataset and the images from the University of Michigan. The images from the first public dataset and the images from the University of Michigan are needle biopsy images. In contrast, the second public dataset consists of excision tissue samples. The phrase "needle biopsy" refers to a procedure in which a specific needle is inserted into a suspicious region through the skin in order to collect cells. During a "surgical biopsy", a surgeon creates an incision in the skin in order to reach the suspicious cells. As shown in Figures 7 and 8, the number of glomeruli and the size of the glomeruli relative to the whole image are among the differences between needle biopsy and surgical biopsy.
### _H&E Validation Set_
A total of 16 H&E stained images from the University of Michigan have been used as a validation dataset for all training datasets described in the previous section. Comparison between the average sensitivity and average specificity for all seven experiments using YOLO, with two public datasets, as well as a private dataset from the University of Michigan and the segmentation method explained in section III that was used for Hacking the Kidney Competition are provided in Table 4. ROC curves for all experiments on these 16 H&E stained images have been shown in Figure 10. There is a considerable difference between the validation results on PAS stained images and H&E stained images. This substantial difference is explainable because of the difference in tissue staining of training and validation datasets.
As in Table 3, because of the high average specificity and low sensitivity shown in Table 4, the network's segmentation results practically never show false positives. However, only about half of the ground truth glomeruli objects are detected by this method.
As in Table 3, the results improved after fine-tuning the training dataset with only seven images from the University of Michigan. The difference between the outcomes of experiments trained on the first public dataset and the second is still significant. Because of the differences between the images of the second dataset, which are surgical biopsy images, and those from the University of Michigan, which are needle biopsy images, combining both datasets can again cause the accuracy to drop.
## V Conclusions
There have been several technological advances across health care and digital pathology in recent years. Automated segmentation and pixel analysis of digital pathology images may identify diagnostic patterns and visual cues, leading to more reliable and consistent diagnostic categorization.
Glomeruli detection, as the first step of classifying glomerular diseases followed by diagnosing different kidney diseases, is essential and critical in digital pathology. Because of the large number of these objects in the kidney, computerized quantification of glomeruli could help pathologists save considerable time. This paper trained YOLO-v4 with seven different training datasets consisting of two public datasets and a private dataset from the University of Michigan. Moreover, the networks were evaluated on 20 PAS stained images and 16 H&E stained images from the University of Michigan. By training YOLO-v4 on the first public dataset and fine-tuning with only 7 PAS stained images from the University of Michigan, experiments achieved 85% average sensitivity and 89% average specificity when validating the network on 20 PAS stained images from the University of Michigan, which was the best result among the different training datasets. For evaluating the network on H&E stained images, 70% average sensitivity and 96% average specificity were obtained when training on both public datasets, followed by fine-tuning on the 7 PAS stained images. Also, the final weights of a segmentation method based on U-Net have been used to evaluate the results on the same validation datasets. The model achieved high specificity but lower sensitivity, making this method less reliable compared to YOLO with its higher sensitivity. Moreover, obtaining pixel-level WSI annotations for such a network is time-consuming. This makes the whole procedure of fine-tuning the model with limited data harder than for detection methods like YOLO, which only requires bounding boxes around the target objects.
|
2303.09436 | Balanced multiple q-zeta values | We introduce the balanced multiple q-zeta values. They give a new model for
multiple q-zeta values, whose product formula combines the shuffle and stuffle
product for multiple zeta values in a natural way. Moreover, the balanced
multiple q-zeta values are invariant under a very explicit involution. Thus,
all relations among the balanced multiple q-zeta values are conjecturally of a
very simple shape. Examples of the balanced multiple q-zeta values are the
classical Eisenstein series, and they also contain the combinatorial multiple
Eisenstein series. The construction of the balanced multiple q-zeta values is
done on the level of generating series. We introduce a general setup relating
Hoffman's quasi-shuffle products to explicit symmetries among generating series
of words, which gives a clarifying approach to Ecalle's theory of bimoulds.
This allows us to obtain an isomorphism between the underlying Hopf algebras of
words related to the combinatorial bi-multiple Eisenstein series and the
balanced multiple q-zeta values. | Annika Burmester | 2023-03-16T16:11:43Z | http://arxiv.org/abs/2303.09436v2 | # Balanced multiple q-zeta values
###### Abstract.
We introduce the balanced multiple q-zeta values. They give a new model for multiple q-zeta values, whose product formula combines the shuffle and stuffle product for multiple zeta values in a natural way. Moreover, the balanced multiple q-zeta values are invariant under a very explicit involution. Thus, all relations among the balanced multiple q-zeta values are conjecturally of a very simple shape. Examples of the balanced multiple q-zeta values are the classical Eisenstein series, and they also contain the combinatorial multiple Eisenstein series introduced in [1]. The construction of the balanced multiple q-zeta values is done on the level of generating series. We introduce a general setup relating Hoffman's quasi-shuffle products to explicit symmetries among generating series of words, which gives a clarifying approach to Ecalle's theory of bimoulds. This allows us to obtain an isomorphism between the underlying Hopf algebras of words related to the combinatorial bi-multiple Eisenstein series and the balanced multiple q-zeta values.
Key words and phrases:multiple zeta values, multiple q-zeta values, quasi-shuffle Hopf algebras, generating series, bimoulds 2020 Mathematics Subject Classification: 11M32, 05A30, 16T05 The author was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - SFB-TRR 358/1 2023 -- 491392403.
multiple zeta values is initiated in [10], [11]. A survey on achievements in the theory of multiple zeta values can be found in [1] and [12], and all articles related to multiple zeta values are listed in [10].
The product of multiple zeta values can be expressed in two different ways, one is called the _stuffle product_, and the other one is called the _shuffle product_. Both products possess a description in terms of quasi-shuffle algebras (introduced in Section 2). First, consider the alphabet \(\mathcal{Y}=\{y_{1},y_{2},\ldots\}\) and let \(\mathbb{Q}\langle\mathcal{Y}\rangle\) be the non-commutative \(\mathbb{Q}\)-algebra generated by \(\mathcal{Y}\). Then the _stuffle product_\(*\) on \(\mathbb{Q}\langle\mathcal{Y}\rangle\) is defined by \(\mathbf{1}*w=w*\mathbf{1}=w\) and
\[y_{i}u*y_{j}v=y_{i}(u*y_{j}v)+y_{j}(y_{i}u*v)+y_{i+j}(u*v) \tag{1.1}\]
for all \(u,v,w\in\mathbb{Q}\langle\mathcal{Y}\rangle\). The combinatorics of infinite nested sums imply that there is a surjective algebra morphism
\[(\mathbb{Q}\langle\mathcal{Y}\rangle,*) \to\mathcal{Z}, \tag{1.2}\] \[y_{k_{1}}\ldots y_{k_{d}} \mapsto\zeta^{*}(k_{1},\ldots,k_{d}).\]
The elements \(\zeta^{*}(k_{1},\ldots,k_{d})\) are the _stuffle regularized multiple zeta values_, they are uniquely determined by \(\zeta^{*}(k_{1},\ldots,k_{d})=\zeta(k_{1},\ldots,k_{d})\) for all \(k_{1}\geqslant 2,\ k_{2},\ldots,k_{d}\geqslant 1\) and \(\zeta^{*}(1)=0\).
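The recursion (1.1) is easy to implement on words written as tuples of indices. The following minimal sketch (the function names are ours) takes the overlap rule as a parameter, so that the shuffle product below and the balanced product \(*_{b}\) of the later sections are instances of the same scheme:

```python
# Minimal sketch: Hoffman's quasi-shuffle recursion on words given as tuples.
def quasi_shuffle(u, v, bracket):
    # bracket(i, j) returns the overlap letter, or None if the term is absent;
    # the stuffle product (1.1) uses bracket(i, j) = i + j.
    if not u:
        return {v: 1}
    if not v:
        return {u: 1}
    result = {}
    def prepend(letter, product):
        for word, c in product.items():
            key = (letter,) + word
            result[key] = result.get(key, 0) + c
    prepend(u[0], quasi_shuffle(u[1:], v, bracket))
    prepend(v[0], quasi_shuffle(u, v[1:], bracket))
    overlap = bracket(u[0], v[0])
    if overlap is not None:
        prepend(overlap, quasi_shuffle(u[1:], v[1:], bracket))
    return result

# y_2 * y_3 = y_2 y_3 + y_3 y_2 + y_5:
assert quasi_shuffle((2,), (3,), lambda i, j: i + j) == {(2, 3): 1, (3, 2): 1, (5,): 1}
```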
Second, consider the finite alphabet \(\mathcal{X}=\{x_{0},x_{1}\}\) and let \(\mathbb{Q}\langle\mathcal{X}\rangle\) be the non-commutative algebra over \(\mathbb{Q}\) generated by \(\mathcal{X}\). The _shuffle product_ on \(\mathbb{Q}\langle\mathcal{X}\rangle\) is recursively defined by \(\mathbf{1}\sqcup w=w\sqcup\mathbf{1}=w\) and
\[x_{i}u\shuffle x_{j}v=x_{i}(u\shuffle x_{j}v)+x_{j}(x_{i}u\shuffle v) \tag{1.3}\]
for all \(u,v,w\in\mathbb{Q}\langle\mathcal{X}\rangle\). Define \(\mathfrak{h}^{1}\) to be the subspace of \(\mathbb{Q}\langle\mathcal{X}\rangle\) generated by all words not ending in \(x_{0}\), i.e., we have \(\mathfrak{h}^{1}=\mathbb{Q}\mathbf{1}+\mathbb{Q}\langle\mathcal{X}\rangle x_ {1}\). Using the iterated integral representation of multiple zeta values one obtains a surjective algebra morphism
\[(\mathfrak{h}^{1},\shuffle) \to\mathcal{Z}, \tag{1.4}\] \[x_{0}^{k_{1}-1}x_{1}\ldots x_{0}^{k_{d}-1}x_{1} \mapsto\zeta^{\shuffle}(k_{1},\ldots,k_{d}).\]
The elements \(\zeta^{\shuffle}(k_{1},\ldots,k_{d})\) are the _shuffle regularized multiple zeta values_, they are uniquely determined by \(\zeta^{\shuffle}(k_{1},\ldots,k_{d})=\zeta(k_{1},\ldots,k_{d})\) for all \(k_{1}\geqslant 2,\ k_{2},\ldots,k_{d}\geqslant 1\) and \(\zeta^{\shuffle}(1)=0\).
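In the sketch above, the shuffle recursion (1.3) is the degenerate case without overlap term; encoding \(x_{0},x_{1}\) as the letters 0 and 1:

```python
# x_0 x_1 shuffled with x_1 = 2 x_0 x_1 x_1 + x_1 x_0 x_1 (no overlap term):
assert quasi_shuffle((0, 1), (1,), lambda i, j: None) == {(0, 1, 1): 2, (1, 0, 1): 1}
```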
In [13, Theorem 1] an explicit map is given relating the stuffle and the shuffle regularized multiple zeta values, which allows comparing the two product formulas given in (1.2) and (1.4). This yields the _extended double shuffle relations_ among multiple zeta values, which are conjectured to give all algebraic relations in \(\mathcal{Z}\) ([13]).
Quasi-shuffle products can be lifted to symmetries among generating series (Section 3). Consider the generating series of the shuffle resp. stuffle regularized multiple zeta values
\[\mathfrak{z}_{d}^{\bullet}(X_{1},\ldots,X_{d})=\sum_{k_{1},\ldots,k_{d} \geqslant 1}\zeta^{\bullet}(k_{1},\ldots,k_{d})X_{1}^{k_{1}-1}\ldots X_{d}^{k_{ d}-1}\in\mathcal{Z}\llbracket X_{1},\ldots,X_{d}\rrbracket,\qquad d\geqslant 1,\]
with \(\bullet\in\{*,\shuffle\}\). The stuffle and shuffle product in depth 2 translate to
\[\mathfrak{z}_{1}^{*}(X_{1})\mathfrak{z}_{1}^{*}(X_{2}) =\mathfrak{z}_{2}^{*}(X_{1},X_{2})+\mathfrak{z}_{2}^{*}(X_{2},X_{ 1})+\frac{\mathfrak{z}_{1}^{*}(X_{1})-\mathfrak{z}_{1}^{*}(X_{2})}{X_{1}-X_{2 }}, \tag{1.5}\] \[\mathfrak{z}_{1}^{\shuffle}(X_{1})\mathfrak{z}_{1}^{\shuffle}(X _{2}) =\mathfrak{z}_{2}^{\shuffle}(X_{1}+X_{2},X_{2})+\mathfrak{z}_{2}^{ \shuffle}(X_{1}+X_{2},X_{1}).\]
In general depths, there are recursive formulas for the stuffle and shuffle product in terms of generating series of words ([16, Proposition 8]). Those product formulas explain Ecalle's notion of _symmetril_ and _symmetral_ moulds ([11, equation (6), (8)]).
### Multiple q-zeta values
For a better understanding of the structure of the multiple zeta values, we study particular q-analogs of them. Including the additional variable \(q\) reveals new structures. For example, one obtains an involution relating the two product formulas of multiple zeta values, or a derivation with respect to \(q\) known from the theory of quasi-modular forms.
We use the model-free approach to q-analogs of multiple zeta values given in [1]. To integers \(s_{1}\geq 1,\ s_{2},\ldots,s_{l}\geq 0\) and polynomials \(R_{1}\in t\mathbb{Q}[t],\ R_{2},\ldots,R_{l}\in\mathbb{Q}[t]\), associate the _generic multiple q-zeta value_
\[\zeta_{q}(s_{1},...,s_{l};R_{1},...,R_{l})=\sum_{n_{1}>\cdots>n_{l}>0}\frac{R _{1}(q^{n_{1}})}{(1-q^{n_{1}})^{s_{1}}}\cdots\frac{R_{l}(q^{n_{l}})}{(1-q^{n_{ l}})^{s_{l}}}\in\mathbb{Q}[\![q]\!].\]
For \(s_{1}\geq 2\), \(s_{2},\ldots,s_{l}\geq 1\), a generic multiple q-zeta value \(\zeta_{q}(s_{1},\ldots,s_{l};R_{1},\ldots,R_{l})\) is indeed a (modified) q-analog of multiple zeta values (Proposition 8.2). The product of any two generic multiple q-zeta values is a \(\mathbb{Q}\)-linear combination of generic multiple q-zeta values.
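Since these are formal power series in \(q\), they can be computed by truncation. The following minimal numeric sketch (ours; the polynomials \(R_{j}\) are passed as coefficient lists) expands a generic multiple q-zeta value modulo \(q^{N}\); because \(R_{1}\in t\mathbb{Q}[t]\), only the finitely many tuples \(n_{1}>\cdots>n_{l}>0\) with \(n_{1}<N\) contribute:

```python
# Minimal sketch: a generic multiple q-zeta value as a series modulo q^N.
from fractions import Fraction
from itertools import combinations
from math import comb

N = 30  # truncation order

def mul(a, b):  # product of two truncated power series
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def factor(n, s, R):
    # Series of R(q^n) / (1 - q^n)^s; R is a coefficient list, R(t) = sum_i R[i] t^i.
    out = [Fraction(0)] * N
    for i, r in enumerate(R):
        for m in range(N):
            e = n * (i + m)
            if e >= N:
                break
            out[e] += Fraction(r) * (comb(m + s - 1, s - 1) if s > 0 else (m == 0))
    return out

def zeta_q(s_list, R_list):
    total = [Fraction(0)] * N
    for ns in combinations(range(1, N), len(s_list)):
        term = [Fraction(0)] * N
        term[0] = Fraction(1)
        for n, s, R in zip(ns[::-1], s_list, R_list):  # n_1 > ... > n_l
            term = mul(term, factor(n, s, R))
        total = [x + y for x, y in zip(total, term)]
    return total

# zeta_q(2; t) = sum_{n>0} q^n/(1-q^n)^2 = sum_{n>0} sigma_1(n) q^n:
assert zeta_q([2], [[0, 1]])[6] == 12  # sigma_1(6) = 1 + 2 + 3 + 6
```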
**Definition**.: The _algebra of multiple q-zeta values_ is the subalgebra of \(\mathbb{Q}[\![q]\!]\) given by
\[\mathcal{Z}_{q}=\operatorname{span}_{\mathbb{Q}}\bigl{\{}\zeta_{q}(s_{1},...,s _{l};R_{1},...,R_{l})\ \big{|}\ l\geq 0,s_{1}\geq 1,\ s_{2},...,s_{l}\geq 0, \deg(R_{j})\leq s_{j}\bigr{\}},\]
where we set \(\zeta_{q}(\emptyset;\emptyset)=1\).
The additional requirement on the degree of the polynomials is natural. For example, \(\mathcal{Z}_{q}\) is related to polynomial functions on partitions ([1], [2]), which implies the existence of spanning sets of \(\mathcal{Z}_{q}\) invariant under some involution. The space \(\mathcal{Z}_{q}\) also occurs in enumerative geometry. More precisely, A. Okounkov conjectured that certain generating series of Chern numbers on Hilbert schemes of points are always contained in the space \(\mathcal{Z}_{q}\) ([1]).
A _model_ of multiple q-zeta values is a particular assumption on the polynomials \(R_{i}\) (usually these particular polynomials form a basis of \(\mathbb{Q}[t]\)) or just a spanning set of \(\mathcal{Z}_{q}\). About twenty years ago the first models for multiple q-zeta values were introduced independently by D. Bradley, K. Schlesinger, J. Zhao, and W. Zudilin ([1], [12], [13]). Since then several more models appeared in the literature, all of them are contained in the above defined space \(\mathcal{Z}_{q}\). An overview on the various models of multiple q-zeta values and their relations is given in [1] and [2].
Traditionally, the models of multiple q-zeta values focus on a q-analog of the shuffle product given in (1.2) or the stuffle product given in (1.4) for multiple zeta values but do not combine them. So, usually it is difficult to describe a q-analog of the extended double shuffle relations. In rather recent articles on multiple q-zeta values by H. Bachmann ([1]) and by K. Ebrahimi-Fard, D. Manchon, and J. Singer ([1]) the focus changed to obtaining a product formula and some invariance under an involution. Combining the product formula and the involution one easily derives a q-analog of the extended double shuffle relations. In joint work with H. Bachmann ([1]), we constructed the combinatorial bi-multiple
Eisenstein series, which satisfy a weight-graded product formula and are invariant under some weight-homogeneous involution.
### Balanced multiple q-zeta values
A main result of this article gives a new model of multiple q-zeta values, which also satisfies weight-homogeneous relations and is closely related to the Schlesinger-Zudilin multiple q-zeta values studied in [1]. We call the elements in this model the _balanced multiple q-zeta values_\(\zeta_{q}(s_{1},\ldots,s_{l}),\ s_{1}\geqslant 1,\ s_{2},\ldots,s_{l}\geqslant 0\) (Definition 10.1). The classical Eisenstein series and their derivatives are examples of balanced multiple q-zeta values. Moreover, the balanced multiple q-zeta values can be used to conjecturally describe all relations among multiple Eisenstein series. This new model also gives an explicit description of a conjectural weight-grading on the algebra \(\mathcal{Z}_{q}\), which extends the weight-grading of the algebra \(\widetilde{\mathcal{M}}^{\mathbb{Q}}(\operatorname{SL}_{2}(\mathbb{Z}))\) of quasi-modular forms with rational coefficients (introduced in [13]). The balanced multiple q-zeta values are q-analogs of multiple zeta values, taking the limit \(q\to 1\) always yields an element in the algebra of multiple zeta values (Proposition 11.5, Remark 11.6). An advantage of the balanced multiple q-zeta values is that they satisfy a product formula, which can be seen as a balanced combination of the shuffle and stuffle product for the multiple zeta values (Theorem 10.4). Therefore, they provide a setup to study both products of the multiple zeta values at the same time. Moreover, the balanced multiple q-zeta values satisfy linear relations coming from a particular simple involution. They also possess a description in terms of a finite alphabet similar to the case of multiple zeta values (Theorem 11.4).
Consider the alphabet
\[\mathcal{B}=\{b_{0},b_{1},b_{2},\ldots\}\]
and denote by \(\mathbb{Q}\langle\mathcal{B}\rangle\) the free non-commutative \(\mathbb{Q}\)-algebra generated by \(\mathcal{B}\). The _balanced quasi-shuffle product_\(*_{b}\) is the quasi-shuffle product on \(\mathbb{Q}\langle\mathcal{B}\rangle\) recursively defined by \(\mathbf{1}*_{b}w=w*_{b}\mathbf{1}=w\) and
\[b_{i}u*_{b}b_{j}v=b_{i}(u*_{b}b_{j}v)+b_{j}(b_{i}u*_{b}v)+\begin{cases}b_{i+j}( u*_{b}v)&\text{ if }i,j\geqslant 1,\\ 0&\text{else}\end{cases}\]
for all \(b_{i},b_{j}\in\mathcal{B},\ u,v,w\in\mathbb{Q}\langle\mathcal{B}\rangle\). Restricted to the letters \(b_{i}\) for \(i\geqslant 1\) the product \(*_{b}\) is the stuffle product given in (1.1) and modulo words containing the letters \(b_{i}\) for \(i\geqslant 2\) we recover the shuffle product given in (1.3), therefore we call \(*_{b}\) the balanced quasi-shuffle product. The product \(*_{b}\) is exactly the associated weight-graded product to the quasi-stuffle product of the Schlesinger-Zudilin multiple q-zeta values ([1, Theorem 3.3]).
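In terms of the quasi-shuffle sketch from Section 1.1, \(*_{b}\) is the instance whose overlap rule vanishes as soon as one letter is \(b_{0}\):

```python
# b_1 *_b b_0 = b_1 b_0 + b_0 b_1: the overlap term vanishes since one index is 0.
balanced_bracket = lambda i, j: i + j if i > 0 and j > 0 else None
assert quasi_shuffle((1,), (0,), balanced_bracket) == {(1, 0): 1, (0, 1): 1}
```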
Denote by \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) the subalgebra of \(\mathbb{Q}\langle\mathcal{B}\rangle\) generated by all words which do not start in \(b_{0}\), i.e., we have \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}=\mathbb{Q}\langle\mathcal{B}\rangle \backslash b_{0}\mathbb{Q}\langle\mathcal{B}\rangle\). Let \(\tau:\mathbb{Q}\langle\mathcal{B}\rangle^{0}\to\mathbb{Q}\langle\mathcal{B} \rangle^{0}\) be the involution given by \(\tau(\mathbf{1})=\mathbf{1}\) and
\[\tau(b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}})=b_{m_{d}+1}b_{0}^{k _{d}-1}\ldots b_{m_{1}+1}b_{0}^{k_{1}-1}.\]
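Encoding a word \(b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\) as the tuple of pairs \(((k_{1},m_{1}),\ldots,(k_{d},m_{d}))\), the involution takes the following simple form (a sketch with our own encoding, not notation from the literature):

```python
# tau reverses the word and exchanges the roles of the k- and m-exponents.
def tau(word):
    return tuple((m + 1, k - 1) for k, m in reversed(word))

word = ((3, 1), (2, 0))               # the word b_3 b_0 b_2
assert tau(word) == ((1, 1), (2, 2))  # the word b_1 b_0 b_2 b_0^2
assert tau(tau(word)) == word         # tau is an involution
```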
A main result of this work is the following.
**Theorem** (Theorem 10.4).: _There is a \(\tau\)-invariant, surjective algebra morphism_
\[(\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b})\to\mathcal{Z}_{q},\]
\[b_{s_{1}}\dots b_{s_{l}}\mapsto\zeta_{q}(s_{1},\dots,s_{l}),\]
_where \(\zeta_{q}(s_{1},\dots,s_{l})\) are the balanced multiple q-zeta values introduced in Definition 10.1._
To the balanced multiple q-zeta value \(\zeta_{q}(s_{1},\dots,s_{l})\) associate the _weight_
\[s_{1}+\dots+s_{l}+\#\{i\mid s_{i}=0\}.\]
The balanced quasi-shuffle product as well as the \(\tau\)-invariance of the balanced multiple q-zeta values are homogeneous for the weight. Since we expect that all algebraic relations in \(\mathcal{Z}_{q}\) can be deduced from these to sets of relations among balanced multiple q-zeta values (Conjecture 10.5), we expect the algebra \(\mathcal{Z}_{q}\) to be graded with respect to this weight.
Essential for the construction of the balanced multiple q-zeta values are the _combinatorial bi-multiple Eisenstein series_ introduced in [1] (Definition 9.7), which also possess a description in terms of quasi-shuffle algebras. Consider the alphabet \(\mathcal{Y}^{\mathrm{bi}}=\{y_{k,m}\mid k\geqslant 1,m\geqslant 0\}\) and let \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) be the non-commutative \(\mathbb{Q}\)-algebra generated by \(\mathcal{Y}^{\mathrm{bi}}\). The _stuffle product_\(*\) on \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) is recursively defined by \(\mathbf{1}*w=w*\mathbf{1}=w\) and
\[y_{k_{1},m_{1}}u*y_{k_{2},m_{2}}v=y_{k_{1},m_{1}}(u*y_{k_{2},m_{2}}v)+y_{k_{2 },m_{2}}(y_{k_{1},m_{1}}u*v)+y_{k_{1}+k_{2},m_{1}+m_{2}}(u*v)\]
for all \(u,v,w\in\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\). This is the canonical bi-version of the stuffle product given in (1.1). Moreover, there is an involution \(\mathrm{swap}:\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\to\mathbb{Q} \langle\mathcal{Y}^{\mathrm{bi}}\rangle\) (Definition 7.9) closely related to conjugation of partitions.
Following the general explanations in Section 3, we will express the stuffle product \(*\) on \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) and the balanced quasi-shuffle product \(*_{b}\) on \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) by a recursive formula on generating series of words (Theorem 4.1, 5.1). Moreover, we will obtain a _regularization map_
\[\mathrm{reg}:(\mathbb{Q}\langle\mathcal{B}\rangle,*_{b})\to(\mathbb{Q}\langle \mathcal{B}\rangle^{0},*_{b}),\]
which also possesses a description on generating series of words (Theorem 6.3). This regularization allows defining a _regularized coproduct_\(\Delta^{0}_{\mathrm{dec}}:\mathbb{Q}\langle\mathcal{B}\rangle^{0}\to\mathbb{Q} \langle\mathcal{B}\rangle^{0}\otimes\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) compatible with the balanced quasi-shuffle product. The lift to generating series of words allows us to show the following.
**Theorem** (Theorem 7.10).: _There is an isomorphism of weight-graded Hopf algebras_
\[\varphi_{\#}:(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle,*,\Delta_{ \mathrm{dec}})\to(\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b},\Delta^{0}_{ \mathrm{dec}}),\]
_which satisfies \(\varphi_{\#}\circ\mathrm{swap}=\tau\circ\varphi_{\#}\)._
The stuffle product \(*\) as well as the involution swap on \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) can be rephrased in terms of bimoulds (compare Section 3); this leads to the symmetries usually called _symmetril_ and _swap invariant_ (these symmetries are extensively studied in [11], [12]). An example of a symmetril and swap invariant bimould is the bimould \(\mathfrak{G}=(\mathfrak{G}_{d})_{d\geqslant 0}\) of generating series of the combinatorial bi-multiple Eisenstein series (Definition 9.5). On the other hand, the balanced quasi-shuffle product \(*_{b}\) and the involution \(\tau\) on \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) also lead to symmetries of bimoulds, which we will refer to as _b-symmetril_ and _\(\tau\)-invariant_. Theorem 7.10 gives an isomorphism between symmetril and swap invariant bimoulds and b-symmetril and \(\tau\)-invariant bimoulds. Applying this isomorphism to the bimould \(\mathfrak{G}=(\mathfrak{G}_{d})_{d\geqslant 0}\) of generating series of the combinatorial bi-multiple Eisenstein series yields the bimould \(\mathfrak{B}=(\mathfrak{B}_{d})_{d\geqslant 0}\) of generating series of
the balanced multiple q-zeta values (Definition 10.1). For example, the b-symmetrility of \(\mathfrak{B}\) in depth 2 reads
\[\mathfrak{B}_{1}\binom{X_{1}}{Y_{1}}\cdot\mathfrak{B}_{1}\binom{X_{2}}{Y_{2}}= \mathfrak{B}_{2}\binom{X_{2},X_{1}}{Y_{2},Y_{1}+Y_{2}}+\mathfrak{B}_{2}\binom{X_{1}, X_{2}}{Y_{1},Y_{1}+Y_{2}}+\frac{\mathfrak{B}_{1}\binom{X_{1}}{Y_{1}+Y_{2}}- \mathfrak{B}_{1}\binom{X_{2}}{Y_{1}+Y_{2}}}{X_{1}-X_{2}}.\]
It combines the product formulas on the generating series of the stuffle and shuffle regularized multiple zeta values given in (1.5). The results in Section 7 for the generating series of words allow us to deduce the previously presented properties of the balanced multiple q-zeta values from the b-symmetrility and the \(\tau\)-invariance of \(\mathfrak{B}\).
### Structure of the paper
We start by recalling Hoffman's quasi-shuffle Hopf algebras ([10]) in Section 2. Then in Section 3 we develop a general approach relating quasi-shuffle products to symmetries among generating series resp. bimoulds. These general observations will be applied to the quasi-shuffle algebras \((\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle,*)\) and \((\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b})\) in Sections 4 and 5. In Section 6, we introduce the regularization and the regularized coproduct for \((\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b})\) and reformulate both in terms of generating series. This allows us to obtain the Hopf algebra isomorphism \((\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle,*,\Delta_{\mathrm{dec}}) \rightarrow(\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b},\Delta_{\mathrm{dec}}^{0})\) in Section 7. In Section 8 we recall the definition of the algebra \(\mathcal{Z}_{q}\) of multiple q-zeta values (as given in [1]) and briefly explain its relations to multiple zeta values, quasi-modular forms, and partitions. Then in Section 9 we introduce the combinatorial bi-multiple Eisenstein series constructed in [1], which form a spanning set of \(\mathcal{Z}_{q}\). Applying the results of Section 7 to the combinatorial bi-multiple Eisenstein series, we obtain the balanced multiple q-zeta values and some of their most important properties in Section 10. Finally, we give some more properties of the balanced multiple q-zeta values in Section 11.
### Acknowledgment
This work was mainly part of my PhD thesis. Therefore, I deeply thank my supervisor Ulf Kuhn for many helpful comments and discussions on these contents. Moreover, I would like to thank Claudia Alfes-Neumann, Henrik Bachmann, Jan-Willem van Ittersum, and Koji Tasaka for valuable comments on these topics within my PhD project or on earlier versions of this paper.
## 2. Quasi-shuffle Hopf algebras
We give a short introduction to quasi-shuffle Hopf algebras, which were introduced and studied in [10] and [11]. They will be used later to describe the product of the algebra of multiple q-zeta values in terms of particular spanning sets.
In the following, \(R\) is some arbitrary fixed \(\mathbb{Q}\)-algebra. Let \(\mathcal{A}\) be an _alphabet_, i.e., a countable set whose elements are called _letters_. Denote by \(R\mathcal{A}\) the \(R\)-module spanned by the letters of \(\mathcal{A}\), and let \(R\langle\mathcal{A}\rangle\) be the free non-commutative algebra generated by the alphabet \(\mathcal{A}\). The monic monomials in \(R\langle\mathcal{A}\rangle\) are called _words_ with letters in \(\mathcal{A}\), and the set of all words is denoted by \(\mathcal{A}^{*}\). Moreover, let \(\mathbf{1}\) be the empty word.
**Definition 2.1**.: Let \(\diamond:R\mathcal{A}\times R\mathcal{A}\to R\mathcal{A}\) be an associative and commutative product. Define the _quasi-shuffle product_\(*_{\diamond}\) on \(R\langle\mathcal{A}\rangle\) recursively by \(\mathbf{1}*_{\diamond}w=w*_{\diamond}\mathbf{1}=w\) and
\[au*_{\diamond}bv=a(u*_{\diamond}bv)+b(au*_{\diamond}v)+(a\diamond b)(u*_{ \diamond}v)\]
for all \(u,v,w\in R\langle\mathcal{A}\rangle\) and \(a,b\in\mathcal{A}\).
Note that a quasi-shuffle product \(*_{\diamond}\) can be equally defined recursively from the left and from the right.
**Example 2.2**.: 1. Define
\[a\diamond b=0\text{ for all }a,b\in\mathcal{A},\]
then we get the well-known _shuffle product_, which is usually denoted by \(\sqcup\sqcup\).
2. Consider the bi-alphabet \(\mathcal{Y}^{\mathrm{bi}}=\{y_{k,m}\mid k\geqslant 1,\ m\geqslant 0\}\) and on \(R\mathcal{Y}^{\mathrm{bi}}\) define the product
\[y_{k_{1},m_{1}}\diamond y_{k_{2},m_{2}}=y_{k_{1}+k_{2},m_{1}+m_{2}}.\]
The obtained quasi-shuffle product is called the _stuffle product_ and is denoted by \(*\). It appears in the context of the combinatorial bi-multiple Eisenstein series (Corollary 9.10) and is exactly the associated weight-graded product to the quasi-shuffle product of Bachmann's bi-brackets ([1, Theorem 3.6]).
3. Consider the alphabet \(\mathcal{B}=\{b_{0},b_{1},b_{2},\ldots\}\) and define on \(R\mathcal{B}\) the product
\[b_{i}\diamond_{b}b_{j}=\begin{cases}b_{i+j},&\text{if }i,j\geqslant 1,\\ 0&\text{else}.\end{cases}\]
This quasi-shuffle product occurs for balanced multiple q-zeta values (Theorem 10.4) and will be called the _balanced quasi-shuffle product_; it is denoted by \(*_{b}\). It is exactly the associated weight-graded product to the quasi-shuffle product of the Schlesinger-Zudilin multiple q-zeta values ([14, Theorem 3.3]).
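The recursion in Definition 2.1 is easy to implement directly. The following Python sketch (an illustration under our own encoding, with hypothetical names `diamond_b`, `add_to`, `quasi_shuffle`, not taken from any library) encodes a letter \(b_{i}\) as the integer \(i\), a word as a tuple of letters, and a \(\mathbb{Q}\)-linear combination of words as a dictionary mapping words to coefficients; it computes the balanced quasi-shuffle product \(*_{b}\) of Example 2.2.3.

```python
def diamond_b(i, j):
    """The product b_i <>_b b_j of Example 2.2.3: b_{i+j} if i, j >= 1, and 0 otherwise."""
    return i + j if i >= 1 and j >= 1 else None  # None encodes the zero element

def add_to(comb, word, coeff):
    """Add coeff * word to the linear combination comb (a dict word -> coefficient)."""
    comb[word] = comb.get(word, 0) + coeff

def quasi_shuffle(u, v, diamond=diamond_b):
    """au *_<> bv = a(u *_<> bv) + b(au *_<> v) + (a <> b)(u *_<> v), cf. Definition 2.1."""
    if not u:
        return {v: 1}
    if not v:
        return {u: 1}
    a, b = u[0], v[0]
    result = {}
    for w, c in quasi_shuffle(u[1:], v, diamond).items():
        add_to(result, (a,) + w, c)
    for w, c in quasi_shuffle(u, v[1:], diamond).items():
        add_to(result, (b,) + w, c)
    ab = diamond(a, b)
    if ab is not None:
        for w, c in quasi_shuffle(u[1:], v[1:], diamond).items():
            add_to(result, (ab,) + w, c)
    return result
```

For instance, `quasi_shuffle((1,), (1,))` returns `{(1, 1): 2, (2,): 1}`, i.e., \(b_{1}*_{b}b_{1}=2\,b_{1}b_{1}+b_{2}\); passing `diamond=lambda i, j: None` instead recovers the shuffle product of Example 2.2.1.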
Let \(\Delta_{\mathrm{dec}}:\mathbb{Q}\langle\mathcal{A}\rangle\to\mathbb{Q}\langle \mathcal{A}\rangle\otimes\mathbb{Q}\langle\mathcal{A}\rangle\) be the _deconcatenation coproduct_, i.e., we have
\[\Delta_{\mathrm{dec}}(w)=\sum_{uv=w}u\otimes v\quad\text{ for each word }w\in\mathbb{Q}\langle\mathcal{A}\rangle. \tag{2.1}\]
We close this section with the following structure theorem of Hoffman.
**Theorem 2.3**.: _([11, Theorem 3.2, 3.3]) The tuple \((R\langle\mathcal{A}\rangle,*_{\diamond},\Delta_{\mathrm{dec}})\) is an associative, commutative Hopf algebra. Moreover, any two quasi-shuffle Hopf algebras over the same alphabet are isomorphic via an explicit exponential map._
It is well-known that the shuffle algebra \((\mathbb{Q}\langle\mathcal{A}\rangle,\sqcup\sqcup)\) is a free polynomial algebra on the Lyndon words ([11, Theorem 4.9 (ii)]). The theorem shows that this holds for any quasi-shuffle algebra.
## 3. Quasi-shuffle products and generating series
Inspired by [10, Section 7], we will explain how quasi-shuffle products can be translated into symmetries of commutative generating series resp. bimoulds. This gives a connection between Hoffman's theory of quasi-shuffle Hopf algebras ([11], [12]) and Ecalle's theory of bimoulds ([1], [21]). We will use this general setup later to construct a particularly well-behaved model for multiple q-zeta values and to explain its properties.
Consider some given quasi-shuffle algebra \((\mathbb{Q}\langle\mathcal{A}\rangle,*_{\circ})\). Define the _generic diagonal series_ of \(\mathbb{Q}\langle\mathcal{A}\rangle\) by
\[\mathcal{W}(\mathcal{A})=\sum_{w\in\mathcal{A}^{*}}w\otimes w.\]
We want to apply \(\mathbb{Q}\)-linear maps to the first factors, usually denoted by \(\varphi\), or to the second factors, usually denoted by \(\rho\), of \(\mathcal{W}(\mathcal{A})\) to get generating series of different kinds and to describe the resulting properties. We begin with an abstract discussion; a more detailed explanation of the special cases is given later.
Let \(\mathrm{dep}:\mathcal{A}^{*}\to\mathbb{Z}_{\geqslant 0}\) be a _depth map_ compatible with concatenation, i.e., we have
\[\mathrm{dep}(uv)=\mathrm{dep}(u)+\mathrm{dep}(v),\qquad u,v\in\mathcal{A}^{*}.\]
Denote by \((\mathcal{A}^{*})^{(d)}\) the set of all words in \(\mathcal{A}^{*}\) of depth \(d\) and by \(\mathbb{Q}\langle\mathcal{A}\rangle^{(d)}\) the space spanned by \((\mathcal{A}^{*})^{(d)}\).
**Definition 3.1**.: Let \(\rho_{\mathcal{A}}:\mathbb{Q}\langle\mathcal{A}\rangle\to\mathbb{Q}[Z_{1},Z_{2 },\ldots]\) be a \(\mathbb{Q}\)-linear map having the following properties with respect to the depth map:
1. There is a strictly increasing sequence \(\ell(0)<\ell(1)<\ell(2)<\dots\) of non-negative integers, such that for each \(d\geqslant 0\) the restriction of \(\rho_{\mathcal{A}}\) to \(\mathbb{Q}\langle\mathcal{A}\rangle^{(d)}\) is an injective \(\mathbb{Q}\)-linear map \[\rho_{\mathcal{A}}|_{\mathbb{Q}\langle\mathcal{A}\rangle^{(d)}}:\mathbb{Q} \langle\mathcal{A}\rangle^{(d)}\to\mathbb{Q}[Z_{1},\ldots,Z_{\ell(d)}].\]
2. We have \[\rho_{\mathcal{A}}(uv)=\rho_{\mathcal{A}}(u)\rho_{\mathcal{A}}^{[\ell(n)]}(v ),\qquad u\in\mathbb{Q}\langle\mathcal{A}\rangle^{(n)},\ v\in\mathbb{Q} \langle\mathcal{A}\rangle,\] where \(\rho_{\mathcal{A}}^{[n]}\) denotes the \(\mathbb{Q}\)-linear map obtained from \(\rho_{\mathcal{A}}\) by shifting the variables \(Z_{i}\) to \(Z_{n+i}\), so \(\rho_{\mathcal{A}}^{[n]}\Big{(}\mathbb{Q}\langle\mathcal{A}\rangle^{(d)} \Big{)}\subset\mathbb{Q}[Z_{n+1},\ldots,Z_{n+\ell(d)}]\).
**Definition 3.2**.: Let \(\rho_{\mathcal{A}}:\mathbb{Q}\langle\mathcal{A}\rangle\to\mathbb{Q}[Z_{1},Z_{2 },\ldots]\) be a \(\mathbb{Q}\)-linear map as in Definition 3.1. The (commutative) _generating series of words_ in \(\mathbb{Q}\langle\mathcal{A}\rangle\) associated to \(\rho_{\mathcal{A}}\) are given by
\[\rho_{\mathcal{A}}(\mathcal{W})_{d}(Z_{1},\ldots,Z_{\ell(d)})=\sum_{w\in( \mathcal{A}^{*})^{(d)}}w\,\rho_{\mathcal{A}}(w)\in\mathbb{Q}\langle\mathcal{A} \rangle[\![Z_{1},\ldots,Z_{\ell(d)}]\!],\qquad d\geqslant 0.\]
We will often drop the index \(d\) and simply write \(\rho_{\mathcal{A}}(\mathcal{W})(Z_{1},\ldots,Z_{\ell(d)})\).
**Example 3.3**.: Consider the alphabet \(\mathcal{Y}=\{y_{1},y_{2},y_{3},\ldots\}\) and define the depth of a word in \(\mathbb{Q}\langle\mathcal{Y}\rangle\) by
\[\mathrm{dep}(y_{k_{1}}\ldots y_{k_{d}})=d.\]
The map
\[\rho_{\mathcal{Y}}:\mathbb{Q}\langle\mathcal{Y}\rangle \to\mathbb{Q}[Z_{1},Z_{2},\ldots],\] \[y_{k_{1}}\ldots y_{k_{d}} \mapsto Z_{1}^{k_{1}-1}\ldots Z_{d}^{k_{d}-1}\]
is a \(\mathbb{Q}\)-linear map satisfying the conditions in Definition 3.1 with \(\ell(d)=d\). The associated generating series of words are given by \(\rho_{\mathcal{Y}}(\mathcal{W})_{0}=\mathbf{1}\) and
\[\rho_{\mathcal{Y}}(\mathcal{W})_{d}(Z_{1},\ldots,Z_{d})=\sum_{k_{1},\ldots,k_ {d}\geqslant 1}y_{k_{1}}\ldots y_{k_{d}}Z_{1}^{k_{1}-1}\ldots Z_{d}^{k_{d}-1}, \qquad d\geqslant 1.\]
In the following sections, we will compute more involved examples.
**Proposition 3.4**.: _Let \(\rho_{\mathcal{A}}:\mathbb{Q}\langle\mathcal{A}\rangle\to\mathbb{Q}[Z_{1},Z_{2},\ldots]\) be a \(\mathbb{Q}\)-linear map as in Definition 3.1 with \(\ell(d_{1})+\ell(d_{2})=\ell(d_{1}+d_{2})\) for all \(d_{1},d_{2}\geqslant 0\). Then the \(\mathbb{Q}[\![Z_{1},Z_{2},\ldots]\!]\)-linear extension of the concatenation product \(\cdot\) satisfies_
\[\rho_{\mathcal{A}}(\mathcal{W})_{n}(Z_{1},\ldots,Z_{\ell(n)})\cdot\rho_{ \mathcal{A}}(\mathcal{W})_{d-n}(Z_{\ell(n)+1},\ldots,Z_{\ell(d)})=\rho_{ \mathcal{A}}(\mathcal{W})_{d}(Z_{1},\ldots,Z_{\ell(d)})\]
_for all \(0\leqslant n\leqslant d\)._
Proof.: For \(n=0,d\) the formula is obvious. For \(0<n<d\), compute
\[\rho_{\mathcal{A}}(\mathcal{W})_{n}(Z_{1},\ldots,Z_{\ell(n)}) \cdot\rho_{\mathcal{A}}(\mathcal{W})_{d-n}(Z_{\ell(n)+1},\ldots,Z_{\ell(d)})= \sum_{u\in(\mathcal{A}^{*})^{(n)}}\sum_{v\in(\mathcal{A}^{*})^{(d-n)}}\!\!uv\, \rho_{\mathcal{A}}(u)\rho_{\mathcal{A}}^{[\ell(n)]}(v)\] \[=\sum_{u\in(\mathcal{A}^{*})^{(n)}}\sum_{v\in(\mathcal{A}^{*})^ {(d-n)}}uv\,\rho_{\mathcal{A}}(uv)=\sum_{w\in(\mathcal{A}^{*})^{(d)}}w\,\rho_{ \mathcal{A}}(w)=\rho_{\mathcal{A}}(\mathcal{W})_{d}(Z_{1},\ldots,Z_{\ell(d)}).\]
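As a concrete illustration of Proposition 3.4 (a truncated sympy sketch under our own encoding, not part of the original text), one can store a generating series of words as a dictionary mapping words to monomials and check the concatenation formula for the map \(\rho_{\mathcal{Y}}\) of Example 3.3 in depth \(2\):

```python
import sympy as sp
from itertools import product

Z1, Z2 = sp.symbols('Z1 Z2')
KMAX = 4  # truncate the alphabet at y_4; a word y_{k1}...y_{kd} is the tuple (k1, ..., kd)

def W1(Z):
    """Truncation of rho_Y(W)_1: the word (k,) carries the monomial Z^(k-1)."""
    return {(k,): Z**(k - 1) for k in range(1, KMAX + 1)}

def concat(A, B):
    """Linear extension of the concatenation product to generating series of words."""
    return {u + v: a * b for (u, a), (v, b) in product(A.items(), B.items())}

# truncation of rho_Y(W)_2: the word (k1, k2) carries Z1^(k1-1) * Z2^(k2-1)
W2 = {(k1, k2): Z1**(k1 - 1) * Z2**(k2 - 1)
      for k1 in range(1, KMAX + 1) for k2 in range(1, KMAX + 1)}

assert concat(W1(Z1), W1(Z2)) == W2  # Proposition 3.4 with n = 1, d = 2
```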
If we extend the quasi-shuffle product \(*_{\circ}\) on \(\mathbb{Q}\langle\mathcal{A}\rangle\) to \(\mathbb{Q}\langle\mathcal{A}\rangle[\![Z_{1},Z_{2},\ldots]\!]\) by \(\mathbb{Q}[\![Z_{1},Z_{2},\ldots]\!]\)-linearity, then by definition of the quasi-shuffle product \(*_{\circ}\) we have for all \(0\leqslant n\leqslant d\) that
\[\rho_{\mathcal{A}}(\mathcal{W})_{n}(Z_{1},\ldots,Z_{\ell(n)})*_{\circ}\rho_{ \mathcal{A}}(\mathcal{W})_{d-n}(Z_{\ell(n)+1},\ldots,Z_{\ell(d)})\in\mathbb{Q} \langle\mathcal{A}\rangle[\![Z_{1},\ldots,Z_{\ell(d)}]\!].\]
For some very well-behaved quasi-shuffle products, appearing for example in the theory of multiple zeta values and multiple q-zeta values, it is possible to describe this product on the generating series of words by an explicit recursive formula with respect to concatenation. This will explain the origin of a particular kind of symmetries occurring in Ecalle's theory of bimoulds ([1]). Some important examples related to multiple q-zeta values will be discussed in the following sections; for some other examples, also related to multiple zeta values, we refer to [1, Appendix A.4 + A.5].
To translate the quasi-shuffle products into symmetries among bimoulds, we apply a \(\mathbb{Q}\)-linear map \(\varphi:\mathbb{Q}\langle\mathcal{A}\rangle\to R\) into some \(\mathbb{Q}\)-algebra \(R\) to the first component of such a generating series of words \(\rho_{\mathcal{A}}(\mathcal{W})\). In other words, we consider the image of the generic diagonal series \(\mathcal{W}(\mathcal{A})\) under \(\varphi\otimes\rho_{\mathcal{A}}\).
**Definition 3.5**.: Let \(\rho_{\mathcal{A}}:\mathbb{Q}\langle\mathcal{A}\rangle\to\mathbb{Q}[Z_{1},Z_ {2},\ldots]\) be a \(\mathbb{Q}\)-linear map as in Definition 3.1, \(R\) be a \(\mathbb{Q}\)-algebra and \(\varphi:\mathbb{Q}\langle\mathcal{A}\rangle\to R\) be a \(\mathbb{Q}\)-linear map. Then the _(commutative) generating series with coefficients in \(R\)_ associated to \((\varphi,\rho_{\mathcal{A}})\) are given by
\[(\varphi\otimes\rho_{\mathcal{A}})(\mathcal{W})_{d}(Z_{1},\ldots,Z_{\ell(d)})= \sum_{w\in(\mathcal{A}^{*})^{(d)}}\varphi(w)\rho_{\mathcal{A}}(w)\in R[\![Z_{1},\ldots,Z_{\ell(d)}]\!],\qquad d\geqslant 0.\]
As before, we will often drop the depth index and simply write \((\varphi\otimes\rho_{\mathcal{A}})(\mathcal{W})(Z_{1},\dots,Z_{\ell(d)})\). Such sequences \((\varphi\otimes\rho_{\mathcal{A}})(\mathcal{W})=\Big{(}(\varphi\otimes\rho_{ \mathcal{A}})(\mathcal{W})_{d}\Big{)}_{d\geqslant 0}\) occur as examples of Ecalle's (bi-)moulds ([1]).
**Definition 3.6**.: Let \(R\) be a \(\mathbb{Q}\)-algebra. A sequence
\[M=(M_{d}(X_{1},\dots,X_{d}))_{d\geqslant 0}=(M_{0}(\emptyset),M_{1}(X_{1}),M_{2}( X_{1},X_{2}),\dots)\in\prod_{d\geqslant 0}R[\![X_{1},\dots,X_{d}]\!]\]
is called a _mould_ with coefficients in \(R\). Similarly, a sequence
\[M=\bigg{(}M_{d}\!\left(\!\!\begin{array}{c}X_{1},\dots,X_{d}\\ Y_{1},\dots,Y_{d}\end{array}\!\right)\bigg{)}_{d\geqslant 0}=\bigg{(}M_{0}( \emptyset),M_{1}\!\left(\!\!\begin{array}{c}X_{1}\\ Y_{1}\end{array}\!\right),M_{2}\!\left(\!\!\begin{array}{c}X_{1},X_{2}\\ Y_{1},Y_{2}\end{array}\!\right),\dots\bigg{)}\in\prod_{d\geqslant 0}R[\![X_{1},Y_{1}, \dots,X_{d},Y_{d}]\!]\]
is called a _bimould_ with coefficients in \(R\).
If \(\rho_{\mathcal{A}}:\mathbb{Q}\langle\mathcal{A}\rangle\to\mathbb{Q}[X_{1},X_{2}, \dots]\) is a map as in Definition 3.1 with \(\ell(d)=d\) for all \(d\geqslant 0\), then the corresponding generating series of words \(\rho_{\mathcal{A}}(\mathcal{W})=(\rho_{\mathcal{A}}(\mathcal{W})_{d})_{d \geqslant 0}\) are a mould with coefficients in \(\mathbb{Q}\langle\mathcal{A}\rangle\). For any \(\mathbb{Q}\)-linear map \(\varphi:\mathbb{Q}\langle\mathcal{A}\rangle\to R\), the generating series \((\varphi\otimes\rho_{\mathcal{A}})(\mathcal{W})=((\varphi\otimes\rho_{ \mathcal{A}})(\mathcal{W})_{d})_{d\geqslant 0}\) associated to \((\varphi,\rho_{\mathcal{A}})\) are a mould with coefficients in \(R\). Similarly, if \(\rho_{\mathcal{A}}:\mathbb{Q}\langle\mathcal{A}\rangle\to\mathbb{Q}[X_{1},Y_{ 1},X_{2},Y_{2},\dots]\) is a map as in Definition 3.1 with \(\ell(d)=2d\) for every \(d\geqslant 0\), then the corresponding generating series of words \(\rho_{\mathcal{A}}(\mathcal{W})=(\rho_{\mathcal{A}}(\mathcal{W})_{d})_{d \geqslant 0}\) are a bimould with coefficients in the algebra \(\mathbb{Q}\langle\mathcal{A}\rangle\), and also the generating series \((\varphi\otimes\rho_{\mathcal{A}})(\mathcal{W})=((\varphi\otimes\rho_{ \mathcal{A}})(\mathcal{W})_{d})_{d\geqslant 0}\) associated to \((\varphi,\rho_{\mathcal{A}})\) for each \(\mathbb{Q}\)-linear map \(\varphi:\mathbb{Q}\langle\mathcal{A}\rangle\to R\) are a bimould with coefficients in \(R\).
**Definition 3.7**.: Let \(R\) be a \(\mathbb{Q}\)-algebra and \(\ell(0)<\ell(1)<\ell(2)<\dots\) a strictly increasing sequence of non-negative integers. A sequence \(M=(M_{d})_{d\geqslant 0}\) in \(\prod_{d\geqslant 0}R[\![Z_{1},\dots,Z_{\ell(d)}]\!]\) is called \((\varphi_{*_{\circ}},\rho_{\mathcal{A}})\)_-symmetric_ if there exist a \(\mathbb{Q}\)-algebra morphism \(\varphi_{*_{\circ}}:(\mathbb{Q}\langle\mathcal{A}\rangle,*_{\circ})\to R\) and a \(\mathbb{Q}\)-linear map \(\rho_{\mathcal{A}}\) satisfying the conditions in Definition 3.1, such that for all \(d\geqslant 0\)
\[M_{d}=(\varphi_{*_{\circ}}\otimes\rho_{\mathcal{A}})(\mathcal{W})_{d}.\]
In the following sections, we will consider two particular quasi-shuffle algebras, which will give rise to the notion of symmetril and b-symmetril bimoulds following Definition 3.7.
Let \(M=(M_{d})_{d\geqslant 0}\in\prod_{d\geqslant 0}R[\![Z_{1},\dots,Z_{\ell(d)}]\!]\) be such a \((\varphi_{*_{\circ}},\rho_{\mathcal{A}})\)-symmetric sequence. Then, one obtains immediately from the definition that for \(0<n<d\)
\[M_{n}(Z_{1},\dots,Z_{\ell(n)})M_{d-n}(Z_{\ell(n)+1},\dots,Z_{\ell (d)})\\ =\varphi_{*_{\circ}}\Big{(}\rho_{\mathcal{A}}(\mathcal{W})_{n}(Z _{1},\dots,Z_{\ell(n)})*_{\circ}\rho_{\mathcal{A}}(\mathcal{W})_{d-n}(Z_{\ell(n )+1},\dots,Z_{\ell(d)})\Big{)}.\]
The right-hand side is an element in \(\mathbb{Q}\langle\mathcal{A}\rangle[\![Z_{1},\dots,Z_{\ell(d)}]\!]\), so the map \(\varphi_{*_{\circ}}:\mathbb{Q}\langle\mathcal{A}\rangle\to R\) needs to be extended by \(\mathbb{Q}[\![Z_{1},Z_{2},\dots]\!]\)-linearity. As mentioned before, in some special cases the right-hand side can be described in terms of an explicit recursive formula.
Alternatively, we can first apply the evaluation map \(\varphi:\mathbb{Q}\langle\mathcal{A}\rangle\to R\) for some \(\mathbb{Q}\)-algebra \(R\) to the generic diagonal series \(\mathcal{W}(\mathcal{A})\).
**Definition 3.8**.: Let \(R\) be a \(\mathbb{Q}\)-algebra and \(\varphi:\mathbb{Q}\langle\mathcal{A}\rangle\to R\) be a \(\mathbb{Q}\)-linear map. Define the _(non-commutative) generating series with coefficients in \(R\)_ associated to \(\varphi\) by
\[\varphi(\mathcal{W}(\mathcal{A}))=\sum\limits_{w\in\mathcal{A}^{*}}\varphi(w)w \in R\langle\langle\mathcal{A}\rangle\rangle.\]
Here \(R\langle\langle\mathcal{A}\rangle\rangle\) denotes the non-commutative algebra of power series over \(R\) generated by \(\mathcal{A}\).
The generating series \(\varphi(\mathcal{W}(\mathcal{A}))\) can also be decomposed into its homogeneous depth components \(\varphi(\mathcal{W}(\mathcal{A}))_{d}\) for \(d\geq 0\) (similar to Definition 3.2).
**Proposition 3.9**.: _Let \(R\) be a \(\mathbb{Q}\)-algebra and \(\varphi:\mathbb{Q}\langle\mathcal{A}\rangle\to R\) be a \(\mathbb{Q}\)-linear map. Assume that the algebra \((\mathbb{Q}\langle\mathcal{A}\rangle,*_{\circ})\) is graded with \(\deg(a)\geq 1\) for all \(a\in\mathcal{A}\), and denote by \(\Delta_{*_{\circ}}\) the dual completed coproduct to \(*_{\circ}\). The map \(\varphi\) is an algebra morphism for the quasi-shuffle product \(*_{\circ}\) if and only if \(\varphi(\mathcal{W}(\mathcal{A}))\) is grouplike for the coproduct \(\Delta_{*_{\circ}}\)._
Proof.: By duality, we have
\[\Delta_{*_{\circ}}\Big{(}\varphi(\mathcal{W}(\mathcal{A}))\Big{)}=\Delta_{*_{ \circ}}\left(\sum\limits_{w\in\mathcal{A}^{*}}\varphi(w)w\right)=\sum\limits_{ u,v\in\mathcal{A}^{*}}\varphi(u*_{\circ}v)u\otimes v.\]
So \(\varphi\) is an algebra morphism for the quasi-shuffle product \(*_{\circ}\) if and only if
\[\Delta_{*_{\circ}}\Big{(}\varphi(\mathcal{W}(\mathcal{A}))\Big{)}=\sum\limits _{u,v\in\mathcal{A}^{*}}\varphi(u)\varphi(v)u\otimes v=\varphi(\mathcal{W}( \mathcal{A}))\otimes\varphi(\mathcal{W}(\mathcal{A})).\]
In particular, applying the map \(\varphi\) always involves a dualization process. Summarizing this section, we obtain the following diagram
\[\begin{CD}\sum\limits_{w\in\mathcal{A}^{*}}w\otimes w@>{\varphi\otimes 1}>>\sum\limits_{w\in\mathcal{A}^{*}}\varphi(w)w\\ @V{1\otimes\rho}VV@VV{1\otimes\rho}V\\ \left(\sum\limits_{w\in(\mathcal{A}^{*})^{(d)}}w\,\rho(w)\right)_{d\geq 0}@>{\varphi\otimes 1}>>\left(\sum\limits_{w\in(\mathcal{A}^{*})^{(d)}}\varphi(w)\rho(w)\right)_{d\geq 0}\end{CD}.\]
## 4. Generating series of words in \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\)
Consider the alphabet \(\mathcal{Y}^{\mathrm{bi}}=\{y_{k,m}\mid k\geq 1,\ m\geq 0\}\) and let the _weight_ and the _depth_ of a word in \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) be given by
\[\mathrm{wt}(y_{k_{1},m_{1}}\dots y_{k_{d},m_{d}})=k_{1}+\dots+k_{d}+m_{1}+ \dots+m_{d},\qquad\mathrm{dep}(y_{k_{1},m_{1}}\dots y_{k_{d},m_{d}})=d,\]
for all \(k_{1},\ldots,k_{d}\geqslant 1,\ m_{1},\ldots,m_{d}\geqslant 0\). Define the \(\mathbb{Q}\)-linear map
\[\rho_{\mathcal{Y}^{\mathrm{bi}}}:\mathbb{Q}\langle\mathcal{Y}^{ \mathrm{bi}}\rangle \to\mathbb{Q}[X_{1},Y_{1},X_{2},Y_{2},\ldots],\] \[y_{k_{1},m_{1}}\ldots y_{k_{d},m_{d}} \mapsto X_{1}^{k_{1}-1}\frac{Y_{1}^{m_{1}}}{m_{1}!}\ldots X_{d}^{ k_{d}-1}\frac{Y_{d}^{m_{d}}}{m_{d}!},\]
then \(\rho_{\mathcal{Y}^{\mathrm{bi}}}\) satisfies the conditions in Definition 3.1 with \(\ell(d)=2d\) for all \(d\geqslant 0\). The generating series of words in \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) associated to \(\rho_{\mathcal{Y}^{\mathrm{bi}}}\) are given by \(\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})_{0}=\mathbf{1}\) and for \(d\geqslant 1\) by
\[\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})_{d}\binom{X_{1},\ldots,X_{d}}{Y _{1},\ldots,Y_{d}}=\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d}\geqslant 1\\ m_{1},\ldots,m_{d}\geqslant 0\end{subarray}}y_{k_{1},m_{1}}\ldots y_{k_{d},m_{d}}X_{1 }^{k_{1}-1}\frac{Y_{1}^{m_{1}}}{m_{1}!}\ldots X_{d}^{k_{d}-1}\frac{Y_{d}^{m_{ d}}}{m_{d}!}.\]
The stuffle product \(*\) on \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) is one of the well-behaved quasi-shuffle products, where we can give an explicit recursive formula on the level of generating series of words.
**Theorem 4.1**.: _For the stuffle product \(*\) on \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) (Example 2.2.2.), one obtains for all \(0<n<d\) that \(\mathbf{1}*\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})_{n}=\rho_{\mathcal{Y}^ {\mathrm{bi}}}(\mathcal{W})_{n}*\mathbf{1}=\rho_{\mathcal{Y}^{\mathrm{bi}}}( \mathcal{W})_{n}\) and_
\[\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1},\ldots,X _{n}}{Y_{1},\ldots,Y_{n}}*\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{ X_{n+1},\ldots,X_{d}}{Y_{n+1},\ldots,Y_{d}}\] \[=\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1}}{Y_{1} }\cdot\left(\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{2},\ldots,X _{n}}{Y_{2},\ldots,Y_{n}}*\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W}) \binom{X_{n+1},\ldots,X_{d}}{Y_{n+1},\ldots,Y_{d}}\right)\] \[+\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{n+1}}{Y_{n +1}}\cdot\left(\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1}, \ldots,X_{n}}{Y_{1},\ldots,Y_{n}}*\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W}) \binom{X_{n+2},\ldots,X_{d}}{Y_{n+2},\ldots,Y_{d}}\right)\] \[+\frac{\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1}}{ Y_{1}+Y_{n+1}}-\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{n+1}}{Y_{1}+Y_{n+1 }}}{X_{1}-X_{n+1}}\cdot\left(\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W}) \binom{X_{2},\ldots,X_{n}}{Y_{2},\ldots,Y_{n}}*\rho_{\mathcal{Y}^{\mathrm{bi}} }(\mathcal{W})\binom{X_{n+2},\ldots,X_{d}}{Y_{n+2},\ldots,Y_{d}}\right).\]
Proof.: First, consider the case \(d=2\) and compute directly
\[\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1}}{Y_{1} }*\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{2}}{Y_{2}}=\sum_{ \begin{subarray}{c}k_{1},k_{2}\geqslant 1\\ m_{1},m_{2}\geqslant 0\end{subarray}}(y_{k_{1},m_{1}}*y_{k_{2},m_{2}})X_{1}^{k_{1}-1} \frac{Y_{1}^{m_{1}}}{m_{1}!}X_{2}^{k_{2}-1}\frac{Y_{2}^{m_{2}}}{m_{2}!}\] \[=\sum_{\begin{subarray}{c}k_{1},k_{2}\geqslant 1\\ m_{1},m_{2}\geqslant 0\end{subarray}}\Big{(}y_{k_{1},m_{1}}y_{k_{2},m_{2}}+y_{k_{2},m _{2}}y_{k_{1},m_{1}}+y_{k_{1}+k_{2},m_{1}+m_{2}}\Big{)}X_{1}^{k_{1}-1}\frac{Y _{1}^{m_{1}}}{m_{1}!}X_{2}^{k_{2}-1}\frac{Y_{2}^{m_{2}}}{m_{2}!}\] \[=\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1},X_{2}}{ Y_{1},Y_{2}}+\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{2},X_{1}}{Y_{2},Y_{1}}+ \frac{\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1}}{Y_{1}+Y_{2}} -\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{2}}{Y_{1}+Y_{2}}}{X_{1}- X_{2}},\]
where the last step follows from simple power series manipulations. Since the definition of the stuffle product \(*\) as well as the above formula for the generating series \(\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\) is recursive, the arbitrary depth case follows similarly by induction.
Following Definition 3.7, the quasi-shuffle algebra \((\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle,*)\) gives rise to a symmetry among bimoulds, which was studied extensively in [11] and [15].
**Definition 4.2**.: A bimould \(M=(M_{d})_{d\geqslant 0}\in\prod_{d\geqslant 0}R[\![X_{1},Y_{1},\ldots,X_{d},Y_{d}]\!]\) with coefficients in some \(\mathbb{Q}\)-algebra \(R\) is _symmetril_ if there is an algebra morphism \(\varphi_{*}:(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle,*)\to R\), such that \(M\) is \((\varphi_{*},\rho_{\mathcal{Y}^{\mathrm{bi}}})\)-symmetric, i.e., we have \(M_{0}=1\) and for all \(d\geqslant 1\)
\[M_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\sum_{\begin{subarray}{c}k _{1},\ldots,k_{d}\geqslant 1\\ m_{1},\ldots,m_{d}\geqslant 0\end{subarray}}\varphi_{*}(y_{k_{1},m_{1}} \ldots y_{k_{d},m_{d}})X_{1}^{k_{1}-1}\frac{Y_{1}^{m_{1}}}{m_{1}!}\ldots X_{d }^{k_{d}-1}\frac{Y_{d}^{m_{d}}}{m_{d}!}.\]
We refer to \(\varphi_{*}\) as the _coefficient map_ of \(M\).
**Example 4.3**.: Applying the recursive formula for \(*\) on generating series of words given in Theorem 4.1, one obtains that Definition 4.2 is equivalent to the symmetrility defined in [15, p. 17ff]. For example, symmetrility in depths \(2\) and \(3\) means
\[M\binom{X_{1}}{Y_{1}}\cdot M\binom{X_{2}}{Y_{2}} = M\binom{X_{1},X_{2}}{Y_{1},Y_{2}}+M\binom{X_{2},X_{1}}{Y_{2},Y_ {1}}+\frac{M\binom{X_{1}}{Y_{1}+Y_{2}}-M\binom{X_{2}}{Y_{1}+Y_{2}}}{X_{1}-X_{2 }},\] \[M\binom{X_{1}}{Y_{1}}\cdot M\binom{X_{2},X_{3}}{Y_{2},Y_{3}} = M\binom{X_{1},X_{2},X_{3}}{Y_{1},Y_{2},Y_{3}}+M\binom{X_{2},X_{1 },X_{3}}{Y_{2},Y_{1},Y_{3}}+M\binom{X_{2},X_{3},X_{1}}{Y_{2},Y_{3},Y_{1}}\] \[+\frac{M\binom{X_{1},X_{3}}{Y_{1}+Y_{2},Y_{3}}-M\binom{X_{2},X_{3 }}{Y_{1}+Y_{2},Y_{3}}}{X_{1}-X_{2}}+\frac{M\binom{X_{2},X_{1}}{Y_{2},Y_{1}+Y_ {3}}-M\binom{X_{2},X_{3}}{Y_{2},Y_{1}+Y_{3}}}{X_{1}-X_{3}}.\]
The deconcatenation coproduct on \(\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\) (see (2.1)) is compatible with the translation map \(\rho_{\mathcal{Y}^{\mathrm{bi}}}\); a straightforward computation gives the following.
**Proposition 4.4**.: _For each \(d\geqslant 0\), we have_
\[\Delta_{\mathrm{dec}}\rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\sum_{i=0}^{d}\rho_{\mathcal{Y}^{\mathrm{ bi}}}(\mathcal{W})\binom{X_{1},\ldots,X_{i}}{Y_{1},\ldots,Y_{i}}\otimes \rho_{\mathcal{Y}^{\mathrm{bi}}}(\mathcal{W})\binom{X_{i+1},\ldots,X_{d}}{Y_{ i+1},\ldots,Y_{d}}.\]
## 5. Generating series of words in \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\)
Consider the alphabet \(\mathcal{B}=\{b_{0},b_{1},b_{2},\ldots\}\) and define the _weight_ and the _depth_ of a word in \(\mathbb{Q}\langle\mathcal{B}\rangle\) by
\[\operatorname{wt}(b_{s_{1}}\ldots b_{s_{l}})=s_{1}+\cdots+s_{l}+\#\{i\mid s_ {i}=0\},\qquad\operatorname{dep}(b_{s_{1}}\ldots b_{s_{l}})=l-\#\{i\mid s_{i}=0\},\]
for all \(s_{1},\ldots,s_{l}\geqslant 0\). Let \(\rho_{\mathcal{B}}\) be the \(\mathbb{Q}\)-linear map given by
\[\rho_{\mathcal{B}}:\mathbb{Q}\langle\mathcal{B}\rangle \to\mathbb{Q}[Y_{0},X_{1},Y_{1},X_{2},Y_{2},\ldots],\] \[b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}} \mapsto Y_{0}^{m_{0}}X_{1}^{k_{1}-1}Y_{1}^{m_{1}}\ldots X_{d}^{k_{d}-1}Y_{d}^{m _{d}}\quad(k_{1},\ldots,k_{d}\geqslant 1,m_{1},\ldots,m_{d}\geqslant 0),\]
then \(\rho_{\mathcal{B}}\) satisfies the conditions in Definition 3.1 with \(\ell(d)=2d+1\) for all \(d\geqslant 0\). The generating series of words associated to \(\rho_{\mathcal{B}}\) are given for \(d\geqslant 0\) by
\[\rho_{\mathcal{B}}(\mathcal{W})_{d}\binom{X_{1},\ldots,X_{d}}{Y_{0};Y_{1}, \ldots,Y_{d}}=\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d}\geqslant 1\\ m_{0},\ldots,m_{d}\geqslant 0\end{subarray}}b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{1}} \ldots b_{k_{d}}b_{0}^{m_{d}}Y_{0}^{m_{0}}X_{1}^{k_{1}-1}Y_{1}^{m_{1}}\ldots X _{d}^{k_{d}-1}Y_{d}^{m_{d}}.\]
Let \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) be the subspace of \(\mathbb{Q}\langle\mathcal{B}\rangle\) generated by all words not starting in \(b_{0}\), i.e., we have
\[\mathbb{Q}\langle\mathcal{B}\rangle^{0}=\mathbb{Q}\langle\mathcal{B}\rangle \backslash b_{0}\mathbb{Q}\langle\mathcal{B}\rangle.\]
Define the corresponding diagonal series
\[\mathcal{W}(\mathcal{B})^{0}=\sum_{w\in\mathcal{B}^{*}\cap\mathbb{Q}\langle \mathcal{B}\rangle^{0}}w\otimes w.\]
Let \(\rho^{0}_{\mathcal{B}}:\mathbb{Q}\langle\mathcal{B}\rangle^{0}\to\mathbb{Q}[X_{ 1},Y_{1},X_{2},Y_{2},\ldots]\) be the restriction of \(\rho_{\mathcal{B}}\) to \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\). Then the generating series of words in \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) associated to \(\rho^{0}_{\mathcal{B}}\) are given by \(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})_{0}=\mathbf{1}\) and
\[\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1}, \ldots,Y_{d}}=\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d}\geq 1\\ m_{1},\ldots,m_{d}\geq 0\end{subarray}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0 }^{m_{d}}X_{1}^{k_{1}-1}Y_{1}^{m_{1}}\ldots X_{d}^{k_{d}-1}Y_{d}^{m_{d}}, \quad d\geq 1.\]
On \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) the balanced quasi-shuffle product \(*_{b}\) is very well-behaved; thus we can also describe this product on generating series of words by a recursive formula.
**Theorem 5.1**.: _For the balanced quasi-shuffle product \(*_{b}\) (Example 2.2.3.), one obtains for all \(0<n<d\) that \(\mathbf{1}*_{b}\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})_{n}=\rho^{0}_{\mathcal{ B}}(\mathcal{W}^{0})_{n}*_{b}\ \ \mathbf{1}=\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})_{n}\) and_
\[\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_{n}}{Y_{1},\ldots,Y_{n}}*_{b}\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{n+1},\ldots,X_{d}}{Y_{n+1},\ldots,Y_{d}}\] \[=\left(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_{n-1}}{Y_{1},\ldots,Y_{n-1}}*_{b}\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{n+1},\ldots,X_{d}}{Y_{n+1},\ldots,Y_{d}}\right)\cdot\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{n}}{Y_{n}+Y_{d}}\] \[+\left(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_{n}}{Y_{1},\ldots,Y_{n}}*_{b}\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{n+1},\ldots,X_{d-1}}{Y_{n+1},\ldots,Y_{d-1}}\right)\cdot\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{d}}{Y_{n}+Y_{d}}\] \[+\left(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_{n-1}}{Y_{1},\ldots,Y_{n-1}}*_{b}\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{n+1},\ldots,X_{d-1}}{Y_{n+1},\ldots,Y_{d-1}}\right)\cdot\frac{\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{n}}{Y_{n}+Y_{d}}-\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{d}}{Y_{n}+Y_{d}}}{X_{n}-X_{d}}.\]
Proof.: Consider the case \(d=2\). We obtain
\[\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}} =\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)\cdot\frac{\mathbf{1}}{ \mathbf{1}-b_{0}Y_{1}} \tag{5.1}\] \[=\sum_{k\geq 1}b_{k}X_{1}^{k-1}+\rho^{0}_{\mathcal{B}}(\mathcal{W}^ {0})\binom{X_{1}}{Y_{1}}\cdot b_{0}Y_{1}.\]
By applying the second identity in (5.1) and the definition of the balanced quasi-shuffle product (we use the recursive definition from the right), we obtain
\[\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}*_{b} \rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\] \[=\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)*_{b}\left(\sum_{k \geq 1}b_{k}X_{2}^{k-1}\right)+\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)*_{b} \left(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\cdot b_{0}Y_{ 2}\right)\] \[+\left(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1} }\cdot b_{0}Y_{1}\right)*_{b}\left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right)+ \left(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}\cdot b_{0}Y_ {1}\right)*_{b}\left(\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2} }\cdot b_{0}Y_{2}\right)\]
\[=\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)*_{b}\left(\sum_{k \geq 1}b_{k}X_{2}^{k-1}\right)+\left(\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)*_{b} \left(\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}Y_{2}\right) \right)\cdot b_{0}\] \[+\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\cdot b _{0}Y_{2}\cdot\sum_{k\geq 1}b_{k}X_{1}^{k-1}+\left(\left(\rho_{\mathcal{B}}^{0}( \mathcal{W}^{0})\binom{X_{1}}{Y_{1}}Y_{1}\right)*_{b}\left(\sum_{k\geq 1}b_{k}X_{2} ^{k-1}\right)\right)\cdot b_{0}\] \[+\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}\cdot b _{0}Y_{1}\cdot\sum_{k\geq 1}b_{k}X_{2}^{k-1}+\left(\left(\rho_{\mathcal{B}}^{0}( \mathcal{W}^{0})\binom{X_{1}}{Y_{1}}Y_{1}\right)*_{b}\left(\rho_{\mathcal{B}} ^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\cdot b_{0}Y_{2}\right)\right)\cdot b _{0}\] \[+\left(\left(\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}} {Y_{1}}\cdot b_{0}Y_{1}\right)*_{b}\left(\rho_{\mathcal{B}}^{0}(\mathcal{W}^{ 0})\binom{X_{2}}{Y_{2}}Y_{2}\right)\right)\cdot b_{0}.\]
Applying again the second identity in (5.1) together with some cancellation, we deduce
\[\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}*_{b} \rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\] \[=\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)*_{b}\left(\sum_{k \geq 1}b_{k}X_{2}^{k-1}\right)-\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)\cdot \left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right)-\left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right)\cdot \left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)\] \[+\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\cdot\sum_{k\geq 1}b_{k}X_{1}^{k-1}+\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}\cdot\sum_{k\geq 1}b_{k}X_{2}^{k-1}+\left(\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}*_{b}\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\right)\cdot b_{0}(Y_{1}+Y_{2}).\]
Using the first identity in (5.1) together with the formula for the concatenation product in terms of generating series of words (Proposition 3.4), we obtain
\[\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}*_{b} \rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}} \tag{5.2}\] \[=\left(\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)*_{b}\left(\sum_ {k\geq 1}b_{k}X_{2}^{k-1}\right)-\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right) \cdot\left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right)-\left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right) \cdot\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)\right)\cdot\frac{\mathbf{1}}{ \mathbf{1}-b_{0}(Y_{1}+Y_{2})}\] \[\quad+\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2},X_{1}}{ Y_{2},Y_{1}+Y_{2}}+\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1},X_{2}}{Y_{1},Y_{1}+Y _{2}}.\]
Moreover, one verifies by applying the definition of \(*_{b}\) and some power series manipulation
\[\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)*_{b}\left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right)=\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)\cdot\left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right)+\left(\sum_{k\geq 1}b_{k}X_{2}^{k-1}\right)\cdot\left(\sum_{k\geq 1}b_{k}X_{1}^{k-1}\right)+\frac{\sum_{k\geq 1}b_{k}X_{1}^{k-1}-\sum_{k\geq 1}b_{k}X_{2}^{k-1}}{X_{1}-X_{2}}.\]
Applying this result to (5.2) together with (5.1) gives
\[\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}}*_{b} \rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{2}}\] \[=\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2},X_{1}}{Y_{2},Y _{1}+Y_{2}}+\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1},X_{2}}{Y_{1},Y _{1}+Y_{2}}+\frac{\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}}{Y_{1}+Y_{ 2}}-\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{2}}{Y_{1}+Y_{2}}}{X_{1}-X_ {2}}.\]
This equals exactly the claimed formula of \(*_{b}\) for the generating series in depth 2. Since the definition of the product \(*_{b}\) and the above generating series formula are both recursive, we obtain the desired formula in arbitrary depth by applying induction and the same computations as before.
The quasi-shuffle algebra \((\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b})\) gives rise to a symmetry among bimoulds according to Definition 3.7.
**Definition 5.2**.: A bimould \(M=(M_{d})_{d\geqslant 0}\in\prod_{d\geqslant 0}R[\![X_{1},Y_{1},\ldots,X_{d},Y_{d}]\!]\) with coefficients in some \(\mathbb{Q}\)-algebra \(R\) is _b-symmetril_ if there is an algebra morphism \(\varphi_{*_{b}}:(\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b})\to R\), such that \(M\) is \((\varphi_{*_{b}},\rho_{\mathcal{B}}^{0})\)-symmetric, i.e., we have \(M_{0}=1\) and for all \(d\geqslant 1\)
\[M_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\sum_{\begin{subarray}{c}k _{1},\ldots,k_{d}\geqslant 1\\ m_{1},\ldots,m_{d}\geqslant 0\end{subarray}}\varphi_{*_{b}}(b_{k_{1}}b_{0}^{m_{1}} \ldots b_{k_{d}}b_{0}^{m_{d}})X_{1}^{k_{1}-1}Y_{1}^{m_{1}}\ldots X_{d}^{k_{d} -1}Y_{d}^{m_{d}}.\]
As before, we call the map \(\varphi_{*_{b}}\) the _coefficient map_ of \(M\).
**Example 5.3**.: Applying the recursive formula for \(*_{b}\) on the generating series of words given in Theorem 5.1, we obtain that b-symmetrility in depths 2 and 3 means
\[M\binom{X_{1}}{Y_{1}}\cdot M\binom{X_{2}}{Y_{2}} =M\binom{X_{2},X_{1}}{Y_{2},Y_{1}+Y_{2}}+M\binom{X_{1},X_{2}}{Y_{ 1},Y_{1}+Y_{2}}+\frac{M\binom{X_{1}}{Y_{1}+Y_{2}}-M\binom{X_{2}}{Y_{1}+Y_{2}}} {X_{1}-X_{2}},\] \[M\binom{X_{1}}{Y_{1}}\cdot M\binom{X_{2},X_{3}}{Y_{2},Y_{3}} =M\binom{X_{2},X_{3},X_{1}}{Y_{2},Y_{3},Y_{1}+Y_{3}}+M\binom{X_{2},X_{1},X_{3}}{Y_{2},Y_{1}+Y_{2},Y_{1}+Y_{3}}\] \[\quad+M\binom{X_{1},X_{2},X_{3}}{Y_{1},Y_{1}+Y_{2},Y_{1}+Y_{3}}+ \frac{M\binom{X_{2},X_{1}}{Y_{2},Y_{1}+Y_{3}}-M\binom{X_{2},X_{3}}{Y_{2},Y_{1 }+Y_{3}}}{X_{1}-X_{3}}\] \[\quad+\frac{M\binom{X_{1},X_{3}}{Y_{1}+Y_{2},Y_{1}+Y_{3}}-M \binom{X_{2},X_{3}}{Y_{1}+Y_{2},Y_{1}+Y_{3}}}{X_{1}-X_{2}}.\]
## 6. Generating series of words in \(\mathbb{Q}\langle\mathcal{B}\rangle\) and regularization
The deconcatenation coproduct on \(\mathbb{Q}\langle\mathcal{B}\rangle\) (defined in (2.1)) is compatible with the translation map \(\rho_{\mathcal{B}}\), hence it can be described in terms of the generating series \(\rho_{\mathcal{B}}(\mathcal{W})\) of words in \(\mathbb{Q}\langle\mathcal{B}\rangle\). But the deconcatenation coproduct does not preserve the subspace \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\). Therefore, we will introduce a regularization map and a regularized coproduct and describe both in terms of the generating series \(\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\) of words in \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\).
**Proposition 6.1**.: _For all \(d\geqslant 0\), we have_
\[\Delta_{\rm dec}\rho_{\mathcal{B}}(\mathcal{W})\binom{X_{1},\ldots,X_{d}}{Y_{0 };Y_{1},\ldots,Y_{d}}=\sum_{i=0}^{d}\rho_{\mathcal{B}}(\mathcal{W})\binom{X_{1 },\ldots,X_{i}}{Y_{0};Y_{1},\ldots,Y_{i}}\otimes\rho_{\mathcal{B}}(\mathcal{W} )\binom{X_{i+1},\ldots,X_{d}}{Y_{i};Y_{i+1},\ldots,Y_{d}}.\]
Proof.: By definition of the deconcatenation coproduct, we have
\[\Delta_{\rm dec}\rho_{\mathcal{B}}(\mathcal{W})\binom{X_{1},\ldots,X_{d}}{Y_{0};Y_{1},\ldots,Y_{d}}\] \[=\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d}\geqslant 1\\ m_{0},\ldots,m_{d}\geqslant 0\end{subarray}}\Delta_{\rm dec}(b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{ 1}}\ldots b_{k_{d}}b_{0}^{m_{d}})Y_{0}^{m_{0}}X_{1}^{k_{1}-1}Y_{1}^{m_{1}} \ldots X_{d}^{k_{d}-1}Y_{d}^{m_{d}}\] \[=\sum_{i=0}^{d}\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d} \geqslant 1\\ m_{0},\ldots,m_{d}\geqslant 0\end{subarray}}\sum_{\begin{subarray}{c}n_{1}+n_{2}=m_{i} \\ n_{1},n_{2}\geqslant 0\end{subarray}}(b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{i} }b_{0}^{n_{1}}\otimes b_{0}^{n_{2}}b_{k_{i+1}}b_{0}^{m_{i+1}}\ldots b_{k_{d} }b_{0}^{m_{d}})Y_{0}^{m_{0}}X_{1}^{k_{1}-1}Y_{1}^{m_{1}}\ldots X_{d}^{k_{d}-1}Y _{d}^{m_{d}}\]
Reordering these sums, we obtain that this expression is equal to
\[\sum_{i=0}^{d}\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d}\geqslant 1 \\ m_{0},\ldots,m_{i-1},n_{1},n_{2},m_{i+1},\ldots,m_{d}\geqslant 0\end{subarray}}(b_{0}^{m_ {0}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{i}}b_{0}^{n_{1}}\otimes b_{0}^{n_{2}}b_{k _{i+1}}b_{0}^{m_{i+1}}\ldots b_{k_{d}}b_{0}^{m_{d}})Y_{0}^{m_{0}}X_{1}^{k_{1}-1 }Y_{1}^{m_{1}}\\ \ldots X_{i}^{k_{i}-1}Y_{i}^{n_{1}+n_{2}}X_{i+1}^{k_{i+1}-1}Y_{i+1 }^{m_{i+1}}\ldots X_{d}^{k_{d}-1}Y_{d}^{m_{d}}\\ =\sum_{i=0}^{d}\rho_{\mathcal{B}}(\mathcal{W})\binom{X_{1},\ldots, X_{i}}{Y_{0};Y_{1},\ldots,Y_{i}}\otimes\rho_{\mathcal{B}}(\mathcal{W})\binom{X_{i+1 },\ldots,X_{d}}{Y_{i};Y_{i+1},\ldots,Y_{d}}\]
Next, we introduce a regularization, which assigns to each word in \(\mathbb{Q}\langle\mathcal{B}\rangle\) an element in \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\), such that the balanced quasi-shuffle product \(*_{b}\) is preserved. The regularization process is similar to the ones for multiple zeta values given in [13, Proposition 1]. We will rephrase this regularization map in terms of generating series of words.
**Proposition 6.2**.: _Let \(T\) be a formal variable and extend the product \(*_{b}\) by \(\mathbb{Q}[T]\)-linearity to \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}[T]\). The map_
\[\operatorname{reg}_{T}:\ \mathbb{Q}\langle\mathcal{B}\rangle^{0}[T] \to\mathbb{Q}\langle\mathcal{B}\rangle,\] \[wT^{n} \mapsto w*_{b}b_{0}^{*_{b}n}\]
_is an algebra isomorphism for the balanced quasi-shuffle product \(*_{b}\)._
Proof.: For the surjectivity of \(\operatorname{reg}_{T}\), we show that any word \(w\in\mathbb{Q}\langle\mathcal{B}\rangle\) is a polynomial in \(b_{0}\) with coefficients in \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\). Let \(w=b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\) for some integers \(k_{1},\ldots,k_{d}\geqslant 1\) and \(m_{0},\ldots,m_{d}\geqslant 0\). We prove by induction on \(m_{0}\) that \(w=u+v*_{b}b_{0}\) for some \(u\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) and \(v\in\mathbb{Q}\langle\mathcal{B}\rangle\), where all words in \(v\) have weight smaller than \(\operatorname{wt}(w)\). Then induction on the weight proves the claim. The case \(m_{0}=0\) is trivial: simply choose \(u=w,\ v=0\). Next, calculate
\[b_{0}^{m_{0}-1}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d }}*_{b}b_{0}= \ m_{0}\ b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\] \[+\sum_{i=1}^{d}(m_{i}+1)\ b_{0}^{m_{0}-1}b_{k_{1}}b_{0}^{m_{1}} \ldots b_{k_{i}}b_{0}^{m_{i}+1}b_{k_{i+1}}b_{0}^{m_{i+1}}\ldots b_{k_{d}}b_{0} ^{m_{d}}.\]
Applying the induction hypotheses to every word in the second line leads to
\[w=\frac{1}{m_{0}}\left(u+\left(v+b_{0}^{m_{0}-1}b_{k_{1}}b_{0}^{m_{1}}\ldots b _{k_{d}}b_{0}^{m_{d}}\right)*_{b}b_{0}\right)\]
for some \(u\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) and \(v\in\mathbb{Q}\langle\mathcal{B}\rangle\), where \(v+b_{0}^{m_{0}-1}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\) consists of words of weight smaller than \(\operatorname{wt}(w)\).
Let \(P\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}[T]\backslash\{0\}\) and write \(P=wT^{n}+R\), where \(w\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}\backslash\{0\}\) and \(R\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}[T]\) is a polynomial of degree smaller than \(n\). We have
\[\operatorname{reg}_{T}(P)=w*_{b}b_{0}^{*_{b}n}+\operatorname{reg}_{T}(R)=n!b_{ 0}^{n}w+\widetilde{w},\]
where \(\widetilde{w}\in\mathbb{Q}\langle\mathcal{B}\rangle\) consists of words with at most \((n-1)\)-times the letter \(b_{0}\) at the beginning. We deduce \(\operatorname{reg}_{T}(P)\neq 0\), thus \(\operatorname{reg}_{T}\) is injective.
In particular, first applying the inverse \(\operatorname{reg}_{T}^{-1}:\mathbb{Q}\langle\mathcal{B}\rangle\to\mathbb{Q} \langle\mathcal{B}\rangle^{0}[T]\) and then evaluating at \(T=0\) yields the desired _regularization map_
\[\operatorname{reg}:(\mathbb{Q}\langle\mathcal{B}\rangle,*_{b})\to(\mathbb{Q} \langle\mathcal{B}\rangle^{0},*_{b}).\]
By construction, it is an algebra morphism for the balanced quasi-shuffle product \(*_{b}\).
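On words, \(\operatorname{reg}\) can be computed by the closed formula recalled in the proof of Theorem 6.3 below. The following Python sketch (illustrative names; we encode a word \(b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\) with \(d\geqslant 1\) by the exponent `m0` and the tuple of blocks \((k_{i},m_{i})\)) implements this formula.

```python
from math import comb

def compositions(m, d):
    """Yield all tuples (n_1, ..., n_d) of non-negative integers with n_1 + ... + n_d = m."""
    if d == 1:
        yield (m,)
        return
    for first in range(m + 1):
        for rest in compositions(m - first, d - 1):
            yield (first,) + rest

def reg(m0, blocks):
    """reg(b_0^{m0} b_{k1} b_0^{m1} ... b_{kd} b_0^{md}) as a dict mapping words in
    Q<B>^0 (tuples of blocks (k_i, m_i)) to integer coefficients; assumes len(blocks) >= 1."""
    result = {}
    for ns in compositions(m0, len(blocks)):
        coeff = (-1) ** m0
        word = []
        for (k, m), n in zip(blocks, ns):
            coeff *= comb(m + n, n)
            word.append((k, m + n))
        word = tuple(word)
        result[word] = result.get(word, 0) + coeff
    return result
```

For instance, `reg(1, [(2, 0)])` returns `{((2, 1),): -1}`, i.e., \(\operatorname{reg}(b_{0}b_{2})=-b_{2}b_{0}\), in accordance with \(b_{0}b_{2}=b_{2}*_{b}b_{0}-b_{2}b_{0}\).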
**Theorem 6.3**.: _For all \(d\geq 1\), we have_
\[\operatorname{reg}\rho_{\mathcal{B}}(\mathcal{W})\binom{X_{1},\ldots,X_{d}}{Y _{0};Y_{1},\ldots,Y_{d}}=\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1}, \ldots,X_{d}}{Y_{1}-Y_{0},\ldots,Y_{d}-Y_{0}}.\]
Proof.: We obtain from [13, equation (5.2)] applied to our setup that for each word \(w=b_{0}^{m_{0}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\) we have
\[\operatorname{reg}(w) =(-1)^{m_{0}}b_{k_{1}}(b_{0}^{m_{0}}*_{b}b_{0}^{m_{1}}b_{k_{2}}b_ {0}^{m_{2}}\ldots b_{k_{d}}b_{0}^{m_{d}})\] \[=(-1)^{m_{0}}\sum_{\begin{subarray}{c}n_{1}+\cdots+n_{d}=m_{0} \\ n_{1},\ldots,n_{d}\geq 0\end{subarray}}\binom{m_{1}+n_{1}}{n_{1}}\ldots \binom{m_{d}+n_{d}}{n_{d}}b_{k_{1}}b_{0}^{m_{1}+n_{1}}b_{k_{2}}b_{0}^{m_{2}+n _{2}}\ldots b_{k_{d}}b_{0}^{m_{d}+n_{d}}.\]
Hence we deduce
\[\begin{split}\operatorname{reg}\rho_{\mathcal{B}}(\mathcal{W})&\binom{X_{1},\ldots,X_{d}}{Y_{0};Y_{1},\ldots,Y_{d}}\\ &=\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d}\geq 1\\ m_{0},\ldots,m_{d}\geq 0\end{subarray}}(-1)^{m_{0}}\sum_{\begin{subarray}{c}n_{1}+\cdots+n_{d}=m_{0}\\ n_{1},\ldots,n_{d}\geq 0\end{subarray}}\binom{m_{1}+n_{1}}{n_{1}}\cdots\binom{m_{d}+n_{d}}{n_{d}}b_{k_{1}}b_{0}^{m_{1}+n_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}+n_{d}}\,Y_{0}^{m_{0}}X_{1}^{k_{1}-1}Y_{1}^{m_{1}}\ldots X_{d}^{k_{d}-1}Y_{d}^{m_{d}}\\ &=\sum_{\begin{subarray}{c}k_{1},\ldots,k_{d}\geq 1\\ m_{1},\ldots,m_{d}\geq 0\end{subarray}}b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\,X_{1}^{k_{1}-1}(Y_{1}-Y_{0})^{m_{1}}\ldots X_{d}^{k_{d}-1}(Y_{d}-Y_{0})^{m_{d}}=\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_{d}}{Y_{1}-Y_{0},\ldots,Y_{d}-Y_{0}}.\end{split}\tag{6.1}\]
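**Definition 6.4**.: Define the _regularized deconcatenation coproduct_ \(\Delta^{0}_{\rm dec}:\mathbb{Q}\langle\mathcal{B}\rangle^{0}\to\mathbb{Q}\langle\mathcal{B}\rangle^{0}\otimes\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) by

\[\Delta^{0}_{\rm dec}=(\operatorname{id}\otimes\operatorname{reg})\circ\Delta_{\rm dec}\big{|}_{\mathbb{Q}\langle\mathcal{B}\rangle^{0}}.\]

For a word \(w\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) only the right tensor factors of \(\Delta_{\rm dec}(w)\) may start in \(b_{0}\), so applying \(\operatorname{reg}\) to them indeed produces an element of \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\otimes\mathbb{Q}\langle\mathcal{B}\rangle^{0}\). Combining Proposition 6.1 (with \(Y_{0}=0\)) and Theorem 6.3 yields the following description on generating series of words.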
**Proposition 6.5**.: _For all \(d\geq 1\), we have_
\[\Delta^{0}_{\rm dec}\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_ {d}}{Y_{1},\ldots,Y_{d}}=\sum_{i=0}^{d}\rho^{0}_{\mathcal{B}}(\mathcal{W}^{0}) \binom{X_{1},\ldots,X_{i}}{Y_{1},\ldots,Y_{i}}\otimes\rho^{0}_{\mathcal{B}}( \mathcal{W}^{0})\binom{X_{i+1},\ldots,X_{d}}{Y_{i+1}-Y_{i},\ldots,Y_{d}-Y_{i}}.\]
## 7. Comparison of the generating series of words in \(\mathbb{Q}\langle\mathcal{Y}^{\rm bi}\rangle\) and \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\)
In this section, we will relate the product and coproduct formulas given for the generating series of words in \(\mathbb{Q}\langle\mathcal{Y}^{\rm bi}\rangle\) and \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) (Sections 4, 5, and 6).
**Definition 7.1**.: For any bimould \(M=(M_{d})_{d\geq 0}\) define the bimould \(M^{\#_{Y}}=(M_{d}^{\#_{Y}})_{d\geq 0}\) by
\[M_{d}^{\#_{Y}}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=M_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},Y_{1}+Y_{2},\ldots,Y_{1}+\cdots+Y_{d}},\qquad d\geq 0.\]
Evidently, the inverse operation is given by
\[M_{d}^{\#_{Y}^{-1}}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=M_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},Y_{2}-Y_{1},\ldots,Y_{d}-Y_{d-1}},\qquad d\geq 0.\]
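Both \(\#_{Y}\) and its inverse are plain simultaneous substitutions in the lower variables; the following sympy sketch (illustrative, written out in depth \(3\)) performs them and checks the round trip.

```python
import sympy as sp

Ys = sp.symbols('Y1:4')  # (Y1, Y2, Y3): depth 3 for concreteness

def sharp_Y(M):
    """Y_i -> Y_1 + ... + Y_i, simultaneously (Definition 7.1)."""
    return M.subs({Ys[i]: sum(Ys[:i + 1]) for i in range(len(Ys))}, simultaneous=True)

def sharp_Y_inv(M):
    """Y_i -> Y_i - Y_{i-1}, simultaneously (with Y_0 := 0)."""
    return M.subs({Ys[i]: Ys[i] - (Ys[i - 1] if i > 0 else 0) for i in range(len(Ys))},
                  simultaneous=True)

M = sp.Function('M')(*Ys)            # a generic depth-3 component in the lower variables
assert sharp_Y_inv(sharp_Y(M)) == M  # the two substitutions are mutually inverse
```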
**Definition 7.2**.: Let \(\varphi_{\#}:\mathbb{Q}\langle\mathcal{Y}^{\rm bi}\rangle\to\mathbb{Q}\langle \mathcal{B}\rangle^{0}\) be the \(\mathbb{Q}\)-linear map implicitly defined by
\[(\varphi_{\#}\otimes\rho_{\mathcal{Y}^{\rm bi}})(\mathcal{W})\binom{X_{1}, \ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\rho_{\mathcal{B}}(\mathcal{W})_{d}^{\#_{Y} }\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}},\]
i.e., the coefficient of \(X_{1}^{k_{1}-1}\frac{Y_{1}^{m_{1}}}{m_{1}!}\ldots X_{d}^{k_{d}-1}\frac{Y_{d}^{ m_{d}}}{m_{d}!}\) in \(\rho_{\mathcal{B}}(\mathcal{W})_{d}^{\#_{Y}}\) equals the image of the word \(y_{k_{1},m_{1}}\ldots y_{k_{d},m_{d}}\) under the map \(\varphi_{\#}\).
**Example 7.3**.: We obtain for all \(k_{1},k_{2}\geq 1,\ m_{1},m_{2}\geq 0\)
\[\varphi_{\#}(y_{k_{1},m_{1}}) =m_{1}!b_{k_{1}}b_{0}^{m_{1}},\] \[\varphi_{\#}(y_{k_{1},m_{1}}y_{k_{2},m_{2}}) =\sum_{n=m_{2}}^{m_{1}+m_{2}}\frac{m_{1}!n!}{(n-m_{2})!}b_{k_{1}}b _{0}^{m_{1}+m_{2}-n}b_{k_{2}}b_{0}^{n}.\]
**Theorem 7.4**.: _The map \(\varphi_{\#}:(\mathbb{Q}\langle\mathcal{Y}^{\rm bi}\rangle,*)\to(\mathbb{Q} \langle\mathcal{B}\rangle^{0},*_{b})\) is an isomorphism of weight-graded algebras._
Proof.: Evidently, \(\varphi_{\#}\) is an isomorphism of weight-graded \(\mathbb{Q}\)-vector spaces. We will now show the compatibility of the products \(*\) and \(*_{b}\) on the level of generating series; this means we will prove that \(\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}}\) satisfies the stuffle product formula given in Theorem 4.1. For all \(d,d^{\prime}\geq 1\) denote by \(\shuffle(d,d^{\prime})\) the set of all permutations \(\sigma\in S_{d+d^{\prime}}\) satisfying \(\sigma(1)<\cdots<\sigma(d)\) and \(\sigma(d+1)<\cdots<\sigma(d+d^{\prime})\). Moreover, write \(\underline{Y_{i}}=Y_{1}+\cdots+Y_{i}\) for \(1\leq i\leq d\) and \(\underline{Y_{d+j}}=Y_{d+1}+\cdots+Y_{d+j}\) for \(1\leq j\leq d^{\prime}\). Then by Theorem 5.1, we have modulo terms of lower depths
\[\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}}\binom{X_{1},\ldots,X_{d} }{Y_{1},\ldots,Y_{d}}\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}}\binom{X_{d+1}, \ldots,X_{d+d^{\prime}}}{Y_{d+1},\ldots,Y_{d+d^{\prime}}} \tag{7.1}\] \[\equiv\sum_{\sigma\in\shuffle(d,d^{\prime})}\rho_{\mathcal{B}}( \mathcal{W})\binom{X_{\sigma^{-1}(1)},\ldots,X_{\sigma^{-1}(d+d^{\prime})}}{\underline{Y_{ \sigma^{-1}(1)}}+\underline{Y_{\sigma^{-1}_{\mu}(1)}},\ldots,\underline{Y_{\sigma^{-1}(d+d^{\prime})}}+ \underline{Y_{\sigma^{-1}_{\mu}(d+d^{\prime})}}}.\]
Here we set
\[\sigma_{\mu}^{-1}(k)=\begin{cases}\sigma^{-1}\big{(}\max\{n\mid\sigma^{-1}(n)>d,n<k \}\big{)},&1\leq\sigma^{-1}(k)\leq d,\\ \sigma^{-1}\big{(}\max\{n\mid\sigma^{-1}(n)\leq d,\ n<k\}\big{)},&d+1\leq\sigma ^{-1}(k)\leq d+d^{\prime},\end{cases}\]
and \(\underline{Y_{\sigma_{\mu}^{-1}(k)}}=0\) if such a number \(n\) does not exist. Observe that for every term in the right-hand side of (7.1) and for every \(k=1,\ldots,d+d^{\prime}\) the predecessor of the entry \(\underline{Y_{\sigma^{-1}(k)}}+\underline{Y_{\sigma_{\mu}^{-1}(k)}}\) in the bi-index is given by \(\underline{Y_{\sigma^{-1}(k)-1}}+\underline{Y_{\sigma_{\mu}^{-1}(k)}}\) (where \(\underline{Y_{\sigma^{-1}(k)-1}}:=0\) if \(\sigma^{-1}(k)\in\{1,d+1\}\)), and we have \(\underline{Y_{n}}-\underline{Y_{n-1}}=Y_{n}\). Thus, we deduce
\[\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}^{-1}}\begin{pmatrix}X_{\sigma^{-1}(1) },\ldots,X_{\sigma^{-1}(d+d^{\prime})}\\ \underline{Y_{\sigma^{-1}(1)}}+\underline{Y_{\sigma_{\mu}^{-1}(1)}},\ldots, \underline{Y_{\sigma^{-1}(d+d^{\prime})}}+\underline{Y_{\sigma_{\mu}^{-1}(d+ d^{\prime})}}\end{pmatrix}=\rho_{\mathcal{B}}(\mathcal{W})\begin{pmatrix}X_{\sigma^{-1}(1)}, \ldots,X_{\sigma^{-1}(d+d^{\prime})}\\ Y_{\sigma^{-1}(1)},\ldots,Y_{\sigma^{-1}(d+d^{\prime})}\end{pmatrix}. \tag{7.2}\]
Combining the equations (7.1) and (7.2), we get modulo terms of lower depths
\[\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}}\begin{pmatrix}X_{1}, \ldots,X_{d}\\ Y_{1},\ldots,Y_{d}\end{pmatrix}\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}} \begin{pmatrix}X_{d+1},\ldots,X_{d+d^{\prime}}\\ Y_{d+1},\ldots,Y_{d+d^{\prime}}\end{pmatrix}\\ =\sum_{\sigma\in\shuffle(d,d^{\prime})}\rho_{\mathcal{B}}(\mathcal{W })^{\#_{Y}}\begin{pmatrix}X_{\sigma^{-1}(1)},\ldots,X_{\sigma^{-1}(d+d^{\prime })}\\ Y_{\sigma^{-1}(1)},\ldots,Y_{\sigma^{-1}(d+d^{\prime})}\end{pmatrix}.\]
Moreover, for every entry \(\begin{pmatrix}X_{j}\\ \underline{Y_{j}}+\underline{Y_{j^{\prime}}}\end{pmatrix}\) in the lower depth terms coming from the third line in the recursive expression of \(*_{b}\) given in Theorem 5.1, the predecessor in the lower row is given by \(\underline{Y_{j-1}}+\underline{Y_{j^{\prime}-1}}\) (with \(\underline{Y_{j-1}}=0\) for \(j=1,d+1\)). Therefore, the terms of lower depth equal the ones in the stuffle product formula on generating series of words given in Theorem 4.1. Altogether, this means the product
\[\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}}\begin{pmatrix}X_{1},\ldots,X_{d}\\ Y_{1},\ldots,Y_{d}\end{pmatrix}\rho_{\mathcal{B}}(\mathcal{W})^{\#_{Y}} \begin{pmatrix}X_{d+1},\ldots,X_{d+d^{\prime}}\\ Y_{d+1},\ldots,Y_{d+d^{\prime}}\end{pmatrix}\]
has an expression via the same recursive formula as for the stuffle product obtained in Theorem 4.1. Thus, \(\varphi_{\#}\) is an algebra morphism.
**Corollary 7.5**.: _A bimould \(M=(M_{d})_{d\geq 0}\) is b-symmetril if and only if the bimould \(M^{\#_{Y}}\) is symmetril._
Proof.: This is a direct consequence of Theorem 7.4 and the definitions of symmetrility and b-symmetrility (compare Definitions 4.2 and 5.2).
The algebra morphism \(\varphi_{\#}:\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\to\mathbb{Q} \langle\mathcal{B}\rangle^{0}\) given in Theorem 7.4 is compatible with the coproducts \(\Delta_{\mathrm{dec}}\) and \(\Delta_{\mathrm{dec}}^{0}\).
**Proposition 7.6**.: _For all \(d\geq 1\), we have_
\[(\varphi_{\#}\otimes\varphi_{\#})\circ\Delta_{\mathrm{dec}}=\Delta_{\mathrm{ dec}}^{0}\circ\varphi_{\#}.\]
Proof.: We will prove this on the level of generating series of words by using the formulas for \(\Delta_{\mathrm{dec}}\) and \(\Delta_{\mathrm{dec}}^{0}\) given in Proposition 4.4 and 6.5. This allows us to straight-forwardly compute
\[(\#_{Y}\otimes\#_{Y})\circ\Delta_{\text{dec}}\rho_{\mathcal{Y}^{\text{bi}}}( \mathcal{W})\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}\] \[=\sum_{i=0}^{d}\rho_{\mathcal{Y}^{\text{bi}}}(\mathcal{W})^{\#_{Y}} \binom{X_{1},\ldots,X_{i}}{Y_{1},\ldots,Y_{i}}\otimes\rho_{\mathcal{Y}^{\text{ bi}}}(\mathcal{W})^{\#_{Y}}\binom{X_{i+1},\ldots,X_{d}}{Y_{i+1},\ldots,Y_{d}}\] \[=\sum_{i=0}^{d}\rho_{\mathcal{Y}^{\text{bi}}}(\mathcal{W})\binom{X _{1},\ldots,X_{i}}{Y_{1},Y_{1}+Y_{2},\ldots,Y_{1}+\cdots+Y_{i}}\otimes\rho_{ \mathcal{Y}^{\text{bi}}}(\mathcal{W})\binom{X_{i+1},\ldots,X_{d}}{Y_{i+1},Y_{i+1}+Y_{ i+2},\ldots,Y_{i+1}+\cdots+Y_{d}}.\]
On the other hand, we have
\[\Delta_{\text{dec}}^{0}\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})^{ \#_{Y}}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\Delta_{\text{dec}}^{0} \rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_{d}}{Y_{1},Y_{1}+ Y_{2},\ldots,Y_{1}+\cdots+Y_{d}}\\ =\sum_{i=0}^{d}\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1 },\ldots,X_{i}}{Y_{1},Y_{1}+Y_{2},\ldots,Y_{1}+\cdots+Y_{i}}\otimes\rho_{ \mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{i+1},\ldots,X_{d}}{Y_{i+1},Y_{i+1}+ Y_{i+2},\ldots,Y_{i+1}+\cdots+Y_{d}}.\]
By definition of the map \(\varphi_{\#}\) (Definition 7.2) we get the claimed formula.
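Throughout these proofs, \(\#_{Y}\) acts on generating series simply by the substitution \(Y_{i}\mapsto Y_{1}+\cdots+Y_{i}\), with inverse \(Y_{1}\mapsto Y_{1}\), \(Y_{i}\mapsto Y_{i}-Y_{i-1}\). As a small aside, this substitution and its inverse can be sketched in sympy (a minimal sketch; the function names are ours, not notation from the text):

```python
import sympy as sp

d = 3
X = sp.symbols(f"X1:{d+1}")
Y = sp.symbols(f"Y1:{d+1}")

def sharp_Y(expr):
    """The substitution Y_i -> Y_1 + ... + Y_i used in the computations above."""
    subs = {Y[i]: sum(Y[: i + 1]) for i in range(d)}
    return sp.expand(expr.subs(subs, simultaneous=True))

def sharp_Y_inv(expr):
    """Inverse substitution Y_1 -> Y_1, Y_i -> Y_i - Y_{i-1} for i >= 2."""
    subs = {Y[i]: Y[i] - Y[i - 1] for i in range(1, d)}
    subs[Y[0]] = Y[0]
    return sp.expand(expr.subs(subs, simultaneous=True))

f = X[0] * Y[1] ** 2 + Y[2]  # an arbitrary test polynomial
assert sp.expand(sharp_Y_inv(sharp_Y(f)) - f) == 0
```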
There are two involutions defined on \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\) and \(\mathbb{Q}\langle\mathcal{Y}^{\text{bi}}\rangle\), which play an important role in the theory of multiple q-zeta values and which are compatible with the morphism \(\varphi_{\#}\).
**Definition 7.7**.: Let \(\tau:\mathbb{Q}\langle\mathcal{B}\rangle^{0}\to\mathbb{Q}\langle\mathcal{B} \rangle^{0}\) be the involution given by \(\tau(\mathbf{1})=\mathbf{1}\) and
\[\tau(b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}})=b_{m_{d}+1}b_{0}^{k_ {d}-1}\ldots b_{m_{1}+1}b_{0}^{k_{1}-1}\]
for all \(k_{1},\ldots,k_{d}\geq 1,\ m_{1},\ldots,m_{d}\geq 0\).
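Definition 7.7 is completely explicit on words. Here is a minimal Python sketch, assuming a word \(b_{k_{1}}b_{0}^{m_{1}}\ldots b_{k_{d}}b_{0}^{m_{d}}\) is encoded as the list of pairs \((k_{j},m_{j})\) (our encoding, not notation from the text):

```python
from typing import List, Tuple

def tau(word: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Involution tau of Definition 7.7: reverse the word and swap the
    roles of the exponents via (k, m) -> (m + 1, k - 1)."""
    return [(m + 1, k - 1) for (k, m) in reversed(word)]

# Example: tau(b_2 b_0 b_3) = tau([(2, 1), (3, 0)]) = [(1, 2), (2, 1)],
# i.e. b_1 b_0^2 b_2 b_0; applying tau twice returns the original word.
assert tau([(2, 1), (3, 0)]) == [(1, 2), (2, 1)]
assert tau(tau([(2, 1), (3, 0)])) == [(2, 1), (3, 0)]
```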
Extending the involution \(\tau\) by \(\mathbb{Q}[X_{1},Y_{1},X_{2},Y_{2},\ldots]\)-linearity to \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}[\![X_{1},Y_{1},X_{2},Y_{2},\ldots]\!]\), we obtain directly for each \(d\geq 0\) that
\[\tau\Bigg{(}\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1},\ldots,X_{d}} {Y_{1},\ldots,Y_{d}}\Bigg{)}=\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{Y_ {d},\ldots,Y_{1}}{X_{d},\ldots,X_{1}}.\]
This motivates the following definition for bimoulds.
**Definition 7.8**.: For a bimould \(M=(M_{d})_{d\geq 0}\), define the bimould \(\tau(M)=(\tau(M)_{d})_{d\geq 0}\) by
\[\tau(M)_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=M_{d}\binom{Y_{d}, \ldots,Y_{1}}{X_{d},\ldots,X_{1}}.\]
We call a bimould \(M\)\(\tau\)_-invariant_ if \(\tau(M)=M\).
The second involution defined on the algebra \(\mathbb{Q}\langle\mathcal{Y}^{\text{bi}}\rangle\) is originally given in terms of bimoulds, since its expression is much easier in this case.
**Definition 7.9**.: For a bimould \(M=(M_{d})_{d\geq 0}\), define the bimould \(\operatorname{swap}(M)=(\operatorname{swap}(M)_{d})_{d\geq 0}\) as
\[\operatorname{swap}(M)_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=M_{d} \binom{Y_{1}+\cdots+Y_{d},Y_{1}+\cdots+Y_{d-1},\ldots,Y_{1}}{X_{d},X_{d-1}-X_{d },\ldots,X_{1}-X_{2}}.\]
A bimould \(M\) is called _swap invariant_ if \(\operatorname{swap}(M)=M\).
Define the involution \(\operatorname{swap}:\mathbb{Q}\langle\mathcal{Y}^{\operatorname{bi}}\rangle \to\mathbb{Q}\langle\mathcal{Y}^{\operatorname{bi}}\rangle\) implicitly by
\[(\operatorname{swap}\otimes\!\rho_{\mathcal{Y}^{\operatorname{bi}}})( \mathcal{W})_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\operatorname {swap}(\rho_{\mathcal{Y}^{\operatorname{bi}}}(\mathcal{W}))_{d}\binom{X_{1}, \ldots,X_{d}}{Y_{1},\ldots,Y_{d}},\]
i.e., the coefficient of \(X_{1}^{k_{1}-1}\frac{Y_{1}^{m_{1}}}{m_{1}!}\ldots X_{d}^{k_{d}-1}\frac{Y_{d}^{ m_{d}}}{m_{d}!}\) in \(\operatorname{swap}(\rho_{\mathcal{Y}^{\operatorname{bi}}}(\mathcal{W}))_{d}\) is the image of the word \(y_{k_{1},m_{1}}\ldots y_{k_{d},m_{d}}\) under the map swap.
For example, we obtain
\[\operatorname{swap}(y_{k_{1},m_{1}}) =\frac{m_{1}!}{(k_{1}-1)!}y_{m_{1}+1,k_{1}-1},\] \[\operatorname{swap}(y_{k_{1},m_{1}}y_{k_{2},m_{2}}) =\sum_{u=0}^{m_{1}}\sum_{v=0}^{k_{2}-1}\frac{(-1)^{v}}{u!v!} \frac{m_{1}!}{(k_{1}-1)!}\frac{(m_{2}+u)!}{(k_{2}-1-v)!}y_{m_{2}+1+u,k_{2}-1-v }y_{m_{1}+1-u,k_{1}-1+v}.\]
In higher depths, it is hard to give an explicit formula for the involution swap on the algebra \(\mathbb{Q}\langle\mathcal{Y}^{\operatorname{bi}}\rangle\), see for example [1, Remark 3.14].
**Theorem 7.10**.: _The map_
\[\varphi_{\#}:(\mathbb{Q}\langle\mathcal{Y}^{\operatorname{bi}}\rangle,*, \Delta_{\operatorname{dec}})\to(\mathbb{Q}\langle\mathcal{B}\rangle^{0},*_{b },\Delta_{\operatorname{dec}}^{0})\]
_is an isomorphism of weight-graded Hopf algebras satisfying \(\varphi_{\#}\circ\operatorname{swap}=\tau\circ\varphi_{\#}\)._
Proof.: The map \(\varphi_{\#}\) is an isomorphism of weight-graded Hopf algebras by Theorem 7.4 and Proposition 7.6. Moreover, one verifies directly on the level of generating series of words
\[\#_{Y}\circ\operatorname{swap}\rho_{\mathcal{Y}^{\operatorname{bi} }}(\mathcal{W})\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}} =\rho_{\mathcal{Y}^{\operatorname{bi}}}(\mathcal{W})^{\#_{Y}} \binom{Y_{1}+\cdots+Y_{d},Y_{1}+\cdots+Y_{d-1},\ldots,Y_{1}}{X_{d},X_{d-1}-X_{ d},\ldots,X_{1}-X_{2}}\] \[=\rho_{\mathcal{Y}^{\operatorname{bi}}}(\mathcal{W})\binom{Y_{1} +\cdots+Y_{d},Y_{1}+\cdots+Y_{d-1},\ldots,Y_{1}}{X_{d},X_{d-1},\ldots,X_{1}},\]
and
\[\tau\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})^{\#_{Y}}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}} =\tau\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{X_{1},\ldots,X _{d}}{Y_{1},Y_{1}+Y_{2},\ldots,Y_{1}+\cdots+Y_{d}}\] \[=\rho_{\mathcal{B}}^{0}(\mathcal{W}^{0})\binom{Y_{1}+\cdots+Y_{d},Y_{1}+\cdots+Y_{d-1},\ldots,Y_{1}}{X_{d},X_{d-1},\ldots,X_{1}}.\]
By definition of the map \(\varphi_{\#}\) (Definition 7.2), we deduce that \(\varphi_{\#}\circ\operatorname{swap}=\tau\circ\varphi_{\#}\).
An immediate consequence of Theorem 7.10 is the following.
**Corollary 7.11**.: _A bimould \(M=(M_{d})_{d\geqslant 0}\) is \(\tau\)-invariant if and only if the bimould \(M^{\#_{Y}}\) is swap invariant._
## 8. The algebra of multiple q-zeta values
We introduce the algebra of multiple q-zeta values \(\mathcal{Z}_{q}\) and explain its relations to multiple zeta values and polynomial functions on partitions. As an application of the previous general results, we will present a particularly nice spanning set for \(\mathcal{Z}_{q}\), the balanced multiple q-zeta values.
**Definition 8.1**.: ([1]) To integers \(s_{1}\geq 1,\ s_{2},\ldots,s_{l}\geq 0\) and polynomials \(R_{1}\in t\mathbb{Q}[t],\)
\(R_{2},\ldots,R_{l}\in\mathbb{Q}[t],\) associate the _generic multiple q-zeta value_
\[\zeta_{q}(s_{1},...,s_{l};R_{1},...,R_{l})=\sum_{n_{1}>\cdots>n_{l}>0}\frac{R_ {1}(q^{n_{1}})}{(1-q^{n_{1}})^{s_{1}}}\cdots\frac{R_{l}(q^{n_{l}})}{(1-q^{n_{l }})^{s_{l}}}\in\mathbb{Q}[\![q]\!].\]
The assumptions \(s_{1}\geq 1\) and \(R_{1}\in t\mathbb{Q}[t]\) are necessary for convergence.
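Since Definition 8.1 is a concrete q-series, it can be checked numerically. The following sympy sketch (hypothetical helper names, our code) computes its q-expansion up to a fixed order, using that \(R_{1}\in t\mathbb{Q}[t]\) bounds the contributing indices:

```python
import sympy as sp

q, t = sp.symbols("q t")
N = 12  # truncation order in q

def trunc(expr):
    # keep only the terms of order <= N
    return sp.series(expr, q, 0, N + 1).removeO()

def zeta_q(s, R):
    """Truncated q-expansion of the generic multiple q-zeta value of
    Definition 8.1 (a sketch; assumes s[0] >= 1 and R[0] in t*Q[t],
    so only indices n_1 <= N contribute up to order q^N)."""
    l = len(s)
    total = sp.Integer(0)
    def rec(pos, prev, partial):
        nonlocal total
        if pos == l:
            total += partial
            return
        for n in range(1, prev):  # enforces n_1 > n_2 > ... > n_l > 0
            factor = trunc(R[pos].subs(t, q**n) / (1 - q**n) ** s[pos])
            rec(pos + 1, n, trunc(sp.expand(partial * factor)))
    rec(0, N + 1, sp.Integer(1))
    return sp.expand(total)

# zeta_q(2; t) = sum_{n>0} q^n/(1-q^n)^2 has coefficients sigma_1(n):
print(zeta_q([2], [t]))  # q + 3*q**2 + 4*q**3 + 7*q**4 + ...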
**Proposition 8.2**.: _([1, p. 6]) For integers \(s_{1}\geq 2,\ s_{2},\ldots,s_{l}\geq 1\) and polynomials \(R_{1}\in t\mathbb{Q}[t],\ R_{2},\ldots,R_{l}\in\mathbb{Q}[t]\), one has_
\[\lim_{q\to 1}(1-q)^{s_{1}+\cdots+s_{l}}\zeta_{q}(s_{1},...,s_{l};R_{1},...,R_ {l})=R_{1}(1)\cdots R_{l}(1)\zeta(s_{1},...,s_{l}).\qed\]
Following H. Bachmann and U. Kühn ([1]), we consider the following kind of multiple q-zeta values.
**Definition 8.3**.: Define the \(\mathbb{Q}\)-vector space
\[\mathcal{Z}_{q}=\operatorname{span}_{\mathbb{Q}}\bigl{\{}\zeta_{q}(s_{1},..., s_{l};R_{1},...,R_{l})\ \big{|}\ l\geq 0,s_{1}\geq 1,\ s_{2},...,s_{l}\geq 0,\deg(R_{j}) \leq s_{j}\bigr{\}},\]
where we set \(\zeta_{q}(\emptyset;\emptyset)=1\).
The space \(\mathcal{Z}_{q}\) contains all models for multiple q-zeta values studied in the literature, so this can be seen as a model-free approach to them. For \(\zeta_{q}(s_{1};R_{1}),\zeta_{q}(s_{2};R_{2})\in\mathcal{Z}_{q}\), the usual power series multiplication reads
\[\zeta_{q}(s_{1};R_{1})\cdot\zeta_{q}(s_{2};R_{2})=\zeta_{q}(s_{1},s_{2};R_{1},R_{2})+\zeta_{q}(s_{2},s_{1};R_{2},R_{1})+\zeta_{q}(s_{1}+s_{2};R_{1}R_{2}).\]
Since \(\deg(R_{1}R_{2})\leq s_{1}+s_{2}\), the product is also an element in \(\mathcal{Z}_{q}\). Similar computations for arbitrary multi-indices show that \(\mathcal{Z}_{q}\) is an associative, commutative algebra. Moreover, the algebra \(\mathcal{Z}_{q}\) contains the algebra \(\widehat{\mathcal{M}}^{\mathbb{Q}}(\operatorname{SL}_{2}(\mathbb{Z}))\) of quasi-modular forms with rational coefficients (as introduced in [13]) and is closed under the well-known derivation \(q\frac{\mathrm{d}}{\mathrm{d}q}\).
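The depth-1 product formula above can be verified directly on truncated q-expansions. The following self-contained Python check (plain integer arithmetic, our code, not from the text) does so for \(s_{1}=s_{2}=1\) and \(R_{1}=R_{2}=t\), where the three series are expanded from their definitions as (double) sums:

```python
N = 30  # truncation order for q-expansions

def coeffs(terms):
    """Collect (exponent, coefficient) pairs into a truncated
    q-expansion, represented as a list c[0..N]."""
    c = [0] * (N + 1)
    for e, a in terms:
        if e <= N:
            c[e] += a
    return c

# zeta_q(1; t) = sum_{n>0} q^n/(1-q^n) = sum_{n,m>=1} q^{nm}
z1 = coeffs(((n * m, 1) for n in range(1, N + 1) for m in range(1, N + 1)))

# zeta_q(1,1; t,t) = sum_{n1>n2>0, m1,m2>=1} q^{n1*m1 + n2*m2}
z11 = coeffs(
    ((n1 * m1 + n2 * m2, 1)
     for n1 in range(2, N + 1) for n2 in range(1, n1)
     for m1 in range(1, N + 1) for m2 in range(1, N + 1))
)

# zeta_q(2; t^2) = sum_{n>0} q^{2n}/(1-q^n)^2 = sum_{n>=1, m>=2} (m-1) q^{nm}
z2 = coeffs(((n * m, m - 1) for n in range(1, N + 1) for m in range(2, N + 1)))

# check: zeta_q(1;t)^2 = 2*zeta_q(1,1;t,t) + zeta_q(2;t^2)
square = [sum(z1[i] * z1[k - i] for i in range(k + 1)) for k in range(N + 1)]
assert all(square[k] == 2 * z11[k] + z2[k] for k in range(N + 1))
```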
**Remark 8.4**.: The additional assumption on the degree of the polynomials \(R_{j}\) in Definition 8.3 can be justified by its relations to polynomial functions on partitions. More precisely, the original definition of the space \(\mathcal{Z}_{q}\) was given by H. Bachmann as the span of the bi-brackets ([1]), which can be seen as generating series of monomial functions on partitions. In [10, cf. (1.6)] it is shown that the space \(\mathcal{Z}_{q}\) is exactly the image of the polynomial functions on partitions under the q-bracket.
Let \(\lambda=(1^{m_{1}}2^{m_{2}}3^{m_{3}}\dots)\) be a partition of some natural number \(N\) of length \(d\), i.e., the multiplicities \(m_{i}\in\mathbb{Z}_{\geq 0}\) are nonzero only for finitely many indices \(i_{1},\dots,i_{d}\) and one has \(\sum_{i\geq 1}m_{i}i=N\). A polynomial \(f\in\mathbb{Q}[X_{1},\dots,X_{d},Y_{1},\dots,Y_{d}]\) can be evaluated at the partition \(\lambda\) by
\[f(\lambda)=f(i_{1},\dots,i_{d},m_{i_{1}},\dots,m_{i_{d}}).\]
Denote by \(\mathcal{P}(N,d)\) the set of all partitions of \(N\) of length \(d\). Then we associate to a polynomial \(f\in\mathbb{Q}[X_{1},\ldots,X_{d},Y_{1},\ldots,Y_{d}]\) the generating series
\[\operatorname{Gen}_{f}(q)=\sum_{N\geqslant 1}\left(\sum_{\lambda\in\mathcal{P}(N,d)}f(\lambda)\right)q^{N}.\]
Whenever \(f\in\mathbb{Q}[X_{1},\ldots,X_{d},Y_{1},\ldots,Y_{d}]\) is chosen to be a monomial, \(\operatorname{Gen}_{f}(q)\) is equal to a bi-bracket of depth \(d\); more details are given in [1, Theorem 1.3]. Therefore, as a consequence of [1, Theorem 2.3] we obtain that the space \(\mathcal{Z}_{q}\) is spanned by the generating series \(\operatorname{Gen}_{f}(q)\) with \(f\in\mathbb{Q}[X_{1},\ldots,X_{d},Y_{1},\ldots,Y_{d}]\), \(d\geqslant 0\). This indicates that the elements in \(\mathcal{Z}_{q}\) should be invariant under some involution corresponding to conjugation of partitions.
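The passage from polynomials to q-series in this remark can be made concrete. Below is a sympy sketch of \(\operatorname{Gen}_{f}(q)\) (our code; we assume the distinct part sizes are listed increasingly as \(i_{1}<\cdots<i_{d}\), which the remark above leaves implicit):

```python
import sympy as sp
from sympy.utilities.iterables import partitions

q = sp.symbols("q")

def gen_f(f, d, n_max=10):
    """Truncated generating series Gen_f(q), summed over partitions
    with exactly d distinct part sizes (a sketch)."""
    Xs = sp.symbols(f"X1:{d+1}")
    Ys = sp.symbols(f"Y1:{d+1}")
    total = sp.Integer(0)
    for n in range(1, n_max + 1):
        for part in partitions(n):  # dict {part size: multiplicity}
            if len(part) != d:
                continue
            sizes = sorted(part)
            vals = {Xs[j]: i for j, i in enumerate(sizes)}
            vals.update({Ys[j]: part[i] for j, i in enumerate(sizes)})
            total += f.subs(vals) * q**n
    return sp.expand(total)

# With f = 1 and d = 1 the coefficient of q^n counts partitions with a
# single part size, i.e. the number of divisors of n:
print(gen_f(sp.Integer(1), 1))  # q + 2*q**2 + 2*q**3 + 3*q**4 + ...
```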
## 9. Combinatorial bi-multiple Eisenstein series
We present the combinatorial bi-multiple Eisenstein series constructed in [1]. They form a spanning set for the space \(\mathcal{Z}_{q}\), so they are a particular model for multiple q-zeta values. Combinatorial bi-multiple Eisenstein series satisfy a weight-graded product formula and are invariant under a weight-homogeneous involution. Therefore, they should induce a grading on \(\mathcal{Z}_{q}\), which extends the weight-grading of the algebra \(\widehat{\mathcal{M}}^{\mathbb{Q}}(\operatorname{SL}_{2}(\mathbb{Z}))\) of quasi-modular forms with rational coefficients. Their construction is inspired by the Fourier expansion of multiple Eisenstein series ([1, Theorem 1.4]). Thus, the combinatorial bi-multiple Eisenstein series give a natural connection between the space \(\mathcal{Z}_{q}\) and multiple Eisenstein series. In particular, they should give a description of all relations between multiple Eisenstein series. We will recall the construction of the combinatorial bi-multiple Eisenstein series as given in [1].
**Definition 9.1**.: By the work of G. Racinet ([10]) and also the combination of the works of V. G. Drinfeld ([12]) and H. Furusho ([14]) there exists a rational solution to the extended double shuffle equations
\[\beta(k_{1},\ldots,k_{d})\in\mathbb{Q},\qquad k_{1},\ldots,k_{d} \in\mathbb{Z}_{\geqslant 1},\ k_{1}>1,\]
such that
\[\beta(k)=-\frac{B_{k}}{2k!},\ k\ \text{even},\qquad\beta(k)=0,\ k\ \text{odd}. \tag{9.1}\]
Denote by \(\beta_{*}(k_{1},\ldots,k_{d}),\ k_{1},\ldots,k_{d}\geqslant 1\), the corresponding stuffle regularized elements and define for all \(d\geqslant 1\)
\[\mathfrak{b}_{d}(X_{1},\ldots,X_{d})=\sum_{k_{1},\ldots,k_{d} \geqslant 1}\beta_{*}(k_{1},\ldots,k_{d})X_{1}^{k_{1}-1}\ldots X_{d}^{k_{d}-1}. \tag{9.2}\]
Moreover set \(\mathfrak{b}_{0}=1\), then \(\mathfrak{b}=(\mathfrak{b}_{d})_{d\geqslant 0}\) is a mould with coefficients in \(\mathbb{Q}\).
In general, a solution to the extended double shuffle equations is not unique. In the following, we will fix the mould \(\mathfrak{b}\) and the whole construction of the combinatorial bi-multiple Eisenstein series will depend on this choice.
**Definition 9.2**.: Define the bimould \(\mathfrak{b}=(\mathfrak{b}_{d})_{d\geqslant 0}\) by \(\mathfrak{b}_{0}=1\) and for \(d\geqslant 1\) by
\[\mathfrak{b}_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\sum_{0\leqslant i \leqslant j\leqslant d}\gamma_{i}\mathfrak{b}_{j-i}(Y_{1}+\cdots+Y_{j-i}, \ldots,Y_{1}+Y_{2},Y_{1})\mathfrak{b}_{d-j}(X_{j+1},\ldots,X_{d}),\]
where the coefficients \(\gamma_{i}\) are defined by
\[\sum_{i\geqslant 0}\gamma_{i}T^{i}=\exp\left(\sum_{n\geqslant 2}\frac{(-1)^{n+ 1}}{n}\frac{B_{n}}{2n!}T^{n}\right).\]
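The coefficients \(\gamma_{i}\) are computable directly from this exponential. A short sympy sketch (our code, not from the text):

```python
import sympy as sp

T = sp.symbols("T")
M = 9  # compute gamma_0, ..., gamma_{M-1}

# The exponent from the definition above; sp.bernoulli(n) gives B_n,
# and the odd Bernoulli numbers vanish for n >= 3.
expo = sum(
    sp.Rational((-1) ** (n + 1), n)
    * sp.bernoulli(n) / (2 * sp.factorial(n)) * T**n
    for n in range(2, M)
)
gamma = sp.series(sp.exp(expo), T, 0, M).removeO()
print([gamma.coeff(T, i) for i in range(M)])
# gamma_0 = 1, gamma_1 = 0, gamma_2 = -1/48, ...
```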
Independent of the shape of the mould \(\mathfrak{b}\), this construction will always yield a swap invariant bimould. Since the coefficients of the mould \(\mathfrak{b}\) satisfy the extended double shuffle relations, we obtain that the bimould \(\mathfrak{b}\) is also symmetril.
**Definition 9.3**.: Define the bimould \(\widetilde{\mathfrak{b}}=(\widetilde{\mathfrak{b}}_{d})_{d\geqslant 0}\) by \(\widetilde{\mathfrak{b}}_{0}=1\) and
\[\widetilde{\mathfrak{b}}_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}= \sum_{i=0}^{d}\frac{(-1)^{i}}{2^{i}i!}\mathfrak{b}_{d-i}\binom{X_{i+1},\ldots, X_{d}}{-Y_{1},\ldots,-Y_{d-i}},\qquad d\geqslant 1.\]
For each \(u\geqslant 1\), let \(\mathfrak{L}^{(u)}=(\mathfrak{L}^{(u)}_{d})_{d\geqslant 0}\) be the bimould given by \(\mathfrak{L}^{(u)}_{0}=1\) and
\[\mathfrak{L}^{(u)}_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{ d}}=\sum_{j=1}^{d}\mathfrak{b}_{j-1}\binom{X_{1}-X_{j},\ldots,X_{j-1}-X_{j}}{Y_{1}, \ldots,Y_{j-1}}L_{u}\binom{X_{j}}{Y_{1}+\cdots+Y_{d}}\\ \cdot\widetilde{\mathfrak{b}}_{d-j}\binom{X_{d}-X_{j},\ldots,X_{j +1}-X_{j}}{Y_{d},\ldots,Y_{j+1}},\]
where the power series \(L_{u}\binom{X}{Y}\) is defined by
\[L_{u}\binom{X}{Y}=\frac{\exp(X+uY)q^{u}}{1-\exp(X)q^{u}},\qquad u\geqslant 1.\]
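The power series \(L_{u}\) has an elementary geometric-series expansion, \(L_{u}\binom{X}{Y}=\sum_{n\geq 1}\exp(nX+uY)q^{un}\). The following sympy sketch verifies this with \(x=\exp(X)\) and \(w=\exp(uY)\) treated as formal symbols (a plain sanity check, our code):

```python
import sympy as sp

x, w, q = sp.symbols("x w q")  # stand-ins for exp(X) and exp(u*Y)
u, M = 2, 6

# Closed form of L_u from above, rewritten in x and w:
closed = x * w * q**u / (1 - x * q**u)
# Geometric series: sum_{n>=1} exp(n*X + u*Y) q^{u*n}
expanded = sum(x**n * w * q**(u * n) for n in range(1, M + 1))

diff = sp.series(closed - expanded, q, 0, u * M + 1).removeO()
assert sp.expand(diff) == 0  # the two agree up to order q^{u*M}
```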
It can be shown ([1, Lemma 6.20]) that the bimould \(\mathfrak{L}^{(u)}\) is symmetril; the proof uses the additional conditions on the depth \(1\) terms of \(\mathfrak{b}\) given in (9.1).
**Definition 9.4**.: Define the bimould \(\mathfrak{g}^{*}=(\mathfrak{g}^{*}_{d})_{d\geqslant 0}\) by \(\mathfrak{g}^{*}_{0}=1\) and
\[\mathfrak{g}^{*}_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\sum_{ \begin{subarray}{c}1\leqslant j\leqslant d\\ 0=d_{0}<d_{1}<\cdots<d_{j-1}<d_{j}=d\\ u_{1}>\cdots>u_{j}>0\end{subarray}}\prod_{i=1}^{j}\mathfrak{L}^{(u_{i})}_{d_ {i}-d_{i-1}}\binom{X_{d_{i-1}+1},\ldots,X_{d_{i}}}{Y_{d_{i-1}+1},\ldots,Y_{d_ {i}}}.\]
Since the bimould \(\mathfrak{L}^{(u)}\) is symmetril, so is \(\mathfrak{g}^{*}\). This implication is independent of the explicit shape of \(\mathfrak{L}^{(u)}\).
**Definition 9.5**.: Let \(\mathfrak{G}=(\mathfrak{G}_{d})_{d\geqslant 0}\) be the mould product of \(\mathfrak{g}^{*}\) and \(\mathfrak{b}\), i.e., one has \(\mathfrak{G}_{0}=1\) and for \(d\geqslant 1\)
\[\mathfrak{G}_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\sum_{i=0}^{d} \mathfrak{g}^{*}_{i}\binom{X_{1},\ldots,X_{i}}{Y_{1},\ldots,Y_{i}}\mathfrak{b} _{d-i}\binom{X_{i+1},\ldots,X_{d}}{Y_{i+1},\ldots,Y_{d}}.\]
The main result of [1] was the following.
**Theorem 9.6**.: _[_1_, Theorem 6.5]_ _The bimould \(\mathfrak{G}\) is symmetril and swap invariant._
If Conjecture 9.11 holds, then the algebra \(\mathcal{Z}_{q}\) is graded by weight
\[\mathcal{Z}_{q}=\bigoplus_{w\geqslant 0}\mathcal{Z}_{q}^{(w)}. \tag{9.3}\]
Here \(\mathcal{Z}_{q}^{(w)}\) denotes the subspace of \(\mathcal{Z}_{q}\) spanned by all combinatorial bi-multiple Eisenstein series of weight \(w\). This follows immediately from the observation that the stuffle product and also the swap operator are homogeneous in weight.
## 10. Balanced multiple q-zeta values
We will apply the results from Section 7 to obtain a new spanning set of the algebra \(\mathcal{Z}_{q}\). Its elements satisfy very explicit and simple relations, which are homogeneous in weight. Thus, this new spanning set should induce a weight-grading on \(\mathcal{Z}_{q}\).
**Definition 10.1**.: Define the bimould \(\mathfrak{B}=(\mathfrak{B}_{d})_{d\geqslant 0}\) with coefficients in \(\mathcal{Z}_{q}\) by \(\mathfrak{B}_{0}=1\) and for \(d\geqslant 1\) by
\[\mathfrak{B}_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\mathfrak{G}_{d}^{\#_{Y}^{-1}}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}},\]
where \(\mathfrak{G}\) is the bimould of generating series of the combinatorial bi-multiple Eisenstein series (Definition 9.5). The _balanced multiple q-zeta values_\(\zeta_{q}(s_{1},\ldots,s_{l})\), \(s_{1}\geqslant 1\), \(s_{2},\ldots,s_{l}\geqslant 0\), are the coefficients of the bimould \(\mathfrak{B}\),
\[\mathfrak{B}_{d}\binom{X_{1},\ldots,X_{d}}{Y_{1},\ldots,Y_{d}}=\sum_{ \begin{subarray}{c}k_{1},\ldots,k_{d}\geqslant 1\\ m_{1},\ldots,m_{d}\geqslant 0\end{subarray}}\zeta_{q}(k_{1},\{0\}^{m_{1}}, \ldots,k_{d},\{0\}^{m_{d}})X_{1}^{k_{1}-1}Y_{1}^{m_{1}}\ldots X_{d}^{k_{d}-1} Y_{d}^{m_{d}},\quad d\geqslant 1.\]
Equivalently, the balanced multiple q-zeta values can be defined as
\[\zeta_{q}(s_{1},\ldots,s_{l})=G(\varphi_{\#}^{-1}(b_{s_{1}}\ldots b_{s_{l}})),\]
where \(G:\mathbb{Q}\langle\mathcal{Y}^{\mathrm{bi}}\rangle\to\mathcal{Z}_{q}\) denotes the algebra morphism induced by the combinatorial bi-multiple Eisenstein series given in Corollary 9.10.
**Remark 10.2**.: The definition of the balanced multiple q-zeta values has many similarities with Zudilin's definition of the multiple q-zeta brackets in terms of the bi-brackets ([22, eq (8)]).
Since the bimould \(\mathfrak{G}\) of generating series of the combinatorial bi-multiple Eisenstein series is symmetril and swap invariant (Theorem 9.6), an immediate consequence of Corollary 7.5 and 7.11 is the following.
**Proposition 10.3**.: _The bimould \(\mathfrak{B}\) is b-symmetril and \(\tau\)-invariant._
These symmetries of the bimould \(\mathfrak{B}\) yield the following properties of balanced multiple q-zeta values.
**Theorem 10.4**.: _There is a \(\tau\)-invariant, surjective algebra morphism_
\[\zeta_{q}:(\mathbb{Q}\langle\mathcal{B}\rangle^{0},\ast_{b}) \to\mathcal{Z}_{q},\] \[b_{s_{1}}\ldots b_{s_{l}} \mapsto\zeta_{q}(s_{1},\ldots,s_{l}).\]
Proof.: This is an immediate consequence of the b-symmetrility and \(\tau\)-invariance of the bimould \(\mathfrak{B}\) (see Definitions 5.2 and 7.8) together with the observation that \(\#_{Y}\) is a bijection.
As a reformulation of Conjecture 9.11, we expect the following.
**Conjecture 10.5**.: _All algebraic relations in \(\mathcal{Z}_{q}\) are a consequence of the balanced quasi-shuffle product formula and the \(\tau\)-invariance of the balanced multiple q-zeta values._
Observe that the balanced quasi-shuffle product, as well as the \(\tau\)-invariance, are both completely explicit on the level of words of multi indices and easy to compute. If Conjecture 10.5 holds, then the algebra \(\mathcal{Z}_{q}\) is graded by weight
\[\mathcal{Z}_{q}=\bigoplus_{w\geqslant 0}\mathcal{Z}_{q}^{(w)},\]
where \(\mathcal{Z}_{q}^{(w)}\) is the subspace of \(\mathcal{Z}_{q}\) spanned by all balanced multiple q-zeta values of weight \(w\). This conjectural grading coincides with the one of the combinatorial bi-multiple Eisenstein series given in (9.3).
It was conjectured in [1, Conjecture 4.3] in a slightly different setup that the space \(\mathcal{Z}_{q}\) is spanned by the elements \(\zeta_{q}(k_{1},\ldots,k_{d})\), \(k_{1},\ldots,k_{d}\geqslant 1\). Some partial results towards this conjecture are obtained in [1, Section 6], [1, Proposition 4.4, 5.9] and [20, Theorem 5.3] and a general proof is announced in [1].
**Example 10.6**.: 1. In depth 1 the balanced multiple q-zeta values coincide with the combinatorial bi-multiple Eisenstein series up to multiplication with certain factorials. Thus, we deduce from Example 9.8 that for all \(k\geqslant 1,\ m\geqslant 0\)
\[\zeta_{q}(k,\{0\}^{m})=-\delta_{m,0}\frac{B_{k}}{2k!}-\delta_{k,1}\frac{B_{m +1}}{2(m+1)!}+\frac{1}{(k-1)!m!}\sum_{u,v>0}u^{m}v^{k-1}q^{uv}.\]
So, the element \(\zeta_{q}(k,\{0\}^{m})\) is essentially equal to the \(m\)-th derivative of the Eisenstein series \(G_{k}\). If \(k_{1},\ldots,k_{d}\geqslant 1\), then the balanced multiple q-zeta value \(\zeta_{q}(k_{1},\ldots,k_{d})\) equals the combinatorial multiple Eisenstein series \(G(k_{1},\ldots,k_{d})\). In particular, the balanced multiple q-zeta values give a very natural extension and, conjecturally, an explicit description of all relations between multiple Eisenstein series (a small numerical transcription of the depth-1 formula above is given after this example).
2. Recall that the mould \(\mathfrak{b}\) in depth 2 is given by \(\mathfrak{b}(X_{1},X_{2})=\sum_{k_{1},k_{2}\geqslant 1}\beta_{*}(k_{1},k_{2})X_{ 1}^{k_{1}-1}X_{2}^{k_{2}-1}\) (compare to (9.2)). Direct calculations show that
\[\zeta_{q}(2,3) =\beta_{*}(2,3)-\frac{1}{48}\sum_{u,v>0}v^{2}q^{uv}+\frac{1}{2} \sum_{\begin{subarray}{c}u_{1}>u_{2}>0\\ v_{1},v_{2}>0\end{subarray}}v_{1}v_{2}^{2}q^{u_{1}v_{1}+u_{2}v_{2}},\] \[\zeta_{q}(2,0,3) =\frac{1}{2}\sum_{\begin{subarray}{c}u_{1}>u_{2}>0\\ v_{1},v_{2}>0\end{subarray}}u_{1}v_{1}v_{2}^{2}q^{u_{1}v_{1}+u_{2}v_{2}}-\frac {1}{2}\sum_{\begin{subarray}{c}u_{1}>u_{2}>0\\ v_{1},v_{2}>0\end{subarray}}u_{2}v_{1}v_{2}^{2}q^{u_{1}v_{1}+u_{2}v_{2}}.\]
An explicit construction for the numbers \(\beta_{*}(k_{1},k_{2})\) is given in [1, Section 6], in this case one obtains \(\beta_{*}(2,3)=0\). Further constructions for these rational numbers are given in [1], [1].
3. The quasi-modular forms are contained in the algebra \(\mathcal{Z}_{q}\). In particular, the modular
discriminant \(\Delta(q)=q\prod_{n\geqslant 1}(1-q^{n})^{24}\) is a \(\mathbb{Q}\)-linear combination of balanced multiple q-zeta values. For example, using the exotic relation given in [12, Example 2.48 (ii)] we obtain
\[\frac{1}{43200}\Delta(q)= \ 240\zeta_{q}(4,4,4)-63\zeta_{q}(9,3)+183\zeta_{q}(8,4)-\frac{675} {2}\zeta_{q}(7,5)+\frac{89}{2}\zeta_{q}(6,6)-378\zeta_{q}(5,7)\] \[+183\zeta_{q}(4,8).\]
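For readers who want to experiment, the depth-1 formula in part 1 of Example 10.6 is easy to transcribe. The following sympy sketch (our code, with a hypothetical function name) truncates the double sum at order \(q^{N}\):

```python
import sympy as sp

q = sp.symbols("q")
N = 10  # truncation order

def zeta_q_depth1(k, m):
    """Truncated q-expansion of zeta_q(k, {0}^m) via the depth-1
    formula in Example 10.6 (1); a direct transcription."""
    const = sp.Integer(0)
    if m == 0:
        const -= sp.bernoulli(k) / (2 * sp.factorial(k))
    if k == 1:
        const -= sp.bernoulli(m + 1) / (2 * sp.factorial(m + 1))
    dsum = sum(
        sp.Integer(u) ** m * v ** (k - 1) * q ** (u * v)
        for u in range(1, N + 1) for v in range(1, N + 1) if u * v <= N
    )
    return const + dsum / (sp.factorial(k - 1) * sp.factorial(m))

# zeta_q(2) = -1/24 + sum_{n>0} sigma_1(n) q^n, the Eisenstein series G_2:
print(zeta_q_depth1(2, 0))
```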
## 11. Further properties of the balanced multiple q-zeta values
Similar to the case of the Schlesinger-Zudilin multiple q-zeta values studied by K. Ebrahimi-Fard, D. Manchon, and J. Singer ([16], [1]), the balanced multiple q-zeta values possess a description in terms of an alphabet with two letters. We will compute the limits \(q\to 1\) of a certain kind of balanced multiple q-zeta values and obtain that those are elements in the algebra of multiple zeta values. Finally, we will describe the derivation \(q\frac{\mathrm{d}}{\mathrm{d}q}\) on the algebra \(\mathcal{Z}_{q}\) in terms of the balanced multiple q-zeta values.
**Definition 11.1**.: Let \(\shuffle_{b}\) be the product on the non-commutative free algebra \(\mathbb{Q}\langle p,y\rangle\) recursively defined by \(\mathbf{1}\shuffle_{b}w=w\shuffle_{b}\mathbf{1}=w\) and
\[(yu)\shuffle_{b}v=u\shuffle_{b}(yv)=y(u\shuffle_{b}v),\]
\[(pu)\shuffle_{b}(pv)=p(u\shuffle_{b}pv)+p(pu\shuffle_{b}v)+\begin{cases}p(u \shuffle_{b}v),&\text{ if }u=y\tilde{u}\text{ and }v=y\tilde{v},\\ 0&\text{ else}\end{cases}\]
for all \(u,v,w\in\mathbb{Q}\langle p,y\rangle\).
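Definition 11.1 is directly implementable. The following Python sketch (our encoding: words over \(\{p,y\}\) as strings) computes \(\shuffle_{b}\) recursively; when both arguments start with \(y\), the two expressions in the first rule agree, so either branch may be used.

```python
def sh_b(u: str, v: str) -> dict:
    """Balanced shuffle product of Definition 11.1 on words over {p, y};
    returns a dict mapping word -> integer coefficient (a sketch)."""
    if u == "":
        return {v: 1}
    if v == "":
        return {u: 1}
    out: dict = {}
    def acc(letter, terms):  # prepend a letter to every term
        for word, c in terms.items():
            out[letter + word] = out.get(letter + word, 0) + c
    if u[0] == "y":          # (y u') sh_b v = y (u' sh_b v)
        acc("y", sh_b(u[1:], v))
    elif v[0] == "y":        # u sh_b (y v') = y (u sh_b v')
        acc("y", sh_b(u, v[1:]))
    else:                    # u = p u', v = p v'
        acc("p", sh_b(u[1:], v))
        acc("p", sh_b(u, v[1:]))
        if u[1:].startswith("y") and v[1:].startswith("y"):
            acc("p", sh_b(u[1:], v[1:]))  # the stuffle-type extra term
    return out

# Example: py sh_b py = 2 pypy + pyy, reflecting
# zeta_q(1)^2 = 2 zeta_q(1,1) + zeta_q(1,0) under Theorem 11.4.
assert sh_b("py", "py") == {"pypy": 2, "pyy": 1}
```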
The involution \(\tau\) (Definition 7.7) can be also defined on the algebra \(\mathbb{Q}\langle p,y\rangle\).
**Definition 11.2**.: Let \(\tau\) be the anti-automorphism on \(\mathbb{Q}\langle p,y\rangle\) given by \(\tau(\mathbf{1})=\mathbf{1}\), \(\tau(p)=y\), and \(\tau(y)=p\), i.e., one has for all \(k_{1},\ldots,k_{d}\geqslant 1,\ m_{0},\ldots,m_{d}\geqslant 0\)
\[\tau(y^{m_{0}}p^{k_{1}}y^{m_{1}}\ldots p^{k_{d}}y^{m_{d}})=p^{m_{d}}y^{k_{d}} \ldots p^{m_{1}}y^{k_{1}}p^{m_{0}}.\]
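The involution of Definition 11.2 is equally concrete; a one-line sketch in the same string encoding:

```python
def tau(word: str) -> str:
    """Anti-automorphism tau of Definition 11.2 on words over {p, y}:
    reverse the word and exchange the two letters."""
    return "".join("y" if c == "p" else "p" for c in reversed(word))

assert tau("ppy") == "pyy"           # tau(p^2 y) = p y^2
assert tau(tau("pypyy")) == "pypyy"  # tau is an involution
```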
The involution \(\tau\) connects the balanced quasi-shuffle product \(*_{b}\) and the product \(\shuffle_{b}\). To make this precise, consider the canonical embedding
\[i:\mathbb{Q}\langle\mathcal{B}\rangle \hookrightarrow\mathbb{Q}\langle p,y\rangle,\] \[b_{s_{1}}\ldots b_{s_{l}} \mapsto p^{s_{1}}y\ldots p^{s_{l}}y.\]
Observe that we have \(\tau(i(w))=i(\tau(w))\) for all \(w\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}\).
**Proposition 11.3**.: _For all \(u,v\in\mathbb{Q}\langle\mathcal{B}\rangle\), we have_
\[i(u*_{b}v)=\tau(\tau\circ i(u)\shuffle_{b}\tau\circ i(v)).\]
Proof.: Let \(u=b_{s_{1}}\ldots b_{s_{l}}\) and \(v=b_{r_{1}}\ldots b_{r_{k}}\) be words in \(\mathbb{Q}\langle\mathcal{B}\rangle\) and \(s_{l},r_{k}\geqslant 1\). Using the recursive definition of \(*_{b}\) from the right, we obtain
\[i(u*_{b}v)=i(b_{s_{1}}\ldots b_{s_{l-1}}*_{b}b_{r_{1}}\ldots b_{ r_{k}})p^{s_{l}}y+i(b_{s_{1}}\ldots b_{s_{l}}*_{b}b_{r_{1}}\ldots b_{r_{k-1}})p^{r_{ k}}y\] \[\qquad\qquad\qquad\qquad+i(b_{s_{1}}\ldots b_{s_{l-1}}*_{b}b_{r_{1 }}\ldots b_{r_{k-1}})p^{s_{l}+r_{k}}y.\]
On the other hand, applying the definition of \(\tau\) and \(i\) gives
\[\tau(\tau\circ i(u)\sqcup\!\!\sqcup_{b}\tau\circ i(v))=\tau(py^{s_{l} }\dots py^{s_{1}}\sqcup\!\!\sqcup_{b}py^{r_{k}}\dots py^{r_{1}})\] \[= \ \tau\Big{(}py^{s_{l}}(py^{s_{l-1}}\dots py^{s_{1}}\sqcup\!\! \sqcup_{b}py^{r_{k}}\dots py^{r_{1}})+py^{r_{k}}(py^{s_{l}}\dots py^{s_{1}} \sqcup\!\!\sqcup_{b}py^{r_{k-1}}\dots py^{r_{1}})\] \[+py^{s_{l}+r_{k}}(py^{s_{l-1}}\dots py^{s_{1}}\sqcup\!\! \sqcup_{b}py^{r_{k-1}}\dots py^{r_{1}})\Big{)}\] \[= \ \tau\Big{(}\tau\circ i(b_{s_{1}}\dots b_{s_{l-1}})\sqcup\!\! \sqcup_{b}\tau\circ i(b_{r_{1}}\dots b_{r_{k}})\Big{)}p^{s_{l}}y+\tau\Big{(} \tau\circ i(b_{s_{1}}\dots b_{s_{l}})\sqcup\!\!\sqcup_{b}\tau\circ i(b_{r_{1} }\dots b_{r_{k-1}})\Big{)}p^{r_{k}}y\] \[+\tau\Big{(}\tau\circ i(b_{s_{1}}\dots b_{s_{l-1}})\sqcup\!\! \sqcup_{b}\tau\circ i(b_{r_{1}}\dots b_{r_{k-1}})\Big{)}p^{s_{l}+r_{k}}y.\]
So induction on the depth implies the claim. Next, assume that \(s_{l}=0\) and \(r_{k}\geq 0\). Then we obtain
\[i(u*_{b}v)=i(b_{s_{1}}\dots b_{s_{l-1}}*_{b}b_{r_{1}}\dots b_{r_{k}})y+i(b_{s_{1}}\dots b_{s_{l}}*_{b}b_{r_{1}}\dots b_{r_{k-1}})p^{r_{k}}y,\]
and on the other hand, using again the definition of \(\tau\) and \(i\)
\[\tau(\tau\circ i(u)\sqcup\!\!\sqcup_{b}\tau\circ i(v))=\tau(py^{s _{l}}\dots py^{s_{1}}\sqcup\!\!\sqcup_{b}py^{r_{k}}\dots py^{r_{1}})\] \[= \ \tau\Big{(}p(py^{s_{l-1}}\dots py^{s_{1}}\sqcup\!\!\sqcup_{b}py ^{r_{k}}\dots py^{r_{1}})+py^{r_{k}}(py^{s_{l}}\dots py^{s_{1}}\sqcup\!\! \sqcup_{b}py^{r_{k-1}}\dots py^{r_{1}})\Big{)}\] \[= \ \tau\Big{(}\tau\circ i(b_{s_{1}}\dots b_{s_{l-1}})\sqcup\!\! \sqcup_{b}\tau\circ i(b_{r_{1}}\dots b_{r_{k}})\Big{)}y+\tau\Big{(}\tau\circ i (b_{s_{1}}\dots b_{s_{l}})\sqcup\!\!\sqcup_{b}\tau\circ i(b_{r_{1}}\dots b_{r_ {k-1}})\Big{)}p^{r_{k}}y.\]
Again, induction on the depth implies the claim.
By Proposition 11.3, there is an injective algebra morphism
\[\tau\circ i:(\mathbb{Q}\langle\mathcal{B}\rangle,\mathbin{\raisebox{0.0pt}{ \scalebox{1.2}{$\bullet$}}}_{b}) \hookrightarrow(\mathbb{Q}\langle p,y\rangle,\sqcup\!\!\sqcup_{b}),\] \[b_{s_{1}}\dots b_{s_{l}} \mapsto py^{s_{l}}\dots py^{s_{1}}.\]
In particular, the restriction of \(\sqcup\!\!\sqcup_{b}\) to \(\operatorname{im}(\tau\circ i)=\mathbb{Q}\mathbf{1}+p\mathbb{Q}\langle p,y\rangle\) can be seen as a quasi-shuffle product. Denote
\[\mathbb{Q}\langle p,y\rangle^{0}=\mathbb{Q}\mathbf{1}+p\mathbb{Q}\langle p,y \rangle y.\]
**Theorem 11.4**.: _There is a \(\tau\)-invariant, surjective algebra morphism_
\[(\mathbb{Q}\langle p,y\rangle^{0},\sqcup\!\!\sqcup_{b}) \to\mathcal{Z}_{q},\] \[p^{s_{1}}y\dots p^{s_{l}}y \mapsto\zeta_{q}(s_{1},\dots,s_{l}).\]
Proof.: Observe that by definition
\[\zeta_{q}(i(u))=\zeta_{q}(u)\quad\text{ for all }u\in\mathbb{Q}\langle \mathcal{B}\rangle^{0}. \tag{11.1}\]
So as the balanced multiple q-zeta values form a spanning set of \(\mathcal{Z}_{q}\), we obtain surjectivity. Since \(\tau(i(u))=i(\tau(u))\) for each \(u\in\mathbb{Q}\langle\mathcal{B}\rangle^{0}\), we deduce the \(\tau\)-invariance from Theorem 10.4. Finally, we prove that the map is an algebra morphism for \(\sqcup\!\!\sqcup_{b}\). For \(s_{1},r_{1}\geq 1,\ s_{2},\dots,s_{l},r_{2},\dots,r_{k}\geq 0\), we obtain by applying the \(\tau\)-invariance and (11.1)
\(\zeta_{q}(py^{s_{l}}\dots py^{s_{1}})\zeta_{q}(py^{r_{k}}\dots py^{r_{1}})=\zeta_{q }(p^{s_{1}}y\dots p^{s_{l}}y)\zeta_{q}(p^{r_{1}}y\dots p^{r_{k}}y)=\zeta_{q}(b_{s _{1}}\dots b_{s_{l}})\zeta_{q}(b_{r_{1}}\dots b_{r_{k}}).\)
Since \(\zeta_{q}\) is an algebra morphism for the balanced quasi-shuffle product \(*_{b}\), we deduce
\[\zeta_{q}(py^{s_{l}}\dots py^{s_{1}})\zeta_{q}(py^{r_{k}}\dots py^{r_{1}})= \zeta_{q}(b_{s_{1}}\dots b_{s_{l}}*_{b}b_{r_{1}}\dots b_{r_{k}}).\]
Applying Proposition 11.3 and then again the \(\tau\)-invariance gives
\[\zeta_{q}(py^{s_{l}}\dots py^{s_{1}})\zeta_{q}(py^{r_{k}}\dots py ^{r_{1}}) =\zeta_{q}\big{(}\tau\big{(}\tau\circ i(b_{s_{1}}\dots b_{s_{l}}) \shuffle_{b}\tau\circ i(b_{r_{1}}\dots b_{r_{k}})\big{)}\big{)}\] \[=\zeta_{q}\big{(}\tau\circ i(b_{s_{1}}\dots b_{s_{l}})\shuffle_{ b}\tau\circ i(b_{r_{1}}\dots b_{r_{k}})\big{)}.\]
Finally, applying the definition of \(\tau\) and \(i\) yields the desired formula
\[\zeta_{q}(py^{s_{l}}\dots py^{s_{1}})\zeta_{q}(py^{r_{k}}\dots py^{r_{1}})= \zeta_{q}\big{(}py^{s_{l}}\dots py^{s_{1}}\shuffle_{b}py^{r_{k}}\dots py^{r_{1 }}\big{)}.\]
Next, we will compute the limit of the balanced multiple q-zeta values as \(q\to 1\) for some special multi-indices.
**Proposition 11.5**.: _For a word \(w=b_{\varepsilon_{1}}\dots b_{\varepsilon_{n}}b_{k_{1}}\dots b_{k_{d}}\) in \(\mathbb{Q}\langle\mathcal{B}\rangle^{0}\), where \(\varepsilon_{1},\dots,\varepsilon_{n}\in\{0,1\}\), \(\varepsilon_{n}=0\) and \(k_{1},\dots,k_{d}\in\mathbb{Z}_{\geq 1}\), \(k_{1}\geq 2\), we have_
\[\lim_{q\to 1}(1-q)^{\operatorname{wt}(w)}\zeta_{q}(w)=\zeta(x_{\varepsilon_{ n}}\dots x_{\varepsilon_{1}})\zeta(y_{k_{1}}\dots y_{k_{d}}).\]
Here we set as explained in (1.2), (1.4),
\[\zeta(x_{0}^{k_{1}-1}x_{1}\dots x_{0}^{k_{d}-1}x_{1})=\zeta(k_{1},\dots,k_{d}), \zeta(y_{k_{1}}\dots y_{k_{d}})=\zeta(k_{1},\dots,k_{d}),\]
for all \(k_{1},\dots,k_{d}\in\mathbb{Z}_{\geq 1}\), \(k_{1}\geq 2\).
Proof.: Let \(\mathfrak{z}=(\mathfrak{z}_{d})_{d\geq 0}\) be the mould of the multiple zeta values, so for each \(d\geq 1\) we have
\[\mathfrak{z}(X_{1},\dots,X_{d})=\sum_{k_{1}\geq 2,k_{2},\dots,k_{d}\geq 1}\zeta(k_{ 1},\dots,k_{d})X_{1}^{k_{1}-1}\dots X_{d}^{k_{d}-1}.\]
Similar to [13, Definition 1.3] define the conjugated multiple zeta values \(\xi(m_{1},\dots,m_{d})\), \(m_{1},\dots,m_{d-1}\geq 0,\ m_{d}\geq 1\), by
\[\mathfrak{z}(Y_{1}+\dots+Y_{d},\dots,Y_{1}+Y_{2},Y_{1})=\sum_{m_{1},\dots,m_{ d-1}\geq 0,m_{d}\geq 1}\xi(m_{1},\dots,m_{d})\frac{Y_{1}^{m_{1}}}{m_{1}!}\dots \frac{Y_{d}^{m_{d}}}{m_{d}!}. \tag{11.2}\]
By [13, Theorem 4.18] and [1, Remark 6.18], the combinatorial bi-multiple Eisenstein series satisfy
\[\lim_{q\to 1}(1-q)^{m_{1}+\dots+m_{j}+j+k_{j+1}+\dots+k_{d}}G\binom{1,\dots,1, \ k_{j+1},\dots,k_{d}}{m_{1},\dots,m_{j},\ \ 0,\dots,0}=\xi(m_{1},\dots,m_{j})\zeta(k_{j+1},\dots,k_{d})\]
for all \(1\leq j\leq d\). We deduce from the definition of the balanced multiple q-zeta values (Definition 10.1) that we have for \(\varepsilon_{1},\dots,\varepsilon_{n}\in\{0,1\}\), \(\varepsilon_{n}=0\) and \(k_{1},\dots,k_{d}\in\mathbb{Z}_{\geq 1}\), \(k_{1}\geq 2\)
\[\lim_{q\to 1}(1-q)^{n+k_{1}+\dots+k_{d}}\zeta_{q}(b_{ \varepsilon_{1}}\dots b_{\varepsilon_{n}}b_{k_{1}}\dots b_{k_{d}}) =\lim_{q\to 1}(1-q)^{n+k_{1}+\dots+k_{d}}G(\varphi_{\#}^{-1}(b_{ \varepsilon_{1}}\dots b_{\varepsilon_{n}}b_{k_{1}}\dots b_{k_{d}}))\] \[=\xi(\varphi_{\#}^{-1}(b_{\varepsilon_{1}}\dots b_{\varepsilon_{ n}}))\zeta(k_{1},\dots,k_{d}). \tag{11.3}\]
Here \(\xi(\varphi_{\#}^{-1}(b_{\varepsilon_{1}}\dots b_{\varepsilon_{n}}))\) means that we first have to apply \(\varphi_{\#}^{-1}\) to \(b_{\varepsilon_{1}}\dots b_{\varepsilon_{n}}\), which gives an element in \(\mathbb{Q}\langle y_{1,m}\mid m\geq 0\rangle\), and then we have to identify \(\xi(y_{1,m_{1}}\dots y_{1,m_{d}})=\xi(m_{1},\dots,m_{d})\). On the level of generating series, this means we have to apply \(\#_{Y}^{-1}\) (Definition 7.2); therefore, we substitute \(Y_{1}\mapsto Y_{1},\ Y_{2}\mapsto Y_{2}-Y_{1},\dots,Y_{d}\mapsto Y_{d}-Y_{d-1}\) in the left-hand side of (11.2) and obtain
\[\mathfrak{z}(Y_{d},Y_{d-1},\dots,Y_{1})=\sum_{m_{1},\dots,m_{d-1}\geq 1,m_{d} \geq 2}\zeta(m_{d},\dots,m_{1})Y_{1}^{m_{1}-1}\dots Y_{d}^{m_{d}-1}. \tag{11.4}\]
Combining (11.3) and (11.4) gives the desired formula.
**Remark 11.6**.: Similar to [1, Definition 4.17], one could define a regularized limit for the balanced multiple q-zeta values by a certain regularization process in the algebra \(\mathbb{Q}\langle\mathcal{B}\rangle\) and obtains for each word \(w\in\mathbb{Q}\langle\mathcal{B}\rangle\)
\[\underset{q\to 1}{\lim}^{*}(1-q)^{\operatorname{wt}(w)}\zeta_{q}(w)=\sum_{ \begin{subarray}{c}uv=w\\ u=b_{\varepsilon_{1}}\dots b_{\varepsilon_{n}},\ \varepsilon_{i}\in\{0,1\}\\ v=b_{k_{1}}\dots b_{k_{d}},\ k_{i}\in\mathbb{N}\end{subarray}}\zeta^{\sqcup \sqcup}(x_{\varepsilon_{n}}\dots x_{\varepsilon_{1}})\zeta^{*}(y_{k_{1}}\dots y _{k_{d}}).\]
Here \(\zeta^{\sqcup\sqcup}\) and \(\zeta^{*}\) denote the shuffle and stuffle regularized multiple zeta value map as given in (1.2), (1.4). In particular, the regularized limit of the balanced multiple q-zeta value \(\zeta_{q}(w)\) vanishes whenever the word \(w\) cannot be decomposed as \(w=uv\) with \(u\in\mathbb{Q}\langle b_{0},b_{1}\rangle\) and \(v\in\mathbb{Q}\langle b_{i}\mid i\geq 1\rangle\).
Finally, we want to describe the derivation \(q\frac{\mathrm{d}}{\mathrm{d}q}\) on \(\mathcal{Z}_{q}\) in terms of the balanced multiple q-zeta values. The obtained formula can be seen as a weight-graded version of the derivation formula for the Schlesinger-Zudilin multiple q-zeta values given in [15, Theorem 4.1].
**Proposition 11.7**.: _We have for all \(s_{1}\geq 1,s_{2},\dots,s_{l}\geq 0\) that_
\[q\frac{\mathrm{d}}{\mathrm{d}q}\zeta_{q}(s_{1},\dots,s_{l})=\sum_{i=1}^{l} \sum_{j=i}^{l}s_{i}\zeta_{q}(s_{1},\dots,s_{i}+1,\dots,s_{j},0,s_{j+1},\dots,s _{l}).\]
Observe that the derivation \(q\frac{\mathrm{d}}{\mathrm{d}q}\) is a homogeneous operator increasing the weight by \(2\).
Proof.: By [1, Proposition 6.29] the generating series of the combinatorial bi-multiple Eisenstein series satisfy
\[q\frac{\mathrm{d}}{\mathrm{d}q}\mathfrak{G}_{d}\binom{X_{1},\dots,X_{d}}{Y_{1 },\dots,Y_{d}}=\sum_{i=1}^{d}\frac{\partial}{\partial X_{i}}\frac{\partial}{ \partial Y_{i}}\mathfrak{G}_{d}\binom{X_{1},\dots,X_{d}}{Y_{1},\dots,Y_{d}}, \qquad d\geq 1.\]
By definition of the balanced multiple q-zeta values (Definition 10.1), we obtain that
\[\sum_{\begin{subarray}{c}k_{1},\dots,k_{d}\geq 1\\ m_{1},\dots,m_{d}\geq 0\end{subarray}}q\frac{\mathrm{d}}{\mathrm{d}q}\zeta_{q}(k_{1},\{ 0\}^{m_{1}},\dots,k_{d},\{0\}^{m_{d}})X_{1}^{k_{1}-1}Y_{1}^{m_{1}}X_{2}^{k_{2} -1}(Y_{1}+Y_{2})^{m_{2}}\dots X_{d}^{k_{d}-1}(Y_{1}+\dots+Y_{d})^{m_{d}} \tag{11.5}\]
must be equal to
\[\sum_{\begin{subarray}{c}k_{1},\dots,k_{d}\geq 1\\ m_{1},\dots,m_{d}\geq 0\end{subarray}}\sum_{i=1}^{d}\zeta_{q}(k_{1},\{0\}^{m_{1}}, \dots,k_{d},\{0\}^{m_{d}})\frac{\partial}{\partial X_{i}}\frac{\partial}{ \partial Y_{i}}X_{1}^{k_{1}-1}Y_{1}^{m_{1}}X_{2}^{k_{2}-1}(Y_{1}+Y_{2})^{m_{2}} \tag{11.6}\] \[\dots X_{d}^{k_{d}-1}(Y_{1}+\dots+Y_{d})^{m_{d}}\] \[=\sum_{\begin{subarray}{c}k_{1},\dots,k_{d}\geq 1\\ m_{1},\dots,m_{d}\geq 0\end{subarray}}\sum_{i=1}^{d}\sum_{j=i}^{d}(k_{i}-1)m_{j} \zeta_{q}(k_{1},\{0\}^{m_{1}},\dots,k_{d},\{0\}^{m_{d}})X_{1}^{k_{1}-1}Y_{1}^ {m_{1}}X_{2}^{k_{2}-1}(Y_{1}+Y_{2})^{m_{2}}\] \[\dots X_{i}^{k_{i}-2}(Y_{1}+\dots+Y_{i})^{m_{i}}\dots X_{j}^{k_{j} -1}(Y_{1}+\dots+Y_{j})^{m_{j}-1}\dots X_{d}^{k_{d}-1}(Y_{1}+\dots+Y_{d})^{m_{d}}.\]
Coefficient comparison at \(X_{1}^{k_{1}-1}Y_{1}^{m_{1}}X_{2}^{k_{2}-1}(Y_{1}+Y_{2})^{m_{2}}\dots X_{d}^{k _{d}-1}(Y_{1}+\dots+Y_{d})^{m_{d}}\) in (11.5) and (11.6) yields
\[q\frac{\mathrm{d}}{\mathrm{d}q}\zeta_{q}(k_{1},\{0\}^{m_{1}}, \dots,k_{d},\{0\}^{m_{d}})\] \[=\sum_{i=1}^{d}\sum_{j=i}^{d}k_{i}(m_{j}+1)\zeta_{q}(k_{1},\{0\}^ {m_{1}},\dots,k_{i}+1,\{0\}^{m_{i}},\dots,k_{j},\{0\}^{m_{j}+1},\dots,k_{d}, \{0\}^{m_{d}}),\]
which is equivalent to the claimed formula.
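Proposition 11.7 acts on multi-indices in a purely combinatorial way, which the following hypothetical Python helper (our code, not notation from the text) makes explicit; indices are tuples, and the output is a formal linear combination:

```python
from collections import Counter

def q_ddq(index):
    """Apply q d/dq to zeta_q(s_1,...,s_l) via Proposition 11.7; the
    result is a Counter mapping index tuples to integer coefficients."""
    out = Counter()
    l = len(index)
    for i in range(l):
        if index[i] == 0:
            continue  # the prefactor s_i annihilates these terms
        for j in range(i, l):
            s = list(index)
            s[i] += 1           # s_i -> s_i + 1
            s.insert(j + 1, 0)  # insert a 0 after position j
            out[tuple(s)] += index[i]
    return out

# q d/dq zeta_q(2) = 2 zeta_q(3,0);
# q d/dq zeta_q(1,1) = zeta_q(2,0,1) + zeta_q(2,1,0) + zeta_q(1,2,0)
print(q_ddq((2,)))
print(q_ddq((1, 1)))
```

Note that each term gains one in an entry and one extra \(0\) slot, so the weight indeed increases by \(2\).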
**Outlook.** In forthcoming articles we will formalize the balanced multiple q-zeta values and obtain an algebra generated by symbols satisfying the balanced quasi-shuffle product and \(\tau\)-invariance. Conjecturally, this algebra is equipped with a Hopf algebra structure where the coproduct is given by a generalization of Goncharov's coproduct. A first step towards this is a Lie algebra structure obtained in [1]. Moreover, this algebra of symbols should give a complete description of all relations in the algebra \(\mathcal{Z}_{q}\) and by Theorem 7.10 also a description of all relations between multiple Eisenstein series. By applying Racinet's ideas for formal multiple zeta values ([1]) to this new algebra, we will obtain a formal version of the limit computation given in Proposition 11.5. By [1] this map could also be seen as a formal version of taking the constant term of q-series.
|
2310.11341 | Dual Cognitive Architecture: Incorporating Biases and Multi-Memory
Systems for Lifelong Learning | Artificial neural networks (ANNs) exhibit a narrow scope of expertise on
stationary independent data. However, the data in the real world is continuous
and dynamic, and ANNs must adapt to novel scenarios while also retaining the
learned knowledge to become lifelong learners. The ability of humans to excel
at these tasks can be attributed to multiple factors ranging from cognitive
computational structures, cognitive biases, and the multi-memory systems in the
brain. We incorporate key concepts from each of these to design a novel
framework, Dual Cognitive Architecture (DUCA), which includes multiple
sub-systems, implicit and explicit knowledge representation dichotomy,
inductive bias, and a multi-memory system. The inductive bias learner within
DUCA is instrumental in encoding shape information, effectively countering the
tendency of ANNs to learn local textures. Simultaneously, the inclusion of a
semantic memory submodule facilitates the gradual consolidation of knowledge,
replicating the dynamics observed in fast and slow learning systems,
reminiscent of the principles underpinning the complementary learning system in
human cognition. DUCA shows improvement across different settings and datasets,
and it also exhibits reduced task recency bias, without the need for extra
information. To further test the versatility of lifelong learning methods on a
challenging distribution shift, we introduce a novel domain-incremental dataset
DN4IL. In addition to improving performance on existing benchmarks, DUCA also
demonstrates superior performance on this complex dataset. | Shruthi Gowda, Bahram Zonooz, Elahe Arani | 2023-10-17T15:24:02Z | http://arxiv.org/abs/2310.11341v1 | # Dual Cognitive Architecture: Incorporating Biases and Multi-Memory Systems for Lifelong Learning
###### Abstract
Artificial neural networks (ANNs) exhibit a narrow scope of expertise on stationary independent data. However, the data in the real world is continuous and dynamic, and ANNs must adapt to novel scenarios while also retaining the learned knowledge to become lifelong learners. The ability of humans to excel at these tasks can be attributed to multiple factors ranging from cognitive computational structures, cognitive biases, and the multi-memory systems in the brain. We incorporate key concepts from each of these to design a novel framework, _Dual Cognitive Architecture (DUCA)_, which includes multiple sub-systems, implicit and explicit knowledge representation dichotomy, inductive bias, and a multi-memory system. The inductive bias learner within DUCA is instrumental in encoding shape information, effectively countering the tendency of ANNs to learn local textures. Simultaneously, the inclusion of a semantic memory submodule facilitates the gradual consolidation of knowledge, replicating the dynamics observed in fast and slow learning systems, reminiscent of the principles underpinning the complementary learning system in human cognition. DUCA shows improvement across different settings and datasets, and it also exhibits reduced task recency bias, without the need for extra information. To further test the versatility of lifelong learning methods on a challenging distribution shift, we introduce a novel domain-incremental dataset _DN4IL_. In addition to improving performance on existing benchmarks, DUCA also demonstrates superior performance on this complex dataset.
Footnote 1: Code is public at [https://github.com/NewAI-Lab/DUCA](https://github.com/NewAI-Lab/DUCA).
Footnote 2: _DN4IL_ dataset is public at [https://github.com/NewAI-Lab/DN4IL-dataset](https://github.com/NewAI-Lab/DN4IL-dataset).
## 1 Introduction
Deep learning has seen rapid progress in recent years, and supervised learning agents have achieved superior performance on perception tasks. However, unlike a supervised setting, where data is static and independent and identically distributed, real-world data is changing dynamically. Continual learning (CL) aims to learn multiple tasks when data is streamed sequentially (Parisi et al., 2019). This is crucial in real-world deployment settings, as the model needs to adapt quickly to novel data (plasticity), while also retaining previously learned knowledge (stability). Artificial neural networks (ANN), however, are still not effective lifelong learners, as they often fail to generalize to small changes in distribution and also suffer from forgetting old information when presented with new data (catastrophic forgetting) (McCloskey and Cohen, 1989).
Humans, on the other hand, show a better ability to acquire new skills while also retaining previously learned skills to a greater extent. This intelligence can be attributed to different factors in human cognition. Multiple theories have been proposed to formulate an overall cognitive architecture, which is a broad domain-generic cognitive computation model that captures the essential structure and process of the mind. Some of these theories hypothesize that, instead of a single standalone module, multiple modules in the brain share information to excel at a particular task. CLARION (Connectionist learning with rule induction online)
(Sun & Franklin, 2007) is one such theory that postulates an integrative cognitive architecture, consisting of a number of distinct subsystems. It predicates a dual representational structure (Chaiken & Trope, 1999), where the top level encodes conscious explicit knowledge, while the other encodes indirect implicit information. The two systems interact, share knowledge, and cooperate to solve tasks. Delving into these underlying architectures and formulating a new design can help in the quest to build intelligent agents.
Multiple modules can be instituted instead of a single feedforward network: an explicit module responsible for learning from the standard visual input and an implicit module that specializes in acquiring and sharing contextual knowledge indirectly. The implicit module can be further divided into more submodules, each providing different information. Inductive biases and semantic memories can act as different kinds of implicit knowledge. Inductive biases are pre-stored templates or knowledge that provide some meaningful disposition toward adapting to the continuously evolving world (Chollet, 2019). Theories postulate that after rapidly learning information, a gradual consolidation of knowledge transpires in the brain for slow learning of structured information (Kumaran et al., 2016). Thus, the new design incorporates multiple concepts of cognition architectures, the dichotomy of implicit and explicit representations, inductive biases, and multi-memory systems theory.
To this end, we propose _Dual Cognitive Architecture_ (DUCA), a multi-module architecture for CL. The explicit working module processes the standard input data. Two different submodules are introduced for the implicit module. The inductive bias learner embeds relevant prior information, and as networks are shown to be biased toward textural information (unlike humans that are more biased toward global semantics) (Geirhos et al., 2019), we propose to utilize global shape information as the prior. Both texture and shape are present in the original image, but ANNs tend to rely more on texture and ignore semantic information. Hence, we utilize the implicit shape information and share it with the explicit module to learn more generic and high-level representations. Further, to emulate the consolidation of information in the slow-fast multi-memory system, a gradual accumulation of knowledge from the explicit working module is embedded in the second semantic memory submodule. We show that interacting and leveraging information between these modules can help alleviate catastrophic forgetting, while also increasing the robustness to the distribution shift.

Figure 1: Schematic of _Dual Cognitive Architecture (DUCA)_. The working model in the explicit module learns direct sensory data. Within the implicit module, the inductive bias learner encodes the prior shape knowledge and the semantic memory consolidates information from the explicit module. Only one network (semantic memory) is used during inference as it includes consolidated knowledge across all tasks.
DUCA achieves superior performance across all CL settings on various datasets. DUCA outperforms SOTA CL methods on Seq-CIFAR10 and Seq-CIFAR100 in class-incremental settings. Furthermore, in more realistic general class-incremental settings where the task boundary is blurry and the classes are not disjoint, DUCA shows significant gains. The addition of inductive bias and semantic memory helps to achieve a better balance between the plasticity-stability trade-off. The prior in the form of shape helps produce generic representations, and this results in DUCA exhibiting a reduced task-recency bias. Furthermore, DUCA also shows greater robustness against natural corruption. Finally, to test the capability of CL methods against the distribution shift, we introduce a domain-incremental learning dataset, _DN4IL_, which is a carefully designed subset of the DomainNet dataset (Peng et al., 2019). DUCA shows considerable robustness across all domains on these challenging data, thus establishing the efficacy of our cognitive-inspired CL architecture. Our contributions are as follows:
* _Dual Cognitive Architecture (DUCA)_, a novel method that incorporates aspects of cognitive architectures, multi-memory systems, and inductive bias into the CL framework.
* Introducing _DN4IL_, a challenging domain-incremental learning dataset.
* Benchmark across different CL settings: class-, task-, generalized class-, and domain-incremental learning.
* Analyses on the plasticity-stability trade-off, task recency bias, and robustness to natural corruptions.
## 2 Methodology
### Cognitive Architectures
Cognitive architectures refer to computational models that encapsulate the overall structure of the cognitive process in the brain. The underlying infrastructure of such a model can be leveraged to develop better intelligent systems. Global workspace theory (GWT) (Juliani et al., 2022) postulates that human cognition is composed of a multitude of special-purpose processors and is not a single standalone module. Different sub-modules might encode different contextual information which, when activated, can transfer knowledge to the conscious central workspace to influence and help make better decisions. Furthermore, CLARION (Sun and Franklin, 2007) posits a dual-system cognitive architecture with two levels of knowledge representation. The explicit module encodes direct knowledge that is externally accessible. The implicit module encodes indirect knowledge that is not directly accessible, but can be obtained through some intermediate interpretive or transformational steps. These two modules interact with each other by transferring knowledge between each other.
Inspired by these theories, we formulate a method that incorporates some of the key aspects of cognitive architecture into the CL method. A working module, which encodes the direct sensory data, forms the explicit module. A second module that encodes indirect and interpretive information forms the implicit module. The implicit module further includes multiple sub-modules to encode different types of knowledge.
### Inductive Bias
The sub-modules in the implicit module need to encapsulate implicit information that can provide more contextual and high-level supervision. One of such knowledge can be prior knowledge or inductive bias. Inductive biases are pre-stored templates that exist implicitly even in earlier stages of the human brain (Pearl and Mackenzie, 2018). For instance, cognitive inductive bias may be one of the reasons why humans can focus on the global semantics of objects to make predictions. ANNs, on the other hand, are more prone to rely on local cues and textures (Geirhos et al., 2019). Global semantics or shape information already exists in the visual data, but in an indirect way. The incorporation of shape-awareness to the networks has proven to be a more effective approach in acquiring generic representations (Gowda et al., 2022). Hence, we
utilize shape as indirect information in the implicit module. The sub-module uses a transformation step to extract the shape and share this inductive bias with the working module. As the standard (RGB) image and its shape counterpart can be viewed as different perspectives/modalities of the same data, ensuring that the representation of one modality is consistent with the other increases robustness to spurious correlations that might exist in only one of them.
### Multi Memory System
Moreover, many theories have postulated that an intelligent agent must possess differentially specialized learning memory systems (Kumaran et al., 2016). While one system rapidly learns the individual experience, the other gradually assimilates the knowledge. To emulate this behavior, we establish a second sub-module that slowly consolidates the knowledge from the working module.
### Formulation
To this end, we propose a novel method _Dual Cognitive Architecture (DUCA)_, which incorporates all these concepts into the CL paradigm. DUCA consists of two modules, the explicit module, and the implicit module. The explicit module has a single working model and processes the incoming direct visual data. The implicit module further consists of two submodules, namely the inductive bias learner and the semantic memory. They share relevant contextual information and assimilated knowledge with the explicit module, respectively. Figure 1 shows the overall architecture.
In the implicit module, semantic memory \(N_{SM}\), consolidates knowledge at stochastic intervals from the working model \(N_{WM}\), in the explicit module. The other submodule, the inductive bias learner \(N_{IBL}\), processes the data and extracts shape information (Section G). \(N_{WM}\) processes the RGB data, \(N_{SM}\) consolidates the information from the working module at an update frequency in a stochastic manner, and \(N_{IBL}\) learns from the shape data. The encoder or the feature extractor network takes an image as the input and produces latent representations, which are then passed to the linear classifier to do object recognition. \(f\) represents the combination of the encoder and the classifier, and \(\theta_{WM}\), \(\theta_{SM}\), and \(\theta_{IBL}\) are the parameters of the three networks.
A CL classification problem consists of a sequence of \(T\) tasks; during each task \(t\in\{1,2,\dots,T\}\), samples \(x_{c}\) and their corresponding labels \(y_{c}\) are drawn from the current task data \(D_{t}\). Furthermore, for each subsequent task, a random batch of exemplars is sampled from episodic memory \(B\) as \(x_{b}\). An inductive bias (shape) filter \(\mathrm{IB}(\cdot)\) is applied to generate shape samples, \(x_{c_{s}}=\mathrm{IB}(x_{c})\) and \(x_{b_{s}}=\mathrm{IB}(x_{b})\). Reservoir sampling (Vitter, 1985) is incorporated to replay previous samples. Each of the networks \(N_{WM}\) and \(N_{IBL}\) learns in its own modality with a supervised cross-entropy loss on both the current samples and the buffer samples:
\[\begin{split}\mathcal{L}_{Sup_{WM}}=&\mathcal{L}_{ CE}(f(x_{c};\theta_{WM}),y_{c})+\mathcal{L}_{CE}(f(x_{b};\theta_{WM}),y_{b})\\ \mathcal{L}_{Sup_{IBL}}=&\mathcal{L}_{CE}(f(x_{c_{s} };\theta_{IBL}),y_{c})+\mathcal{L}_{CE}(f(x_{b_{s}};\theta_{IBL}),y_{b})\end{split} \tag{1}\]
The Knowledge Sharing (KS) objectives are designed to transfer and share information between all modules. KS occurs for current samples and buffered samples. We employ the mean squared error as the objective function for all KS losses. To provide shape supervision to the working model and vice versa, a bidirectional decision space similarity constraint (\(\mathcal{L}_{biKS}\)) is enforced to align the features of the two modules.
\[\mathcal{L}_{biKS}=\mathop{\mathbb{E}}_{x\sim D_{t}\cup B}\lVert f(x_{s}; \theta_{IBL})-f(x;\theta_{WM})\rVert_{2}^{2} \tag{2}\]
The consolidated structural information in semantic memory is transferred to both the working model and the inductive bias learner by aligning the output space on the buffer samples, which further helps in information retention. The loss functions \(\mathcal{L}_{KS_{WM}}\) and \(\mathcal{L}_{KS_{IBL}}\) are as follows;
\[\begin{split}\mathcal{L}_{KS_{WM}}=&\mathop{ \mathbb{E}}_{x\sim B}\lVert f(x_{b};\theta_{SM})-f(x_{b};\theta_{WM})\rVert_{2 }^{2}\\ \mathcal{L}_{KS_{IBL}}=&\mathop{\mathbb{E}}_{x\sim B }\lVert f(x_{b};\theta_{SM})-f(x_{b_{s}};\theta_{IBL})\rVert_{2}^{2}\end{split} \tag{3}\]
Thus, the overall loss functions for the working model and the inductive bias learner are as follows;
\[\mathcal{L}_{WM}= \mathcal{L}_{Sup_{WM}}+\lambda(\mathcal{L}_{biKS}+\mathcal{L}_{KS_{WM}}) \tag{4}\] \[\mathcal{L}_{IBL}= \mathcal{L}_{Sup_{IBL}}+\gamma(\mathcal{L}_{biKS}+\mathcal{L}_{KS_{IBL}})\]
The semantic memory of the implicit module is updated with a stochastic momentum update (SMU) of the weights of the working model at rate \(r\) with a decay factor of \(d\),
\[\theta_{SM}=d\cdot\theta_{SM}+(1-d)\cdot\theta_{WM}\text{ if }s\sim U(0,1)<r \tag{5}\]
More details are provided in Algorithm 1. We discuss the computational aspect in Section F. Note that we use semantic memory (\(\theta_{SM}\)) for inference, as it contains consolidated knowledge across all tasks.
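To make the training dynamics concrete, here is a minimal, hypothetical PyTorch sketch of one DUCA update step (Eqs. 1-5), not the authors' code: `wm`, `ibl`, and `sm` are assumed to map image batches to logits, `ib_filter` stands in for the shape transformation, and `lam`, `gamma`, `rate`, `decay` are placeholder hyperparameter names.

```python
import torch
import torch.nn.functional as F

def duca_step(wm, ibl, sm, x_c, y_c, x_b, y_b, ib_filter,
              lam=0.1, gamma=0.1, rate=0.1, decay=0.999):
    """One sketched DUCA update: wm = working model, ibl = inductive
    bias learner, sm = semantic memory (hyperparameters are placeholders)."""
    xs_c, xs_b = ib_filter(x_c), ib_filter(x_b)  # shape modality

    # Eq. (1): supervised losses on current and buffered samples
    l_sup_wm = F.cross_entropy(wm(x_c), y_c) + F.cross_entropy(wm(x_b), y_b)
    l_sup_ibl = F.cross_entropy(ibl(xs_c), y_c) + F.cross_entropy(ibl(xs_b), y_b)

    # Eq. (2): bidirectional knowledge sharing on current + buffer samples
    x_all, xs_all = torch.cat([x_c, x_b]), torch.cat([xs_c, xs_b])
    l_biks = F.mse_loss(ibl(xs_all), wm(x_all))

    # Eq. (3): consolidated targets from semantic memory (buffer only)
    with torch.no_grad():
        target = sm(x_b)
    l_ks_wm = F.mse_loss(wm(x_b), target)
    l_ks_ibl = F.mse_loss(ibl(xs_b), target)

    # Eq. (4): total objectives for the two trained networks
    loss_wm = l_sup_wm + lam * (l_biks + l_ks_wm)
    loss_ibl = l_sup_ibl + gamma * (l_biks + l_ks_ibl)

    # Eq. (5): stochastic momentum update of the semantic memory
    if torch.rand(1).item() < rate:
        for p_sm, p_wm in zip(sm.parameters(), wm.parameters()):
            p_sm.data.mul_(decay).add_((1 - decay) * p_wm.data)

    return loss_wm, loss_ibl
```

In a full training loop, `loss_wm` and `loss_ibl` would each be backpropagated through their respective networks, while the semantic memory is never updated by gradient descent, matching Eq. (5).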
## 3 Experimental Settings
ResNet-18 (He et al., 2016) architecture is used for all experiments. All networks are trained using the SGD optimizer with standard augmentations of random crop and random horizontal flip. The different hyperparameters, tuned per dataset, are provided in Section E. The different CL settings are explained in detail in Section D. We consider Class-IL and Domain-IL, and also report the Task-IL setting. We use Seq-CIFAR10 and Seq-CIFAR100 (Krizhevsky et al., 2009) for the class-incremental learning (Class-IL) setting; each is divided into 5 tasks. In addition to Class-IL, we also consider and evaluate general Class-IL (GCIL) (Mi et al., 2020) on the CIFAR100 dataset. For domain-incremental learning (Domain-IL), we propose a novel dataset, _DN4IL_.
## 4 Results
We provide a comparison of our method with standard baselines and multiple other SOTA CL methods. The lower and upper bounds are reported as SGD (standard training) and JOINT (training all tasks together), respectively. We compare with other rehearsal-based methods in the literature, namely ER, DER++ (Buzzega et al., 2020), Co\({}^{2}\)L (Cha et al., 2021), ER-ACE (Caccia et al., 2021), and CLS-ER (Arani et al., 2022). Table 1 shows the average performance in different settings over three seeds. Co\({}^{2}\)L utilizes task boundary information, and therefore the GCIL setting is not applicable. The results are taken from the original works and, if not available, using the original codes, we conducted a hyperparameter search for the new settings (see Section E for details).
\begin{table}
\begin{tabular}{l l|l c c c|c c} \multirow{2}{*}{\(|\mathcal{B}|\)} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Seq-CIFAR10} & \multicolumn{2}{c|}{Seq-CIFAR100} & \multicolumn{2}{c}{GCIL-CIFAR100} \\ \cline{3-8} & & Class-IL & Task-IL & Class-IL & Task-IL & Uniform & Longtail \\ \hline \multirow{3}{*}{-} & JOINT & 92.20\(\pm\)0.15 & 98.31\(\pm\)0.12 & 70.62\(\pm\)0.64 & 86.19\(\pm\)0.43 & 60.45\(\pm\)1.65 & 60.10\(\pm\)0.42 \\ & SGD & 19.62\(\pm\)0.05 & 61.02\(\pm\)3.33 & 17.58\(\pm\)0.04 & 40.46\(\pm\)0.99 & 10.36\(\pm\)0.13 & 9.62\(\pm\)0.21 \\ \hline \multirow{3}{*}{200} & ER & 44.79\(\pm\)1.86 & 91.19\(\pm\)0.94 & 21.40\(\pm\)0.22 & 61.36\(\pm\)0.39 & 16.52\(\pm\)0.10 & 16.20\(\pm\)0.30 \\ & DER++ & 64.88\(\pm\)1.17 & 91.92\(\pm\)0.60 & 29.60\(\pm\)1.14 & 62.49\(\pm\)0.78 & 27.73\(\pm\)0.93 & 26.48\(\pm\)2.04 \\ & Co\({}^{2}\)L & 65.57\(\pm\)1.37 & 93.43\(\pm\)0.78 & 31.90\(\pm\)0.38 & 55.02\(\pm\)0.36 & - & - \\ & ER-ACE & 62.08\(\pm\)1.44 & 92.20\(\pm\)0.57 & 32.49\(\pm\)0.95 & 59.77\(\pm\)0.31 & 27.64\(\pm\)0.76 & 25.10\(\pm\)2.64 \\ & CLS-ER & 66.19\(\pm\)0.75 & 93.90\(\pm\)0.60 & 43.80\(\pm\)1.89 & 73.49\(\pm\)1.04 & 35.88\(\pm\)0.41 & 35.67\(\pm\)0.72 \\ & DUCA & **70.04\(\pm\)1.07** & **94.49\(\pm\)0.38** & **45.38\(\pm\)1.28** & **76.62\(\pm\)0.16** & **38.61\(\pm\)0.83** & **37.11\(\pm\)0.16** \\ \hline \multirow{3}{*}{500} & ER & 57.74\(\pm\)0.27 & 93.61\(\pm\)0.27 & 28.02\(\pm\)0.31 & 68.23\(\pm\)0.16 & 23.62\(\pm\)0.66 & 22.36\(\pm\)1.27 \\ & DER++ & 72.70\(\pm\)1.36 & 93.88\(\pm\)0.50 & 41.40\(\pm\)0.96 & 70.61\(\pm\)0.11 & 35.80\(\pm\)0.62 & 34.23\(\pm\)1.19 \\ \cline{1-1} & Co\({}^{2}\)L & 74.26\(\pm\)0.77 & 95.90\(\pm\)0.26 & 39.21\(\pm\)0.39 & 62.98\(\pm\)0.58 & - & - \\ \cline{1-1} & ER-ACE & 68.45\(\pm\)1.78 & 93.47\(\pm\)1.00 & 40.67\(\pm\)0.06 & 66.45\(\pm\)0.71 & 30.14\(\pm\)1.11 & 31.88\(\pm\)0.73 \\ \cline{1-1} & CLS-ER & 75.22\(\pm\)0.71 & 94.94\(\pm\)0.53 & 51.40\(\pm\)1.00 & 78.12\(\pm\)0.24 & 38.94\(\pm\)0.38 & 38.79\(\pm\)0.67 \\ \cline{1-1} & DUCA & **76.20\(\pm\)0.70** & **95.95\(\pm\)0.14** & **54.27\(\pm\)1.09** & **79.80\(\pm\)0.32** & **43.34\(\pm\)0.32** & **41.44\(\pm\)0.22** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of different methods on standard CL benchmarks (Class-IL, Task-IL and GCIL settings). DUCA shows a consistent improvement over all methods for both buffer sizes.
original works and, if not available, we conducted a hyperparameter search using the original codes for the new settings (see Section E for details).
DUCA achieves the best performance across all datasets in all settings. In the challenging Class-IL setting, we observe a gain of \(\sim\)50% over DER++, thus showing the efficacy of adding multiple modules for CL. Furthermore, we report improvements of \(\sim\)6% on both the Seq-CIFAR10 and Seq-CIFAR100 datasets, over CLS-ER, which utilizes two semantic memories in its design. DUCA has a single semantic memory, and the additional boost is obtained by prior knowledge from the inductive bias learner. Improvement is prominent even when the memory budget is low (200 buffer size). GCIL represents a more realistic setting, as the task boundaries are blurry, and classes can reappear and overlap in any task. GCIL-Longtail version also introduces an imbalance in the sample distribution. DUCA shows a significant improvement on both versions of GCIL-CIFAR100. Additional results are provided in Table 4.
Shape information from the inductive bias learner offers global high-level context, which helps in producing generic representations that are not biased towards learning only the current task at hand. Furthermore, sharing the knowledge assimilated from the reappearance of overlapping classes during training further facilitates learning in this general setting. The overall results indicate that the dual knowledge sharing between the explicit working module and the implicit inductive bias and semantic memory modules enables both better adaptation to new tasks and information retention.
## 5 Domain-incremental learning
Intelligent agents deployed in real-world applications need to maintain consistent performance through changes in the data and environment. Domain-IL aims to assess the robustness of CL methods to distribution shift. In Domain-IL, the classes in each task remain the same, but the input distribution changes, which makes for a more plausible use case for evaluation. However, the datasets used in the literature do not fully reflect this setting. For instance, the most common datasets in the literature are different variations (Rotated and Permuted) of the MNIST dataset (LeCun et al., 1998). MNIST is a simple dataset, usually evaluated on MLP networks, and its variations do not reflect the real-world distribution shift challenges that a CL method faces (the results for R-MNIST are presented in Table C). Farquhar and Gal (2018) propose fundamental desiderata for CL evaluations and datasets based on real-world use cases. One of the criteria is to possess cross-task resemblances, which Permuted-MNIST clearly violates. Thus, a different dataset is needed to test the overall capability of a CL method to handle distributional shift.
### _DN4IL_ Dataset
To this end, we propose _DN4IL_ (DomainNet for Domain-IL), a well-crafted subset of the standard DomainNet dataset (Peng et al., 2019) used in domain adaptation. DomainNet consists of common objects in six different domains: real, clipart, infograph, painting, quickdraw, and sketch. The original DomainNet consists of 59k samples with 345 classes in each domain. The classes have redundancy, and moreover, evaluating the whole dataset can be computationally expensive in a CL setting. The _DN4IL_ version considers different criteria such as relevance of classes, uniform sample distribution, computational complexity, and ease of benchmarking for CL.
All classes were grouped into semantically similar supercategories. Of these, a subset of classes was selected that had relevance to domain shift, while also having maximum overlap with other standard datasets such as CIFAR, to facilitate out-of-distribution analyses. 20 supercategories were chosen with 5 classes each (resulting in a total of 100 classes). In addition, to provide a balanced dataset, we performed class-wise sampling (see the sketch after this section). First, we sample images per class in each supercategory to maintain class balance. Second, we choose samples per domain, so that the result is a dataset with a near-uniform distribution across all classes and domains. The final dataset _DN4IL_ is succinct, more balanced, and more computationally efficient for benchmarking, thus facilitating research in CL. Furthermore, the new dataset is more plausible for real-world settings and adheres to all evaluation desiderata of Farquhar and Gal (2018). The challenging distribution shift between domains provides an apt dataset to test the capability of CL methods
in the Domain-IL setting. More details, statistics, and visual examples of this crafted dataset are provided in Section H.
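A minimal sketch of the two-stage balanced sampling described above is given below; the record layout and function name are hypothetical, and the exact selection criteria used to build _DN4IL_ are described in Section H.

```python
import random
from collections import defaultdict

def sample_dn4il(records, per_class_per_domain, seed=0):
    """Two-stage balanced sampling: images are bucketed by (class, domain)
    and an equal number is drawn from every bucket, yielding a subset with
    a near-uniform distribution across all classes and domains.

    `records` is assumed to be a list of (path, class_label, domain) tuples.
    """
    random.seed(seed)
    buckets = defaultdict(list)
    for path, cls, dom in records:
        buckets[(cls, dom)].append(path)
    subset = []
    for key in sorted(buckets):
        paths = buckets[key]
        subset.extend(random.sample(paths, min(per_class_per_domain, len(paths))))
    return subset
```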
### DN4IL Performance
Figure 2 (left) reports the results on _DN4IL_ for two different buffer sizes (values are provided in Table 10). DUCA shows a considerable performance gain in the average accuracy across all domains, and this can be primarily attributed to the supervision from the shape data. Standard networks tend to exhibit texture bias and learn background or spurious cues (Geirhos et al., 2019), which results in performance degradation when the distribution changes. Learning the global shape information of objects, on the other hand, helps in learning generic features that can translate well to other distributions. Semantic memory further helps to consolidate information across domains. Maintaining consistent performance under such difficult distribution shifts proves beneficial in real-world applications, and the proficiency of DUCA in this setting can thus open up new avenues for research in cognition-inspired multi-module architectures.
## 6 Model Analyses
### Plasticity-Stability Trade-off
Plasticity refers to the capability of a model to learn new tasks, while stability shows how well it can retain old information. The plasticity-stability dilemma is a long-standing problem in CL, which requires an optimal balance between the two. Following Sarfraz et al. (2022), we measure each of these to assess the competence of the CL methods. Plasticity is computed as the average performance of each task when first learned (e.g., the accuracy of the network trained on task \(T_{2}\), evaluated on the test set of \(T_{2}\)). Stability is computed as the average performance on all tasks \(1:T\)-1 after learning the final task \(T\). Figure 2 (right) reports these numbers for the _DN4IL_ dataset; a sketch of the computation follows. As seen, the ER and DER++ methods exhibit forgetting and lower stability, and focus only on the newer tasks. CLS-ER shows greater stability, but at the cost of reduced plasticity. DUCA, however, shows the highest stability while maintaining comparable plasticity. The shape knowledge helps in learning generic solutions that can translate to new tasks, while the semantic consolidation update at stochastic rates acts as a regularization that maintains stable parameter updates. Thus, DUCA strikes a better balance between plasticity and stability.
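The two metrics can be computed directly from a matrix of task accuracies; a minimal sketch (assuming `acc[i, j]` stores the accuracy on task \(j\) after training on task \(i\)) is shown below.

```python
import numpy as np

def plasticity_stability(acc):
    """`acc[i, j]` holds the accuracy on the test set of task j after
    training on task i (a T x T matrix, tasks 0-indexed).

    Plasticity: average accuracy of each task when it is first learned.
    Stability: average accuracy on tasks 0..T-2 after learning the final task.
    """
    T = acc.shape[0]
    plasticity = np.mean(np.diag(acc))
    stability = np.mean(acc[T - 1, :T - 1])
    return plasticity, stability
```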
### Recency-Bias Analysis
Recency bias is a behavior in which the model predictions tend to be biased toward the current or the most recent task (Wu et al., 2019). This is undesirable in a CL model, as it results in a biased solution that forgets the old tasks. To this end, after training, we evaluate the models on the test set (of
Figure 2: Accuracy (left) on _DN4IL_ dataset and (right) plasticity-stability analysis. DUCA substantially outperforms other methods and with better plasticity-stability trade-off.
all tasks) and calculate the probability of predicting each task. The output distribution for each test sample is calculated for all classes and the probabilities are averaged per task.
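A minimal sketch of this computation is shown below; it assumes the class indices are grouped by task, which is the case for the sequential benchmarks used here.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def task_probabilities(model, loader, num_tasks, classes_per_task):
    """Average the softmax mass assigned to each task's classes over all
    test samples; a uniform output (1/num_tasks per task) means no bias."""
    totals = torch.zeros(num_tasks)
    count = 0
    for x, _ in loader:
        probs = F.softmax(model(x), dim=1)                   # (batch, classes)
        per_task = probs.view(len(x), num_tasks, classes_per_task).sum(dim=2)
        totals += per_task.sum(dim=0)
        count += len(x)
    return totals / count
```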
Figure 3 (left) shows the probabilities for each task on the Seq-CIFAR10 dataset. As shown, the ER and DER++ methods tend to incline most of their predictions toward the classes seen in the last task, thus creating a misguided bias. DUCA shows a lower bias compared to both of these baselines. CLS-ER exhibits reduced bias due to the presence of multiple memories, but the distribution is still relatively skewed (with respect to a probability of 0.2). DUCA shows a more uniform distribution across all tasks. The dual information from the shape data and the consolidated knowledge across tasks helps the network break away from the Occam's razor-like tendency of neural networks to default to the easiest solution.
### Robustness
Lifelong agents, when deployed in real-world settings, must be resistant to various factors, such as lighting conditions, weather changes, and other effects of digital imaging. Inconsistency in predictions under different conditions might result in undesirable outcomes, especially in safety-critical applications such as autonomous driving. To measure the robustness of the CL methods against such natural corruptions, we created a dataset by applying fifteen different corruptions at varying levels of severity (1 = least severe to 5 = most severe).
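A sketch of this evaluation protocol follows; the `evaluate` callback and the corruption functions are placeholders for whichever corruption implementations are used.

```python
import numpy as np

def robustness_curve(evaluate, corruptions, severities=(1, 2, 3, 4, 5)):
    """`evaluate(corruption, severity)` is assumed to return the accuracy on
    the test set transformed by the given corruption function at the given
    severity; the result is averaged over all corruptions per severity."""
    return [np.mean([evaluate(c, s) for c in corruptions]) for s in severities]
```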
The performances on the fifteen corruptions are averaged at each severity level and are shown in Figure 3 (right). DUCA outperforms all other techniques at all severity levels. ER, DER++, and CLS-ER show a fast decline in accuracy as severity increases, while DUCA maintains stable performance throughout. Implicit shape information provides a different perspective of the same data to the model, which helps to generate high-level robust representations. DUCA, along with improved continual learning performance, also exhibits improved robustness to corruption, thus proving to be a better candidate for deployment in real-world applications.
### Task-wise Performance
The average accuracy across all tasks does not provide a complete measure of the ability of a network to retain old information while learning new tasks. To better represent the plasticity-stability measure, we report the task-wise performance at the end of each task. After training each task, we measure the accuracy on the test set of each of the previous tasks. Figure 4 reports this for all tasks of _DN4IL_. The last row represents the performance of each task after the training is completed. ER and DER++ show performance degradation on earlier tasks, as the model continues to train on newer tasks. Both perform better than DUCA on the last task, 71.1 and 68.9 respectively, while DUCA has a performance of 61.1.
However, in continual learning settings, the data arrives continuously and the focus is on both retaining old-task performance and performing well on the current task. As seen, the accuracy on the first task (real)
Figure 3: DUCA shows reduced task recency bias (left), as well as higher robustness against natural corruption (right) on Seq-CIFAR10 (\(|\mathcal{B}|\)=200) dataset.
reduces to 27.6 for ER and 43.5 for DER++ after training on all six tasks (domains), while DUCA maintains an accuracy of 54.9. Similarly, after the second task, the performance on the first task decreases (\(44.5:\text{ER}\), \(54.2:\text{DER++}\), \(57.2:\text{CLS-ER}\) and \(62.9:\text{DUCA}\)), but with DUCA the forgetting is lower. DUCA reports the highest information retention on older tasks, while also maintaining plasticity. For example, CLS-ER shows better retention of old information, but at the cost of plasticity: its last task shows lower performance compared to DUCA (52.1 vs. 61.0). The performance on the current task in DUCA is relatively lower and can be attributed to the stochastic update rate. Therefore, it is essential to recognize that while the shape inductive bias is beneficial to a classification task, the observed high performances are a consequence of how the inductive bias is thoughtfully integrated into the proposed architecture within the framework of continual learning, where both stability and plasticity hold equal importance.
To shed more light on the performance of each of the modules in DUCA, we also provide the performance of the working model and the inductive bias learner in Appendix Figure 5. The working model shows better plasticity, while DUCA (semantic memory) displays better stability. Overall, all modules in the proposed approach contribute unique attributes that enhance the learning process and improve performance.
## 7 Effect of Inductive Bias and Knowledge Sharing
To assess the impact of the specific inductive bias and various ways of integrating it into the training framework, we conducted supplementary experiments. In these experiments, we introduced two additional baselines, ER and DER++, with the utilization of shape as an augmentation technique. Specifically, we
\begin{table}
\begin{tabular}{l l|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Specifics} & \multicolumn{2}{c|}{Seq-CIFAR100} & \multicolumn{2}{c}{DN4IL} \\ \cline{3-6} & & 200 & 500 & 200 & 500 \\ \hline ER & Original & **21.40** & **28.02** & **26.59** & **31.01** \\ & RGB \& Shape* & 19.47 & 23.96 & 27.45 & 33.44 \\ \hline DER++ & Original & **29.60** & **41.40** & **34.75** & 41.87 \\ & RGB \& Shape* & 24.40 & 34.30 & 36.55 & 40.99 \\ \hline \hline DUCA & original & **45.38** & **54.27** & **44.23** & **49.32** \\ & \(-\text{SM}\) (RGB \& Shape*) & 24.34 & 32.64 & 36.80 & 43.88 \\ & \(-\text{SM}\) \(-\text{IBL}\) (Shape only) & 18.33 & 21.98 & 27.89 & 31.57 \\ & \(-\text{SM}\) \(-\text{IBL}\) (RGB+Shape) & 20.57 & 25.20 & 31.52 & 35.68 \\ & \(-\text{SM}\) \(-\text{IBL}\) (RGB \& Shape*) & 19.47 & 23.96 & 27.45 & 33.44 \\ & \(-\text{IBL}\) (RGB \& Shape*) & 42.01 & 49.55 & 40.75 & 43.99 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Analyzing the impact of inductive bias and knowledge sharing on baselines and DUCA. ‘*’ indicates the use of shape as an augmentation. ‘\(-\text{X}\)’ indicates the removal of component X. ‘+’ refers to concatenation in the channel dimension.
Figure 4: Task-wise performance on _DN4IL_ (\(|\mathcal{B}|\)=500), where each task represents a domain. DUCA shows more retention of old information without compromising much on current accuracy.
included the Sobel filter in the augmentation list, alongside RandomCrop and RandomHorizontalFlip, and proceeded with continual training. The results presented in Table 2 demonstrate that this approach yields inferior performance compared to the baseline models trained solely on RGB images. On the DN4IL dataset, the performance is slightly better than the baseline, as shape is a more discriminative and important feature in this dataset. Thus, incorporating shape as an augmentation strategy appears to yield suboptimal feature representations and is also dependent on the dataset.
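For reference, a minimal sketch of a Sobel-based shape transform is given below; the exact filter configuration used in our experiments may differ.

```python
import numpy as np
from scipy import ndimage

def sobel_shape(img):
    """Convert an RGB image (H, W, 3), float in [0, 1], into a 3-channel
    shape image given by the normalized Sobel gradient magnitude of its
    grayscale version."""
    gray = img.mean(axis=2)
    magnitude = np.hypot(ndimage.sobel(gray, axis=0), ndimage.sobel(gray, axis=1))
    magnitude /= magnitude.max() + 1e-8
    return np.repeat(magnitude[..., None], 3, axis=2)
```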
We also conduct various ablations on the DUCA framework. Specifically, we isolate and exclude different components of DUCA, namely the IBL and SM. We subject the base network (DUCA -SM -IBL) to three distinct training conditions: (1) exclusive training on shape images (Shape only), (2) concurrent training on both RGB and shape images (RGB+Shape), and (3) training on RGB images with the incorporation of a shape filter as an augmentation (RGB & Shape*). It is worth noting that shape information contributes valuable global semantic insights that complement the visually rich RGB data, thus emphasizing the necessity of both modalities for achieving enhanced generalization. However, training a single network on both distributions simultaneously may not always yield optimal utilization of this information.
Finally, we train the base model within the framework by excluding one component at a time, namely SM and IBL. We train the working model with the semantic memory but without the IBL, while also introducing shape as an augmentation. The results (-IBL) show improvement due to the presence of the semantic memory module. Nevertheless, the best performance is achieved when incorporating shape information through an alternative supervisory network, namely the IBL.
These experiments underscore the critical importance of the specific method used to induce this knowledge, highlighting its pivotal role in enhancing the overall training process. In our pursuit to bridge the distribution gap between RGB and shape images, we have leveraged knowledge distillation as a means of self-teaching, where distinct networks collaboratively engage in mutual learning and knowledge sharing. This approach not only sheds light on the significance of effective knowledge sharing but also offers a promising avenue for improving model performance and generalization in complex (visual) tasks.
## 8 Ablation Study
DUCA architecture comprises multiple components, each contributing to the efficacy of the method. The explicit module has the working model, and the implicit module has semantic memory (SM) and inductive bias learner (IBL). Disentangling different components in the DUCA can provide more insight into the contribution of each of them to the overall performance.
Table 3 reports the ablation study with respect to each of these components on both the Seq-CIFAR10 and _DN4IL_ datasets. Considering the more complex _DN4IL_ dataset, the ER accuracy without any of our components is 26.59. Adding cognitive bias (IBL) improves performance by 40%. Shape information plays a prominent role, as networks need to learn the global semantics of the objects, rather than background or spurious textural information, to translate performance across domains. Adding the dual-memory component (SM) shows an increase of approximately 49% over the vanilla baseline. Furthermore, the KS between explicit and implicit modules on current experiences also plays a key role in performance gain. Combining both of these cognitive components and, in general, following the multi-module design shows a gain of 66%. A similar trend is seen on Seq-CIFAR10.
\begin{table}
\begin{tabular}{c c c|c|c} \hline \hline SM & IBL & KS (WM\(\leftrightarrow\)IBL) & Seq-CIFAR10 & DN4IL \\ \hline ✓ & ✓ & ✓ & **70.04\(\pm\)**1.07 & **44.23\(\pm\)**0.05 \\ ✓ & ✓ & ✗ & 69.28\(\pm\)1.34 & 40.35\(\pm\)0.34 \\ ✓ & ✗ & - & 69.21\(\pm\)1.46 & 39.76\(\pm\)0.56 \\ ✗ & ✓ & ✓ & 64.61\(\pm\)1.22 & 37.33\(\pm\)0.01 \\ ✗ & ✗ & ✗ & 44.79\(\pm\)1.86 & 26.59\(\pm\)0.31 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation to analyze the effect of each component of DUCA on Seq-CIFAR10 and _DN4IL_.
## 9 Related Works
Rehearsal-based approaches, which revisit examples from the past to alleviate catastrophic forgetting, have been effective in challenging CL scenarios (Farquhar and Gal, 2018). Experience Replay (ER) (Riemer et al., 2018) methods use episodic memory to retain previously seen samples for replay purposes. DER++ (Buzzega et al., 2020) adds a consistency loss on logits, in addition to the ER strategy. In situations where memory limitations impose constraints on buffer size, such as in edge devices, it has been observed that rehearsal-based methods are susceptible to overfitting on the data stored in the buffer (Bhat et al., 2022). To address this, Co\({}^{2}\)L (Cha et al., 2021) uses contrastive learning from the self-supervised learning domain to generate transferable representations. ER-ACE (Caccia et al., 2021) targets the representation drift problem in online CL and develops a technique to use separate losses for current and buffer samples. All of these methods limit the architecture to a single stand-alone network, contrary to the biological workings of the brain.
CLS-ER (Arani et al., 2022) proposed a multi-network approach that emulates fast and slow learning systems by using two semantic memories, each aggregating weights at different times. Though CLS-ER utilizes the multi-memory design, it does not leverage the sharing of different kinds of knowledge, and hence its scope is limited. DUCA departs from the standard architectures and proposes a multi-module design inspired by cognitive computational architectures. It incorporates multiple submodules, each sharing different knowledge, to develop an effective continual learner with better generalization and robustness.
## 10 Conclusion
We introduced a novel framework for continual learning that incorporates concepts inspired by cognitive architectures, high-level cognitive biases, and the multi-memory system. The _Dual Cognitive Architecture (DUCA)_ includes multiple subsystems with dual knowledge representation. DUCA establishes a dichotomy of explicit and implicit modules in which information is selected, maintained, and shared to enable better generalization and robustness. DUCA outperformed all compared methods on Seq-CIFAR10 and Seq-CIFAR100 in the Class-IL setting. In addition, it also showed a significant gain in the more realistic and challenging GCIL setting. Through different analyses, we showed that it achieves a better plasticity-stability balance.
Shape prior and knowledge consolidation help to learn more generic solutions, as indicated by the reduced task recency bias and greater robustness against natural corruptions. In addition, we introduced a challenging Domain-IL dataset, _DN4IL_, with six disparate domains. The significant improvement of DUCA on this complex distribution shift demonstrates the benefits of shape context, which helps the network to converge on a generic solution rather than a simple texture-biased one. The objective of this work was to develop a framework that incorporates elements of cognitive architecture to mitigate the forgetting problem and enhance generalization and robustness.
## 11 Future Work
Here, we delve into the potential for extending our current research, acknowledging its applicability to a diverse range of modalities. The adaptability of DUCA serves as a robust foundation for further exploration. As we broaden the scope of DUCA beyond the image domain, an essential consideration is the identification of pertinent inductive biases tailored to the specific modality in question. For example, when venturing into the audio domain, a promising avenue involves the utilization of spectrogram representations. These representations effectively convert audio waveforms into visual data, encompassing both frequency and time-domain information. The integration of phonemes, the fundamental units of spoken language, holds the potential to enhance DUCA's effectiveness in tasks such as speech understanding, speaker identification, and language processing.
The collaboration between the core DUCA architecture and modality-specific inductive biases creates a synergy that drives knowledge sharing and learning capabilities. This collaborative architecture yields more generic and robust representations, substantially enhancing overall performance. Furthermore, the gradual accumulation of semantic memory emerges as a valuable asset, particularly in scenarios involving the contin
uous influx of data from various modalities. It mitigates the risk of forgetting and empowers the framework to maintain its adaptability over time.
These modality-specific adaptations, guided by the intrinsic principles and mechanisms of DUCA, open the door to exciting future directions. They offer the potential to advance lifelong learning and adaptability in a multitude of domains. We anticipate that our preliminary work will serve as a cornerstone for future research endeavors, including investigations into various cognitive biases and more efficient design methodologies. Ultimately, we hope to pave the way for the advancement of lifelong learning methods for ANNs.
#### Acknowledgments
The research was conducted when all the authors were affiliated with Advanced Research Lab, NavInfo Europe, The Netherlands.
|
2302.11150 | Microusity: A testing tool for Backends for Frontends (BFF) Microservice
Systems | The microservice software architecture is more scalable and efficient than
its monolithic predecessor. Despite its increasing adoption, microservices
might expose security concerns and issues that are distinct from those
associated with monolithic designs. We propose Microusity, a tool that performs
RESTful API testing on a specific type of microservice pattern called back end
for front end (BFF). We design a novel approach to trace BFF requests using the
port mapping between requests to BFF and the sub-requests sent to back-end
microservices. Furthermore, our tool can pinpoint which back-end service is
causing the internal server error, which may lead to unhandled errors or
vulnerabilities. Microusity provides an error report and a graph visualization
that reveal the source of the error and support developers in comprehension
and debugging of the errors. The evaluation with eight software practitioners
shows that Microusity and its security test reports are useful for
investigating and understanding problems in BFF systems. The prototype tool and
the video demo of the tool can be found at
https://github.com/MUICT-SERU/MICROUSITY. | Pattarakrit Rattanukul, Chansida Makaranond, Pumipat Watanakulcharus, Chaiyong Ragkhitwetsagul, Tanapol Nearunchorn, Vasaka Visoottiviseth, Morakot Choetkiertikul, Thanwadee Sunetnanta | 2023-02-22T05:13:03Z | http://arxiv.org/abs/2302.11150v1 | # Microusity: A testing tool for Backends for Frontends (BFF) Microservice Systems
###### Abstract
The microservice software architecture is more scalable and efficient than its monolithic predecessor. Despite its increasing adoption, microservices might expose security concerns and issues that are distinct from those associated with monolithic designs. We propose Microusity, a tool that performs RESTful API testing on a specific type of microservice pattern called back end for front end (BFF). We design a novel approach to trace BFF requests using the port mapping between requests to the BFF and the sub-requests sent to back-end microservices. Furthermore, our tool can pinpoint which back-end service causes an internal server error, which may lead to unhandled errors or vulnerabilities. Microusity provides an error report and a graph visualization that reveal the source of the error and support developers in comprehension and debugging of the errors. An evaluation with eight software practitioners shows that Microusity and its security test reports are useful for investigating and understanding problems in BFF systems. The prototype tool and the video demo of the tool can be found at [https://github.com/MUICT-SERU/MICROUSITY](https://github.com/MUICT-SERU/MICROUSITY).
microservices, API security, testing, fuzzing
## I Introduction
Microservice architecture has been increasingly adopted [1] and is frequently used when building modern software. One of the benefits of the microservice architecture is its scalability and modularity [2]. Developers can adopt a microservice pattern that is suitable for their business, such as the aggregator pattern, chained pattern, proxy pattern [3], micro front-end pattern, or the backends for frontends (BFF) pattern [4]. The BFF pattern provides an API endpoint as a middleman for the client, i.e., the front end, to fetch data from, rather than fetching directly from the back-end microservices. Multiple BFFs can be created to support several target devices of the same system, since such devices may use different types of data, e.g., a BFF for mobile applications and a BFF for the desktop website [4]. Thus, the BFF pattern is useful for services with different client platforms. However, when an error occurs in the back-end microservices, it is hard to trace which service caused it, because the data is passed via the BFF. Furthermore, without proper implementation, back-end problems (e.g., stack traces, error exceptions) may be transmitted back to the client, exposing sensitive data to attackers.
Existing tools and techniques [5, 6, 7, 8, 9] that perform RESTful API testing can only test at the API endpoints, but cannot trace the execution after the endpoint. In the case of the BFF pattern, when an endpoint (i.e., BFF) returns an error, the developers need to manually perform the checking to identify which back-end microservice(s) is causing the error.
In this paper, we introduce **Microusity**, a tool that performs RESTful API testing of BFF microservices. The tool traces requests processed by BFF, creates main-request to sub-request mapping, and provides test reports, in both textual and visualization formats, to help developers to trace and fix issues. This paper makes the following key contributions.
1. BFF API fuzzing and request tracking: A novel approach to map the request coming into the BFF to the requests sent to the back-end microservices.
2. A graph-based visualization to help the developers in comprehension and in debugging the errors caused by the back-end microservices.
## II Background
In this section, we discuss the background knowledge and the existing tools that are related to Microusity.
**RESTful APIs.** A Representational State Transfer (REST) application programming interface (API) is a web service that offers functionalities via HTTP. It has been adopted widely by industry [9]. Microservices usually adopt RESTful APIs as a communication method between their services.
**Microservices and their security.** Microservice is a software architecture that breaks down an application into several decoupled components that may be deployed independently of one another [4]. The services communicate with one another via multiple methods such as RESTful APIs, event streaming, or message brokers [2]. Furthermore, each microservice is intended to be an autonomous development and run-time decision-making unit. Therefore, microservices have seen widespread use in practice [10, 11, 12, 13]. Several software and service organizations, including Netflix [14], Soundcloud [15, 16], and Uber [17], have embraced the microservice architecture as a substitute for their older monolithic approaches. It is expected to grow further, with a 5-year forecasted growth rate of 15.7% during 2022-2027 [18].
Nevertheless, threats to the security of microservice architectures are becoming more prevalent. According to the study from Hannousse and Yahiouche [19], microservices suffer from security breaches by user-based, data, infrastructure, and software attacks. Esposito et al. [20] report that microservices can introduce more attack surfaces due to the larger number of independent services compared to their monolithic counterpart. As a result, the lack of a testing support framework in the
RESTful APIs of a microservice system might cause serious issues, since the RESTful API plays a major role in integrating microservices [21].
**Backends for Frontends (BFF).** BFF is a microservice pattern designed to provide several gateways for the front ends, rather than having only one API gateway for every front end. It allows fine-tuning to a specific front-end user interface (UI), such as mobile and web applications, since they may have different requirements [4]. A BFF works as an interface between the front end and the back-end microservices. It receives a request from a front end and dispatches the request, or creates requests to several back-end microservices to retrieve data, and aggregates the result back to the front end [22]. In this paper, we call the request to the BFF the _main request_ and the requests that propagate from the BFF to back-end services _sub-requests_. This usually creates a one-to-many mapping between the main request, created by the client, and the sub-requests created by the BFF. If the responses sent from the back-end microservices are not carefully checked, the BFF can leak sensitive information such as programming exception messages. This information can be beneficial to attackers and lead to API attacks targeting the module that caused the exception. Moreover, identifying the back-end service that causes an error can be troublesome in a BFF. When an API request is sent to the BFF, this request is passed to several back-end services. If one of the back-end services fails and returns the 500 HTTP status code, we cannot know which of the services fails, since we only get the unsuccessful result from the BFF response.
**RESTler** is a stateful RESTful API fuzzing tool [23]. RESTler automatically tests RESTful APIs for finding bugs related to security and reliability issues based on the responses from APIs. It generates test requests by compiling Swagger or OpenAPI specifications and inferring the producer and consumer dependencies. In addition, RESTler can identify the state of the test sequence and use the response from previous test results to generate new test requests to find more bugs.
**Zeek** (formerly known as Bro) is a network monitoring tool specialized in event logging and powerful event handling. It is extensible with its own programming language [24]. Zeek can be used to perform network traffic analysis and create alerts to the user by using scripts.
## III The Microusity Tool
Microusity is an automated API testing tool that targets BFF microservice systems. In this section, we present the Microusity system design, the approach of API fuzzing and request tracking, and the reporting mechanism.
### _System Design_
The system architecture of Microusity is depicted in Figure 1. Microusity's back end is composed of three components. The first component is the _Controller_. The Controller reads the test configuration and handles the data flow between the front end and back end of the tool, allowing the user to query the test history. The second component is the _Test Configurator_. The Test Configurator incorporates the custom configuration that the user created. This configuration is used to adjust the test coverage of the target BFF system and also to modify the test inputs. The third component is the _Data Aggregator_. The Data Aggregator processes the data collected from the test execution and generates the test result mapping between main requests to the BFF and their sub-requests. This mapping result is stored in data storage and sent to the tool's front end for processing as a test report and graph report. For Microusity's front end, HTML, CSS, Bootstrap, and EJS are used. The graph report from the API testing is visualized by Cytoscape.js.
Two main components of Microusity that handle the testing part are RESTler and Zeek. We use RESTler as the RESTful API fuzzing engine, and we use Zeek to intercept the network log between the BFF and its back-end services. Microusity monitors HTTP requests and responses generated by RESTler's fuzzing in order to trace the execution of each request and to locate responses that contain errors, i.e., responses with an HTTP status code in the client error (400-499) or server error (500-599) range.
### _BFF API Fuzzing and Request Tracking_
We propose a novel approach for BFF API fuzzing and request tracking in this paper. Figure 1 is used to explain the approach. First, the testing configuration is supplied by the user through the web interface and passed to the Controller. The Controller passes the testing configuration to the Test Configurator, which identifies the target system to test. Next, RESTler will be executed with the provided configuration to fuzz the APIs of the tested system while Zeek monitors the communications between the BFF and the back-end microservices. After fuzzing, the Data Aggregator collects the test results from RESTler and the network monitoring log from Zeek. The mapping is performed and the final API test results are kept in the Test Result database. Lastly, the results are sent back to the front end to generate security and graph reports.
Microusity leverages the fuzzing capability of RESTler, which generates the requests from the targeted BFF's OpenAPI specification. Once a request is validated, RESTler attempts to modify the request into a malformed one by taking the BFF's state into account. After testing is completed, Microusity reads the log and creates the main-request and sub-request mapping based on the network ports in the requests, as described in Algorithm 1. For each test input generated by RESTler, Zeek collects all the requests created in the system. Then, based on the order of requests in the Zeek log (sorted chronologically), each request is checked to determine whether it is an original request coming to the BFF (i.e., having the BFF's host and port as its destination) or a sub-request generated by the BFF. The sub-requests are then mapped to the original request. The process repeats until all the requests in the Zeek log are processed.
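A simplified sketch of this port-based mapping is shown below; the field names are assumptions, and Algorithm 1 in the tool additionally handles responses and concurrent requests.

```python
def map_requests(zeek_log, bff_host, bff_port):
    """Map each main request to the BFF onto the sub-requests the BFF
    creates, using source/destination host-port pairs from the Zeek log
    (assumed sorted chronologically)."""
    mapping = {}
    current_main = None
    for req in zeek_log:
        if (req["dst_host"], req["dst_port"]) == (bff_host, bff_port):
            # A request whose destination is the BFF starts a new main request.
            current_main = (req["src_host"], req["src_port"])
            mapping[current_main] = []
        elif current_main is not None and req["src_host"] == bff_host:
            # A request originating from the BFF is a sub-request of the
            # most recent main request.
            mapping[current_main].append(req)
    return mapping
```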
### _Reporting Mechanism_
Microusity offers two types of reports: the error report and the graph report. We explain them below.
#### III-B1 Error Report
The error report is divided into three main sections, as shown in Figure 2. The first section is the overall test summary. This section reports the API coverage, the total number of requests sent to the BFF, the total number of responses, and the HTTP status codes of all responses, including sub-responses. The second section shows the total number of error responses from the BFF and the back-end microservices. This can help the user understand how many errors originate from the BFF or the back-end microservices. The last section lists the request sequences that contain issues found by Microusity, grouped into four categories. The first category contains request sequences whose responses contain an exception leakage in both the main response from the BFF and the sub-responses from the back-end microservices. The second category contains request sequences in which only the main response from the BFF contains the exception leakage. The third category contains sequences in which only the sub-responses from the back-end microservices contain exception leakage. The last category includes sequences that contain an HTTP 5xx status code (i.e., HTTP 500-599, indicating a server error). We choose to monitor HTTP 5xx responses because they indicate that an unhandled issue occurred in the back-end microservices, potentially containing a security vulnerability1.
Footnote 1: Internal Server Error (HTTP 500) is a response from a server when it finds an error that it does not know how to handle. More info: [https://developer.mozilla.org/en-US/docs/Web/HTTP/Status](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)
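A minimal sketch of this categorization logic follows; the attribute names on the sequence record are hypothetical.

```python
def categorize(sequence):
    """Assign a tested request sequence to the four report categories; a
    sequence may match both a leakage category and the HTTP 5xx category."""
    categories = []
    if sequence.main_leak and sequence.sub_leak:
        categories.append("exception leakage in BFF response and sub-responses")
    elif sequence.main_leak:
        categories.append("exception leakage in BFF response only")
    elif sequence.sub_leak:
        categories.append("exception leakage in sub-responses only")
    if any(500 <= code <= 599 for code in sequence.status_codes):
        categories.append("HTTP 5xx status code in the sequence")
    return categories
```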
#### III-B2 Graph Report
Microusity offers a graph report to visualize the relationship between the main request and sub-requests and their responses. Figure 3 shows an example of the graph report. It depicts an API request and the responses that pass through the BFF. The triangle node represents the client. The arrow on the left is the main request and main response between Microusity and the BFF, while the arrows on the right are the sub-requests that the BFF creates after receiving the client request. Each request is labeled by IP address and port number. Microusity provides this graph visualization for the sequences that contain the issues shown in the error report, giving developers additional information and easy-to-trace connections between the main request and the sub-requests. Requests that contain exception message leakage or an HTTP 5xx status are highlighted in red, and the user can expand a request arrow to show more information, such as the request and response bodies and headers.
## IV Evaluation
We performed a user evaluation to evaluate the ease of understanding and the usefulness of Microusity using a demonstration and an interview. We recruited eight full-time software
Fig. 1: System architecture and workflow of Microusity
Fig. 3: Example of the graph report
Fig. 2: Example of the error report
engineers from four different companies. The demographics of the interviewed participants are shown in Table I. The participants are software architects and software engineers with working experience from 1 to 7 years. They have all worked with microservice systems, ranging from half a year to 6 years. We explained the Microusity tool and its API testing concept to the participants. Then, for three participants, we demonstrated the tool execution on a simple BFF project2 with the security test report and graph report. For the other five participants, we demonstrated the tool execution on their company's BFF system. Then, we asked them seven questions to rate the usability and usefulness of the system using a 5-level Likert scale. The full list of interview questions is on our study website3.
Footnote 2: [https://github.com/pionin/sample-spring-microservices-new](https://github.com/pionin/sample-spring-microservices-new)
Footnote 3: [https://github.com/MUICT-SERU/MICROUSITY](https://github.com/MUICT-SERU/MICROUSITY)
After performing the interviews, we aggregated the scores given by the participants. The result is shown in Figure 4. We found that the participants rated the usability of the system in terms of _clearness and comprehension of the error report_ at an overall average (represented by a green triangle) of 4.25 out of 5. The average score for the _clearness and comprehension of the graph report_ is 4.1. Furthermore, Microusity received an average score of 4.1 for _overall system usability_. Lastly, for _real-world usefulness_, the average score is 4.5.
We also asked them what they liked and disliked about the tool. Five participants agreed that Microusity's error report, which indicates which service has an error, is useful. They believed the tool can assist them in determining which service causes API security issues, saving them time compared to manually identifying the service that causes the problem. Moreover, the participants gave positive comments on the categorization of HTTP error types, support for more than one service, reporting of API test coverage, being open-source, and the use of a fuzzing technique that can discover more bugs than traditional or manual testing. However, three participants reported that the tool has no getting-started instructions and a high learning curve, which caused them to struggle with learning how to start using it. Other issues include the many settings and prerequisite knowledge about RESTler that the tool requires, the potential difficulty of maintaining the tool in the long run, the lack of filtering options in the error report, and the absence of guidance on how to solve the detected security issues. We plan to improve the tool based on their comments in our future work.
The user evaluation reveals that the Microusity system is useful for software practitioners that work and maintain BFF microservice systems. The participants found the error and the graph reports useful and support the debugging of one-to-many requests created by BFF.
## V Related Work
Chondamrongkul et al. [5] proposed a security analysis approach for the microservice architecture model. The approach analyzes the microservice model defined using Ontology Web Language and Architecture Description Language and identifies security issues. Compared to their work, Microusity performs API testing on the actual microservice system instead of the model. RESTTESTGEN [7] and QuickREST [8] are automated testing tools for RESTful APIs. Similar to Microusity, the two tools rely on the specification of the APIs (e.g., Swagger or OpenAPI) to generate test inputs and locate errors based on the HTTP response status code. EvoMaster [9] is a search-based white-box automated testing tool that can be applied to RESTful API testing.
These existing tools can mostly test the RESTful APIs at the endpoints but do not trace the requests behind the endpoints. In contrast, Microusity deploys RESTler and Zeek and monitors the requests and responses. Therefore, Microusity can trace errors from the request and response sequence across the BFF as well as their back-end microservices. Microusity then provides easy-to-understand reports for the mapping of the service which causes such errors.
## VI Conclusion
We propose Microusity, an automated RESTful API testing tool for BFF microservice systems, which uses stateful fuzzing and main-request to sub-request mapping. The tool detects the requests that produce HTTP 500-599 responses with exception messages from the back-end microservices and creates an error report and a graph report to aid the developers in comprehending the issues. The evaluation with eight practitioners shows that the participants found the tool easy to use and useful for their development and maintenance of microservices.
|
2308.04260 | Nonequilibrium Response for Markov Jump Processes: Exact Results and
Tight Bounds | Generalizing response theory of open systems far from equilibrium is a
central quest of nonequilibrium statistical physics. Using stochastic
thermodynamics, we develop an algebraic method to study the response of
nonequilibrium steady state to arbitrary perturbations. This allows us to
derive explicit expressions for the response of edge currents as well as
traffic to perturbations in kinetic barriers and driving forces. We also show
that these responses satisfy very simple bounds. For the response to energy
perturbations, we straightforwardly recover results obtained using nontrivial
graph-theoretical methods. | Timur Aslyamov, Massimiliano Esposito | 2023-08-08T13:49:23Z | http://arxiv.org/abs/2308.04260v1 | # Nonequilibrium Response for Markov Jump Processes: Exact Results and Tight Bounds
###### Abstract
Generalizing response theory of open systems far from equilibrium is a central quest of nonequilibrium statistical physics. Using stochastic thermodynamics, we develop an algebraic method to study the response of nonequilibrium steady state to arbitrary perturbations. This allows us to derive explicit expressions for the response of edge currents as well as traffic to perturbations in kinetic barriers and driving forces. We also show that these responses satisfy very simple bounds. For the response to energy perturbations, we straightforwardly recover results obtained using nontrivial graph-theoretical methods.
_Introduction.--_Linear response theory is a central tenet in statistical physics [1; 2; 3]. The response of systems in steady state at, or close to, equilibrium is described by the seminal dissipation-fluctuation relation (DFR) [4; 5]. Generalizations to study the response of systems in nonequilibrium steady state (NESS) are a more recent endeavor, in particular since the advent of stochastic thermodynamics [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. Understanding the response of far-from-equilibrium systems is of great conceptual but also practical importance (e.g. to characterize homeostasis, design resilient nanotechnologies, detect critical transitions, and metabolic control). Progress in this direction relies on our ability to derive useful expressions for NESS responses, and possibly derive practically meaningful bounds for them.
In this Letter, we study the NESS response of Markov jump processes within stochastic thermodynamics. In this context, Ref. [17] constitutes the current frontier. The authors studied the response to perturbations of the energy landscape parameters. They derived an exact result and two bounds using graph-theoretic methods, which can be quite tedious and nonintuitive to use [18; 19; 20]. We develop a novel approach based on simple linear algebra, which allows us to go significantly further than currently known results. We first derive a simple and elegant expression for the response of a NESS to arbitrary perturbations. We use it to straightforwardly reproduce the main result of [17] for the NESS response to perturbations of the energy landscape. More importantly, we also use it to derive novel and simple expressions for the response of edge currents and traffic to perturbations of kinetic barriers and driving forces. We furthermore derive four remarkably simple bounds for these four quantities (see Table 1), which can be added to the list of simple bounds valid far from equilibrium, together with thermodynamic uncertainty relations [21; 22] and speed limits [23].
_Setup.--_We consider a Markov jump process over a discrete set of \(N\) states. Transitions between these states are described by the rate matrix \(\mathbb{W}/\tau\), where the element \(W_{mn}/\tau\) defines the probability per unit time \(\tau\) to jump from state \(n\) to state \(m\). Below we set \(\tau=1\) to render the matrix \(\mathbb{W}\) dimensionless. We assume that all transitions are reversible and that the matrix \(\mathbb{W}\) is irreducible [24]. This ensures the existence of a unique steady-state probability distribution \(\mathbf{\pi}=(\pi_{1},\ldots,\pi_{N})^{\top}\) satisfying
\[\mathbb{W}\cdot\mathbf{\pi}=\mathbf{0}\,, \tag{1}\]
where \(\mathbf{\pi}\) is normalized, \(|\mathbf{\pi}|\equiv\sum_{n}\pi_{n}=1\). When the rates depend on a model parameter \(\eta\), one can define the linear response (resp. sensitivity) of the nonequilibrium state as \(\partial_{\eta}q\) (resp. \(\partial_{\eta}\ln q\)) for an arbitrary quantity \(q\).
_General theory.--_The rate matrix \(\mathbb{W}\) in Eq. (1) has only one zero eigenvalue [24]. This allows us to rewrite Eq. (1) as
\[\mathbb{K}_{n}\cdot\mathbf{\pi}=\mathbf{e}_{n}\,, \tag{2a}\]
\[\mathbb{K}_{n}=\begin{pmatrix}W_{11}&W_{12}&\ldots&W_{1N}\\ \vdots&\vdots&\ddots&\vdots\\ W_{n-1,1}&W_{n-1,2}&\ldots&W_{n-1,N}\\ 1&1&\ldots&1\\ W_{n+1,1}&W_{n+1,2}&\ldots&W_{n+1,N}\\ \vdots&\vdots&\ddots&\vdots\\ W_{N1}&W_{N2}&\ldots&W_{NN}\end{pmatrix}, \tag{2b}\]
where \(\mathbf{e}_{n}\) denotes the vector with a \(1\) for the \(n\)-th element and \(0\)'s elsewhere, and where the matrix \(\mathbb{K}_{n}\) coincides with the rate-matrix \(\mathbb{W}\) except the \(n\)-th row. Since the matrix \(\mathbb{K}_{n}\) is invertible [\(\det\mathbb{K}_{n}\neq 0\)] the solution of Eq. (2a) has the following form:
\[\mathbf{\pi}=\mathbb{K}_{n}^{-1}\cdot\mathbf{e}_{n}\,. \tag{3}\]
To find the linear response \(\partial_{\eta}\mathbf{\pi}\) we calculate the derivative \(\partial_{\eta}\) of Eq. (2a):
\[\partial_{\eta}[\mathbb{K}_{n}(\eta)\cdot\mathbf{\pi}(\eta)]=\mathbf{0}\,,\] \[\mathbb{K}_{n}\cdot\partial_{\eta}\mathbf{\pi}=-\partial_{\eta} \mathbb{K}_{n}\cdot\mathbf{\pi}\,. \tag{4}\]
Solving Eq. (4), we arrive at the desired result:
\[\partial_{\eta}\mathbf{\pi}=-\mathbb{K}_{n}^{-1}\cdot\partial_{\eta} \mathbb{K}_{n}\cdot\mathbf{\pi}\,. \tag{5}\]
Equation (5) will be central in what follows. Indeed, it provides a linear algebra-based method to calculate different nonequilibrium responses which is much simpler and more direct than methods based on graph theory representations of \(\mathbf{\pi}\)[17]. At this stage, Eq. (5) holds for any dependence of \(\mathbb{W}(\eta)\)
on the control parameter.
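As a numerical illustration, Eq. (5) can be checked against a finite-difference derivative for a randomly parameterized rate matrix; the sketch below (network size, parameterization, and tolerances are arbitrary choices) also verifies Eqs. (1) and (3) along the way.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 0                          # number of states; index of the replaced row
A = rng.random((N, N))               # fixed base rates
D = rng.random((N, N))               # fixed direction of the perturbation

def W_of(eta):
    """Irreducible rate matrix depending smoothly on a parameter eta,
    with diagonal elements fixed so that every column sums to zero."""
    W = A * np.exp(eta * D)
    np.fill_diagonal(W, 0.0)
    return W - np.diag(W.sum(axis=0))

def K_of(eta):                       # Eq. (2b): replace the n-th row by ones
    K = W_of(eta)
    K[n] = 1.0
    return K

def pi_of(eta):                      # Eq. (3)
    return np.linalg.solve(K_of(eta), np.eye(N)[n])

eta, h = 0.3, 1e-6
pi = pi_of(eta)
assert np.allclose(W_of(eta) @ pi, 0) and np.isclose(pi.sum(), 1)   # Eq. (1)

dK = (K_of(eta + h) - K_of(eta - h)) / (2 * h)       # numerical d K_n / d eta
response = -np.linalg.solve(K_of(eta), dK @ pi)      # Eq. (5)
finite_diff = (pi_of(eta + h) - pi_of(eta - h)) / (2 * h)
assert np.allclose(response, finite_diff, atol=1e-4)
```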
_Rate-matrix model.--_To proceed, we follow Ref. [17] and parameterize the nondiagonal elements of the rate matrix as
\[W_{ij}=\mathrm{e}^{-(B_{ij}-E_{j}-F_{ij}/2)}\,, \tag{6}\]
where \(E_{j}\) are the vertex parameters, \(B_{ij}=B_{ji}\) are the symmetric edge parameters, and \(F_{ij}=-F_{ji}\) are the antisymmetric edge parameters. Expression (6) is reminiscent of Arrhenius rates that characterize the transition rates of a system in an energy landscape with wells of depths \(E_{j}\), connected via barriers of heights \(B_{ij}\), and subjected to nonconservative driving forces \(F_{ij}\) along the transition paths [17; 25]. These rates satisfy local detailed balance ensuring the compatibility with stochastic thermodynamics [25; 26; 27].
_Vertex parameters.--_To calculate \(\partial_{E_{n}}\mathbf{\pi}\), we note that only the \(n\)-th column of the matrix \(\mathbb{K}_{n}\) depends on \(E_{n}\). Therefore,
\[\partial_{E_{n}}\mathbb{K}_{n}=\begin{pmatrix}&&W_{1,n}&&\\ &&\vdots&&\\ &&W_{n-1,n}&&\\ &&0&&\\ &&W_{n+1,n}&&\\ &&\vdots&&\\ &&W_{N,n}&&\end{pmatrix}, \tag{7}\]
where all columns but the \(n\)-th one are zero. The element \((n,n)\) is zero because \(K_{n,n}=1\). Here and below, empty spaces in matrices denote zeros. Inserting Eq. (7) into Eq. (5), we immediately recover a key result of Ref. [17] obtained using nontrivial graph-theoretical methods, namely
\[\partial_{E_{n}}\mathbf{\pi}=-\pi_{n}\mathbb{K}_{n}^{-1}\cdot(\mathbf{K}_{n}-\mathbf{e}_{ n})=-\pi_{n}(\mathbf{e}_{n}-\mathbf{\pi})\,, \tag{8}\]
where \(\mathbf{K}_{n}\) is the \(n\)-th column of \(\mathbb{K}_{n}\) and where we used Eq. (3).
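A minimal numerical check of Eq. (8), using randomly drawn parameters in the Arrhenius form (6), is sketched below.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 5, 2
B = rng.random((N, N)); B = (B + B.T) / 2    # symmetric edge parameters
F = rng.random((N, N)); F = F - F.T          # antisymmetric edge parameters

def pi_of(E):
    W = np.exp(-(B - E[None, :] - F / 2))    # Eq. (6): W_ij depends on E_j
    np.fill_diagonal(W, 0.0)
    K = W - np.diag(W.sum(axis=0))
    K[n] = 1.0                               # Eq. (2b)
    return np.linalg.solve(K, np.eye(N)[n])  # Eq. (3)

E = rng.random(N)
pi = pi_of(E)
predicted = -pi[n] * (np.eye(N)[n] - pi)     # Eq. (8)

h = 1e-6
Ep, Em = E.copy(), E.copy()
Ep[n] += h; Em[n] -= h
finite_diff = (pi_of(Ep) - pi_of(Em)) / (2 * h)
assert np.allclose(predicted, finite_diff, atol=1e-5)
```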
_Symmetric edge parameters.--_We proceed with calculating \(\partial_{B_{nm}}\mathbf{\pi}\). One can see from Eq. (6) that such a perturbation changes \(W_{nm}\) and \(W_{mn}\). These rates are also contained in the diagonal elements of the matrix \(\mathbb{W}\) since \(W_{nn}=-\sum_{m\neq n}W_{mn}\). Overall, four elements depend on \(B_{nm}\): \(W_{nm}\), \(W_{mn}\), \(W_{nn}\), and \(W_{mm}\). But the matrix \(\mathbb{K}_{n}\) defined in Eq. (2b) only contains two of those elements (due to row \(n\)), namely \(W_{mn}\) and \(W_{mm}\). Using Eq. (6), their derivatives read \(\partial_{B_{nm}}W_{mn}=-W_{mn}\) and \(\partial_{B_{nm}}W_{mm}=-\partial_{B_{nm}}W_{nm}=W_{nm}\), and we find that
\[\partial_{B_{nm}}\mathbb{K}_{n}=W_{nm}\,\mathbf{e}_{m}\mathbf{e}_{m}^{\top}-W_{mn}\,\mathbf{e}_{m}\mathbf{e}_{n}^{\top}\,, \tag{9}\]
whose only nonzero elements are \(W_{nm}\) at position \((m,m)\) and \(-W_{mn}\) at position \((m,n)\). Acting with Eq. (9) on \(\mathbf{\pi}\) yields
\[\partial_{B_{nm}}\mathbb{K}_{n}\cdot\mathbf{\pi}=(W_{nm}\pi_{m}-W_{mn}\pi_{n})\,\mathbf{e}_{m}=J_{nm}\,\mathbf{e}_{m}\,, \tag{10}\]
where \(J_{nm}=W_{nm}\pi_{m}-W_{mn}\pi_{n}\) is the steady-state current along the edge \((n,m)\). Inserting Eq. (10) into Eq. (5), we obtain
\[\partial_{B_{nm}}\mathbf{\pi}=-J_{nm}\,\mathbb{K}_{n}^{-1}\cdot\mathbf{e}_{m}\,. \tag{11}\]
_Antisymmetric edge parameters.--_A perturbation of \(F_{nm}\) changes the same four elements of \(\mathbb{W}\). Using \(\partial_{F_{nm}}W_{mn}=-W_{mn}/2\) and \(\partial_{F_{nm}}W_{mm}=-\partial_{F_{nm}}W_{nm}=-W_{nm}/2\), we find
\[\partial_{F_{nm}}\mathbb{K}_{n}\cdot\mathbf{\pi}=-\frac{\tau_{nm}}{2}\,\mathbf{e}_{m}\,, \tag{12}\]
where \(\tau_{nm}=W_{nm}\pi_{m}+W_{mn}\pi_{n}\) is the traffic along the edge \((n,m)\). Inserting Eq. (12) into Eq. (5), we obtain
\[\partial_{F_{nm}}\mathbf{\pi}=\frac{\tau_{nm}}{2}\,\mathbb{K}_{n}^{-1}\cdot\mathbf{e}_{m}\,. \tag{13}\]
_Currents and traffic.--_Combining Eqs. (11) and (13) with the definitions of \(J_{nm}\) and \(\tau_{nm}\), the sensitivities of the edge current read:
\[\partial_{B_{nm}}\ln J_{nm} =\Delta_{nm}-1\,, \tag{14a}\]
\[\partial_{F_{nm}}\ln J_{nm} =\frac{\tau_{nm}}{2J_{nm}}\left(1-\Delta_{nm}\right), \tag{14b}\]
\[\Delta_{nm} =\frac{W_{mn}\kappa_{n}-W_{nm}\kappa_{m}}{\det\mathbb{K}_{n}}\,, \tag{14c}\]
where \(\kappa_{n}\) and \(\kappa_{m}\) denote the cofactors of the elements \((m,n)\) and \((m,m)\) of the matrix \(\mathbb{K}_{n}\), respectively.
\begin{table}
\begin{tabular}{l|l} \hline \hline Sensitivity & Bound \\ \hline \(\partial_{B_{nm}}\ln J_{nm}\) & \(-1\leq\partial_{B_{nm}}\ln J_{nm}\leq 0\) \\ \(\partial_{F_{nm}}\ln J_{nm}\) & \(0\leq(2J_{nm}/\tau_{nm})\,\partial_{F_{nm}}\ln J_{nm}\leq 1\) \\ \(\partial_{B_{nm}}\ln\tau_{nm}\) & \(-2\leq\partial_{B_{nm}}\ln\tau_{nm}\leq 0\) \\ \(\partial_{F_{nm}}\ln\tau_{nm}\) & \(|\partial_{F_{nm}}\ln\tau_{nm}|\leq 1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Bounds on the sensitivities of the edge current \(J_{nm}\) and edge traffic \(\tau_{nm}\) with respect to the kinetic barrier \(B_{nm}\) and driving force \(F_{nm}\). The bounds for the current are tight.
Similarly, the sensitivities of the edge traffic read:
\[\partial_{B_{nm}}\ln\tau_{nm} =-1-\frac{J_{nm}}{\tau_{nm}}\nabla_{nm}\,, \tag{15a}\]
\[\partial_{F_{nm}}\ln\tau_{nm} =\frac{1}{2}\left(\frac{J_{nm}}{\tau_{nm}}+\nabla_{nm}\right), \tag{15b}\]
\[\nabla_{nm} =\frac{W_{mn}\kappa_{n}+W_{nm}\kappa_{m}}{\det\mathbb{K}_{n}}\,. \tag{15c}\]
These two results, Eqs. (14) and (15), are important because they provide explicit algebraic expressions for the response. Indeed, the variables \(\Delta_{nm}\) and \(\nabla_{nm}\) defined in Eqs. (14c) and (15c) do not depend on \(\pi_{i}\). They depend only on the elements in the minors \((m,n)\) and \((m,m)\) of the matrix \(\mathbb{K}_{n}\).
_Bounds and discussion.--_Another important result is that simple bounds can be obtained for Eqs. (14) and (15). They are given in Table 1 and bound the sensitivities \(\partial_{\eta}\ln q\) for all combinations of \(q\in\{J_{nm},\tau_{nm}\}\) and \(\eta\in\{B_{nm},F_{nm}\}\). In Appendix A, we derive the following bounds for \(\Delta_{nm}\) and \(\nabla_{nm}\),
\[0\leq \Delta_{nm}\leq 1\,, \tag{16a}\] \[|\nabla_{nm}|\leq \Delta_{nm}\leq 1\,, \tag{16b}\]
which can be used to prove all bounds in Table 1. Indeed, inserting Eq. (16a) into Eqs. (14a) and (14b) we get two tight bounds for the current \(J_{nm}\) in Table 1. Using Eqs. (15a), (15b), and (16b), we derive two tight bounds for the traffic
\[\left|\frac{\tau_{nm}}{J_{nm}}\Big{(}\frac{\partial\ln\tau_{nm}}{ \partial B_{nm}}+1\Big{)}\right|\leq 1\,, \tag{17a}\] \[\left|\frac{2\partial\ln\tau_{nm}}{\partial F_{nm}}-\frac{J_{nm} }{\tau_{nm}}\right|\leq 1\,. \tag{17b}\]
The simpler bounds for \(\tau_{nm}\) shown in Table 1 are no longer tight. They are obtained using Eq. (16b) and \(|J_{nm}/\tau_{nm}|\leq 1\) in Eqs. (15a) and (15b), as spelled out at the end of this discussion. To discuss the saturation of the tight bounds in Table 1 and Eq. (17), we consider one of them:
\[-1\leq\partial_{B_{nm}}\ln J_{nm}\leq 0\,. \tag{18}\]
The upper bound in Eq. (18) is simple to understand: a higher energy barrier (\(B_{nm}\)) always results in a lower absolute value of the current between states \(n\) and \(m\). This bound is saturated at \(\Delta_{nm}=1\), which reveals another (topological) way to reduce the response of the current. To saturate the lower bound in Eq. (18), one needs \(\Delta_{nm}=0\). However, in Appendix A we prove that \(\Delta_{nm}=0\) only if \(W_{nm}=W_{mn}=0\) or \(\kappa_{m}=\kappa_{n}=0\), where the former condition is equivalent to \(J_{nm}=0\). Therefore, excluding the case \(\kappa_{m}=\kappa_{n}=0\), the lower bound of Eq. (18) can be saturated only for a zero current. This is illustrated by the numerical simulations shown in Fig. 2, where the set of possible values \(\partial_{B_{nm}}\ln J_{nm}\) touches the edge \(-1\) only at one point \(J_{nm}=0\). The bound for the sensitivity \(\partial_{F_{nm}}\ln J_{nm}\) has the same properties as Eq. (18). The bounds in Eq. (17) saturate at \(\nabla_{nm}=\pm 1\), which implies \(\Delta_{nm}=1\), see Eq. (16b).
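Finally, the non-tight traffic bounds of Table 1 announced above follow in one line (our restatement of the same step): combining \(|J_{nm}/\tau_{nm}|\leq 1\) and \(|\nabla_{nm}|\leq 1\) with Eqs. (15a) and (15b) gives

\[\left|\partial_{B_{nm}}\ln\tau_{nm}+1\right|=\left|\frac{J_{nm}}{\tau_{nm}}\right||\nabla_{nm}|\leq 1\quad\Rightarrow\quad-2\leq\partial_{B_{nm}}\ln\tau_{nm}\leq 0\,,\]
\[\left|\partial_{F_{nm}}\ln\tau_{nm}\right|\leq\frac{1}{2}\left(\left|\frac{J_{nm}}{\tau_{nm}}\right|+\left|\nabla_{nm}\right|\right)\leq 1\quad\Rightarrow\quad-1\leq\partial_{F_{nm}}\ln\tau_{nm}\leq 1\,.\]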
_Future studies.--_Our first main result Eq. (5) provides a general algebraic expression of a NESS response with respect to any parameterization of the rate matrix. Our other results rely on the Arrhenius-like form (6) of the rates, which allows us to perturb isolated edges. But our methodology can be extended to consider more general rate matrices with nonconservative force acting on multiple edges [16; 26; 29]. It could also be used to study the stationary responses of other physical observables (beyond currents and activities) [30; 31], as well as to study time-dependent "Green-Kubo-Agarwal-like" relations [7; 11].
_Acknowledgments.--_This research was funded by project ChemComplex (C21/MS/16356329). We thank Massimo Biancioni for detailed feedback on our manuscript.
## Appendix A Proof of bounds in Eq. (16)
We prove the bounds in Eq. (16). Expanding the determinant \(\det\mathbb{K}_{n}\) along the \(m\)-th row of the matrix \(\mathbb{K}_{n}\), we can write it as
\[\det\mathbb{K}_{n} =(-1)^{m+n}W_{mn}M_{mn}(\mathbb{K}_{n})+(-1)^{m+m}W_{mm}M_{mm}(\mathbb{K}_{n})\] \[+\sum_{i\neq m,n}(-1)^{i+m}W_{mi}M_{mi}(\mathbb{K}_{n})\] \[=W_{mn}\kappa_{n}-W_{nm}\kappa_{m}+C\,, \tag{A1}\]
where we used Eq. (11) and where \(C\) denotes the sum of all terms which do not depend on \(B_{nm}\) and \(F_{nm}\). Since \(\det\mathbb{K}_{n}=\prod_{i=1}^{N-1}\lambda_{i}\), where \(\lambda_{i}\) are the nonzero negative eigenvalues of the matrix \(\mathbb{W}\) (see [32]), the sign of the determinant \(\operatorname{sgn}\det\mathbb{K}_{n}=(-1)^{N-1}\) is fixed and does not depend on \(B_{nm}\) and \(F_{nm}\). Using the fact that \(C\) does not depend on \(B_{nm}\), we can determine the sign of \(C\) from Eq. (A1) in the limit \(B_{nm}\to\infty\), where \(W_{nm},W_{mn}\to 0\):
\[\operatorname{sgn}C=\lim_{B_{nm}\to\infty}\operatorname{sgn}\det\mathbb{K}_{n}=(-1)^{N-1}\,. \tag{A2}\]
Using the fact that the signs of \(C\) and \(\det\mathbb{K}_{n}\) are the same, so that \(C/\det\mathbb{K}_{n}=|C/\det\mathbb{K}_{n}|\), and noting from Eq. (A1) that \(W_{mn}\kappa_{n}-W_{nm}\kappa_{m}=\det\mathbb{K}_{n}-C\), we can rewrite Eq. (14c) as
\[\Delta_{nm}=1-\left|\frac{C}{\det\mathbb{K}_{n}}\right|, \tag{A3}\]
which gives us the upper bound in Eq. (16a).
\begin{table}
\begin{tabular}{|c|c|c|} \hline control & current, \(J_{nm}\) & traffic, \(\tau_{nm}\) \\ \hline \(B_{nm}\) & \(-1\leq\frac{\partial\ln J_{nm}}{\partial B_{nm}}\leq 0\) & \(-2\leq\frac{\partial\ln\tau_{nm}}{\partial B_{nm}}\leq 0\) \\ \hline \(F_{nm}\) & \(0\leq\frac{2}{\tau_{nm}}\frac{\partial J_{nm}}{\partial F_{nm}}\leq 1\) & \(-1\leq\frac{\partial\ln\tau_{nm}}{\partial F_{nm}}\leq 1\) \\ \hline \end{tabular}
\end{table}
Table 1: The central and right columns correspond to the response of the current and traffic, respectively. The central and bottom rows are perturbations of the symmetric and antisymmetric edge parameters, respectively.
In the case \(\kappa_{n}=\kappa_{m}=0\), Eqs. (14c) and (15c) trivially satisfy the bounds (16a) and (16b). Considering \(\kappa_{n}\neq 0\) and \(\kappa_{m}\neq 0\), the following limits of \(\det\mathbb{K}_{n}\) in Eq. (A1) hold:
\[\lim_{B_{nm}\rightarrow-\infty}\det\mathbb{K}_{n} =W_{mn}\kappa_{n}-W_{nm}\kappa_{m}\,, \tag{A4a}\] \[\lim_{F_{nm}\rightarrow\infty}\det\mathbb{K}_{n} =-W_{nm}\kappa_{m}\,,\] (A4b) \[\lim_{F_{nm}\rightarrow-\infty}\det\mathbb{K}_{n} =W_{mn}\kappa_{n}\,. \tag{A4c}\]
Since \(\operatorname{sgn}(W_{mn}\kappa_{n}/\det\mathbb{K}_{n})\) and \(\operatorname{sgn}(W_{nm}\kappa_{m}/\det\mathbb{K}_{n})\) are fixed, we can find them using Eqs. (A4b) and (A4c)
\[\operatorname{sgn}\!\left(\frac{W_{mn}\kappa_{n}}{\det\mathbb{K}_{n}}\right) =\lim_{F_{nm}\rightarrow-\infty}\frac{W_{mn}\kappa_{n}}{\det\mathbb{K}_{n}}=1\,, \tag{A5a}\] \[\operatorname{sgn}\!\left(\frac{W_{nm}\kappa_{m}}{\det\mathbb{K}_{n}}\right) =\lim_{F_{nm}\rightarrow\infty}\frac{W_{nm}\kappa_{m}}{\det\mathbb{K}_{n}}=-1\,, \tag{A5b}\]
which implies the lower bound in Eq. (16a) and
\[\kappa_{n}\kappa_{m}<0\,. \tag{A6}\]
Combining Eqs. (14) and (A5), we derive the inequalities (16a).
The lower bound in Eq. (16a) is saturated only when \(W_{nm}=W_{mn}=0\), while for \(W_{nm}\neq 0\) the condition in Eq. (A6) implies \(\Delta_{nm}\neq 0\). The upper bound in Eq. (16a) is saturated in the limit \(B_{nm}\rightarrow-\infty\) [see Eqs. (14c) and (A4a)], as well as when \(C=0\).
To find bounds for \(\nabla_{nm}\), we rewrite it as follows:
\[\nabla_{nm}=\Delta_{nm}\frac{W_{mn}\kappa_{n}+W_{nm}\kappa_{m}}{W_{mn}\kappa_{n}-W_{nm}\kappa_{m}}\,. \tag{A7}\]
If \(W_{mn}\kappa_{n}=0\), then \(\nabla_{nm}=-\Delta_{nm}\), otherwise we have:
\[\nabla_{nm}=\Delta_{nm}\frac{1+a}{1-a}\,,\text{ where }\ a=\frac{W_{nm}\kappa_{m}}{W_{mn}\kappa_{n}}\leq 0\,. \tag{A8}\]
Since \(|(1+a)/(1-a)|\leq 1\) for \(a\leq 0\), we find Eq. (16b).
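The inequality used here follows from a one-line computation:

\[(1-a)^{2}-(1+a)^{2}=-4a\geq 0\quad\text{for }a\leq 0\,,\]

so that \(|1+a|\leq|1-a|=1-a\), and Eq. (16b) follows from Eq. (A8).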
In the case \(\kappa_{n}=0\), \(\kappa_{m}\neq 0\) (resp. \(\kappa_{n}\neq 0\), \(\kappa_{m}=0\)), we derive Eq. (16a) using Eq. (14) and Eq. (A5b) (resp. Eq. (A5a)); and we have \(\nabla_{nm}=-\Delta_{nm}\) (resp. \(\nabla_{nm}=\Delta_{nm}\)).
## Appendix B Example of network
In Fig. 1, we consider the responses \(\partial_{B_{13}}\pi_{i}\) with \(i=1,\ldots,4\), for the network given in the inset, and compare them to the bound \(|\partial_{B_{nm}}\pi_{i}|\leq\pi_{i}(1-\pi_{i})\tanh(F_{\max}/4)\) obtained in [17]. We see that \(J_{13}\) can vanish even at nonzero value \(F_{\max}=\max(|F_{1}|,|F_{2}|)\neq 0\). In other words, the system is out-of-equilibrium but the edge \(1-3\) is detailed balanced.
## Appendix C Numerical simulations
In Fig. 2, we numerically verify the bounds in Table 1 and Eq. (17) using randomly generated rate matrices for the network shown in the inset of Fig. 1.
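A minimal sketch of this procedure (our illustration, not the authors' script; the edge list follows the inset of Fig. 1, the Arrhenius parameterisation follows Eq. (6) as instantiated in the caption of Fig. 1, and all numerical choices are ours) checks the current bound of Eq. (18) by central differences:

```python
import numpy as np

rng = np.random.default_rng(1)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]  # 4-state network of Fig. 1

def rate_matrix(w_fwd, w_bwd, B, F):
    # W[n, m] is the jump rate m -> n in the Arrhenius form W_nm ~ exp(-B_nm + F_nm/2)
    W = np.zeros((4, 4))
    for k, (n, m) in enumerate(edges):
        W[n, m] = w_fwd[k] * np.exp(-B[k] + F[k] / 2)
        W[m, n] = w_bwd[k] * np.exp(-B[k] - F[k] / 2)
    np.fill_diagonal(W, -W.sum(axis=0))  # columns sum to zero
    return W

def ln_abs_current(w_fwd, w_bwd, B, F, k):
    W = rate_matrix(w_fwd, w_bwd, B, F)
    # stationary state: solve W pi = 0 with the normalisation sum(pi) = 1
    A = np.vstack([W, np.ones(4)])
    b = np.zeros(5); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    n, m = edges[k]
    return np.log(abs(W[n, m] * pi[m] - W[m, n] * pi[n]))

# central-difference estimate of d ln|J_nm| / d B_nm for the edge 1-3
k, eps, sens = 1, 1e-5, []
for _ in range(2000):
    w1, w2 = rng.uniform(0, 100, 5), rng.uniform(0, 100, 5)
    B, F = rng.uniform(-1, 1, 5), rng.uniform(-4, 4, 5)
    Bp, Bm = B.copy(), B.copy()
    Bp[k] += eps; Bm[k] -= eps
    sens.append((ln_abs_current(w1, w2, Bp, F, k)
                 - ln_abs_current(w1, w2, Bm, F, k)) / (2 * eps))
# up to finite-difference error, the range should lie inside [-1, 0], Eq. (18)
print(f"d ln|J|/dB in [{min(sens):.4f}, {max(sens):.4f}]")
```

The lower edge \(-1\) is approached only where the current nearly vanishes, as discussed in the main text.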
Figure 2: a-d: Illustrations of the bounds of \(J_{nm}\) from Table 1 and \(\tau_{nm}\) from Eq. (17). The dashed lines show the corresponding bounds. The dots are the result of numerical calculations for 20000 random matrices \(\mathbb{W}\) with elements drawn uniformly from the range \((0,\mathsf{w}_{\max})\). The network corresponds to the inset in Fig. 1, \(\mathsf{w}_{\max}=100\).
Figure 1: Inset: Example of the network with 4 states; \(F_{1}\) and \(F_{2}\) denote the forces in the cycles \(1-2-3-1\) and \(1-4-3-1\), respectively. Main: The solid curves show the responses \(\partial_{B_{13}}\pi_{i}\) from Eq. (10) scaled by \(\pi_{i}(1-\pi_{i})\), where \(i=1,2,3,4\) correspond to blue, orange, green, and red colors, respectively. In these coordinates, the dashed lines (\(\pm\tanh(F_{\max}/4)\)) correspond to the bound from [17]. The black arrow indicates \(J_{13}=0\). Simulation parameters: the nondiagonal and nonzero elements of \(\mathbb{W}\) are \(W_{21}=10.8\), \(W_{31}=13.4\mathrm{e}^{-B_{31}-F_{31}/2}\), \(W_{41}=16.2\), \(W_{12}=94.8\), \(W_{32}=26.6\), \(W_{13}=45.5\mathrm{e}^{-B_{13}+F_{31}/2}\), \(W_{23}=19.5\), \(W_{43}=14.3\), \(W_{14}=0.5\), \(W_{34}=9.8\), where \(B_{13}=1\). The forces in the inset are \(F_{1}=F_{13}-0.6\), \(F_{2}=F_{13}+4.4\), which give \(|F_{1}|=|F_{2}|\) at \(F_{13}=-1.9\). |
2305.13938 | Algorithmic Unfairness through the Lens of EU Non-Discrimination Law: Or
Why the Law is not a Decision Tree | Concerns regarding unfairness and discrimination in the context of artificial
intelligence (AI) systems have recently received increased attention from both
legal and computer science scholars. Yet, the degree of overlap between notions
of algorithmic bias and fairness on the one hand, and legal notions of
discrimination and equality on the other, is often unclear, leading to
misunderstandings between computer science and law. What types of bias and
unfairness does the law address when it prohibits discrimination? What role can
fairness metrics play in establishing legal compliance? In this paper, we aim
to illustrate to what extent European Union (EU) non-discrimination law
coincides with notions of algorithmic fairness proposed in computer science
literature and where they differ. The contributions of this paper are as
follows. First, we analyse seminal examples of algorithmic unfairness through
the lens of EU non-discrimination law, drawing parallels with EU case law.
Second, we set out the normative underpinnings of fairness metrics and
technical interventions and compare these to the legal reasoning of the Court
of Justice of the EU. Specifically, we show how normative assumptions often
remain implicit in both disciplinary approaches and explain the ensuing
limitations of current AI practice and non-discrimination law. We conclude with
implications for AI practitioners and regulators. | Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, Mykola Pechenizkiy | 2023-05-05T12:00:39Z | http://arxiv.org/abs/2305.13938v2 | # Algorithmic Unfairness through the Lens of EU
###### Abstract.
Concerns regarding unfairness and discrimination in the context of artificial intelligence (AI) systems have recently received increased attention from both legal and computer science scholars. Yet, the degree of overlap between notions of algorithmic bias and fairness on the one hand, and legal notions of discrimination and equality on the other, is often unclear, leading to misunderstandings between computer science and law. What types of bias and unfairness does the law address when it prohibits discrimination? What role can fairness metrics play in establishing legal compliance? In this paper, we aim to illustrate to what extent European Union (EU) non-discrimination law coincides with notions of algorithmic fairness proposed in computer science literature and where they differ. The contributions of this paper are as follows. First, we analyse seminal examples of algorithmic unfairness through the lens of EU non-discrimination law, drawing parallels with EU case law. Second, we set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the Court of Justice of the EU. Specifically, we show how normative assumptions often remain implicit in both disciplinary approaches and explain the ensuing limitations of current AI practice and non-discrimination law. We conclude with implications for AI practitioners and regulators.
EU non-discrimination law, algorithmic fairness, machine learning, artificial intelligence
Existing work in this direction has primarily targeted a legal audience (e.g. (Raphage et al., 2017; Raghavan et al., 2018; Raghavan et al., 2018)). Most notably, Wachter et al. (Wachter et al., 2018) set out how the contextual nature of EU non-discrimination law makes it impossible to automate non-discrimination in the context of AI systems and propose a fairness metric that aligns with the Court's "gold standard". Additionally, several works focus on US anti-discrimination law (e.g. (Hellman, 1998; Raghavan et al., 2018; Raghavan et al., 2018)). For example, Hellman (Hellman, 1998) considers the compatibility of several fairness metrics under US anti-discrimination law and touches upon the legitimacy of particular types of technical interventions.
In this paper, we consider European Union (EU) non-discrimination law and target a broader audience, bridging two distinct disciplines. The contributions of this paper are as follows. First, following a brief introduction to EU non-discrimination law, we analyse seminal examples of algorithmic unfairness through the lens of EU non-discrimination law, drawing parallels with EU case law. Second, we set out the normative underpinnings of fairness metrics and technical interventions and compare these to the legal reasoning of the court. Specifically, we show how normative assumptions often remain implicit in both disciplinary approaches and explain the ensuing limitations of current AI practice and non-discrimination law.
The remainder of the paper is structured as follows. Section 2 provides the necessary background on EU non-discrimination law. Section 3 presents our analysis of seminal examples from the algorithmic fairness literature through the lens of EU non-discrimination law. Building on these findings, Section 4 explores the normative underpinnings of fairness metrics, fairness-aware machine learning algorithms, and the legal reasoning of the Court of Justice of the EU. In Section 5, we discuss the implications of our findings for AI practitioners and regulators and Section 6 concludes the paper.
## 2. Discrimination Under EU Law
Following Lippert-Rasmussen (Lippert-Rasmussen, 1997), discrimination can generally be characterised by the morally objectionable practice of subjecting a person (or group of persons) to a treatment in some social dimension that, for no good reason, is disadvantageous compared to the treatment awarded to other persons who are in a similar situation, but who belong to another socially salient group.1 Central to this definition is the comparative element: the treatment under consideration is differential compared to the treatment received by a similarly situated person. In this context, discrimination can be considered the opposite of equality. Behind this apparently simple statement lies great complexity. As Westen (Westen, 1982) demonstrated early on, the meaning of equality is lost if we do not specify what it is that makes persons or treatments "similar" in a morally relevant way. In other words, the primary question that non-discrimination law poses is: "equal to what?" In this section, we first provide a brief overview of how EU non-discrimination law has grappled with this question over the years, after which we discuss how discrimination is established under current EU law.
Footnote 1: Lippert-Rasmussen (Lippert-Rasmussen, 1997) considers a group to be socially salient ‘if perceived membership of it is important to the structure of social interactions across a wide range of social contexts’.
### A Brief History of EU Non-Discrimination law
EU law is a form of supranational law: member states of the EU transfer parts of their sovereignty to the EU, which can then legislate in specific fields.2 The body of EU law comprises, among other things, the foundational treaties, secondary legislation mainly in the form of regulations and directives, and case law. While regulations apply directly within all member states, directives require member states to transpose their content, i.e. to implement it in their own legal system. Directives then leave member states discretion as to how the regulatory aim is to be achieved. In the field of non-discrimination law, directives reflect a minimum harmonisation approach, meaning that the law sets common minimum standards that must be achieved by all members states, but still allows individual member states to incorporate stricter measures as long as they comply with the EU treaties.
Footnote 2: The EU’s competence is defined in Art. 2, 3, 4 and 6 TFEU (Wachter et al., 2018)
Regulations and directives are forms of statutory law: written laws that are passed by the EU legislator. It is impossible for statutory law to cover all relevant aspects of all possible cases. Consequently, to be applied in factual cases, the law needs to be interpreted by a court in a judgment. To do so, the Court of Justice of the EU takes into account the "spirit, the general scheme and the wording" of given legal provisions, including their aim as set out in the preamble and the preparatory documents, as well as previous judicial decisions that were rendered in similar cases in the past (case law). In the EU, a mechanism called the preliminary reference procedure allows member state courts to dialogue with the Court of Justice of the European Union (CJEU).3 Individuals are not able to access the CJEU directly, but national courts can ask questions regarding the interpretation and validity of EU law to the CJEU. After receiving the response of the CJEU, the national court then makes the final decision by implementing the CJEU's interpretation of EU law to the specific circumstances in the case at hand.
Footnote 3: See Article 267 of the Treaty on the Functioning of the European Union (TFEU) (Wachter et al., 2018).
Footnote 4: The Council of Europe’s human rights instrument – the European Convention on Human Rights – adopted in 1950 and in force since 1953, contains a prohibition against discrimination that also applies to all EU member states. The European Court of Human Rights was however only established in 1959.
It is important to note that the law is not made up of static rules. In response to social advancements, new statutory law may be introduced and the interpretation of existing legal norms may change over time as new cases emerge. Over the years, EU non-discrimination law has evolved.
The first legal protection against discrimination spanning multiple European countries came with the Rome Treaty in 1957,5 which established the European Economic Community.5 In particular, Article 119 of the EC Treaty established equal pay for men and women.6 In 1975 and 1976, non-discrimination legislation was complemented with two directives on equality between men and women in the workplace (Raghavan et al., 2018; Raghavan et al., 2018). This paved the way for the Court
of Justice to further elaborate non-discrimination law in subsequent years. The boundaries of EU non-discrimination law were expanded in three main directions: its application was extended to new areas, new concepts were spelled out, and new characteristics became protected against discrimination. For instance, the material scope of non-discrimination law was expanded through a broader interpretation of the notion of "pay".7 Moreover, in the landmark decision in _Bilka-Kaufhaus_(Bilka, 2018), the Court of Justice introduced the concept of "indirect discrimination". In that case, the differential treatment was between full time and part time employees: only full time workers had access to a pension scheme as part of their employment contract. As a consequence, it was not directly covered by the wording of Article 119 which specifically guaranteed equality _between women and men_. The Court, however, noted that where disproportionately more women than men work part-time, the differentiation operated by the company in granting access to the pension scheme gives rise to a discriminatory effect, in other words indirect discrimination on grounds of sex.8 In 1999, the Amsterdam Treaty entered into force and extended legal protection to other grounds of discrimination including racial or ethnic origin, religion or belief, disability, age and sexual orientation. From this point on, legislation and case law proliferated to include new regulatory territory, for instance, in the area of housing, healthcare, the consumption of goods and services and even, in limited cases, education.
Footnote 7: See _Bilka-Kaufhaus GmbH v Karin Weber von Hartz_(Bilka, 2018) and _Douglas Harvey Barber v Guardian Royal Exchange Assurance Group_(Dewis, 2018).
Footnote 8: The Court added: "However, if the undertaking is able to show that its pay practice may be explained by objectively justified factors unrelated to any discrimination on grounds of sex there is no breach of Article 119." See _Bilka-Kaufhaus GmbH v Karin Weber von Hartz, 30_.
Four main directives make up today's EU non-discrimination law: the Race Equality Directive 2000/43/EC (Dewis, 2018); the Framework Equality Directive (Dewis, 2018); and the gender equality Directives 2004/113/EC (Dewis, 2018) and 2006/54/EC (Dewis, 2018). Additionally, primary law9 provisions include Articles 2 and 3(3) of the Treaty on European Union (Dewis, 2018), Articles 8, 10, 19 and 157 of the Treaty on the Functioning of the European Union (Dewis, 2018) (the last two corresponding to ex-Article 13 EC and Article 119 EEC) as well as Articles 20, 21 and 23 of the Charter of Fundamental Rights of the EU (Dewis, 2018) (the Charter), adopted in 2000 and elevated to the same status as the Treaties in 2009.
Footnote 9: There is a hierarchy of norms in EU law, according to which _primary law_, which has quasi-constitutional status, prevails over _secondary law_ which is equivalent to legislation.
### Establishing Discrimination
In order to understand how EU non-discrimination law operates, we need to first distinguish between the notions of direct and indirect discrimination. This distinction is key because it determines the applicable regime of justifications: direct discrimination cannot be justified except for a limited number of derogations, whereas _prima facie_ indirect discrimination can be justified much more widely. In other words, this technical distinction matters because it determines how the costs and burdens of inequality are distributed among decision-makers, potential victims and society at large.
Direct discrimination occurs when "one person is treated less favourably than another is, has been or would be treated in a comparable situation on grounds of" a protected characteristic (Dewis, 2018). In other words: protected characteristics are to be excluded from any decision-making process covered by EU non-discrimination law.10 Traditionally, the doctrine of direct discrimination prescribes that "likes should be treated alike" according to the Aristotelian formula of justice as consistency, an approach often referred to as formal equality. A problem with this conceptualisation of equality is that it is unable to redress more complex forms of injustice such as proxy discrimination and structural inequality. For example, a rule banning all individuals shorter than 1,70m from applying to jobs with the police essentially excludes a large majority of women. Yet the selection does not depend explicitly on the sex or gender of candidates, and therefore it does not amount to direct discrimination on grounds of sex as confirmed by the CJEU in _Kalliri_ (Kalliri, 2018).
Footnote 10: In algorithmic fairness literature, direct and indirect discrimination are often equated with, respectively, disparate treatment and impact in United States law. However, an important difference between the doctrines is that while disparate treatment requires discriminatory intent, direct discrimination in EU law does not require any moral wronggoing and will therefore apply in more cases than disparate treatment would (Dewis, 2018).
As explained in the previous section, to complement the legal protection of equality, the Court of Justice has adopted the doctrine of indirect discrimination, which, in certain situations, forbids treating those who are unalike in a like manner. Specifically, indirect discrimination occurs where "an apparently neutral provision, criterion or practice would put persons of a protected group at a particular disadvantage compared with other persons, unless that provision, criterion or practice is objectively justified by a legitimate aim and the means of achieving that aim are appropriate and necessary" (Dewis, 2018). This asymmetrical conception of equality encapsulates the second part of the Aristotelian formula and forbids applying the same rule to legal subjects who are positioned differently. Our example above, concerning the application of the same height requirement to male and female candidates, falls within the concept of indirect discrimination (Kallri, 2018). The ban on indirect discrimination has often been described as guaranteeing a substantive form of equality because it creates an obligation to accommodate legally protected differences (for instance height difference resulting from one's sex) and associated lifestyles (for instance protecting certain religious holidays). Since indirect discrimination focuses on the disadvantageous effects of given rules and practices rather than the inclusion of protected characteristics in given decisions, it allows addressing proxy discrimination that impacts protected groups. To some extent, this creates an obligation for decision-makers to account for the unjust _status quo_ that prevails in society. For example, the gender pay gap is a well-known form of institutionalised discrimination. The practice of using newly recruited employees' past salaries to decide on their new pay in salary negotiations could be regarded as indirect discrimination on grounds of sex, because it tends to perpetuate the gender pay gap.
From the definitions of direct and indirect discrimination, we can identify four main elements in a discrimination case.
_"On grounds of..._ To determine whether the case is one of direct or indirect discrimination, it is necessary to assess whether a decision was taken "on grounds of" a protected characteristic. When a protected characteristic is explicitly used as a basis for a decision, that decision falls under the notion of direct discrimination. In some cases, using a proxy that is "inseparably linked" to
a protected ground (e.g. pregnancy and sex) will amount to direct discrimination (Krishnan et al., 2017). By contrast, if a decision creates a disadvantage to a protected group albeit not targeting that group, it falls within the notion of indirect discrimination.
_..."a protected characteristic" in an area covered by EU law (personal and material scope)..._ Protected characteristics vary across sectors. The widest protection against discrimination can be found in relation to employment, where discrimination is banned in relation to racial or ethnic origin, sex or gender, religion or belief, disability, age and sexual orientation (Sandel, 2017; Sandel, 2017; Sandel, 2018). In relation to access to goods and services, only racial or ethnic origin and sex or gender are protected characteristics. Although a major concern from a social or moral point of view is that algorithmic systems operate differently based on people's income or socio-economic background, this form of disadvantage does not fall within the scope of protection offered by EU secondary law. In addition, while discriminatory effects may occur at the intersection of two or more vectors of disadvantage (for example race and gender or age and sexual orientation), (Sandel, 2017; Sandel, 2018), the CJEU has so far failed to recognise intersectional discrimination explicitly.11 For example, in _Parris_(Parris, 2018) the Court found that no "combined" discrimination on grounds of sexual orientation and age could exist where discrimination could not be proven on each ground taken separately (Bowman et al., 2017; Sandel, 2018).
Footnote 11: It could be argued that the Court has nevertheless addressed combined discrimination implicitly in cases such as _Odar_ or _Bedi_, which combined disadvantage based on age and disability (Krishnan et al., 2017; Sandel, 2018).
Directive 2004/113/EC (Bowman et al., 2017) includes some exceptions for the ban on gender discrimination, namely in relation to advertisement and the media as well as education. By contrast, discrimination on grounds of racial or ethnic origin is prohibited in relation to education. Furthermore, Article 21(1) of the Charter prohibits discrimination based on a greater number of grounds than secondary law, including but not limited to sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation. Article 21(1) of the Charter and the general principle of equality are both horizontally and vertically applicable (i.e. they have direct effect in relations between public and private parties and between private parties themselves) (Krishnan et al., 2017; Sandel, 2018).12 By contrast, directives are only vertically directly applicable, meaning that their provisions only apply directly between a public and a private party.13 However, national law transposing directives could in and of itself create horizontal effects.
Footnote 12: In principle, the Charter is only directly applicable in vertical relationships between public authorities and private parties. However, the Court has carved out horizontal direct effects in relation to several articles including Art. 21(1) on non-discrimination in C-414/16 (_Egenberger_) as confirmed in C-68/17 (_IR_), and Art. 31 on annual leave in joined Cases C-569/16 and C-570/16 (_Bauer_).
Footnote 13: Direct effects arise only in relation to provisions that are precise, clear and unconditional, see C-26/62 _Van Gend en Loos_.
_...where there is evidence for "less favourable treatment" or "particular disadvantage"..._ To establish a case of discrimination, an applicant first needs to bring _prima facie_ evidence, i.e. sufficient evidence for a rebuttable presumption of discrimination to be established by the judge. Evidence of _prima facie direct_ discrimination could include, for instance, information about another group or individual of a different protected group being treated more favourably. If such a comparator does not exist, EU law allows applicants to construct a hypothetical comparator. Evidence of _prima facie indirect_ discrimination involves raising a reasonable suspicion that a given disadvantage affects a protected group. This could, but does not have to, involve statistics.14 If _prima facie_ discrimination is established, the burden of proving that discrimination has not occurred shifts to the defendant.
Footnote 14: By contrast with US law which relies a lot on statistical evidence, evidence in EU law is much more contextual and hardly relies on statistical comparisons.
_...unless there is an "objective justification"_. While direct discrimination is not justifiable in principle (except for a few exceptions provided for by the law), the indirect discrimination doctrine allows for a _prima facie_ discriminatory measure to be "objectively justified" where it fulfils a legitimate aim and passes the so-called proportionality test. The law does not provide concrete guidelines on whether the means to achieve a legitimate aim are necessary and proportionate. Due to the large variety yet small number of cases, the proportionality test cannot be settled in advance based on previous case law. One rule that stands out is that if the same legitimate aim can be achieved through less discriminatory alternatives, those must be used (Sandel, 2017). Other than that, however, objective justifications are judged on a case-by-case basis, depending on the significance of the harm and the legitimacy of the aim.
## 3. Algorithmic unfairness through the lens of non-discrimination law
Over the past years, several incidents have raised concerns regarding bias and unfairness of algorithms and, in particular, AI systems. When used in automated decision-making, AI systems have the ability to produce fairness-related harms systematically and at a large scale. Moreover, while discrimination by human actors can to some extent be signalled to victims through behaviour or past experiences, discrimination by algorithmic systems typically remains largely invisible. In light of the increased use of machine learning systems, it has thus become a pressing question to what extent algorithmic unfairness can be seen as discrimination under EU law (Sandel, 2017). In this section, we analyse several seminal examples from algorithmic fairness literature through the lens of EU non-discrimination law.
### Dutch Childcare Benefits Scandal
We start our analysis with a case related to the explicit use of a sensitive feature in a machine learning model, which is often assumed to be unlawful. In January 2021, the Dutch government resigned over a scandal involving false fraud allegations made by the Tax and Customs Administration in the distribution of childcare benefits. In particular, over the course of several years, the administration had used a risk assessment algorithm that explicitly included Dutch citizenship as one of the risk factors.15 To determine whether this is a case of unlawful discrimination under EU law, we first need to determine whether it falls within the material and personal scope of
EU non-discrimination law. This particular case involved a public body and, if the case fell within the scope of EU law, Article 21(1) of the Charter, which prohibits discrimination on a non-exhaustive list of grounds including membership of a national minority, could apply. Indeed, the Dutch Data Protection Authority (DPA) established that the use of nationality as a factor in the risk classification model is considered discriminatory processing of data on the basis of, amongst others, Art. 21 of the Charter, and therefore illegitimate given the principle of fairness in Article 5 of the GDPR (Yam et al., 2018).10 In particular, the DPA explained that incorporating nationality as a factor in the risk classification model could result in higher risk scores for applicants who are not Dutch citizens compared to applicants with a Dutch nationality (Yam et al., 2018). This increased the probability of higher scrutiny through manual processing of the application by an employee of the tax administration, which the DPA considered a particular disadvantage.17
Footnote 10: Note that Article 51(1) restricts the scope of application of the Charter only to situations where ‘Member States [...] are implementing Union law’. In this case, the GDPR can provide the necessary link to EU law to the extent that public authorities are implementing EU data protection legislation when processing data. Note that the case might also be framed as one of discrimination on grounds of ethnicity, in which case the Race Equality Directive 2000/43/EC might be applicable. The Court has dealt with similar issues in cases such as C-668/15 _Jyske Finans_ and C-457/17 _Maniero_.

Footnote 17: At first glance, this seems like a case of direct discrimination: the algorithm explicitly included nationality as a factor in decision-making. Instead, however, the DPA analysed the case through the lens of indirect discrimination: nationality by itself is insufficient to determine whether the applicant is eligible for childcare benefits, as it is also relevant whether an applicant is registered in a Dutch municipality or is a lawful resident in the Netherlands. Thus, the DPA explained, the tax administration could have used a risk factor with less potential for discriminatory effect, such as: “Applicant possesses Dutch nationality, or EU nationality and is registered in a Dutch municipality, or a non-EU nationality and has a valid residence permit”.
However, even in cases of (in hindsight) obvious potential for discriminatory treatment, establishing _prima facie_ evidence can prove very difficult - especially in the context of unintelligible or inaccessible algorithms. In the case of the childcare benefits scandal, parents were wrongly accused over the course of a decade and the full scale of the scandal only became clear after several years of investigation. Notoriously, parents who requested access to their files received documents with pages and pages of redacted text (Yam et al., 2018). In a situation like this, the case law of the CJEU shows that the absence of transparency or information can contribute to contextual evidence with a view to triggering a shift of the burden of proof (Bradley et al., 2018). Yet, when algorithmic systems are embedded into opaque decision-making processes, an individual is unlikely to become aware that discrimination has occurred at all. Therefore, legal claims of discrimination might not even arise without adequate support. This raises questions regarding the protection that equality law, which is designed to protect against discrimination by humans, offers in cases of algorithmic discrimination.
### Amazon's Recruitment Algorithm
A commonly cited example of algorithmic bias is a resume selection algorithm that was under development at Amazon in 2017 (Yam et al., 2018). As it turned out, the algorithm penalised words that indicated the applicant's gender, such as participation in the women's chess team or attending an all-women's college. It is important to note that Amazon's hiring algorithm was not necessarily less accurate for women compared to men. Instead, the main culprit for the disparity was unequal hiring rates: in the past, the company had primarily hired men for technical roles. An important question is why these hiring rates differed. We can identify at least two potential reasons: either the data is a biased measurement of reality or reality is biased.18 First, we might be looking at a case of measurement bias: historical hiring decisions are incomplete measurements of actual employee quality. When measurement bias is associated with a sensitive characteristic, in this case gender, the model is likely to replicate the pattern, which can result in an unfair allocation of jobs (Yam et al., 2018). In other words, the sensitive characteristic is implicitly included as a factor in decision-making. This type of unfairness speaks to the exclusionary function of formal equality: protected characteristics should be excluded from decision-making. Second, gender disparities in hiring rates could in part be explained by disparities in behaviour caused by factors related to structural inequality. For example, women may have been systematically discouraged from pursuing technical roles, resulting in fewer suitable candidates. From this perspective, the wrongness of Amazon's hiring algorithm can best be considered through the lens of substantive equality.
Footnote 18: While this may seem to suggest that algorithmic unfairness is primarily related to biases in data sets, we would like to emphasise that algorithmic bias is not merely a problem of "bias in – bias out". Data sets do not simply exist, they are constructed. Considering a backdrop of historical injustice and structures of oppression, the social processes that produced these data sets require critical attention. Having said that, the causes of fairness-related harms induced by algorithmic systems can – in both subtle and obvious ways – be different from harms induced by human actors. Therefore, we believe an increased understanding of the different ways in which algorithmic systems can cause harm is critical for their mitigation.
How would such a case of algorithmic unfairness be captured by EU discrimination law? According to Amazon, the algorithm was never actually used. For the sake of our argument, however, let's assume that the algorithm was deployed in the EU. Employment discrimination on the basis of gender clearly falls within the material scope of non-discrimination law. While gender is not used directly as a factor by the algorithm, penalising applicants on the basis of characteristics highly associated with the applicant's gender can be seen as a form of proxy discrimination that would either fall under the indirect discrimination doctrine or, in line with the Court of Justice's jurisprudence in _Dekker_ (Yam et al., 2018), under the direct discrimination doctrine if the decision criteria used are "inextricably linked" with sex or gender. As argued by Adams-Prassl et al. (2018), we may wonder to what extent attendance of an all-women's college can be seen as an "apparently neutral criterion" that is not inseparably linked to gender. As mentioned above, the distinction between direct and indirect discrimination is key because it determines whether observed disparities can be justified, and ultimately who is responsible for internalising the costs of social inequality.
From a conceptual perspective, predicting how the Court of Justice would legally qualify the Amazon recruitment algorithm raises at least two issues. First, the Court of Justice has not always consistently distinguished between direct and indirect discrimination. For instance, in _Dekker_(Yam et al., 2018), the Court ruled that discrimination on grounds of pregnancy amounted to direct discrimination on grounds of sex because of the "inextricable" link that exists between pregnancy and sex. As a result, even where the protected characteristic itself was not used as a basis for a decision, using a proxy that is "inseparably linked" to it amounts to direct and not indirect discrimination. At the same time, it is unclear which
proxies will be regarded as "inseparably linked" to protected characteristics. In _Jyske Finans_[29], the CJEU did not consider that the practice of a credit institution to subject an EU citizen to an additional identity check when born outside the EU amounted to direct discrimination on grounds of racial or ethnic origin. The CJEU did not deem the link between someone's country of birth and ethnic origin "inseparable".19 In sum, the boundary between direct and indirect discrimination is contested and the Court has not always been consistent in distinguishing both notions or in defining what "on grounds of" a protected characteristic means.20
Footnote 19: To answer the question of the nature of the link, it is first necessary to define what ethnic origin is and in relation to which group(s), which is a delicate question. Here the differentiation was between EU- and non-EU-born citizens.
Footnote 20: It has also been argued that, from a moral point of view, direct and indirect discrimination capture the same harm [72].
Second, part of the problem of distinguishing between direct and indirect discrimination is linked to the difficulty of defining what a protected characteristic is. The answer to this question directly depends on the choice of comparator made by the Court.21 For instance, in the context of neutral dress codes imposed by employers on their employees, whether or not discrimination is deemed direct or indirect heavily depends on which comparator is chosen. If religious and non-religious employees are compared, it appears that not all religious employees are disadvantaged by the rule. This seems to exclude direct discrimination. However, if employees whose religion mandates wearing religious clothing and employees whose (absence of) religion does not are compared, this reveals that a well-defined group is exclusively disadvantaged by the rule [78; 34], because the rule is more compatible with some religious practices than others. In fact, the divide between direct and indirect discrimination has been extensively discussed by commentators in the context of the so-called headscarf cases. In its _Achbita_[13] and _Wabe_[61] decisions, the Court has been criticised for failing to treat facially neutral dress codes as a form of direct discrimination on grounds of religion (and gender) [73].22 As former Advocate General Sharpston stated, "'neutrality' that in reality predictably denies employment opportunities to particular, very clearly identifiable, minority groups is false neutrality" and should thus not fall within the scope of indirect discrimination [78].
Footnote 21: As argued by Westen, the comparator simultaneously defines the normative baseline of discrimination law, that is the desirable level of equality in a given situation[84].
Footnote 22: Note that the Court distinguishes the situation in _Wabe_ from that in _Müller_.
Given the Court's problematic approach to the distinction between direct and indirect discrimination, there is a risk that the Court could treat cases of algorithmic unfairness such as Amazon's recruitment algorithm from the perspective of indirect discrimination. This would raise two further issues. First of all, the notion of "particular disadvantage" inherent in indirect discrimination is particularly vague, which makes it difficult both to assess compliance and to provide evidence for _prima facie_ discrimination. For example, in _Kalliri_[22, para. 31], the Court found evidence of _prima facie_ discrimination because the height requirement of 1,70m "work[ed] to the disadvantage of _far more_ women than men". The existence of a particular disadvantage is only assessed by the Court contextually. In _Seymour-Smith_[14] the Court considered that statistics showing that 77.4% of the men and 68.9% of the women in the workforce were able to meet the two-year employment requirement needed to obtain compensation for dismissal "d[id] not appear, on the face of it, to show that a considerably smaller percentage of women than men is able to fulfil the requirement" [81]. However, there is no consistent use of statistics by the Court. The normative principles guiding this assessment and the thresholds operated by the Court of Justice often remain implicit.23 We can see those elements emerge in a few cases such as _YS v NK_[18], which concerned a claim of indirect discrimination on grounds of sex, age and property. The Advocate General dismissed the applicant's argument that an austerity measure cutting a type of large pensions in use in the 1990s amounted to a particular disadvantage against older men. Although the comparison test showed that men were affected more by the measure than women in absolute terms, she reasoned that it would "at most [be] linked to an already existing state of inequality". In other terms, gender segregation on the labour market in the 1990s, the current gender pay gap and the gender pension gap would explain any apparent impact on older men: any "predominant impact on men would in all likelihood have to be solely attributed to the fact that men, on average, still earn more than women and are over-represented in management positions". [18, para. 64 and 76] This case reveals the normative principle underpinning the Court's assessment of a "particular disadvantage": the lens of indirect discrimination should capture the unjustified reinforcement of inequalities as opposed to mere punctual "imbalances". Hence, rather than targeting a precise threshold, probing legal compliance in situations of algorithmic unfairness requires reflecting on the implications of a given imbalance in terms of structural inequality.
Footnote 23: In the common position adopted by the Council of the EU in November 2022, Art. 10(3) of the current proposal for an EU AI Act stipulates that "[t]raining, validation and testing data sets shall be relevant, representative, and to the best extent possible, free of errors and complete".
Second, the indirect discrimination doctrine allows for an objective justification. If Amazon's hiring algorithm is interpreted as indirect discrimination, the accuracy of the algorithm on a test set may be deemed an acceptable justification in court [3].24 Without access to information regarding the data collection procedure and machine learning process, it is difficult for applicants to prove whether accuracy - as indicated by the alleged offender - is a good reflection of effectiveness in practice. However, in cases of outcomes tainted by measurement bias, accuracy on _observed_ data is an inadequate measurement of the true effectiveness of the model. Moreover, accuracy in a test environment may not generalise to accuracy of the algorithm after deployment, particularly in cases of out-of-sample predictions (i.e. the model is used under circumstances different from those it was trained on) or concept drift (i.e. the data distribution evolves over time).25 Importantly for computer scientists thinking about how to translate legal norms to ensure compliance, the normative principle underpinning the Advocate General's reasoning in _YS v NK_, i.e. substantive equality, can be used to shape the proportionality test. As confirmed by AG Kokott, "the existing economic inequality between the sexes is not exacerbated further in the present case" so "the requirements regarding the justification of any indirect discrimination are correspondingly
lower". In other words, even though _prima facie_ a particular disadvantage arises punctually, it can be justified if it does not generate or reinforce structural inequalities.26 This is an important indicator for assessing legal compliance.
Footnote 26: It is important to note, however, that the Court has also been criticised for an inconsistent approach to the normative underpinnings of the doctrine of indirect discrimination, i.e it does not always consistently approach indirect discrimination from the perspective of substantive equality.
### Gender Shades
In their seminal work "Gender Shades", Buolamwini and Gebru (Buolamwini and Gebru, 2017) found that several commercial facial recognition systems intended to identify a person's gender failed disproportionately for darker-skinned women, particularly compared to faces of lighter-skinned men. There are many reasons why the predictive performance of a machine learning system differs across groups, including the use of features that are not equally predictive across groups and the use of a machine learning algorithm that is unable to adequately capture the data distributions of minority groups. In the case of Gender Shades, the primary culprit was the under-representation of darker-skinned women in facial recognition data sets. This type of bias can be particularly problematic when the data distribution of majority groups differs substantially from the data distribution of minority groups.27
Footnote 27: This can itself be the result of structural inequality, e.g. unequal access to a given set of jobs, educational opportunities, housing options, etc.[(55)].
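As a toy illustration of the mechanism described above (a synthetic sketch of ours, unrelated to the actual Gender Shades data or models; all names and numbers are invented), a model fitted to data dominated by one group can perform well for that group and poorly for an under-represented group whose feature distribution differs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# group b is under-represented and carries a different feature distribution
n_a, n_b = 9000, 1000
group = np.r_[np.zeros(n_a, dtype=int), np.ones(n_b, dtype=int)]
y = rng.integers(0, 2, n_a + n_b)
signal = np.where(group == 0, 2.0 * y - 1.0, 1.0 - 2.0 * y)  # flipped for group b
X = (signal + rng.normal(0.0, 1.0, n_a + n_b)).reshape(-1, 1)

clf = LogisticRegression().fit(X, y)
pred = clf.predict(X)
for g in (0, 1):
    acc = (pred[group == g] == y[group == g]).mean()
    print(f"group {g}: accuracy = {acc:.2f}")
```

Because the fit is dominated by group 0, the shared decision rule is near-optimal for that group but systematically wrong for group 1.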
Again, we first need to consider whether the problem at stake falls within the material scope of EU discrimination law, which itself depends on the sector in which the facial recognition system is used. For example, if facial recognition is required to gain access to particular goods or services (with the exception of advertisement, education and the media in relation to gender-based discrimination), disparate misclassification rates in relation to gender or skin colour that lead to denying access to protected groups fall within the material scope of Directives 2004/113/EC (Diek et al., 2017) and Directive 2000/43/EC (Diek et al., 2017).28 As race and gender are not used directly as input factors in the algorithms, a case like this might fall within the indirect discrimination doctrine.29 This would open up the possibility of an objective justification.
Footnote 28: In our example, the directives cover goods and services available to the public that are sold both by private and public parties. Furthermore, the broad protection against discrimination anchored in Art. 21 of the Charter applies in relation to public bodies when they are implementing EU law.
Footnote 29: Adams-Prassl et al. (Andris et al., 2017) have pointed out the limitation of usual interpretations of "because of" in direct discrimination, which is primarily designed to combat human discrimination. Even when protected characteristics are not used as factors at the point of decision-making, it is hard to view disparate predictive performance in facial recognition as not causally dependent on race and gender.
For example, in 2022, a Dutch student filed a complaint against her university, stating that the face recognition check included in the fraud detection software used during online exams often failed - seemingly due to the student's dark skin colour. In an interim judgement, the Netherlands Institute for Human Rights states that the disadvantage experienced by the student, together with scientific research pointing towards disparate performance of face recognition algorithms, provides _prima facie_ evidence for indirect discrimination in relation to race (Rosen et al., 2017),30 shifting the burden of proof to the university to prove the law was not violated.
Footnote 30: The Institute specifically refers to Article 7(1) of the Dutch AWGB (Algemene Wet Gelijke Behandeling), which prohibits discrimination based on race (which should be interpreted broadly to also include skin colour) with regard to access to goods and services by institutions in the field of education.
Furthermore, the case of facial recognition software provides an interesting case study for interrogating the boundaries of EU non-discrimination law. Would a particular disadvantage arising from the disparate _quality_ of goods and services, for instance, face recognition, in relation to gender or race fall within the ban on discrimination? Arguably, there is a case for EU non-discrimination law in the area of goods and services to be applied to disparate product safety and performance across demographic groups. For example, could the exclusive use of male crash dummies to test cars be captured by the Gender Directive 2004/113/EC on goods and services, since it results in higher risks of injury for female occupants (Diek et al., 2017)? The case law in this area, however, is scarce and does not provide for immediate analogies (see e.g. C-236/09 _Test-Achats_).
In sum, our analysis of algorithmic unfairness reveals grey areas and inconsistencies in the Court's approach to discrimination. Some of these gaps could be filled via teleological interpretation of EU discrimination law in the digital context, for example in cases of disparate predictive performance, but this also opens up difficult normative questions. Moreover, the unintelligibility of prediction-generating mechanisms and lack of transparency regarding important design choices of AI systems make it difficult for applicants to provide _prima facie_ evidence to even start court proceedings. From a legal compliance perspective, since the CJEU rarely relies on statistical evidence in its judgments, it is difficult to derive general, abstract or readily transferable rules of thumb regarding requirements for thresholds, proportionality or justification from the highly particularised case law of the Court.
## 4. The problem of emptiness
In response to concerns regarding algorithmic bias and unfairness, computer scientists have proposed several fairness metrics and fairness-aware machine learning (fair-ml) algorithms that are designed to measure and mitigate fairness-related harm. A straightforward question, then, is which fairness metric AI practitioners should choose and what value it should take in order to be compliant with the law. From the examples in the previous section, it is clear that EU non-discrimination law does not provide us with explicit rules that must be upheld. Instead, the court is granted judicial discretion that allows it to make normative decisions based on the specifics of an individual case - an approach Wachter et al. (2017) refer to as "contextual equality". To better understand the applicability of fairness metrics and the algorithms that optimise for them, we must therefore consider their normative underpinnings.
### Emptiness in Fairness Metrics
A common denominator of algorithmic fairness metrics is equality - be it in the form of a particular distribution of predictions in the case of group fairness metrics, approximately equal treatment in the case of individual fairness, or equal counterfactual outcomes in the case of counterfactual fairness. The choice of fairness metric, then, boils down to a question that greatly resembles the primary question of non-discrimination law: _what_ should be equal? The most prominent fairness metrics in algorithmic fairness literature concern the classification scenario, where we can distinguish two main lines of work: group fairness and individual fairness.
_Group fairness_ metrics aim to capture the extent to which particular group statistics are equal across sensitive groups. Similar to protected characteristics in non-discrimination law, sensitive features are intended to represent group membership of some socially salient group. Numerous group fairness metrics have been proposed in algorithmic fairness literature, which can be differentiated primarily in terms of which group statistic is compared. Arguably the strongest requirement of equality is set by _demographic parity_33, which requires the proportion of positive predictions (e.g. the selection rate in hiring) to be equal between groups. For example, in the case of Amazon's recruitment algorithm, a positive prediction relates to a benefit (a job interview) and demographic parity essentially requires that receiving the benefit should be independent of sensitive group membership - even if observed data suggests otherwise. By choosing demographic parity as a fairness metric, we thus implicitly assume that whether an individual is deserving of the benefit does not depend on their observed ground truth class. This can be an empirical assumption, e.g. because we believe that observed data is subject to measurement bias, or it can be a more explicit normative assumption, e.g. that the observed ground truth class is affected by historical injustice that we do not wish to replicate (Sundundhi et al., 2017). Contrarily, the group fairness metric _equalised odds_ (Sundhi et al., 2017) considers an individual's ground truth class a factor that can justify existing disparities in the distribution of predictions. Specifically, this metric considers equality of false positive rates (e.g. the proportion of healthy individuals that are falsely predicted to have a disease) and false negative rates (e.g. the proportion of sick individuals that are predicted to be healthy). In the case of the distribution of a benefit, the use of this metric thus reveals a specific normative assumption: the status quo is acceptable (Sundhi et al., 2017). A third commonly cited metric, _equal calibration_, requires that predicted scores are equally well calibrated across groups. A model is considered to be well calibrated if the output of the model (i.e. predicted scores) corresponds to the probability of belonging to the positive class.34 For example, a model is calibrated if out of all instances that receive a predicted score of 0.7, the proportion of instances that actually belongs to the positive class is also 0.7. Essentially, equal calibration requires that the meaning of predicted scores is equal across groups (Sundhi et al., 2017): receiving a score of 0.7 corresponds to a probability of 0.7, irrespective of sensitive group membership. In contrast to demographic parity and equalised odds, equal calibration cannot be readily interpreted as a particular distribution of burdens and benefits and instead relates more to _beliefs_ about (groups of) individuals (Sundhi et al., 2017).
Footnote 34: Calibration is particularly relevant when predicted scores are used as input in decision-making, as a decision threshold for calibrated scores can be directly interpreted in terms of different misclassification costs. For example, if a calibrated confidence score is used for suggesting a specific treatment in clinical decision-making, a decision threshold of 0.1 means that we accept up to 9 false positives (i.e., unnecessary treatments) for each true positive.
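To make these comparisons concrete, the following minimal sketch (ours, not from any cited work; NumPy arrays, a 0/1 target, and a fixed decision threshold are assumptions) computes, per group, the quantities that demographic parity, equalised odds, and equal calibration respectively compare:

```python
import numpy as np

def group_statistics(y_true, y_score, group, threshold=0.5, bins=5):
    """Per-group quantities compared by the three group fairness metrics
    discussed above (a minimal sketch, not an optimised implementation)."""
    y_pred = (y_score >= threshold).astype(int)
    edges = np.linspace(0, 1, bins + 1)
    stats = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        stats[g] = {
            # demographic parity: compare selection rates across groups
            "selection_rate": yp.mean(),
            # equalised odds: compare false positive / false negative rates
            "fpr": yp[yt == 0].mean() if (yt == 0).any() else np.nan,
            "fnr": (1 - yp[yt == 1]).mean() if (yt == 1).any() else np.nan,
            # equal calibration: compare observed positive rates per score bin
            "calibration_by_bin": [
                y_true[m & (y_score >= lo) & (y_score < hi)].mean()
                if (m & (y_score >= lo) & (y_score < hi)).any() else np.nan
                for lo, hi in zip(edges[:-1], edges[1:])
            ],
        }
    return stats
```

Each metric then amounts to requiring (approximate) equality of one of these entries across the groups.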
Where group fairness metrics primarily consider fairness from the perspective of groups of people, notions of individual and counterfactual fairness are primarily concerned with the perspective of the individual. _Counterfactual fairness_ metrics consider fairness from an explicit causal modelling perspective (Sundhi et al., 2017). An elaborate explanation of causal inference is outside of the scope of this paper - it suffices to know that empirical assumptions regarding causal relationships between (sensitive) features and outcome variables are modelled in a causal graph. Counterfactual fairness, then, considers the question: given what we know about this individual, how would the model's prediction change, had they belonged to a different sensitive group? If the prediction changes, the model does not satisfy counterfactual fairness. The underlying normative assumption, then, is that factors that are causally related to sensitive group membership should not impact the outcome.
Normative assumptions become less explicit when we consider metrics that allow the user to specify characteristics that may justify observed disparities. At the extreme, _individual fairness_ requires
that "similar people are treated similarly" (Kamiran et al., 2017). Here, similarity is measured through a quantitative similarity metric, usually based on the input features. Essentially, all normative assumptions are therefore captured in the choice of similarity metric (Kamiran et al., 2017). Inspired by the notion of "objective justification" in the indirect discrimination doctrine, some variations of fairness metrics, such as _conditional demographic parity_(Kamiran et al., 2017; Kamiran et al., 2018) and path-specific counterfactual fairness (Kamiran et al., 2018), allow further conditioning on specific characteristics that are deemed justifiable factors in decision-making irrespective of their (causal) relationship to a sensitive characteristics. For example, in college admissions, we may want to account for varying levels of competitiveness across programs. That is, instead of measuring whether overall admission rates are equal for female and male applicants, we measure equality of selection rates within each program separately.
### Emptiness in Fairness-Aware Machine Learning
In addition to fairness metrics, much work in algorithmic fairness research has centred around technical interventions purporting to mitigate unfairness, which we will refer to as fairness-aware machine learning (fair-ml) techniques. A typical approach is to formulate the problem as an optimisation task, where predictive performance is optimised subject to a fairness constraint.35 Fair-ml techniques are commonly categorised into three groups. _Preprocessing_ approaches modify the data used to train the ML model. Most pre-processing techniques aim at ensuring that the sensitive feature and target variable are statistically independent. For example, the output label (e.g. "hired" or "not hired") of (some) instances in the training data set may be changed according to an algorithmic heuristic. In contrast, _in-processing_ techniques incorporate fairness constraints directly into the machine learning process. For example, instead of optimising solely for misclassification errors, we can include a penalty in the objective function that quantifies to what extent the model deviates from a particular fairness constraint. Finally, _post-processing_ algorithms account for fairness after a model has been trained, including direct adjustments to the model parameters or adjustments to the predictions of the model. For example, to account for disparate hiring rates across genders, we may adjust the decision threshold for one group (e.g. male applicants) such that the proportion of hired individuals is equal.
Footnote 35: We refer the interested reader to Caton and Haas (Caton and Haas, 2017) for a comprehensive overview of existing techniques.
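As one concrete instance of the post-processing category, the sketch below (our illustration, not a published reference implementation) derives group-specific decision thresholds so that every group's selection rate matches a common target, i.e. demographic parity by construction:

```python
import numpy as np

def parity_thresholds(scores, group, target_rate):
    """Per-group score cut-offs so each group selects ~target_rate of its members."""
    thresholds = {}
    for g in np.unique(group):
        s = np.sort(scores[group == g])
        k = int(np.ceil(len(s) * target_rate))   # how many to select in group g
        thresholds[g] = s[-k] if k > 0 else np.inf
    return thresholds

def thresholded_predictions(scores, group, thresholds):
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, group)])
```

Whether such group-dependent thresholds are lawful is taken up below in the discussion of positive action measures.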
Some fair-ml algorithms are explicit regarding the underlying empirical and normative assumptions. For example, the massaging technique introduced by Kamiran et al. (Kamiran et al., 2018) relies on the assumption that discrimination is most likely to occur to individuals close to the decision boundary of a classifier. Consequently, the algorithm relabels instances considered to be border cases such that the base rates are equal across sensitive groups. Similarly, the reject-option classification approach (Kamiran et al., 2018) essentially applies a different decision threshold across sensitive groups, centred around the original decision threshold. As such, these techniques can be interpreted to counteract a specific form of measurement bias in which particular groups receive systematically lower scores. However, despite often being referred to as "de-biasing" techniques, many fair-ml techniques do not explicitly counteract biases that lie at the root of fairness-related harm, but instead optimise directly for a given fairness constraint. For example, pre-processing techniques intended to learn new representations of the data (Kamiran et al., 2018) and constrained learning techniques cannot be readily interpreted as particular decision-making policies. Instead, these techniques take an effects-based approach, assuming that as long as a fairness constraint is satisfied, biases have been counteracted. This can be problematic, especially considering the under-specification of fairness metrics from a normative standpoint. Consequently, simply enforcing a metric by means of a fair-ml technique can have various undesirable consequences. For example, some algorithms enforce equality by reducing benefits for the advantaged group, rather than increasing benefits for the disadvantaged group (Kamiran et al., 2018). Notably, such a levelling-down approach is contrary to the case law of the Court of Justice, which indicated in _Milkova_ that redressing discrimination requires "granting to persons within the disadvantaged category the same advantages as those enjoyed by persons within the favoured category" where there is "a valid point of reference" and "as long as measures reinstating equal treatment have not been adopted" (Bartos et al., 2019; Barabasi et al., 2019).
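For contrast with such effects-based approaches, the massaging heuristic mentioned at the start of this paragraph can be sketched explicitly (a loose, schematic version of ours; the published algorithm computes the required number of relabellings up front rather than looping, and NumPy arrays of 0/1 labels are assumed):

```python
import numpy as np

def massage_labels(scores, y, group, deprived):
    """Flip border-case training labels until base rates match across groups."""
    y = y.copy()
    dep = group == deprived
    while y[dep].mean() < y[~dep].mean():
        promote = np.where(dep & (y == 0))[0]       # rejected, deprived group
        demote = np.where(~dep & (y == 1))[0]       # accepted, favoured group
        if len(promote) == 0 or len(demote) == 0:
            break
        y[promote[np.argmax(scores[promote])]] = 1  # best-scoring border case up
        y[demote[np.argmin(scores[demote])]] = 0    # worst-scoring border case down
    return y
```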
### Emptiness in the Law
Many of the aforementioned fairness metrics are incompatible with each other. In particular, when base rates (i.e. the proportions of positives) differ between groups, no combination of demographic parity, equalised odds, and equal calibration can be satisfied simultaneously (Kamiran et al., 2018; Kamiran et al., 2018). Additionally, when all input features are incorporated in a similarity metric, individual fairness is typically at odds with demographic parity (Kamiran et al., 2017). Given the vastly different empirical and normative assumptions of these metrics, this should not come as a surprise. In particular, different metrics make different assumptions regarding the characteristics that can justify disparities. This brings us back to the problem of emptiness inherent in the principle of equality: what factors should or should not play a part in decision-making? And what normative baselines should be used to assess the right equality standard, the right amount of benefit received or the right quality of treatment? In the next paragraphs, we seek guidance in the legal reasoning of the CJEU.
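Before turning to the case law, the first of these incompatibilities can be made precise with one line of algebra (our illustration; write \(p\) for a group's base rate, \(r\) for its selection rate, and TPR/FPR for its true/false positive rates). Since \(r=\mathrm{TPR}\cdot p+\mathrm{FPR}\cdot(1-p)\), imposing equalised odds (a common TPR and FPR for groups \(A\) and \(B\)) gives

\[r_{A}-r_{B}=(\mathrm{TPR}-\mathrm{FPR})\,(p_{A}-p_{B}),\]

so when \(p_{A}\neq p_{B}\), demographic parity (\(r_{A}=r_{B}\)) additionally forces \(\mathrm{TPR}=\mathrm{FPR}\), i.e. a classifier no more informative than chance.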
In some cases, case law provides us with such guidance. Considering a measure withdrawing benefits from an advantaged group to ensure equality with a disadvantaged group, the Court has been clear. For example, in _Cresco_(Cresco, 2017), a private employer applied a piece of discriminatory legislation concerning religious holidays. The Court of Justice ruled that it could not simply withdraw the benefit from the "advantaged" group of workers to reinstate equality, but rather that it had to extend the benefit to all workers across the protected group (religion). This shows that equal treatment on the face of it is insufficient and that the question "equal to what" was answered by the Court by pointing at the most advantaged group.36
Footnote 36: For other examples of the “levelling up” approach see (Kamiran et al., 2018).
Next, we can consider fair-ml approaches that set group-specific decision thresholds by analogy with the case law of the Court on so-called positive action measures, and in particular quotas. The
Court of Justice has been particularly strict when assessing the lawfulness of quotas. In _Kalanke_ and _Marschall_, for example, the Court only allowed _flexible_ as opposed to strict, unconditional, automatic or absolute quotas (Kalanke, 2017; Kalanke, 2018).37 In addition, EU equality law does not _require_ but only _allows_ positive action measures. Therefore, ensuring the lawfulness of post-processing techniques might amount to walking a tightrope.
Footnote 37: In _Marschall_, the Court allowed the quota because it contained a so-called "saving clause" to the effect that "women are not to be given priority in promotion if reasons specific to an individual male candidate tilt the balance in his favour" (Kalanke, 2017, para. 2017).
Unfortunately, the law is not always as clear. As demonstrated by Schauer (Schauer, 2017), the question of similarity central to judicial precedent and to the comparative heuristics that underpin the Court's discrimination test is not as such an ontological question of similarity, but instead revolves around what the Court _deems_ similar. The CJEU has not been explicit regarding the normative framework that is used to determine what makes two cases similar, resulting in inconsistencies.38 Equality is a polysemous legal principle and shifts in the Court's choice of normative baseline in comparisons are difficult to predict in the absence of an explicit reference framework.
Footnote 38: The Court has, however, consistently made clear that two cases do not need to be similar in absolute terms but rather in light of the nature and purpose of the contested measure.
This is further complicated as social advancements cause societal norms to shift. This fuels the difficulty of defining what a protected characteristic is. Protected characteristics fulfil a double function. On the one hand, they resemble and signal identity categories. On the other hand, in discrimination law, they serve as proxies for historical privileges and disadvantages. In other words, within society, particular groups of people have been disadvantaged in social arrangements and to account for historical injustice, these groups are afforded legal protection. As the boundaries of privileged and non-privileged might shift across contexts, different groups can be considered socially salient in different scenarios.
## 5. The Law is Not a Decision Tree
While algorithmic bias is not yet explicitly regulated, such regulation is likely to be adopted within a few years.39 This in turn raises the question of bias management and responsibility for unlawful algorithmic bias and unfairness. What is required of AI system providers to avoid or mitigate bias and when can AI system providers be said to have fulfilled this requirement? What limitations of current non-discrimination law should new regulations address? In this section, we discuss the implications of our findings.
Footnote 39: A proposal for an EU AI Act is currently under discussion at EU level.
When thinking about the law, many people envision some kind of tree structure, comprising main rules and exceptions to those rules. While to some extent statutory law can be encoded as a decision tree, the analogy does not hold up to scrutiny. Instead, the law is dynamic, open-textured, and based on holistic reasoning. With regard to non-discrimination law in particular, (implicit) normative reasoning plays a fundamental role and the court rarely relies on statistical pointers. Further adding to this complexity, non-discrimination law is a polysemous legal instrument (Kalanke, 2018). It fulfils a host of different social functions, ranging from the recognition of historical injustices and disadvantaged social groups, through the (re)distribution of valuable goods and opportunities, the protection of dignity and autonomy, and the accommodation of different lifestyles, to the facilitation of access to, and participation in, central social institutions such as the market, labour, education, healthcare, etc.40 These various normative aims entail different conceptions of equality. While in some contexts formal equal treatment will suffice to fulfil the mandate of non-discrimination law, in others substantive or even transformative conceptions of equality will be required.
Footnote 40: Many different scholars have reflected on this question: (Kalanke, 2018; Kalanke, 2018; Kalanke, 2018; Kalanke, 2018).
This suggests that, while many fairness metrics have taken inspiration from non-discrimination law, legal compliance cannot translate into a single threshold or fairness metric. Rather, fulfilling the requirements of non-discrimination law demands reflecting explicitly on the normative goal of legal and technical fairness interventions. Not doing so would render the notions of equality and fairness tautological (Kalanke, 2018). In other words: focus should be shifted from questions such as "what should be the value of my fairness metric" to the more difficult yet crucial question of _why_ a particular distribution of burdens and benefits is right in a given context, and ultimately, _who_ should bear the costs of inequality.
To assist practitioners in these endeavours, future work is necessary to uncover the moral implications of design choices in the machine learning development process. While discourse regarding the suitability of fairness metrics has received much attention in the legal community (e.g. (Kalanke, 2018; Kalanke, 2018; Kalanke, 2018)), lawyers often have an idealised view of what fair-ml techniques can achieve (Bowman et al., 2018) and legal scholars have only recently begun to address the question of lawfulness of particular fair-ml strategies (e.g. (Kalanke, 2018; Kalanke, 2018)). Understanding when particular interventions are appropriate is especially important considering the difficulties applicants face in providing _prima facie_ evidence in the context of opaque algorithmic systems.
## 6. Conclusion
In this paper, we set out to build a bridge between two separate disciplines: computer science and law. We analysed three seminal cases of algorithmic unfairness through the lens of EU non-discrimination law and showed that while the law offers protection against some types of algorithmic bias and unfairness, not all types of algorithmic unfairness neatly fall within the law's concepts and analytical frameworks. Subsequently, we explored the role fairness metrics can play in establishing legal compliance. In particular, we uncovered the normative assumptions of fairness metrics and the fair-ml algorithms that optimise for them and compared these to the legal reasoning of the Court of Justice of the EU. This analysis leads us to suggest that future research should inquire into what gets 'lost in translation' when discrimination law as it is operationalised in judicial interpretation is expressed in terms of algorithmic (un)fairness and _vice versa_. This would also entail a broadening of the scope of inquiry: in order to meaningfully answer the question that non-discrimination law poses, we must move beyond merely asking _what_ should be equal and, instead, ask ourselves _why_ a particular distribution of burdens and benefits is right.
###### Acknowledgements.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 898937. |
2310.08604 | Pontryagin Maximum Principle for Incommensurate Fractional-Orders
Optimal Control Problems | We introduce a new optimal control problem where the controlled dynamical
system depends on multi-order (incommensurate) fractional differential
equations. The cost functional to be maximized is of Bolza type and depends on
incommensurate Caputo fractional-orders derivatives. We establish continuity
and differentiability of the state solutions with respect to perturbed
trajectories. Then, we state and prove a Pontryagin maximum principle for
incommensurate Caputo fractional optimal control problems. Finally, we give an
example, illustrating the applicability of our Pontryagin maximum principle. | Faical Ndairou, Delfim F. M. Torres | 2023-10-09T15:09:26Z | http://arxiv.org/abs/2310.08604v1 | # Pontryagin Maximum Principle for Incommensurate Fractional-Orders Optimal Control Problems
###### Abstract
We introduce a new optimal control problem where the controlled dynamical system depends on multi-order (incommensurate) fractional differential equations. The cost functional to be maximized is of Bolza type and depends on incommensurate Caputo fractional-orders derivatives. We establish continuity and differentiability of the state solutions with respect to perturbed trajectories. Then, we state and prove a Pontryagin maximum principle for incommensurate Caputo fractional optimal control problems. Finally, we give an example, illustrating the applicability of our Pontryagin maximum principle.
incommensurate fractional-orders derivatives; fractional optimal control; continuity and differentiability of state trajectories; needle-like variations 26A33; 49K15
## 1 Introduction
The celebrated Pontryagin maximum principle was first formulated in 1956, being widely regarded as the central result of optimal control theory [1]. The significance of the maximum principle lies in the fact that rather than maximizing over a function space, the problem is converted to a pointwise optimization. Various generalizations of the Pontryagin maximum principle are still being investigated nowadays [2; 3; 4].
Fractional optimal control theory, as a branch of applied mathematics, deals with optimization issues for controlled fractional dynamics coupled with a performance index functional [5; 6]. The development of the theory started at the beginning of the 21st century with some pioneering works [7; 8], where necessary optimality conditions are derived using techniques from variational analysis. A major contribution of those works is a weak version of the Pontryagin maximum principle, which consists in reducing the fractional optimal control problem (FOCP) to a fractional boundary value problem together with an optimality condition. Later, the theory advanced with the work of [9], which established a strong maximum principle of Pontryagin type, introducing a Hamiltonian maximality condition in the set of necessary optimality conditions instead of the optimality condition from the weak versions. Indeed, since this seminal work of 2014, the subject of maximum principles for FOCPs has gained more interest. For instance, a new approach to the Pontryagin maximum principle for a problem involving a Lagrangian cost given by a Riemann-Liouville fractional integral subject to Caputo fractional dynamics is considered in [10]. The authors of [11] derived a Pontryagin maximum principle for a FOCP defined by a general Bolza cost of fractional integral type with terminal constraints on the Caputo fractional dynamics. More recently, in 2022, Kamocki investigated an optimal control problem of multi-order fractional systems under a Lagrange-type functional, proving two existence results [12]. In contrast with Kamocki, here we are interested in proving necessary optimality conditions of Pontryagin type.
Another important development in control theory is the so-called Bellman's dynamic programming principle, which gives a necessary and sufficient condition of optimality [13]. In this direction, the approach for solving an optimal control problem can be achieved by determining a certain value function that might happen to be a viscosity solution to
a Hamilton-Jacobi-Bellman (HJB) equation. The dynamic programming principle has been extended to fractional discrete-time systems by the authors of references [14, 15]. Furthermore, an attempt to derive a fractional version of the HJB equation has been made in [16]. In [17], Gomoyunov studies an extension of the dynamic programming principle to the case of fractional-order dynamical systems. A new approach is required for such problems so that for every intermediate time \(t\in(0,T)\), it is necessary to introduce an auxiliary optimal control problem (sub-problem) with this time \(t\) considered as the initial one. A derivation of the fractional version of the HJB equation is also studied in depth in [17]. To sum up, it is important to mention that the Pontryagin maximum principle, along with Bellman's dynamic programming principle, is one of the most effective and fundamental tools for investigating solutions to various optimal control problems.
Nowadays, there are many different definitions of fractional-order integrals and derivatives [18] and, in some sense, it is possible to consider a single broad class of fractional-order operators that includes the existing ones as particular cases [19]. This is important for analyzing a single operator rather than focusing on each individual one separately. It also enriches the subject of Pontryagin's maximum principle to handle a wider and more general class of fractional-order operators. For instance, a maximum principle is obtained for a combined fractional operator with a general analytic kernel in [20]. Also, some recent results on the Pontryagin maximum principle are investigated for fractional stochastic delayed systems with non-instantaneous impulses [21], for a degenerate fractional differential equation [22], and for distributed-order fractional derivatives [23]. In contrast, the main aim of our current study is to utilize the idea of multi-order or incommensurate orders of derivatives in the definition of optimal control problems and then analyze their solutions. In doing so, we start with the most popular fractional derivative, which is the Caputo one.
The structure of the article is as follows. In Section 2, we present some basic definitions and properties from fractional calculus. In Section 3, our contribution is given: we start by introducing the incommensurate non-local fractional optimal control problem, then we prove the continuity of solutions (Lemma 3), differentiability of perturbed trajectories (Lemma 4), and, finally, the proof of the Pontryagin maximum principle (Theorem 1). In Section 4, we give an example illustrating the application of the new Pontryagin maximum principle. We end with Section 5, summarizing our work and giving some perspectives for future work.
## 2 Preliminaries
In this section, we briefly recall the necessary notions and results from fractional calculus. For more on the subject, we refer the interested readers to the books [24, 25, 26].
Let \((\alpha_{i})_{i=1,\ldots,n}\in(0,1)\) be a multi-order of real numbers. In the sequel, we use the following notation:
\[L^{(\alpha)}([a,b],\mathbb{R}^{n}):=\Big{\{}x\in L^{1}([a,b],\mathbb{R}^{n}):I ^{\alpha_{i}}_{a^{+}}x_{i},I^{\alpha_{i}}_{b^{-}}x_{i}\in AC([a,b],\mathbb{R}) \Big{\}},\]
where \(I^{\alpha_{i}}_{a^{+}}\) and \(I^{\alpha_{i}}_{b^{-}}\) represent, respectively, the left and right Riemann-Liouville integrals of order \(\alpha_{i}\). We also use the notation \(AC^{(\alpha)}([a,b],\mathbb{R}^{n})\) to represent the set of absolutely continuous functions that can be represented as
\[x_{i}(t)=x_{i}(a)+I^{\alpha_{i}}_{a^{+}}f(t)\,\,\,\text{and}\,\,\,x_{i}(t)=x_{ i}(b)+I^{\alpha_{i}}_{b^{-}}f(t),\]
for some functions \(f\in L^{\alpha_{i}}:=\big{\{}x\in L^{1}([a,b],\mathbb{R}):I^{\alpha_{i}}_{a^{+}}x,\,I^{\alpha_{i}}_{b^{-}}x\in AC([a,b],\mathbb{R})\big{\}}\).
**Definition 1**.: _The left- and right-sided Riemann-Liouville incommensurate fractional derivatives of orders \((\alpha_{i})_{i=1,\ldots,n}\in(0,1)\) of a function \(x\in L^{(\alpha)}\) are defined, respectively, by_
\[D_{a^{+}}^{(\alpha)}x(t)=\begin{cases}D_{a^{+}}^{\alpha_{1}}x_{1}(t),\\ D_{a^{+}}^{\alpha_{2}}x_{2}(t),\\ \vdots\\ D_{a^{+}}^{\alpha_{n}}x_{n}(t),\end{cases};\quad\quad D_{b^{-}}^{(\alpha)}x(t)=\begin{cases}D_{b^{-}}^{\alpha_{1}}x_{1}(t),\\ D_{b^{-}}^{\alpha_{2}}x_{2}(t),\\ \vdots\\ D_{b^{-}}^{\alpha_{n}}x_{n}(t),\end{cases}\]
_with_
\[D_{a^{+}}^{\alpha_{i}}x_{i}(t)=\frac{d}{dt}\left(I_{a^{+}}^{1-\alpha_{i}}x_{i} (t)\right),\;D_{b^{-}}^{\alpha_{i}}x_{i}(t)=-\frac{d}{dt}\left(I_{b^{-}}^{1- \alpha_{i}}x_{i}(t)\right),\]
_where \(I_{a^{+}}^{1-\alpha_{i}}\) and \(I_{b^{-}}^{1-\alpha_{i}}\) represent, respectively, the left- and right-sided Riemann-Liouville fractional integrals of order \(1-\alpha_{i}\)._
**Definition 2**.: _The left- and right-sided Caputo incommensurate fractional derivatives of order \((\alpha_{i})_{i=1,\ldots,n}\in(0,1)\) of a function \(x\in AC^{(\alpha)}\) are defined, respectively, by_
\[{}^{\mathrm{c}}D_{a^{+}}^{(\alpha)}x(t)=\begin{cases}{}^{\mathrm{c}}D_{a^{+}}^{\alpha_{1}}x_{1}(t),\\ {}^{\mathrm{c}}D_{a^{+}}^{\alpha_{2}}x_{2}(t),\\ \vdots\\ {}^{\mathrm{c}}D_{a^{+}}^{\alpha_{n}}x_{n}(t),\end{cases};\quad\quad{}^{\mathrm{c}}D_{b^{-}}^{(\alpha)}x(t)=\begin{cases}{}^{\mathrm{c}}D_{b^{-}}^{\alpha_{1}}x_{1}(t),\\ {}^{\mathrm{c}}D_{b^{-}}^{\alpha_{2}}x_{2}(t),\\ \vdots\\ {}^{\mathrm{c}}D_{b^{-}}^{\alpha_{n}}x_{n}(t),\end{cases}\]
_where_
\[{}^{\mathrm{c}}D_{a^{+}}^{\alpha_{i}}x_{i}(t)=I_{a^{+}}^{1-\alpha_{i}}\bigg{(}\frac{d}{dt}x_{i}(t)\bigg{)},\quad{}^{\mathrm{c}}D_{b^{-}}^{\alpha_{i}}x_{i}(t)=-I_{b^{-}}^{1-\alpha_{i}}\bigg{(}\frac{d}{dt}x_{i}(t)\bigg{)}.\]
Note that integration by parts is a powerful tool when two functions are multiplied together, being useful for our purposes in the proof of the Pontryagin maximum principle. In the sequel, the dot \(\cdot\) is used for indicating scalar products.
**Lemma 1** (Integration by parts formula [25]).: _Let \(x\in L^{(\alpha)}\) and \(y\in AC^{(\alpha)}\). Then,_
\[\int_{a}^{b}x(t)\cdot{}^{\mathrm{c}}D_{a^{+}}^{(\alpha)}y(t)dt=\Big{[}y(t) \cdot I_{b^{-}}^{1-(\alpha)}x(t)\Big{]}_{a}^{b}+\int_{a}^{b}y(t)\cdot D_{b^{-} }^{(\alpha)}x(t)dt,\]
_where_
\[I_{b^{-}}^{1-(\alpha)}x(t)=\begin{cases}I_{b^{-}}^{1-\alpha_{1}}x_{1}(t),\\ I_{b^{-}}^{1-\alpha_{2}}x_{2}(t),\\ \vdots\\ I_{b^{-}}^{1-\alpha_{n}}x_{n}(t).\end{cases}\]
In what follows, we recall a generalized Gronwall inequality that is useful to prove the continuity and differentiability of perturbed trajectories.
**Lemma 2** (Generalized Gronwall inequality [27]).: _Let \(\alpha\) be a positive real number and let \(p(\cdot)\), \(q(\cdot)\), and \(u(\cdot)\) be non-negative continuous functions on \([a,b]\) with \(q(\cdot)\) monotonic increasing on \([a,b)\). If_
\[u(t)\leq p(t)+q(t)\int_{a}^{t}(t-s)^{\alpha-1}u(s)ds,\]
_then_
\[u(t)\leq p(t)+\int_{a}^{t}\Biggl{[}\sum_{n=1}^{\infty}\frac{\left(q(t)\Gamma (\alpha)\right)^{n}}{\Gamma(n\alpha)}(t-s)^{n\alpha-1}p(s)\Biggr{]}ds\]
_for all \(t\in[a,b)\)._
## 3 Main Results
In this section, our main concern is to find a control function \(u\in L^{\infty}([a,b],\mathbb{R}^{m})\) and its corresponding state trajectory \(x\in AC^{(\alpha)}([a,b],\mathbb{R}^{n})\), solution to the following incommensurate non-local fractional optimal control problem:
\[J[x(\cdot),u(\cdot)]=\varphi(b,x(b))+\int_{a}^{b}L(t,x(t),u(t))dt \longrightarrow\max,\] \[{}^{\mathrm{c}}D^{(\alpha)}_{a^{+}}x(t)=f(t,x(t),u(t)),\quad t\in[a,b] \ a.e., \tag{1}\] \[x(\cdot)\in AC^{(\alpha)},\quad u(\cdot)\in L^{\infty},\] \[x(a)=x_{a}\in\mathbb{R}^{n},\quad u(t)\in\Omega,\]
where \(\Omega\) is a closed subset of \(\mathbb{R}^{m}\). The data functions \(L:[a,b]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}\), \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\) and \(f:[a,b]\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}\) are subject to the following assumptions:
* The function \(\varphi\) is of class \(\mathcal{C}^{1}\).
* Functions \(L\) and \(f\) are continuous in all three of their arguments and of class \(\mathcal{C}^{1}\) with respect to \(x\) and, in particular, locally Lipschitz-continuous, that is, for every compact \(B\subset\mathbb{R}^{n}\) and for all \(x,y\in B\) there is \(K>0\) such that \(|L(t,x,u)-L(t,y,u)|\leq K\|x-y\|\) and \(\|f(t,x,u)-f(t,y,u)\|\leq K\|x-y\|\).
* There exists also \(N>0\), such that \(\left\|\frac{\partial L(t,x,u)}{\partial x}\right\|\leq N\) and \(\left\|\frac{\partial f(t,x,u)}{\partial x}\right\|\leq N\).
* With respect to the control \(u\), there exists \(M>0\) such that \[|L(t,x,u)|\leq M,\quad\|\ f(t,x,u)\ \|\leq M,\quad\forall(t,x)\in[a,b]\times \mathbb{R}^{n}.\]
### Needle-like Perturbation of the Optimal Control
We will prove a first-order necessary optimality condition of the Pontryagin type, which is only sufficient in very particular cases [28]. Any proof of a necessary optimality condition, e.g., Fermat theorem about stationary points or the classical Pontryagin maximum principle [1], begins by assuming the existence of a solution. We do the same here: we assume \(u^{*}(t)\in\Omega\) to be an optimal control to problem (1) for \(t\in[a,b]\). The reader who is interested in the question of the existence of \(u^{*}\) is referred to the recent paper [12]. Our aim, in this section, is to derive continuity and differentiability properties of perturbed trajectories, which are crucial to proving the necessary optimality condition for the optimal control problem (1). One way of achieving this is to perturb the optimal control by a needle-like variation and study the behavior of the corresponding state with respect to the optimal curve.
Denote by \(\mathcal{L}[F(\cdot)]\) the set of all Lebesgue points in \([a,b)\) of the essentially bounded functions \(t\mapsto f(t,x(t),u(t))\) and \(t\mapsto L(t,x(t),u(t))\). Then, for \((\tau,v)\in\mathcal{L}[F(\cdot)]\times\Omega\), and for every \(\theta\in[0,b-\tau)\), let us consider the needle-like variation \(u^{\theta}\in L^{\infty}([a,b],\mathbb{R}^{m})\) associated to the optimal control \(u^{*}\), which is given by
\[u^{\theta}(t)=\begin{cases}u^{*}(t)&\text{if}\quad t\not\in[\tau,\tau+\theta ),\\ v&\text{if}\quad t\in[\tau,\tau+\theta),\end{cases} \tag{2}\]
for almost every \(t\in[a,b]\).
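For readers who prefer code, the variation (2) is straightforward to realise; a minimal sketch (ours, with \(u^{*}\) represented as a Python callable):

```python
def needle_variation(u_star, tau, theta, v):
    """Needle-like variation (2): replace u* by the constant v on [tau, tau+theta)."""
    def u_theta(t):
        return v if tau <= t < tau + theta else u_star(t)
    return u_theta
```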
Lemma 3 (Continuity of solutions): _For any \((\tau,v)\in\mathcal{L}[F(\cdot)]\times\Omega\), denote by \(x^{\theta}\) the corresponding state trajectory to the needle-like variation \(u^{\theta}\), that is, the state solution of_
\[{}^{\mathrm{c}}D^{(\alpha)}_{a^{+}}x^{\theta}(t)=f\Big{(}t,x^{\theta}(t),u^{\theta}(t)\Big{)},\quad x^{\theta}(a)=x_{a}. \tag{3}\]
_Then, the state \(x^{\theta}\) converges uniformly to the optimal state trajectory \(x^{*}\) whenever \(\theta\) tends to zero._
Proof.: By definition of incommensurate Caputo derivative, we have
\[{}^{\mathrm{c}}D_{a+}^{(\alpha)}\Big{(}x^{\theta}(t)-x^{*}(t)\Big{)}=f(t,x^{ \theta}(t),u^{\theta}(t))-f(t,x^{*}(t),u^{*}(t)).\]
Then, the integral representation is obtained as
\[x^{\theta}(t)-x^{*}(t)=\int_{a}^{t}\mathrm{diag}\bigg{(}\frac{(t-s)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\bigg{)}\cdot\Big{(}f(s,x^{\theta}(s),u^{\theta}(s))-f(s,x^{*}(s),u^{*}(s))\Big{)}ds.\]
Moreover, the function \(\beta\mapsto\frac{c^{\beta-1}}{\Gamma(\beta)}\) is continuous on \([\min\limits_{1\leq i\leq n}\alpha_{i},1]\), where \(c\) is a non-zero constant. Thus, by the extreme value theorem due to Weierstrass, it attains a maximum. Hence, there exists \(\tilde{\alpha}\in[\min\limits_{1\leq i\leq n}\alpha_{i},1]\) such that \(\frac{c^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\leq\frac{c^{\tilde{\alpha}-1}}{\Gamma(\tilde{\alpha})}\). This leads to
\[x^{\theta}(t)-x^{*}(t)\leq\int_{a}^{t}\frac{(t-s)^{\tilde{\alpha}-1}}{\Gamma(\tilde{\alpha})}\cdot\Big{(}f(s,x^{\theta}(s),u^{\theta}(s))-f(s,x^{*}(s),u^{*}(s))\Big{)}ds\\ =I_{a^{+}}^{\tilde{\alpha}}\Big{(}f(t,x^{\theta}(t),u^{\theta}(t))-f(t,x^{*}(t),u^{*}(t))\Big{)}.\]
Further, the following relation holds for function \(f\),
\[f(t,x^{\theta}(t),u^{\theta}(t))-f(t,x^{*}(t),u^{*}(t))=\{f(t,x^ {\theta}(t),u^{\theta}(t))-f(t,x^{*}(t),u^{\theta}(t))\}\\ +\{f(t,x^{*}(t),u^{\theta}(t))-f(t,x^{*}(t),u^{*}(t))\}.\]
Using the triangle inequality, and noticing that \(u^{\theta}\) and \(u^{*}\) differ only on \([\tau,\tau+\theta)\), we obtain
\[\|\ x^{\theta}(t)-x^{*}(t)\ \|\leq I_{a^{+}}^{\tilde{\alpha}}\Big{(}\|\ f(t,x^{ \theta}(t),u^{\theta}(t))-f(t,x^{*}(t),u^{\theta}(t))\ \|\Big{)}\\ +I_{\tau^{+}}^{\tilde{\alpha}}\Big{(}\|\ f(t,x^{*}(t),u^{\theta}( t))-f(t,x^{*}(t),u^{*}(t))\ \|\Big{)}.\]
Next, from data assumptions, \(K\) is a Lipschitz constant of \(f\) and \(M\) is an upper bound of \(f\) with respect to the control function. Thus, it follows that
\[\|\ x^{\theta}(t)-x^{*}(t)\ \|\leq KI_{a^{+}}^{\tilde{\alpha}}\Big{(}\|\ x^{ \theta}(t)-x^{*}(t)\ \|\Big{)}+M\frac{\theta^{\tilde{\alpha}}}{\Gamma(\tilde{\alpha}+1)}.\]
Now, by applying the generalized Gronwall inequality of Lemma 2, it follows that
\[\|\ x^{\theta}(t)-x^{*}(t)\ \|\leq\frac{M\theta^{\tilde{\alpha}}}{\Gamma(\tilde{\alpha}+1)}\Bigg{[}1+\int_{a}^{t}\sum_{n=1}^{\infty}\frac{K^{n}}{\Gamma(n\tilde{\alpha})}(t-s)^{n\tilde{\alpha}-1}ds\Bigg{]}\\ \leq\frac{M\theta^{\tilde{\alpha}}}{\Gamma(\tilde{\alpha}+1)}E_{\tilde{\alpha},1}(K(b-a)^{\tilde{\alpha}}),\]
where \(E_{\tilde{\alpha},1}\) is the Mittag-Leffler function of parameter \(\tilde{\alpha}\) [29]. Hence, we obtain that \(\|x^{\theta}-x^{*}\|\) converges to zero on \([a,b]\), which ends the proof.
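The bound is expressed through the Mittag-Leffler function; for completeness, a minimal numerical sketch of its defining series (our helper, not from the paper; a truncated series is adequate only for moderate arguments):

```python
import math

def mittag_leffler(z, alpha, beta=1.0, terms=150):
    """Truncated series E_{alpha,beta}(z) = sum_k z**k / Gamma(alpha*k + beta)."""
    return sum(z**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity check: E_{1,1}(z) = exp(z), so mittag_leffler(1.0, 1.0) ~ 2.71828...
```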
Lemma 4 (Differentiability of the perturbed trajectory).: _Suppose that \((x^{*},u^{*})\) is an optimal pair to problem (1). Then, for \((\tau,v)\in\mathcal{L}[F(\cdot)]\times\Omega\), the quotient variational trajectory
\(\frac{x^{\theta}(\cdot)-x^{*}(\cdot)}{\theta}\) converges uniformly on \([\tau+\theta,b]\) to \(\eta(\cdot)\) when \(\theta\) tends to zero, where \(\eta(\cdot)\) is the unique solution to the incommensurate left Caputo fractional Cauchy problem
\[\begin{cases}{}^{c}D_{\tau^{+}}^{(\alpha)}\eta(t)=\frac{\partial f(t,x^{*}(t),u^{*}(t))}{\partial x}\cdot\eta(t),\quad t\in\,]\tau,b],\\ I_{\tau^{+}}^{1-\bar{\alpha}}\eta(\tau)=f(\tau,x^{*}(\tau),v)-f(\tau,x^{*}(\tau),u^{*}(\tau)),\end{cases} \tag{4}\]
with \(\bar{\alpha}\in[\min\limits_{1\leq i\leq n}\alpha_{i},1]\), such that \(\frac{c^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\leq\frac{c^{\bar{\alpha}-1}}{\Gamma(\bar{\alpha})}\), for some non-zero constant \(c\) and \(i=1,\ldots,n\).
Proof.: Set \(z^{\theta}(t)=\frac{x^{\theta}(t)-x^{*}(t)}{\theta}-\eta(t)\) for all \(t\in[\tau+\theta,b]\). Our aim is to prove that \(z^{\theta}\) converges uniformly to zero whenever \(\theta\to 0\).
The integral representation of \(z^{\theta}\) is obtained as follows:
\[z^{\theta}(t)=\int_{\tau}^{\tau+\theta}\text{diag}\bigg{(}\frac{(t-s)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\bigg{)}\cdot\frac{f(s,x^{\theta}(s),v)-f(s,x^{*}(s),v)}{\theta}ds\\ +\int_{\tau}^{\tau+\theta}\text{diag}\bigg{(}\frac{(t-s)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\bigg{)}\cdot\frac{f(s,x^{*}(s),v)-f(s,x^{*}(s),u^{*}(s))}{\theta}ds\\ -\frac{1}{\Gamma(\bar{\alpha})}(t-\tau)^{\bar{\alpha}-1}(f(\tau,x^{*}(\tau),v)-f(\tau,x^{*}(\tau),u^{*}(\tau)))\\ +\int_{\tau+\theta}^{t}\text{diag}\bigg{(}\frac{(t-s)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\bigg{)}\cdot\bigg{(}\frac{f(s,x^{\theta}(s),u^{*}(s))-f(s,x^{*}(s),u^{*}(s))}{\theta}\\ -\frac{\partial f(s,x^{*}(s),u^{*}(s))}{\partial x}\times\frac{x^{\theta}(s)-x^{*}(s)}{\theta}\bigg{)}ds\\ -\int_{\tau}^{\tau+\theta}\text{diag}\bigg{(}\frac{(t-s)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\bigg{)}\cdot\frac{\partial f(s,x^{*}(s),u^{*}(s))}{\partial x}\times\eta(s)ds\\ +\int_{\tau+\theta}^{t}\text{diag}\bigg{(}\frac{(t-s)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\bigg{)}\cdot\frac{\partial f(s,x^{*}(s),u^{*}(s))}{\partial x}\times z^{\theta}(s)ds.\]
Note that by the existence property of \(\bar{\alpha}\), we have that \(\frac{(t-s)^{\alpha_{i}-1}}{\Gamma(\alpha_{i})}\leq\frac{(t-s)^{\bar{\alpha}- 1}}{\Gamma(\bar{\alpha})}\) for all \(i=1,\ldots,n\). Thus, we can deduce that
\[z^{\theta}(t)\leq\frac{1}{\Gamma(\bar{\alpha})}\int_{\tau}^{\tau +\theta}(t-s)^{\bar{\alpha}-1}\frac{f(s,x^{\theta}(s),v)-f(s,x^{*}(s),v)}{ \theta}ds\\ +\frac{1}{\Gamma(\bar{\alpha})}\int_{\tau}^{\tau+\theta}(t-s)^{ \bar{\alpha}-1}\frac{f(s,x^{*}(s),v)-f(s,x^{*}(s),u^{*}(s))}{\theta}ds\\ -\frac{1}{\Gamma(\bar{\alpha})}(t-\tau)^{\bar{\alpha}-1}(f(\tau, x^{*}(\tau),v)-f(\tau,x^{*}(\tau),u^{*}(\tau)))\\ +\frac{1}{\Gamma(\bar{\alpha})}\int_{\tau+\theta}^{t}(t-s)^{\bar{ \alpha}-1}\bigg{(}\frac{f(s,x^{\theta}(s),u^{*}(s))-f(s,x^{*}(s),u^{*}(s))}{ \theta}\\ -\frac{\partial f(s,x^{*}(s),u^{*}(s))}{\partial x}\times\frac{x^{ \theta}(s)-x^{*}(s)}{\theta}\bigg{)}ds\\ -\frac{1}{\Gamma(\bar{\alpha})}\int_{\tau}^{\tau+\theta}(t-s)^{ \bar{\alpha}-1}\frac{\partial f(s,x^{*}(s),u^{*}(s))}{\partial x}\times\eta(s) ds\\ +\frac{1}{\Gamma(\bar{\alpha})}\int_{\tau+\theta}^{t}(t-s)^{\bar{ \alpha}-1}\frac{\partial f(s,x^{*}(s),u^{*}(s))}{\partial x}\times z^{\theta}( s)ds. \tag{5}\]
In line with the work of [11], specifically the proof of Proposition 3.3 therein, we obtain that each term appearing on the right-hand side of (5) is bounded, which yields the following estimate:
\[\|z^{\theta}(t)\|\leq\Theta^{\theta}_{1}(t-(\tau+\theta))^{\lambda-1}E_{\tilde{ \alpha},\tilde{\alpha}^{\prime}}\big{(}N(b-a)^{\tilde{\alpha}}\big{)}+\Theta^{ \theta}_{2}(t-(\tau+\theta))^{\lambda^{\prime}-1}E_{\tilde{\alpha},\tilde{ \alpha}^{\prime}}\big{(}N(b-a)^{\tilde{\alpha}}\big{)},\]
where functions \(\Theta^{\theta}_{1}\) and \(\Theta^{\theta}_{2}\) both converge uniformly to zero whenever \(\theta\) tends to zero. Hence, we conclude that \(z^{\theta}(t)\) converges uniformly to zero as \(\theta\) goes to zero, which is the desired result.
### Pontryagin's Maximum Principle for Problem (1)
The fractional Pontryagin maximum principle has many applications, for example in engineering, economics, and health [30; 31]. Here, we state and prove the main result of our work: a Pontryagin maximum principle for the incommensurate fractional-order optimal control problem (1).
**Theorem 1** (Pontryagin Maximum Principle for (1)).: _If \((x^{*}(\cdot),u^{*}(\cdot))\) is an optimal pair for (1), then there exists \(\lambda\in L^{(\alpha)}\), called the adjoint function variable, such that the following conditions hold for all \(t\) in the interval \([a,b]\):_
* _the maximality condition_ \[H(t,x^{*}(t),u^{*}(t),\lambda(t))=\max_{\omega\in\Omega}H(t,x^{*}(t),\omega, \lambda(t));\] (6)
* _the adjoint system_ \[D^{(\alpha)}_{b-}\lambda(t)=\frac{\partial H}{\partial x}(t,x^{*}(t),u^{*}(t),\lambda(t));\] (7)
* _the transversality condition_ \[I^{1-(\alpha)}_{b^{-}}\lambda(b)=\frac{\partial\varphi}{\partial x}(b,x^{*}( b)),\] (8)
_where the Hamiltonian function \(H\) is defined by_
\[H(t,x,u,\lambda)=L(t,x,u)+\lambda\cdot f(t,x,u). \tag{9}\]
Proof.: Let \(x^{\theta}(t)\) be the corresponding state trajectory to the needle-like variation \(u^{\theta}\) defined by (2). Observe that, by integration by parts, we have for \(\lambda(\cdot)\in L^{(\alpha)}\),
\[\int_{a}^{b}\lambda(t)\cdot{}^{c}D^{(\alpha)}_{a^{+}}x^{\theta}(t)dt=\Big{[}x ^{\theta}(t)\cdot I^{1-(\alpha)}_{b-}\lambda(t)\Big{]}_{a}^{b}+\int_{a}^{b}x^ {\theta}(t)\cdot D^{(\alpha)}_{b-}\lambda(t)dt. \tag{10}\]
This relation can be added to the objective functional at \((x^{\theta},u^{\theta})\) defined by
\[J(x^{\theta},u^{\theta})=\varphi(b,x^{\theta}(b))+\int_{a}^{b}L\Big{(}t,x^{ \theta}(t),u^{\theta}(t)\Big{)}ds,\]
meaning that
\[J(x^{\theta},u^{\theta})=\varphi(b,x^{\theta}(b))+\int_{a}^{b}\Big{[}L\Big{(}t,x^{\theta}(t),u^{\theta}(t)\Big{)}+\lambda(t)\cdot{}^{c}D^{(\alpha)}_{a^{+}}x^{\theta}(t)-x^{\theta}(t)\cdot D^{(\alpha)}_{b-}\lambda(t)\Big{]}dt\\ -x^{\theta}(b)\cdot I^{1-(\alpha)}_{b-}\lambda(b)+x^{\theta}(a)\cdot I^{1-(\alpha)}_{a}\lambda(a), \tag{11}\]
which, by substituting (3) into this latter expression, leads to
\[J(x^{\theta},u^{\theta})=\varphi(b,x^{\theta}(b))+\int_{a}^{b} \Bigl{[}H\Bigl{(}t,x^{\theta}(t),u^{\theta}(t),\lambda(t)\Bigr{)}-x^{\theta}(t) \cdot D_{b-}^{(\alpha)}\lambda(t)\Bigr{]}dt\] \[-x^{\theta}(b)\cdot I_{b-}^{1-(\alpha)}\lambda(b)+x_{a}\cdot I_{a} ^{1-(\alpha)}\lambda(a),\]
where \(H(t,x,u,\lambda)=L(t,x,u)+\lambda\cdot f(t,x,u)\). Next, we write down the Taylor expansions
\[\varphi(b,x^{\theta}(b))=\varphi(b,x^{*}(b))+\Big{(}x^{\theta}(b)-x^{*}(b)\Big{)}\cdot\frac{\partial\varphi}{\partial x}(b,x^{*}(b))+o\Big{(}\parallel x^{\theta}-x^{*}\parallel\Big{)};\\ H\Big{(}t,x^{\theta}(t),u^{\theta}(t),\lambda(t)\Big{)}=H\Big{(}t,x^{*}(t),u^{\theta}(t),\lambda(t)\Big{)}+\Big{(}x^{\theta}(t)-x^{*}(t)\Big{)}\cdot\frac{\partial H}{\partial x}\Big{(}t,x^{*}(t),u^{\theta}(t),\lambda(t)\Big{)}\\ +o\Big{(}\parallel x^{\theta}-x^{*}\parallel\Big{)}.\]
Note that, by the continuity Lemma 3, we have the uniform convergence of \(\|x^{\theta}-x^{*}\|\to 0\) whenever \(\theta\to 0\). Thus, the residue term in the Taylor expansion can be expressed as a function of \(\theta\). Therefore, we can evaluate the quotient \(\frac{J(x^{\theta},u^{\theta})-J(x^{*},u^{*})}{\theta}:=\delta J\) as follows:
\[\delta J=\frac{x^{\theta}(b)-x^{*}(b)}{\theta}\cdot\frac{ \partial\varphi}{\partial x}(b,x^{*}(b))+\int_{a}^{b}\frac{H\bigl{(}t,x^{*}(t ),u^{\theta}(t),\lambda(t)\bigr{)}-H(t,x^{*}(t),u^{*}(t),\lambda(t))}{\theta}dt \\ +\int_{a}^{b}\Bigl{(}\frac{\partial H}{\partial x}\Bigl{(}t,x^{*} (t),u^{\theta}(t),\lambda(t)\Bigr{)}-D_{b-}^{(\alpha)}\lambda(t)\Bigr{)}\cdot \frac{x^{\theta}(t)-x^{*}(t)}{\theta}dt-\frac{x^{\theta}(b)-x^{*}(b)}{\theta} \cdot I_{b-}^{1-(\alpha)}\lambda(b)\\ +o(\theta)(1+b-a).\]
Now, by the differentiability Lemma 4, we have that \(\frac{x^{\theta}(t)-x^{*}(t)}{\theta}\) converges uniformly to \(\eta(t)\) when \(\theta\) tends to zero. Therefore, the limit of \(\delta J\) when \(\theta\) tends to zero can be expressed as
\[\lim_{\theta\to 0}\delta J=\eta(b)\cdot\frac{\partial\varphi}{\partial x}(b,x^{*}(b))+\lim_{\theta\to 0}\int_{a}^{b}\frac{H\big{(}t,x^{*}(t),u^{\theta}(t),\lambda(t)\big{)}-H(t,x^{*}(t),u^{*}(t),\lambda(t))}{\theta}dt\\ +\int_{a}^{b}\Big{(}\frac{\partial H}{\partial x}(t,x^{*}(t),u^{*}(t),\lambda(t))-D_{b-}^{(\alpha)}\lambda(t)\Big{)}\cdot\eta(t)dt-\eta(b)\cdot I_{b-}^{1-(\alpha)}\lambda(b).\]
Next, we fix
\[D_{b-}^{(\alpha)}\lambda(t)=\frac{\partial H}{\partial x}(t,x^{*}(t),u^{*}(t),\lambda(t))\quad\text{ with }\quad I_{b-}^{1-(\alpha)}\lambda(b)=\frac{\partial\varphi}{ \partial x}(b,x^{*}(b)),\]
that is, the adjoint Equation (7) and the transversality condition (8). Thus, we are left with
\[\lim_{\theta\to 0}\delta J=\lim_{\theta\to 0}\int_{a}^{b}\frac{H\bigl{(}t,x^{*}(t),u^{ \theta}(t),\lambda(t)\bigr{)}-H(t,x^{*}(t),u^{*}(t),\lambda(t))}{\theta}dt.\]
Moreover, recalling that \(u^{\theta}(t)=\begin{cases}u^{*}(t)&\text{ if }\quad t\not\in[\tau,\tau+\theta);\\ v&\text{ if }\quad t\in[\tau,\tau+\theta),\end{cases}\quad\text{ for almost every }t\in[a,b],\) it follows that
\[\lim_{\theta\to 0}\delta J=\lim_{\theta\to 0^{+}}\frac{1}{\theta}\int_{\tau}^{ \tau+\theta}[H(s,x^{*}(s),v,\lambda(s))-H(s,x^{*}(s),u^{*}(s),\lambda(s))]ds.\]
However, notice that \(\tau\) is a Lebesgue point of
\[H(s,x^{*}(s),v,\lambda(s))-H(s,x^{*}(s),u^{*}(s),\lambda(s)):=\psi(s).\]
Thus, from the Lebesgue differentiation property,
\[\left|\frac{1}{\theta}\int_{\tau}^{\tau+\theta}\psi(s)ds-\psi(\tau)\right|=\left| \frac{1}{\theta}\int_{\tau}^{\tau+\theta}(\psi(s)-\psi(\tau))ds\right|\leq\frac {1}{\theta}\int_{\tau}^{\tau+\theta}|\psi(s)-\psi(\tau)|ds,\]
and we have that the right-hand side tends to zero for almost every point \(\tau\). As a consequence,
\[\begin{split}\lim_{\theta\to 0^{+}}\frac{1}{\theta}\int_{\tau}^{ \tau+\theta}&[H(s,x^{*}(s),v,\lambda(s))-H(s,x^{*}(s),u^{*}(s), \lambda(s))]ds\\ &=H(\tau,x^{*}(\tau),v,\lambda(\tau))-H(\tau,x^{*}(\tau),u^{*}( \tau),\lambda(\tau)).\end{split} \tag{12}\]
Further, by the optimality assumption of the pair \((x^{*},u^{*})\), one also has \(\lim_{\theta\to 0}\delta J\leq 0\). Altogether, we obtain that
\[H(\tau,x^{*}(\tau),v,\lambda(\tau))-H(\tau,x^{*}(\tau),u^{*}(\tau),\lambda( \tau))\leq 0.\]
Finally, because \(\tau\) is an arbitrary Lebesgue point of the control \(u^{*}\) and \(v\) is an arbitrary element of the set \(\Omega\), it follows that the relation
\[H(t,x^{*}(t),u^{*}(t),\lambda(t))=\max_{\omega\in\Omega}H(t,x^{*}(t),\omega, \lambda(t))\]
holds at all Lebesgue points, which ends the proof.
Theorem 1 gives a necessary optimality condition that provides an algorithm that can be used to solve general Bolza-type fractional optimal control problems that depend on multi-order Caputo fractional derivatives: given a problem (1),
1. we write the associated Hamiltonian (9);
2. we use the maximality condition (6) to obtain an expression of the optimal controls in terms of the state and adjoint variables;
3. we substitute the expressions obtained in step (ii) in the adjoint system (7);
4. finally, we solve the system obtained in step (iii) together with the initial conditions \(x(a)=x_{a}\) and the transversality condition (8).
A simple example of the usefulness of our result is given in Section 4.
## 4 An Illustrative Example
Let us study the following multi-order fractional Caputo optimal control problem:
\[\begin{split} x_{2}(5)+\int_{1}^{5}(1+\exp(2u(t)))dt& \longrightarrow\max,\\ x=(x_{1},x_{2})\in AC^{(\frac{1}{3},\frac{1}{2})}\Big{(}[1,5],\mathbb{R}^{2}\Big{)},\quad u\in L^{\infty}([1,5],\mathbb{R}),\\ \begin{cases}{}^{c}D_{1^{+}}^{\frac{1}{3}}x_{1}(t)=1-\exp(2u(t)),\\ {}^{c}D_{1^{+}}^{\frac{1}{2}}x_{2}(t)=x_{1}(t),\\ x(1)=(1,1),\quad u(t)\in[-2,7].\end{cases}\end{split} \tag{13}\]
The Hamiltonian function and the terminal cost of this problem (13) are given, respectively, by
\[H(t,x,u,\lambda)=1+\exp(2u)+\lambda_{1}(1-\exp(2u))+\lambda_{2}x_{1},\text{ \ and }\varphi(b,x(b))=x_{2}(5).\]
We do not know whether the problem has a solution, but if it does, then the solution must satisfy the necessary optimality conditions given by our Theorem 1. Precisely, if \((x^{*},u^{*})\) is an optimal pair solution to problem (13), then by application of our Theorem 1 there exists an adjoint function \(\lambda\in L^{(\frac{1}{3},\frac{1}{2})}\) satisfying
\[\begin{cases}D_{5^{-}}^{\frac{1}{3}}\lambda_{1}(t)=\frac{\partial H}{\partial x_{1}}=\lambda_{2},&I_{5^{-}}^{1-\frac{1}{3}}\lambda_{1}(5)=\frac{\partial\varphi}{\partial x_{1}}=0;\\ D_{5^{-}}^{\frac{1}{2}}\lambda_{2}(t)=\frac{\partial H}{\partial x_{2}}=0,&I_{5^{-}}^{1-\frac{1}{2}}\lambda_{2}(5)=\frac{\partial\varphi}{\partial x_{2}}=1.\end{cases}\]
We integrate to obtain that
\[\lambda_{1}(t)=\frac{(5-t)^{\frac{2}{3}-1}}{\Gamma(\frac{2}{3})},\quad\text{and}\quad\lambda_{2}(t)=\frac{(5-t)^{\frac{1}{2}-1}}{\Gamma(\frac{1}{2})}.\]
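As a quick sanity check on \(\lambda_{2}\) (our verification, using the standard power-function identities \(D_{b^{-}}^{\alpha}(b-t)^{\alpha-1}=0\) and \(I_{b^{-}}^{\beta}(b-t)^{\gamma-1}=\frac{\Gamma(\gamma)}{\Gamma(\gamma+\beta)}(b-t)^{\gamma+\beta-1}\)): with \(\gamma=\beta=\frac{1}{2}\),

\[D_{5^{-}}^{\frac{1}{2}}\lambda_{2}(t)=0,\qquad I_{5^{-}}^{\frac{1}{2}}\lambda_{2}(t)=\frac{1}{\Gamma(\frac{1}{2})}\cdot\frac{\Gamma(\frac{1}{2})}{\Gamma(1)}(5-t)^{0}=1,\]

so both the adjoint equation for \(\lambda_{2}\) and its transversality condition are satisfied.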
Moreover, from the maximality condition (6), it follows that
\[u^{*}(t)\in\operatorname*{arg\,max}_{v\in[-2,7]}\left\{\begin{pmatrix}1-\exp(2v)\\ 1+\exp(2v)\end{pmatrix}\cdot\begin{pmatrix}\lambda_{1}(t)\\ 1\end{pmatrix}\right\},\]
for almost every \(t\in[1,5]\). Therefore, using the classical Cauchy-Schwarz inequality, we obtain
\[\begin{pmatrix}1-\exp(2u^{*}(t))\\ 1+\exp(2u^{*}(t))\end{pmatrix}=\frac{1}{|(\lambda_{1}(t),1)|_{\mathbb{R}^{2}}} \begin{pmatrix}\lambda_{1}\\ 1\end{pmatrix}.\]
This leads to
\[\tanh(u^{*}(t))=\frac{1}{\lambda_{1}(t)}=\Gamma\bigg{(}\frac{2}{3}\bigg{)}(5-t )^{\frac{1}{3}}.\]
Finally, we obtain that \(u^{*}\) is given by
\[u^{*}(t)=\operatorname{arctanh}\bigg{(}\Gamma\bigg{(}\frac{2}{3}\bigg{)}(5-t)^{\frac{1}{3}}\bigg{)},\text{ for almost every }t\in[1,5]. \tag{14}\]
Note that (14) is just a Pontryagin extremal (candidate).
For practical applications, the problems are difficult and one needs to use numerical methods. Many software packages for computing fractional-order derivatives and solving fractional differential systems are now available. We refer the interested reader to [32] and references therein.
## 5 Conclusions
In this paper, we have studied a general Bolza-type fractional optimal control problem that depends on multi-order Caputo fractional derivatives. We have established a Pontryagin maximum principle for this incommensurate fractional-order problem. Our approach starts with a sensitivity analysis from which we prove the continuity and differentiability of perturbed trajectories to the optimal state. An illustrative example shows the applicability of our main result (Theorem 1), which provides a Pontryagin maximum principle for incommensurate fractional-order optimal control problems.
Recent applications of fractional mathematical modeling have shown the importance of considering incommensurate orders in infectious disease dynamics [33]. This might permit greater flexibility in capturing the heterogeneous nature of disease dynamics, accounting for factors such as population demographics or the social behaviors of individuals [34]. Further, this is an advantage when introducing a control function in such models, since the more accurate the model, the better the control. Therefore, for future work, it would be interesting to develop numerical methods for incommensurate-order problems in order to handle applications to existing multi-order models.
**Author Contributions:** Conceptualization, F.N. and D.F.M.T.; methodology, F.N. and D.F.M.T.; validation, F.N. and D.F.M.T.; formal analysis, F.N. and D.F.M.T.; investigation, F.N. and D.F.M.T.; writing--original draft preparation, F.N. and D.F.M.T.; writing--review and editing, F.N. and D.F.M.T. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was funded by the Fundacao para a Ciencia e a Tecnologia, I.P. (FCT, Funder ID = 50110000187) under Grants UIDB/04106/2020 and UIDP/04106/2020.
**Data Availability Statement:** No new data were created or analyzed in this study. Data sharing is not applicable to this article.
**Acknowledgments:** The authors are very grateful to three Reviewers for several comments, questions, and suggestions, that helped them to improve the submitted paper.
**Conflicts of Interest:** The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
|
2302.07726 | Tension between implications from PREX-2 data and gravitational tidal
response on dense matter equation of state | Recently an improved value of neutron skin thickness of $^{208}\text{Pb}$ was
reported in Lead Radius EXperiment-2 (PREX-2) to be $R_{\text{skin}}=R_n -
R_p=(0.283\pm 0.071)$ fm which corresponds to high estimations of nuclear
symmetry energy ($E_{\text{sym}}$) and its slope ($L_{\text{sym}}$). The
updated values of $E_{\text{sym}}$ and $L_{\text{sym}}$ commensurating to the
neutron star observable estimations lie exterior to the astrophysical observed
range. The higher values of $L_{\text{sym}}$ at $n_0$ deduced from recent
PREX-2 data correlates to matter being easily deformable (yielding higher
radius values) around intermediate matter densities leading to higher values of
$\tilde{\Lambda}$ creating a tension between the terrestrial and astrophysical
observations. In this study, we exploit this tension to constrain the
$\Delta$-scalar meson coupling parameter space. | Vivek Baruah Thapa, Monika Sinha | 2023-02-15T15:32:50Z | http://arxiv.org/abs/2302.07726v1 | Tension between implications from PREX-2 data and gravitational tidal response on dense matter equation of state
###### Abstract
Recently an improved value of neutron skin thickness of \({}^{208}\)Pb was reported in Lead Radius EXperiment-2 (PREX-2) to be \(R_{\rm skin}=R_{n}-R_{p}=(0.283\pm 0.071)\) fm which corresponds to high estimations of nuclear symmetry energy (\(E_{\rm sym}\)) and its slope (\(L_{\rm sym}\)). The updated values of \(E_{\rm sym}\) and \(L_{\rm sym}\) commensurating to the neutron star observable estimations lie exterior to the astrophysical observed range. The higher values of \(L_{\rm sym}\) at \(n_{0}\) deduced from recent PREX-2 data correlates to matter being easily deformable (yielding higher radius values) around intermediate matter densities leading to higher values of \(\tilde{\Lambda}\) creating a tension between the terrestrial and astrophysical observations. In this study, we exploit this tension to constrain the \(\Delta\)-scalar meson coupling parameter space.
## 1 Introduction
Neutron stars (NSs), highly compact stars, contain the densest matter known. The matter density inside the NS core reaches a few times nuclear matter density [1]. Most models of highly dense matter are formulated to reproduce the experimentally obtained ranges of the saturation property parameters, viz., the nuclear saturation density (\(n_{0}\)), saturation energy (\(E_{0}\)), incompressibility (\(K_{0}\)), symmetry energy (\(E_{\rm sym}\)), its slope with density (\(L_{\rm sym}\)), the curvature of the symmetry energy (\(K_{\rm sym}\)), and the effective nucleonic Dirac mass (\(m_{N}^{*}\)). There has been a recent update in the experimental values of \(E_{\rm sym}\) and its density dependence from nuclear physics experiments. The Lead Radius EXperiment (PREX) collaboration reported its first results (PREX-1) [2] with \(R_{\rm skin}=R_{n}-R_{p}=0.33^{+0.16}_{-0.18}\) fm, with the corresponding \(L_{\rm sym}(n_{0})\), based on strong correlations, in the range (\(35-265\)) MeV. Later on, the ranges of \(E_{\rm sym}\) and \(L_{\rm sym}\) based on experimental data from finite nuclei and heavy-ion collisions (HICs), combined with different microscopic model calculations, were estimated to be \(28.5-34.9\) MeV and \(30.6-86.8\) MeV, respectively [3]. The latest compilation of the isospin properties, reported in Ref. [4], gives the ranges (\(30.6-32.8\)) MeV and (\(55.7-63.9\)) MeV for \(E_{\rm sym}\) and \(L_{\rm sym}\) at nuclear saturation density (\(n_{0}\)), respectively. Very recently, however, an improved value of the neutron skin thickness of \({}^{208}\)Pb was reported by PREX-2 as \(R_{\rm skin}=(0.283\pm 0.071)\) fm [5] with \(\sim 1\%\) precision. This leads to estimates of \(E_{\rm sym}\) and \(L_{\rm sym}\) at \(n_{0}\) in the ranges (\(38.1\pm 4.7\)) MeV and (\(106\pm 37\)) MeV, respectively (\(1\sigma\) interval), with a correlation coefficient of 0.978 [6]. Dense matter models should be tested and constrained with the recent astrophysical observations of NSs along with the nuclear physics experimental data. In recent years, the pursuit to
determine and constrain the dense matter equation of state (EOS) has drawn on the massive NS (\(M\geq 2~{}M_{\odot}\)) observations [7; 8; 9], radius measurements of NS candidates from the NICER (Neutron star Interior Composition ExplorER) space mission [10; 11; 12; 13; 14], as well as the tidal response from gravitational-wave (GW) events [15; 16; 17; 18] observed by the LIGO-Virgo Collaboration. Very recently, another pulsar, namely PSR J0952\(-\)0607, has been observed, with a mass reported in the range \(2.18-2.52~{}M_{\odot}\) at the \(1\sigma\) confidence interval [19]. The large lower bound on the maximum NS mass leads one to consider the appearance of heavier baryons inside the inner core of the NS [20; 21; 22; 23; 24; 25]. Also, the constraint that matter be soft at intermediate densities opens the possibility for \(\Delta\) resonances to appear with increasing density.
In the present work, we model the dense matter inside NSs within the covariant density functional (CDF) approach, making it compatible with the recently updated ranges of \(E_{\rm sym}\) and its density dependence as well as with recent astrophysical observations. (_Conventions_: we use natural units \(G=\hbar=c=1\) throughout this work.)
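For orientation, the density dependence of the symmetry energy near saturation is commonly organized as a Taylor expansion, \(E_{\rm sym}(n)\simeq E_{\rm sym}(n_{0})+L_{\rm sym}\,x+\tfrac{1}{2}K_{\rm sym}\,x^{2}\) with \(x=(n-n_{0})/3n_{0}\). Below is a minimal Python sketch evaluating this expansion with the PREX-2-informed central values quoted above; the \(K_{\rm sym}\) value is an illustrative placeholder, not a quantity fixed by the data discussed here.

```python
import numpy as np

def e_sym(n, n0=0.16, J=38.1, L=106.0, Ksym=-100.0):
    """Taylor expansion of the symmetry energy around saturation (MeV).

    n, n0 in fm^-3; J = E_sym(n0) and L = L_sym(n0) are the PREX-2-informed
    central values quoted in the text; Ksym is an illustrative placeholder."""
    x = (n - n0) / (3.0 * n0)
    return J + L * x + 0.5 * Ksym * x**2

for n in np.linspace(0.08, 0.32, 7):
    print(f"n = {n:.2f} fm^-3  ->  E_sym ~ {e_sym(n):6.1f} MeV")
```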
## 2 Hadronic CDF model
In this study, we implement the CDF framework to compute the dense matter EOS. In this scheme, the coupling constants are fixed so as to reproduce the experimental quantities known at nuclear saturation. For this work, we consider the entire baryon octet (\(b\equiv N,Y\)) along with the spin-3/2 \(\Delta\)-resonances in the composition of dense matter. Leptons (\(e^{-},\mu^{-}\)) are included to maintain the beta-equilibrium condition. The effective interactions between the baryons are mediated by the isoscalar-scalar meson \(\sigma\), the isoscalar-vector mesons \(\omega\), \(\phi\), and the isovector-vector \(\rho\)-meson. The hidden-strangeness \(\phi\)-meson mediates the repulsive interactions between the strange baryons only. The total Lagrangian density in this formalism is given by [1]
\[\begin{split}\mathcal{L}&=\sum_{b}\bar{\psi}_{b}(i\gamma_{\mu}D^{\mu}_{(b)}-m^{*}_{b})\psi_{b}+\sum_{l}\bar{\psi}_{l}(i\gamma_{\mu}\partial^{\mu}-m_{l})\psi_{l}+\sum_{\Delta}\bar{\psi}_{\Delta\nu}(i\gamma_{\mu}D^{\mu}_{(\Delta)}-m^{*}_{\Delta})\psi^{\nu}_{\Delta}\\&\quad+\frac{1}{2}(\partial_{\mu}\sigma\,\partial^{\mu}\sigma-m^{2}_{\sigma}\sigma^{2})-\frac{1}{4}\omega_{\mu\nu}\omega^{\mu\nu}+\frac{1}{2}m^{2}_{\omega}\omega_{\mu}\omega^{\mu}-\frac{1}{4}\mathbf{\rho}_{\mu\nu}\cdot\mathbf{\rho}^{\mu\nu}+\frac{1}{2}m^{2}_{\rho}\mathbf{\rho}_{\mu}\cdot\mathbf{\rho}^{\mu}\\&\quad-\frac{1}{4}\phi_{\mu\nu}\phi^{\mu\nu}+\frac{1}{2}m^{2}_{\phi}\phi_{\mu}\phi^{\mu}-U(\sigma),\end{split}\tag{1}\]
where the covariant derivative is given by \(D_{\mu(j)}=\partial_{\mu}+ig_{\omega j}\omega_{\mu}+ig_{\rho j}\tau_{j3}\cdot\mathbf{\rho}_{\mu}+ig_{\phi j}\phi_{\mu}\), with \(j\) running over the baryon particle spectrum. The baryon octet fields and the \(\Delta\)-baryon Rarita-Schwinger fields, along with their respective masses, are denoted by \(\psi_{b},~{}m_{b}\) and \(\psi_{\Delta},~{}m_{\Delta}\), respectively. \(\omega_{\mu\nu}\), \(\mathbf{\rho}_{\mu\nu}\) and \(\phi_{\mu\nu}\) are the anti-symmetric field tensors corresponding to the vector meson fields. To reproduce the nuclear matter incompressibility at \(n_{0}\), the self-interaction of the \(\sigma\) meson is included [1] via the term \(U(\sigma)=(1/3)g_{2}\sigma^{3}+(1/4)g_{3}\sigma^{4}\), with \(g_{2}\), \(g_{3}\) denoting the self-interaction coefficients (non-linear (NL) model). An alternative to the NL model is the density-dependent (DD) coupling model, in which the incompressibility of nuclear matter at \(n_{0}\) is reproduced without a self-interaction term by making the coupling parameters density dependent. In the density-dependent approach, a rearrangement term is necessary to maintain thermodynamic consistency; it contributes to the matter pressure explicitly via the chemical potential and is given by \(\Sigma^{r}=\sum_{b}\left[\frac{\partial g_{\omega b}}{\partial n}\omega_{0}n_{b}-\frac{\partial g_{\sigma b}}{\partial n}\sigma n^{s}_{b}+\frac{\partial g_{\rho b}}{\partial n}\rho_{03}\tau_{b3}n_{b}+\frac{\partial g_{\phi b}}{\partial n}\phi_{0}n_{b}\right]+\sum_{\Delta}(\psi_{b}\longrightarrow\psi^{\nu}_{\Delta})\). The scalar couplings with the \(\Lambda\), \(\Sigma\)-hyperons are determined from hypernuclear binding energy fits corresponding to \(U^{N}_{\Lambda}(n_{0})=-30\) MeV and \(U^{N}_{\Sigma}(n_{0})=+30\) MeV [26]. We have fixed the \(\sigma-\Xi\) coupling such that \(U^{N}_{\Xi}(n_{0})=-20\) MeV in symmetric nuclear matter, as reported recently in Ref. [27]. The vector meson couplings with the hyperons are incorporated according to SU(6) symmetry [28]. As the \(\Delta\)-baryons are resonant states of nucleons, they do not couple with the isoscalar-vector \(\phi\)-meson. Due to insufficient knowledge of the meson-\(\Delta\) couplings in nuclear matter, we treat these couplings as parameters. The \(\Delta\) isoscalar
meson coupling difference is constrained in Ref. [20] to lie in the range \(0\leq R_{\sigma\Delta}-R_{\omega\Delta}\leq 0.2\), where \(R_{i\Delta}=g_{i\Delta}/g_{iN}\) and \(i=\sigma\), \(\omega\), \(\rho\)-mesons. In this study, we further constrain this parameter space by reconciling the tension between nuclear experimental and astrophysical observations. Here, we consider the isoscalar-vector and isovector-vector \(\Delta\)-couplings to be \(R_{\omega\Delta}=1.1\), \(R_{\rho\Delta}=1.0\), respectively. The scalar \(\Delta\)-coupling parameter space is varied in the range \(1.1\leqslant R_{\sigma\Delta}\leqslant 1.3\).
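To illustrate the DD scheme numerically, the following sketch evaluates couplings of the Typel-Wolter form commonly used in DD parametrizations, \(g_{i}(n)=g_{i}(n_{0})\,f_{i}(x)/f_{i}(1)\) with \(f_{i}(x)=a_{i}\frac{1+b_{i}(x+d_{i})^{2}}{1+c_{i}(x+d_{i})^{2}}\) for \(i=\sigma,\omega\), an exponential ansatz for the \(\rho\) coupling, and \(x=n/n_{0}\). All numerical coefficients below are placeholders rather than the DD-MEX or GM1 values.

```python
import numpy as np

def f_rational(x, a, b, c, d):
    """Typel-Wolter rational ansatz used for the sigma and omega couplings."""
    return a * (1.0 + b * (x + d) ** 2) / (1.0 + c * (x + d) ** 2)

def couplings(n, n0=0.152, g0_sigma=10.0, g0_omega=12.0, g0_rho=7.0,
              pars_sigma=(1.39, 1.09, 1.66, 0.44),
              pars_omega=(1.39, 0.92, 1.46, 0.47), a_rho=0.60):
    """Density-dependent couplings g_i(n), normalized so that g_i(n0) = g_i0."""
    x = n / n0
    g_sigma = g0_sigma * f_rational(x, *pars_sigma) / f_rational(1.0, *pars_sigma)
    g_omega = g0_omega * f_rational(x, *pars_omega) / f_rational(1.0, *pars_omega)
    g_rho = g0_rho * np.exp(-a_rho * (x - 1.0))  # exponential ansatz for rho
    return g_sigma, g_omega, g_rho

print(couplings(0.304))  # couplings at roughly twice saturation density
```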
## 3 Results & Discussion
This section discusses the implications of exotic particles in reconciling the tension between the nuclear physics and astrophysical constraints. Here, we report the numerical results for various dense matter EOSs corresponding to variations in the \(R_{\sigma\Delta}-R_{\omega\Delta}\) coupling parameter space and in the isospin-dependent saturation properties.
The pressure variation with baryon number density for the different EOS models is shown in Fig. 1. It can be seen from the figure that the re-calibrated EOSs, obtained by exploiting the \(L_{\rm sym}\) and \(\sigma-\Delta\) coupling parameter space for both parametrization models, lie within the constraints from GW170817 as well as the HIC data. It is also evident from the figure that the upper bound on the \(\sigma-\Delta\) coupling, \(R_{\sigma\Delta}=1.3\), satisfies the constraints from terrestrial as well as astrophysical observations. With a more attractive optical potential for the \(\Delta\)-resonances in nuclear matter, the EOS models tend to be softer in the
Figure 2: Tidal deformability parameters corresponding to the primary and secondary components involved in the GW170817 event, for various isospin-dependent couplings in accordance with PREX-2 and variations in \(R_{\sigma\Delta}-R_{\omega\Delta}\). Here, we have considered the chirp mass to be \(\mathcal{M}=1.186\ M_{\odot}\). The shaded regions denote the tidal deformability upper bounds of \(\tilde{\Lambda}\sim 900\)[15] (cyan) and \(\tilde{\Lambda}\sim 720\)[17] (orange).
Figure 1: The matter pressure as a function of baryon number density for the nucleonic (upper panels) and \(\Delta\)-admixed hypernuclear (lower panels) dense matter EOSs, considering variations in \(L_{\rm sym}(n_{0})\) and in the scalar meson-\(\Delta\) coupling values. Re-calibrated \(L_{\rm sym}(n_{0})=70,\ 106\) MeV EOSs are designated by the solid and dot-dashed curves, respectively. The vertical bound at \(2n_{0}\) is derived from GW170817 event data via interpolation [16]. The shaded regions in the density range \((2-4.5)\ n_{0}\) depict the flow data from HICs, modelled with stiffer as well as softer EOSs [29].
lower matter density regimes, as evident from the figure. This result is consistent with Ref. [25]. The EOSs considered in this work fulfill the thermodynamic stability condition, in addition to providing a non-vanishing effective Dirac nucleon mass, as pointed out in Ref. [30].
The tidal responses of both components involved in the GW170817 event are evaluated on the basis of dense matter EOSs with variations in the symmetry energy slope and the scalar \(\Delta\)-meson couplings. From Fig. 2, it can be seen that with higher values of \(L_{\rm sym}\) the tidal responses gradually increase. This is because the radii of intermediate-mass NSs increase, and \(\tilde{\Lambda}\) is proportional to the fifth power of the NS radius. With higher values of the \(\sigma-\Delta\) coupling, corresponding to a more attractive potential, the tidal deformability is observed to decrease, as invoking \(\Delta^{-}\) at lower matter densities induces more compactness. With the strict constraint of \(\tilde{\Lambda}\sim 720\) obtained from the recent reanalysis of the GW170817 event data, it can be inferred that attractive potentials for the \(\Delta\)-resonances in dense matter are required to satisfy this constraint.
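For concreteness, the combined tidal deformability shown in Fig. 2 is obtained from the component deformabilities through the standard mass-weighted combination \(\tilde{\Lambda}=\frac{16}{13}\,\frac{(m_{1}+12m_{2})m_{1}^{4}\Lambda_{1}+(m_{2}+12m_{1})m_{2}^{4}\Lambda_{2}}{(m_{1}+m_{2})^{5}}\), with \(m_{2}\) fixed by the chirp mass \(\mathcal{M}=1.186\ M_{\odot}\). A minimal sketch follows; the \(\Lambda(m)\) relation used in it is a rough hypothetical scaling standing in for the EOS-derived values.

```python
import numpy as np
from scipy.optimize import brentq

def m2_from_chirp(m1, mchirp=1.186):
    """Companion mass from the chirp mass M = (m1*m2)^(3/5)/(m1+m2)^(1/5)."""
    g = lambda m2: (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 - mchirp
    return brentq(g, 0.5, m1)  # convention: m2 <= m1

def lambda_tilde(m1, m2, lam1, lam2):
    """Mass-weighted combined tidal deformability of the binary."""
    num = (m1 + 12.0 * m2) * m1**4 * lam1 + (m2 + 12.0 * m1) * m2**4 * lam2
    return (16.0 / 13.0) * num / (m1 + m2) ** 5

# Hypothetical placeholder: Lambda(m) ~ Lambda_1.4 * (1.4/m)^6, a rough scaling.
lam = lambda m, lam14=500.0: lam14 * (1.4 / m) ** 6

m1 = 1.50
m2 = m2_from_chirp(m1)
print(f"m2 = {m2:.3f} Msun, Lambda_tilde = {lambda_tilde(m1, m2, lam(m1), lam(m2)):.0f}")
```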
The variation of the \(\Delta\)-coupling parameters with \(L_{\rm sym}\) in the dense matter EOS is shown in Fig. 3. This figure provides the allowed range of \(R_{\sigma\Delta}\), keeping \(R_{\omega\Delta}=1.10\) and \(R_{\rho\Delta}=1.0\), for different \(L_{\rm sym}\) scenarios. It can be seen that higher \(L_{\rm sym}\) values demand that the \(\Delta\) potential in dense matter be attractive in nature. In the case of the DD-MEX parameterization, for \(L_{\rm sym}\leq 80\) MeV the \(\Delta\)-coupling parameter space is unconstrained, and all values of \(R_{\sigma\Delta}-R_{\omega\Delta}\) are admissible following the former \(\tilde{\Lambda}\sim 900\) constraint. In the case of the GM1 coupling set, this \(\tilde{\Lambda}\) constraint bounds the \(R_{\sigma\Delta}-R_{\omega\Delta}\) parameter set for \(L_{\rm sym}\leq 95\) MeV. Following the strict upper bound of \(\tilde{\Lambda}\sim 720\), the parameter space is further constrained, leaving the coupling value \(R_{\sigma\Delta}=1.30\) in the case of the DD-MEX set. However, it is noteworthy that another analysis of the GW170817 event data [31] suggests much higher tidal deformability estimates, \(\sim 1000\), which would put looser constraints on the poorly known \(\Delta\)-coupling values.
In another recent experiment measuring \(R_{\rm skin}\) of the \({}^{48}\)Ca isotope (CREX) [32], it has been reported to be \((0.121\pm 0.026)\) fm, which is in disagreement with the PREX results. The CREX findings indicate the symmetry energy to be low, consequently leading to more compact NSs. This indicates a need to further investigate the limitations of present dense matter models so that both findings can be accommodated, which is beyond the scope of this work.
|
2308.12416 | Reframing the Brain Age Prediction Problem to a More Interpretable and
Quantitative Approach | Deep learning models have achieved state-of-the-art results in estimating
brain age, which is an important brain health biomarker, from magnetic
resonance (MR) images. However, most of these models only provide a global age
prediction, and rely on techniques, such as saliency maps to interpret their
results. These saliency maps highlight regions in the input image that were
significant for the model's predictions, but they are hard to be interpreted,
and saliency map values are not directly comparable across different samples.
In this work, we reframe the age prediction problem from MR images to an
image-to-image regression problem where we estimate the brain age for each
brain voxel in MR images. We compare voxel-wise age prediction models against
global age prediction models and their corresponding saliency maps. The results
indicate that voxel-wise age prediction models are more interpretable, since
they provide spatial information about the brain aging process, and they
benefit from being quantitative. | Neha Gianchandani, Mahsa Dibaji, Mariana Bento, Ethan MacDonald, Roberto Souza | 2023-08-23T20:33:22Z | http://arxiv.org/abs/2308.12416v1 | # Reframing the Brain Age Prediction Problem to a More Interpretable and Quantitative Approach
###### Abstract
Deep learning models have achieved state-of-the-art results in estimating brain age, which is an important brain health biomarker, from magnetic resonance (MR) images. However, most of these models only provide a global age prediction, and rely on techniques, such as saliency maps, to interpret their results. These saliency maps highlight regions in the input image that were significant for the model's predictions, but they are hard to interpret, and saliency map values are not directly comparable across different samples. In this work, we reframe the age prediction problem from MR images to an image-to-image regression problem where we estimate the brain age for each brain voxel in MR images. We compare voxel-wise age prediction models against global age prediction models and their corresponding saliency maps. The results indicate that voxel-wise age prediction models are more interpretable, since they provide spatial information about the brain aging process, and they benefit from being quantitative.
## 1 Introduction
Researchers have hypothesized that the brain age of healthy subjects should match their corresponding chronological ages. Increased brain age compared to chronological age is an important indicator of brain health, making brain age prediction a widely explored research area. Most brain age prediction work focuses on predicting a 'global' brain age index that reflects the maturity level and age of the brain. The global brain age index has been shown to be an effective biomarker to assess the aging process as well as to understand structural changes in the brain in the presence of neurological disorders (Cole and Franke, 2017; Wang et al., 2019). Global brain age has most widely been estimated from T1-weighted Magnetic Resonance (MR) volumes using Deep Learning (DL) techniques (Cole and Franke, 2017; Tanveer et al., 2023; Jonsson et al., 2019), as a regression task.
Predicting the brain age is important to study normal brain aging and prospective cognitive decline, but understanding and interpreting these findings, and uncovering the black-box nature of these methods is an even more relevant task. These models aim to detect brain regions or biomarkers that should be analyzed in greater detail, leading to personalized diagnosis and decision-making. Efforts have been made to interpret the DL models with techniques based on saliency maps, such as Gradient-weighted Class Activation Mapping (Grad-CAM) (Bermudez et al., 2019; Yin et al., 2023; Selvaraju et al., 2017), and occlusion-based techniques (Bintsi et al., 2021; Zeiler and Fergus, 2014). Saliency map techniques focus on creating visualizations to depict the contribution levels of each pixel in the decision-making process, whereas occlusion-based techniques aim to identify regions that are most important to make a particular decision by hiding regions in the input and observing their impact on model performance. The worse the model performs when hiding a certain feature, the higher its importance. There are other interpretability techniques like Layer Wise Relevance Propagation (Bach et al., 2015) and SHapley Additive Explanations (SHAP) (Lundberg and Lee, 2017), however, to the best of our knowledge, they have not been utilized to explain brain age prediction models.
At a high level, the aforementioned techniques provide an understanding of how DL models learn and explain the relationship between the features and the models' decision making, hence, providing interpretability to DL models, _i.e._, unboxing the black-box. Existing interpretability techniques help with understanding the global brain age prediction models by accessing the DL model's decision-making. Such techniques explain what features in the model input contribute to the decision-making process (_i.e._ to predict global
brain age) providing a spatial-level analysis of the problem, relevant to identify specific brain regions of interest, biomarkers, and abnormalities related to aging.
It is important to highlight that most saliency-based interpretability techniques were initially proposed to explain classification models, producing heat maps for a specific class. The translation of such techniques to regression tasks is not straightforward.
In this article, we evaluate our recently proposed voxel-level brain age prediction model (Gianchandani et al., 2023) from an interpretability perspective. Interpretability has been defined differently in varying contexts (Doshi-Velez and Kim, 2017; Kim et al., 2016; Miller, 2019); however, all definitions aim to achieve a common goal: to satisfy human curiosity (Miller, 2019) and, in the machine learning context, to make the modeling pipeline (from feature extraction to decision making) less of a black box and more transparent. As an alternative to utilizing interpretability methods to explain the feature extraction of previously trained age prediction models, we propose to redefine how we approach the research problem. The aim of predicting brain age is to understand how the brain ages in healthy compared to diseased brains. We further want to understand how various regions contribute to the decision-making process, reflecting more on the spatial aging processes in the brain. Instead of approaching it as a global age prediction problem, we propose to approach it as an image-to-image regression problem where we predict brain age at a voxel level. So far, to the best of our knowledge, only Popescu et al. (2021) have attempted a voxel-level brain age prediction approach. The authors use a U-Net-like (Ronneberger et al., 2015) architecture to predict voxel-level brain age and achieve a Mean Absolute Error (MAE) of \(9.94\pm 1.73\) years. However, the approach is not discussed from an interpretability point of view.
A voxel-level approach will enable a spatial-level analysis of the brain aging process by assigning a brain age prediction to each voxel of the brain. Individual predictions for each volume unit not only allow us to analyze the aging process at a more fine-grained level, but also provide a quantitative visualization that reflects how different regions of the brain are aging. A brain voxel in the image that is assigned an increased or decreased brain age index can be explained by the contribution of the voxel in the decision-making process by the DL model. In this article, we propose a voxel-level age prediction model as a new approach to interpretability. An overview of global age versus voxel-level age prediction outputs is described in Fig. 1.
We further compare our new proposed method to the existing methods for interpretability. Grad-CAM (Selvaraju et al., 2017), Occlusion Sensitivity maps (Zeiler and Fergus, 2014) and SmoothGrad (Smilkov et al., 2017) are utilized to explain a publicly available state-of-the-art global brain age prediction model, and the interpretations are discussed and compared with our proposed voxel-level brain age prediction model.
We acknowledge the existing state-of-the-art methods for brain age prediction and emphasize that the focus of this article is not to propose state-of-the-art brain age prediction models, but to contrast existing interpretability techniques with our proposed approach to the brain age prediction problem to uncover the black box and to better understand the predictions.
Figure 1: (a) Traditional paradigm where a brain age prediction model is used to predict a global brain age value for the whole brain. (b) New proposed paradigm where a voxel-level brain age prediction model assigns different brain ages to each region of the brain. At the most granular level, each voxel corresponding to brain tissue in the image can be assigned a brain age.
## 2 Materials and Methods
### Data
We used 3D T1-weighted MR scans from the publicly available Calgary-Campinas dataset (Souza et al., 2018). We will refer to this dataset as \(D_{cc}\) hereafter in this article. All data corresponds to presumed healthy subjects. \(D_{cc}\) has 359 samples aged 29-80 years (mean age of 53.47\(\pm\)7.84) with a male:female sex ratio of 49:51 percent. The data was acquired on three different scanners (Philips, Siemens, and General Electric), each at two magnetic field strengths (1.5 T and 3 T). Skull-stripping masks, as well as tissue segmentation masks (Gray Matter (GM), White Matter (WM), and Cerebrospinal Fluid (CSF)), are also publicly available with the dataset.
### Data Preprocessing
For the voxel-level brain age prediction model, the scans are reoriented to a standard template (MNI152 (Fonov et al., 2009)) using the FSL (Jenkinson et al., 2012) utility 'fslreorient2std' to ensure consistency across the dataset. We adjusted each sample's intensity to fall within the range of 0 to 1. Any other preprocessing steps, such as registration, are avoided for this model, as the predictions are made at a voxel level and we want to ensure that the input features remain untouched and that the brain structures retain the shape and intensity values of the original reconstructed T1-weighted image.
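A minimal sketch of this preprocessing step (FSL reorientation followed by min-max intensity scaling); the file names are placeholders, FSL is assumed to be installed, and nibabel is assumed for I/O.

```python
import subprocess
import nibabel as nib
import numpy as np

# Reorient to the standard (MNI152) orientation; assumes FSL is installed.
subprocess.run(["fslreorient2std", "subject_T1w.nii.gz", "subject_T1w_std.nii.gz"],
               check=True)

img = nib.load("subject_T1w_std.nii.gz")
vol = img.get_fdata().astype(np.float32)

# Min-max scaling of intensities to [0, 1], as described in the text.
vol = (vol - vol.min()) / (vol.max() - vol.min() + 1e-8)
nib.save(nib.Nifti1Image(vol, img.affine), "subject_T1w_norm.nii.gz")
```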
For the global age prediction model, the MR scans are linearly registered with 6 degrees of freedom using FMRIB Software Library's (FSL) FLIRT (FMRIB's Linear Image
Figure 3: Global age prediction model architecture. The backbone is adopted from (Peng et al., 2021), and the output head is modified to treat global age prediction as a regression problem rather than a classification problem as done in the original research.
Figure 2: Proposed voxel-level brain age prediction model architecture. The model has a U-Net backbone and follows a multi-output design with three outputs: (i) Segmentation of Gray Matter, White Matter and CSF, (ii) Voxel-level brain age prediction, and (iii) Global-level brain age prediction.
Registration Tool) command (Jenkinson & Smith, 2001), which allows for rotation and translation, keeping the shape of the brain consistent to avoid loss of or change in input data features. The registration step was seen to improve the performance of the global age prediction model and hence was included in the pipeline.
### Proposed Model Architectures
#### 2.3.1 Voxel-level brain age prediction model
The proposed model follows the 3D U-Net (Ronneberger et al., 2015) architecture backbone (Fig. 2). The model has an encoder network (left) and a decoder network (right) joined together by a bottleneck (center bottom), forming a U shape. The input is downsampled at each level of the encoder as features are learned, whereas the feature map is interpolated back to the original input size iteratively at each level of the decoder. Skip connections connect the encoder and the decoder at each level to allow for feature re-usability as well as to help with the vanishing gradient problem. Batch normalization is used after the convolution layers in the encoder network.
The model follows a multitask architecture with three outputs for which the features are learnt simultaneously. This forces the model to learn similar features for the three tasks. The three tasks defined for the model are: (i) A segmentation task to segment GM, WM, and CSF. (ii) A voxel-level brain age prediction task. The U-Net architecture keeps the output size the same as the input, so as to obtain a voxel-level brain age prediction for each voxel in the input image. A Rectified Linear Unit (ReLU) activation is used to ensure positive age predictions. (iii) A global-level brain age prediction task, which is computed from the bottleneck features of the model and is a scalar age prediction for each volume. This task closely resembles the output from the global age prediction model described in Section 2.3.2.
Tasks (i) and (iii) are helper tasks that aid the model in learning accurate features for the voxel-level age prediction task. Adding the segmentation task is especially useful, as it pushes the model to learn structural features for the voxel-level age prediction task, ignoring other unnecessary information present in the input images. The global-level brain age task can be thought of as a prerequisite and a simpler version of the voxel-level brain age prediction task.
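A compact PyTorch sketch of this multi-output design is given below. It is a single-level toy U-Net rather than the exact architecture of Fig. 2; the depth, channel counts, and layer choices are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU())

class MultiTaskBrainAge(nn.Module):
    """Toy 3D U-Net with three heads: tissue segmentation, voxel age, global age."""
    def __init__(self, n_tissues=4):  # background + GM/WM/CSF
        super().__init__()
        self.enc = conv_block(1, 16)
        self.bottleneck = conv_block(16, 32)
        self.dec = conv_block(32 + 16, 16)           # skip connection from encoder
        self.seg_head = nn.Conv3d(16, n_tissues, 1)  # task (i)
        self.voxel_head = nn.Conv3d(16, 1, 1)        # task (ii)
        self.global_head = nn.Linear(32, 1)          # task (iii)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(F.max_pool3d(e, 2))
        up = F.interpolate(b, scale_factor=2, mode="trilinear", align_corners=False)
        d = self.dec(torch.cat([up, e], dim=1))
        seg = self.seg_head(d)
        voxel_age = F.relu(self.voxel_head(d))       # ReLU enforces positive ages
        global_age = self.global_head(b.mean(dim=(2, 3, 4)))  # pooled bottleneck
        return seg, voxel_age, global_age

model = MultiTaskBrainAge()
seg, vox, glob = model(torch.randn(1, 1, 96, 96, 96))
```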
#### 2.3.2 Global Brain Age prediction model
We adapt the architecture for the global brain age prediction model from a state-of-the-art Simple Fully Convolutional Neural network (SFCN) proposed in (Peng et al., 2021; Gong et al., 2021). The authors treat the brain age prediction task as a soft classification task where the model predicts a Gaussian Probability distribution centered at the ground truth (chronological age) instead of a scalar brain age index. The model is lightweight as it is made up of 7 convolution blocks where the input is down-sampled after each convolution layer in the first five blocks with \(3\times 3\times 3\) convolutions, followed by a \(1\times 1\times 1\) convolution block and a classification head. Batch normalization is also used to ensure a smooth training process. We modify the architecture and retain the convolution blocks, but treat it as a classical regression problem (Fig. 3).
### Loss Function
#### 2.4.1 Voxel-level brain age prediction model
A combination loss function is used to train the voxel-level brain age prediction model. The loss function is a weighted sum of three loss terms, one corresponding to each task. The Soft Dice Score (Milletari et al., 2016) is used for the segmentation task, and MAE at the global and voxel level is used for the brain age prediction task.
The weighting terms (\(\alpha\), \(\beta\), and \(\gamma\)) used are initialized and changed as the model trains in a way that all tasks are given equal importance throughout the training process. The loss function is described in equation 1.
\[\begin{split} L_{overall}=\alpha Dice_{loss}+\beta MAE_{global}+\\ \gamma MAE_{voxel}\end{split} \tag{1}\]
#### 2.4.2 Global Brain Age prediction model
We use the MAE as the loss function for the global age prediction model. The decision to use MAE as the loss function was reinforced after running experiments with Mean Squared Error (MSE) as the loss function. Experiments showed that MAE performed significantly better leading to an improved training trajectory with a lower validation loss and a smoother training/validation curve.
### Training Methodology
#### 2.5.1 Voxel-level brain age prediction model
We train the model for 300 epochs with an initial learning rate of 0.001, which reduces every 70 epochs by a multiplicative factor of 0.5. A batch size of 8 is used for training where each input sample is a randomly cropped patch of size \(96\times 96\times 96\) from the T1-weighted volume. A single crop is randomly obtained from each volume ensuring significant brain volume in each crop while exposing the model to the same brain region from various perspectives. 50% of the cropped patches are rotated by 15\({}^{\circ}\) as an augmentation step on-the-fly. To prevent the model from solely learning to predict global brain age for each voxel, we introduce small noise to the ground-truth labels before calculating the loss.
For the voxel-level age prediction task, instead of the ground truths having the same global age at each voxel, we introduce small noise (in the range [-2, 2]) to ensure the model learns variations in age across different voxels (or regions) without significantly impacting the error.
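A short sketch of this label perturbation, assuming a float brain mask and a scalar chronological age.

```python
import torch

def noisy_voxel_targets(chron_age, brain_mask):
    """Voxel-wise targets = chronological age + U(-2, 2) noise, inside the brain."""
    noise = torch.empty_like(brain_mask, dtype=torch.float32).uniform_(-2.0, 2.0)
    return (chron_age + noise) * brain_mask
```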
#### 2.5.2 Global Brain Age prediction model
In the training process for the global brain age prediction model, we ran the model for 50 epochs, starting with a learning rate of 0.0001. As the training progressed, the learning rate was decreased by half every 20 epochs. We trained with a batch size of 8 and adjusted each sample's intensity to fall within a scale of 0 to 1. We also implemented an on-the-fly augmentation step, rotating 50% of the samples by 15\({}^{\circ}\). This approach aimed to increase the model's robustness by providing a diverse set of sample variations throughout the training process.
### Interpretability Methods
There are several techniques employed to explain the decision-making process of a deep-learning model, such as Grad-CAM (Selvaraju et al., 2017). This method uses the gradients flowing into the final convolutional layer to produce a coarse localization map, highlighting the image regions important to prediction. Its model-agnostic property makes it applicable across a wide range of Convolutional Neural Network (CNN)-based models, without requiring retraining. However, the main limitation of Grad-CAM is its coarse localization due to the low spatial resolution of deeper convolutional layers, which can sometimes limit its interpretability. To counteract this limitation, we employed a method that averages two maps--one from an earlier convolutional layer as the target layer and one from the final layer--to blend detailed features with high-level representations for a more comprehensive understanding (Morbidelli et al., 2020). It is important to acknowledge that the inputs to the two layers considered for generating the heatmaps are different in terms of the input feature maps as well as dimensions. Aggregating two feature maps, originating from layers at distinct depths in the model helps to smooth out inconsistencies or noise that may exist in the heatmap from the final convolutional layer. It also helps in obtaining informative heatmaps by encompassing both relatively high-level and low-level features present at different depths in the network (McAllister et al., 2020).
Another interpretability technique, Occlusion Sensitivity (Zeiler and Fergus, 2014), provides an interpretability approach that systematically occludes portions of the input image to observe changes in model output, thereby producing a heat map of the input's most influential regions. The strength of this technique lies in its simplicity and general applicability; it can be applied to any model predicting based on an image input, even black box models, without requiring access to model gradients. Nevertheless, the technique is not without drawbacks. Specifically, Occlusion Sensitivity is computationally intensive, necessitating a rerun of the model for every occluded version of the image, which makes it somewhat inefficient for larger images or complex models. Additionally, the results can be influenced by the size and shape of the occluding patch, making it important to choose these parameters carefully.
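A minimal pure-PyTorch sketch of this idea for a regression model such as a global age predictor; the patch size, stride, and zero fill are illustrative choices, and the model is assumed to map a (1, 1, D, H, W) volume to a single age.

```python
import torch

@torch.no_grad()
def occlusion_sensitivity(model, vol, patch=16, stride=16, fill=0.0):
    """Slide a cube of constant `fill` over `vol` and record, per position,
    baseline_prediction - occluded_prediction. Positive map values mean the
    occlusion lowered the predicted age, i.e., the region pushed the age up."""
    model.eval()
    base = model(vol).item()
    _, _, D, H, W = vol.shape
    heat = torch.zeros(D, H, W)
    for z in range(0, D - patch + 1, stride):
        for y in range(0, H - patch + 1, stride):
            for x in range(0, W - patch + 1, stride):
                occluded = vol.clone()
                occluded[..., z:z+patch, y:y+patch, x:x+patch] = fill
                delta = base - model(occluded).item()
                heat[z:z+patch, y:y+patch, x:x+patch] = delta
    return heat
```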
Finally, SmoothGrad (Smilkov et al., 2017) offers another approach to interpretability. In contrast to the previous techniques, SmoothGrad aims to reduce noise in gradient-based saliency maps by averaging the gradients of multiple noisy versions of the same input image. Consequently, it often yields less noisy saliency maps than single input-based methods, highlighting noise-robust patterns. However, SmoothGrad requires multiple forward passes through the model as well as repeated gradient calculation to obtain smoother saliency maps for each input, which increases computational demands. Similarly to Occlusion Sensitivity, the choice of appropriate noise level may prove challenging and require further experimentation or tuning.
For our experiments, we implement interpretability methods using MONAI (Cardoso et al., 2022), which is a PyTorch-based framework that provides built-in functions for implementing various interpretability techniques.
## 3 Results
The voxel-level brain age prediction model achieved an MAE of \(5.96\pm 3.75\) years on the \(D_{cc}\) test set (n=30 samples). The global brain age prediction model achieved an MAE of \(6.56\pm 4.04\) years on the same test samples (see Table 1).
For the voxel-level age prediction model, we present results using MAE averaged across all voxels of the brain for simplicity. It is not feasible to report voxel-level MAE for all (millions) of the brain voxels individually. However, for visualization, we use predicted age difference (PAD) masks that show the difference between the predicted brain age and the chronological age at the level of each voxel. Blue color indicates brain regions that look younger than chronological age and red points to older-looking brain regions. PAD masks can be an excellent way to visualize brain maturity levels from voxel-level brain age predictions. The masks also provide an additional level of quantification to explain
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Model & Test Set & MAE \(\pm\) S.D. \\
\hline
Voxel-level model & \(D_{cc}\) & 5.96 \(\pm\) 3.75 \\
Global model      & \(D_{cc}\) & 6.56 \(\pm\) 4.04 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Results for the voxel-level and global brain age prediction models on the \(D_{cc}\) test set.
the outputs of the DL model.
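The PAD masks themselves are straightforward to compute from the voxel-wise output; a minimal sketch with random stand-ins for the model prediction (negative values correspond to younger-looking regions, positive to older-looking regions).

```python
import numpy as np

def pad_mask(voxel_pred, chronological_age, brain_mask):
    """Predicted age difference per voxel: prediction - chronological age.
    Values are in years and directly comparable across subjects."""
    return (voxel_pred - chronological_age) * brain_mask

# Example with random stand-ins for a 96^3 volume of a 60-year-old subject.
mask = np.ones((96, 96, 96))
pred = 60.0 + np.random.randn(96, 96, 96)
print(pad_mask(pred, 60.0, mask).mean())
```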
For the global age prediction model, we report the MAE over the test set and also present the results of three different interpretability techniques (Fig. 4).
## 4 Discussion
Figure 4 shows the comparison of traditional interpretability methods to voxel-level PAD masks. Each row represents different interpretability outputs for the same test subject. Grad-CAM heatmaps show the important regions that contributed to the decision-making process (to predict global brain age) using a red-yellow-blue gradient colormap, with red regions being the biggest contributors and blue regions being the smallest or least important contributors. However, as can be observed in Fig. 4, Grad-CAM heatmaps are coarse and prone to up-sampling errors. Grad-CAM maps are also not comparable across different samples, as they are hard to quantify; this is because the heatmap intensity values come from the relative strength of gradients for a particular input image.
Similarly, the occlusion sensitivity maps for classification problems show the most and least important regions for classifying a specific class; however, in the context of a regression problem such as brain age prediction, red points refer to regions that make the model overestimate the brain age (prediction \(>\) ground truth) and blue points refer to regions that make the model underestimate brain age (prediction \(<\) ground truth).
Figure 4: Traditional interpretability methods (left to right: Grad-CAM, Occlusion Sensitivity maps, and SmoothGrad) implemented on the global age prediction model, and voxel-level brain PAD masks obtained from the voxel-level brain age prediction model.
This means that all regions in red and blue contribute to the brain age prediction decision in some way, and the white regions contribute the least.
Saliency maps created using SmoothGrad highlight the areas that significantly influence the model's output. These crucial regions are typically marked in red, while the areas having a minimal effect are illustrated in blue. As observed, these maps offer more detailed insights compared to Grad-CAM, seemingly highlighting important regions with more precision. This enhanced level of detail could be attributed to the fact that SmoothGrad averages over multiple versions of the same input with added noise. Similar to Grad-CAM, SmoothGrad is also a gradient-based technique; it relies on the quantitative nature of the gradients and on how changing the input affects the gradients in the model. Thus, it reflects the relative importance of different regions in the input image (brain MR).
Overall, while distinct differences exist in the maps generated by the three techniques, a general agreement can be seen regarding the key regions utilized in predicting global age. While the traditional interpretability methods do a good job of providing a high-level understanding of important regions, they are based on relative scores for a specific input sample, and not absolute quantitative measurements which can be used to directly compare different samples.
The voxel-level PAD masks, on the other hand, are used to visualize individual predictions for each voxel (region) of the brain. The PAD values embedded in the masks quantify the difference between the brain age prediction and the chronological age (in years), consequently making the PAD masks comparable across different samples. Owing to the multitask design of the model architecture, the model is forced to learn features that can be repurposed for all three tasks. The addition of the brain tissue segmentation task ensures that the model learns structural features within the brain region that are used for the segmentation as well as the voxel-level brain age prediction task. We achieve a Dice Score of 84.25% on the \(D_{cc}\) test set, indicating a high overlap between the predicted segmentations and the ground truth for GM, WM, and CSF, hence accurate segmentations (Fig. 5). This is only possible if correct structural features are learned during the feature extraction process while training the model. This further confirms that structural features within the brain region contribute to the voxel-level age prediction task. It can also be hypothesized that large PAD values can be partially explained by the presence of underlying structural anomalies that have not evolved into a disease, suggesting the contribution of the region to the predicted voxel-level brain age. Fig. 4 shows PAD masks of healthy subjects from our test set; research has shown that the brain aging process varies across different regions of the brain. Studies have also shown that each healthy brain is unique and follows a different spatial aging trajectory (Raz et al., 2005; Scahill et al., 2003), and this can be observed in the PAD masks. The underlying structural features of the brain cause the predicted age difference of a voxel to be non-zero, and this varies spatially across the brain.
Voxel-level PAD maps also provide increased resolution at the level of each voxel, unlike traditional interpretability methods, which are usually noisy at such a granular level (Liu et al., 2021). Saliency map-based techniques like Grad-CAM and even SmoothGrad only indicate the activity of broad regions in the final output; moreover, because the heatmaps undergo interpolation (for upsampling), it becomes nonviable to correlate the importance of regions to the specific morphological changes that have caused a particular prediction. Age prediction at a voxel level can lead to a fine-grained analysis of underlying changes in the brain. For the purposes of this article, our aim is to examine how various traditional interpretability methods compare to voxel-level brain PAD masks; hence, we will base all our comparisons on the voxel-level PAD maps. However, in the future, to correlate the variation observed in PAD maps with existing research on aging, which often focuses on regions rather than individual voxels, it can be valuable to average the voxel-level brain age predictions within the known anatomical regions of the brain to visualize and study the variation in aging trajectories across regions of the brain. In the future, it will
Figure 5: Segmentation results using the voxel-level brain age prediction model on one sample. The model achieved a mean dice score of 84.25% on the test set.
also be valuable to get clinical feedback to better understand the aging patterns observed in the voxel-PAD maps. This can help correlate the spatial variations observed in the brain to existing research on aging at a regional level.
Occlusion-based sensitivity maps come closest to providing insights similar to voxel-level PAD maps; however, with occlusion sensitivity maps, we only get an estimate of single regions contributing to the over- or under-estimation of the overall global brain age. The impact of an occluded region on the output is hard to quantify, as it is rarely ever one region in the image that contributes entirely to the final output. The output of DL models is a combination of multiple features, and occlusion sensitivity maps do not account for the impact of occluding multiple regions together.
Most traditional interpretability techniques have the advantage of being usable for multiple use cases and with different model architectures; however, they are used as a post-modeling step to assess whether the trained model has learned accurate features. Our proposed model, on the other hand, is specific to the brain age prediction problem (as of now, although it can be extended to other imaging research problems); however, it is a modeling technique rather than an additional algorithm to check for interpretability. It ensures that accurate features are learned to predict brain age and, additionally, also provides spatial information on the brain aging process.
## 5 Conclusion
In this article, we used our recently proposed voxel-level brain age prediction model (Gianchandani et al., 2023) as a step towards interpretability in the brain age prediction realm. This perspective on brain age prediction is relatively new and has not been explored from an interpretability point of view in the past. We also compare the outputs of the proposed voxel-level age prediction model to existing traditional interpretability methods and reflect on the differences between them. Through our findings, we show that our proposed model provides an additional level of interpretability and a fine-grained analysis of the features used in the model's decision-making process, and that it is quantitative in nature.
## Acknowledgements
The authors would like to express our appreciation to the reviewers for their feedback which helped in refining and strengthening the final version of the paper. This project is supported by the Hotchkiss Brain Institute, University of Calgary, Alberta Innovates, the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Digital Research Alliance of Canada. The authors would like to thank Denvr Dataworks for not only providing access to their high-performance supercluster but also for making our compute-intensive project more sustainable. With their environmentally conscious cloud infrastructure and user-friendly AI Cloud software, we were able to train our models efficiently while conserving water and reducing our carbon footprint. This collaboration allowed us to advance our research in interpretable medical machine intelligence, creating a positive impact on healthcare while demonstrating our commitment to sustainable practices.
|
2307.02080 | Resurgent Structure of the Topological String and the First Painlevé
Equation | We present an explicit formula for the Stokes automorphism acting on the
topological string partition function. When written in terms of the dual
partition function, our formula implies that flat coordinates in topological
string theory transform as quantum periods, and according to the
Delabaere-Dillinger-Pham formula. We first show how the formula follows from
the non-linear Stokes phenomenon of the Painlev\'e I equation, together with
the connection between its $\tau$-function and topological strings on elliptic
curves. Then, we show that this formula is also a consequence of a recent
conjecture on the resurgent structure of the topological string, based on the
holomorphic anomaly equations, and it is in fact valid for arbitrary Calabi-Yau
threefolds. | Kohei Iwaki, Marcos Marino | 2023-07-05T07:44:32Z | http://arxiv.org/abs/2307.02080v4 | # Resurgent structure of the topological string and the first Painleve equation
###### Abstract.
We present an explicit formula for the Stokes automorphism acting on the topological string partition function. When written in terms of the dual partition function, our formula implies that flat coordinates in topological string theory transform as quantum periods, and according to the Delabaere-Dillinger-Pham formula. We first show how the formula follows from the non-linear Stokes phenomenon of the Painleve I equation, together with the connection between its \(\tau\)-function and topological strings on elliptic curves. Then, we show that this formula is also a consequence of a recent conjecture on the resurgent structure of the topological string, based on the holomorphic anomaly equations, and it is in fact valid for arbitrary Calabi-Yau threefolds.
Key words and phrases:resurgence, first Painleve equation, perturbative series, Borel resummation, Stokes automorphisms, Stokes constants, topological string theory.
The basic quantities in topological string theory are given by formal perturbative series in the string coupling constant \(g_{s}\), which are factorially divergent. One can then ask, in the spirit of the theory of resurgence [11, 12], what is the resurgent structure of these series, i.e. what are the exponentially small trans-series associated to their Borel singularities, and the associated Stokes constants. This question has been recently addressed in various works. Based on the trans-series approach of [13, 14, 15], a conjecture for the resurgent structure of topological string theory on arbitrary Calabi-Yau (CY) threefolds was put forward in [16, 17]. Although topological strings are closely related to quantum periods (also known as Voros symbols), their resurgent structure is quite different. For example, it is expected that the Stokes automorphisms of quantum periods are given by the Delabaere-Dillinger-Pham (DDP) formula [1, 2, 3] (see [16, 17] for recent arguments in that direction), but the Stokes automorphisms act on the topological string partition function with a more complicated structure, and no closed formula is known for them.
In this paper we provide such a closed formula. It reads
\[\boxed{\mathfrak{S}Z(\nu;g_{s})=\exp\left\{a\mathrm{Li}_{2}\left(\mathrm{e}^{ -g_{s}\partial}\right)-g_{s}a\log\left(1-\mathrm{e}^{-g_{s}\partial}\right) \partial\right\}Z(\nu;g_{s})} \tag{1}\]
where \(a\) is a Stokes constant, \(\nu\) is the flat coordinate (or period) parametrizing the moduli space, and \(\partial\) is the derivative w.r.t. \(\nu\)1.
Footnote 1: We have written the formula in a somewhat schematic way, and for a simple case, involving a single modulus; additional details, generalizations and clarifications can be found below.
Although the partition function transforms in a relatively complicated way under a Stokes automorphism, the dual partition function (obtained from the original one after a discrete Fourier transform) transforms in a very simple way:
\[\boxed{\mathfrak{S}\tau(\nu,\rho;g_{s})=\mathrm{e}^{a\mathrm{Li}_{2}(\mathrm{e }^{2\pi\mathrm{i}\rho/g_{s}})}\tau\left(\nu-ag_{s}\log(1-\mathrm{e}^{2\pi \mathrm{i}\rho/g_{s}}),\rho;g_{s}\right)} \tag{2}\]
i.e. it picks a prefactor involving the dilogarithm function, and its argument transforms indeed as a quantum period, following the DDP formula!
In this paper we give two lines of argument that lead to (1) and (2). The first one is based on the connection found in [18] between \(\tau\)-functions of the first Painleve equation, and a dual partition function in topological string theory. By using this connection, one can show that the non-linear Stokes phenomenon of the first Painleve equation leads to the above transformation formula for the dual partition function under a Stokes automorphism. Although this derivation is based on a particular example of the topological string partition function, the resulting formula turns out to be universally valid. To show this, we develop a second argument based on the conjectures for the resurgent structure of the topological string proposed in [16, 17], and we provide a derivation of (1) in the spirit of alien calculus. Since the conjectures of [16, 17] are valid for any topological string partition function which satisfies the holomorphic anomaly equations (HAE) of [17], we conclude that (1) applies universally to topological strings on arbitrary backgrounds.
As is mentioned above, the derivation based on [18] naturally relates the DDP formula to our main formula. The DDP formula has a close relationship with the BPS invariants
in class \(\mathcal{S}\) theories, which are defined by a weighted counting of saddle connections in the spectral network [14]. In fact, the DDP formula for the quantum period \(V_{\mu}\) reads
\[\mathrm{e}^{V_{\mu}}\mapsto\mathrm{e}^{V_{\mu}}(1+\sigma(\gamma)\mathrm{e}^{V_{ \gamma}})^{\Omega(\gamma)\langle\mu,\gamma\rangle}, \tag{3}\]
where \(\gamma\) is the cycle on the spectral curve associated with the saddle connection, \(\sigma(\gamma)\in\{\pm 1\}\), and \(\Omega(\gamma)\) is the BPS invariant; see e.g. [14, 1]. In view of this, we expect the Stokes constant \(a\) appearing in (1), (2) and in the formulae in section 3 to be given by a BPS invariant \(\Omega(\gamma)\). The precise relation is
\[a=\frac{\Omega(\gamma)}{2\pi\mathrm{i}}. \tag{4}\]
In the more general case considered in this paper, the \(\Omega(\gamma)\) are given by the Donaldson-Thomas invariants of the CY threefold. Here, the cycle \(\gamma\) is dual to the Borel singularity \(\mathcal{A}\) underlying the Stokes discontinuity, and as we explain below \(\mathcal{A}\) is an integral period of the CY geometry (up to an overall normalization). This identification between Stokes constants appearing in the resurgent structure of the topological string and BPS invariants was already suggested in [14, 15], and it is also consistent with the results of [13, 14]. Further evidence for this identification appears in the recent paper [13].
We should mention that close cousins of (2) have appeared before in different, but related contexts. In [1] the transformation properties of a certain family of theta series defined on the (twistor space of the) hypermultiplet moduli space of CY threefolds under wall-crossing were studied. In the special case \(k=1\), the theta series (1.5) in [1] reduces to the dual topological partition function [1], and their wall-crossing formula (1.9) agrees with our formula for the Stokes automorphism. In addition, for the formula to agree, one must have the relation (4), giving in this way additional evidence for the identification between Stokes constants and Donaldson-Thomas invariants. A detailed comparison between the wall-crossing formula of [1] and our formula will be made in section 3. A similar wall-crossing formula was also found in [11] for the dual partition functions of some \(\mathcal{N}=2\) gauge theories. Let us point out that in [1, 1, 12] the transformation of the flat coordinate according to the DDP formula is essentially built in, while in our case it follows in a more indirect (and surprising) way from the resurgent structure of the topological string perturbative series. It would be very interesting to understand better the relation between [1, 1, 13] and our approach.
The result obtained in this paper can be understood as relating the Stokes automorphism acting on the topological string partition function, to the Stokes automorphism acting on quantum periods. It might have a connection to the blowup formula which relates in a similar way the topological string free energy to the Nekrasov-Shatashvili free energy [15, 16]. This idea has also been developed in [13] and it might lead to a different derivation of our main formula. It would be also interesting to study the relation between our work and a series of papers by Bridgeland [1, 2, 1].
After this paper was finished, we were informed by R. Schiappa and M. Schwick that in forthcoming work [21] they address similar issues and obtain related results, albeit with different methods.
### Acknowledgements
We would like to thank Ioana Coman, Fabrizio del Monte, Alba Grassi, Paolo Gregori, Jie Gu, Shinobu Hosono, Omar Kidwai, Oleg Lisovyy, Pietro Longhi, Kento Osuga, Boris Pioline, Ricardo Schiappa, Masa-Hiko Saito, Maximilian Schwick, Atsushi Takahashi, Yoshitugu Takei and Joerg Teschner for useful conversations. We would also like to thank Maxim Kontsevich and Yan Soibelman for organizing the IHES school "Wall-crossing structures, analyticity and resurgence," which made this collaboration possible. The work of K.I. has been supported by the JSPS KAKENHI Grant Numbers 20K14323, 21K18576, 21H04994, 22H00094, 23K17654. The work of M.M. has been supported in part by the ERC-SyG project "Recursive and Exact New Quantum Theory" (ReNewQuantum), which received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreement No. 810573.
## 2. From Non-linear Stokes Phenomenon of Painleve Transcendents to Main Formula
In this section, we give a derivation of the formula (2) for the topological recursion partition function through the analysis of the non-linear Stokes phenomenon of the Painleve equation. Our derivation is based on the exact WKB theoretic approach to the Stokes multipliers of the isomonodromy system associated with the Painleve equation, which was developed in [20, §5]. We note that our result is closely related to [1, 17, 18], and we will comment on this at the end of this section.
### Topological Recursion and Painleve I
Let us briefly review the relationship between topological recursion and the first Painleve equation.
We focus on the topological recursion partition function
\[Z(t,\nu;g_{s})=\exp\left(\sum_{g\geq 0}g_{s}^{2g-2}F_{g}(t,\nu)\right) \tag{5}\]
defined from a family of genus \(1\) spectral curves of the form
\[\Sigma\ :\ y^{2}=4x^{3}+2tx+u(t,\nu). \tag{6}\]
Here \(u(t,\nu)\) is a locally-defined function given by the implicit relation
\[\nu=\frac{1}{2\pi\mathrm{i}}\oint_{A}y\mathrm{d}x \tag{7}\]
after fixing a choice of symplectic basis \(A,B\) of \(H_{1}(\Sigma,\mathbb{Z})\). See Appendix A for the definition of \(F_{g}\) (see also [11, 20]).
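As a numerical illustration of the relation (7), the period can be evaluated directly as a contour integral in the complex \(x\)-plane. The following sketch traces a circle enclosing two branch points of \(y^{2}=4x^{3}+2tx+u\) and enforces continuity of the branch of \(y\) along the path; the values of \(t\) and \(u\), the choice of which branch points the \(A\)-cycle encircles, and the overall orientation are illustrative conventions.

```python
import numpy as np

def a_period(t, u, n_pts=4000):
    """nu = (1 / 2 pi i) * contour integral of y dx around two branch points,
    with y = sqrt(4x^3 + 2tx + u) continued continuously along the path."""
    roots = np.sort_complex(np.roots([4.0, 0.0, 2.0 * t, u]))
    center = 0.5 * (roots[0] + roots[1])          # encircle the first two roots
    # radius chosen to separate the third branch point for these parameters
    radius = 0.6 * abs(roots[0] - roots[1])
    theta = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    xs = center + radius * np.exp(1j * theta)
    ys = np.sqrt(4.0 * xs**3 + 2.0 * t * xs + u + 0j)
    # enforce continuity of the branch of y along the contour
    for k in range(1, n_pts):
        if abs(ys[k] - ys[k - 1]) > abs(ys[k] + ys[k - 1]):
            ys[k] = -ys[k]
    dx = np.roll(xs, -1) - xs
    return (ys * dx).sum() / (2.0j * np.pi)

print(a_period(t=-1.0, u=0.5))
```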
It was shown in [20] that the discrete Fourier transform
\[\tau(t,\nu,\rho;g_{s})=\sum_{k\in\mathbb{Z}}\mathrm{e}^{2\pi\mathrm{i}k\rho/g _{s}}Z(t,\nu+kg_{s};g_{s}) \tag{8}\]
gives a \(\tau\)-function for the Painleve I equation2
Footnote 2: This construction is now generalized to all Painleve equations; see [1, 20, 1].
\[g_{s}^{2}\frac{\mathrm{d}^{2}q}{\mathrm{d}t^{2}}=6q^{2}+t. \tag{9}\]
That is,
\[q(t,\nu,\rho;g_{s})=-g_{s}^{2}\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\log\tau(t, \nu,\rho;g_{s}) \tag{10}\]
is a formal solution of (9). The parameters \(\nu\) and \(\rho\) are regarded as integration constants parametrizing the general solution of Painleve I. The formula (8) is analogous to the formula obtained in [12, 17].
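As an aside, away from its poles the Painleve I equation (9) can be integrated numerically without difficulty; the following minimal sketch uses SciPy, with initial data and \(g_{s}\) chosen arbitrarily for illustration and unrelated to the specific formal solution (10).

```python
import numpy as np
from scipy.integrate import solve_ivp

def painleve_I(t, y, gs=1.0):
    """First-order form of gs^2 q'' = 6 q^2 + t, with y = (q, q')."""
    q, dq = y
    return [dq, (6.0 * q**2 + t) / gs**2]

# Integrate towards negative t, where solutions oscillate around -sqrt(-t/6);
# generic PI solutions have double poles, so any such integration is local.
sol = solve_ivp(painleve_I, (0.0, -6.0), y0=[0.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0.0, -6.0, 300)
qs = sol.sol(ts)[0]
print(qs[-1], -np.sqrt(-ts[-1] / 6.0))  # compare with the leading asymptotics
```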
To derive our main formula (1), we will also use the isomonodromy system associated with Painleve I [11]:
\[\begin{cases}&\left[g_{s}^{2}\frac{\partial^{2}}{\partial x^{2}}- \frac{g_{s}}{x-q}\left(g_{s}\frac{\partial}{\partial x}-p\right)-(4x^{3}+2tx+2 H)\right]Y=0,\\ &\left[g_{s}\frac{\partial}{\partial t}-\frac{1}{2(x-q)}\left(g_{s} \frac{\partial}{\partial x}-p\right)\right]Y=0,\end{cases} \tag{11}\]
where
\[p=g_{s}\frac{\mathrm{d}q}{\mathrm{d}t},\qquad H=\frac{1}{2}p^{2}-2q^{3}-tq. \tag{12}\]
It is well-known that the compatibility condition of the system (11) of PDEs is given by the Painleve I equation. The compatibility implies that the Stokes multipliers3 of the first equation of the system (11), which is a linear ODE with an irregular singular point \(x=\infty\) of Poincare rank \(5/2\), are independent of the isomonodromic time \(t\). It was also shown in [18] that
Footnote 3: This paper discusses two types of Stokes phenomenon. The first one is related to the limit \(g_{s}\to 0\), while the second one is related to \(x\to\infty\). Our main formulas (1) and (2) are related to the first type of Stokes phenomenon, but we analyze the second type as well to derive the main formulas.
\[Y_{\pm}(x,t,\nu,\rho;g_{s})=\frac{\sum_{k\in\mathbb{Z}}\mathrm{e}^{2\pi\mathrm{ i}k\rho/g_{s}}\,Z(t,\nu+kg_{s};g_{s})\,\chi_{\pm}(x,t,\nu+kg_{s};g_{s})}{ \sum_{k\in\mathbb{Z}}\mathrm{e}^{2\pi\mathrm{i}k\rho/g_{s}}\,Z(t,\nu+kg_{s};g _{s})} \tag{13}\]
gives a formal solution for the isomonodromy system. Here, \(\chi_{\pm}\) is a WKB-type formal series defined by the "quantum curve formula" as follows4 (see [1] for example):
Footnote 4: Precisely speaking, we must regularize the integral in (14) for \((g,n)=(0,1)\) and \((0,2)\); see [18].
\[\begin{split}\chi_{\pm}&(x,t,\nu;g_{s})\\ &=\exp\Biggl{(}\sum_{\begin{subarray}{c}g\geq 0\\ n\geq 1\end{subarray}}\frac{(\pm g_{s})^{2g-2+n}}{n!}\int_{0}^{z(x)}\cdots \int_{0}^{z(x)}\Bigl{(}\omega_{g,n}(z_{1},\ldots,z_{n})-\delta_{g,0}\delta_{ n,2}\frac{\mathrm{d}x(z_{1})\,\mathrm{d}x(z_{2})}{(x(z_{1})-x(z_{2}))^{2}} \Bigr{)}\Biggr{)}.\end{split} \tag{14}\]
Here, \(\omega_{g,n}\)'s are the topological recursion correlators (see Appendix A for the definition).
The most important formula concerning the formal solution to the isomonodromy system (11) is the following one, which describes the term-wise analytic continuation along the \(A\)-cycle and the \(B\)-cycle:
\[Y_{\pm}(x,t,\nu,\rho;g_{s})\ \mapsto\ \begin{cases}\mathrm{e}^{\pm 2\pi\mathrm{i}\nu/g _{s}}\,Y_{\pm}(x,t,\nu,\rho;g_{s})&\text{along $A$-cycle}\\ \mathrm{e}^{\mp 2\pi\mathrm{i}\rho/g_{s}}\,Y_{\pm}(x,t,\nu,\rho;g_{s})&\text{along $B$- cycle}.\end{cases} \tag{15}\]
See [13, Theorem 3.9 and Theorem 4.8] for the derivation of (15). In the spirit of the exact WKB analysis, and the abelianization in the sense of Gaiotto-Moore-Neitzke [17], the monodromy and Stokes data of a Schrodinger-type linear ODE should be written in terms of the quantum periods on the spectral curve (see [14, SS3] for more details). Exponentials of those periods are called Voros symbols, which play an important role in exact WKB analysis (see [11, 12, 13] for example). In view of the property (15), it is natural to define the Voros symbols of the isomonodromy system (11) along \(A\)-cycle and \(B\)-cycle by \(\mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}}\) and \(\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}\), respectively, even though the parameter \(\rho\) is not an actual period integral on the spectral curve. These Voros symbols are \(t\)-independent, so it is natural to expect that \(Y_{\pm}\) satisfies an isomonodromy system. This is a philosophical remark on the result of [13].
### Stokes graph of the isomonodromy system
Here we also review the discussion of [13, SS5] on the computation of the Stokes multipliers of the isomonodromy system.
Let us take the meromorphic quadratic differential
\[\varphi(x)=(4x^{3}+2tx+u(t,\nu))\,\mathrm{d}x^{2}, \tag{16}\]
and define the Stokes graph of (the first equation of) the isomonodromy system (11) as the following graph on \(\mathbb{P}^{1}\):
* The vertices of the Stokes graph consist of the zeros and poles of \(\varphi\).
* The edges of the Stokes graph, called Stokes curves, are trajectories of \(\varphi\) emanating from a zero of \(\varphi\).
Here, a (horizontal) trajectory of \(\varphi\) is any maximal leaf of the foliation on \(\mathbb{P}^{1}\) defined by
\[\mathrm{Im}\int^{x}\sqrt{\varphi(x)}=\text{constant}. \tag{17}\]
See [12] for more details on trajectories of quadratic differentials.
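Although not part of the original discussion, the trajectory structure is easy to explore numerically: a horizontal trajectory is an integral curve of the direction field \(\mathrm{d}x/\mathrm{d}s\propto 1/\sqrt{Q(x)}\), where \(\varphi=Q(x)\,\mathrm{d}x^{2}\), and three Stokes curves leave each simple zero \(e\) at angles \(\beta_{k}=2\pi k/3-\arg(Q^{\prime}(e))/3\). The following Python sketch is only an illustration; the values of \(t\) and \(u\) are assumptions, roughly matching the example discussed in SS2.3 below.

```python
import numpy as np

t, u = -5.0, -0.1288                      # illustrative parameters, cf. Sec. 2.3
Q = lambda x: 4*x**3 + 2*t*x + u          # phi(x) = Q(x) dx^2, as in (16)

def trace(x0, v0, steps=4000, ds=2e-3):
    # crude unit-speed integration of dx/ds ~ 1/sqrt(Q(x)) with branch
    # continuity, so that Im int sqrt(phi) stays constant along the curve (17)
    xs, v = [x0], v0
    for _ in range(steps):
        w = 1.0 / np.sqrt(Q(xs[-1]))
        w = w / abs(w)                    # keep only the direction
        if np.real(np.conj(v) * w) < 0:   # pick the continuous sqrt branch
            w = -w
        v = w
        xs.append(xs[-1] + ds*v)
    return np.array(xs)

zeros = np.roots([4, 0, 2*t, u]).astype(complex)
for e in zeros:
    dQ = 12*e**2 + 2*t                    # Q'(e); near e, sqrt(Q) ~ (x-e)^{1/2}
    for k in range(3):
        beta = 2*np.pi*k/3 - np.angle(dQ)/3
        curve = trace(e + 1e-4*np.exp(1j*beta), np.exp(1j*beta))
        # e.g. plt.plot(curve.real, curve.imag) reproduces graphs like Figure 1
```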
The Stokes graph is also known as an example of spectral networks of [17]. Note that the Stokes graph depends on the parameters \(t\) and \(\nu\). Away from the zero locus of the discriminant, \(\varphi\) has three simple zeros, and three Stokes curves emanate from each simple zero. Each face of the Stokes graph is called a Stokes region. Some examples are shown in Figure 1. The neighborhood of the irregular singular point \(x=\infty\) is divided into five sectors by asymptotic directions of Stokes curves. The five directions are called singular directions.
The main conjectural ansatzes for the computation of the Stokes multipliers around \(x=\infty\) are the following:
1. If the Stokes graph does not contain any saddle connection (i.e., a Stokes curve connecting zeros of \(\varphi\)), then the partition function (5) is Borel summable. Moreover, the discrete Fourier series \[\mathscr{T}(t,\nu,\rho;g_{s})=\sum_{k\in\mathbb{Z}}\mathrm{e}^{2\pi\mathrm{i} k\rho/g_{s}}\mathcal{Z}(t,\nu+kg_{s};g_{s})\] (18)
converges and gives an analytic \(\tau\)-function of Painleve I. Here, \(\mathcal{Z}\) is the Borel sum of the partition function \(Z\).
2. Under the same saddle-free condition, the WKB-type series \(\chi_{\pm}\) defined in (14) is Borel summable on each Stokes region. Moreover, \[\mathscr{Y}_{\pm}(x,t,\nu,\rho;g_{s})=\frac{\sum_{k\in\mathbb{Z}}\mathrm{e}^{2\pi\mathrm{i}k\rho/g_{s}}\,\mathcal{Z}(t,\nu+kg_{s};g_{s})\,\mathcal{X}_{\pm}(x,t,\nu+kg_{s};g_{s})}{\sum_{k\in\mathbb{Z}}\mathrm{e}^{2\pi\mathrm{i}k\rho/g_{s}}\,\mathcal{Z}(t,\nu+kg_{s};g_{s})}\] (19) converges and gives an analytic solution of the isomonodromy system (11) on the Stokes region. Here, \(\mathcal{X}_{\pm}\) is the Borel sum of \(\chi_{\pm}\).
3. Under the same saddle-free condition, the Borel sums \(\mathcal{X}_{\pm}\) of the WKB-type series (14) defined on adjacent faces of the Stokes graph are related by the Voros connection formula [23, 24, 25] (or the path-lifting rule in the sense of Gaiotto-Moore-Neitzke [13]).
Figure 1. Stokes graphs for several \(t\) (with \(\nu=1\)). Here we choose \(t_{c}=-5\) and \(\epsilon=1/2\).
The assumption (i) is consistent with the conjecture in [11, 12, 13] which claims that the Borel singularities appear on the lattice generated by the integral periods of \(y\mathrm{d}x\). Note that the saddle-free condition in (i) is satisfied if all the integral periods of \(y\mathrm{d}x\) on the spectral curve have a non-zero imaginary part. Under the assumption (ii), we have five canonical solutions \(\mathscr{Y}_{\pm}^{(j)}\) defined in the Stokes region \(D_{j}\) in Figure 1 (\(j=0,\pm 1,\pm 2\) mod \(5\)). Then, we define the Stokes multiplier \(s_{j}\) attached to the \(j\)-th singular direction \(\arg x=2\pi j/5\) as the non-trivial entry of the Stokes matrix \(\mathcal{S}_{j}\) defined by
\[(\mathscr{Y}_{+}^{(j+1)},\mathscr{Y}_{-}^{(j+1)})=(\mathscr{Y}_{+}^{(j)}, \mathscr{Y}_{-}^{(j)})\cdot\mathcal{S}_{j}, \tag{20}\]
where
\[\mathcal{S}_{j}=\begin{cases}\begin{pmatrix}1&s_{j}\\ 0&1\end{pmatrix}&j=0,\pm 2\\ \begin{pmatrix}1&0\\ s_{j}&1\end{pmatrix}&j=\pm 1.\end{cases} \tag{21}\]
The assumption (iii) enables us to describe these Stokes multipliers explicitly via the Voros symbols \(\mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}}\) and \(\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}\). A rough explanation is the following: the Voros connection formula allows us to describe the analytic continuation of the Borel-resummed WKB solution by the term-wise analytic continuations on the spectral curve, and the formula (15) gives an explicit description of those analytic continuations in terms of the Voros symbols (see [14, SS5] for details). We will present the resulting Stokes multipliers in the next subsection.
### Delabaere-Dillinger-Pham formula and non-linear Stokes phenomenon
Now, let us discuss the non-linear Stokes phenomenon in the Painleve equation. We borrow the idea of Takei used in [13, 14], where he relates the mutation of the Stokes graph (which we will explain shortly) to the non-linear Stokes phenomenon of the Painleve transcendents, through the invariance of the Stokes multipliers of (11). This idea has been recently further developed in [10, 11]. We also note that our result is closely related to [12, 13] based on the Riemann-Hilbert method.
Figure 2. \(A\)-cycle and \(B\)-cycle.
Figure 1 depicts Stokes graphs for \(\nu=1\) and several \(t\)'s which are close to
\[t_{c}=-5. \tag{22}\]
Let us label the three zeros of \(\varphi(x)\) at \(t=t_{c}\) as
\[e_{1}\approx 1.59075,\qquad e_{2}\approx-0.01288,\qquad e_{3}\approx-1.57157, \tag{23}\]
and take the \(A\) cycle (resp., the \(B\) cycle) as the cycle encircling the zeros \(e_{1}\) and \(e_{2}\) (resp., \(e_{2}\) and \(e_{3}\)). Here the orientation of these cycles is chosen so that
\[\oint_{A}y\mathrm{d}x=2\int_{e_{2}}^{e_{1}}y\mathrm{d}x,\qquad\oint_{B}y \mathrm{d}x=-2\int_{e_{3}}^{e_{2}}y\mathrm{d}x, \tag{24}\]
where the branch of \(y\) is chosen so that it has a positive imaginary part (resp., a negative real part) on the segment \([e_{2},e_{1}]\) (resp., \([e_{3},e_{2}]\)). We may observe that a saddle connection appears at \(t=t_{c}\), where the \(B\)-period
\[\oint_{B}y\mathrm{d}x\approx 5.87065 \tag{25}\]
of \(y\mathrm{d}x\) has zero imaginary part. As we have mentioned in SS2.2, we expect that this induces singularities on the positive real axis on the Borel-plane, and we will discuss the action of Stokes automorphism below.
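These numerical values can be reproduced with a short computation (not spelled out in the original text): fix \(u\) by the \(A\)-period condition (7) with \(\nu=1\), and integrate with the branch choices described above. The root-finding bracket below is an assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

t, nu = -5.0, 1.0
Q = lambda x, u: 4*x**3 + 2*t*x + u

def roots(u):
    e3, e2, e1 = np.sort(np.roots([4, 0, 2*t, u]).real)  # three real zeros here
    return e3, e2, e1

def A_period(u):
    e3, e2, e1 = roots(u)
    # nu = (1/2 pi i) oint_A y dx = (1/pi) int_{e2}^{e1} sqrt(-Q) dx,
    # since y has positive imaginary part on [e2, e1]
    val, _ = quad(lambda x: np.sqrt(max(-Q(x, u), 0.0)), e2, e1)
    return val / np.pi

u = brentq(lambda u: A_period(u) - nu, -1.0, 1.0)
e3, e2, e1 = roots(u)
print(e1, e2, e3)     # compare with the zeros quoted in (23)

# oint_B y dx = -2 int_{e3}^{e2} y dx = +2 int |y| dx, cf. (24)-(25)
B, _ = quad(lambda x: np.sqrt(max(Q(x, u), 0.0)), e3, e2)
print(2*B)            # ~ 5.87
```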
The saddle connection is resolved and we have saddle-free Stokes graphs under a small variation of \(t\), since the \(B\)-periods of \(y\mathrm{d}x\) are no longer real:
\[\oint_{B}y\mathrm{d}x\approx\begin{cases}5.83465+1.47447\mathrm{i}&\text{at }t=t_{c}-\mathrm{i}\epsilon\\ 5.83465-1.47447\mathrm{i}&\text{at }t=t_{c}+\mathrm{i}\epsilon.\end{cases} \tag{26}\]
Here we take \(\epsilon=1/2\). We can observe that the topology of the Stokes graphs changes discontinuously before and after the appearance of the saddle connection. This is what we call the mutation of the Stokes graph.
Since the Stokes graphs at \(t=t_{c}\pm\mathrm{i}\epsilon\) are saddle-free, we can use the recipe of [14, SS5] to compute the Stokes multipliers around \(x=\infty\). The resulting Stokes multipliers are
\[\text{At }t=t_{c}-\mathrm{i}\epsilon:\quad\begin{cases}s_{-2}=\mathrm{i}\, \mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}}\\ s_{-1}=\mathrm{i}\,\big{(}\mathrm{e}^{-2\pi\mathrm{i}\nu/g_{s}}-\mathrm{e}^{-2 \pi\mathrm{i}(\nu+\rho)/g_{s}}+\mathrm{e}^{-2\pi\mathrm{i}\rho/g_{s}}\big{)} \\ s_{0}=\mathrm{i}\,\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}\\ s_{1}=\mathrm{i}\,\mathrm{e}^{-2\pi\mathrm{i}\rho/g_{s}}(1-\mathrm{e}^{2\pi \mathrm{i}\nu/g_{s}})\\ s_{2}=\mathrm{i}\,\mathrm{e}^{-2\pi\mathrm{i}\nu/g_{s}}(1-\mathrm{e}^{2\pi \mathrm{i}\rho/g_{s}}).\end{cases} \tag{27}\]
\[\text{At }t=t_{c}+\mathrm{i}\epsilon:\quad\begin{cases}s_{-2}=\mathrm{i}\, \mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}}(1-\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}) \\ s_{-1}=\mathrm{i}\,\mathrm{e}^{-2\pi\mathrm{i}\rho/g_{s}}(1-\mathrm{e}^{-2\pi \mathrm{i}\nu/g_{s}})\\ s_{0}=\mathrm{i}\,\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}\\ s_{1}=\mathrm{i}\,(\mathrm{e}^{-2\pi\mathrm{i}\rho/g_{s}}-\mathrm{e}^{2\pi \mathrm{i}(\nu-\rho)/g_{s}}+\mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}})\\ s_{2}=\mathrm{i}\,\mathrm{e}^{-2\pi\mathrm{i}\nu/g_{s}}.\end{cases} \tag{28}\]
We can check that the consistency conditions (cf. [11])
\[1+s_{j-1}s_{j}+\mathrm{i}\,s_{j+2}=0\quad(s_{j}=s_{j+5}) \tag{29}\]
of the Stokes multipliers are satisfied for both cases. We note that the result agrees with the computation of [1, SS7.4]. This is also consistent with [12, vSV22] through the identification of [1, (7.71)-(7.72)].
Thus, we have seen that the mutation of Stokes graphs induces a discontinuous change of the expressions of the Stokes multipliers. It is easy to observe that the Stokes data (28) are obtained from (27) by the transformation
\[(\mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}},\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}) \mapsto(\mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}}\,(1-\mathrm{e}^{2\pi\mathrm{i} \rho/g_{s}}),\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}) \tag{30}\]
In fact, this is an example of a cluster transformation (or a Kontsevich-Soibelman transformation). Our observation is consistent with the results of [1, 1, 2, 3, 4], where the mutation of Stokes graphs induces a cluster transformation for the Voros symbols (the quantum periods, the Fock-Goncharov coordinates). The formula (30) is an example of the Delabaere-Dillinger-Pham (DDP) formula.
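Both statements can be verified symbolically. The following sympy sketch (not part of the original derivation) writes \(x=\mathrm{e}^{2\pi\mathrm{i}\nu/g_{s}}\), \(y=\mathrm{e}^{2\pi\mathrm{i}\rho/g_{s}}\), and checks that (27) and (28) satisfy (29) and are related by (30):

```python
import sympy as sp

x, y = sp.symbols('x y')        # x = e^{2 pi i nu/g_s}, y = e^{2 pi i rho/g_s}
I = sp.I

# Stokes multipliers (27) (at t = t_c - i*eps) and (28) (at t = t_c + i*eps)
s27 = {-2: I*x, -1: I*(1/x - 1/(x*y) + 1/y), 0: I*y,
        1: I*(1 - x)/y, 2: I*(1 - y)/x}
s28 = {-2: I*x*(1 - y), -1: I*(1 - 1/x)/y, 0: I*y,
        1: I*(1/y - x/y + x), 2: I/x}

m = lambda j: ((j + 2) % 5) - 2   # indices mod 5, kept in {-2, ..., 2}

# consistency condition (29) for both sets of Stokes data
for s in (s27, s28):
    assert all(sp.simplify(1 + s[m(j - 1)]*s[m(j)] + I*s[m(j + 2)]) == 0
               for j in range(-2, 3))

# the transformation (30), x -> x(1 - y), maps (27) into (28)
assert all(sp.simplify(s27[j].subs(x, x*(1 - y)) - s28[j]) == 0 for j in s27)
print("checks passed")
```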
Now we get to the crucial point. Since \(t\) is the isomonodromic time, these Stokes multipliers must be preserved under the variation of \(t\). Therefore, the above formulas suggest the following: Let \((\nu^{+},\rho^{+})\) and \((\nu^{-},\rho^{-})\) be possibly different pairs of parameters, and let \(\mathscr{T}^{\pm}(t,\nu^{\pm},\rho^{\pm},g_{s})\) be the Borel sum of \(\tau(t,\nu^{\pm},\rho^{\pm},g_{s})\) defined at \(t=t_{c}\pm\mathrm{i}\epsilon\). If these \(\tau\)-functions correspond to a common solution of Painleve I, then they must be glued at \(t=t_{c}\) so that the corresponding Stokes multipliers are identical. Namely, we have
\[\mathscr{T}^{-}(t,\nu^{-},\rho^{-};g_{s})=\mathrm{e}^{\frac{1}{2\pi\mathrm{i}}\mathrm{Li}_{2}(\mathrm{e}^{2\pi\mathrm{i}\rho^{+}/g_{s}})}\,\mathscr{T}^{+}(t,\nu^{+},\rho^{+};g_{s}) \tag{31}\]
with
\[(\nu^{+},\rho^{+})=\left(\nu^{-}-\frac{g_{s}}{2\pi\mathrm{i}}\log(1-\mathrm{e} ^{2\pi\mathrm{i}\rho^{-}/g_{s}}),\rho^{-}\right). \tag{32}\]
Here, the prefactor \(\exp(\frac{1}{2\pi{\rm i}}{\rm Li}_{2}(e^{2\pi{\rm i}\rho^{+}/g_{s}}))\) on the right hand side is related to the generating function of the monodromy symplectomorphism (which is the cluster transformation in this case); see [17, 16, 18]5.
Footnote 5: We would like to thank I. Coman and F. Del Monte for helpful discussion on the prefactor.
This formula can be regarded as the connection formula which describes the non-linear Stokes phenomenon for the Painleve transcendents. We may write the formula in terms of the Stokes automorphism as
\[\mathfrak{S}\tau(t,\nu,\rho;g_{s})={\rm e}^{\frac{1}{2\pi{\rm i}}{\rm Li}_{2}(e ^{2\pi{\rm i}\rho/g_{s}})}\tau\left(t,\nu-\frac{g_{s}}{2\pi{\rm i}}\log(1-e^{2 \pi{\rm i}\rho/g_{s}}),\rho;g_{s}\right). \tag{33}\]
If we look at the zero Fourier mode (i.e., the coefficient of \(e^{2\pi{\rm i}k\rho/g_{s}}\) with \(k=0\)), we have the all-order instanton corrections of the partition function:
\[\mathfrak{S}Z(t,\nu;g_{s})=\sum_{n=0}^{\infty}Z^{(n)}(t,\nu;g_{s}) \tag{34}\]
with the terms of the form
\[Z^{(n)}(t,\nu;g_{s})=\mathfrak{Z}^{(n)}(t,\nu-ng_{s};g_{s})\,Z(t,\nu-ng_{s};g_{ s}). \tag{35}\]
Here, \(\mathfrak{Z}^{(n)}(t,\nu;g_{s})\) is a differential polynomial of the free energy \(F(t,\nu;g_{s})\) with respect to \(\nu\). The Seiberg-Witten relation \(\partial_{\nu}F_{0}=\oint_{B}y{\rm d}x\) implies that \(Z^{(n)}\) is a formal power series in \(g_{s}\) with an exponential factor \({\rm e}^{-n\oint_{B}y{\rm d}x/g_{s}}\). Namely, \(Z^{(n)}\) is an \(n\)-instanton amplitude. It turns out that the first few terms
\[Z^{(0)}(t,\nu;g_{s}) =Z(t,\nu;g_{s}) \tag{36}\] \[Z^{(1)}(t,\nu;g_{s}) =\left(1+\frac{g_{s}}{2\pi{\rm i}}\frac{\partial F}{\partial\nu} (t,\nu-g_{s};g_{s})\right)Z(t,\nu-g_{s};g_{s}) \tag{37}\]
of (34) precisely agree (up to a normalization factor) with the multi-instanton results for the topological string obtained in [11, 12]. In the next section, we will show that the agreement occurs for all \(n\) as well. We may also observe that the first few terms of the \(1\)-instanton part (37) are consistent with a known connection formula for Painleve I (see e.g. [19, 20]). This supports our heuristic derivation of (33).
Before ending the section, let us make a remark on the relation to the results of [17, 23, 24]. These works also derive a connection formula for solutions of Painleve I through the isomonodromy property. Our result (33) describes the connection formula at the level of the \(\tau\)-function, and the main difference is the appearance of the prefactor given by the dilogarithm function. The factor disappears in the solution of Painleve I due to the logarithmic derivative (10). As we will see in the next section, we can relate the connection formula of Painleve I with the results of [11, 12] thanks to the prefactor. This is our new observation.
## 3. Resurgent structure and Stokes automorphisms in topological string theory
In this section we show that the main result from the previous section, (33), is a consequence of the conjectures of [12, 13] on the resurgent structure of the topological string.
### Resurgent structure of the topological string
In [12, 13] a general conjecture on the resurgent structure of the topological string on arbitrary Calabi-Yau manifolds was put forward. This conjecture is based on a trans-series solution of the HAE of [12], as proposed in [14, 14]. For this reason, it applies to the free energies obtained by doing topological recursion on curves of genus \(g\geq 1\), but it also applies to the free energies of compact Calabi-Yau threefolds, since both are perturbative solutions to the HAE. For simplicity, we will first present the results in the one-modulus, local case originally studied in [12]. The generalization to the multi-modulus, general case is straightforward and will be presented below.
The conjecture of [12, 13] is as follows. First, it asserts that Borel singularities of the topological string free energy are integral periods of the Calabi-Yau manifold, up to some overall normalization (in the local case, this was already conjectured in [10]). In the one-modulus, local case, this means that the singularity \(\mathcal{A}\) can be written as
\[\mathcal{A}=c\partial_{\nu}F_{0}+d\nu+d_{0}, \tag{38}\]
where \(c,d,d_{0}\) are integers, and there is an overall normalization factor which depends on the normalization of \(g_{s}\) (see [13] for a detailed discussion of normalizations). We will assume that \(\mathcal{A}\) is a primitive vector of the period lattice. Then, \(\ell\mathcal{A}\), with \(\ell\in\mathbb{Z}_{>0}\), is also a Borel singularity, and we are interested in the structure of these "multi-instanton" singularities. There are two different situations. When \(c=0\), the resurgent structure is of the Pasquetti-Schiappa form [15]. This means the following. Let us define
\[F_{\mathcal{A}}^{(\ell)}=\left(\frac{1}{\ell}\frac{\mathcal{A}}{g_{s}}+\frac {1}{\ell^{2}}\right)\mathrm{e}^{-\ell\mathcal{A}/g_{s}}. \tag{39}\]
Then, the alien derivatives of the free energy are given by
\[\dot{\Delta}_{\ell\mathcal{A}}F=aF_{\mathcal{A}}^{(\ell)},\qquad\ell\in \mathbb{Z}_{>0}, \tag{40}\]
where \(a\) is a Stokes constant. When \(c\neq 0\), one defines a modified genus zero free energy \(\tilde{F}_{0}\) by the equation
\[\mathcal{A}=c\partial_{\nu}\tilde{F}_{0}. \tag{41}\]
We note that \(\tilde{F}_{0}\) differs from the original \(F_{0}\) by a quadratic polynomial in \(\nu\). The total free energy appearing in the formulae for the trans-series involves \(\tilde{F}_{0}\), i.e. it is given by
\[F(\nu;g_{s})=g_{s}^{-2}\tilde{F}_{0}(\nu)+\sum_{g\geq 1}g_{s}^{2g-2}F_{g}(\nu). \tag{42}\]
Let us now define
\[F^{(\ell)}=\left(\frac{\hbar F_{-\ell}^{\prime}}{\ell}+\frac{1}{\ell^{2}} \right)\mathrm{e}^{F_{-\ell}-F}, \tag{43}\]
where we have introduced the rescaled coupling constant
\[\hbar=cg_{s} \tag{44}\]
and the notation
\[F_{k}(\nu;g_{s})=F(\nu+k\hbar;g_{s}). \tag{45}\]
The prime in (43) and other equations in this section denotes derivative w.r.t. \(\nu\). Then, the alien derivatives of \(F\) are given by
\[\dot{\Delta}_{\ell\mathcal{A}}F=aF^{(\ell)},\qquad\ell\in\mathbb{Z}_{>0}. \tag{46}\]
These are the main conjectures of [14, 1]. They recover and extend partial results along this direction in [13, 14, 15, 16]. We note that both in (46) and (40) it is assumed that the Stokes constant is independent of \(\ell\).
From the formulae for the alien derivatives one can compute the Stokes automorphism, through Ecalle's formula (see e.g. [17])
\[\mathfrak{S}_{\mathcal{C}}=\exp\left(\sum_{\ell=1}^{\infty}\mathcal{C}^{\ell} \dot{\Delta}_{\ell\mathcal{A}}\right). \tag{47}\]
We have introduced an additional formal parameter \(\mathcal{C}\) to keep track of \(\ell\). In view of the results of the previous section, we should consider the action of the Stokes automorphism on the partition function, \(Z\), and we have
\[\mathfrak{S}_{\mathcal{C}}(Z)=\exp\left(\mathfrak{S}_{\mathcal{C}}(F)\right). \tag{48}\]
We have again two cases to consider. The simplest one is when \(c=0\). In that case, the action of more than one alien derivative vanishes, and we simply have
\[\mathfrak{S}_{\mathcal{C}}(Z)=\exp\left(a\sum_{\ell=1}^{\infty}\mathcal{C}^{ \ell}F_{\mathcal{A}}^{(\ell)}\right)Z=\exp\left(a\sum_{\ell=1}^{\infty} \mathcal{C}^{\ell}\left(\frac{1}{\ell}\frac{\mathcal{A}}{g_{s}}+\frac{1}{\ell ^{2}}\right)\mathrm{e}^{-\ell\mathcal{A}/g_{s}}\right)Z, \tag{49}\]
which we can write as
\[\mathfrak{S}_{\mathcal{C}}(Z)=\exp\left(a\operatorname{Li}_{2}(\mathcal{C} \operatorname{e}^{-\mathcal{A}/g_{s}})-a\frac{\mathcal{A}}{g_{s}}\log(1- \mathcal{C}\operatorname{e}^{-\mathcal{A}/g_{s}})\right)Z. \tag{50}\]
This is the result obtained for the resolved conifold after using the Pasquetti-Schiappa form (see e.g. [1]). We note that the ingredients for our main formula (the dilogarithm and the logarithm) are already here, and they follow from the Pasquetti-Schiappa form of the multi-instanton amplitudes, which is in turn ultimately due to the universal behavior of the topological string at the conifold point [11].
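The resummation behind (50) is elementary and easy to confirm numerically; a short mpmath check follows (the values of \(\mathcal{A}/g_{s}\) and \(\mathcal{C}\) are illustrative, and the Stokes constant drops out of the comparison, so it is set to \(a=1\)):

```python
from mpmath import mp, exp, log, polylog, nsum, inf

mp.dps = 30
A_gs = mp.mpf(2)        # illustrative value of A/g_s
C = mp.mpf('0.3')       # formal parameter, |C e^{-A/g_s}| < 1
q = exp(-A_gs)

# exponent in (49), summed term by term...
lhs = nsum(lambda l: C**l * (A_gs/l + 1/l**2) * q**l, [1, inf])
# ... against the closed form in the exponent of (50)
rhs = polylog(2, C*q) - A_gs*log(1 - C*q)
print(lhs - rhs)        # ~ 0 to working precision
```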
The non-trivial case for the calculation of the Stokes automorphism occurs when \(c\neq 0\), since one has to act with multiple alien derivatives. Explicit calculations show that the result has the form
\[\mathfrak{S}_{\mathcal{C}}(Z)=\sum_{\ell\geq 0}\mathcal{C}^{\ell}\mathfrak{ 3}^{(\ell)}(\nu-\ell\hbar)Z(\nu-\ell\hbar;g_{s}), \tag{51}\]
where \(\mathfrak{Z}^{(\ell)}(\nu)\) can be computed explicitly and \(\mathfrak{Z}^{(0)}=1\). One finds, for the very first values,
\[\mathfrak{Z}^{(1)} =a\left(1+\hbar F^{\prime}\right), \tag{52}\] \[\mathfrak{Z}^{(2)} =\frac{a^{2}}{2}\left(1+2\hbar F^{\prime}+(\hbar F^{\prime})^{2}+ \hbar^{2}F^{\prime\prime}\right)+a\left(\frac{1}{4}+\frac{\hbar F^{\prime}}{2 }\right).\]
This is in accord with (34), (35). We will show in the next subsection that the structure (51) follows from the results of [13]. The arguments in section 2 suggest in addition the following explicit generating functional for the functions \(\mathfrak{Z}^{(\ell)}(\nu)\):
\[Z(\nu;g_{s})\sum_{\ell\geq 0}\mathcal{C}^{\ell}\mathfrak{Z}^{(\ell)}(\nu)= \mathrm{e}^{a\mathrm{Li}_{2}(\mathcal{C})}Z\left(\nu-\hbar a\log(1-\mathcal{C });g_{s}\right). \tag{53}\]
Indeed, it follows from (51) and (53) that
\[\mathfrak{S}_{\mathcal{C}}(Z)=\exp\left\{a\mathrm{Li}_{2}\left(\mathcal{C} \mathrm{e}^{-\hbar\partial}\right)-\hbar a\log\left(1-\mathcal{C}\mathrm{e}^{ -\hbar\partial}\right)\partial\right\}Z, \tag{54}\]
where \(\partial\) is the derivative w.r.t. \(\nu\). If we introduce now the discrete Fourier transform as in (8),
\[\tau(\nu,\rho;g_{s})=\sum_{k\in\mathbb{Z}}\mathrm{e}^{2\pi\mathrm{i}k\rho/g_{ s}}Z(\nu+kg_{s};g_{s}), \tag{55}\]
one easily finds
\[\mathfrak{S}_{\mathcal{C}}\tau(\nu,\rho;g_{s})=\mathrm{e}^{a\mathrm{Li}_{2}( \mathcal{C}\mathrm{e}^{2\pi\mathrm{i}c\rho/g_{s}})}\tau\left(\nu-\hbar a\log( 1-\mathcal{C}\mathrm{e}^{2\pi\mathrm{i}c\rho/g_{s}}),\rho;g_{s}\right), \tag{56}\]
where we have made a choice of normalizations in such a way that the coefficient \(c\) is an integer. The formula above has precisely the structure anticipated in (33) (the two formulae agree after setting \(\mathcal{C}=c=1\), \(a=1/(2\pi\mathrm{i})\)). In subsection 3.3 we will consider more general cases for the transformation of the dual partition function and make contact with the results of [1].
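Before turning to the derivation, the generating functional (53) can be checked against the explicit values (52) by expanding in \(\mathcal{C}\); a minimal sympy sketch, in which \(F^{\prime}\) and \(F^{\prime\prime}\) are represented by plain symbols:

```python
import sympy as sp

C, a, h, F1, F2 = sp.symbols('C a hbar F1 F2')  # F1, F2 stand for F', F'' at nu

Li2 = C + C**2/sp.Integer(4) + C**3/sp.Integer(9)  # Li_2(C) truncated
s = -a*h*sp.log(1 - C)                             # shift of nu in (53)
# F(nu+s) - F(nu) to second order in s; the F''' term only enters at O(C^3)
exponent = a*Li2 + s*F1 + s**2/2*F2

ser = sp.series(sp.exp(exponent), C, 0, 3).removeO().expand()
print(sp.simplify(ser.coeff(C, 1) - a*(1 + h*F1)))                    # 0
print(sp.simplify(ser.coeff(C, 2)
                  - (a**2/2*(1 + 2*h*F1 + (h*F1)**2 + h**2*F2)
                     + a*(sp.Rational(1, 4) + h*F1/2))))              # 0
```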
### A derivation of the formula for Stokes automorphisms
We will now show that the formulae (53) and (54) follow from the conjecture on the alien derivatives of the free energy, (46). To do this, we will rely on various results of [13, 14], which we summarize very briefly here. We refer to those papers for more details.
In the framework of the HAE, the perturbative free energies \(F_{g}\) are non-holomorphic but global functions on the moduli space. More precisely, they are polynomials in a non-holomorphic propagator \(S\), whose coefficients are functions of a complex coordinate \(z\) on the moduli space (not necessarily flat). The conventional free energies are holomorphic but can be defined in different frames, which are determined by a choice of \(A\) and \(B\) periods. The holomorphic free energies in different frames are obtained by considering the non-holomorphic free energies, and taking different holomorphic limits of the propagator. We define the \(\mathcal{A}\)-frame as the frame in which \(\mathcal{A}\) is the \(A\)-period, and the holomorphic propagator appropriate for the \(\mathcal{A}\)-frame will be denoted by \(S_{\mathcal{A}}\). The boundary conditions to solve the HAE are obtained by evaluating the holomorphic free energies in different frames and imposing particular behaviors at special points in moduli space, in particular the universal behaviour at the conifold point [14] and the resulting gap condition [10].
It was pointed out in [11, 12, 13, 14] that the resurgent structure of the topological string free energy can be obtained by considering trans-series solutions to
the HAE of [1]. The solutions corresponding to the \(\ell\)-th instanton sector are formal power series in \(g_{s}\), whose coefficients are also polynomials in \(S\) with \(z\)-dependent coefficients, and they also involve \(\mathcal{A}\) and its derivatives (the second derivative of \(\mathcal{A}\) can however be re-expressed in terms of \(S_{\mathcal{A}}\)). The \(\ell\)-instanton amplitude involves of course an exponential prefactor of the form \(\mathrm{e}^{-\ell\mathcal{A}/g_{s}}\). Explicit trans-series solutions were obtained in [11, 23] by using an operator formalism first suggested in [10, 1]. The main operator in this formalism is6
Footnote 6: For the factors of \(g_{s}\), we follow the conventions in [23].
\[\mathsf{D}=g_{s}\partial_{z}\mathcal{A}(S-S_{\mathcal{A}})\partial_{z}. \tag{57}\]
When evaluated in the holomorphic limit, this operator becomes \(\hbar\partial_{\nu}\), i.e. a derivative w.r.t. the flat coordinate \(\nu\).
We are now ready to prove the formula (53). The Stokes automorphism, acting on \(Z\), is a formal sum of multi-instanton sectors which has to solve the HAE for the partition function, and in addition it has to satisfy the following boundary condition: when evaluated in the \(\mathcal{A}\)-frame, it is equal to (50). This determines its form uniquely. A general \(n\)-th instanton solution to the HAE for the partition function was determined in [11] and has the structure:
\[Z^{(n)}=\mathfrak{A}_{n}\,\mathrm{e}^{-\Phi_{n}}Z, \tag{58}\]
where
\[\Phi_{n}=\frac{1}{\mathsf{D}}\left(1-\mathrm{e}^{-n\mathsf{D}}\right)G,\qquad G =\frac{1}{g_{s}}\mathcal{A}+\mathsf{D}(F-g_{s}^{-2}F_{0}). \tag{59}\]
Let us note that, in the holomorphic limit,
\[\Phi_{n}\to F(\nu;g_{s})-F(\nu-n\hbar;g_{s}). \tag{60}\]
The prefactor \(\mathfrak{A}_{n}\) is determined as follows. Let
\[X_{n}=\mathrm{e}^{-n\mathsf{D}}G. \tag{61}\]
Then, the \(\mathfrak{A}_{n}\) are arbitrary linear combinations of the objects \(\mathfrak{w}_{\ell}\), defined by
\[\mathfrak{w}_{\ell}=\sum_{\boldsymbol{k},\,d(\boldsymbol{k})=\ell}C_{ \boldsymbol{k}}\mathfrak{X}_{\boldsymbol{k}}. \tag{62}\]
In this formula, \(\boldsymbol{k}=(k_{1},k_{2},\cdots)\) is a vector of non-negative integer entries, \(d(\boldsymbol{k})\) is given by
\[d(\boldsymbol{k})=\sum_{j}jk_{j}, \tag{63}\]
the coefficients \(C_{\boldsymbol{k}}\) are of the form,
\[C_{\boldsymbol{k}}=\frac{\ell!}{\prod_{j\geq 1}k_{j}!(j!)^{k_{j}}}, \tag{64}\]
and
\[\mathfrak{X}_{\boldsymbol{k}}=X_{n}^{k_{1}}(\mathsf{D}X_{n})^{k_{2}}(\mathsf{ D}^{2}X_{n})^{k_{3}}\cdots. \tag{65}\]
We would like to emphasize that this structure is determined by requiring that (58) is a solution to the HAE. Let us also note that all the \(X\)'s appearing in \(\mathfrak{A}_{n}\) are shifted, i.e.
they are acted on by the automorphism \(\mathrm{e}^{-n\mathsf{D}}\), so it is convenient to introduce the "unshifted" prefactor \(\mathfrak{B}_{n}\) defined by
\[\mathfrak{A}_{n}=\mathrm{e}^{-n\mathsf{D}}\mathfrak{B}_{n}. \tag{66}\]
In \(\mathfrak{B}_{n}\), \(X_{n}\) is replaced by \(X=G\). The precise linear combination of \(\mathfrak{w}_{\ell}\) appearing in \(\mathfrak{A}_{n}\) is uniquely determined by the boundary condition, i.e. by its form in the \(\mathcal{A}\)-frame. Let us suppose that, when evaluated in the \(\mathcal{A}\)-frame, the holomorphic limit of \(Z^{(n)}\) is of the form
\[Z^{(n)}_{\mathcal{A}}=\left\{\sum_{k}c_{n,k}\left(\frac{\mathcal{A}}{g_{s}} \right)^{k}\right\}\mathrm{e}^{-n\mathcal{A}/g_{s}}Z_{\mathcal{A}}, \tag{67}\]
where the prefactor is an arbitrary polynomial in \(\mathcal{A}/g_{s}\). Then,
\[\mathfrak{B}_{n}=\sum_{k}c_{n,k}\mathfrak{w}_{k}. \tag{68}\]
Let us now apply these results to the calculation of the Stokes automorphism. As we explained before, the Stokes automorphism is the holomorphic limit of a formal linear combination of solutions of the form (58). Therefore, we must have
\[\mathfrak{S}_{\mathcal{C}}(Z)=\sum_{\ell\geq 0}\mathcal{C}^{\ell}\mathfrak{A} _{\ell}Z(\nu-\hbar\ell;g_{s}). \tag{69}\]
(To lighten the notation, we are using the same symbols for the non-holomorphic quantities appearing in the HAE, and for their holomorphic limits. Hopefully, which one is being used at a given moment is clear from the context.) This is precisely the structure of (51), which follows from the general results for multi-instantons. We deduce that
\[\mathfrak{A}_{\ell}=\mathfrak{Z}^{(\ell)}(\nu-\hbar\ell). \tag{70}\]
Both sides involve functions whose argument is shifted by \(-\hbar\ell\). In terms of the unshifted prefactors introduced in (66) we have
\[\mathfrak{B}_{\ell}=\mathfrak{Z}^{(\ell)}(\nu). \tag{71}\]
The boundary condition obtained from (50) is
\[\sum_{n\geq 0}\mathcal{C}^{n}\mathfrak{B}_{n,\mathcal{A}}=\exp\left(a\mathrm{ Li}_{2}(\mathcal{C})-a\frac{\mathcal{A}}{g_{s}}\log(1-\mathcal{C})\right)= \mathrm{e}^{a\mathrm{Li}_{2}(\mathcal{C})}\sum_{k\geq 0}\frac{1}{k!}\left(-a \log(1-\mathcal{C})\right)^{k}\left(\frac{\mathcal{A}}{g_{s}}\right)^{k}. \tag{72}\]
According to what we explained before, we can already write the general solution to the HAE, by simply replacing \((\mathcal{A}/g_{s})^{k}\) by \(\mathfrak{w}_{k}\):
\[\sum_{n\geq 0}\mathcal{C}^{n}\mathfrak{Z}^{(n)}=\mathrm{e}^{a\mathrm{Li}_{2}( \mathcal{C})}\sum_{k\geq 0}\frac{1}{k!}\left(-a\log(1-\mathcal{C})\right)^{k} \mathfrak{w}_{k}, \tag{73}\]
where we have already used (71). It was proven in [14] that
\[\Xi(\xi)=\sum_{\ell\geq 0}\frac{\xi^{\ell}}{\ell!}\mathfrak{w}_{\ell}=\exp \left(\sum_{j=1}^{\infty}\frac{\xi^{j}}{j!}\mathsf{D}^{j-1}X\right), \tag{74}\]
where \(\xi\) is an arbitrary complex parameter. We conclude that
\[\sum_{n\geq 0}\mathcal{C}^{n}\mathfrak{Z}^{(n)}=\mathrm{e}^{a\mathrm{Li}_{2}(\mathcal{C})}\exp\left(\frac{1}{\mathsf{D}}\left(\mathrm{e}^{-a\log(1-\mathcal{C})\mathsf{D}}-1\right)X\right). \tag{75}\]
In the holomorphic limit, we have that
\[X\to\hbar\partial_{\nu}F \tag{76}\]
where \(F\) is the total free energy, and we get in the end
\[\sum_{n\geq 0}\mathcal{C}^{n}\mathfrak{Z}^{(n)}=\exp\left(a\mathrm{Li}_{2}( \mathcal{C})+F(\nu-a\hbar\log(1-\mathcal{C});g_{s})-F(\nu;g_{s})\right). \tag{77}\]
This is precisely (53).
### Generalizations
The above results concern the one-modulus, local case. However, the generalization to arbitrary CY threefolds is straightforward, by using the results of [10] (to which we refer for further details). We will now write in some detail the more general formula for the Stokes automorphism. In the case of an arbitrary CY, the genus \(g\) free energies depend on the "big moduli space" flat coordinates \(X^{I}\) of the CY, where \(I=0,1,\cdots,n\). The Borel singularities or instanton actions are again integral periods, given by linear combinations,
\[\kappa^{-1}\mathcal{A}=c^{I}\frac{\partial F_{0}}{\partial X^{I}}+d_{I}X^{I}. \tag{78}\]
Here, we have introduced explicitly the normalization factor \(\kappa\) relating the action to the integral periods. If all \(c^{I}=0\), the multi-instanton amplitudes are again of the Pasquetti-Schiappa form (39) and the Stokes automorphism is given by (50). When not all \(c^{I}\) vanish, one defines a new genus zero free energy by
\[\mathcal{A}=\kappa c^{I}\frac{\partial\tilde{F}_{0}}{\partial X^{I}}, \tag{79}\]
as in the local case. It can be written as
\[\tilde{F}_{0}(X^{I})=F_{0}(X^{I})+\frac{1}{2}a_{IJ}X^{I}X^{J},\qquad a_{IJ}c^{ I}=d_{J}. \tag{80}\]
Of course, the final formulae will not depend on \(a_{IJ}\), but only on \(c^{I}\), \(d_{J}\). As shown in [10], one has to define a new genus one free energy
\[\tilde{F}_{1}=F_{1}-\left(\frac{\chi}{24}-1\right)\log X^{0}. \tag{81}\]
Such a redefinition has appeared before in e.g. eq. (2.77) of [1]. The total free energy relevant for the multi-instanton amplitudes will be denoted by \(\tilde{F}(X^{I};g_{s})\), and is given by
\[\begin{split}\widetilde{F}(X^{I};g_{s})&=g_{s}^{-2 }\tilde{F}_{0}(X^{I})+\tilde{F}_{1}(X^{I})+\sum_{g\geq 2}g_{s}^{2g-2}F_{g}(X^{ I})\\ &=\frac{1}{2g_{s}^{2}}a_{IJ}X^{I}X^{J}+F(X^{I};g_{s}).\end{split} \tag{82}\]
Then, one has the following generalization of (54),
\[\mathfrak{S}_{\mathcal{C}}(\tilde{Z})=\exp\left\{a\mathrm{Li}_{2}\left(\mathcal{C} \mathrm{e}^{-\kappa g_{s}c^{I}\partial_{I}}\right)-a\kappa g_{s}\log\left(1- \mathcal{C}\mathrm{e}^{-\kappa g_{s}c^{I}\partial_{I}}\right)c^{I}\partial_{I }\right\}\tilde{Z}, \tag{83}\]
where we have denoted \(\tilde{Z}=\mathrm{e}^{\tilde{F}}\), and
\[\partial_{I}=\frac{\partial}{\partial X^{I}}. \tag{84}\]
As we have explained before, the action of the Stokes automorphism has a simpler form when it acts on an appropriate dual partition function. We could obtain a direct generalization of (56) involving the redefined partition function \(\tilde{Z}\). However, in order to make contact with the results of [1], it is convenient to consider the dual partition function to the original \(Z\). This means that in (83) we have to treat separately the quadratic term in \(X^{I}\) appearing in the second line of (82). If we denote
\[Y^{I}=X^{I}-a\kappa g_{s}c^{I}\log(1-\mathcal{C}) \tag{85}\]
we find
\[\begin{split}&\tilde{Z}\left(Y^{I};g_{s}\right)\\ &=\exp\left\{\frac{\kappa^{2}}{2}a^{2}d^{I}c_{I}\log^{2}(1- \mathcal{C})-g_{s}^{-1}a\kappa\log(1-\mathcal{C})d_{I}X^{I}+\frac{1}{2g_{s}^{ 2}}a_{IJ}X^{I}X^{J}\right\}Z\left(Y^{I};g_{s}\right).\end{split} \tag{86}\]
We also note that, for \(n\in\mathbb{Z}\),
\[\mathrm{e}^{-n\kappa g_{s}c^{I}\partial_{I}}\exp\left\{\frac{1}{2g_{s}^{2}}a_ {IJ}X^{I}X^{J}\right\}=\exp\left\{-g_{s}^{-1}\kappa d_{I}X^{I}n+\frac{n^{2}}{2 }\kappa^{2}c^{I}d_{I}\right\}\exp\left\{\frac{1}{2g_{s}^{2}}a_{IJ}X^{I}X^{J} \right\}. \tag{87}\]
We have to be more concrete about the normalization factor \(\kappa\). It was found in [10] that, with the canonical normalization of \(g_{s}\), one has
\[\kappa^{2}=-2\pi\mathrm{i}, \tag{88}\]
and this means that
\[\mathrm{e}^{\frac{n^{2}}{2}\kappa^{2}c^{I}d_{I}}=\mathrm{e}^{-\pi\mathrm{i}nc ^{I}d_{I}}, \tag{89}\]
since \(n,d_{I},c^{I}\in\mathbb{Z}\). We will now put together \(c^{I}\), \(d_{I}\) in a symplectic vector \(\gamma=(c^{I},d_{I})\). Let us introduce
\[\boldsymbol{X}_{\gamma}=\sigma(\gamma)\exp\left[-\kappa g_{s}^{-1}\left(d_{I}X ^{I}-\rho_{I}c^{I}\right)\right], \tag{90}\]
where \(\rho_{I}\), \(I=0,1,\cdots,n\), are additional variables, and
\[\sigma(\gamma)=(-1)^{d_{I}c^{I}}. \tag{91}\]
Then, one finds that (83) is equivalent to
\[\begin{split}&\mathfrak{S}_{\mathcal{C}}\left(\sum_{\boldsymbol{ \ell}\in\mathbb{Z}^{n}}\mathrm{e}^{\kappa\rho_{I}\ell^{I}/g_{s}}Z\left(X^{I}+ \kappa\ell^{I}g_{s};g_{s}\right)\right)=\mathrm{e}^{a\mathrm{Li}_{2}\left( \mathcal{C}\boldsymbol{X}_{\gamma}\right)-\pi\mathrm{i}a^{2}d^{I}c_{I}\log^{ 2}(1-\mathcal{C}\boldsymbol{X}_{\gamma})}\\ &\times\sum_{\boldsymbol{\ell}\in\mathbb{Z}^{n}}\mathrm{e}^{-ag_{ s}^{-1}\kappa\log(1-\mathcal{C}\boldsymbol{X}_{\gamma})d_{I}(X^{I}+\kappa g_{s}\ell^{I})}Z \left(X^{I}+\kappa g_{s}\ell^{I}-ag_{s}\kappa c^{I}\log\left(1-\mathcal{C} \boldsymbol{X}_{\gamma}\right);g_{s}\right)\mathrm{e}^{\kappa\rho_{I}\ell^{I }/g_{s}}.\end{split} \tag{92}\]
The appropriate definition of the dual partition function in this general case is [1]:
\[\tau(X^{I},\rho_{I};g_{s})=\mathrm{e}^{\frac{1}{2g_{s}^{2}}X^{I}\rho_{I}}\sum_{ \boldsymbol{\ell}\in\mathbb{Z}^{n}}Z\left(X^{I}+\kappa\ell^{I}g_{s};g_{s}\right) \mathrm{e}^{\kappa\rho_{I}\ell^{I}/g_{s}}, \tag{93}\]
and the action of the Stokes automorphism is
\[\begin{split}&\mathfrak{S}\tau(X^{I},\rho_{I};g_{s})\\ &=\exp\left(aL_{\sigma(\gamma)}(\boldsymbol{X}_{\gamma})\right) \tau\left(X^{I}-ag_{s}\kappa c^{I}\log\left(1-\boldsymbol{X}_{\gamma}\right), \rho_{I}-ag_{s}\kappa d_{I}\log\left(1-\boldsymbol{X}_{\gamma}\right);g_{s} \right),\end{split} \tag{94}\]
where we have put \(\mathcal{C}=1\), and \(L_{\epsilon}(z)\) is the twisted Rogers dilogarithm, as in [1]:
\[L_{\epsilon}(z)=\mathrm{Li}_{2}(z)+\frac{1}{2}\log(\epsilon^{-1}z)\log(1-z). \tag{95}\]
It is easy to see that (94) agrees precisely with the wall-crossing formula (1.9) in [1], where their variables \(\xi^{I}\), \(\tilde{\xi}_{I}\) are related to ours by \((X^{I},\rho_{I})=-\kappa g_{s}(\xi^{I},\tilde{\xi}_{I})\)7. In addition, the agreement between the formulae requires the identification (4). One advantage of (94) is that, when \(c^{I}=0\), one also recovers the transformation formula (50) (this is easily seen by looking e.g. at the mode with \(\ell^{I}=0\)).
Footnote 7: [1] also give a wall-crossing formula for \(Z\), in terms of an integral transform, which should be equivalent to (83). We would like to thank Boris Pioline for many discussions on the relation between the approach of [1, 1] and the one presented in this paper.
There is another generalization of the formula (53) that one could consider. So far we have only included forward alien derivatives, and correspondingly purely instanton sectors. We can also consider alien derivatives in both the negative and the positive directions, which lead to amplitudes with both instantons and "negative instantons." Let us define
\[F^{(0|\ell)}(\nu;g_{s})=-F^{(\ell)}(\nu;-g_{s}),\qquad F^{(0|\ell)}_{\mathcal{ A}}(\nu;g_{s})=-F^{(\ell)}_{\mathcal{A}}(\nu;-g_{s}). \tag{96}\]
The basic alien derivative in the negative direction is simply,
\[\dot{\Delta}_{-\ell\mathcal{A}}F=aF^{(0|\ell)}\qquad\text{or}\qquad aF^{(0| \ell)}_{\mathcal{A}}, \tag{97}\]
depending on whether \(c\neq 0\) or \(c=0\). We can then consider the "mixed" Stokes automorphism:
\[\mathfrak{S}_{\mathcal{C}_{1},\mathcal{C}_{2}}=\exp\left(\sum_{\ell\geq 1} \mathcal{C}_{1}^{\ell}\dot{\Delta}_{\ell\mathcal{A}}+\mathcal{C}_{2}^{\ell} \dot{\Delta}_{-\ell\mathcal{A}}\right). \tag{98}\]
Acting on \(Z\), it has the structure
\[\mathfrak{S}_{\mathcal{C}_{1},\mathcal{C}_{2}}(Z)=\sum_{\ell_{1},\ell_{2}\geq 0}\mathcal{C}_{1}^{\ell_{1}}\mathcal{C}_{2}^{\ell_{2}}\mathfrak{Z}^{(\ell_{1}|\ell_{2})}(\nu-(\ell_{1}-\ell_{2})\hbar)Z\left(\nu-(\ell_{1}-\ell_{2})\hbar;g_{s}\right), \tag{99}\]
where \(\mathfrak{Z}^{(0|0)}=1\). In this case, the boundary condition follows from (97), (96) and (39). It is given by,
\[\begin{split}&\exp\left\{a\sum_{\ell=1}^{\infty}\left(\mathcal{C}_{1 }^{\ell}F^{(\ell)}_{\mathcal{A}}+\mathcal{C}_{2}^{\ell}F^{(0|\ell)}_{\mathcal{ A}}\right)\right\}\\ &=\exp\left\{a\left(\mathrm{Li}_{2}(\mathcal{C}_{1})-\mathrm{Li}_{ 2}(\mathcal{C}_{2})\right)-a\frac{\mathcal{A}}{g_{s}}\left(\log(1-\mathcal{C}_ {1})+\log(1-\mathcal{C}_{2})\right)\right\}.\end{split} \tag{100}\]
By using the results in [14], we can generalize (53) to
\[Z(\nu;g_{s})\sum_{\ell_{1},\ell_{2}\geq 0}\mathcal{C}_{1}^{\ell_{1}} \mathcal{C}_{2}^{\ell_{2}}\mathfrak{Z}^{(\ell_{1}|\ell_{2})}(\nu) \tag{101}\] \[\qquad=\mathrm{e}^{a(\mathrm{Li}_{2}(\mathcal{C}_{1})-\mathrm{Li} _{2}(\mathcal{C}_{2}))}Z\left(\nu-a\hbar\log(1-\mathcal{C}_{1})-a\hbar\log(1- \mathcal{C}_{2});g_{s}\right).\]
It is easy to write this formula in the form (83) or (94).
## Appendix A Definition of correlators and free energy in topological recursion
To apply the topological recursion, we regard (6) as a family of spectral curves in the sense of [1] (i.e., data consisting of a compact Riemann surface \(C\) and a pair \((x,y)\) of meromorphic functions on it), through the Weierstrass parametrization:
\[C=\mathbb{C}/\Lambda,\quad x(z)=\wp(z),\quad y(z)=\wp^{\prime}(z). \tag{102}\]
Here \(\wp(z)=\wp(z;g_{2},g_{3})\) is the Weierstrass \(\wp\)-function with \(g_{2}=-2t\) and \(g_{3}=-u\), which is doubly-periodic with periods \(\omega_{A}\) and \(\omega_{B}\) (we omit the \(t\) and \(\nu\) dependence for simplicity). \(\Lambda=\mathbb{Z}\omega_{A}+\mathbb{Z}\omega_{B}\) is the lattice generated by the periods of the elliptic curve (6).
Let \(z_{o}\in\mathbb{C}\) be a generic point, and \(\Omega\) be the quadrilateral with \(z_{o}\), \(z_{o}+\omega_{A}\), \(z_{o}+\omega_{B}\) and \(z_{o}+\omega_{A}+\omega_{B}\) on its vertices; that is, a fundamental domain of \(\mathbb{C}/\Lambda\). The ramification points (i.e., zeros of \(\mathrm{d}x\)) on \(\Omega\) are given by the half-periods \(r_{1}\equiv\omega_{A}/2\), \(r_{2}\equiv\omega_{B}/2\) and \(r_{3}\equiv(\omega_{A}+\omega_{B})/2\) modulo \(\Lambda\). These points correspond to the branch points \(e_{i}=x(r_{i})\) (\(i=1,2,3\)) of the elliptic curve which are defined by \(4x^{3}+2tx+u=4(x-e_{1})(x-e_{2})(x-e_{3})\). The covering involution \(y\mapsto-y\) is realized by \(z\mapsto\sigma(z)\equiv-z\) mod \(\Lambda\).
To run the topological recursion, we also need the Bergman bidifferential normalized along the chosen \(A\)-cycle. For our spectral curve, it is given by
\[B(z_{1},z_{2})=\left(\wp(z_{1}-z_{2})+\frac{\eta_{A}}{\omega_{A}}\right) \mathrm{d}z_{1}\mathrm{d}z_{2}. \tag{103}\]
Then, the Eynard-Orantin topological recursion recursively constructs a doubly-indexed sequence of meromorphic multi-differentials \(\omega_{g,n}(z_{1},\ldots,z_{n})\) (\(g\geq 0\), \(n\geq 1\)) on the spectral curve, called _correlators_, as follows:
\[\omega_{0,1}(z_{1})=y(z_{1})\,\mathrm{d}x(z_{1}),\quad\omega_{0,2}(z_{1},z_{2 })=B(z_{1},z_{2}), \tag{104}\]
and for \(2g-2+n\geq 1\), we define
\[\omega_{g,n}(z_{1},\ldots,z_{n})=\sum_{j=1}^{3}\underset{z=r_{j}}{\mathrm{Res }}\ K(z_{1},z)R_{g,n}(z,z_{2},\ldots,z_{n}), \tag{105}\]
where
\[R_{g,n}(z,z_{2},\ldots,z_{n})=\omega_{g-1,n+1}(z,\sigma(z),z_{2},\ldots,z_{n} )+\sum_{\begin{subarray}{c}g_{1}+g_{2}=g\\ I\sqcup J=\{2,\ldots,n\}\end{subarray}}^{\prime}\omega_{g_{1},|I|+1}(z,z_{I} )\,\omega_{g_{2},|J|+1}(\sigma(z),z_{J}). \tag{106}\]
Here, the _recursion kernel_\(K(z_{1},z)\) is given by
\[K(z_{1},z)=\frac{1}{\left(y(z)-y(\sigma(z))\right)\mathrm{d}x(z)}\,\int_{w=0}^{w= z}\omega_{0,2}(z_{1},w). \tag{107}\]
We use the convention \(z_{I}=(z_{i_{1}},\ldots,z_{i_{k}})\) for tuples of variables when \(I=\{i_{1},\ldots,i_{k}\}\), and the prime on the r.h.s. of (106) means that only indices satisfying \((g_{i},I_{i})\neq(0,\emptyset)\) are included in the summation (i.e., \(\omega_{0,1}\) does not appear).
Here we also recall the definition of the genus \(g\) free energy \(F_{g}=F_{g}(t,\nu)\) introduced in [1]. The genus \(0\) free energy \(F_{0}\) is defined in [1, SS4.2.2]. In our case, it is given by
\[F_{0}=\frac{t\,u}{5}+\frac{\nu}{2}\,\oint_{B}y\mathrm{d}x. \tag{108}\]
The genus \(1\) free energy \(F_{1}\) is also defined in [1, SS4.2.3] up to a multiplicative constant. We employ
\[F_{1}=-\frac{1}{12}\log\!\left(\omega_{A}^{6}\,\mathcal{D}\right) \tag{109}\]
as the definition. Here
\[\mathcal{D}=-8t^{3}-27u(t,\nu)^{2}\quad(=16(e_{1}-e_{2})^{2}(e_{2}-e_{3})^{2} (e_{3}-e_{1})^{2}) \tag{110}\]
is the discriminant of (6). Finally, we define the genus \(g\) free energy \(F_{g}\) for \(g\geq 2\) by
\[F_{g}=\frac{1}{2-2g}\,\sum_{j=1}^{3}\operatorname*{Res}_{z=r_{j}}\Phi(z)\, \omega_{g,1}(z), \tag{111}\]
where \(\Phi\) is any primitive of \(\omega_{0,1}\). See [1] for properties of \(\omega_{g,n}\) and \(F_{g}\).
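As a final sanity check (not in the original text), the equality of the two expressions for the discriminant \(\mathcal{D}\) in (110) follows from matching coefficients in \(4x^{3}+2tx+u=4(x-e_{1})(x-e_{2})(x-e_{3})\); a short sympy verification:

```python
import sympy as sp

t, u = sp.symbols('t u')
e1, e2, e3 = sp.symbols('e1 e2 e3')

# matching coefficients forces e1 + e2 + e3 = 0,
# t = 2(e1 e2 + e2 e3 + e3 e1) and u = -4 e1 e2 e3
lhs = (-8*t**3 - 27*u**2).subs({t: 2*(e1*e2 + e2*e3 + e3*e1),
                                u: -4*e1*e2*e3})
rhs = 16*(e1 - e2)**2*(e2 - e3)**2*(e3 - e1)**2
print(sp.simplify((lhs - rhs).subs(e3, -e1 - e2)))   # 0
```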
|
2307.12293 | Dissipative learning of a quantum classifier | The expectation that quantum computation might bring performance advantages
in machine learning algorithms motivates the work on the quantum versions of
artificial neural networks. In this study, we analyze the learning dynamics of
a quantum classifier model that works as an open quantum system which is an
alternative to the standard quantum circuit model. According to the obtained
results, the model can be successfully trained with a gradient descent (GD)
based algorithm. The fact that these optimization processes have been obtained
with continuous dynamics, shows promise for the development of a differentiable
activation function for the classifier model. | Ufuk Korkmaz, Deniz Türkpençe | 2023-07-23T11:08:51Z | http://arxiv.org/abs/2307.12293v1 | # Dissipative learning of a quantum classifier
###### Abstract
The expectation that quantum computation might bring performance advantages in machine learning algorithms motivates the work on quantum versions of artificial neural networks. In this study, we analyze the learning dynamics of a quantum classifier model that works as an open quantum system, which is an alternative to the standard quantum circuit model. According to the obtained results, the model can be successfully trained with a gradient descent (GD) based algorithm. The fact that these optimization processes have been obtained with continuous dynamics shows promise for the development of a differentiable activation function for the classifier model.
## I Introduction
The theory of learning in artificial neural networks is founded on mathematical models adapted to the working principle of the human brain, introduced by McCulloch, Pitts and Rosenblatt [1; 2]. Particularly in the new millennium, as the computing capacity of computers has increased, deep learning methods have come to outperform other methods in multi-layer artificial neural networks, bringing many useful applications [3; 4; 5].
Quantum computation (QC) brings exciting advantages to computer science and all relevant computational sciences [6; 7]. Although much effort has been devoted to quantum versions of neural networks (QNN), there is no broadly accepted QNN, even at the single neuron level [8; 9; 10; 11; 12; 13]. In addition, quantum noise severely limits the performance of gate-based quantum network proposals. Therefore, hardware-efficient solutions have begun to emerge [14; 15].
In past work, we proposed a dissipative quantum classifier as a basic unit of QNN hardware, based on the repeated interactions protocol [16; 17; 18; 19]. Dissipation-based quantum computing has been shown to be equivalent to the standard QC model [20]. In the protocol, identical qubit sequences with pure initial quantum states successively interact with a target qubit. Each repeated interaction is unitary in the weak coupling limit, taking place over a vanishingly small time interval. However, the quantum state of the target qubit is obtained by calculating the reduced dynamics, so that the global evolution is a non-unitary process. We dub these identical qubit sequences a quantum information reservoir [21; 22]. As a result of the repeated interactions, the target qubit reaches a steady state in which the diagonal entries of its density matrix become identical to those of the information reservoir units. This process is known as quantum homogenization [23].
In this task, some amount of information is transferred from the reservoir to the target qubit at the steady state. This can be interpreted as the quantum reservoirs acting as quantum channels that transfer information to open systems [24; 25]. All these observations become relevant for open quantum neuron design when the target qubit is connected to more than one information reservoir with arbitrary coupling strengths. In this case, the target qubit reaches a non-trivial steady state depending on the coupling coefficients (weights) and the input data parameters. We have shown numerically and analytically that this model is an open quantum classifier that returns a binary decision at the steady state when measured with Pauli observables [16; 17].
In the current work, we study this model in the framework of supervised learning schemes by adopting a gradient descent-based model. To this end, we derive a cost function, setting different parameters of the system as variables, and examine the suitability of the model for learning tasks. We observe that the cost function can be smoothly minimized for all relevant parameters with appropriate differentiability.
## II Model and system dynamics
### Classic model
Binary classification is a subtask of machine learning (ML), covering ANNs alongside other models. In the artificial neural network model in particular, the perceptron is referred to as the basic unit of ANN computing that performs binary classification tasks. Technically speaking, a perceptron performs a binary decision \(z\) with binary labels \(\{0,1\}\) depending on the input. In the model, the input is formulated as \(\varphi_{in}=\mathbf{x}^{\mathsf{T}}\mathbf{w}\), where \(\mathbf{x}=[\mathbf{x}_{1},\ldots\mathbf{x}_{\mathbf{N}}]^{\mathsf{T}}\) defines the input feature instances and \(\mathbf{w}=[\mathbf{w}_{1},\ldots\mathbf{w}_{\mathbf{N}}]^{\mathsf{T}}\) is the corresponding weight vector.
The binary output is modulated by, in general, a non-linear function \(f(.)\), where \(z=f(\varphi_{in})\). The decision rule reads \(z=0\) if \(f(\varphi_{in})\geq 0\) and \(z=1\) otherwise. The choice of binary labels is arbitrary and can be defined variously depending on the expressivity requirements. Note that, in principle, a perceptron with identity activation can still achieve linear classification. However, non-linear activation functions are desirable for multi-layer ANN learning tasks. Although our model, in principle, is a
quantum perceptron with identity activation, we prefer to present our model and related learning tasks as "quantum classifier learning".
Supervised learning can be defined as a mapping from a feature space to a binary label set
\[\mathcal{X},\mathcal{Y}\rightarrow\{0,1\} \tag{1}\]
where \(\mathcal{X}\) and \(\mathcal{Y}\) are, respectively, the input and output data of a given training set \(\mathcal{S}=(\mathcal{X},\mathcal{Y})\). In this scheme, the \(\mathcal{Y}\) part of the training set is the desired output, and the cost function \(C\) quantifies how close the actual output is to the desired output.
In analogy with the least squares method, the cost function expression reads
\[C=\frac{1}{2}(\mathbf{Y}-\mathbf{A})^{\mathbf{2}} \tag{2}\]
where \(\mathbf{A}\) is the actual and \(\mathbf{Y}\) is the desired output. In general, the weight instances are updated
\[\mathbf{w_{k+1}}=\mathbf{w_{k}}+\delta\mathbf{w_{k}} \tag{3}\]
iteratively by back-propagation. However, any desired parameter can be adjusted to minimize the cost. Among different procedures, we adopt a gradient-descent based method for the training task. In this method, the change in the parameter reads
\[\delta w_{k}=-\eta\frac{\partial C}{\partial w_{k}} \tag{4}\]
where \(\eta\) is a non-negative number, the so-called learning rate, characterizing the speed of the learning task. As the name of the method implies, the partial derivative expresses the change of the parameter to be adjusted in the direction of steepest descent.
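As an illustration (not part of the original model description), the update rules (2)-(4) can be put together in a few lines of Python. The synthetic data below are hypothetical, and the binary labels \(\{0,1\}\) are encoded as targets \(\pm 1\) so that the sign-based decision rule given earlier applies directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy training set S = (X, Y): two separable clusters (illustrative data)
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(+1, 0.3, (50, 2))])
Y = np.hstack([np.zeros(50), np.ones(50)])   # binary labels {0, 1}
T = np.where(Y == 0, 1.0, -1.0)              # targets: label 0 -> +1, label 1 -> -1

w = rng.normal(0, 0.1, 2)                    # weight vector
eta = 0.1                                    # learning rate

for epoch in range(200):
    A = X @ w                                # phi_in = x^T w, identity activation
    grad = (A - T) @ X / len(T)              # dC/dw for C = (1/2)<(T - A)^2>
    w -= eta * grad                          # eq. (4)

z = np.where(X @ w >= 0, 0, 1)               # decision rule: f >= 0 -> z = 0
print("cost:", 0.5 * np.mean((T - X @ w)**2), "accuracy:", (z == Y).mean())
```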
### Quantum dissipative dynamics
In this subsection, we discuss the open system dynamics with preliminary definitions. As we have pointed out in the previous sections, the model operates by a dissipative protocol. The input data expressed classically can be rephrased as
\[\varphi_{in}=\mathbf{x^{T}w}=\sum_{\mathbf{i}}\mathbf{w_{i}x_{i}}. \tag{5}\]
i.e., the weighted summation of the input features. In our view, the quantum equivalent of the classic description above reads
\[\Lambda_{t}[\varrho_{0}]=\sum_{i}P_{i}\Phi_{t}^{(i)}[\varrho_{0}] \tag{6}\]
where \(\Phi_{t}^{(i)}\) is a completely positive trace preserving (CPTP) quantum dynamical map acting on the target qubit \(\varrho_{0}\), \(P_{i}\) is the probability of the map interacting with the \(i\)th information reservoir. The subscript \(t\) stands for the time dependence of the maps generated by a physical process
\[\Phi_{t}^{(i)}[\varrho_{0}]=\mathrm{Tr}_{\mathcal{R}_{i}}\{U_{t}(\varrho_{0} \otimes\varrho_{\mathcal{R}_{i}})U_{t}^{\dagger}\} \tag{7}\]
with a unitary propagator \(U_{t}\) acting on both the target qubit and the reservoir. Here, \(\rho_{\mathcal{R}_{i}}\) is the \(i\)th reservoir quantum state and \(\mathrm{Tr}_{\mathcal{R}_{i}}\) is the partial trace over the \(i\)th reservoir.
The quantum reservoirs provide initial quantum data in pure states. Each reservoir is composed of non-correlated, non-interacting two-level quantum systems (subunits) defined by
\[\rho_{\mathcal{R}_{i}}=\bigotimes_{k=1}^{n}\rho_{k}(\theta_{i},\phi_{i}). \tag{8}\]
i.e., the tensor product of a finite number \(n\) of subunits. As each subunit is in a pure quantum state, they can initially be prepared with identical Bloch parameters \(\rho_{k}(\theta_{i},\phi_{i})\). This parametrization makes the model a dissipative equivalent of parametrized quantum circuits.
### Quantum collision model and the quantum classifier
As mentioned above, the dynamical process of the introduced model relies on a standard quantum collisional model [23; 26; 27]. In our proposal, the target qubit undergoes a collisional dissipative process under multiple, independent information reservoirs with arbitrary couplings. In this scheme, the steady state readout of the target qubit by Pauli observables gives the binary classification output. The dynamical process in the presence of the \(i\)th information reservoir reads
\[\Phi_{n\tau}^{(i)}= \mathrm{Tr}_{n}\big{[}\mathcal{U}_{0i_{n}}\dots\mathrm{Tr}_{1} \big{[}\mathcal{U}_{0i_{1}}\left(\varrho_{0}\otimes\rho_{\mathcal{R}_{i_{1}}} \right)\mathcal{U}_{0i_{1}}^{\dagger}]\otimes\dots\] \[\dots\otimes\rho_{\mathcal{R}_{i_{n}}}\mathcal{U}_{0i_{n}}^{ \dagger}\big{]}. \tag{9}\]
Here, \(n\tau\) is the elapsed time of the dynamical map after \(n\) collisions and \(\mathcal{U}_{0i_{k}}=\exp[-\mathrm{i}\mathcal{H}_{0i}^{k}\tau]\) is the unitary propagator. Initially, the system-plus-reservoir quantum state is prepared in the tensor product state \(\varrho(0)=\varrho_{0}(0)\otimes\rho_{\mathcal{R}_{i}}\). Note that the time dependence is only relevant for the target qubit, and after every collision, the reservoir states are reset to their initial state.
On the other hand, the Hamiltonian governing the system-plus-reservoir dynamics is written as \(\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{int}\), where
\[\mathcal{H}_{0}=\frac{\hbar\omega_{0}}{2}\sigma_{0}^{z}+\frac{\hbar\omega_{i} }{2}\sum_{k=1}^{n}\sigma_{k_{i}}^{z} \tag{10}\]
is the free part and
\[\mathcal{H}_{int}=\hbar\sum_{k=1}^{n}g_{i}(\sigma_{0}^{+}\sigma_{k_{i}}^{-}+ \mathrm{H.c.}), \tag{11}\]
is the interaction part. Here, \(\sigma^{z}\), \(\sigma^{+}\) and \(\sigma^{-}\) denote the Pauli-\(z\), raising and lowering operators, respectively. Planck's constant divided by \(2\pi\) is set to \(\hbar=1\) throughout the calculations. Notably, the coupling coefficients satisfy \(g_{i}\ll\omega_{0}\), i.e., they lie within the weak coupling regime, where cross-talk between the reservoirs is avoided. Moreover, the coefficients are proportional to the probabilities, \(g_{i}\propto P_{i}\) in eq. (6), as the quantum equivalent of the weights in the classic model.
Following the recipe above, the steady state of the target qubit in the presence of \(N\) distinct reservoirs is reported as the solution of the collisional master equation [17]
\[\varrho_{0}^{\text{ss}}= \frac{1}{\sum_{i}^{N}g_{i}^{2}}\sum_{i=1}^{N}g_{i}^{2}\Big{(} \langle\sigma_{i}^{+}\sigma_{i}^{-}\rangle\ket{e}\bra{e}+\langle\sigma_{i}^{- }\sigma_{i}^{+}\rangle\ket{g}\bra{g}\] \[+i\gamma_{1}^{-}\left(\langle\sigma_{i}^{+}\sigma_{i}^{-}\rangle- \langle\sigma_{i}^{-}\sigma_{i}^{+}\rangle\right)\ket{e}\bra{g}+\text{H.c.} \Big{)} \tag{12}\]
where \(\ket{e}\) and \(\ket{g}\) are the computational basis states and \(\gamma_{1}^{-}=r\tau\sum_{i=1}^{N}g_{i}\langle\sigma_{i}^{-}\rangle\), \(r\) being the interaction rate of the master equation. The binary decision at the steady state is read out via the Pauli-\(z\) observable acting on the target qubit
\[\langle\sigma_{z}^{0}\rangle^{ss}=\frac{1}{g_{\sum}}\sum_{i}^{N}g_{i}^{2} \langle\sigma_{z}\rangle_{i} \tag{13}\]
as the classification identifier, where \(g_{\sum}=\sum_{i}g_{i}^{2}\). Based on eqs. (12) and (13), the binary classification rule finally reads
\[\text{Decision}:\begin{cases}0,&\langle\sigma_{z}^{0}\rangle^{ss}=\frac{1}{g_{\sum}}\sum_{i}^{N}g_{i}^{2}\langle\sigma_{z}\rangle_{i}\geq 0\\ 1,&\text{else}\end{cases} \tag{14}\]
where \(\langle\sigma_{z}\rangle_{i}\) is the magnetization of the \(i\)th information reservoir. The steady-state binary decision expressed by the Pauli-\(z\) observable is a sum of the input quantum data weighted by the respective couplings, in line with the analogous expression in the classical model.
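The readout itself reduces to a few lines; a minimal sketch of eqs. (13)-(14) with hypothetical couplings and magnetizations reads:

```python
# Minimal sketch of eqs. (13)-(14): the steady-state magnetization is a
# coupling-weighted average of the reservoir magnetizations.
import numpy as np

def classify(g, m):
    # g: couplings g_i; m: reservoir magnetizations <sigma_z>_i = cos(theta_i)
    g, m = np.asarray(g), np.asarray(m)
    sz_ss = np.sum(g**2 * m) / np.sum(g**2)   # eq. (13)
    return 0 if sz_ss >= 0 else 1             # eq. (14)

print(classify(g=[0.010, 0.005], m=[+1.0, -1.0]))  # 0: the 'up' reservoir dominates
```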
Figure 1 depicts a numerical verification of the introduced model as a benchmark calculation. Here, the target qubit is in contact with two different information reservoirs with couplings \(g_{1}\) and \(g_{2}\). The quantum states of the reservoirs are \(\ket{\Psi(\theta=0,\phi=0)}\equiv\ket{\uparrow}\) and \(\ket{\Psi(\theta=\pi,\phi=0)}\equiv\ket{\downarrow}\), respectively. The dots on the curve represent the steady-state magnetization of the target qubit corresponding to the coupling values \(g_{1},g_{2}\). These values are modulated as \(g_{1}=g/2-\Delta g\) and \(g_{2}=g/2+\Delta g\) with \(-0.5g<\Delta g<0.5g\). For instance, \(\Delta g=-0.5g\), i.e., \(g_{1}=g\) and \(g_{2}=0\), means that the target qubit is in contact only with the first reservoir, whose quantum state is \(\ket{\uparrow}\), and vice versa. In these limits, the steady-state magnetization reaches \(\langle\sigma_{z}\rangle=1,-1\), and it takes intermediate values for \(-0.5g<\Delta g<0.5g\), as expected. In the numerical simulation, we used realistic parameters of superconducting circuits in the weak coupling range [28]: transmon qubits operate at a resonator frequency \(\omega_{r}\sim 1-10\) GHz with an effective qubit-qubit coupling \(g\sim 1-100\) MHz.
As described above, the relevant parameters are the Bloch parameters \(\{\theta,\phi\}\) characterizing the input quantum data. Looking more closely at eqs. (12) and (13), one can see the signatures of the input data at the steady state as expectation values. The expectation values are related to the Bloch parameters as
\[\rho_{\mathcal{R}_{i}} =\begin{bmatrix}\frac{1+\cos\theta_{i}}{2}&\frac{e^{-i\phi_{i}}}{ 2}\sin\theta_{i}\\ \frac{e^{i\phi_{i}}}{2}\sin\theta_{i}&\frac{1-\cos\theta_{i}}{2}\end{bmatrix}\] \[:=\begin{bmatrix}\langle\sigma_{i}^{+}\sigma_{i}^{-}\rangle& \langle\sigma_{i}^{-}\rangle\\ \langle\sigma_{i}^{+}\rangle&\langle\sigma_{i}^{-}\sigma_{i}^{+}\rangle\end{bmatrix} \tag{15}\]
where \(\rho_{\mathcal{R}_{i}}\) is the quantum state of the \(i\)th reservoir. Therefore, in our model, the Pauli-\(z\) and Pauli-\(y\) observables can be chosen to extract the relevant information on the \(\theta\) and \(\phi\) parameters, respectively, at the steady state. The expectation value of the Pauli-\(y\) observable of the target qubit at the steady state reads
\[\langle\sigma_{y}^{0}\rangle^{ss}=\frac{-(\gamma_{1}^{-}+\gamma_{2}^{+})}{g_{ \sum}}\sum_{i}^{N}g_{i}^{2}\langle\sigma_{z}\rangle_{i} \tag{16}\]
where \(\gamma_{1}^{-}=r\tau\sum_{i=1}^{N}g_{i}\langle\sigma_{i}^{-}\rangle\) and \(\gamma_{2}^{+}=r\tau\sum_{i=1}^{N}g_{i}\langle\sigma_{i}^{+}\rangle\). Considering eqs. (13) and (16) together, one concludes that the relevant information on the Bloch parameters can be extracted at the steady state of the target qubit through Pauli observables.
Figure 1: (Colour online.) The steady-state magnetization of the target qubit depending on the variation of the couplings to the reservoirs. The coupling rates vary as \(g_{1}=g/2-\Delta g\), \(g_{2}=g/2+\Delta g\), where \(\Delta g\) represents a fraction of \(g\) with \(g=0.01\). The probe qubit is prepared initially in the \(\ket{+}=(\ket{e}+\ket{g})/\sqrt{2}\) state and interacts collisionally with the reservoir units \(\ket{\Psi(\theta,\phi)}\) with \(\theta=0\), \(\phi=0\) and \(\theta=\pi\), \(\phi=0\). The target qubit–reservoir interaction time \(\tau=3\) and the coupling coefficient \(g\) are dimensionless and scaled by \(\omega_{r}\).
## III Learning of the model
In this section, we explore the gradient descent-based learning of the introduced open classifier model. First, we define the cost function to be optimized as
\[C=\frac{1}{2}(\langle\sigma_{\lambda}^{0}\rangle_{des}^{ss}-\langle\sigma_{ \lambda}^{0}\rangle_{act}^{ss})^{2}, \tag{17}\]
where \(\lambda=\{y,z\}\) denotes the Pauli matrix chosen for the specific parameter. Here, \(\langle\sigma_{\lambda}^{0}\rangle_{des}^{ss}\) and \(\langle\sigma_{\lambda}^{0}\rangle_{act}^{ss}\) are the desired and actual steady-state expectation values of the target qubit for the Pauli observable \(\sigma_{\lambda}\). The definition of the cost function above is similar to that of [29]; note, however, that in our task the expectation values are obtained in steady states.
Following the classic definitions, we rephrase eqs (4) and (5) as
\[\nu_{k+1}=\nu_{k}+\delta\nu_{k} \tag{18}\] \[\delta\nu_{k}=-\eta\frac{\partial C}{\partial\nu_{k}} \tag{19}\]
where \(\nu=\{g,\theta,\phi\}\). Therefore, the relevant parameters are the Bloch parameters and the couplings of the target qubit to the reservoirs. We first derive the cost function gradient for \(g\), corresponding to the weights in the classical model (see Appendix A). We also examine the learning tasks for the Bloch parameters \(\theta\) and \(\phi\) at fixed values of \(g\).
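As an illustration of this scheme (a sketch with hypothetical reservoir magnetizations, target value, and learning rate; not the code behind the figures), eqs. (18)-(19) can be iterated for \(\nu=g\) in the two-reservoir case using the analytic gradient of Appendix A:

```python
# Minimal sketch of gradient descent on the couplings, eqs. (18)-(19) with
# nu = g, using the analytic gradients of eqs. (A2)-(A4).
import numpy as np

m = np.array([1.0, -1.0])     # reservoir magnetizations <sigma_z>_i
target = 0.4                  # desired steady-state magnetization
g = np.array([0.005, 0.005])  # initial couplings
eta = 1e-6                    # learning rate chosen here for stable convergence

for k in range(500):
    A = np.sum(g**2 * m) / np.sum(g**2)       # eq. (A3)
    dA_dg = 2 * g * (m - A) / np.sum(g**2)    # eq. (A4), simplified
    dC_dg = (target - A) * (-dA_dg)           # eq. (A2)
    g = g - eta * dC_dg                       # eqs. (18)-(19)

print(A)  # converges towards the target value 0.4
```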
Figure 1(a) depicts the cost minimization against the episodes (the index \(k\)) for \(\nu=g\) in eqs. (18) and (19), for different values of \(\eta\), when the target qubit is coupled to two reservoirs. That is, we examine the model using different learning rates (i.e., different optimization speeds). We observe that the optimization is smooth for all the \(\eta\) values considered. We also observe that, for this problem, the largest feasible learning rate is one order of magnitude smaller than the coupling rate.
Figures 2(a) and 2(b) present the same minimization problem from the viewpoint of the surface topology of the cost function. In the case of a single target qubit coupled to two information reservoirs, the cost function surface appears trivial to optimize, without local plateaus. The success of the optimization therefore depends on the choice of the learning rate. In figure 2(a), the model successfully performs the optimization task with an appropriate learning rate. However, an unstable procedure occurs when a very large value of \(\eta\) is selected, as in figure 2(b). Although in the figure the cost minimization appears to succeed, in most similar problems the iterate leaves the cost function surface, a behaviour known as 'overshooting' the minimum.
Conversely, extremely small learning rates lead to getting stuck in local minima. Adaptive schemes, in which the learning rate may take different values during the process, have therefore been developed for GD-based methods [30]. We find that, for training the open classifier model, a value of \(\eta\) within one order of magnitude of the coupling rate in the weak coupling regime is a reasonable choice.
As pointed out above, we also examine the case where the couplings to the reservoirs are fixed. In this case, the input data parameters are assumed to be adjustable to obtain the desired output. Regarding figure 1(b), we again observe smooth convergence, with a learning rate three orders of magnitude larger than the coupling coefficient. The corresponding cost function is depicted in figure 3(a). In this case, the Bloch parameter \(\theta\) is iterated to minimize the cost; see eqs. (A2) and (A4) for the recipe to obtain the cost function for the \(\theta\)-parameter-dependent iteration. The Pauli-\(z\) observable is again the relevant one in these calculations.
Next, we consider the training task for the Bloch parameter \(\phi\). The Pauli-\(y\) observable is chosen to extract the \(\phi\)-parameter-dependent data. This task requires special attention, since the proposed classifier operates as an open quantum system driven by non-equilibrium reservoirs. Steady states are generally mixed quantum states in which quantum coherent information is irreversibly lost. However, some non-vanishing quantum coherence may survive when the system is driven by non-equilibrium environments [31; 32]. In our case, eq. (12) demonstrates that the target qubit retains quantum coherence at the steady state, since the off-diagonal part of the density matrix is non-zero. In addition, eq. (16) shows that the steady coherence is weighted by the coupling coefficients and can be parametrized by \(\phi\) through the Pauli-\(y\) observable.
Figure 3(b) shows the cost minimization for different learning rates. The cost function values, on the \(\times 10^{-5}\) scale, reveal a small coherence at the steady state compared with the diagonal elements of the target qubit density matrix. In addition, the \(\eta\) value for the \(\phi\)-parameter optimization is the largest, compared with the optimizations of the \(g\) and \(\theta\) parameters. Finally, figure 5 presents the cost function minimization for the update of the Bloch parameters \(\phi_{1}\) and \(\phi_{2}\). The 3D surface of the cost function is similar to that of figure 3(a), differing only in the value of the learning rate.
Evaluating all the results together, we see that the proposed classifier is suitable for GD-based training schemes. Moreover, the open system dynamics allows for smooth convergence in the learning tasks, which makes the model favourable for multi-layer feedforward networks once an activation function is introduced. Since binary classification is a task in itself in ML, the proposed model is a candidate trainable model for ML processes even when considered alone.
## IV Conclusions
In this study, we examined the training of a classifier model based on an open quantum system, in different parameter spaces, with a GD-based method. Using our analytical results, we derived cost functions for three different parameters and performed calculations that minimize these cost functions with the gradient descent algorithm. Obtaining the classification response of the model in a stationary state renders the system dynamics effectively continuous. As a result, we achieved optimization of the model, i.e., its training, with smooth, continuous results. Since the training processes are continuous, and hence differentiable, we conclude that the proposed model is suitable for developing an activation function and for use in larger quantum networks. In addition, although the classification result is read out in a stationary state, training in all Bloch parameter spaces, as well as in the coupling coefficients, becomes possible thanks to the steady quantum coherence.
Our study revealed that the derived cost functions are trained at different learning rates for the corresponding parameters. In our model, the cost functions were successfully minimized with appropriate learning rates.
## Acknowledgment
The authors acknowledge support from the Scientific and Technological Research Council of Turkey (TUBITAK, Grant No. 120F353). The authors also wish to extend special thanks to the Cognitive Systems Lab in the Department of Electrical Engineering for providing the atmosphere for motivational and stimulating discussions.
## Appendix A Derivation of the cost function
In this appendix, we present the mathematical justifications for the numerical calculations in the text. First, we substitute \(\nu=g\) in eq. (19),
\[\delta g_{i}=-\eta\frac{\partial C}{\partial g_{i}}, \tag{A1}\]
and obtain the gradient of the cost function by taking the partial derivative with respect to the coupling constant \(g\):
\[\frac{\partial C}{\partial g_{i}}=(\langle\sigma_{z}^{0}\rangle_{des}^{ss}-\langle\sigma_{z}^{0}\rangle_{act}^{ss})\left(-\frac{\partial\langle\sigma_{z}^{0}\rangle_{act}^{ss}}{\partial g_{i}}\right) \tag{A2}\]
In our current example, we have two information reservoirs corresponding to specific magnetizations. Therefore, the actual steady state magnetization (eq. (13)) reads as
\[A=\langle\sigma_{z}^{0}\rangle_{act}^{ss}=\frac{g_{1}^{2}\langle\sigma_{z}^{1}\rangle+g_{2}^{2}\langle\sigma_{z}^{2}\rangle}{g_{1}^{2}+g_{2}^{2}}. \tag{A3}\]
According to the recipe to derive the cost function, the partial derivatives with respect to \(g_{1}\) and \(g_{2}\) are obtained separately as
\[\frac{\partial A}{\partial g_{1}}=\frac{2g_{1}\langle\sigma_{z}^{1}\rangle(g_{1}^{2}+g_{2}^{2})-2g_{1}(g_{1}^{2}\langle\sigma_{z}^{1}\rangle+g_{2}^{2}\langle\sigma_{z}^{2}\rangle)}{(g_{1}^{2}+g_{2}^{2})^{2}}\] \[\frac{\partial A}{\partial g_{2}}=\frac{2g_{2}\langle\sigma_{z}^{2}\rangle(g_{1}^{2}+g_{2}^{2})-2g_{2}(g_{1}^{2}\langle\sigma_{z}^{1}\rangle+g_{2}^{2}\langle\sigma_{z}^{2}\rangle)}{(g_{1}^{2}+g_{2}^{2})^{2}} \tag{A4}\]
In our example, the desired magnetization is \(\langle\sigma_{z}^{0}\rangle_{des}^{ss}=0.4\), a constant value in the cost function. After substituting eqs. (A3) and (A4) into eq. (A2), and the resulting expression into eq. (A1), eq. (18) becomes
\[(g_{1})_{k+1}=(g_{1})_{k}+\delta(g_{1})_{k}\] \[(g_{2})_{k+1}=(g_{2})_{k}+\delta(g_{2})_{k}. \tag{A5}\]
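A quick numerical cross-check of the analytic gradient, not part of the original derivation, compares eq. (A4) with a central finite difference of eq. (A3):

```python
# Sanity check of eq. (A4) against a central finite difference of eq. (A3).
import numpy as np

m1, m2, g1, g2 = 0.7, -0.3, 0.004, 0.007

def A(g1, g2):
    return (g1**2 * m1 + g2**2 * m2) / (g1**2 + g2**2)   # eq. (A3)

analytic = 2 * g1 * (m1 - A(g1, g2)) / (g1**2 + g2**2)   # eq. (A4), simplified
h = 1e-9
numeric = (A(g1 + h, g2) - A(g1 - h, g2)) / (2 * h)
print(analytic, numeric)   # the two values agree to high precision
```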
Next, we substitute \(\nu=\theta\) in eq. (19) as
\[\delta\theta_{i}=-\eta\frac{\partial C}{\partial\theta_{i}}. \tag{A6}\]
Regarding eq. (15), one can easily see that the magnetization of the \(i\)th reservoir is \(\langle\sigma_{z}\rangle_{i}=\langle\sigma_{i}^{+}\sigma_{i}^{-}\rangle-\langle\sigma_{i}^{-}\sigma_{i}^{+}\rangle\). Therefore, the \(\theta\)-dependent expression of the magnetization can be written as \(\langle\sigma_{z}\rangle_{i}=\cos\theta_{i}\).
Equation (A7) is obtained when we take the partial derivative of the cost function with respect to \(\theta\).
\[\frac{\partial C}{\partial\theta_{i}}=(\langle\sigma_{z}^{0}\rangle_{des}^{ss}-\langle\sigma_{z}^{0}\rangle_{act}^{ss})\left(-\frac{\partial\langle\sigma_{z}^{0}\rangle_{act}^{ss}}{\partial\theta_{i}}\right) \tag{A7}\]
In our current example, we have two information reservoirs corresponding to specific magnetizations. Therefore, the actual steady state magnetization (eq. (13)) reads as
\[A=\langle\sigma_{z}^{0}\rangle_{act}^{ss}=\frac{g_{1}^{2}\cos\theta_{1}+g_{2}^{2}\cos\theta_{2}}{g_{1}^{2}+g_{2}^{2}}. \tag{A8}\]
According to the recipe to derive the cost function, the partial derivatives with respect to \(\theta_{1}\) and \(\theta_{2}\) are obtained separately as
\[\frac{\partial A}{\partial\theta_{1}}=-\frac{g_{1}^{2}\sin\theta_{1}}{g_{1}^{2}+g_{2}^{2}}\] \[\frac{\partial A}{\partial\theta_{2}}=-\frac{g_{2}^{2}\sin\theta_{2}}{g_{1}^{2}+g_{2}^{2}} \tag{A9}\]
In our example, the desired magnetization is \(\langle\sigma_{z}^{0}\rangle_{des}^{ss}=0\), a constant value in the cost function. After substituting eqs. (A8) and (A9) into eq. (A7), and the resulting expression into eq. (A6), eq. (18) becomes
\[\left(\theta_{1}\right)_{k+1}=\left(\theta_{1}\right)_{k}+\delta\left(\theta_{1}\right)_{k}\] \[\left(\theta_{2}\right)_{k+1}=\left(\theta_{2}\right)_{k}+\delta\left(\theta_{2}\right)_{k}. \tag{A10}\]
Finally, we substitute \(\nu=\phi\) in eq. (19),
\[\delta\phi_{i}=-\eta\frac{\partial C}{\partial\phi_{i}}. \tag{A11}\]
Equation (A12) is obtained when we take the partial derivative of the cost function with respect to \(\phi\).
\[\frac{\partial C}{\partial\phi_{i}}=(\langle\sigma_{y}^{0}\rangle_{des}^{ss}-\langle\sigma_{y}^{0}\rangle_{act}^{ss})\left(-\frac{\partial\langle\sigma_{y}^{0}\rangle_{act}^{ss}}{\partial\phi_{i}}\right) \tag{A12}\]
In our current example, we have two information reservoirs corresponding to specific magnetizations. Therefore, the actual steady-state expectation value (eq. (16)), evaluated using eq. (15), reads as
\[A=\langle\sigma_{y}^{0}\rangle_{act}^{ss}=-r\tau\frac{g_{1}^{3}\sin\theta_{1}\cos\theta_{1}\cos\phi_{1}+g_{1}g_{2}^{2}\sin\theta_{1}\cos\theta_{2}\cos\phi_{1}+g_{1}^{2}g_{2}\cos\theta_{1}\sin\theta_{2}\cos\phi_{2}+g_{2}^{3}\sin\theta_{2}\cos\theta_{2}\cos\phi_{2}}{g_{1}^{2}+g_{2}^{2}}. \tag{A13}\]
According to the recipe to derive the cost function, the partial derivatives with respect to \(\phi_{1}\) and \(\phi_{2}\) are obtained separately as
\[\frac{\partial A}{\partial\phi_{1}}=r\tau\frac{g_{1}^{3}\sin\theta_{1}\cos\theta_{1}\sin\phi_{1}+g_{1}g_{2}^{2}\sin\theta_{1}\cos\theta_{2}\sin\phi_{1}}{g_{1}^{2}+g_{2}^{2}}\] \[\frac{\partial A}{\partial\phi_{2}}=r\tau\frac{g_{1}^{2}g_{2}\cos\theta_{1}\sin\theta_{2}\sin\phi_{2}+g_{2}^{3}\sin\theta_{2}\cos\theta_{2}\sin\phi_{2}}{g_{1}^{2}+g_{2}^{2}} \tag{A14}\]
In our example, the desired expectation value is \(\langle\sigma_{y}^{0}\rangle_{des}^{ss}=0\), a constant value in the cost function. After substituting eqs. (A13) and (A14) into eq. (A12), and the resulting expression into eq. (A11), eq. (18) becomes
\[\left(\phi_{1}\right)_{k+1}=\left(\phi_{1}\right)_{k}+\delta\left(\phi_{1}\right)_{k}\] \[\left(\phi_{2}\right)_{k+1}=\left(\phi_{2}\right)_{k}+\delta\left(\phi_{2}\right)_{k}. \tag{A15}\]
|
2308.02605 | Revision of Sabine's reverberation theory by following a different
approach to Eyring's theory | The room acoustic theory was established based on Sabine's reverberation
theory. However, in Sabine's theory, the reverberation time does not reach
zero, even if the absolute absorption condition is satisfied. This is a
contradiction of Sabine's theory, and Eyring revised the reverberation theory
to resolve this contradiction. In this paper, a theoretical framework for the
consistent reverberation theory is presented. Using this framework, it was
demonstrated that Eyring's theory has a contradiction between the sound energy
density in the steady state and energy decay from the steady state, which is
absent in Sabine's theory. Based on the proposed theoretical framework,
Sabine's reverberation theory was revised using an approach that is different
from that of Eyring. The reverberation time obtained using the revised theory
was shorter than that obtained using Sabine's theory and longer than that
obtained using Eyring's theory. Results of sound ray tracing simulations were
in better agreement with the values calculated using the revised theory rather
than those calculated using Sabine's and Eyring's theories. | Toshiki Hanyu | 2023-08-04T10:14:25Z | http://arxiv.org/abs/2308.02605v4 | **Revision of Sabine's reverberation theory by following a different approach to Eyring's theory**
Toshiki Hanyu1*
Footnote *: [email protected]
\({}^{1}\) _Nihon University, Junior College, Dept. of Architecture and Living Design, 7-24-1 Narashinodai, Funabashi, Chiba, 274-8501 Japan_
**Abstract: The room acoustic theory was established based on Sabine's reverberation theory. However, in Sabine's theory, the reverberation time does not reach zero, even if the absolute absorption condition is satisfied. This is a contradiction of Sabine's theory, and Eyring revised the reverberation theory to resolve this contradiction. In this paper, a theoretical framework for the consistent reverberation theory is presented. Using this framework, it was demonstrated that Eyring's theory has a contradiction between the sound energy density in the steady state and energy decay from the steady state, which is absent in Sabine's theory. Based on the proposed theoretical framework, Sabine's reverberation theory was revised using an approach that is different from that of Eyring. The reverberation time obtained using the revised theory was shorter than that obtained using Sabine's theory and longer than that obtained using Eyring's theory. Results of sound ray tracing simulations were in better agreement with the values calculated using the revised theory rather than those calculated using Sabine's and Eyring's theories.**
**Keywords: Reverberation theory, Reverberation time, Sabine, Eyring, Revised theory**
## 1 Introduction
The room acoustic theory is based on Sabine's [1] and Eyring's [2] reverberation theories. Over the years, it has been improved from various points of view, such as the concept of averaging absorption [3], air absorption [4], directional reverberation [5, 6], uneven absorption [7, 8], and consideration of sound scattering [9, 10, 11]. However, even in present times, Sabine's and Eyring's theories are the most important theoretical frameworks for room acoustics. The important physical quantities derived from these theories are the average sound pressure level (sound energy density) and reverberation time (sound energy decay). Despite that, Sabine's and Eyring's theories are incompatible in these aspects, and are not an integrated and consistent theoretical system. In particular, Eyring's theory has contradictions regarding the sound energy density in a steady state and energy decay from the steady state.
This study presents a theoretical framework for a consistent system and clarifies the contradictions in the current room acoustic theory. Based on the proposed theoretical framework, it aims to revise Sabine's reverberation theory by following an approach different from Eyring's.
## 2 Framework of reverberation theory
### Theoretical framework
It is assumed that a sound source with sound power \(W\) emits sound for an extremely short time \(\Delta t\) in a room with volume \(V\) and that the sound energy density \(E\) exponentially decays with a decay parameter \(\lambda\). This is expressed using Eq. (1). This equation expresses the average impulse response in the room.
\[\Delta E(t)=\frac{W\Delta t}{V}\exp\left(-\lambda t\right). \tag{1}\]
Eq. (2) is obtained by setting \(\Delta t\to 0\).
\[\frac{dE(t)}{dt}=\frac{W}{V}\exp\left(-\lambda t\right). \tag{2}\]
The Schroeder integration [12] of Eq. (2) yields the following equation, which expresses the sound energy decay from the steady state of the sound fields:
\[E(t)=\frac{W}{V}\int_{t}^{\infty}\exp(-\lambda\tau)\,d\tau=\frac{W}{V}\frac{1}{\lambda}\exp(-\lambda t)\,. \tag{3}\]
In Eq. (3), \(W/V\lambda\) is the sound energy density \(E_{0}\), and \(\exp\ (-\lambda t)\) denotes the reverberation decay. In this theoretical framework, \(\exp\ (-\lambda t)\) is construed as the time variation of the probability that the emitted sound energy remains in a room. Based on these equations, using \(\lambda\), the
sound energy density \(E_{0}\) in the steady state can be defined using Eq. (4), and the reverberation decay from the steady state can be expressed using Eq. (5).
\[E_{0}=\frac{W}{V\lambda}. \tag{4}\]
\[E(t)=E_{0}\exp\left(-\lambda t\right). \tag{5}\]
In the theoretical framework, the sound energy density of the steady state and reverberation decay are determined by the identical parameter \(\lambda\). Parameter \(\lambda\) is a key factor in the room acoustic theory. Thus, to achieve a consistent theoretical system, parameter \(\lambda\) in the sound energy density \(E_{0}\) and the reverberation decay \(\exp(-\lambda t)\) should be identical.
### Reverberation time and average sound pressure level in the theoretical framework
The reverberation time and average sound pressure level are defined using \(\lambda\) according to the theoretical framework. The reverberation time \(T\) is defined as the time in which the sound energy density becomes \(10^{-6}E_{0}\). This is expressed as follows:
\[10^{-6}=\exp\left(-\lambda T\right). \tag{6}\]
Solving for \(T\) defines the reverberation time using \(\lambda\) as follows:
\[T=\frac{6\ln(10)}{\lambda}\ \ \ \ \ \ [\mathrm{s}]\,. \tag{7}\]
Using \(I=cE\), Eq. (4) can be transformed into \(I=cW/V\lambda\) where \(c\) is the speed of sound. Therefore, when the sound power level is \(L_{w}\), the average sound pressure level \(L_{p}\) can be calculated using Eq. (8) as follows:
\[L_{p}=L_{w}+10\log_{10}\left(\frac{c}{V\lambda}\right)\ \ \ \ \ [\mathrm{dB}]\,. \tag{8}\]
As shown in Eqs. (7) and (8), important physical quantities in room acoustics are determined using the identical parameter \(\lambda\).
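As a numerical illustration of the framework (a sketch with an assumed room and source level, not taken from the paper), eqs. (7) and (8) can be evaluated directly once \(\lambda\) is known:

```python
# Minimal sketch of eqs. (7)-(8): T and L_p from the single decay parameter lambda.
import numpy as np

def reverberation_time(lam):
    return 6 * np.log(10) / lam                   # eq. (7)

def average_spl(Lw, lam, V, c=343.0):
    return Lw + 10 * np.log10(c / (V * lam))      # eq. (8)

V, S, alpha, c = 750.0, 550.0, 0.3, 343.0         # e.g. a 15 m x 10 m x 5 m room
lam = c * S * alpha / (4 * V)                     # Sabine's choice of lambda (Section 3)
print(reverberation_time(lam))                    # ~0.73 s
print(average_spl(Lw=100.0, lam=lam, V=V))        # average level for Lw = 100 dB
```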
## 3 Theories of Sabine and Eyring regarding the framework
### Sabine's theory
In Sabine's theory, reverberation decay from the steady state can be expressed using Eq. (9), where \(S\) and \(\bar{\alpha}\) are the total surface area and average absorption coefficient of the room, respectively.
\[E(t)=\frac{W}{V}\frac{4V}{cS\bar{\alpha}}\exp\left(-\frac{cS\bar{\alpha}}{4V} t\right). \tag{9}\]
From Eq. (9), \(\lambda\) becomes \(cS\bar{\alpha}/4V\) identically, both in the sound energy density of the steady state and in the reverberation decay. Therefore, Sabine's theory is consistent with the theoretical framework. If \(\lambda=cS\bar{\alpha}/4V\) is substituted into Eqs. (7) and (8), the well-known definitions of \(T\) and \(L_{p}\) are obtained as Eqs. (10) and (11), respectively, where \(A=S\bar{\alpha}\) is the equivalent absorption area.
\[T=\frac{24\ln(10)V}{cS\bar{\alpha}}=\frac{0.163V}{A}\ \ \ \ [\mathrm{s}]\,. \tag{10}\]
\[L_{p}=L_{w}+10\log_{10}\left(\frac{4}{A}\right)\ \ \ \ [\mathrm{dB}]\,. \tag{11}\]
As mentioned previously, Sabine's theory is consistent with the theoretical framework described in Section 2. However, the reverberation time in Sabine's theory does not reach zero, even if \(\bar{\alpha}=1.0\). This has been considered as a shortcoming of Sabine's theory.
### Eyring's theory
In Eyring's theory, the reverberation decay from the steady state becomes \(E(n)=E_{0}(1-\bar{\alpha})^{n}\), where \(n\) is an order of reflection. When the mean free path of the room \(\bar{\ell}\) is defined as \(\bar{\ell}=4V/S\), the average number of reflections up to time \(t\) is \(n=ct/\bar{\ell}\), and the decay can be written as \(E(t)=E_{0}\exp\left[\frac{cS\ln(1-\bar{\alpha})}{4V}t\right]\), so that \(\lambda=-cS\ln(1-\bar{\alpha})/4V\) in the reverberation decay. On the other hand, the sound energy density in the steady state becomes \(E_{0}=\frac{W}{V}\frac{\bar{\ell}}{c}\sum_{n=0}^{\infty}(1-\bar{\alpha})^{n}=\frac{4W}{cS\bar{\alpha}}\), which corresponds to \(\lambda=cS\bar{\alpha}/4V\) in Eq. (4). The parameter \(\lambda\) in the steady-state energy density thus differs from that in the reverberation decay.
Therefore, Eyring's theory is inconsistent with the theoretical framework described in Section 2.
## 4 Revision of Sabine's theory
### Parameter \(\lambda\) in revised theory
Here, we examine a revision of Sabine's theory. In the revised theory, reverberation is defined as "a decay of average sound energy density in the entire space, which includes both direct sound and reflected sounds." Moreover, it was assumed that a perfect diffusion state would be maintained both in a steady state and in the entire reverberation process.
Fig. 1 shows a conceptual diagram of the reverberation decay based on Eq. (2). In Sabine's theory, the reverberation decay in Eq. (2) becomes \((W/V)\)exp\((\)-\(ct/\bar{\ell})\) for \(\bar{\alpha}\)=1, where \(\bar{\ell}=4V/S\). Because reflected sounds are not generated under these conditions, this decay is construed as the decay of the direct sound.
The sound energy of the direct sound does not vanish until it reaches the wall surfaces, even when \(\bar{\alpha}\)=1. Thus, if the sound source stops in a steady state, the sound energy of the direct sound remains in the space for a while. This is the concept of the reverberation of direct sound. According to this concept, the reverberation time does not necessarily have to reach zero, as it does in Eyring's theory.
However, it is incorrect that the direct sound energy remains far beyond time \(t=\bar{\ell}/c\). Therefore, the direct sound is limited to the time \(t=\bar{\ell}/c\) on average in the revised theory. Based on this, the energy density of the direct sound \(E_{d}\) can be calculated using Eq. (16).
\[E_{d}=\frac{W}{V}\int_{0}^{\frac{\bar{\ell}}{c}}\exp\left(-\frac{c}{\bar{\ell}}\tau\right)d\tau=\frac{W}{V}\frac{\bar{\ell}}{c}\left(1-e^{-1}\right). \tag{16}\]
The energy density of the direct sound becomes \(W\bar{\ell}/cV\) in both Sabine's and Eyring's theories, but \(W\bar{\ell}(1-e^{-1})/cV\) in the revised theory. Based on this, the energy density in the steady state \(E_{0}\) can be obtained using Eq. (17).
\[\begin{split} E_{0}&=\frac{W}{V}\frac{\bar{\ell}}{ c}\bigg{[}\big{(}1-e^{-1}\big{)}+\sum_{n=1}^{\infty}\big{(}1-\bar{\alpha} \big{)}^{n}\bigg{]}\\ &=\frac{W}{V}\frac{4V\left(1-e^{-1}\bar{\alpha}\right)}{cS\bar{ \alpha}}\end{split} \tag{17}\]
Comparing Eq. (17) with Eq. (4), parameter \(\lambda\) in the revised theory can be derived as \(cS\bar{\alpha}/4V(1-e^{-1}\bar{\alpha})\). Using this parameter \(\lambda\), the reverberation decay from the steady state can be expressed using Eq. (18), where \(R\) is the room constant, \(R=S\bar{\alpha}/(1-\bar{\alpha})\).
\[\begin{split} E(t)&=\frac{W}{V}\frac{4V\left(1-e^{ -1}\overline{\alpha}\right)}{cS\overline{\alpha}}\exp\left[-\frac{cS\overline{ \alpha}}{4V\left(1-e^{-1}\overline{\alpha}\right)}t\right]\\ &=\frac{4W}{c}\bigg{(}\frac{1}{A}-\frac{1}{eS}\bigg{)}\!\exp \left[-\bigg{(}\frac{1}{A}-\frac{1}{eS}\bigg{)}^{\!\!-1}\frac{ct}{4V}\right] \\ &=\frac{4W}{c}\bigg{(}\frac{1}{R}+\frac{1-e^{-1}}{S}\bigg{)}\!\exp \left[-\bigg{(}\frac{1}{R}+\frac{1-e^{-1}}{S}\bigg{)}^{\!\!-1}\frac{ct}{4V} \right]\end{split}. \tag{18}\]
### Reverberation time and average sound pressure level
From Eq. (7), the reverberation time in the revised theory can be defined using Eq. (19).
\[\begin{split}& T=\frac{0.163V}{A}\big{(}1-e^{-1}\overline{ \alpha}\big{)}\\ &=0.163V\bigg{(}\frac{1}{A}-\frac{1}{eS}\bigg{)}\!=0.163V\bigg{(} \frac{1}{R}+\frac{1-e^{-1}}{S}\bigg{)}\!\bigg{[}s\big{]}\end{split}. \tag{19}\]
The reverberation time from Eq. (19) is equal to Sabine's reverberation time multiplied by \((1-e^{-1}\bar{\alpha})\). Stephenson [17] clarified that, for small \(\bar{\alpha}\), Eyring's reverberation time is nearly equal to Sabine's reverberation time multiplied by \((1-0.5\bar{\alpha})\). Because \(e^{-1}\approx 0.368<0.5\), the factor \((1-e^{-1}\bar{\alpha})\) of the revised theory implies that the reverberation time obtained using the revised theory is shorter than that of Sabine's theory and longer than that of Eyring's theory.
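This ordering can be checked numerically; in the following sketch, the Eyring expression \(T=0.163V/(-S\ln(1-\bar{\alpha}))\) is the standard textbook form and is an assumption here rather than a formula quoted from this paper:

```python
# Sketch comparing Sabine's, the revised (eq. (19)), and Eyring's reverberation times.
import numpy as np

def T_sabine(V, S, a):  return 0.163 * V / (S * a)
def T_revised(V, S, a): return T_sabine(V, S, a) * (1 - a / np.e)    # eq. (19)
def T_eyring(V, S, a):  return 0.163 * V / (-S * np.log(1 - a))      # assumed standard form

V, S = 750.0, 550.0   # 15 m x 10 m x 5 m room
for a in (0.2, 0.4, 0.6, 0.8):
    print(f"alpha={a}: Sabine {T_sabine(V,S,a):.3f} s, "
          f"revised {T_revised(V,S,a):.3f} s, Eyring {T_eyring(V,S,a):.3f} s")
```

For every absorption coefficient, the revised value lies between the other two.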
Substituting \(\lambda=cS\bar{\alpha}/4V(1-e^{-1}\bar{\alpha})\) into Eq. (8), the average sound pressure level \(L_{p}\) in the revised theory can be expressed using Eq. (20). According to Eq. (20), \(L_{p}\) includes the components of the reflected \(4/R\) and direct \((1-e^{-1})/S\) sounds.
\[\begin{split} L_{p}&=L_{w}+10\log_{10}\left(\frac{1 }{A}-\frac{1}{eS}\right)+6\ \ \ \ \ \ \ \ \text{[dB]}\\ &=L_{w}+10\log_{10}\left(\frac{1}{R}+\frac{1-e^{-1}}{S}\right)+6 \ \ \ \ \ \text{[dB]}\end{split}. \tag{20}\]
Figure 1: Conceptual diagram of the energy density of direct sound.
### Mean free path of direct sound
When the mean free path of direct sound is \(\overline{\ell_{d}}\), the energy density of the direct sound \(E_{d}\) can be expressed as
\((W/V)(\overline{\ell_{d}}/c)\). Comparing this to Eq. (16), the mean free path of direct sound \(\overline{\ell_{d}}\) can be expressed as Eq. (21).
\[\overline{\ell_{d}}=\overline{\ell}\left(1-e^{-1}\right)=\frac{4V}{S}\left(1-e ^{-1}\right). \tag{21}\]
In the revised theory, the mean free paths of direct sound \(\overline{\ell_{d}}\) and reflected sound \(\overline{\ell}\) differ, as demonstrated in Eq. (21). The mean free path of direct sound \(\overline{\ell_{d}}\) is shortened by just the length of \(\overline{\ell}/e\) compared to that of reflected sound \(\overline{\ell}\).
### Interpretation of parameter \(\lambda\) in reverberation decay
Sabine's reverberation decay given by Eq. (9) can also be expressed using the mean free path \(\overline{\ell}\) as Eq. (22).
\[E(t)=\frac{W}{cV}\,\frac{\overline{\ell}}{\overline{\alpha}}\exp\left[-\left(\frac{\overline{\ell}}{\overline{\alpha}}\right)^{-1}ct\right]. \tag{22}\]
The factor \(\overline{\ell}/\overline{\alpha}\) can be interpreted as a mean absorption free path, i.e., the average distance that the sound energy travels in the space until it is absorbed.
The reverberation decay of the revised theory given by Eq. (18) can be transformed using the mean free path \(\overline{\ell}\) into Eq. (23).
\[E(t)=\frac{W}{cV}\left(\frac{\overline{\ell}}{\overline{\alpha}}-\frac{\overline{\ell}}{e}\right)\exp\left[-\left(\frac{\overline{\ell}}{\overline{\alpha}}-\frac{\overline{\ell}}{e}\right)^{-1}ct\right]. \tag{23}\]
From Eq. (23), parameter \(\lambda\) in the revised theory can also be expressed as \(c/\left(\overline{\ell}/\overline{\alpha}-\overline{\ell}/e\right)\). The factor \(\left(\overline{\ell}/\overline{\alpha}-\overline{\ell}/e\right)\) is the mean absorption free path \(\overline{\ell_{a}}\) of the revised theory. Similar to the mean free path of direct sound \(\overline{\ell_{d}}\), the mean absorption free path in the revised theory is shortened by exactly the length \(\overline{\ell}/e\), regardless of \(\overline{\alpha}\), compared with Sabine's theory. This is easily understood from the fact that the revised theory distinguishes between the mean free paths of direct sound, \(\overline{\ell_{d}}=\overline{\ell}(1-e^{-1})\), and reflected sound, \(\overline{\ell}\).
Here, we examine why the length \(\overline{\ell}/e\), which seems to be related to the direct sound, influences the reverberation process even far beyond the time \(t=\overline{\ell}/c\), which should be unrelated to the direct sound.
As mentioned above, the revised theory assumes that a perfect diffusion state is maintained both in the steady state and throughout the reverberation process. Therefore, the sound fields in the steady state and throughout the reverberation process cannot be distinguished, except for the sound energy density; statistically, they are the same. Thus, the energy decay from \(t\)=0 and that from any later time, even far beyond \(t=\overline{\ell}/c\), must essentially have the same decay mechanism. Based on this, we can understand why both \(\overline{\ell_{d}}\) and \(\overline{\ell_{a}}\) are shortened by the same length \(\overline{\ell}/e\) in the revised theory.
## 5 Comparison of theories with computer simulation
### Simulation method
To verify the revised theory, simulations were performed using the sound ray tracing method, and the obtained reverberation time and average sound pressure level were compared with the values calculated using each theory.
Two rectangular rooms with dimensions of 15 m\(\times\)10 m\(\times\)5 m and 10 m\(\times\)10 m\(\times\)10 m were used for the simulation. Absorption coefficients were set from 0.2 to 0.8 in steps of 0.1 and were uniform across all walls.
To simulate a perfect diffusion state as the initial condition of the sound field, \(10^{6}\) sound particles (representing the acoustic energy) were uniformly distributed in the room using random numbers. The traveling direction of each particle was determined randomly using uniform spherical random numbers. This is statistically equivalent to a uniform distribution of an infinite number of point sound sources in the room. The total sound power \(W\) of all particles was set to 1.0 watt. The time step \(\Delta t\) was \(0.1/c\), because the travel of the sound particles was calculated every 0.1 m. Information on each particle, such as its energy, direction of propagation, number of reflections, and number of absorptions, was recorded at every calculation step. The mean free paths of direct and reflected sound were calculated from this information.
When each sound particle is reflected on the wall, it is diffusely reflected according to Lambert's cosine law, and the energy of the sound particle is multiplied by \(\left(1-\overline{\alpha}\right)\). From the total energy of all the particles at each time step \(\Delta t\), the change in the acoustic energy density over time was calculated using Eq. (1). The reverberation decay curve \(E(t)\) was calculated using Eq. (3). The reverberation time was calculated from the -5 dB to -35 dB slope of the reverberation decay curve. The average sound pressure level was obtained from the initial energy of the reverberation decay \(E(0)\).
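The mean-free-path part of this procedure is simple enough to sketch. The following Python snippet (an independent illustration, not the code used in this study) estimates mfp_dir and mfp_ref in a rectangular room by tracing particles with Lambert reflections, for comparison with eq. (21) and \(\bar{\ell}=4V/S\):

```python
# Monte Carlo sketch: mean free paths of direct and reflected sound in a box.
import numpy as np

rng = np.random.default_rng(0)
L = np.array([15.0, 10.0, 5.0])                    # room dimensions [m]

def first_hit(p, d):
    # travel distance from p along unit direction d to the first wall
    with np.errstate(divide='ignore', invalid='ignore'):
        t = np.where(d > 0, (L - p) / d, np.where(d < 0, -p / d, np.inf))
    ax = int(np.argmin(t))
    return t[ax], ax

def lambert(ax, inward):
    # cosine-weighted (Lambert) direction about the inward wall normal
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2 * np.pi * u2
    d = np.empty(3)
    d[ax] = inward * np.sqrt(1 - u1)               # normal component
    d[(ax + 1) % 3] = r * np.cos(phi)
    d[(ax + 2) % 3] = r * np.sin(phi)
    return d

direct, reflected = [], []
for _ in range(20000):
    p = rng.random(3) * L                          # uniform interior source point
    v = rng.normal(size=3); v /= np.linalg.norm(v) # isotropic initial direction
    for n in range(6):                             # direct hit + a few reflections
        t, ax = first_hit(p, v)
        (direct if n == 0 else reflected).append(t)
        p = p + t * v
        inward = 1.0 if p[ax] < L[ax] / 2 else -1.0
        p[ax] = 0.0 if inward > 0 else L[ax]       # snap exactly onto the wall
        v = lambert(ax, inward)

V, S = L.prod(), 2 * (L[0]*L[1] + L[0]*L[2] + L[1]*L[2])
print(np.mean(direct), 4 * V / S * (1 - 1 / np.e))  # mfp_dir vs eq. (21)
print(np.mean(reflected), 4 * V / S)                # mfp_ref vs 4V/S
```

As in Table 1, the simulated mfp_ref reproduces \(4V/S\) closely, while the simulated mfp_dir comes out slightly longer than the value of eq. (21).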
### Results and discussion
Table 1 shows a comparison between the revised theory and the simulation in terms of the mean free paths of direct sound (mfp_dir) and reflected sound (mfp_ref). The simulation results are shown for absorption coefficients from 0.2 to 0.8 in steps of 0.2. As the simulation results show, mfp_dir was shorter than mfp_ref regardless of the absorption coefficient. The solutions of the revised theory corresponded well with the simulation results for both mfp_dir and mfp_ref, except that the theoretical mfp_dir values were slightly shorter than those of the simulation.
Fig. 2 compares the theoretical solutions with the simulation results for the reverberation times and average sound pressure levels. The simulation results agreed better with the revised theory than with Sabine's and Eyring's theories. The reverberation time obtained using the revised theory was shorter than that of Sabine's theory and longer than that of Eyring's theory. The average sound pressure levels were lower than those of the existing theories.
Fig. 3 compares the reverberation decay curves obtained using the theories with the simulation results for \(\alpha=0.2\) and \(\alpha=0.5\). The decay curves of the simulations agreed better with the revised theory than with Sabine's and Eyring's theories: Sabine's decay curves were always overestimated, and Eyring's were always underestimated.
\begin{table}
\begin{tabular}{c c|c c|c c} \hline \hline & & \multicolumn{2}{c|}{15m\(\times\)10m\(\times\)5m} & \multicolumn{2}{c}{10m\(\times\)10m\(\times\)10m} \\ \cline{3-6} & & mfp\_dir [m] & mfp\_ref [m] & mfp\_dir [m] & mfp\_ref [m] \\ \hline \multicolumn{2}{c|}{Revised theory} & 3.45 & 5.45 & 4.21 & 6.67 \\ \hline \multirow{4}{*}{Simulation} & \(\alpha\)=0.2 & 3.77 & 5.46 & 4.49 & 6.68 \\ & \(\alpha\)=0.4 & 3.78 & 5.48 & 4.49 & 6.70 \\ & \(\alpha\)=0.6 & 3.79 & 5.49 & 4.49 & 6.72 \\ & \(\alpha\)=0.8 & 3.78 & 5.50 & 4.49 & 6.75 \\ \hline \end{tabular}
\end{table}
Table 1: Mean free paths of direct sound (mfp_dir) and reflected sound (mfp_ref).
Figure 2: Comparison between theories and simulation (left: reverberation time, right: average sound pressure level).
## 6 Conclusions
In this study, a theoretical framework for the consistent reverberation theory was presented. Based on this theoretical framework, Sabine's reverberation theory was revised by following an approach that is different from Eyring's. The reverberation time obtained using the revised theory is equal to Sabine's reverberation time multiplied by \((1-e^{-1}\bar{\alpha})\). The newly defined reverberation time was shorter than that obtained using Sabine's theory and longer than that obtained using Eyring's theory.
The computer simulation results, obtained using the ray-tracing method, confirmed that the revised theory explains the results more reasonably than Sabine's and Eyring's theories in the practical range of absorption coefficients from 0.2 to 0.8.
In the future, it will be necessary to verify the revised theory under various sound field conditions.
|
2304.03816 | Towards Generating Functionally Correct Code Edits from Natural Language
Issue Descriptions | Large language models (LLMs), such as OpenAI's Codex, have demonstrated their
potential to generate code from natural language descriptions across a wide
range of programming tasks. Several benchmarks have recently emerged to
evaluate the ability of LLMs to generate functionally correct code from natural
language intent with respect to a set of hidden test cases. This has enabled
the research community to identify significant and reproducible advancements in
LLM capabilities. However, there is currently a lack of benchmark datasets for
assessing the ability of LLMs to generate functionally correct code edits based
on natural language descriptions of intended changes. This paper aims to
address this gap by motivating the problem NL2Fix of translating natural
language descriptions of code changes (namely bug fixes described in Issue
reports in repositories) into correct code fixes. To this end, we introduce
Defects4J-NL2Fix, a dataset of 283 Java programs from the popular Defects4J
dataset augmented with high-level descriptions of bug fixes, and empirically
evaluate the performance of several state-of-the-art LLMs for the this task.
Results show that these LLMS together are capable of generating plausible fixes
for 64.6% of the bugs, and the best LLM-based technique can achieve up to
21.20% top-1 and 35.68% top-5 accuracy on this benchmark. | Sarah Fakhoury, Saikat Chakraborty, Madan Musuvathi, Shuvendu K. Lahiri | 2023-04-07T18:58:33Z | http://arxiv.org/abs/2304.03816v1 | # Towards Generating Functionally Correct Code Edits from Natural Language Issue Descriptions
###### Abstract.
Large language models (LLMs), such as OpenAI's Codex, have demonstrated their potential to generate code from natural language descriptions across a wide range of programming tasks. Several benchmarks have recently emerged to evaluate the ability of LLMs to generate functionally correct code from natural language intent with respect to a set of hidden test cases. This has enabled the research community to identify significant and reproducible advancements in LLM capabilities. However, there is currently a lack of benchmark datasets for assessing the ability of LLMs to generate functionally correct _code edits_ based on natural language descriptions of intended changes. This paper aims to address this gap by motivating the problem _nl2fix_ of translating natural language descriptions of code changes (namely bug fixes described in Issue reports in repositories) into correct code fixes. To this end, we introduce _Defects47-N2fix_, a dataset of 283 Java programs from the popular Defects4J dataset augmented with high-level descriptions of bug fixes, and empirically evaluate the performance of several state-of-the-art LLMs for the this task. Results show that these LLMS together are capable of generating plausible fixes for 64.6% of the bugs, and the best LLM-based technique can achieve up to 21.20% top-1 and 35.68% top-5 accuracy on this benchmark.
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †: journal:
+
Footnote †:
+
Footnote †: journal:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †:
+
Footnote †
test (Kalik et al., 2019). Second, although a program may have accompanying regression tests, running such tests in CI pipelines can be expensive, and they can be invoked only a small number (say, 5) of times to be practical. As discussed later in related work (Section 5), keeping the tests hidden distinguishes the _nl2fix_ problem from the problem of automated program repair (APR) (Kalik et al., 2019; Kalik et al., 2020).
This paper also contributes a detailed empirical evaluation of the performance of current state-of-the-art (SOTA) LLMs on this dataset. We choose three different flavors of LLMs based on the generative pre-trained transformer (GPT) neural architecture from OpenAI: (a) the Codex code completion model code-davinci-002, (b) the Codex code editing model code-davinci-edit-001, and (c) the ChatGPT conversational model gpt-3.5-turbo. We evaluate these models under different sampling settings and _prompting_ strategies and perform a detailed quantitative and qualitative analysis of the accuracy and quality of the suggested fixes.
Our results demonstrate that these LLMs together are capable of generating _plausible_ patches (_i.e._, patches that satisfy the regression and trigger tests) for a significant fraction, 64.6%, of the bugs when sampling up to 100 candidates. More interestingly, the ChatGPT model gpt-3.5-turbo outperforms the other models in terms of the pass@1, pass@5, and pass@100 accuracy metrics (we describe the pass@k metric more formally later in Section 3.4.2). Finally, we describe a generic approach to rank an unordered set of candidate patches based on LLM-computed _embedding similarity_; the ranking makes the suggestions deterministic, with top-1 and top-5 accuracy of 21.20% and 35.68%, respectively.
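For reference, pass@k is typically computed with the unbiased estimator popularized by the Codex evaluation (Chen et al., 2021); the following minimal sketch (our illustration, not code from this paper) shows the numerically stable form:

```python
# Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k), in a numerically stable form.
import numpy as np

def pass_at_k(n, c, k):
    # n: candidates sampled, c: candidates passing all tests, k: selection budget
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=100, c=10, k=5))  # probability that >= 1 of 5 draws is plausible
```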
These findings highlight both the non-trivial nature of the _Defects47-N12fix_ benchmark, as well as the capabilities of current LLMs to form decent initial baselines that can spur further research.
**Contributions.** In summary, in this paper: (_i_) We motivate the _nl2fix_ problem and present a non-trivial augmented benchmark _Defects4J-NL2Fix_ along with metrics.
(_ii_) We perform an extensive empirical evaluation of the performance of three state-of-the-art LLMs on this benchmark.
(_iii_) We describe a ranking strategy based on embedding similarity to provide a ranked and deterministic list of fixes.
## 2. Research Questions
This paper aims to understand the performance of current state-of-the-art LLMs on the problem of _nl2fix_: the task of fixing a buggy program from natural language intent. To this end, we define four research questions to empirically evaluate model capabilities:
**RQ1. Can LLMs generate fixes from natural language intent for NL2Fix?**
To answer this RQ we explore the pass@k accuracy of three SOTA LLMs for generating correct bug fixes using natural language issue descriptions. Additionally, we extract quantitative statistics about the generated fixes, including: the prevalence of duplicate suggestions, compilation percentage, and the distribution and overlap of unique bugs for which there are plausible patches from each approach.
**RQ2. What kind of candidate fixes do LLMs generate?**
To shed light on the nature of LLM generated candidate patches, we study the characteristics of the patches in the context of their similarity to the developer written ground truth fixes and the input buggy code.
**RQ3. What sources of information do LLMs need to generate fixes for NL2Fix?**
We explore what level of information LLMs need in order to correctly generate fixes from natural language descriptions. We experiment with different prompting styles using curated information, including: the high-level issue summary, in-depth issue description, 0-shot and 1-shot prompt settings, as well as bug fix reasoning generated using Reasoning Extraction prompting strategies.
**RQ4. Can LLMs be used to rank fixes for NL2Fix?**
Based on observations from RQ1, RQ2, and RQ3, we explore how LLMs can be used to design a simple approach for ranking fixes from the unordered set of candidate fixes, allowing a better approximation of the pass@k metrics needed for developing a deterministic and practical bug-fix recommender.
Figure 1. Overview of the NL2Fix problem setting. Illustrated are the issue title, description, and a plausible fix for the JXPATH-149 bug. The figure demonstrates the standard LLM prompt used to generate candidate patches, and the evaluation of the patches using ground-truth trigger and regression tests.
## 3. Approach
### Dataset
In this paper, we take the _first step_ towards creating a benchmark for _nl2edit_ and evaluating current state-of-the-art LLMs on the problem. We focus on the restricted problem of _nl2fix_, which consists of the task of fixing a buggy program where the bug is described in natural language within an issue description.
We choose the Defects4J (Han et al., 2017) benchmark, comprising bugs and tests from real-world issues, from which we can extract issue descriptions.
In particular, we use Defects4J 2.0, a well-known benchmark of 835 manually curated real-world bugs and fixes gathered from 17 Java projects. The existing dataset consists of a set of bugs, bug reproducing test cases (trigger tests), and regression test cases which load the class in which the method under test is contained. Each bug in the Defects4J dataset contains a PRE_FIX_REVISION and POS_FIX_REVISION version that represents the buggy/fixed versions of the code respectively. The two versions reflect the actual state of the project when the bug was discovered/fixed.
We use these developer-written tests to evaluate generated patches; a patch must pass both the trigger and regression tests to be considered plausible. While this does not guarantee semantic equivalence between a generated patch and the ground-truth fix, we argue that it is a realistic proxy for patch correctness for two reasons. First, with LLMs capable of generating hundreds of candidate patches, manually evaluating each generated patch for semantic equivalence can be prohibitively expensive (on the order of 28,000 patches per model configuration). Thus, evaluating candidate patches with the developer-written tests serves as a scalable proxy for preserving functionality and defect-freedom. Second, semantic equivalence with the user-provided fix may not be necessary, as there can be multiple non-equivalent fixes that are nonetheless acceptable to developers in practice. Without knowledge of the detailed invariants of the project, it is difficult to determine whether a particular ground-truth fix is the only acceptable one.
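To make this evaluation loop concrete, a hedged sketch follows. It assumes the standard Defects4J command-line interface (defects4j checkout, defects4j test); the apply_patch helper and the output-parsing heuristic are hypothetical placeholders rather than details taken from this paper.

```python
# Sketch of the plausibility check: a candidate patch is plausible iff the
# patched checkout passes the trigger and regression tests.
import pathlib
import subprocess
import tempfile

def is_plausible(project, bug_id, candidate_method, apply_patch):
    work = pathlib.Path(tempfile.mkdtemp())
    # check out the buggy (PRE_FIX_REVISION) version of the project
    subprocess.run(["defects4j", "checkout", "-p", project,
                    "-v", f"{bug_id}b", "-w", str(work)], check=True)
    apply_patch(work, candidate_method)   # hypothetical: splice method into its file
    out = subprocess.run(["defects4j", "test"], cwd=work,
                         capture_output=True, text=True)
    # defects4j reports 'Failing tests: N'; assume N == 0 means plausible
    return "Failing tests: 0" in out.stdout
```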
We augment the Defects4J dataset in three distinct ways:
* We restrict the _Defects4J-NL2Fix_ dataset to fixes that affect a _single method_ body. Among the 835 bugs in the Defects4J 2.0 dataset, 283 are single-method bugs, i.e., bugs that can be fixed with changes to a single method. Fixes may span multiple lines but are scoped to a single function. Table 1 contains a breakdown of the number of bugs per project.
We justify our decision to focus on single-method bugs for two reasons. First, methods generally define a unit of code that can be reviewed independently, in contrast to isolated lines or an entire file or repository. Second, the input prompts of LLMs are restricted to only a few thousand tokens, which may not suffice to capture file- or repository-level information. APR approaches using the Defects4J dataset often restrict their datasets to contain only single-hunk or single-line bugs (Zhu et al., 2019). We do not make this restriction, and Table 1 shows the average number of hunks for the bugs in our dataset.
* Second, to serve the _nl2fix_ problem, we augment the Defects4J dataset by pairing each bug with its corresponding issue metadata, including the issue title and description, which we scrape from GitHub, SVN, and Jira.
* Finally, upon close investigation of the buggy methods in the Defects4J dataset, we notice that, as a side effect of the bug patching process used by the benchmark creators, comments that appear in the POST_FIX_REVISION also appear in the PRE_FIX_REVISION of the code2. This means that comments related to the actual fix made by the developer may appear in the PRE_FIX_REVISION that we use as input to the LLMs. To avoid these comments providing hints about the solutions to the model, we remove all comments from the PRE_FIX_REVISION.
Footnote 2: [https://github.com/rjust/defects4j/issues/477](https://github.com/rjust/defects4j/issues/477)
### Generative Pre-trained Transformers (GPT)
Generative Pre-trained Transformers (GPT) are large-scale auto-regressive (Han et al., 2017) generation models trained to predict the next token given a natural language prefix (prompt) context. The recent development of ultra-large-scale GPT models with billions of parameters has shown emergent properties where they can perform tasks without finetuning (Han et al., 2017; Wang et al., 2018). When asked to generate responses to a prompt, GPT models sample from the token probability distribution one token at a time. To generate the most probable response (or multiple responses), these models support different sampling strategies, including temperature-based sampling, which manipulates the distribution of tokens, controlling the diversity of the responses. A lower temperature typically results in less diversity, while a higher temperature yields more diverse outputs. We use two temperatures, 0.2 and 0.8, throughout the experiments.
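For illustration, the following is a minimal sketch of this sampling setup, assuming the legacy OpenAI Python client (openai<1.0); the prompt string and helper name are illustrative rather than the exact experimental harness:

```python
# Minimal sketch of sampling n candidate fixes at a fixed temperature,
# assuming the legacy OpenAI Python client (openai<1.0). The prompt
# construction and function name are illustrative, not the exact harness.
import openai

def sample_candidates(prompt: str, n: int = 100, temperature: float = 0.8) -> list:
    """Request n independent completions for a single prompt."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,  # 0.2: near-deterministic; 0.8: more diverse
        n=n,                      # number of samples per request
    )
    return [choice.message.content for choice in response.choices]
```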
To answer the defined research questions, we select three state-of-the-art GPT-based Large Language Models (LLM) that have shown strong capabilities on a variety of code generation tasks3.
Footnote 3: At the time of submission, the authors did not have API access to the most recent state-of-the-art model, GPT-4 (Zhu et al., 2019)
**Codex.** OpenAI's Codex, code-davinci-002, is a language model specifically designed for code completion tasks. It is based on the GPT-3 architecture and has been fine-tuned on a large corpus of code from public repositories. Codex excels at generating syntactically correct code and has been shown to be highly effective for tasks involving code generation.
**Codex Edit Model.** The Codex edit model, code-davinci-edit-001, is a version of Codex GPT-3 with editing capabilities. Given a piece of code
| **Project** | **# Bugs** | **SH† Bugs** | **Avg. hunks** | **Avg. change (lines)** | **Avg. change (tokens)** | **Issue length (title)** | **Issue length (desc.)** |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| Chart | 6 | 5 | 1.17 | 1.83 | 9.67 | 7.17 | 149.35 |
| Cli | 28 | 16 | 1.68 | 4.07 | 22.07 | 9.36 | 206.46 |
| Codec | 11 | 9 | 1.27 | 2.18 | 10.82 | 12.73 | 171.09 |
| Collections | 1 | 1 | 1.00 | 1.00 | 1.00 | 20.00 | 457.00 |
| Compress | 36 | 19 | 1.78 | 5.39 | 29.28 | 10.03 | 320.42 |
| Csv | 12 | 8 | 1.42 | 2.50 | 18.92 | 10.83 | 1042.5 |
| JacksonCore | 13 | 9 | 1.38 | 3.69 | 20.69 | 11.38 | 251.69 |
| JacksonDatabind | 67 | 36 | 1.87 | 5.37 | 33.90 | 11.90 | 294.49 |
| JacksonXml | 5 | 1 | 2.80 | 6.20 | 30.80 | 11.60 | 126.80 |
| JxPath | 10 | 5 | 1.60 | 4.80 | 22.60 | 9.70 | 195.10 |
| Math | 73 | 37 | 2.11 | 5.29 | 35.00 | 10.07 | 165.18 |
| Mockito | 21 | 16 | 1.33 | 3.29 | 27.90 | 9.00 | 311.67 |
| **Overall** | **283** | **162** | **1.78** | **4.65** | **28.76** | **10.53** | **231.92** |

†SH: Single-Hunk

Table 1. Statistics of the Dataset
and an instruction written in NL, such as "Improve the runtime complexity of this function", the model edits the code to (possibly) satisfy the instruction.
**ChatGPT.** The recently released ChatGPT model (gpt-3.5-turbo) is based on the pretrained GPT-3.5 model, which is further finetuned using Reinforcement Learning with Human Feedback (RLHF) (Krishnan et al., 2017). While gpt-3.5-turbo is not explicitly fine-tuned for code generation tasks, early evaluation has demonstrated strong capabilities in several fields of science and engineering (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019), including understanding and generating code snippets (Krishnan et al., 2017; Krishnan et al., 2019; Krishnan et al., 2019). ChatGPT's conversational nature allows it to excel in tasks that require both code generation and human-like interactions, allowing the use of advanced prompt structures that involve chain of thought (Krishnan et al., 2019) and reasoning extraction (Krishnan et al., 2019).
**Embeddings.** OpenAI embedding models generate a high-dimensional vector representation of input strings.
Research shows that similarity in such high-dimensional vector spaces translates to semantic similarity of the strings. Among many other applications, such representations can facilitate similarity analysis, searching, etc. In this work, we leverage the text-embedding-ada-002 model to generate embeddings of code and use embedding-based similarities to rank the patches.
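A minimal sketch of such embedding-based similarity, again assuming the legacy OpenAI Python client; the helper names are illustrative:

```python
# Minimal sketch of embedding-based similarity between a buggy function and
# a candidate patch, assuming the legacy OpenAI Python client (openai<1.0).
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    response = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.asarray(response["data"][0]["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# e.g., score = cosine_similarity(embed(buggy_code), embed(candidate_patch))
```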
### Prompting Framework
Capabilities of LLMs are not fixed across all contexts, _i.e._, if an LLM gets a question wrong, slightly changing the prompt by modifying the contents or format of the information given may yield different outcomes. There are several techniques to improve the accuracy and reliability of LLM output; these techniques are referred to as prompt engineering (Krishnan et al., 2019). In this paper we use standard prompting as well as two distinct strategies that have been shown to improve the performance of LLMs on complex tasks: 1) few-shot prompting and 2) reasoning extraction. These strategies are designed to help provide context and guidance to effectively solve a task while mitigating potential pitfalls associated with model-generated outputs.
#### 3.3.1. Zero-Shot Prompting
Zero-shot prompting, also frequently referred to as _standard prompting_, is the basic configuration of prompting a model with a task. The prompt does not include any examples of acceptable solutions and does not break down the problem into easier sub-problems. In this paper, our zero-shot, or standard, prompt is illustrated in Figure 1. It is composed of the issue title and description, along with the buggy code and an instruction to provide a fix to the code.
#### 3.3.2. Few-Shot Prompting
Few-shot prompting is a technique that involves presenting the model with a series of examples or demonstrations in order to guide its understanding of the task at hand. By providing the model with a few instances of similar tasks, along with their respective inputs and desired outputs, we guide the model towards the desired output, both in terms of functionality and format. This approach enables the model to adapt its responses based on the provided examples, leading to more accurate and coherent output. For a given issue, we select another issue, along with its buggy code and fixed code, to serve as a shot. We select the example whose buggy code is closest to the target buggy code, using a standard edit distance metric.
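As a sketch of this selection step, the following uses plain Levenshtein distance as one reasonable instantiation of a standard edit distance; the data layout and helper names are assumptions:

```python
# Sketch of shot selection: pick the example whose buggy code is closest to
# the target buggy code. Plain Levenshtein distance is shown as one
# reasonable instantiation of a "standard edit distance metric".
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

def select_shot(target_buggy: str, examples: list) -> dict:
    """examples: list of {'issue': ..., 'buggy': ..., 'fixed': ...} dicts."""
    return min(examples, key=lambda ex: levenshtein(target_buggy, ex["buggy"]))
```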
#### 3.3.3. Reasoning Extraction
Reasoning extraction is a strategy that focuses on extracting the underlying rationale behind a specific task or problem (Krishnan et al., 2019). We apply this strategy to help the model comprehend the objective(s) and solution of the code fix task. In particular, we explicitly interact with the model three times with different queries. First, given the buggy code and issue report, we ask the model to localize the bug; then we ask it to explain why the localized lines are buggy; finally, we ask it to fix the bug. ChatGPT's conversational nature naturally allows the use of advanced prompt structures that maintain conversational context, like chain-of-thought reasoning and reasoning extraction. Therefore, we only use this prompt strategy with gpt-3.5-turbo.
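A sketch of this three-step interaction, assuming the legacy OpenAI chat API; the exact message phrasings are illustrative, not the verbatim prompts:

```python
# Sketch of the three-step reasoning-extraction interaction (localize,
# explain, fix), carrying the conversation forward between steps.
import openai

def chat(messages: list) -> str:
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return response.choices[0].message.content

def reasoning_extraction_fix(issue: str, buggy_code: str) -> str:
    history = [{"role": "user",
                "content": f"Issue report:\n{issue}\n\nCode:\n{buggy_code}\n\n"
                           "Which lines of this function are buggy?"}]
    for follow_up in ["Explain why the identified lines are buggy.",
                      "Now provide a fixed version of the function."]:
        history.append({"role": "assistant", "content": chat(history)})
        history.append({"role": "user", "content": follow_up})
    return chat(history)  # the final response is the candidate fix
```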
### Correctness of Generated Code
Experiments are run in two phases: fix generation and fix validation. All validation experiments are run in a Docker container running Ubuntu 20.04.4 with Java version OpenJDK 1.8.0 for which we make the Docker image public. To generate candidate fixes we use the OpenAI API. The rest of this section discusses details of patch validation and evaluation metrics:
#### 3.4.1. Patch Validation
Each bug in the Defects4J dataset contains a PRE_FIX_REVISION and POST_FIX_REVISION version that represent the buggy and fixed versions of the code, respectively. The two versions reflect the actual state of the project when the bug was discovered/fixed. To determine whether a generated fix is correct, we follow these steps: 1) check out the PRE_FIX_REVISION version of the project, 2) replace the original buggy function with the generated function, and 3) run the trigger and regression test(s) to determine whether the code containing the generated fix passes the tests.
For each fix, the validation outcome is one of: 1) Plausible: all bug-reproducing (trigger) tests and regression tests pass; 2) Wrong: at least one of the trigger or regression tests fails; or 3) Uncompilable: the patched code does not compile.
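A sketch of this validation loop on top of the Defects4J command-line tools; `replace_method` is a hypothetical helper that splices the generated method into the source file, and the output parsing is a simplification:

```python
# Sketch of patch validation via the Defects4J CLI. Paths, identifiers, and
# the replace_method helper are illustrative, not the exact pipeline.
import pathlib
import subprocess

def validate(project: str, bug_id: int, candidate: str,
             src_file: str, work_dir: str) -> str:
    # 1) check out the PRE_FIX_REVISION (buggy) version of the project
    subprocess.run(["defects4j", "checkout", "-p", project,
                    "-v", f"{bug_id}b", "-w", work_dir], check=True)
    # 2) replace the original buggy function with the generated one
    path = pathlib.Path(work_dir) / src_file
    path.write_text(replace_method(path.read_text(), candidate))
    # 3) compile, then run trigger and regression tests
    if subprocess.run(["defects4j", "compile"], cwd=work_dir).returncode != 0:
        return "uncompilable"
    test = subprocess.run(["defects4j", "test"], cwd=work_dir,
                          capture_output=True, text=True)
    # defects4j reports the number of failing tests on stdout
    return "plausible" if "Failing tests: 0" in test.stdout else "wrong"
```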
#### 3.4.2. Evaluation metrics
To measure the quality of a solution to _nl2fix_, we use the pass@k metric4 introduced and widely used for evaluating LLMs on _nl2code_ problems (Beng et al., 2016; Dosov et al., 2017). Intuitively, given an unordered set of candidate fixes, pass@k provides the likelihood of choosing a correct fix when given \(k\) tries to sample from this set of candidate fixes. In the _nl2fix_ scenario, a fix is correct if it passes all the trigger tests and regression tests for the bug. Given \(n\) as the number of samples generated, \(k\) as the number of samples used to estimate pass@k, and \(c\) as the number of correct samples among the \(n\), we use the following formula for calculating pass@k, defined by (Dosov et al., 2017):

\[\text{pass@}k=\mathbb{E}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right]\]
Footnote 4: [https://github.com/j](https://github.com/j)
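The following is a direct, numerically stable implementation of this estimator, in the standard form used for evaluating code-generation models:

```python
# Unbiased pass@k estimator for n samples with c correct ones; a direct
# implementation of the formula above.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Estimate of P(at least one of k draws from n samples is correct)."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```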
## 4. Results
### RQ1: Can LLMs generate fixes from natural language intent for NL2Fix?
For each model, we consider two settings with the zero-shot prompt: temp. 0.2 and temp. 0.8. For each setting we generate 100 candidate fixes for each of the 283 bugs and evaluate the correctness of each candidate against the trigger and relevant tests. Next, we calculate the pass@k using the technique described in Section 3.4.2 and plot the results for temp. 0.8 in Figure 2(a) and temp. 0.2 in Figure 2(b).
At temp. 0.8, we see that the edit model, code-davinci-edit-001, is the best performing model overall, with a pass@100 of 54.12%. gpt-3.5-turbo achieves a slightly higher pass@1 (13.93% compared to 12.18% for the edit model); however, at pass@20 and above, its performance dips below that of the completion model, code-davinci-002. The completion model achieves the second highest pass@100 with 45.9%, which is 8.22% lower than the edit model.
At temp. 0.2, we see precision improvements for pass@1 from both code-davinci-edit-001 and gpt-3.5-turbo, with 17.55% and 16.0% respectively. However, we see consistently lower precision for all models from pass@5 through pass@100. Most notably, the precision of code-davinci-002 is much lower at temp. 0.2, with just 0.65% pass@1 and 3.55% pass@100. We tested the precision of the model in two separate runs, and noticed consistently poor performance at this setting.
To better understand the pass@k accuracy per model, we extracted high level statistics about the code generated by each model for both temp. configurations. Table 2 contains the average percentage of duplicate code candidates generated per bug as well as the average percentage of candidates that compile, pass on the regression tests, and pass on both the regression and trigger tests (plausible) across bugs.
From Table 2 we can see that the number of duplicates generated increases drastically when the temp. is decreased, which is expected as model behavior is more deterministic at lower temperatures. For example, code-davinci-edit-001, which is optimized for code edits, generates more than 90% duplicates at 0.2 but only 26% at 0.8.
Overall, the percentage of generated code that compiles varies significantly across models. We observe that only 4.66% of code generated by code-davinci-002 at temp. 0.2 compiles, which explains the extremely low accuracy seen in Figure 2(b). However, at temp. 0.8 the compilation rate increases significantly, ranging between 35.9% and 74.3%. Looking to related work, on a different subset of the Defects4J dataset, SOTA neural APR techniques generate patches with 15% to 28% compilation rates in the top 100 [(59)][(32)][(19)]. Although the APR setting differs from that of nl2fix (the trigger and relevant tests _are not_ hidden in the APR setting), we observe that using the entire method as input to the LLMs has an advantage in generating a higher proportion of compilable patches.
Both gpt-3.5-turbo and code-davinci-edit-001 achieve a higher precision for pass@100 in the temp. 0.8 setting compared to 0.2 (Figure 2). However, in Table 2 we see that the average percentage of plausible patches is higher in the 0.2 setting. While this appears counter-intuitive, the presence of a high number of duplicate patches modulates the calculated pass@k across bugs. In other words, a model may have high confidence for a small number of bugs and generate a high percentage of plausible patches for those bugs.
**Result 1.1**: Given only an NL description of a bug, all three LLMs are able to generate plausible fixes for a modest number of bugs in the dataset, with pass@1 between 6.29% and 17.55% and pass@100 between 42.19% and 54.12%. In the 0-shot setting, code-davinci-edit-001 achieves the overall highest accuracy compared to both code-davinci-002 and gpt-3.5-turbo.
We report the number of bugs with plausible fixes for each project in Table 3. At a high level, we observe that plausible fixes from each model are distributed across every project. In general, for every project, the three models generate plausible fixes for a similar percentage of bugs. There are a few notable exceptions: for example, the Collections project, which only contains 1 bug, was only correctly fixed by code-davinci-edit-001. gpt-3.5-turbo also has lower accuracy on the Mockito and JacksonDatabind projects, only generating plausible fixes for 1/21 and 7/67 bugs respectively, compared to 6/21 and at least 22/67 for the other two models. However, on the Codec project, gpt-3.5-turbo generates plausible fixes for two more bugs than the other two models. Overall, the total number of bugs patched by each model aligns with the pass@100 metrics seen in Figure 2(a).
While two models may patch the same number of bugs for a project, the exact bugs that are patched may differ. Figure 3 shows the overlap of the number of bugs each model is able to generate plausible patches for. All three models are able to generate plausible patches for the same 28% (82 of 283) of the bugs. However, when combined, the three models can generate plausible patches for 64% (183 of 283) of the dataset. Each model has a unique subset of bugs that the other two models are not able to generate plausible patches for: 22 unique bugs by code-davinci-edit-001, 18 by gpt-3.5-turbo, and 10 by
| **Temp.** | code-davinci-edit-001 (0.2) | code-davinci-edit-001 (0.8) | code-davinci-002 (0.2) | code-davinci-002 (0.8) | gpt-3.5-turbo (0.2) | gpt-3.5-turbo (0.8) |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: |
| **Duplicate** | 90.7% | 26% | 87.9% | 35.2% | 51.8% | 8.41% |
| **Compile** | 74.3% | 54.8% | 4.66% | 35.9% | 57.2% | 60.4% |
| **Regression** | 51.2% | 41.7% | 3.23% | 20.19% | 25.1% | 33.0% |
| **Plausible** | 20.9% | 12.0% | 0.65% | 6.0% | 16.0% | 13.4% |

Table 2. Average candidate patch statistics across bugs.
Figure 2. Pass@k results for 0-shot setting.
### RQ2: What kind of candidate fixes do LLMs generate?
To answer this RQ, we select the best-performing configuration from RQ1 (temp. 0.8) to better understand the nature of the code generated by the LLMs. We study the characteristics of the patches generated by different models. To understand the characteristics of different patches, we study the similarity of those patches _w.r.t._ the buggy code (present as part of the input) and the actual fixed code. We use CodeBLEU [(44)] as the representative similarity measurement. Given two code snippets \(c_{1}\) and \(c_{2}\), CodeBLEU is defined as \(\alpha\cdot B+\beta\cdot W+\gamma\cdot S+\delta\cdot D\), where \(B\), \(W\), \(S\), \(D\) are the BLEU score, keywords BLEU score, syntax match score, and dataflow match score, respectively, between \(c_{1}\) and \(c_{2}\), and \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) are weighting constants typically all set to 0.255. We choose CodeBLEU for this research question since it considers the syntactic and semantic match between code, in addition to the lexical match.
Footnote 5: Microsoft’s CodeBLEU implementation
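A sketch of this weighted combination, with the four component scores assumed to be computed elsewhere (e.g., by the referenced implementation):

```python
# Sketch of the CodeBLEU combination defined above, with the default
# uniform weights; the component scores are assumed to be precomputed.
def codebleu(B: float, W: float, S: float, D: float,
             alpha: float = 0.25, beta: float = 0.25,
             gamma: float = 0.25, delta: float = 0.25) -> float:
    """B: BLEU, W: keywords BLEU, S: syntax match, D: dataflow match."""
    return alpha * B + beta * W + gamma * S + delta * D
```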
Figure 4(a) shows the similarity of patches with the actual fixed code. Across all three models we have studied in this paper, the patches that passed both the regression tests and the bug-revealing trigger test (_i.e._, plausible patches) exhibit higher CodeBLEU with the actual fixed code compared to the patches that are not plausible. Such a result is expected since the plausible patches pass the whole test suite; a plausible patch should be, in theory, a semantic equivalent of the actual fix, exhibiting higher CodeBLEU. Interestingly, the plausible patches generated by the gpt-3.5-turbo model exhibit higher variability regarding CodeBLEU similarity with the actual patch. The Inter-Quartile Range (IQR) of CodeBLEU between plausible patches and actual fixed code is 0.18 and 0.19, respectively, for code-davinci-edit-001 and code-davinci-002. In contrast, for gpt-3.5-turbo, the IQR is 0.26. In addition, for the code-davinci-edit-001 and code-davinci-002 models, the kurtosis values are 2.62 and 1.12, respectively, signifying that the distributions are more centered, whereas the gpt-3.5-turbo model's kurtosis is 0.41, signifying diverse generation capability, as evident from Figure 4(a).
Further, we analyze how similar the generated patches are _w.r.t._ the buggy code. Figure 4(b) shows the distribution of CodeBLEU of different types of patches across different models. Across all three models, interestingly, we observe that plausible patches exhibit higher CodeBLEU with the buggy code than their non-plausible counterparts. We conjecture that when models make extensive modifications to the input buggy code, the resulting code is riddled with problems that cause it to fail compilation, regression tests, and the trigger test (see Table 2). We conjecture that LLMs would make a more significant impact on the _nl2fix_ problem if we had the option to control the deviation from the input buggy code. Nevertheless, the observation that plausible patches exhibit higher similarity with buggy code opens up a new possibility of ranking the LLM-generated code based on its similarity with the buggy code, which we investigate in detail in the following research question.
| **Model** | **# Fixed Bugs** | **# EM* Bugs** | **# Plausible Patches** | **# EM Patches** |
| :--- | ---: | ---: | ---: | ---: |
| code-davinci-edit-001 | 151 | 32 | 1724 | 146 |
| code-davinci-002 | 129 | 24 | 1182 | 120 |
| gpt-3.5-turbo | 118 | 11 | 2356 | 73 |

*EM: Exact Match ignoring whitespaces

Table 4. Statistics of bugs and patches where models generate patches which exactly match the ground truth fix
Figure 3. Overlap between bugs with plausible patches for each LLM in the 0-shot temp 0.8 setting.
code-davinci-002. While some bugs may be easier to fix for certain models, this does not appear to be a consistent artifact of the project that the bug originates from, as observed from Table 3.
| **Project** | davinci-002 | edit-001 | gpt-3.5 |
| :--- | ---: | ---: | ---: |
| Chart | 6/6 | 6/6 | 5/6 |
| Cli | 13/28 | 16/28 | 13/28 |
| Codec | 7/11 | 7/11 | 9/11 |
| Collections | 0/1 | 1/1 | 0/1 |
| Compress | 15/36 | 19/36 | 18/36 |
| Csv | 10/12 | 8/12 | 9/12 |
| JacksonCore | 8/13 | 9/13 | 5/13 |
| JacksonDatabind | 22/67 | 27/67 | 7/67 |
| JacksonXml | 1/5 | 2/5 | 2/5 |
| JxPath | 3/10 | 5/10 | 4/10 |
| Math | 38/73 | 45/73 | 45/73 |
| Mockito | 6/21 | 6/21 | 1/21 |
| **Total** | **129/283** | **151/283** | **118/283** |

Table 3. Bugs with plausible patches, by project (bugs with a plausible fix / total number of bugs for that project).
The LLMs have likely already observed the methods in our dataset during their respective pretraining. This raises a question: how much do the LLMs memorize from their pretraining [8, 34, 52]? Unfortunately, there is no good way to measure this without knowing what the LLMs' pretraining data is. Regardless, to qualitatively understand the generated patches, we investigate the CodeBLEU similarity of the patches with the buggy code (which was available in the input to the LLM) and the actual fixed code. Across all three models, generated patches exhibit slightly higher similarity with the buggy code than the actual fixed code (see Figure 4(c)). Such difference is statistically significant by a one-sided Wilcoxon signed-rank test, with p-values of \(1.6*10^{-9}\), \(9.9*10^{-5}\), \(2.7*10^{-14}\) for code-davinci-edit-001, code-davinci-002, and gpt-3.5-turbo, respectively.
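A sketch of this significance test with SciPy; the paired score arrays shown are illustrative placeholders, not the measured values:

```python
# One-sided Wilcoxon signed-rank test that per-patch CodeBLEU with the
# buggy code exceeds CodeBLEU with the fixed code.
from scipy.stats import wilcoxon

# paired per-patch scores, assumed precomputed (illustrative values)
codebleu_vs_buggy = [0.82, 0.77, 0.91, 0.68]
codebleu_vs_fixed = [0.74, 0.71, 0.88, 0.65]
stat, p_value = wilcoxon(codebleu_vs_buggy, codebleu_vs_fixed,
                         alternative="greater")
```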
Table 4 shows summary statistics of patches that exactly match the dataset's ground truth fix. The code-davinci-edit-001 model correctly generated at least one patch for 151 bugs, among which the patches for 32 bugs exactly match the ground truth. For the code-davinci-002 model, the corresponding number is 24 out of 129, and 11 out of 118 for gpt-3.5-turbo. Only 8.5% (146 out of 1724), 10.1% (120 out of 1182), and 3% (73 out of 2356) of the plausible patches are an exact match with the ground truth for code-davinci-edit-001, code-davinci-002, and gpt-3.5-turbo, respectively. These results show that most of the plausible patches are syntactically distinct from the ground truth fix.
In Figure 5, we show one of the plausible patches generated by gpt-3.5-turbo for the example in Figure 1. The buggy code misplaced the arguments of the containsMatch function in line 16 (Figure 5(a)), while the developer-written patch fixed the error by putting the arguments in the correct positions (line 16 in Figure 5(b)). Figure 5(d) shows a fixed code generated by the gpt-3.5-turbo model, which is a semantic equivalent of the developer-written code. In fact, the LLM-generated fix actually inlines the implementation of the containsMatch function (shown in Figure 5(c)) into the context (lines 23-28 in Figure 5(d)). In addition, the LLM-generated patch refactors the code by extracting two variables corresponding to
Figure 4. Code similarity analysis of generated patches by different models. We analyze the CodeBLEU similarity here.
Figure 5. An example showing the contrast between the actual fixed code and a model-generated plausible patch for bug id 20 of the JxPath project. Even though the generated patch does not exactly match the ground truth fix, it passed all the regression tests and the trigger test, making it a semantic equivalent of the actual fix.
two boolean expressions used in the original code, making the resulting code more readable. We observe that even though LLMs have arguably already seen all of open source code, they explore new variations of code when applied to the NL2Fix problem.
project6. When asked to identify the lines of code where the bug exists, gpt-3.5-turbo returns the correct defect region in response 1. Then, prompt 2 appends the original prompt 1 and response 1 as part of the context, with additional instructions to explain why the identified lines of code contain a bug. In response 2, gpt-3.5-turbo explains the issue and extracts, from the issue description, a sample input that the code would fail on and the corresponding error. In the final prompt, we append all inputs and responses to request a final fixed version of the buggy function. This is an example for which gpt-3.5-turbo is able to generate a plausible patch in the reasoning extraction setting, but not in the 0-shot setting.
Footnote 6: [https://github.com/FasterXML/jackson-databind/issues/2265](https://github.com/FasterXML/jackson-databind/issues/2265)
However, in the 1-shot setting gpt-3.5-turbo is able to generate correct patches for 56 new bugs, and loses the ability to patch 14 bugs from the 0-shot setting. Using reasoning extraction, gpt-3.5-turbo can generate correct patches for 34 new bugs, but loses the ability to patch 18 bugs from the 0-shot setting. Compared to all approaches pooled together, gpt-3.5-turbo in the reasoning extraction setting can only uniquely patch 4 bugs. Looking at the information contained in the issue descriptions for each of these examples, we observe that gpt-3.5-turbo is able to correctly localize buggy lines and reason about why they are buggy, but only with help from the context in the issue description. See Figure 6 for an example. While these prompting techniques do boost aggregate performance metrics, they may also degrade performance on a subset of the bugs in the dataset.
Result 3: Issue descriptions provide helpful context for solving the NL2Fix problem. Prompting techniques that provide examples, _i.e._, few-shot prompting, and break down the task, _i.e._, reasoning extraction, significantly improve the accuracy of gpt-3.5-turbo on aggregate metrics like pass@k; however, performance may degrade on certain subsets of the dataset, and improvements over standard prompting are not guaranteed.
### RQ4: Can LLMs be used to rank fixes for NL2Fix?
Recall, given an unordered set of \(n\) candidate solutions with \(c\) correct solutions for a given problem (a bug in our case), the pass@k metric refers to the likelihood of picking at least one correct solution within \(k\) tries. Such a statistic is useful for evaluating language models, but does not readily provide a useful real-world recommender system that proposes a small number of candidate fixes (up to, say, \(k=5\)) deterministically to a user. For instance, for \(n=100\) and \(k=5\), there are \(\binom{n}{k}>75\) million ways to choose \(5\) solutions from \(100\) samples. For a practical tool for _nl2fix_, we would like to develop (\(i\)) a deterministic way to _rank_ the suggestions and present the top \(k\) ranked suggestions to a user, and (\(ii\)) an approach that retains a high accuracy close to the average pass@k metric.
In this section, we leverage LLMs to propose a simple and generic _ranking_ strategy that helps realize the two objectives (\(i\)) and (\(ii\)) above. In particular, inspired by our findings from RQ2, we explore whether similarity between the embeddings of the input buggy function and the generated patches can identify plausible patches. We generate embeddings for all buggy functions and corresponding patches using the text-embedding-ada-002 embedding model from OpenAI (see Section 3 for details). We compute the cosine similarity between the embeddings of each buggy function and every corresponding candidate patch. We use this score to prune away patches with similarity scores lower than the median (0.95); we fix this number across models for consistency. Then, to avoid ranking patches with extremely high similarity scores (_e.g._, 1.0) first, we rank the patches in order of lowest cosine similarity (starting at 0.95) to highest. Based on observations from RQ2, patches with lower scores are more likely to belong to the distribution of wrong or uncompilable patches. Patches should be sufficiently close to the buggy input program, but not so close that there is no significant change.
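A sketch of this pruning-and-ranking heuristic; the 0.95 cutoff is the reported median similarity, and the function signature is illustrative:

```python
# Drop candidates below the similarity cutoff, then rank the rest from
# lowest to highest cosine similarity with the buggy code.
import numpy as np

def rank_patches(buggy_vec: np.ndarray, patch_vecs: list,
                 threshold: float = 0.95) -> list:
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted((cos(buggy_vec, v), i) for i, v in enumerate(patch_vecs))
    return [i for score, i in scored if score >= threshold]  # ascending
```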
For each model, we exercise the ranking scheme on two sets of patches: (\(a\)) all generated patches, and (\(b\)) the subset of generated patches that pass the compiler.
To evaluate the ranking strategy, we choose the LLM configurations for each model with the highest discrepancy between pass@1 and pass@100 from RQ3 (see Table 5), which also happens to be the best performing configuration for each model.
Table 6 shows the ranked pass@1 and ranked pass@5 accuracy (denoted by r.P@k) for the two sets of patches: 1) before (denoted
Figure 6. A bug correctly patched using Reasoning Extraction.
as All) and 2) after (denoted as Pruned) pruning the compiler errors. For reference, we also report the P@k metrics in parentheses. We observe that pruning compiler errors does improve the P@k metric, especially the P@5 for the davinci-002 model, by over 14% (33.97, up from 19.8). Second, while providing determinism, r.P@1 improves over P@1 for most configurations, except a slight dip for gpt-3.5-turbo for Pruned. For davinci-002, ranking improves the r.P@1 by over 5 percentage points. Finally, r.P@5 remains close to P@5 for most configurations, except for gpt-3.5, where r.P@5 trails P@5 by 4.68%.
Result 4: By pruning compilation failures, LLMs can achieve as high as 39.66% pass@5. Using cosine similarity between embeddings of candidate patches and buggy input code, we can apply a deterministic ranking strategy that retains this high accuracy, with 34.2% - 35.68% r.P@5.
## 5. Related Work
Our work is most closely related to two broad lines of work (a) automated code editing, and (b) automatic program repair (APR).
**Automated Code Editing.** Earlier works in learning code editing include learning to edit code for refactoring (Srivastava et al., 2015; Wang et al., 2016), learning semantic code changes for bugs found by code analyzers (Wang et al., 2016). Recent approaches leverage Deep Learning techniques to learn frequent code edit patterns from code changes mined from GitHub (Beng et al., 2016; Wang et al., 2016; Wang et al., 2016).
In addition to learning from historic changes, some recent approaches (Wang et al., 2016; Wang et al., 2016) propose to guide code editing with auxiliary inputs such as commit messages. We argue that commit messages are _post-facto_ descriptions of code changes that do not capture the intent of the change, but rather summarize it. In contrast, the issue report we consider in the _nl2fix_ problem is an _ante-facto_ description of the changes, which arguably captures the intent more closely. Tufano _et al._ (Tufano et al., 2016) proposed automating code-review activity by editing the code based on reviewer comments. While closest to our work, the approaches differ in our usage of tests and semantic metrics such as pass@k to validate the functional correctness and defect-freedom of proposed patches.
**Automatic Program Repair.**
Approaches for APR broadly fall under search-based techniques and machine learning-based methods. For a comprehensive overview of automated program repair techniques, we refer readers to recent works that survey the area (Srivastava et al., 2015; Wang et al., 2016; Wang et al., 2016; Wang et al., 2016).
Search-based techniques use a _generate-and-validate_ approach, where variations of the original code are generated and then evaluated using the failing tests (Wang et al., 2016; Wang et al., 2016; Wang et al., 2016; Wang et al., 2016; Wang et al., 2016). These approaches transform buggy code using different transformations, including random transformations (Wang et al., 2016; Wang et al., 2016; Wang et al., 2016), manually designed transformations (Wang et al., 2016), and transformations learned from previous corpora of code (Wang et al., 2016; Wang et al., 2016; Wang et al., 2016; Wang et al., 2016).
Recently, researchers have also leveraged LLMs for APR. AlphaRepair (Zhao et al., 2016) uses a _cloze_-style APR where an LLM directly fills in the correct code given its surrounding context from the buggy program. Xia et al. (Xia et al., 2016) use ChatGPT to set up a conversational APR problem where feedback from failed patches is used to augment subsequent prompts to LLMs. There are also approaches leveraging LLMs for generating trigger tests for the APR problem from issue descriptions (Wang et al., 2016), as well as for aiding root-causing for APR (Wang et al., 2016). Finally, Fan et al. (Fan et al., 2016) leverage APR methods (including those based on LLMs) to repair code generated from natural language intent.
Although closely related, our approach subtly differs from APR in the use of _hidden tests_ that are only used for evaluation and never specified as inputs to the repair algorithm. This makes it a more applicable problem for real-world bug fixes, where failing tests may not be available (or may be prohibitively expensive to leverage) at inference time to help root-cause buggy lines and validate the fixes.
## 6. Limitations and Threats
_Stability of models' output._ As we have used the OpenAI web API to access the different models, we cannot control the stochasticity of the models' output. The models themselves are also often updated. This poses a threat to the replicability of our study. To mitigate this threat, we make available all the outputs generated by the models.
_Assumption on patch correctness._ In this paper, we leverage tests to determine whether a fix is plausible. However, such a plausible patch may not fix a bug completely. Verifying this would require manual analysis or checking semantic equivalence with the user-provided fix. However, the former is infeasible for a large-scale benchmark such as _Defects4J-Nl2fix_, and semantic equivalence-checking techniques do not scale to handle most real programs. Given that test suites are never exhaustive, we can appeal to recent research that investigates patch correctness (Wang et al., 2016; Wang et al., 2016; Wang et al., 2016) to improve confidence in the patches.
_Generalization of findings._ Given the relatively small number of bugs (283) considered in _Defects4J-Nl2fix_ benchmark, our findings may not generalize to arbitrary bugs across different languages and software repositories. To mitigate this threat we use real-world bugs from open source projects.
## 7. Conclusion
In this paper, we motivate the _nl2fix_ problem, define the first benchmark _Defects4J-Nl2fix_, and perform a detailed empirical evaluation of various SOTA LLMs on the problem.
We believe that the task of _nl2fix_, along with challenging benchmarks such as _Defects4J-Nl2fix_, will serve as an important real-world benchmark for evaluating future generations of LLMs (such as GPT-4), while leveraging new emergent behaviors of such LLMs (such as their ability to predict correctness) to improve performance on such benchmarks. In future work, we plan to combine issue-driven trigger test generation (Wang et al., 2016) and user-in-the-loop approaches (Wang et al., 2016) to improve trust in generated fixes, as well as extend our framework to the more general problem of _nl2edit_ to
| Approach | Metric | gpt-3.5 (1-shot) | davinci-edit (0-shot) | davinci-002 (0-shot) |
| :--- | :--- | ---: | ---: | ---: |
| All | r.P@1 (P@1) | 16.96 (16.22) | 13.78 (12.18) | 11.30 (6.29) |
| All | r.P@5 (P@5) | 31.8 (31.93) | 30.74 (28.86) | 17.66 (19.8) |
| Pruned | r.P@1 (P@1) | 21.20 (22.22) | 19.78 (18.99) | 17.66 (16.89) |
| Pruned | r.P@5 (P@5) | 34.98 (39.66) | 35.68 (36.73) | 34.2 (33.97) |

Table 6. Accuracy of ranking with and without compiler pruning.
cover other forms of program evolution, including feature additions, refactorings, and optimizations.
|
2310.01146 | NewsRecLib: A PyTorch-Lightning Library for Neural News Recommendation | NewsRecLib is an open-source library based on Pytorch-Lightning and Hydra developed for training and evaluating neural news recommendation models. The foremost goals of NewsRecLib are to promote reproducible research and rigorous experimental evaluation by (i) providing a unified and highly configurable framework for exhaustive experimental studies and (ii) enabling a thorough analysis of the performance contribution of different model architecture components and training regimes. NewsRecLib is highly modular, allows specifying experiments in a single configuration file, and includes extensive logging facilities. Moreover, NewsRecLib provides out-of-the-box implementations of several prominent neural models, training methods, standard evaluation benchmarks, and evaluation metrics for news recommendation. | Andreea Iana, Goran Glavaš, Heiko Paulheim | 2023-10-02T12:33:01Z | http://arxiv.org/abs/2310.01146v1 |

# NewsRecLib: A PyTorch-Lightning Library for Neural News Recommendation
###### Abstract
NewsRecLib1 is an open-source library based on Pytorch-Lightning and Hydra developed for training and evaluating neural news recommendation models. The foremost goals of NewsRecLib are to promote _reproducible research_ and _rigorous experimental evaluation_ by (i) providing a unified and highly configurable framework for exhaustive experimental studies and (ii) enabling a thorough analysis of the performance contribution of different model architecture components and training regimes. NewsRecLib is highly modular, allows specifying experiments in a single configuration file, and includes extensive logging facilities. Moreover, NewsRecLib provides out-of-the-box implementations of several prominent neural models, training methods, standard evaluation benchmarks, and evaluation metrics for news recommendation.
Footnote 1: [https://github.com/andreeaiana/newsreclib](https://github.com/andreeaiana/newsreclib)
## 1 Introduction
Personalized news recommendation has become ubiquitous for customizing suggestions to users' interests (Li and Wang, 2019; Wu et al., 2023). In recent years, there has been a surge of effort towards neural content-based recommenders. With increasingly complex neural architectures able to ever more precisely capture users' content-based preferences, neural recommenders quickly replaced traditional recommendation models as the go-to paradigm for news recommendation.
Despite the abundance of model designs, research on neural news recommenders (NNRs) suffers from two major shortcomings: (i) a surprising amount of non-reproducible research (Ferrari Dacrema et al., 2021) and (ii) unfair model comparisons (Ferrari Dacrema et al., 2019; Sun et al., 2020). The former is, on the one hand, due to many NNR implementations not being publicly released (Sertkan and Neidhardt, 2022). Existing open source repositories, on the other hand, expose a multitude of programming languages, libraries, and implementation differences, hindering reproducibility and extensibility (Said and Bellogin, 2014). Moreover, a lack of transparency in terms of evaluation datasets, experimental setup and hyperparameter settings, as well as the adoption of ad-hoc evaluation protocols, further severely impede direct model comparisons. Many personalized news recommenders have been evaluated on proprietary datasets (e.g., Bing News (Wang et al., 2018), MSN News (Wu et al., 2019, 2019), News App (Qi et al., 2022)). Even the models trained on the more recently introduced open benchmarks (e.g., Adressa (Gulla et al., 2017), MIND (Wu et al., 2020)) cannot be directly compared due to the lack of standard dataset splits and evaluation protocols (Wu et al., 2021, 2021; Gong and Zhu, 2022; Wang et al., 2022). Even more concerning, crucial details regarding the setup of the experiments are regularly omitted from the publications or hard-coded without explanation.
It is thus particularly difficult to evaluate the impact of specific components in NNR architecture and training (e.g., news encoder, user modeling, training objectives) on the overall performance of the model (Iana et al., 2023). Many models simultaneously change multiple components in both the news and the user encoder, while carrying out only partial ablation studies or evaluating against suboptimal baselines (Rendle et al., 2019).
In this work, we introduce NewsRecLib, an open source library for NNRs, to remedy these critical limitations.2 NewsRecLib aims to facilitate reproducible research and comprehensive experimental studies, using an end-to-end pipeline powered by a single configuration file that specifies a complete experiment - from dataset selection and pre-processing over model architecture and training to evaluation protocol and metrics. NewsRecLib is
built based on the following guiding principles:
**Modularity and extensibility.** With PyTorch Lightning (Falcon and The PyTorch Lightning team, 2019) as its backbone, NewsRecLib is designed in a modular fashion, with core individual components being decoupled from one another. This enables mixing and matching different modules, as well as seamlessly integrating new ones.
**Easy configurability and reproducibility.** NewsRecLib is powered by Hydra (Yadan, 2019), in which each experiment is defined through a single configuration file composed from the configurations of specific pipeline components. The configuration of every experiment is automatically stored at the start of the run and as such trivially enables reproducibility.
**Logging and profiling.** The library supports multiple standard tools (e.g., WandB (Biewald, 2020), Tensorboard (Abadi et al., 2016)) for extensive logging, monitoring, and profiling of experiments with neural models - in terms of losses, evaluation metrics, runtime, memory usage, and model size.
Overall, NewsRecLib is designed to support the development and benchmarking of NNRs as well as the specific analysis of contributions of common components of the neural recommendation pipelines. In this paper, we discuss the building blocks of NewsRecLib and provide an overview of the readily available models. For a detailed documentation on the usage of the library, we refer to its project page.
## 2 NewsRecLib - the Library
Figure 1 depicts the structure of NewsRecLib, comprising different functional modules: from data modules for downloading and processing datasets to recommendation modules for training and evaluating a particular NNR. The overall pipeline of an experiment is built automatically from the high-level experimental flow provided by the user in the form of a single Hydra configuration file.
### Modularization and Extensibility
NewsRecLib is highly modularized: it decouples core components to the largest extent possible. This allows for combinations of different news encoders (e.g., over different input features - text, aspects, entities) with different user modeling techniques, click fusion strategies, and training objectives. NewsRecLib is easily extensible with new features: the user only needs to write a new sub-component class (e.g., category encoder), or, in the case of new datasets or recommenders, to define a new PyTorch Lightning data module or (model) module, respectively.
Concretely, we decouple the essential building blocks of an NNR, namely the _news encoder_ (NE), the _user encoder_ (UE), and the _click predictor_. The NE is further decomposed into a configurable set of feature encoders (i.e., components that embed different aspects of the news, e.g., title, topical category, or named entities). Different model components can be interchanged with corresponding sub-modules of other recommenders, ensuring freedom in choosing each building block of a model independently of the other components (i.e., by mixing the NE of "NNR 1" with the UE of "NNR 2"), in contrast to practices in existing NNR libraries, in which sub-components are tied to the concrete NNR architectures that introduced them. Because of this, NewsRecLib allows for clear-cut and comprehensive analyses of the impact of NNR components on overall performance.3 NewsRecLib currently implements the feature encoders used in the pre-implemented models (see Appendix SSB); users can, however, easily incorporate new ones (e.g., an image encoder) by extending the respective class.
Footnote 3: E.g., we leveraged an earlier version of NewsRecLib to analyze the impact of click behavior fusion strategies and training objectives on NNRs’ performance (Iana et al., 2023).
### Configurability and Reproducibility
Reproducibility strongly relies on the transparency of each step and component in the pipeline, as well as the availability of metadata regarding the factors that influence the model (e.g., hyperparameter values, training objective) and the environment in which it is trained and evaluated (e.g., library versions). Because of this, NewsRecLib leverages the Hydra4 framework (Yadan, 2019) to decouple the experiment configuration (i.e., a pipeline of modules) from the concrete implementations (i.e., source code) of the modules.
Footnote 4: [https://hydra.cc/](https://hydra.cc/)
Each concrete module setting is specified and retrieved automatically from a dedicated configuration file which can be accessed by all the pipeline components. A variety of callbacks supported by PyTorch Lightning (e.g., model checkpointing, early stopping, debugging) can be defined, and modified via a corresponding configuration. A single configuration file guides each experiment:
the default configurations of the used modules and callbacks are hierarchically inherited and can be overridden. Experiment configurations can also be overwritten directly from the command line, removing the need to store many similar configuration files: this facilitates fast experimentation and minimizes boilerplate code. Experiments can be executed on CPU, GPU, and in a distributed fashion by specifying the type of accelerator supported in PyTorch Lightning. The integration with extensive logging capabilities (see SS2.3) ensures that any modifications are persistently stored in the experiment directory, together with other log files and model checkpoints.
Fig. 2 shows a minimal configuration example for an experiment that trains an instance of the NRMS (Wu et al., 2019) model. The main configuration file experiment.yaml guides the pipeline. It inherits the data and model-specific configurations from mind.yaml and nrms.yaml, which specify the default configurations of the data module and NNR model, respectively. experiment.yaml further uses the default configurations for the WandB logger, the trainer, and various callbacks. The example also illustrates the interplay between modularization and configurability: we replace the original NE of the NRMS model with a pretrained language model (in this case roberta-base).
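For illustration, a minimal Hydra entry point in this style could look as follows; the config path, config name, and instantiation call are illustrative, not NewsRecLib's actual module layout:

```python
# Minimal sketch of a Hydra-powered entry point; names are illustrative.
import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(version_base="1.3", config_path="configs", config_name="experiment")
def main(cfg: DictConfig) -> None:
    print(OmegaConf.to_yaml(cfg))  # the fully composed, persisted configuration
    # model = hydra.utils.instantiate(cfg.model)  # build modules from config

if __name__ == "__main__":
    main()

# Command-line overrides, e.g.:
#   python train.py model.use_plm=True trainer.max_epochs=10
```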
### Performance Evaluation and Profiling
With Hydra's pluggable architecture as its backbone, every part of the recommendation pipeline is transparent to the user. NewsRecLib records comprehensive information during training, including number of trainable model parameters and total model size, runtimes, training and validation losses. Moreover, it stores important metadata regarding hyperparameter settings, operating system, PyTorch version, environment details, and dependencies between libraries. Any profiler supported by PyTorch can be incorporated by a simple modification of the corresponding configuration file.
Figure 1: Illustration of the NewsRecLib framework.
Figure 2: A minimal configuration example for training an NRMS (Wu et al., 2019) model. All settings defined in the main and the imported configuration files are merged and persisted into a single configuration object.
NewsRecLib supports widely used loggers like WandB5 (Biewald, 2020) and Tensorboard6 (Abadi et al., 2016). Moreover, users can export evaluation metrics for further analysis. Appendix A shows an example of the logging output. We rely on TorchMetrics7 (Detlefsen et al., 2022) for model evaluation. Users can track numerous metrics, ranging from accuracy-based to beyond-accuracy (e.g., diversity) performance. New metrics can be easily added to the pipeline, either by defining the necessary callbacks in the case of metrics already available in TorchMetrics, or by implementing a custom metric as a subclass of the base Metric class in TorchMetrics.
Footnote 5: [https://wandb.ai/site](https://wandb.ai/site)
Footnote 6: [https://www.tensorflow.org/tensorboard](https://www.tensorflow.org/tensorboard)
Footnote 7: [https://torchmetrics.readthedocs.io/en/stable/](https://torchmetrics.readthedocs.io/en/stable/)
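For illustration, a minimal sketch of such a custom TorchMetrics subclass; the metric itself (hit rate within the top \(k\)) and its names are illustrative:

```python
# Sketch of a custom metric built on the TorchMetrics base Metric class.
import torch
from torchmetrics import Metric

class HitRateAtK(Metric):
    def __init__(self, k: int = 10):
        super().__init__()
        self.k = k
        self.add_state("hits", default=torch.tensor(0.0), dist_reduce_fx="sum")
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, scores: torch.Tensor, labels: torch.Tensor) -> None:
        # scores, labels: (num_candidates,) for a single impression
        top_k = torch.topk(scores, min(self.k, scores.numel())).indices
        self.hits += labels[top_k].max()  # 1 if any clicked item is in top k
        self.total += 1.0

    def compute(self) -> torch.Tensor:
        return self.hits / self.total
```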
### Hyperparameter Optimization
NNR performance heavily depends on model hyperparameters, making hyperparameter optimization a crucial ingredient in the empirical evaluation of NNRs. NewsRecLib supports hyperparameter tuning using the Optuna framework (Akiba et al., 2019), which offers a wide range of samplers, such as random search, grid search, and Bayesian optimization (Bergstra et al., 2011; Ozaki et al., 2020).8 In conjunction with the modularity of NewsRecLib, this allows nearly every component of a news recommender to be treated as a hyperparameter, so that users can optimize the choice of encoders or scoring functions. Figure 3 shows a basic multi-objective hyperparameter search over the number of negative samples, the model's learning rate, and the temperature for the supervised contrastive loss.
Footnote 8: [https://optuna.readthedocs.io/en/stable/index.html](https://optuna.readthedocs.io/en/stable/index.html)
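Expressed directly in Optuna, such a search could look as follows; this is a single-objective sketch, and `train_and_validate` is a hypothetical training routine:

```python
# Sketch of the kind of search Optuna performs; the objective (validation
# AUC as a function of three hyperparameters) is illustrative.
import optuna

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-3, log=True)
    neg = trial.suggest_int("neg_sampling_ratio", 1, 20)
    temp = trial.suggest_float("scl_temperature", 0.05, 1.0)
    return train_and_validate(lr, neg, temp)  # hypothetical training call

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=10)
```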
### Available Modules
NewsRecLib currently encompasses two popular benchmark datasets, 13 news recommendation models, and various evaluation metrics.
**Datasets.** We provide out-of-the-box utilities for two prominent monolingual news recommendation benchmarks: MIND (Wu et al., 2020) (with English news) and Adressa (Gulla et al., 2017) (with Norwegian news). For both datasets, NewsRecLib supports automatic downloading (when available)9, data parsing, and pre-processing functionalities to create a unified PyTorch Lightning datamodule. For both datasets, we include their small and large versions: MINDsmall and MINDlarge, and Adressa 1-week and 10-weeks, respectively.
Footnote 9: Note that for the Adressa dataset, only a limited version of the dataset is available for download. For the full version containing additional features, users should contact the authors, as detailed in [https://reclab.idi.ntnu.no/dataset/](https://reclab.idi.ntnu.no/dataset/)
Since Wu et al. (2020) do not publicly release test labels for MIND, we use the provided validation portion for testing, and split the respective training set into temporally disjoint training and validation portions. We follow established practices on splitting the Adressa dataset (Hu et al., 2020; Xu et al., 2023) into train, validation, and test sets. In contrast to MIND, which consists of impression logs (lists of clicked and non-clicked news by the user), the Adressa dataset contains only positive samples (Gulla et al., 2017). Following Yi et al. (2021), we build impressions by randomly sampling 20 news as negatives for each clicked article.
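A sketch of this impression-building step; the data layout is illustrative:

```python
# For each clicked article, pair it with 20 randomly sampled non-clicked
# news as negatives to form an impression.
import random

def build_impression(clicked: str, all_news: list, user_clicks: set,
                     num_neg: int = 20) -> dict:
    candidates = [n for n in all_news if n not in user_clicks]
    return {"clicked": clicked, "negatives": random.sample(candidates, num_neg)}
```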
We additionally automatically annotate datasets with sentiment labels obtained by VADER Hutto and Gilbert (2014), a monolingual (English) rule-based algorithm (only for MIND), and a multilingual sentiment classification model of Barbieri et al. (2021), fine-tuned from XLM-RoBERTa Base Conneau et al. (2020).
**Recommendation Models.** NewsRecLib provides implementations of 10 general-purpose NNRs and 3 fairness-aware recommenders. To support analysis of model components, for the models that did not use PLMs in their NEs (but rather contextualized embeddings with convolutional or attention layers), we implement an additional variant with a PLM-based NE (as proposed in Wu et al. (2021)). Furthermore, models can be trained either with _early fusion_, i.e., learning a parameterized user encoder to aggregate embeddings of news, or the
Figure 3: Example of a hyperparameter optimization process. The configuration first runs 10 trials of a search using Bayesian optimization. The hyperparameter search space is defined by indicating the interval, range or choice of values for each desired parameter.
simpler _late fusion_ strategy proposed in Iana et al. (2023b), which replaces explicit user encoders with parameter-efficient dot products between candidate and clicked news embeddings. Appendix B details all available configurations for each recommendation model.
**Training Objectives.** Most NNR models are trained with point-wise classification objectives [22, 23] with negative sampling [23, 24]. In Iana et al. (2023b), we have shown that contrastive learning constitutes a viable alternative. At the same time, combining point-wise classification with contrastive objectives has been successfully employed in related tasks [14]. We thus implement three training objectives: cross-entropy loss, supervised contrastive loss [15], and a dual objective that is a weighted average of the two.
**Evaluation Metrics.** NewsRecLib integrates standard accuracy-based metrics, such as AUC, MRR, and nDCG\(@k\). Additionally, we implement the aspect-based diversity and aspect-based personalization metrics defined in Iana et al. (2023b). The availability of these beyond-accuracy metrics enables multi-faceted evaluation of NNRs.
## 3 Comparison to Related Frameworks
In the past decade, numerous frameworks for the development and comprehensive evaluation of recommender systems have been proposed to address the problem of reproducibility in the field [16, 17, 18, 19, 20, 21, 22, 23]. News recommendation poses different challenges for practitioners in comparison to recommendation in domains such as movies, music, or e-commerce [15, 16]. However, few of the existing and widely used libraries offer support for news recommenders, and especially for the modern neural news recommendation models.
Microsoft Recommenders [1, 23] and RecBole [24, 25] provide implementations of five and three NNRs, respectively, as well as utilities for the MIND dataset. Nonetheless, other datasets, more recent approaches, and in particular fairness-aware models and beyond-accuracy metrics are not supported. StreamingRec [21] is a framework for evaluating streaming-based news recommenders, covering a wide range of algorithms, from trivial baselines (e.g., recently published, most popular), item-to-item collaborative filtering, and session-based nearest neighbor techniques, to association rule methods and content-based approaches. However, it does not support any of the recent neural models. In these libraries, the sub-modules of a specific recommender are not decoupled from the overall model, which impedes experimentation with and analysis of different model components and training strategies.
In contrast to these frameworks, NewsRecLib focuses solely on the state-of-the-art neural news recommendation models, providing utilities for the most used benchmark datasets, architectures, training techniques, and evaluation metrics tailored to news recommendation. NewsRecLib unifies disparate implementations of recent neural news recommenders in a single open-source library that is built on top of mature frameworks for deep learning (PyTorch Lightning), evaluation (TorchMetrics), and configuration (Hydra).
## 4 Experiments
We conduct experiments with the pre-implemented recommendation models from NewsRecLib to investigate their performance when (1) trained with the original architecture (e.g., NE based on word embeddings and contextualization layer) and (2) trained with a PLM-based NE [23].
### Datasets and Experimental Setup
We carry out the evaluation on the MINDsmall [23] (denoted MIND) and Adressa-1 week (denoted Adressa) [16] benchmark datasets. We evaluate two versions of the models, namely (1) with the original NE and (2) the NE modified to use a PLM [23] (if not used in the original NE). We use RoBERTa Base [15] and NB-BERT Base [14, 17] for experiments on MIND and Adressa, respectively. In both cases, we fine-tune only the last four layers of the PLM in the interest of computational efficiency. We use
100-dimensional TransE embeddings (Bordes et al., 2013) pretrained on Wikidata as input to the entity encoder for models using named entities as input features to their NEs, a maximum history length of 50, and set all other model-specific hyperparameters to the optimal values reported in the respective papers. We train all models with mixed precision, and optimize with the Adam algorithm (Kingma and Ba, 2014) with a learning rate of 1e-4. We train models with a PLM-empowered NE for 10 epochs, and the model variants without PLMs for 20 epochs. Since Adressa contains no abstracts or disambiguated named entities, we use only the title for the models benchmarked on that dataset.
### Results
Table 1 summarizes the results on content-based recommendation performance (w.r.t. AUC, MRR, nDCG@5, nDCG@10) and aspect diversification for topical categories (\(D_{ctg}\)) and sentiment (\(D_{snt}\)), as per Iana et al. (2023b). We find that PLM-based NEs do not necessarily lead to performance improvements. We hypothesize that this is due to the dataset size: a PLM-based NE requires training a larger number of parameters than one which contextualizes pretrained word embeddings with a CNN or attention network. Note that rather small improvements of PLM-empowered NEs over original NEs have been shown only for larger-scale datasets (Wu et al., 2021b). These findings indicate that more research is needed to understand in which settings older NEs can still benefit NNRs. MANNeR, with its late click behavior fusion approach, outperforms all other models on MIND, but it underperforms on Adressa. Note that the contrastive learning training approach adopted by MANNeR (Iana et al., 2023b) benefits from larger training datasets, and MINDsmall has roughly five times as many news items as Adressa 1-week. Expectedly, w.r.t. aspect-based diversity, NNRs with diversification objectives (e.g., for sentiment) outperform models trained only to maximize content-based accuracy.
## 5 Conclusion
In this work, we introduced NewsRecLib, a highly configurable, modular and easily extensible framework for neural news recommendation. Our library is specifically designed to foster reproducible research in recommender systems and rigorous evaluation of models: users only need to create a single configuration file for an experiment. We briefly described the underlying principles of NewsRecLib and the structure of its building blocks. The framework currently provides two standard benchmark datasets, loading and pre-processing functions, 13 neural recommendation models, different training objectives and hyperparameter optimization strategies, numerous evaluation metrics, extensive logging capabilities, and GPU support. We believe that NewsRecLib is a useful tool for the community that will (i) catalyze reproducible NNR research, (ii) foster fairer comparisons between the models, and (iii) facilitate identification of NNR components that drive their performance.
\begin{table}
\end{table}
Table 1: Content-based recommendation performance (AUC, MRR, nDCG@5, nDCG@10) and aspect-based diversity for topical categories (D\({}_{ctg}\)@10) and sentiment (D\({}_{snt}\)@10) of the benchmarked general-purpose (GeneralRec) and fairness-aware (FairRec) models on the MIND and Adressa datasets.
### Limitations
While we have striven to build a comprehensive library for the design and fair evaluation of neural news recommendation models, several additional factors must be taken into account. Firstly, even though we aim to replicate the original implementations of the models to the highest degree possible, discrepancies in our code and results can arise from the usage of different frameworks, as well as the scarce availability of implementation details in the source code or publications of some of the recommenders. Secondly, our library is heavily dependent on the changes and maintenance of the frameworks on which it is built, namely PyTorch Lightning (and by extension, PyTorch), Hydra, TorchMetrics, and Optuna. As such, new plugins for logging (e.g., Neptune (Neptune team, 2019), Comet (Rei et al., 2020), MLFlow (Zaharia et al., 2018)) or hyperparameter optimization (e.g., Ax) need to be integrated with PyTorch Lightning and Hydra.
Footnote 13: [https://ax.dev/](https://ax.dev/)
Moreover, we rely on open benchmark news datasets for training and evaluating the recommenders. Consequently, any biases that might be contained in the news and user data could be propagated through the recommendation pipeline. Additionally, the usage of these datasets is intertwined with their public availability. Any changes to the datasets or access restrictions are likely to impact the way pre-implemented models in NewsRecLib can be trained and benchmarked.
Lastly, neural news recommendation is a computationally expensive endeavor which requires availability of large compute resources. Although NewsRecLib technically supports execution of experiments on CPU, this would be not only highly inefficient and time-consuming, but also infeasible for large-scale datasets with hundreds of thousands of users and news. Consequently, users should ideally have access to GPUs to efficiently use our library.
## Ethics Statement
Users of our library should differentiate the recommendation models available in NewsRecLib from the originals. Consequently, they should explicitly credit and cite both NewsRecLib, as well as the original implementations, as specified on our GitHub page.
## Acknowledgements
The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant INST 35/1597-1 FUGG.
|
2308.14067 | An optically thin view of the solar chromosphere from observations of
the O I 1355Å spectral line | The O I 1355{\AA} spectral line is one of the only optically thin lines that
are both routinely observed and thought to be formed in the chromosphere. We
present analysis of a variety of observations of this line with the Interface
Region Imaging Spectrograph (IRIS), and compare it with other IRIS diagnostics
as well as diagnostics of the photospheric magnetic field. We utilize special
deep exposure modes on IRIS and provide an overview of the statistical
properties of this spectral line for several different regions on the Sun. We
analyze the spatio-temporal variations of the line intensity, and find that it
is often significantly enhanced when and where magnetic flux of opposite
polarities cancel. Significant emission occurs in association with
chromospheric spicules. Because of the optically thin nature of the O I line,
the non-thermal broadening can provide insight into unresolved small-scale
motions. We find that the non-thermal broadening is modest, with typical values
of 5-10 km/s, and shows some center-to-limb variation, with a modest increase
towards the limb. The dependence with height of the intensity and line
broadening off-limb is compatible with the line broadening being dominated by
the superposition of Alfv\'en waves on different structures. The non-thermal
broadening shows a modest but significant enhancement above locations that are
in between photospheric magnetic flux concentrations in plage, i.e., where the
magnetic field is likely to be more inclined with respect to the line-of-sight.
Our measurements provide strict constraints on future theoretical models of the
chromosphere. | Mats Carlsson, Bart De Pontieu | 2023-08-27T10:48:09Z | http://arxiv.org/abs/2308.14067v1 | # An optically thin view of the solar chromosphere from observations of the O i 1355A spectral line
###### Abstract
The O i 1355A spectral line is one of the only optically thin lines that are both routinely observed and thought to be formed in the chromosphere. We present analysis of a variety of observations of this line with the Interface Region Imaging Spectrograph (_IRIS_), and compare it with other _IRIS_ diagnostics as well as diagnostics of the photospheric magnetic field. We utilize special deep exposure modes on _IRIS_ and provide an overview of the statistical properties of this spectral line for several different regions on the Sun. We analyze the spatio-temporal variations of the line intensity, and find that it is often significantly enhanced when and where magnetic flux of opposite polarities cancel. Significant emission occurs in association with chromospheric spicules. Because of the optically thin nature of the O i line, the non-thermal broadening can provide insight into unresolved small-scale motions. We find that the non-thermal broadening is modest, with typical values of 5-10 km/s, and shows some center-to-limb variation, with a modest increase towards the limb. The dependence with height of the intensity and line broadening off-limb is compatible with the line broadening being dominated by the superposition of Alfven waves on different structures. The non-thermal broadening shows a modest but significant enhancement above locations that are in between photospheric magnetic flux concentrations in plage, i.e., where the magnetic field is likely to be more inclined with respect to the line-of-sight. Our measurements provide strict constraints on future theoretical models of the chromosphere.
Sun: chromosphere - Sun: transition region - Sun: magnetic fields - magnetohydrodynamics (MHD)
Mats Carlsson
Bart De Pontieu
## 1 Introduction
The solar chromosphere is a highly dynamic and finely structured region of the solar atmosphere that is sandwiched between the visible surface or photosphere and the million-degree outer atmosphere or corona. All non-thermal energy that drives the solar wind and the heating of the corona traverses this critical region. Moreover, despite the only modest enhancement of the chromospheric temperature compared to that of the photosphere, it requires several orders of magnitude more non-thermal energy to drive the dynamics and energetics of the chromosphere than the rest of the solar atmosphere combined. This is because of the high chromospheric densities: the chromosphere contains more mass than the region stretching from the transition region to the edges of the heliosphere. Despite its obvious importance, it is relatively poorly understood, with many open questions remaining regarding the physical processes that drive the dynamics and energetics in the chromosphere (Carlsson et al., 2019).
One of the main reasons for our limited knowledge is the lack of unambiguous diagnostics. Most spectral lines that emanate from the chromosphere are optically thick and are subject to non-LTE radiative transfer effects such as scattering, partial frequency redistribution, etc. (e.g., Leenaarts et al., 2013a,b). In addition, non-equilibrium ionization plays a key role in the chromosphere and for some chromospheric diagnostics (e.g., Carlsson & Stein, 2002; Golding et al., 2014, 2016; Leenaarts et al., 2016; Golding et al., 2017). This renders their interpretation difficult and dependent on inversion techniques that often suffer from limiting assumptions and/or non-uniqueness, despite major advances in techniques (de la Cruz Rodriguez et al., 2016; Sainz Dalda et al., 2019). One of the few spectral lines that is optically thin and formed in the chromosphere is the O i 1355.598 Å intersystem line (\(2s^{2}\,2p^{3}\,3s\;{}^{5}\!S_{2}-2s^{2}\,2p^{4}\;{}^{3}\!P_{2}\)), hereafter called the O i 1355 Å line. It is routinely observed with the Interface Region Imaging Spectrograph (_IRIS_, De Pontieu et al., 2014). Lin & Carlsson (2015) performed an analysis of the formation of this line in an advanced numerical simulation calculated with the Bifrost code (Gudiksen et al., 2011) and demonstrated that the line formation is optically thin.
In this paper we provide an overview of observational findings with _IRIS_ related to the O i 1355A line. We describe the observations and analysis techniques of the spectral line profiles in Section 2. We then describe the statistical properties and center-to-limb variation of the intensity and line broadening for various solar targets in Section 3. We also describe how the properties of O i 1355A are impacted by flux cancellation and the presence of flux concentrations in, respectively, Sections 4 and 5. We finish with a discussion and conclusions in Section 6.
## 2 Observations and Analysis Techniques
For the observational analysis we used several different datasets from _IRIS_. For all of these observations we chose to use OBS-ID 3610091469 (for NOAA AR 12412), OBS-ID 3690092077 (for AR 12920), or OBS-ID 3690094078 (for quiet Sun), in order to maximize the signal-to-noise (S/N) ratio while maintaining high spectral resolution (to resolve the narrow O i 1355 Å line) by introducing long exposures (15s for AR 12412, 30s for AR 12920, 60s for QS), lossless compression, and asymmetric summing (x2 summing spatially, no summing spectrally). AR 12920 was followed over multiple days in September and October 2015 (see Table 1), as the AR crossed the disk. The field-of-view of the observations was 45\({}^{\prime\prime}\) x 120\({}^{\prime\prime}\) (AR 12412), 112\({}^{\prime\prime}\) x 175\({}^{\prime\prime}\) (AR 12920), and 140\({}^{\prime\prime}\) x 175\({}^{\prime\prime}\) (QS), with a spatial sampling of 0.35\({}^{\prime\prime}\) by 0.33\({}^{\prime\prime}\). The full detector was read out, i.e., three different wavelength ranges covering wavelengths from, respectively, 1331.56-1358.40Å (FUV1), 1390.00-1406.79Å (FUV2), and 2782.56-2833.89Å (NUV), with a spectral sampling of 12.98 mÅ in FUV1, 12.72 mÅ in FUV2, and 25.46 mÅ in NUV. The nominal spatial resolution of _IRIS_ is 0.33\({}^{\prime\prime}\) in the FUV and 0.4\({}^{\prime\prime}\) in the NUV. Note that since we used spatially summed data, our spatial resolution along the slit is Nyquist limited to twice the plate scale (0.66\({}^{\prime\prime}\), i.e., 2 times 0.33\({}^{\prime\prime}\)). The spectral resolution (FWHM) of _IRIS_ is dominated by Nyquist sampling, i.e., 5.7 km/s in both FUV and NUV.
To determine the first moments of the O i spectral line, we used the following steps:

1. remove the impact of cosmic rays on the detector (using the clean_exposure.pro code in IDL SolarSoft);
2. remove the impact of fixed-pattern noise with an extra dark-current subtraction;
3. take the mean of 3x3 pixels to increase the S/N;
4. apply a single Gaussian fit to the spectral line profile at each location.
The second step of this procedure is non-standard and is therefore described in some detail here. The O i 1355 Å line is rather weak, and especially the width determination is influenced by errors in the dark-level. We often see horizontal stripes of increased width in the width-maps. These stripes are in the same location over some time and are caused by fixed patterns in the dark-level that are not removed in the standard reduction pipeline. When a "hot" pixel is present at a location in the wing of the O i line, a larger width results from the Gaussian fit. We used the QS dataset from the center of the disk (2016-03-06T23:01:56) and formed a mean intensity \(I(y,\lambda)\) by taking the mean over scan position x of \(I(x,y,\lambda)\) for all positions where the line-core intensity was less than 15 DN. This mean will still have an imprint of the spectral line. We removed that imprint by subtracting, at each wavelength and pixel y, the running mean over \(\pm\)10 wavelength pixels. The fixed pattern noise is not completely fixed in time, so this procedure makes the most significant improvement to the width determinations for the QS datasets, and not so much for the AR 12920 datasets. The time-scales on which the fixed pattern changes significantly appear to be days to months. This issue and potential fixes in the pipeline are currently being studied in more detail by the IRIS calibration team.
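The template construction can be sketched as follows (a minimal Python illustration of the steps just described; the array names and the use of a running mean along the wavelength axis are our assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def dark_pattern_template(cube, core_intensity, core_threshold=15.0, half_window=10):
    """Estimate a fixed-pattern dark template from a raster cube.

    cube: I(x, y, lambda) in DN; core_intensity: line-core intensity I(x, y).
    Averages over scan positions with weak O I signal, then removes the
    residual spectral-line imprint with a running mean over +/- half_window
    wavelength pixels.
    """
    quiet = core_intensity < core_threshold                   # (x, y) mask
    masked = np.where(quiet[..., None], cube, np.nan)
    template = np.nanmean(masked, axis=0)                     # (y, lambda)
    smooth = uniform_filter1d(template, size=2 * half_window + 1,
                              axis=-1, mode="nearest")
    return template - smooth
```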
\begin{table}
\begin{tabular}{|l|l|r|r|r|r|r|r|r|} \hline Start Date & End Date & Target & OBS-ID & Exp & Solar x & Solar y & \(\theta\) & \(\mu\) \\ \hline
2015-09-09T07:59:58 & 2015-09-09T10:56:08 & AR 12412 & 3610091469 & 15 & -449 & -213 & 32 & 0.82-0.88 \\
2015-09-23T00:09:43 & 2015-09-23T02:58:30 & AR 12920 & 3690092077 & 30 & -754 & 104 & 59 & 0.49-0.68 \\
2015-09-23T20:22:31 & 2015-09-23T23:11:18 & AR 12920 & 3690092077 & 30 & -565 & 81 & 41 & 0.74-0.85 \\
2015-09-24T17:22:13 & 2015-09-24T20:11:00 & AR 12920 & 3690092077 & 30 & -408 & 77 & 29 & 0.86-0.93 \\
2015-09-25T21:00:43 & 2015-09-25T23:49:30 & AR 12920 & 3690092077 & 30 & -182 & 63 & 15 & 0.96-0.99 \\
2015-09-26T21:23:41 & 2015-09-27T00:12:28 & AR 12920 & 3690092077 & 30 & 27 & 58 & 4 & 0.98-1.00 \\
2015-09-28T17:08:41 & 2015-09-28T19:57:28 & AR 12920 & 3690092077 & 30 & 425 & 83 & 23 & 0.84-0.92 \\
2015-10-02T00:54:43 & 2015-10-02T03:43:30 & AR 12920 & 3690092077 & 30 & 892 & 140 & 62 & 0.00-0.49 \\
2015-10-02T20:47:20 & 2015-10-02T23:36:08 & AR 12920 & 3690092077 & 30 & 917 & 165 & 68 & 0.00-0.50 \\
2016-03-04T10:34:35 & 2016-03-04T17:26:20 & QS & 3690094078 & 60 & -3 & 900 & 69 & 0.00-0.55 \\
2016-03-05T10:14:03 & 2016-03-05T17:05:48 & QS & 3690094078 & 60 & 5 & -890 & 67 & 0.00-0.56 \\
2016-03-05T17:17:07 & 2016-03-06T00:08:52 & QS & 3690094078 & 60 & 5 & -714 & 48 & 0.55-0.76 \\
2016-03-06T00:20:11 & 2016-03-06T07:11:56 & QS & 3690094078 & 60 & 5 & -538 & 34 & 0.76-0.89 \\
2016-03-06T10:41:36 & 2016-03-06T17:33:21 & QS & 3690094078 & 60 & 5 & -890 & 67 & 0.00-0.56 \\
2016-03-06T23:01:56 & 2016-03-07T05:53:41 & QS & 3690094078 & 60 & -2 & 7 & 4 & 0.99-1.00 \\
2016-03-07T06:05:00 & 2016-03-07T12:56:45 & QS & 3690094078 & 60 & -4 & 728 & 49 & 0.52-0.75 \\ \hline \end{tabular}
\end{table}
Table 1: For all IRIS observations used, we provide the start and end date, target, IRIS OBS-ID, exposure time (s), solar x/y coordinates (\({}^{\prime\prime}\)), \(\theta\) (the viewing angle between the local vertical and line-of-sight vector, in degrees), and \(\mu=\cos\theta\).
The single Gaussian fit is made with three free parameters: the maximum intensity, the Doppler shift of the profile, and the width. A constant background is first determined from a mean of the spectrum at blue-shifts between 200 and 40 km/s relative to the rest-wavelength of the O i 1355 Å line (a line-free region of the spectrum). This constant background is subtracted from the spectrum before the single Gaussian fit. The fit only includes the spectrum between -40 and 40 km/s relative to the rest-wavelength of the O i 1355 Å line, in order to avoid influence from the C i line at 1355.843 Å (corresponding to 54 km/s from the rest-wavelength of the O i 1355 Å line). A fit is attempted for all pixels, but the fit is judged unsuccessful if the fitting algorithm (mpfitpeak.pro in IDL SolarSoft) does not converge or if the determined values fall outside reasonable ranges ([0,1000] DN in maximum intensity, [-30,30] km/s in Doppler shift, [2,30] km/s in width).
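For reference, the fitting step can be sketched in Python as follows (the analysis itself uses mpfitpeak.pro in IDL SolarSoft; the initial guess and function names here are our own):

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299_792.458
REST_WAV = 1355.598  # O I rest wavelength in Angstrom

def gauss(v, peak, v0, w):
    # 1/e width convention: I(v) = peak * exp(-((v - v0) / w)**2)
    return peak * np.exp(-(((v - v0) / w) ** 2))

def fit_oi_profile(wav, spec):
    """Single-Gaussian fit following the steps described in the text;
    returns (peak [DN], Doppler shift [km/s], 1/e width [km/s]) or None."""
    v = (wav - REST_WAV) / REST_WAV * C_KMS
    background = spec[(v > -200) & (v < -40)].mean()  # line-free region
    core = (v > -40) & (v < 40)                       # avoid C I 1355.843 A
    y = spec[core] - background
    try:
        p, _ = curve_fit(gauss, v[core], y, p0=(y.max(), 0.0, 8.0))
    except RuntimeError:
        return None                                   # fit did not converge
    peak, v0, w = p[0], p[1], abs(p[2])
    if not (0 <= peak <= 1000 and -30 <= v0 <= 30 and 2 <= w <= 30):
        return None                                   # outside reasonable values
    return peak, v0, w
```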
The reported line broadening is the 1/e half-width (henceforth referred to as the 1/e width) as this provides the most probable velocity, which is useful for interpretation in terms of physical mass motions in the solar atmosphere. The line width has several contributions: \(w=\sqrt{w_{inst}^{2}+w_{th}^{2}+w_{nt}^{2}}\), where \(w_{nt}\) is the non-thermal broadening. We note that the instrumental broadening \(w_{inst}\) in the FUV channel of _IRIS_ has been reported (De Pontieu et al., 2014) as 25.85 mÅ (i.e., 5.7 km/s) full-width-half-maximum (FWHM), which translates to 15.52 mÅ or 3.4 km/s 1/e width. The thermal broadening \(w_{th}\) of the O i 1355 Å line is somewhat uncertain given the large range of chromospheric heights this line is formed over. If we assume that the chromospheric temperature is within a range of 5,000 to 15,000 K, the thermal broadening would range between 2.3 and 3.9 km/s (1/e width). Both the instrumental and thermal broadening are smaller than most widths we report on here and play only a minor role. For the Mg ii k profiles we used a double Gaussian fit similar to that used by Carlsson et al. (2015a) and Bryans et al. (2016). The reported width is the 1/e width of the Gaussian fit to the profile outside the central reversal. For the C ii profiles we used a double Gaussian fit similar to that used by Rathore et al. (2015b).
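To make this decomposition concrete, a small sketch that removes the instrumental and thermal contributions in quadrature (constants taken from the values quoted above; the 10,000 K default temperature is an assumption):

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K
M_O = 16 * 1.66054e-27   # oxygen atomic mass, kg
W_INST = 3.4             # IRIS FUV instrumental 1/e width, km/s

def thermal_width(T):
    """Thermal 1/e width in km/s: sqrt(2 k_B T / m_O)."""
    return np.sqrt(2 * KB * T / M_O) / 1e3

def nonthermal_width(w_obs, T=10_000.0):
    """Quadratically remove instrumental and thermal contributions."""
    w_nt_sq = w_obs**2 - W_INST**2 - thermal_width(T)**2
    return np.sqrt(np.clip(w_nt_sq, 0.0, None))

print(thermal_width(5_000), thermal_width(15_000))  # ~2.3 and ~3.9 km/s
print(nonthermal_width(7.5))                        # typical plage width
```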
For some _IRIS_ observations we also provide magnetogram data from _HMI_ for context. The magnetogram data is based on the circular polarization Stokes signal in the Fe i 6173A line and provides information on the line-of-sight (LOS) magnetic field at photospheric levels. The _HMI_ data has a pixel size of 0.5\({}^{\prime\prime}\). We use the full-disk data to obtain a field-of-view that is matched to that of _IRIS_. The co-alignment of the _IRIS_ and _HMI_ magnetogram data is accomplished through comparison of an _IRIS_ spectroheliogram at 2800A in the photospheric wings of the Mg ii h & k lines and the _HMI_ magnetogram. This alignment is greatly facilitated by the similarities between the bright points in the 2800A images and the flux concentrations that are detected with _HMI_.
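The alignment step can be sketched as a cross-correlation of the two images (a minimal illustration, not the pipeline code; it assumes both images have the same shape, i.e., the HMI magnetogram has already been resampled to the IRIS plate scale):

```python
import numpy as np
from scipy.signal import fftconvolve

def align_offset(iris_2800, hmi_abs_blos):
    """Estimate the (dy, dx) pixel offset between an IRIS 2800 A
    spectroheliogram and |B_LOS| from HMI by cross-correlating the
    mean-subtracted images; bright points at 2800 A trace the HMI flux
    concentrations. The sign convention depends on which image is
    taken as the reference."""
    a = iris_2800 - iris_2800.mean()
    b = hmi_abs_blos - hmi_abs_blos.mean()
    cc = fftconvolve(a, b[::-1, ::-1], mode="same")  # cross-correlation
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    return dy - a.shape[0] // 2, dx - a.shape[1] // 2
```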
For one of the _IRIS_ observations, we also provide high-resolution context magnetograms of the circular polarization in the Fe i 6173Å line using the CRISP instrument (Scharmer, 2006) at the Swedish 1-m Solar Telescope (_SST_). These data were obtained on 2015-09-09 from 09:04:12 UTC to 10:29:04 UTC with a cadence of 67 seconds. The pixel size is 0.06\({}^{\prime\prime}\). The data were calibrated and corrected for deformations introduced by seeing in the Earth's atmosphere in an identical manner as described in § 2.2 of Rouppe van der Voort et al. (2020).
## 3 Properties of O i 1355 A
### Morphology and relationship to other spectral lines
Lin and Carlsson (2015) used numerical simulations of a region that aims to mimic an enhanced network region in quiet Sun and found that the line forms through a recombination cascade to the upper level of the O i 1355A line, followed by emission in the line under optically thin conditions. The ionization balance of O i and O ii is coupled to the ionization of hydrogen through charge exchange. The O i 1355 A line is thus formed in the chromosphere, as O ii ionization occurs towards the top of the chromosphere. Since the line is optically thin, the line broadening is of particular interest as it provides constraints on turbulent motions in the chromosphere, a property otherwise only accessible through spectral line inversions of optically thick chromospheric lines such as Mg ii h 2803A and Mg ii k 2796A (e.g., de la Cruz Rodriguez et al., 2016). Since the line is chromospheric, it is of interest to study the morphological properties of spectroheliograms or maps of the O i 1355 A intensity and width.
Figure 1 shows that in an active region at disk center, the O i 1355 A line is bright in plage regions and some parts of filaments. The morphology appears to be a combination of dot-like and more extended features. The regions where O i 1355 A is bright (top row, middle panel) slightly extend beyond the plage perimeters that are seen in a photospheric 2800A spectroheliogram (top row, left panel). The extensions seem wispy and resemble the lower parts of loop-like or fibril-like features.
There is overall a strong resemblance with the Mg ii k3 intensity maps. The main difference between both maps is that the regions where Mg ii k3 is bright extend even further beyond the photospheric boundaries of the plage, possibly because the signal-to-noise ratio of the O i 1355 Å line is significantly lower. The other difference is that filaments are typically dark in Mg ii k3 and most often bright in O i 1355 Å (but see the filament around \(x=-10^{\prime\prime}\), \(y=60^{\prime\prime}\), which is dark in both).
Figure 1: _IRIS_ spectroheliograms of NOAA AR 12920 at 2015-09-26T21:23 UTC showing in the top row 2800Å, O i 1355Å intensity, and Mg ii k3 intensity, and in the bottom row C ii 1335Å intensity, O i 1355Å line broadening, and the Mg ii k line broadening. Black horizontal lines are fiducial marks. Black vertical lines are data drop-outs. The O i 1355 Å intensity, Mg ii k3 intensity, and C ii 1335Å intensity are scaled, respectively, between 1 and 35 DN, 0 and 10586 DN, and 3 and 311 DN. Regions with bad fits and/or low peak counts (below 6 DN) are masked out in the O i 1355 Å line width map. This figure is accompanied by an animation that allows the reader to blink between the different panels of the figure to see the various similarities and offsets described in the text.
There are also significant morphological similarities with the C ii 1335Å intensity (bottom row, left panel). The main difference here is that the latter shows more differences in brightness between various plage regions, because it is more sensitive to transition region conditions (Rathore et al., 2015), which are, in turn, dominated by the overlying coronal conditions. In summary, our comparison shows that in plage regions the O i 1355 Å line does indeed look very chromospheric in nature, with significant similarity between the intensity patterns in the O i 1355 Å and Mg ii k lines.
The spatial patterns in the map of O i 1355 A linewidths (bottom row, middle panel) generally map, on large scales, those of the O i 1355 A intensity (top row, middle panel). The O i 1355 A linewidth is enhanced in and around plage regions, and very low in surrounding quiet Sun regions. The map of O i 1355 A line widths is remarkably homogeneous within each plage region, with only relatively small deviations around the average value of about 7.5 km/s, as previously remarked by Carlsson et al. (2015). For each plage region, the high values of O i 1355 A widths spatially extend even further beyond the photospheric plage boundaries than the O i 1355 A intensity. The values of O i 1355 A broadening in quiet Sun are very small, often insignificant, suggesting that, in many quiet Sun locations, the line is not broadened beyond the combination of instrumental and thermal broadening.
The spatial patterns of the line broadening in O i 1355 A and Mg ii k (bottom row, right panel) show significant correspondence on very large scales (\(\sim\)10''): both lines are broader in plage regions and the immediate vicinity, and narrower in the quiet Sun regions surrounding the plage. However, the broadening values themselves are very different between these two lines. Typical values for O i 1355 A are of order 10 km/s or less, while the Mg ii k values are larger by a factor of \(\sim\)3. This is in agreement with the results from Carlsson et al. (2015). It is not surprising given that O i 1355 A is an optically thin line and its width is sensitive to velocity variations along the LOS and turbulent motions, while Mg ii k is an optically thick line with velocity gradients, turbulent motions, and broadening as a result of opacity all playing a role in the line width. On very small (\(<1\arcsec\)) scales there is most often not a good match between O i 1355 A broadening and Mg ii k broadening.
Figure 2 gives four examples of observed profiles together with the single Gaussian fit of the O i 1355 Å line from four different regions seen in Fig. 1: plage, a filament, internetwork, and an unusually wide profile. Note that the spectra have been averaged over 3x3 spatial pixels: the actual number of photons per wavelength bin is thus nine times the given DN/pixel times four photons per DN (the gain in the FUV passband; De Pontieu et al., 2014). The dominant error source is photon noise, proportional to the square-root of the number of photons for Poisson statistics. For very low count-rates, a number of other error sources come into play (readout, digitization, flat-fielding, fixed-pattern noise, straylight subtraction, etc.; Wülser et al., 2018). We estimate the total error as the square-root of the sum of the squares of a term not dependent on the count-rate (estimated from the standard deviation of the count-rate in a line-free region between -200 and -40 km/s) and the Poisson noise term. We furthermore restrict our analysis to profiles with a line-centre count-rate above a threshold (given in the respective figures). For the first three profiles, the fit with a single Gaussian is very good. The unusually wide profile has a markedly non-Gaussian shape. This is typical for the widest profiles.
Footnote 2: The gain in the NUV passband is 18 photons per DN.
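A sketch of this error estimate (with the FUV gain of four photons per DN and the 3x3 spatial averaging; the variable names are ours):

```python
import numpy as np

GAIN_FUV = 4  # photons per DN in the IRIS FUV passband
N_SUM = 9     # 3x3 spatial pixels averaged together

def total_error(spec_dn, v):
    """Per-bin uncertainty of a 3x3-averaged spectrum, in DN: the
    quadrature sum of a count-independent term (standard deviation in a
    line-free window, -200..-40 km/s) and the Poisson noise of the
    detected photons."""
    line_free = spec_dn[(v > -200) & (v < -40)]
    sigma_fixed = line_free.std()
    photons = np.clip(spec_dn, 0, None) * GAIN_FUV * N_SUM
    sigma_poisson = np.sqrt(photons) / (GAIN_FUV * N_SUM)  # back to mean-DN units
    return np.sqrt(sigma_fixed**2 + sigma_poisson**2)
```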
In quiet Sun, we find many of the same properties and correspondences, as shown in Fig. 3. The O i 1355 Å line is bright in and around quiet Sun network regions and faint in the internetwork regions. At disk center, we see that the O i 1355 Å line is too weak to detect in some of the internetwork regions, despite the deep exposures and spatial summing in the special observing mode for these observations. However, in much of the FOV there is sufficient signal to determine the intensity and line width of the O i 1355 Å line. The findings for QS are very similar to those described above for active regions. The strongest signal can be found in and around network regions.
Figure 2: Examples of O i 1355 Å profiles from four locations of Fig. 1. The observed profile is given with symbols and the single Gaussian fit with a solid line. The positions and reduced \(\chi^{2}\) of the fit are given in the upper left with the 1/e width and its 1\(\sigma\) error in the upper right. The increased intensity in the red part of the window is due to an emission line from C i at 1355.8 Å.
Figure 3: _IRIS_ spectroheliograms of a quiet Sun region at 2016-03-06T23:01 UTC showing in the top row 2800Å, O i 1355Å intensity, and Mg ii k3 intensity, and in the bottom row C ii 1335Å intensity, O i 1355Å line broadening, and the Mg ii k line broadening. Black horizontal lines are fiducial marks. Black vertical lines are data drop-outs. The O i 1355 Å intensity, Mg ii k3 intensity, and C ii 1335Å intensity are scaled, respectively, between 2 and 141 DN, 0 and 9563 DN, and 19 and 125 DN. Regions with bad fits and/or low peak counts (below 8 DN) are masked out in the O i 1355 Å line width map. This figure is accompanied by an animation that allows the reader to blink between the different panels of the figure to see the various similarities and offsets described in the text.
A disk center view shows that the regions where the O i 1355 Å intensity is high again extend beyond the spatial boundaries of the network, similar to what we see in plage regions. The network-associated O i 1355 Å emission appears to be a combination of dot-like features and more extended wispy structures that are reminiscent of the spicules that are often seen to protrude from network flux concentrations. The internetwork regions are significantly fainter. The O i 1355 Å intensity map shows strong similarity with the Mg ii k3 intensity map. There is, again, similarity with the C ii 1335Å intensity map, although somewhat less than with Mg ii k3, with C ii 1335Å showing higher contrast and strongly reduced internetwork intensities.
The O i 1355 A line width maps reveal extremely low values in the internetwork regions. In many locations these values are so low that they appear compatible with a lack of non-thermal broadening. The network regions show a significant increase in the line width, with the regions of high broadening extending even further from the network regions than the intensity itself. This is similar to what we found for plage, although the line width is somewhat reduced in network compared to plage. The O i 1355 A broadening is much less than that found in the Mg ii k line, again similar to what we found for active regions.
To gain further insight into the morphology and formation region of O i 1355 Å it is of interest to study the appearance of the O i 1355 Å intensity and line width in solar targets that are closer to the limb, both for an active region and a quiet Sun region. Fig. 4 shows NOAA AR 12920 as it is tracked over the course of 9 days from close to the east limb to the west limb of the Sun, with both the minimum and maximum of the color scale for both intensity and line width identical for all images. It appears that the intensity is lowest close to disk center, with a steady increase as we approach the limb. At the limb itself the intensity is increased significantly, as is expected for an optically thin line: the line-of-sight path length increases and captures more structures as the viewing angle becomes more oblique. We also see localized regions of strong intensity enhancements, in particular on 2015-09-23, 2015-09-24, 2015-09-25, and 2015-10-02. We will describe these very bright regions in more detail in § 4.
The line width remains relatively independent of the viewing angle, except for a clear increase at viewing angles close to the limb (e.g., 2015-09-23, 2015-10-02). At the limb itself the line width increases in a step-like fashion, which is to be expected since the integration length doubles at the limb. Nevertheless, the line widths remain relatively modest and the overall variation is limited to just a few km/s on average, as described below using histograms.
A more detailed view of NOAA AR 12920 on 2015-10-02 is provided in Fig. 5. At this time the AR is close to the limb so that the FOV covers a wide range of values for \(\mu=\cos\theta\). At this extreme viewing angle it appears that the O i 1355 Å intensity features (top row, middle panel) are slightly offset towards the limb when compared with the photospheric plage footpoints (top row, left panel). In addition, the O i 1355 Å intensity features that protrude towards the limb from the photospheric footpoints protrude less (i.e., are shorter) than the features in the Mg ii k3 intensity maps. The protrusions are even more extended towards the limb in the C ii 1335Å images. They are strongly reminiscent of spicules. This suggests that the O i 1355 Å line is indeed formed in the chromosphere, but perhaps at lower heights than the upper chromospheric and lower transition region features that are visible in the Mg ii k3 and C ii 1335Å maps. This difference in apparent formation height is perhaps even more clear when comparing the line width maps in O i 1355 Å and Mg ii k. The regions with enhanced O i 1355 Å line width around the plage regions are relatively narrow, and definitely shorter than the equivalent features in Mg ii k. In addition, the O i 1355 Å line width map appears to show significantly less enhancement (compared to the Mg ii k linewidth map) and the region where the line width is enhanced covers a smaller part of the plage regions.
A comparison between Si iv 1402Å, Cl i 1352Å and O i 1355 Å spectroheliograms of NOAA AR 12920 towards the limb is also illustrative (Fig. 6). The Cl i line intensity, thought to be formed at similar heights in the chromosphere (Shine, 1983), shows a large degree of similarity to that of O i 1355 Å. That correlation is also present to some extent for the line width of both lines. For Si iv this is different. Here we see a clear difference between the limbward side and the disk-center side of the plage region. Towards the limb there is a clear offset of order 1-2\({}^{\prime\prime}\), with the Si iv features offset towards the limb. The Si iv brightenings are quite well understood (Skogsrud et al., 2016), and are the TR counterparts to the magneto-acoustic shocks that dominate the plage chromosphere, as can be determined from \(\lambda-t\) plots. This comparison suggests that the O i 1355 Å intensity features occur at lower heights than the Si iv shocks. Such shocks are visible both in the blue and red wing of the Si iv line, with the blue wing showing the upward phase and the red wing the downward phase. These shocks have been shown to drive dynamic fibrils (Skogsrud et al., 2016) and type II spicules (Rouppe van der Voort et al., 2015). It seems that many O i 1355 Å intensity features are somehow related to both, with the small round features possibly related to shocks (as seen from above), and the wispy linear features possibly related to type II spicules (when viewed from the side).
Footnote 3: The moments of the Cl i line are calculated in the same way as for O i.
The behavior of the O i 1355 Å line in quiet Sun regions appears to be similar. We used observations that were obtained along the central meridian in March 2016 covering several locations between the north and the south pole, as shown in Fig. 7. We see very similar behavior, with the intensity lowest around disk center, and significantly increased at or close to the limb, again as expected from an optically thin line. The line width is enhanced around the network regions, independent of the viewing angle, and lowest at disk center. The increase of the line width exactly at the limb is significant but only a few km/s.
Figure 4: _IRIS_ spectroheliograms of NOAA AR 12920 as it traverses the disk between 2015-09-22 and 2015-10-02. Top two rows show the O i 1355Å intensity, while the bottom two rows show the O i 1355Å line broadening. The O i 1355 Å intensity is scaled between 0 and 60 DN for all panels. Regions with bad fits and/or low peak counts (below 4 DN) are masked out in the O i 1355 Å line width maps.
Figure 5: _IRIS_ spectroheliograms of NOAA AR 12920 at 2015-10-02T00:54 UTC showing in the top row 2800Å, O i 1355Å intensity, and Mg ii k3 intensity, and in the bottom row C ii 1335Å intensity, O i 1355Å line broadening, and the Mg ii k line broadening. Black horizontal lines are fiducial marks. Black vertical lines are data drop-outs. The O i 1355 Å intensity, Mg ii k3 intensity, and C ii 1335Å intensity are scaled, respectively, between 0 and 92 DN, 0 and 13169 DN, and 3 and 224 DN. Regions with bad fits and/or low peak counts (below 6 DN) are masked out in the O i 1355 Å line width map. This figure is accompanied by an animation that allows the reader to blink between the different panels of the figure to see the various similarities and offsets described in the text.
Figure 6: _IRIS_ views of NOAA AR 12412 at 2015-09-09T07:59 UTC. The right column shows spectroheliograms of Si iv 1394Å in the red wing (top), blue wing (middle), and core (bottom). The left and middle columns show, respectively, the intensity and line broadening from a Gaussian fit for the O i 1355 Å line (top), Si iv 1394Å (middle), and Cl i 1351Å (bottom). The color scale for the line broadening is between 3 and 12 km/s for O i and Cl i, and between 5 and 40 km/s for Si iv. The O i 1355 Å intensity, Si iv red wing intensity, Si iv intensity, Si iv blue wing intensity, Cl i intensity, and Si iv core intensity are scaled, respectively, between 2 and 30 DN, 5 and 500 DN, 10 and 500 DN, 5 and 500 DN, 10 and 250 DN, and 5 and 500 DN. Regions with bad fits and/or low peak counts (below 5 DN) are masked out in the O i 1355 Å line width map. This figure is accompanied by an animation that allows the reader to blink between the different panels of the figure to see the various similarities and offsets described in the text.
Figure 7: _IRIS_ spectroheliograms of quiet Sun taken along the meridian from the north pole (top left) to the south pole (bottom right). Top two rows show the O i 1355Å intensity, while the bottom two rows show the O i 1355Å line broadening. The O i 1355 Å intensity is scaled between 0 and 60 DN for all panels. Regions with bad fits and/or low peak counts (below 4 DN) are masked out in the O i 1355 Å line width maps.
There is also a more gradual increase of the line width from disk center towards the limb, but only by about 1 or 2 km/s.
A detailed view at the polar limb again shows an offset towards the limb of the bright O i 1355 Å intensity features when compared to the photospheric network concentrations (visible in 2800Å). The O i 1355 Å intensity features appear to be shorter (in the direction towards the limb) and offset below the features in Mg ii k3 (upper chromosphere) and C ii 1335Å (TR).
A coherent picture thus emerges in which, for both active regions and quiet Sun, the O i 1355 Å formation height is chromospheric in nature, but somewhat lower than, and thus not quite identical to, that of Mg ii k.
### Statistics and center-to-limb variation
Since the O i 1355 Å line is optically thin in the chromosphere, it is of interest to study how the line broadening depends on the viewing angle. The probability density function (PDF) of the line width as a function of the cosine of the viewing angle is shown in Fig. 9 for the AR dataset (top panel) and the QS dataset (bottom panel). There is, on average, a slightly larger line width at the limb than at disk center, for both datasets. The line broadening appears, on average, to increase by only a small amount towards the limb, of order 1-2 km/s. The increase is larger for the AR 12920 dataset. The PDF of the line-core intensity (from the fit) as a function of viewing angle is shown in Fig. 10 for the two datasets. The intensity increases towards the limb for both datasets. For QS, the core intensity increases from an average of 0.3 DN/s at disk center to 2.7 DN/s at the limb. The corresponding numbers for the AR dataset are an increase from 0.48 DN/s at disk center to 3.7 DN/s. The AR dataset includes a larger variety of features and shows a larger spread in intensity.
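The PDFs of Figs. 9 and 10 are conditional distributions, i.e., each \(\mu\)-column is normalized separately; a minimal sketch:

```python
import numpy as np

def conditional_pdf(mu, width, bins=(40, 60)):
    """2D histogram of line width vs. mu, with each mu-column normalized
    to unit sum, so every column is the distribution of width at that
    viewing angle."""
    h, mu_edges, w_edges = np.histogram2d(mu, width, bins=bins)
    col = h.sum(axis=1, keepdims=True)
    pdf = h / np.where(col > 0, col, 1.0)
    return pdf, mu_edges, w_edges
```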
We first discuss the center-to-limb behavior of the line broadening on the disk (i.e., not including the off-limb region). The changed viewing angle towards the limb could, in principle, cause two effects on the O i 1355 A line broadening. First, it could mean that the measurements closer to the limb are more sensitive to the motions perpendicular to the magnetic field. This would be the case if one assumes that, on average, the magnetic field is typically more vertically (rather than horizontally) oriented in the line formation region of O i 1355 A. This is likely a reasonable assumption for a plage or network region, but may not be a good assumption for other regions in the FOV, such as those regions just adjacent to network or plage where fibril or spicule-like features often appear more inclined, or for internetwork regions where the chromospheric field is poorly known and may include more horizontal fields. Secondly, the longer line-of-sight implies more superposition of different structures within each pixel. If these locations have different LOS velocities, this can broaden the line as well. If this were the case, one would also expect to see a significant increase in intensity, depending on the velocity gradients in the FOV and/or along the line-of-sight.
Disentangling these two effects is not straightforward. In addition, the likely different field orientations between network or plage on the one hand, and the other regions on the other hand, can easily render interpretation of a center to limb variation plot such as that in Fig. 9 muddled. By mixing both types of regions into one plot, any center-to-limb variation of each sub-region may be hidden since the two different types of regions may have oppositely signed center-to-limb variations. Another key aspect that should be taken into account is that the line formation region of O i 1355 A is very large, covering a region from the low chromosphere all the way to the top of the chromospheric spicules. We have already discussed the presence of spicule-like features in the O i 1355 A maps of Figs. 1 and 3. The likely field-aligned motions and the amplitude of, e.g., Alfven-wave associated motions perpendicular to the field, are expected to vary very significantly between the dense lower chromosphere and the top of spicules, because of the very large density differences between those two regions. Assuming mass conservation for field-aligned flows, or constant energy flux density for Alfven waves implies a strong increase of the flows from the low chromosphere to the top of spicules. In addition, there are most likely significant differences in intensity between the chromosphere and the top of spicules, since the intensity is proportional to the square of the electron density for this optically thin line (Lin & Carlsson, 2015). The relative contribution of these two components (chromosphere vs. spicules) within an IRIS resolution element is also expected to vary between disk center and the limb. This is because at disk center, spicules will more often originate and be suspended straight above the bright network or plage regions, whereas toward the limb their greater height will show them more spatially offset from the network or plage.
In an attempt to disentangle these effects we have applied masks to isolate the chromospheric from the spicular contributions as much as possible. We perform this only for the quiet Sun dataset since the magnetic field in active regions is more complex. Fig. 11 shows the center-to-limb variation of quiet Sun network, regions adjacent to the network, and the internetwork. The masks are created by first making a least-squares cubic polynomial fit to the core intensity (from the fit) as a function of the cosine of the viewing angle (\(\mu\)). The network mask includes all pixels with a core intensity more than 22 DN above the center-to-limb fit. The "adjacent to the network" mask includes all pixels less than one arcsecond limb-ward from the network pixels, excluding network pixels. The internetwork mask includes all the other pixels. The regions adjacent to the network are typically dominated by fibrils or spicules. We see that the regions adjacent to the network show an increased line broadening. Because of the morphology, we believe that this is caused by the larger motions in the low density environment of spicules. It is possible that this effect also contributes to the modest 1-2 km s\({}^{-1}\) increase of the line width towards the limb that is seen in Fig. 9. The fainter spicular signal (with its increased LOS motions) becomes more apparent towards the limb because the spatial offset from the brighter network is increased as the viewing angle changes.
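The mask construction can be sketched as follows (an isotropic dilation is used here for simplicity, whereas the actual "adjacent" mask keeps only pixels limbward of the network; a 3-pixel dilation corresponds to roughly 1\({}^{\prime\prime}\) at the 0.33\({}^{\prime\prime}\) plate scale):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def region_masks(core_int, mu, pix_per_arcsec=3, threshold=22.0):
    """Network / adjacent / internetwork masks from 2D maps of the O I
    core intensity (DN) and mu = cos(theta); assumes finite-valued maps."""
    coeff = np.polyfit(mu.ravel(), core_int.ravel(), 3)
    clv = np.polyval(coeff, mu)                 # center-to-limb fit
    network = core_int > clv + threshold
    near = binary_dilation(network, iterations=pix_per_arcsec)
    adjacent = near & ~network
    internetwork = ~near
    return network, adjacent, internetwork
```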
Figure 8: _IRIS_ spectroheliograms of a quiet Sun at 2016-03-06T23:01 UTC showing in the top row 2800Å, O i 1355Å intensity, and Mg ii k3 intensity, and in the bottom row C ii 1335Å intensity, O i 1355Å line broadening, and the Mg ii k line broadening. Black horizontal lines are fiducial marks. Black vertical lines are data drop-outs. The O i 1355 Å intensity, Mg ii k3 intensity, and C ii 1335Å intensity are scaled, respectively, between 0 and 150 DN, 0 and 7865 DN, and 4 and 108 DN. Regions with bad fits and/or low peak counts (below 8 DN) are masked out in the O i 1355 Å line width map. This figure is accompanied by an animation that allows the reader to blink between the different panels of the figure to see the various similarities and offsets described in the text.
Another effect that may help explain the modest increase of line width toward the limb is the increased superposition along the line-of-sight of different structures with different LOS velocities. However, it is also possible that towards the limb the LOS is more perpendicular to the magnetic field direction, and that the modest increase of line width towards the limb is in part caused by stronger motions perpendicular to the magnetic field direction. This slight anisotropy (with respect to the magnetic field direction) of the turbulent motions likely plays a significant role, as we illustrate in what follows below.
We now turn our attention to the off-limb behavior of the O i 1355 A intensity and line broadening in a quiet Sun dataset as a function of distance above the limb (positive values). This is shown in Fig. 12. The distance from the limb is based on a calculation of the solar radius for the date of the observation and the header information in the _IRIS_ data. The latter has an accuracy of order \(\sim 0.6\arcsec\) or better, now that cross-correlation of _IRIS_ FUV SJI images and AIA 1700A data is automatically applied in the _IRIS_ level 2 data pipeline. The top panel shows the sharp drop-off of the intensity in the photospheric wing of Mg ii k at 2800A, which starts about 0.5\(\arcsec\) above the solar limb. This is a reasonable number since this photospheric wing emission is formed in the upper photosphere, confirming that the limb distance is accurately calculated.
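The limb distance itself is a simple function of the helioprojective coordinates and the apparent solar radius (a one-line sketch):

```python
import numpy as np

def limb_distance(solar_x, solar_y, rsun_arcsec):
    """Distance from the photospheric limb in arcsec (positive off-limb),
    from helioprojective coordinates (arcsec) and the apparent solar
    radius for the observation date, as available in the IRIS headers."""
    return np.hypot(solar_x, solar_y) - rsun_arcsec
```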
The second panel from the top in Fig. 12 shows that the O i 1355 Å intensity increases from the photospheric limb outward until it peaks at about 1.5\(\arcsec\) from the limb. The intensity then rapidly drops off with increasing distance from the limb. It is very interesting to note that between the limb and the 1.5\(\arcsec\) distance where the O i 1355 Å intensity peaks, the line width increases (for increasing distance from the limb) only very modestly, to a value of about 9 km s\({}^{-1}\). This changes drastically after the peak intensity is reached: the line width then increases rapidly to values of 15 km s\({}^{-1}\) until the intensity drops (at distances of 5\(\arcsec\)) to very low values where the Gaussian fits are no longer reliable.
What causes this puzzling spatial offset of about 3\({}^{\prime\prime}\) between the off-limb peaks of the intensity and of the line width? If one assumed that the off-limb properties of this optically thin line were caused by line-of-sight superposition of intensity features with a random distribution of LOS velocities, the peaks of the intensity and line width should be co-located.
Let us, instead, examine a different scenario that is inspired by our findings in § 3. We assume that there are two major contributors to the O i 1355 Å signals off-limb: a contribution from the chromosphere proper, and one from spicules.
Figure 10: Probability density function (PDF) of the O i 1355 Å line core intensity (from the fit) as a function of the cosine of the viewing angle (\(\mu\)) for the AR 12920 datasets (top panel) and the QS datasets (bottom panel). The color-scale is linear.
Figure 9: Probability density function (PDF) of the O i 1355 Å line width as a function of the cosine of the viewing angle (\(\mu\)) for the AR 12920 datasets (top panel) and the QS datasets (bottom panel). To facilitate comparison, a quadratic least-squares fit to the QS width as a function of \(\mu\) is shown in both panels as a dashed line. The color-scale is linear and profiles with a central intensity below 10 DN/pixel are not included.
Let us further assume that, at the limb, the line width is mostly determined by motions perpendicular to the magnetic field, as expected from Alfvén-wave-associated motions, and that the field is mostly vertical. As we have seen in § 3, there are indications that the line width increases along the spicule-like structures emanating from the network. It is natural to assume that, beyond a certain height, these spicular features dominate the signal.
Let us now assume that the energy flux density \(F\) of Alfvén waves propagating from the lower chromosphere to the top of the spicules is roughly constant:
\[F=\rho\delta v^{2}v_{A}\sim\sqrt{\rho}\delta v^{2}B=k \tag{1}\]
in which \(\rho\) is the mass density of the plasma, \(\delta v\) is the Alfvén wave amplitude, \(v_{A}\) is the Alfvén speed, with \(v_{A}\sim B/\sqrt{\rho}\), \(B\) is the magnetic field strength, and \(k\) is a constant. This would then imply:
\[\delta v\sim\rho^{-\frac{1}{4}}B^{-\frac{1}{2}}\sim n_{e}^{-\frac{1}{4}}B^{- \frac{1}{2}}\sim I_{peak}^{-\frac{1}{8}}B^{-\frac{1}{2}} \tag{2}\]
in which we assume that \(\rho\sim n_{e}\) (the electron density), and \(I_{peak}\sim n_{e}^{2}\) (based on Lin & Carlsson, 2015).
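The resulting prediction for the off-limb width profile is then a one-line scaling of the measured intensity (with \(B\) assumed constant, and the normalization applied at the height of peak intensity, as in Fig. 12):

```python
import numpy as np

def predicted_width(intensity, width):
    """Width profile predicted by a constant Alfven-wave energy flux,
    A(h) ~ I_peak(h)**(-1/8) (Eq. 2 with constant B), scaled to match
    the measured 1/e width at the height where the O I intensity peaks.
    Assumes strictly positive intensities."""
    a = intensity ** (-1.0 / 8.0)
    i_pk = np.argmax(intensity)
    return a * (width[i_pk] / a[i_pk])
```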
In the bottom panel of Fig. 12, we show the variation of the component \(I^{-\frac{1}{8}}\), normalized to the measured line width at the distance where the O i 1355 Å intensity peaks (about 1.5\({}^{\prime\prime}\)). We find that the increase of line width from 1.5\({}^{\prime\prime}\) to 5\({}^{\prime\prime}\) is well reproduced qualitatively. Note that we expect the average magnetic field to decrease with increasing distance from the limb. Such a decrease in magnetic field \(B\) would lead to an even larger increase of the line width.
Figure 11: Probability density function (PDF) of the O i 1355 Å line width as function of the cosine of the viewing angle (\(\mu\)) for quiet Sun internetwork areas (top panel), network areas (middle panel) and for areas 1\({}^{\prime\prime}\) limbward of network areas (bottom panel). In all panels the least-squares quadratic fit is shown with a solid line and the fits for the two other area-types as dashed lines.
Figure 12: Probability density functions (PDF) for the intensity at 2800Å (upper photosphere, top), O i 1355 Å intensity \(I_{peak}\) (second from top), O i 1355 Å line width (third from top), and a quantity \(A\) that is proportional to \(I_{peak}^{-\frac{1}{8}}\), all as a function of distance from solar limb, with positive values of the latter for off-limb positions. The vertical dashed line is the location of the photospheric limb, as determined from the solar radius and _IRIS_ headers. The solid vertical lines show the range of distances from the limb for which \(I_{peak}\) decreases with height. The value of \(A\) is scaled so that it is equal to the line broadening at the distance from the solar limb indicated by the leftmost solid vertical line.
It seems that our relatively simplistic model thus reproduces the observed behavior very well. Note that our approach also appears to be compatible with the observed slight increase of line width between the solar limb and the distance of peak O i 1355 Å intensity. While the \(I_{peak}^{-\frac{1}{8}}\) component predicts a slight decrease in line width in this region, we note that it is precisely here (at heights between the photosphere and the low chromosphere) that we expect the largest drop in average magnetic field strength, as flux tubes expand with height from the photosphere into the low chromosphere. If the field were to drop by a factor of 3, this could reverse the predicted decrease of line width (from the intensity alone) into the modest increase we actually see for increasing distance from the solar limb.
Our results thus support a scenario in which the spatial variation of the O i 1355 Å linewidth is related to the superposition of Alfvénic wave motions or turbulence along structures whose density decreases significantly with height. Naturally, reality is most likely more complex. However, the fact that this scenario can easily explain the spatial offset between the peaks of O i 1355 Å intensity and line width, and that it yields an increase of line width that is self-consistent with the observed intensities and expected magnetic field variations, both suggest that this scenario plays a role in the observed behavior.
## 4 Relationship to Magnetic Flux Emergence and Cancellation
The very bright features in O i 1355 Å in the active region (Fig. 4) appear to be related to the effects of flux emergence and/or cancellation. A detailed comparison of the locations in O i 1355 Å and magnetograms from _HMI_ (Figs. 13 and 14) shows that the O i 1355 Å intensity is often enhanced around the neutral line between opposite polarities. To allow a proper comparison between the _IRIS_ rasters and _HMI_ (or _SST_) magnetograms, we create, from the _HMI_ (or _SST_) data, synthetic rasters with the same field of view as the _IRIS_ raster, in which each raster position contains a vertical strip of _HMI_ (or _SST_) data taken at the same time as the corresponding _IRIS_ raster position.
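A minimal sketch of this raster-matching step, with hypothetical array names and shapes, might look as follows: for each _IRIS_ raster step, the magnetogram frame closest in time is selected, and the vertical strip at the slit column is copied into the synthetic raster.

```python
import numpy as np

def synthetic_raster(mag_cube, mag_times, slit_times, slit_cols):
    """Magnetogram raster matched to IRIS slit positions and step times.

    mag_cube: (t, y, x) magnetogram series aligned to the IRIS field of view;
    mag_times: frame times; slit_times/slit_cols: time and x-column per step.
    """
    n_y = mag_cube.shape[1]
    raster = np.zeros((n_y, len(slit_cols)))
    for i, (t, x) in enumerate(zip(slit_times, slit_cols)):
        k = np.argmin(np.abs(mag_times - t))   # magnetogram frame closest in time
        raster[:, i] = mag_cube[k, :, x]       # vertical strip at the slit column
    return raster
```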
On the largest scales (\(\sim 20^{\prime\prime}\)), we see that filaments, which most often occur above the neutral line between opposite polarities, are often bright in O i 1355 Å, e.g., at (-166\({}^{\prime\prime}\), 42\({}^{\prime\prime}\)) (left) and at (53\({}^{\prime\prime}\), 70\({}^{\prime\prime}\)) (middle) in Fig. 14. However, the correlation between O i 1355 Å intensity and neutral lines is also strong on smaller scales (\(\sim 5^{\prime\prime}\)), at locations where flux concentrations of opposite polarities are in very close proximity, i.e., touching or almost touching. This is clearly illustrated, in Fig. 13 (red circles), by the bright region at (-770\({}^{\prime\prime}\), 145\({}^{\prime\prime}\)) in the left column, at (-526\({}^{\prime\prime}\), 33\({}^{\prime\prime}\)) in the middle column, and at (-367\({}^{\prime\prime}\), 44\({}^{\prime\prime}\)) in the right column. Similarly, Fig. 14 (red circles) shows examples at (-235\({}^{\prime\prime}\), 65\({}^{\prime\prime}\)) (left), (-5\({}^{\prime\prime}\), 70\({}^{\prime\prime}\)) (middle), and (420\({}^{\prime\prime}\), 170\({}^{\prime\prime}\)) (right).
When we saturate _HMI_ magnetograms to enhance the visibility of weak flux of opposite polarity, we find that on arcsecond scales very weak flux immediately adjacent to the dominant polarity flux similarly often leads to enhanced brightenings in O i 1355 Å. Examples of this can be found in Fig. 15 (red circles), at (-435\({}^{\prime\prime}\), 110\({}^{\prime\prime}\)), (-435\({}^{\prime\prime}\), 92\({}^{\prime\prime}\)), (-412\({}^{\prime\prime}\), 90\({}^{\prime\prime}\)), (-424\({}^{\prime\prime}\), 60\({}^{\prime\prime}\)), (-426\({}^{\prime\prime}\), 48\({}^{\prime\prime}\)), and (-420\({}^{\prime\prime}\), 10\({}^{\prime\prime}\)). There are, however, also some locations in which opposite polarity flux appears in close proximity but the O i 1355 Å intensity does not appear to be significantly enhanced (e.g., the weak concentrations at (-370\({}^{\prime\prime}\), 55\({}^{\prime\prime}\))).
To investigate this correlation on sub-arcsecond scales, we also analyze a time series of very high-resolution magnetograms obtained at the SST (Fig. 16). These magnetograms have much higher spatial resolution (\(\sim 0.1^{\prime\prime}\)) and greater magnetic sensitivity. They similarly reveal locations of enhanced O i 1355 Å brightenings where flux of the minority polarity is in the proximity of stronger dominant polarity concentrations, e.g., at (4\({}^{\prime\prime}\), 17\({}^{\prime\prime}\)). However, there are also locations with very weak minority flux where this correlation is not as clear (e.g., at (12\({}^{\prime\prime}\), 35\({}^{\prime\prime}\))) or not apparent (e.g., at (18\({}^{\prime\prime}\), 56\({}^{\prime\prime}\))). Such locations of reduced or no correlation appear to be more common in the SST dataset than in the HMI datasets. This perhaps suggests that a minimum flux size is required for significant O i 1355 Å emission, or that the O i 1355 Å intensity increase is shorter lived for very weak minority flux concentrations and is missed by the slow cadence of the _IRIS_ rasters. The latter is driven by the deep exposures required to detect the faint O i 1355 Å line.
The animation associated with Fig. 16 reveals that, in addition to the correlation between neutral lines and O i 1355 Å intensity, which could be associated with flux cancellation, there is another process that is clearly associated with increased O i 1355 Å intensity. Flux emergence is present in the regions around \(x=5-15^{\prime\prime}\), \(y=45-55^{\prime\prime}\), and \(x=38-43^{\prime\prime}\), \(y=40-45^{\prime\prime}\). These are the two regions that show the brightest O i 1355 Å emission. This can also be seen in Fig. 6, with the brightest emission around (-465\({}^{\prime\prime}\), -210\({}^{\prime\prime}\)) occurring at a location where flux has emerged into a pre-existing field configuration. The region of enhanced intensities in O i 1355 Å (also seen in Si iv and Cl i) appears to outline a dome-like structure that separates the two flux systems, possibly a quasi-separatrix layer (QSL).
The above findings appear to be compatible with a scenario in which significant ionization and heating of the plasma, in response to reconnection or currents associated with the interaction between field concentrations of opposite polarity, lead to enhanced electron densities, to which, based on theoretical work, the O i 1355 Å intensity is proportional (Lin & Carlsson, 2015). Our observational results indicate that such heating could occur in association with flux cancellation or flux emergence. Of all _IRIS_ observables, the O i 1355 Å intensity appears to be the most sensitive to the effects of cancellation and emergence, possibly because it is formed at low enough heights that it is sensitive to heating even from small-scale flux concentrations whose fields do not reach into the upper chromosphere. Another key aspect is that O i 1355 Å is optically thin and uniquely sensitive to the electron density, thereby picking up any enhancement caused by heating or ionization.
Figure 13: _IRIS_ spectroheliograms of NOAA AR 12920 as it traverses the disk between 2015-09-22 and 2015-09-24. The top row shows the O i 1355 Å intensity, while the bottom row shows the line-of-sight magnetic field as deduced from _HMI_ magnetograms stitched together to replicate the _IRIS_ raster timings and locations. The red circles indicate locations with neutral lines between opposite polarities and increased O i 1355 Å intensities, as described in the text.
Figure 14: _IRIS_ spectroheliograms of NOAA AR 12920 as it traverses the disk between 2015-09-25 and 2015-09-28. The top row shows the O i 1355 Å intensity, while the bottom row shows the line-of-sight magnetic field as deduced from _HMI_ magnetograms stitched together to replicate the _IRIS_ raster timings and locations. The red circles indicate locations with neutral lines between opposite polarities and increased O i 1355 Å intensities, as described in the text.
It appears that O i 1355 Å may be acting as a canary in the coal mine for the solar atmospheric effects of cancellation or emergence.
One complication of our analysis is that the O i 1355 Å line is faint and requires deep exposures, leading to low-cadence observations. Given the dynamic and ephemeral nature of heating associated with flux emergence, it is thus quite possible that some heating signals are simply missed in the O i 1355 Å rasters, when the heating occurs either before or after the _IRIS_ slit has passed the site of cancellation or emergence. Alternatively, it may be that not all cancellation or emergence events lead to significant heating at chromospheric heights. Detailed comparisons with numerical simulations are required to address this.
One final observational finding is that the locations of enhanced line broadening do not appear to show a significant correlation with flux cancellation or emergence, as illustrated by Fig. 16. In fact, regions of active emergence often appear to show significantly reduced broadening (e.g., at (5\({}^{\prime\prime}\), 52\({}^{\prime\prime}\)) in Fig. 16).
## 5 Flux Concentrations
Our analysis of the O i 1355 Å measurements and the underlying photosphere in plage regions has revealed another intriguing finding. In particular, on sub-arcsecond spatial scales, there is an anti-correlation between locations of enhanced line broadening and locations of strong flux concentrations in the photosphere. This is illustrated in Fig. 17, which shows the O i 1355 Å line broadening (top), a spectroheliogram at 2800 Å (middle), and the O i 1355 Å intensity (bottom).
While the range of values for line broadening is not enormous, there is a clear difference between non-plage and plage regions, with the former (e.g., \(x=-10\) to 15\({}^{\prime\prime}\), \(y=80\) to 90\({}^{\prime\prime}\)) showing significantly reduced line broadening (less than 5 km/s).
Figure 15: _IRIS_ spectroheliograms of NOAA AR 12920 on 2015-09-24. The left column shows the O i 1355 Å intensity, while the right column shows the line-of-sight magnetic field as deduced from _HMI_ magnetograms stitched together to replicate the _IRIS_ raster timings and locations. The red circles indicate locations with neutral lines between opposite polarities and increased O i 1355 Å intensities, as described in the text.
The plage region itself shows a relatively narrow range of values for the line broadening, between 7 and 11 km/s, as already remarked upon by Carlsson et al. (2015b). The top panel of Fig. 17 nevertheless reveals a spatial pattern of enhanced values for the line broadening (3 to 4 km/s higher than in the rest of the plage), occurring in contiguous regions showing coherence on 0.5 to 2\({}^{\prime\prime}\) spatial scales. These regions of enhanced line width do not seem to correlate with locations of enhanced O i 1355 Å brightness. Similarly, they do not occur where the photospheric wing emission at 2800 Å occurs. Instead, they preferentially avoid the locations of bright points in the photosphere. The latter are indicated with black contours, which are determined by setting a threshold brightness in the 2800 Å spectroheliogram. As can be seen in the top panel, the regions of enhanced line broadening in O i 1355 Å most often occur in between the black contours.
This anti-correlation is further illustrated by Fig. 18, which shows the joint probability density function (JPDF) of the O i 1355 Å line broadening and the intensity at 2800 Å. Only the locations where the O i 1355 Å intensity is larger than 10 DN are included, within the plage region delimited by the red contours in the bottom panel of Fig. 18. By introducing this threshold on the O i 1355 Å intensity, we ensure that the line widths included in the JPDF are not significantly affected by noise.
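The construction of such a JPDF can be sketched in a few lines (hypothetical array names; `in_plage` stands for the red-contour mask):

```python
import numpy as np

def joint_pdf(width, i_oi, i_2800, in_plage, bins=50):
    """JPDF of line width vs. photospheric intensity over reliable plage pixels."""
    mask = in_plage & (i_oi > 10)    # drop pixels where the line fit is noise-dominated
    return np.histogram2d(width[mask], i_2800[mask], bins=bins, density=True)
```

The intensity threshold is applied before histogramming, so noise-dominated line-width estimates never enter the distribution.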
As we can see, the locations of highest broadening occur at lower values of photospheric intensity. The locations with the highest photospheric intensity (bright points) show moderate levels of broadening. The lowest broadening occurs at the edge of the plage regions, as seen in the top panel of Fig. 17. The JPDF thus supports our finding that the locations of highest line broadening occur at the lowest values of 2800 Å intensity within plage regions, i.e., in between photospheric bright points. We note that the broader range of values of the O i 1355 Å broadening at lower 2800 Å intensities is not a signature of noise, as the noise in determining the line width depends on the O i 1355 Å intensity (not the photospheric intensity). If noise were the cause, we would expect a correlation between decreased O i 1355 Å intensity and decreased photospheric intensity. That is not the case, as can be seen in the bottom two panels of Fig. 17.
It is also noteworthy that the O i 1355 Å line broadening map does not show a good correlation or anti-correlation with any width or intensity measure of other chromospheric or TR lines.
Figure 16: Comparison between an SST Fe i 6173 Å magnetogram raster (middle) and the _IRIS_ O i 1355 Å intensity (left) and line width (right), taken on 2015-09-09. The synthetic magnetogram raster has been constructed from a magnetogram time series by matching the _IRIS_ raster timings and locations. The O i 1355 Å intensity is scaled between 2 and 30 DN. Regions with bad fits and/or low peak counts (below 5 DN) are masked out in the O i 1355 Å line width map. This figure is accompanied by an animation with the same layout as the figure, except that it shows in the middle panel a time sequence of magnetograms. Short vertical bars at the top and bottom of each panel in the animation indicate, for each time step, the location of the _IRIS_ raster step (left and right panels) at the time of the magnetogram shown in the middle panel. The time in the animation is expressed in seconds after 9-Sep-2015 09:00:00 UTC.
Figure 17: For NOAA AR 12902 on 2015-09-26, the O i 1355 Å line width in km/s (top), the intensity at 2800 Å, formed in the photosphere (middle), and the O i 1355 Å intensity (bottom). Black contours are based on intensity thresholding of photospheric bright points in the 2800 Å spectroheliogram. The O i 1355 Å intensity is scaled from 3 to 40 DN.
There are perhaps a few locations where there is some correspondence with Mg ii k line broadening, but this is not the general trend (see Figs. 1 and 3).
This raises the question of what causes this anti-correlation. Several scenarios come to mind. First, the area between photospheric bright points in plage is the region where the magnetic field canopy is expected to occur, as seen in both observations (e.g., de la Cruz Rodriguez, 2019) and numerical simulations (e.g., Hansteen et al., 2006). While the magnetic field is expected to be mostly vertical directly above photospheric flux concentrations, it is expected to be significantly more inclined with respect to the vertical in the regions where the canopy forms, i.e., the locations where we find the highest O i 1355 Å line broadening. This is particularly the case at low chromospheric heights, where the transition from high to low plasma \(\beta\) occurs. The O i 1355 Å line is thought to be sensitive to the electron density, with its formation height covering the low chromospheric regions in particular.
But why would a heavily inclined magnetic field lead to enhanced O i 1355 Å line broadening? The line width is determined by three contributors: the instrumental broadening, which does not vary significantly across the FOV; the thermal broadening, which depends on the local temperature; and the unresolved motions along the line of sight ("non-thermal" broadening). If increased heating (i.e., higher temperatures) were to occur in the chromospheric canopy region between photospheric bright points, it would be expected to lead to enhanced electron densities. This in turn would cause enhanced O i 1355 Å intensities, since this optically thin line scales with the square of the electron density. However, such a correlation is not observed in our data (top and bottom panels of Fig. 17).
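For reference, if the three contributors are assumed to add in quadrature for Gaussian profiles (a standard assumption that is not stated explicitly above), the non-thermal component can be estimated as in the sketch below; the temperature, instrumental width, and 1/e width convention are illustrative inputs, not results from this paper.

```python
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant [J/K]
AMU = 1.660539e-27    # atomic mass unit [kg]

def nonthermal_width(w_obs, w_inst, temperature, mass_amu=16.0):
    """1/e non-thermal width in km/s, assuming Gaussian components add in quadrature."""
    w_th = np.sqrt(2.0 * K_B * temperature / (mass_amu * AMU)) / 1e3   # thermal width [km/s]
    return np.sqrt(np.maximum(w_obs**2 - w_inst**2 - w_th**2, 0.0))
```

At a temperature of 10,000 K, for example, the thermal term for atomic oxygen is only about 3 km/s, small compared with the 7 to 11 km/s widths measured in plage.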
This leaves the possibility that enhanced non-thermal motions, either in the form of unresolved macroscopic motions along the line of sight or in the form of microscopic turbulence, are the cause of these enhancements in line broadening. It is tempting to speculate that this may be caused by one (or both) of two scenarios. The first is one in which strong Alfvénic wave and/or vortical motions (i.e., perpendicular to the magnetic field direction) are ubiquitous in plage and register in the O i 1355 Å line broadening only when the magnetic field direction is more inclined from the line-of-sight vector. In a disk center observation like the one considered here, such enhanced vortical motions or Alfvénic wave power would thus be most visible in between photospheric bright points. This is an intriguing scenario, given the evidence from the center-to-limb variation of line broadening that Alfvénic waves play a key role in explaining the strong increase of line broadening off-limb. These observational findings may also be compatible with recent suggestions from numerical simulations that vortical motions in the photosphere are ubiquitous and often propagate into the low atmosphere (e.g., Moll et al., 2011; Yadav et al., 2021; Breu et al., 2023). However, it is not fully clear whether these recent modeling results are fully compatible with our observations. In particular, the lack of correlation between locations of enhanced line width and O i 1355 Å intensity in the observations does not, at first blush, seem fully compatible with a scenario in which the vortices lead to increased heating in the plage chromosphere (Yadav et al., 2021). This is because one would expect such heating to lead to enhanced electron densities and thus O i 1355 Å intensities. Detailed studies of synthetic O i 1355 Å profiles are required to further investigate this.
An alternative possibility is that there are strong velocity gradients at the interfaces of flux concentrations. These could, for example, occur because of the line-of-sight overlap between upward propagating shocks on neighboring flux concentrations. Such shocks are ubiquitous in plage and drive strong flows along dynamic fibrils (Hansteen et al., 2006). Perhaps a combination of both effects plays a role. Detailed comparisons with advanced numerical simulations of the chromosphere are required to further investigate these scenarios.
## 6 Conclusions
We have analyzed the properties of the O i 1355 Å spectral line, which is unique in that it is regularly observed with _IRIS_, optically thin, and formed in the chromosphere. We find that the line shows properties that are different from those of other spectral lines formed in the chromosphere. We find that intensities are strongest in plage and network regions and in the proximity of neutral lines where magnetic fields of opposite polarity are in close contact. Our data suggest that the O i 1355 Å intensity is often increased and appears to be very sensitive to the effects of cancellation and emergence of magnetic flux. Because of the optically thin nature of the O i 1355 Å line, this indicates that electron densities are locally enhanced, a signature of heating in the chromosphere. We also see a significant increase of O i 1355 Å intensities at the solar limb, as expected for an optically thin spectral line from the increased line-of-sight superposition of different structures. We find evidence for O i 1355 Å intensity structures being associated with shocks in plage and network, as well as with spicules, with some similarities to counterparts at low TR temperatures that are visible in Si iv lines.
The O i 1355 Å line width offers a unique view of the unresolved or non-thermal motions in the chromosphere, a quantity that is otherwise difficult to determine directly, requiring inversions of optically thick spectral lines whose formation is subject to non-LTE radiative transfer effects. We find, for both active regions and quiet Sun, that the line width modestly increases towards the limb, and along spicule-like structures that protrude away from plage and network flux concentrations. The modest center-to-limb variation suggests that there are unresolved motions both along the magnetic field and across it, with the latter being stronger.
Off the limb, the line broadening rapidly increases, compatible with a scenario in which turbulent, Alfvén wave, or vortical motions perpendicular to the magnetic field dominate the line width as they propagate upward with a roughly constant flux along spicular structures in which the density decreases with height.
The presence of strong vortical or Alfvénic motions in plage is further supported by a curious but significant enhancement of line width in between photospheric flux concentrations when viewed close to disk center. These results are compatible with the combined impact on the O i 1355 Å line width of inclined canopy fields and enhanced motions perpendicular to the magnetic field. Such inclined fields are expected to occur in between photospheric flux concentrations, while enhanced motions perpendicular to the field are predicted by various numerical simulations of vortices and Alfvén waves in plage. However, the lack of an obvious correlation between enhanced line width and O i 1355 Å intensity is not immediately compatible with the enhanced heating expected from vortices.
Our observations provide strict constraints on models of the chromosphere.
The authors are grateful to the observers at the _SST_ who obtained the _SST_ data and to Luc Rouppe van der Voort, who calibrated and processed the _SST_ time series. B.D.P. gratefully acknowledges support by NASA contract NNG09FA40C (_IRIS_). This research is supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262622. To analyze the data we have used IDL. Data are courtesy of _IRIS_. _IRIS_ is a NASA Small Explorer mission developed and operated by LMSAL, with mission operations executed at the NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Agency.
|
2301.00519 | Holistic Network Virtualization and Pervasive Network Intelligence for
6G | In this tutorial paper, we look into the evolution and prospect of network
architecture and propose a novel conceptual architecture for the 6th generation
(6G) networks. The proposed architecture has two key elements, i.e., holistic
network virtualization and pervasive artificial intelligence (AI). The holistic
network virtualization consists of network slicing and digital twin, from the
aspects of service provision and service demand, respectively, to incorporate
service-centric and user-centric networking. The pervasive network intelligence
integrates AI into future networks from the perspectives of networking for AI
and AI for networking, respectively. Building on holistic network
virtualization and pervasive network intelligence, the proposed architecture
can facilitate three types of interplay, i.e., the interplay between digital
twin and network slicing paradigms, between model-driven and data-driven
methods for network management, and between virtualization and AI, to maximize
the flexibility, scalability, adaptivity, and intelligence for 6G networks. We
also identify challenges and open issues related to the proposed architecture.
By providing our vision, we aim to inspire further discussions and developments
on the potential architecture of 6G. | Xuemin, Shen, Jie Gao, Wen Wu, Mushu Li, Conghao Zhou, Weihua Zhuang | 2023-01-02T04:15:33Z | http://arxiv.org/abs/2301.00519v1 | # Holistic Network Virtualization and Pervasive Network Intelligence for 6G
###### Abstract
In this tutorial paper, we look into the evolution and prospect of network architecture and propose a novel conceptual architecture for the 6th generation (6G) networks. The proposed architecture has two key elements, i.e., holistic network virtualization and pervasive artificial intelligence (AI). The holistic network virtualization consists of network slicing and digital twin, from the aspects of service provision and service demand, respectively, to incorporate service-centric and user-centric networking. The pervasive network intelligence integrates AI into future networks from the perspectives of networking for AI and AI for networking, respectively. Building on holistic network virtualization and pervasive network intelligence, the proposed architecture can facilitate three types of interplay, i.e., the interplay between digital twin and network slicing paradigms, between model-driven and data-driven methods for network management, and between virtualization and AI, to maximize the flexibility, scalability, adaptivity, and intelligence for 6G networks. We also identify challenges and open issues related to the proposed architecture. By providing our vision, we aim to inspire further discussions and developments on the potential architecture of 6G.
6G, network architecture, network virtualization, digital twin, AI for networking, networking for AI.
## I Introduction
### _Background_
With the ongoing worldwide deployment of the 5th generation (5G) networks, the technical community in wireless communications and networking has started looking into the 6th generation (6G) networks for 2030 and beyond. While the exact concepts and techniques that define 6G are not determined yet, visions, requirements, use cases, and candidate techniques are discussed in an increasing number of works, e.g., [1, 2, 3]. Among these discussions, some preliminary consensus regarding 6G emerges. For instance, in terms of main requirements of 6G, the urgency of improving security [4] and energy efficiency [5] is understood unanimously. For use cases of 6G, the combination of enhanced mobile broadband (eMBB), ultra-reliable and low-latency communications (uRLLC), and massive machine-type communications (mMTC) has been brought up, despite the different terminologies used in different works [6, 7]. As to candidate techniques, commonly mentioned examples include the integration of satellite, aerial, terrestrial, and underwater networks [8, 9], (sub)terahertz and visible light communications [10], and artificial intelligence (AI) empowered networks [11, 12, 13], to name a few.
One consensus deserving special attention is that 6G may need a brand-new network architecture. Driven by cost effectiveness and efficiency, the evolution of network architecture follows the evolving services provided by the networks. For instance, to introduce data service, a packet-switched core network component emerged in the 3G architecture as a complement to its circuit-switched counterpart for voice service [14]. Then, to accommodate the exponential growth of data traffic, 4G introduced a redesigned and simplified network architecture for a flat all-Internet protocol (IP) network with increased data rate and reduced latency [15]. In the era of 5G, as networks become more heterogeneous than ever while services become diversified, various network architecture innovations have been proposed towards flexible service-oriented networking, including software defined networking (SDN) [16], cloud radio access network (C-RAN) [17], and network slicing [18, 19]. Therefore, envisioned to support unprecedentedly diverse services with exceedingly stringent quality of service (QoS) or quality of experience (QoE) requirements, 6G will most likely need ground-breaking innovations in network architecture.
While conceiving an architecture for 6G, it is difficult to overlook two key elements, i.e., virtualization and AI. Network virtualization already plays an important role in the architecture of 5G [20]. The virtualization of resources, functions, and networks enables resource sharing, software implementation of network functions, and service-oriented networking, respectively, and thereby increases resource utilization while reducing the cost of deploying and operating networks. Virtualization reflects a trend of softwarization for flexible, scalable, and adaptive network management [21]. Therefore, it is foreseeable that virtualization will remain crucial in the architecture of 6G. As for the second key element, i.e., AI, a growing number of research teams worldwide are investigating AI-driven networks, and high expectation is placed on AI for empowering 6G [1, 22]. In comparison with heuristic or mathematical model based approaches for communications and networking, AI based approaches can handle complicated networking problems and obtain accurate results, provided that sufficient data are available for training. This advantage suits the increasingly heterogeneous and dynamic networks, where mathematical models may not exist or cannot accurately characterize the considered problems. Therefore, it is not difficult to predict the significance of AI in 6G.
### _Architectural Innovations Required for 6G_
Recognizing the importance of virtualization and AI, we further look into their limitations in the state-of-the-art to comprehend the architectural innovations required for 6G. Existing virtualization techniques mostly deal with _service provision_ in communication networks. For instance, network slicing highlights available network resources, service provision capability, and QoS satisfaction for various services [23]. While such virtualization, with a focus on service provision, enables 5G to handle diverse coexisting services, it may not suffice for 6G since the characteristics of end user _service demand_ can be the key to achieving user-centric networking. Therefore, in the future, virtualization should focus on both the service provision capability of a network and the service demand of end users in the network. This will lead to the virtualization of end users in addition to the virtualization of networks. As for AI, existing research on AI mostly addresses specific functions (e.g., routing [24]), layers (e.g., physical layer [25]), network segments (e.g., access networks [26]), or applications (e.g., autonomous driving [27]) of a network. Meanwhile, how to integrate AI into the network architecture across different layers or network segments needs further investigation. The scope and extent of AI-driven networks are yet to be determined.
As virtualization extends to cover both service provision and service demand while AI pervades every corner of the network, close connections between the two elements are foreseeable and can dominate the architectural needs of 6G. The first connection is through network and end user _data_[28]. Virtualization facilitates the characterization of network service provision capability, service performance, resource utilization and, in future networks, end user service demand. As a result, a vast amount of data will be generated, which can be exploited to characterize the network and end users. Such data, if properly managed, can empower both AI-driven networking and AI applications (e.g., object detection) [29]. The second connection is through network _control_. AI can be used to make decisions pertinent to virtualization, including service admission, slice establishment, dynamic virtual network function orchestration, and resource scheduling. In the future, AI can also help control data collection for the virtualization of end users and extract features of virtualized end users. Thus, AI has the potential to improve the efficacy and adaptivity of virtualization. The third connection is through network _resources_. A main motivation of network virtualization is to coordinate resource sharing among different services and thereby improve network resource utilization and service satisfaction. AI-driven networking can target efficient utilization of network resources. As both virtualization and AI consume computing, communication, and storage resources, they may compete for network resources. However, AI has a potential to increase the efficiency of virtualization through intelligent network planning and operations, while virtualization may increase the efficiency of AI through proper data provision and management. As a result, they should work together to enhance network resource utilization and service quality.
A rudiment of the above connections through data and control can be observed in the existing architecture of 5G. For instance, the 3rd Generation Partnership Project (3GPP) introduces a network data analytics function (NWDAF) for 5G in Release 15 [30] and enablers for network automation (eNA) in Release 16 [31]. The architecture design provides a framework for the NWDAF to collect data from other network functions (such as policy control and network slice selection functions) and provide analytics (such as data traffic statistics and predictions) back to these network functions. In 6G, the scope and level of both data collection and analytics will expand significantly. Most likely, network data analysis, instead of being limited to one or two specific functions, will be AI-driven and available everywhere in a network. Similarly, the data available for network management, instead of being limited in type, content, or format, should provide information of the network and end users as needed. Such expectations can be fulfilled by extending the roles of virtualization and AI in the network architecture.
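The collect-analyze-feedback loop of the NWDAF described above can be caricatured in a few lines of purely illustrative Python. This sketch is not the 3GPP service-based interface, and every name in it is invented:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AnalyticsFunction:
    """Toy stand-in for a network data analytics function (not the 3GPP API)."""
    subscribers: Dict[str, Callable[[dict], None]] = field(default_factory=dict)
    samples: List[dict] = field(default_factory=list)

    def collect(self, nf_id: str, record: dict) -> None:
        self.samples.append({"nf": nf_id, **record})      # data from a network function

    def subscribe(self, nf_id: str, callback: Callable[[dict], None]) -> None:
        self.subscribers[nf_id] = callback                # analytics consumer registers

    def publish(self) -> None:
        # toy analytic: mean traffic over all collected records
        load = sum(s.get("traffic", 0.0) for s in self.samples) / max(len(self.samples), 1)
        for callback in self.subscribers.values():
            callback({"predicted_load": load})
```

A policy control function, for instance, would register a callback via `subscribe` and adapt its rules whenever `publish` delivers a new load estimate.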
### _Our Vision_
Our vision of network architecture for 6G is based on the importance of virtualization and AI, their limitations in existing networks, and the essential connections between them. Specifically, we aim to design a network architecture that i) supports virtualization of the network and end users from the perspectives of service provision and service demand, respectively, ii) integrates AI in various network functions, layers, segments, and applications under a unified architecture, and, more importantly, iii) facilitates the interplay between virtualization and AI, enabling their coexistence, integration, and mutual enhancement. To consolidate the vision, we raise the following three key questions:
* How to further advance virtualization beyond network slicing?
* How to enable AI into every facet of a network?
* How to effectively integrate virtualization and AI through network architecture design?
In pursuit of answering the preceding questions, we develop the ideas of holistic network virtualization and pervasive network intelligence for 6G network architecture. _Holistic network virtualization_ advances virtualization toward 6G by incorporating network slicing and digital twin paradigms. The former enables service-centric network management, and the latter adds a user-centric perspective to virtualization for future networks. _Pervasive network intelligence_ enables generic integration of AI into a network from the perspectives of AI for networking and networking for AI. The former emphasizes the role of AI in network management, while the latter leverages network design to support AI applications. In this tutorial paper, for both holistic network virtualization and pervasive network intelligence, we survey existing studies, present our network architecture designs, and illustrate their benefits. Unifying these two components, we further introduce an overall conceptual network architecture, which fulfills our vision of unprecedentedly flexible, scalable, adaptive, and intelligent networks for 6G.
This tutorial paper can provide useful information and benefit readers from three aspects. First, for readers who are interested in the historical and current developments of virtualization and AI techniques, we survey the literature and provide a review of both in the context of communication networks. Second, for readers who are exploring future directions in virtualization and AI, we propose original ideas for advancing them toward 6G. Specifically, we illustrate designs and ideas, such as incorporating digital twins for holistic network virtualization, connected AI for network management, AI slices with training and inference separation, and hybrid data-model driven methods, throughout this paper. Last, after introducing our vision of holistic network virtualization and pervasive network intelligence, we present open issues and challenges to inspire further research.
There are a few surveys on virtualization and AI in the literature [21, 32, 33, 34]. Regarding virtualization, Minerva _et al._ present existing digital twin based applications in the context of IoT [32], and another survey introduces the key enabling technologies and design principles of network slicing [21]. Regarding AI, Boutaba _et al._ undertake a comprehensive survey on AI applications in various areas of networking [33], and another survey focuses on deep learning (DL) based applications in wireless networking [34]. In comparison, this tutorial paper focuses on the vision of 6G. Specifically, after introducing state-of-the-art virtualization and AI techniques, we propose original designs, including holistic network virtualization and pervasive network intelligence, to establish a novel conceptual architecture for 6G networks.
### _Structure of the Paper_
The structure of this tutorial paper is shown in Fig. 1.
Section II illustrates our vision of 6G networks from the aspect of holistic network virtualization. We review existing network virtualization concepts and techniques in Subsection II-A. Then, we introduce end user virtualization with a focus on digital twins in Subsection II-B. Lastly, we present our idea of holistic network virtualization, highlighting a six-layer virtualization architecture, in Subsection II-C.
Section III illustrates our vision of 6G networks from the aspect of pervasive network intelligence. Subsection III-A presents an overview of representative AI techniques that are potentially useful for 6G networks. Subsection III-B introduces the motivation for pervasive network intelligence and presents a four-level AI architecture. Subsections III-C and III-D summarize the existing research and present our ideas on AI for networking and networking for AI, respectively.
Section IV integrates holistic network virtualization and pervasive network intelligence and presents our overall vision for 6G. Subsection IV-A reviews related studies on architectures for 6G. Subsection IV-B introduces a conceptual architecture for 6G networks that incorporates holistic network virtualization and pervasive network intelligence. Subsections IV-C and IV-D discuss the components, subsystems, and potential implementation of the proposed architecture. Subsections IV-E to IV-G elaborate on three types of interplay enabled by the proposed architecture, i.e., the interplay between digital twin and network slicing, between data-driven and model-driven methods, and between virtualization and AI, respectively.
Section V identifies key challenges and open issues related to the proposed network architecture, and Section VI concludes this research.
Table I lists the acronyms used in this paper.
## II Holistic Network Virtualization
In this section, we first review virtualization techniques in existing networks and their benefits. Then, we introduce the idea of holistic network virtualization.
### _Network Virtualization_
The concept and techniques of network virtualization have been evolving over more than three decades [35]. Early research on network virtualization includes virtual local area networks motivated by facilitating different types of operations (services) in distributed systems [36], as well as providing flexible network control and improving link utilization [37].
TABLE I: List of Acronyms

| Acronym | Definition |
| --- | --- |
| 3GPP | 3rd Generation Partnership Project |
| 5G | 5th Generation |
| 6G | 6th Generation |
| AI | Artificial Intelligence |
| AL | AI Level |
| AP | Access Point |
| API | Application Programming Interface |
| ARQ | Automatic Repeat-Request |
| BS | Base Station |
| C-RAN | Cloud Radio Access Network |
| DL | Deep Learning |
| DNN | Deep Neural Network |
| DRL | Deep Reinforcement Learning |
| eMBB | Enhanced Mobile Broadband |
| FL | Federated Learning |
| IoT | Internet of Things |
| IP | Internet Protocol |
| ITU | International Telecommunication Union |
| LSTM | Long Short-Term Memory |
| LTE | Long Term Evolution |
| mMTC | Massive Machine-Type Communications |
| MIMO | Multiple-Input Multiple-Output |
| ML | Machine Learning |
| NFV | Network Function Virtualization |
| NN | Neural Network |
| NWDAF | Network Data Analytics Function |
| QoE | Quality of Experience |
| QoS | Quality of Service |
| RAN | Radio Access Network |
| SBS | Small Base Station |
| SDN | Software Defined Networking |
| SNR | Signal-to-Noise Ratio |
| UAV | Unmanned Aerial Vehicle |
| URLLC | Ultra-Reliable and Low-Latency Communications |
| VL | Virtualization Layer |
| VM | Virtual Machine |
| WSN | Wireless Sensor Network |
Another example of network virtualization is virtual private networks, which establish efficient and secure communication links to connect geographically dispersed end users. Over time, the desire for programmable network management has extended to the objective of enhancing network architecture.
The advancement in cloud computing has propelled recent development in network virtualization, including network function virtualization (NFV) and network slicing. With NFV, software instances running on virtual machines at general computing servers replace customized and proprietary hardware for various network functions [38]. At the network core, NFV applies to functions such as switching, firewall, deep packet inspection, and session border controller [39]. At radio access networks, NFV applies to frame generation, modulation, carrier allocation, etc. [40]. The realization of NFV becomes an enabler for network slicing, which is a key network architecture innovation in 5G. Network slicing emphasizes a service-oriented perspective in network management by creating multiple end-to-end virtual networks, i.e., slices, for different services on top of shared physical network infrastructure. With network slicing, network resources are first reserved for respective services in network planning stages and later allocated to individual users in network operation stages [11]. The creation, adjustment, and termination of slices are based on the varying spatiotemporal distribution of service demands to provide a high level of flexibility and adaptivity in network management [23].1
Footnote 1: SDN and C-RAN are also closely related to network virtualization since virtualization significantly simplifies and expedites their realization in modern wireless networks.
Virtualization can be applied on different levels and scales in a network. Existing techniques include virtualization at _node, link, resource_, and _network_ levels. Virtual nodes are abstractions of substrate nodes in a network such as servers, routers, and switches, and typical examples of node virtualization are storage and computing server virtualization [41, 42]. Virtual links are the logical channels that interconnect virtual nodes. Virtual resources are abstractions of computing, memory, storage, and communication resources in a network [43], while physical resources at different locations can form virtual resource pools [38]. For instance, the virtualization of a network function is the execution of a network control or service function by running software, supported with necessary resources. A virtual network is the combination of virtual nodes and links with proper virtual resource allocation for a service request to meet its QoS requirements, supported by necessary networking protocols. Besides the aforementioned works, more representative research works on node, link, resource, and network virtualization are summarized in Table II.
Regardless of its level and scale, virtualization in the context of networking typically demonstrates the following characteristics:
* Abstraction provides a high-level overview of a network while hiding details of the underlying physical network entities (nodes, links, or networks) or resources [63]. This simplifies network management and facilitates flexible service provision;
* Multiple virtual entities corresponding to a shared physical entity co-exist, or multiple virtual resource pools co-exist on the same physical resource pool [35]. This enables service-oriented virtual networks and improves network resource utilization efficiency;
* Coexisting virtual entities corresponding to the same physical entity should function independently [64]. This is necessary for guaranteeing service reliability, security, scalability, and QoS satisfaction.
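These three characteristics can be made concrete with a toy sketch (hypothetical classes, not an existing virtualization API): the physical node is abstracted into a single capacity figure, several virtual nodes coexist on it, and admission control keeps their resource shares independent of one another.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PhysicalNode:
    """Toy substrate node; `capacity` is the abstraction exposed to virtual entities."""
    capacity: float
    shares: Dict[str, float] = field(default_factory=dict)

    def create_virtual_node(self, name: str, demand: float) -> bool:
        free = self.capacity - sum(self.shares.values())
        if demand <= free:
            self.shares[name] = demand   # coexistence on the shared substrate
            return True
        return False                     # isolation: no virtual node can over-commit
```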
Both academia and industry have spent a significant amount of effort on network virtualization. For virtualizing core networks, some works leverage SDN techniques to separate the control and data planes through different protocols or application programming interfaces (APIs), e.g., OpenFlow [65]. Furthermore, network virtualization has been extended to radio access networks (RANs), and several frameworks for RAN virtualization have been proposed. A SoftRAN framework enables both centralized and distributed RAN control based on the time sensitiveness of control decisions [66]. Another framework, FlexRAN, offers a hierarchical architecture for real-time RAN control and incorporates a flexible API to separate control and data planes in RANs [67].
Fig. 1: The structure of this paper.
Initiated by AT&T and China Mobile, Open-RAN (O-RAN) is proposed as an open-source and open-interface platform to support RAN virtualization [68], which can incorporate AI and provide APIs for data-driven networking [69, 70, 71].
The adoption of virtualization techniques renders modern networks programmable, flexible, and scalable, which significantly increases cost effectiveness in network deployment and operation. Due to these benefits, it is foreseeable that advanced virtualization techniques will be essential to 6G. Meanwhile, the existing scope of network virtualization is limited in the sense that virtualization techniques mostly focus on network infrastructure and resources, yet less attention is given to end users. In 6G, end user virtualization will become necessary for two reasons. First, with increasingly diverse end user devices, resource-demanding services, and heterogeneous and dynamic networks, providing QoE guarantee for end users will become more challenging in the era of 6G. Accurate characterization and abstraction of end users, which necessitate end user virtualization, can be a precondition to QoE satisfaction. Second, as AI will be a highlight of 6G, extensive user data are required to fuel AI services and AI-based network management. Given such need for data, end user virtualization can be a competitive approach for collecting, managing, and processing data from end users.
### _End User Virtualization_
Until recently, only a few works study end user virtualization in the context of networking. One early example relevant to end user virtualization is network-hosted avatars, i.e., virtual agents, of end users for applications such as file downloading when the users are offline [72]. Another example is virtual objects, proposed as a component in Internet of things (IoT) platforms [73]. The motivation is to handle the heterogeneity of physical objects (end users) via virtualization and to facilitate the provision of services to end users.
As a potential paradigm to enable end user virtualization, digital twin attracts much attention lately. The concept of digital twin was originally conceived by Michael Grieves for product life-cycle management in industry in 2003 [74, 75]. Later, NASA and U.S. Air Force Vehicles developed a digital twin paradigm for vehicles to forecast their remaining usable life and the mission success probability [76]. A digital twin is characterized by a full digital representation of a physical object or a process and real-time synchronization between the physical object or process and its corresponding digital replica. Digital twins can contain a large volume of data from physical objects or processes for advanced analytics, and the analytical results can be used to improve the performance of the corresponding physical objects or processes. Exemplary digital twins in general application scenarios, as well as potential requirements for the digital twins to enable big data analytics, are discussed in [77]. Potential implementation of digital twins representing IoT devices in industrial systems is proposed in [78]. Other representative research works on digital twins are summarized in Table III.
Most existing research on digital twins in the network field focuses on applications, e.g., distributed clock synchronization [79] and computation offloading [80].
TABLE II: Some Representative Works on Node, Link, Resource, and Network Virtualization

| Type | Work | Scenario | Research Focus | Objective |
| --- | --- | --- | --- | --- |
| Node | [44] | Edge computing | Virtual edge node placement | Low-cost placement and fast response to user requests |
| Node | [45] | Cloud computing | Virtual machine (VM) placement | Reliable VM placement and routing |
| Node | [46] | IP network layer | Virtual node/route as IP overlay | Practical IP-level resilience to link failures |
| Node | [47] | Wireless sensor network (WSN) | Architecture for sensor virtualization in WSN | Multiple applications share the same WSN |
| Node | [48] | C-RAN | Clustering of access points | Forming user-specific virtual base stations given QoS requirements |
| Link | [49] | WSN | Virtual backbone construction | Enabling low-complexity backbone construction with performance guarantee |
| Link | [50] | Generic | Virtual link embedding | Reducing congestion probability given bandwidth demands |
| Link | [51] | Internet service provider (ISP) network with SDN | Virtual link provision | Maximizing network throughput subject to QoS and robustness constraints |
| Resource | [52] | Cloud computing | Composite virtual resource mapping | Efficient mapping of computing and networking resources to substrate resources within networked clouds |
| Resource | [53] | Cloud computing | VM migration | Low-cost transferring of VM storage and memory during VM migration over wide area networks |
| Resource | [54] | Radio access network (RAN) | Radio resource virtualization | Maximizing throughput with fairness among multiple mobile network operators |
| Resource | [55] | RAN | Radio resource virtualization | Delay-bounded QoS provisioning through radio resource virtualization |
| Resource | [56] | Vehicular network | Resource sharing among slices | Reusing communication and caching resources to support applications with different QoS requirements |
| Network | [57] | 5G core network with SDN | Network function chain embedding | Minimizing embedding cost subject to network resource constraints |
| Network | [58] | C-RAN | Slice request admission | Maximizing total flow in the network subject to network resource constraints |
| Network | [60] | Heterogeneous wireless network | Dynamic radio resource slicing | Maximizing network utility through optimal bandwidth slicing and user association |
| Network | [61] | 5G RAN | Radio resource allocation in RAN slicing | Satisfying QoS requirements by proper resource mapping and scheduling |
| Network | [62] | IoT | Service-oriented authentication | Privacy-preserving slice selection and secure access of service data |
In comparison, the study of digital twins from the perspective of network architecture and network management is limited at the moment.2 A digital twin based cloud-centric network architecture is proposed in [83], where digital twins of end users hosted at the network edge play the role of communication assistants or network data loggers.
Footnote 2: Some works focus on distributed networks, e.g., vehicular networks, and adopt digital twins as an approach for network virtualization instead of end user virtualization [81, 82].
Digital twin appears to be an intuitive solution to end user virtualization. Nevertheless, extending the existing network virtualization, represented by network slicing, to end users is not straightforward, given the target of flexible and efficient network management and service provision. For instance, it would be overly simplistic to use node-level virtualization and represent end users merely as virtual data sources or sinks in a virtual network. Moreover, while end users may possess communication and computing resources, resource-level virtualization does not characterize user-specific properties, e.g., location and mobility, or service-specific properties, e.g., data traffic variations, of end users. It is necessary to understand the potential _benefits, requirements_, and _implementation_ of digital twin based end user virtualization, with a particular focus on the integration of digital twin and existing network virtualization frameworks.
There are potentially two-fold _benefits_ of digital twin based end user virtualization, i.e., extensive end user data and powerful network emulation capability. While the virtualization of network infrastructure and resources characterizes the network status and _service provision_ capabilities, digital twins of end users can provide extensive data regarding _service demand_ and user QoS/QoE satisfaction. Such data can play a significant role in network management through facilitating well-informed network planning and operation decisions. Moreover, the real-time or near real-time synchronization between end users and their digital twins enables powerful network emulations. For instance, multiple instances of the same virtual network can be created, with real-time end user information, e.g., location and data traffic volume, provided to all instances through synchronized end user digital twins.3 Different network planning or operation strategies can be applied and emulated in different instances, while each instance remains synchronized with the real-world network environment through the information provided by the digital twins of end users.
Footnote 3: The emulation can apply to a virtual network segment, e.g., the network edge.
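A rough sketch of such an emulation loop is given below; the objects and methods (`apply_twin_state`, `performance`, and the strategy callables) are hypothetical placeholders. Every instance receives the same synchronized twin updates, while each applies a different candidate strategy:

```python
import copy

def emulate(virtual_network, strategies, twin_updates):
    """Run one emulation instance per candidate strategy on a shared twin stream."""
    instances = [copy.deepcopy(virtual_network) for _ in strategies]
    for update in twin_updates:              # synchronized end-user state stream
        for net, strategy in zip(instances, strategies):
            net.apply_twin_state(update)     # hypothetical method: sync with reality
            strategy(net)                    # hypothetical candidate operation rule
    return [net.performance() for net in instances]   # hypothetical metric
```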
To take part in network virtualization, digital twins of end users should satisfy the following _requirements_:
* Flexible: The abstraction of end users into digital twins must be sufficiently flexible to represent heterogeneous physical devices (such as smartphones, vehicles, and industrial sensors) and serve various applications (such as virtual reality gaming, autonomous driving, and industrial automation);
* Compatible: The end user virtualization based on digital twins should complement and enhance the state-of-the-art network virtualization, i.e., network slicing. For instance, digital twins of end users should provide data to support various network slices, while each slice may only have access to a subset of data pertinent to that slice;
* Customizable: The attributes of digital twins should be customized and updated based on the corresponding service, network traffic, resource utilization, etc. For instance, the amount and types of data included in a digital twin should be adaptable rather than fixed. In addition, while the focus of digital twins is placed on end users, digital twins should be able to represent other network entities, e.g., unmanned aerial vehicle (UAV) mounted mobile base stations (BSs).
In addition, network resource consumption from creating and maintaining digital twins should be taken into account.
Noting the aforementioned benefits and requirements, we aim to answer the following key questions with respect to the _implementation_ of digital twins:
* Location: Where should digital twins be hosted in a network?
* Affiliation: Should digital twins exist within or outside network slices?
* Data: What data attributes pertinent to networking should be included in a digital twin? How much historical data should be included for a specific attribute? Should predicted user information be included?
* Synchronization: How to determine the frequencies of updating various data entries of a digital twin by acquiring new data from the physical object?
* Control: Who should determine and update digital twin models and based on what information?
In the next subsection, we propose a novel conceptual architecture for holistic network virtualization, which integrates digital twins and network slicing, and delve into the above questions.
### _Holistic Network Virtualization_
We propose a novel virtualization architecture, i.e., holistic network virtualization, for integrating digital twins into network virtualization, in order to improve network management and service provision capabilities. The proposed virtualization architecture consists of six layers and is illustrated in Fig. 2, in which virtualization layer (VL) 1 is the bottom layer for data collection and VL 6 is the top layer for digital twin model control. The outline of each layer is given as follows:
_VL 1 - Data Collection_: Data required for the digital twin representation of selected end users are collected from the corresponding physical entities following prescribed data precision, uploading method, collection frequency, etc. The data are collected via access points, and the data collection is controlled by local controllers deployed at network edge;
_VL 2 - Level-One Abstraction_: Based on the current _digital twin model_ from the digital twin model control layer (i.e., VL 6), which determines the content and format of data included in every digital twin, digital twins are formed and updated using data collected by VL 1. The abstraction may include the aggregation of data from different sources, the update of historical data, and the creation of digital twins for new or additional end users. The digital twins created in this layer are _level-one digital twins_, representing individual end users, and hosted at servers connected to local controllers;
_VL 3 - Local Processing and Control_: The data from level-one digital twins are processed at network edge for predicting behaviors of individual users, such as user data traffic and mobility patterns, and making user-level service decisions, such as computing offloading, content delivery, and link-layer protocol adaptation. Local processing may also include emulations of an edge network or a part of it based on level-one digital twins. Local control may include further data aggregation from level-one digital twins for VL 4, the migration of digital twins based on user mobility, and the selection of end users for digital twin representation. Similar to the case of VL 2, the local processing and control occur at servers affiliated with local controllers;
_VL 4 - Level-Two Abstraction_: The aggregated data from VL 3 is sorted into service-specific data for respective network slices in VL 4. Additional data that describe slice configuration, slice resource utilization, slice service level agreement satisfaction, etc., are generated for each slice. Then, the aforementioned data are abstracted to form or update the digital twins of various slices. The digital twins created in this layer are _level-two digital twins_, which are associated with virtual networks (slices). The level-two digital twins are hosted at servers connected to the centralized controller of the network;
_VL 5 - Slice-Level Processing and Control_: The data from level-two digital twins of network slices are processed for service-specific prediction, e.g., spatiotemporal service demand distribution forecast, or slice-level decision making, e.g., planning and operation decisions. Slice-level processing may include emulations of an end-to-end slice or a part of it based on level-two digital twins. Slice-level control may include slice admission, resource reservation, and slice service coverage control. Similar to the case of VL 4, the slice-level processing and control occur at servers affiliated with the centralized controller of the network;
_VL 6 - Digital Twin Model Control_: This layer determines and updates the models of level-one and level-two digital twins based on available network resources for digital twins, the performance of network management and service provision decisions derived based on the current digital twins, and the dynamic spatiotemporal service demands. For instance, VL 6 determines data precision, synchronization frequencies for different data attributes, the amount of historical data contained in the digital twins for each attribute, and the inclusion of predicted user information. In addition, this layer decides the subset of data in level-one digital twins that each slice can access. The digital twin model control also occurs at servers affiliated with the centralized controller of the network.
The level-one digital twin model configured by VL 6 may include the following data, which shall be collected by the local controllers from end users at VL 1: (1) connectivity and channel information, such as the AP(s) that an end user is connected to and the channel state information for each connection; (2) service information, such as active service types, data traffic volume of each service, and QoS satisfaction of each service; (3) user information, such as user profile, user location and mobility, network resources allocated to the user, and the local computing and caching capabilities of the user; and (4) additional use case-specific information, such as motion sensor readings for augmented reality interactive gaming or operation log for industrial IoT devices. The level-two digital twin model configured by VL 6 may include the following data, which shall be collected or generated by the centralized controller: (1) slice service demand, such as the number of service requests and the spatiotemporal service request distribution; (2) slice resource configuration, such as the reserved communication, computing, and caching resources for the slice; (3) slice performance, such as the slice service level agreement satisfaction, slice resource utilization, and slice energy consumption; (4) slicing strategy, such as the method or algorithm used for network function deployment, resource reservation, and resource scheduling; (5) additional use case-specific information, such as UAV trajectory configuration for UAV-assisted networks. Note that different end user digital twin models are applicable to different types of end users, and each network slice may have a uniquely defined slice digital twin model. For example, the digital twins of vehicles and industrial IoT devices most likely contain different data, and the digital twin models may differ between slices for industrial IoT and those for smart home or between slices of different network operators. Accordingly, the need for customization necessitates the digital twin model control in VL 6.

TABLE III: Some Related Works on Digital Twins

| **Work** | **Application** | **Type of Physical Object** | **Role of Digital Twin** | **Target of Using Digital Twin** |
|---|---|---|---|---|
| [84] | Underwater network for ocean observation | Underwater sensors/actuators | Monitoring and testing the observation system | Visualizing an ocean observation system and enhancing simulations |
| [85] | Edge computing for internet of vehicles | Vehicles | Collecting and sharing information about vehicles and surroundings | Empowering computing offloading by facilitating data analytics |
| [86] | 5G network slicing | Network slices | Predicting and monitoring slice performance | Assisting autonomous network slicing |
| [87] | Smart factory | Workstations in a conveyor system | Evaluating and validating control strategies | Implementing intelligent conveyor systems |
| [88] | Smart city | Road infrastructure | Monitoring roads and detecting vehicles/spots | Supporting smart city applications through gathering and processing data |
| [89] | IoT | Objects with sensing capability | Storing data for detecting events and recognizing behaviors | Facilitating synthetic sensing through situation awareness and explainability |
| [90] | Industry 4.0 | Industrial machinery | Generating training datasets and simulations | Achieving accurate anomaly detection with limited labelled data |
| [91] | Smart healthcare | Patients | Handling data for analysis and developing AI models | Improving healthcare operations |
| [92] | Industry 4.0 | Technical assets (e.g., machine, environment) | Integrating knowledge from model and data for simulations | Enhancing simulation-based systems engineering |
| [93] | Mobile edge computing | Real-world network environments | Training learning algorithms and monitoring network environments | Enabling learning for optimizing user association, resource allocation, and offloading |
| [94] | Industry 4.0 | Products, workstations, conveyor belts | Data sharing and control of security-critical processes | Building a security architecture based on state replication and synchronization |
| [95] | Cyber-physical systems | Connected physical devices | Monitoring, diagnostics, and prognostics | Supporting applications such as context-aware interaction and driving assistance |
| [96] | IoT | Generic physical systems | Managing context information and self-adapting | Increasing autonomy and enhancing cooperation through autonomic digital twins |
| [97] | Welding manufacturing | Human-robot interaction systems | Monitoring welding robots and enabling simulations | Visualizing welder behavior and training welders |
| [98] | Smart manufacturing | Job shop scheduling systems | Obtaining scheduling data and simulating schedules | Enabling timely response and reducing scheduling plan deviation |
| [99] | Smart manufacturing | Manufacturing systems | Predicting and verifying the system performance | Increasing autonomy and enhancing fault diagnosis |
| [100] | Smart building | Photovoltaic energy conversion units | Estimating the status of photovoltaic energy conversion units | Improving the accuracy of fault detection |
| [101] | Internet of vehicles | Vehicles and road side units | Monitoring the real-time status of vehicles and road side units | Supporting network resource management |
| [102] | Mobile edge caching | Vehicles | Capturing the social characteristics of vehicles | Improving the effectiveness of cache management |
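To make the level-one model concrete, below is a minimal sketch of how such a digital twin could be represented in software. All class and field names are illustrative assumptions for this description, not part of any standard or existing implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class LevelOneDigitalTwin:
    """Illustrative level-one digital twin of a single end user; the fields
    mirror the four data categories described above."""
    user_id: str
    # (1) Connectivity and channel information
    serving_aps: List[str] = field(default_factory=list)
    channel_state: Dict[str, float] = field(default_factory=dict)    # AP id -> SNR (dB)
    # (2) Service information
    active_services: Dict[str, float] = field(default_factory=dict)  # service -> traffic (Mb/s)
    qos_satisfaction: Dict[str, float] = field(default_factory=dict)
    # (3) User information
    location: Tuple[float, float] = (0.0, 0.0)
    allocated_bandwidth_mhz: float = 0.0
    local_cpu_ghz: float = 0.0
    # (4) Use case-specific information, kept open-ended so VL 6 can customize it
    extra: Dict[str, object] = field(default_factory=dict)

    def update(self, **attrs) -> None:
        """Synchronize selected attributes from newly collected data (VL 1/VL 2)."""
        for name, value in attrs.items():
            setattr(self, name, value)

# Example: a VL 2 update after new data arrive from the physical end user.
twin = LevelOneDigitalTwin(user_id="ue-001")
twin.update(location=(43.47, -80.54), channel_state={"ap-7": 18.2})
```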
In the conceptual virtualization architecture, VL 1 to VL 3 interface with the local controllers in the network, VL 4 and VL 5 interface with the centralized controller of the network, and VL 6 interfaces with both the local controllers and the centralized controller. This architecture fully exploits the two benefits of digital twins, i.e., providing extensive data for network management and enabling powerful network emulations. It also satisfies the aforementioned requirements for digital twins in terms of flexibility, compatibility, and customization. Last but not least, it answers the key questions regarding the implementation of digital twins raised in Subsection II-B.
With the architecture design in Fig. 2, digital twins and network slicing are integrated in the idea of holistic network virtualization. Network slicing incorporates existing network virtualization techniques such as NFV. Digital twins enhance network slicing by providing organized and customized end user data to slices and by further abstracting slices into level-two digital twins. The design of two-level digital twins avoids extra resource consumption from creating and maintaining multiple digital twins of the same user for different slices and the resulting burden of synchronizing them. Instead, each slice has access to a subset of data from level-one digital twins pertinent to either the corresponding service or general user information such as location and mobility, and the pertinent data are further aggregated to the level-two digital twins for that slice. In this architecture, network slicing conforms to service-centric network management, while digital twins add a user-centric perspective to the virtualization. Specifically, level-one digital twins characterize end users and their service demands, and level-two digital twins characterize network service provision capability, network performance, and network resource utilization. Overall, the digital twin paradigm and network slicing jointly support network management and service provision, while the network configures digital twins and network slices as needed, depending on network dynamics.
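As a toy illustration of this access model, the sketch below (building on the hypothetical `LevelOneDigitalTwin` class sketched earlier) filters a level-one twin down to the attributes a given slice is entitled to and aggregates per-user views into a slice-level summary; the policy table and attribute names are assumptions for illustration only.

```python
# Hypothetical per-slice access policy: which level-one attributes a slice may read.
SLICE_ACCESS_POLICY = {
    "industrial_iot": ["channel_state", "active_services", "extra"],
    "smart_home": ["active_services", "qos_satisfaction"],
    "vehicular": ["location", "channel_state", "active_services"],
}

def slice_view(twin, slice_name: str) -> dict:
    """Expose to a slice only the subset of level-one twin data pertinent to it."""
    allowed = SLICE_ACCESS_POLICY.get(slice_name, [])
    return {name: getattr(twin, name) for name in allowed}

def aggregate_to_level_two(twins, slice_name: str) -> dict:
    """Aggregate per-user views into slice-level statistics for the level-two twin."""
    views = [slice_view(t, slice_name) for t in twins]
    total_traffic = sum(sum(v.get("active_services", {}).values()) for v in views)
    return {"slice": slice_name, "num_users": len(views),
            "total_traffic_mbps": total_traffic}
```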
Fig. 2: The conceptual six-layer virtualization architecture for holistic network virtualization.
### _Holistic Network Virtualization: A Summary_
In this section, we have reviewed the existing scope and techniques of network virtualization, identified the insufficiency of current network virtualization, introduced the idea of holistic network virtualization to incorporate network and end user virtualization, and developed a six-layer virtualization architecture for holistic network virtualization.
The virtualization of resources, network functions, and networks in 5G is expected to remain important in 6G, since they contribute to flexible and adaptive network management. Meanwhile, the virtualization techniques in 5G, represented by network slicing and NFV, mostly focus on network virtualization from the perspective of service provision. In 6G, it will be essential to extend the scope of virtualization and incorporate end user virtualization.
The digital twin paradigm is a promising solution to end-user virtualization. In 6G, digital twins can be used for characterizing the status and the service demand of individual end users. The study of digital twins in the context of 6G networks is still in an initial stage, and various definitions or implementations exist. In our vision of holistic network virtualization, digital twins are configurable assemblages of data, including both historical and real-time data and both collected and generated data, for describing end users, infrastructure, or network slices. Moreover, the corresponding data collection and processing are also configurable.
To consolidate holistic network virtualization, we have proposed a six-layer virtualization architecture for 6G. The architecture provides a reference design for systematically integrating digital twins and network slicing and answers important questions related to digital twins in 6G networks, including where they are hosted, what data they contain, and how they are managed.
## III Pervasive Network Intelligence
Pervasive network intelligence is the second element of our vision for 6G. In this section, we first present an overview of existing AI techniques. Then, we introduce the motivation and propose a four-level architecture for pervasive network intelligence. Next, we elaborate on the idea of pervasive network intelligence from the perspectives of AI for networking and networking for AI, and review related works. Rather than surveying specific AI techniques, this section focuses on the architecture and methods of pervasive network intelligence.
### _AI Techniques: An Overview_
The idea of AI is to design intelligent machines or systems to demonstrate human intelligence and perform tasks as humans do or even better [131]. The advancement of machine learning (ML) has facilitated the success of AI in both academia and industry. Applications supported by ML techniques, such as computer vision and natural language processing, can achieve beyond human-level accuracy. Lately, for its potential in enabling intelligent networks, AI has received significant attention in the research field of wireless networks.
ML techniques can be categorized into three types: unsupervised learning, supervised learning, and reinforcement learning. In terms of learning structures, the techniques can be subdivided into centralized and decentralized techniques. We list common ML techniques used in wireless networks in Table IV.
Unsupervised learning uncovers features and patterns hidden in data for data analysis, such as prediction, without using a labeled dataset. One popular application of unsupervised learning techniques is data clustering, e.g., _k_-means [103] and mixture models [105], for solving network planning problems, such as cluster-forming in wireless sensor networks [104] and small-cell deployment [132]. Neural networks can be adopted to facilitate novel unsupervised learning algorithms. For example, neural network-based autoencoders can learn the compressed features of input data with a limited number of neurons and can be leveraged for data prediction, such as traffic forecasting [107].
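To make the clustering use case concrete, the following self-contained sketch runs plain k-means on synthetic user coordinates, e.g., to place small cells at the resulting centroids; the data and parameters are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 50, seed: int = 0) -> np.ndarray:
    """Plain k-means: returns k centroids, e.g., candidate small-cell sites
    for the user locations in `points`."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each user to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned users.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

# Example: 200 users in a 1 km x 1 km area, 5 candidate cell sites.
users = np.random.default_rng(1).uniform(0.0, 1000.0, size=(200, 2))
sites = kmeans(users, k=5)
```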
Supervised learning exploits the mapping between the input and output data via a given labeled dataset. Supervised learning techniques can derive a mapping function, i.e., a training model, from the input data to the labeled output data in the dataset. Through applying a training model, the output corresponding to a new input can be evaluated, which can be utilized for decision making or prediction. A typical method for supervised learning is using deep neural networks (DNNs). DNNs use layers of artificial neurons to estimate a non-linear correlation between the input and the output data and iteratively improve the estimation accuracy. There have been many successful applications of DNN techniques in communications. For example, convolutional neural networks (CNNs) utilize convolutional and pooling layers to identify the correlation of multi-dimensional input data and have been applied in modulation classification [112]; recurrent neural networks (RNNs) explore the correlation among a sequence of the data and have been widely adopted for traffic prediction [113] and wireless channel modeling [133].
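The snippet below sketches RNN-based traffic prediction in PyTorch, assuming a univariate traffic series cut into fixed-length windows; the architecture, hyperparameters, and synthetic data are illustrative choices, not those of the cited works.

```python
import torch
import torch.nn as nn

class TrafficForecaster(nn.Module):
    """Minimal LSTM regressor: given a window of past traffic samples,
    predict the next one."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last hidden state

model = TrafficForecaster()
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on synthetic data: 64 windows of 24 samples each.
x = torch.randn(64, 24, 1)
y = torch.randn(64, 1)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```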
Reinforcement learning iteratively learns the optimal policy by interacting with the environment, sensing network states, and evaluating feedback. The goal is to maximize a cumulative reward in a dynamic environment. Deep reinforcement learning (DRL), which combines DNN and reinforcement learning techniques, is used extensively in resource management to solve complex decision-making problems. In DRL, neural networks play the role of approximators to store high-dimensional states or actions, which enables DRL to solve complex problems efficiently. DRL has been widely used for network optimization [134], resource allocation [19, 118, 135], and user association [116, 121, 123] in wireless networks.
With the development of mobile edge computing, distributed AI has been developed to harvest computing resources at network edge and reduce communication overhead due to data collection and exchange [136]. The learning models can be trained and evaluated at network edge in a semi- or fully-distributed manner. Specifically, federated learning (FL), as one of the most popular distributed learning techniques, trains models with data distributed over network edge. A centralized
controller aggregates locally-computed learning models and updates parameters in the learning models. Due to such decentralized model training, FL is capable of preserving privacy and can be applied in privacy-sensitive network management scenarios [137, 138]. In addition, multi-agent reinforcement learning has been developed to implement reinforcement learning in a distributed manner, which aims to handle scenarios in which network agents cannot obtain sufficient information from each other. Multi-agent reinforcement learning techniques can be used, for example, to solve resource allocation problems in heterogeneous networks [129, 130].
### _Motivation and AI Architecture_
In 6G, AI is expected to penetrate every facet of the network including end users, the network edge, and the cloud, resulting in _pervasive network intelligence_. This trend is due to advancements and innovations in the areas of ML, data collection, edge and cloud computing, and programmable network control in recent decades. As such, AI will fundamentally transform modern networks in many aspects and foster a myriad of exciting applications.
The AI applications can be categorized into management-oriented and service-oriented applications, which are detailed as follows:
* _Management-oriented applications_: In these applications, AI is used as a tool for network management, such as transmission power allocation in cellular networks [139] and resource reservation in network slices [19]. AI techniques, such as reinforcement learning, have the potential of handling complicated decision-making problems in a dynamic network environment. Resorting to AI techniques, the management-oriented AI applications can analyze a large amount of network data, make real-time network management decisions, and then update network management policies based on the newly analyzed data. Hence, for such applications, the key issue is how to leverage advanced AI techniques to manage and enhance complex networks, which falls into the scope of _AI for networking_;
* _Service-oriented applications_: In these applications, AI is offered as services for end users. Fuelled by powerful computing servers and well-curated datasets, AI techniques, especially DL, can outperform traditional techniques in a wide range of applications, such as environmental perception in autonomous driving, audio recognition in intelligent healthcare, and object detection in mobile virtual reality [140, 141, 142]. For instance, an AI-based YOLO algorithm can detect objects with a high accuracy [143, 144], and the state-of-the-art DL-based face recognition algorithm can achieve an accuracy of 99% or higher [145]. Facilitating service-oriented AI applications in a network consumes a large amount of network resources, including storage and computing resources for model training/inference, and communication resources for data collection and model uploading. Hence, for such applications, the key issue is how to design and optimize the network to support emerging AI services, which falls into the scope of _networking for AI_.
Note that the scope of AI in 6G includes AI for networking and networking for AI, which is larger than that in 5G, as the latter simply focuses on applying AI in communications.
An AI architecture is needed to characterize AI's different functionalities in different network segments. In the literature, there are a few studies on the AI architecture. Edge intelligence (or edge AI) is represented in six levels based on the amount and path length of data offloading [131]. Moreover, edge intelligence can be categorized into two parts: AI for edge (i.e., to enhance and optimize the network edge with AI techniques) and AI on edge (i.e., to carry out AI models on the network edge) [146, 147]. Different from these works on edge intelligence, our work focuses on a broader scope of pervasive network intelligence and categorizes it into multiple levels based on AI's locations and functionalities in the network.
As shown in Fig. 3, we propose a four-level AI architecture, in which AI levels (ALs) 1 and 2 focus on service-oriented applications, and ALs 3 and 4 aim at management-oriented applications. We describe each level in detail as follows.
_AL 1 - End User-Hosted Service-Oriented AI_: Utilizing local data and computing resources at end users, end user-hosted service-oriented AI applications are offered as services for end users by processing AI tasks locally, such as next word prediction in mobile keyboards [148], user traffic demand prediction [149], and vehicle trajectory prediction [150]. When computing resources of end users are insufficient for computation-intensive AI tasks, partial computation workloads can be offloaded to nearby edge servers for collaborative processing.

TABLE IV: Common ML algorithms.

| | Unsupervised Learning | Supervised Learning | Reinforcement Learning |
|---|---|---|---|
| Centralized Learning Algorithms | K-means [103, 104]; mixture models [105, 106]; autoencoders [107]; generative adversarial networks [108, 109] | Support-vector machine [110]; logistic regression [111]; deep neural network [107, 112, 113] | Deep Q-learning [114, 115, 116]; policy gradient [117, 118, 119]; actor-critic [120, 121]; deep deterministic policy gradient (DDPG) [122, 123] |
| Distributed Learning Algorithms | Federated learning [124, 125]; split learning [126] | | Multi-agent reinforcement learning [127, 128, 129, 130] |

Fig. 3: An illustration of the four-level AI architecture for pervasive network intelligence.
_AL 2 - Edge-Hosted Service-Oriented AI_: Residing at network edge (e.g., Wi-Fi access points and BSs) close to end users, edge-hosted service-oriented AI applications are offered as low-latency services for end users, such as face recognition in video surveillance [151] and object detection in virtual reality [152]. To support edge-hosted service-oriented AI applications, service demand data from end users are collected, stored, and analyzed, and then the analytical results are utilized for service provision.
_AL 3 - Edge-Hosted Management-Oriented AI_: At this level, AI is hosted at local controllers at network edge for network management that is executed in real time, such as spectrum allocation, content caching [153], and computation offloading [154]. Specifically, the edge-hosted management-oriented AI is to allocate network resources to network nodes for supporting services, including AI services at ALs 1 and 2. For instance, the edge-hosted management-oriented AI can be used to perform service migration across edge networks to guarantee service continuity for high-mobility users, e.g., vehicular users.
_AL 4 - Cloud-Hosted Management-Oriented AI_: Cloud-hosted management-oriented AI resides at the centralized controller in the cloud for network management that is executed once every several minutes or hours, such as slice admission control [155] and virtual network function deployment [156]. Since the cloud possesses abundant computing and storage resources, powerful and complex AI models can be trained and deployed for managing large-scale networks.
Next, AI for networking is elaborated in Subsection III-C to illustrate AI's role in network management, and networking for AI is discussed in Subsection III-D to illustrate AI service provision in 6G networks.
### _AI for Networking_
In this subsection, we discuss how AI techniques can support network management. We first review existing works on AI-based network slicing. Then, we introduce our idea of connected AI solution for AI-based network slicing.
#### III-C1 AI-Based Network Slicing
Network slicing includes two stages: network planning stage for resource reservation and network operation stage for resource scheduling [11]. In the _network planning_ stage, network resources are reserved for network slices on a large time scale (e.g., from several minutes to several hours). In the _network operation_ stage, the reserved resources of each slice are allocated to end users on a small time scale (e.g., from several milliseconds to several seconds). Due to network dynamics such as spatiotemporally changing network traffic, it can be difficult for model-based solutions to attain the optimal network slicing strategies. By contrast, AI techniques can characterize network dynamics by analyzing the collected network data and obtain the optimal network slicing strategies accordingly. Next, we review AI-based network slicing, taking into account the interplay between the planning and operation stages. Representative research works on AI-based network slicing are summarized in Table V.
On a small time scale, a local controller collects data and provides resource scheduling strategies to allocate resources reserved for each slice to end users. Specifically, the local controller determines resource scheduling strategies based on two factors: the amount of resources reserved for each slice, which is determined by the centralized controller, and the instantaneous user data from level-one digital twins pertinent to that slice, such as service type, user location, and user mobility. The main challenges of determining the optimal resource scheduling strategies are two-fold: a large number of end users and service demand dynamics. AI techniques have the potential to cope with both challenges. First, to schedule resources for a large number of end users, unsupervised learning methods, such as _k_-means [167] and DNN-based autoencoders [107], can be utilized to classify end users according to their service demands. Similar resource scheduling strategies can be applied to end users with similar service demands, which facilitates scalable network management. For instance, end users in close proximity and with similar mobility patterns may experience similar channel statistical behaviors, and the same power control policy can be applicable to them. Second, to deal with network dynamics, reinforcement learning can be applied for generating adaptive resource scheduling strategies [168]. Reinforcement learning iteratively allocates resources to maximize a long-term reward function and updates the reward function based on feedback from the network environment. Moreover, reinforcement learning can be combined with DNNs, such as recurrent neural networks [27] and convolutional neural networks [123], to analyze the spatiotemporal pattern of user data for finding the optimal resource scheduling strategies.
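As a toy illustration of the reinforcement learning approach to small-time-scale scheduling, the following tabular Q-learning sketch lets a local controller pick one of a few bandwidth-split presets per discretized slice-load state. The state and action spaces, reward, and environment transition are invented placeholders, far simpler than the DRL methods cited above.

```python
import numpy as np

# Tabular Q-learning for a toy scheduler: the state is a discretized slice
# load level, the action is one of a few bandwidth-split presets.
N_STATES, N_ACTIONS = 10, 4
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1              # learning rate, discount, exploration
rng = np.random.default_rng(0)

def env_step(state: int, action: int):
    """Placeholder environment: in a real controller, the reward would come
    from observed QoS feedback and the next state from measured slice load."""
    reward = -abs(state - 2 * action)          # toy objective: match split to load
    next_state = int(rng.integers(N_STATES))   # toy load transition
    return next_state, reward

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    action = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = env_step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```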
On a large time scale, local controllers aggregate the collected user-level data to service-level information from level-two digital twins, i.e., slice digital twins. Utilizing information from slice digital twins, the centralized controller reserves network resources for each slice. The challenges of resource reservation are two-fold. First, making proactive resource reservation that can avoid either resource over-provisioning or under-provisioning is challenging with time-varying network traffic. Second, the strategies for resource reservation and scheduling are coupled, which further complicates resource reservation. AI techniques can cope with these challenges as follows. To address the challenge of proactive resource reservation, supervised learning, such as long short-term memory (LSTM) networks, can be used to exploit the features of historic network traffic loads and predict traffic loads in near future [149, 159]. The centralized controller can then use the predicted traffic loads for proactive resource reservation. To handle the correlation between resource reservation and scheduling, reinforcement learning can be adopted to reserve resources while considering network operation strategies as a part of the dynamic network environment [164, 135, 19]. Moreover, an option-based hierarchical reinforcement learning technique can be a potential solution for jointly optimizing
resource reservation and network operation policies and addressing network dynamics in both stages. This technique has been used to tackle complex DRL problems by grouping decision variables according to decision time scales [169] or decision-making agents [170] and then determining the decision variables. Through this novel reinforcement learning technique, the complex correlation between resource reservation and scheduling strategies can be obtained iteratively. To apply this technique in network slicing, the centralized controller can select the resource reservation strategies on a large time scale, and local controllers find optimal resource scheduling strategies on a small time scale, thereby jointly optimizing both the resource reservation and the scheduling strategies.
#### III-C2 Connected AI Solution for Network Management
Existing AI applications on network management mostly focus on individual control functions. For instance, learning-based autoencoders can achieve reliable transmission power control for high-speed data transmission with limited channel state information [171], and DNNs can select medium access control protocol parameters with low communication and processing overhead [172, 173]. Although various AI techniques have been proposed for network management, AI models among network control functions are usually isolated. Such isolation may result in inefficient and redundant data processing, which brings up a pressing need for integrating the AI models in AI-based network control functions.
There are three types of solutions for integrating AI models [6]. In the first type of solutions, the entire network is viewed as a black box, where a single AI model characterizes the entire network and generates network control policies. Such a structure simplifies decision-making processes in network management. However, training the single AI model can be extremely difficult due to high-dimensional input data. The second type of solutions adopts different AI models in a network for different network control functions, and the AI models are generally independent of each other to reduce the complexity of training. However, this approach neglects the correlation and interplay among network functions and thus cannot obtain a global-optimal network management strategy. Moreover, network data may be repetitively processed by different AI models with similar network functions, which degrades network management efficiency. For instance, AI models for user mobility management and computing service migration would repetitively analyze end user mobility. In contrast, the third type of solutions, namely _connected AI_, exploits the correlations among network control functions, connects their AI models, and allows them to jointly make network control decisions. The connected AI solution offers benefits in integrating AI models by highlighting the interplay among them and balancing training complexity and network performance. Therefore, the connected AI solution has great potential in facilitating AI-based network slicing. However, existing research on the connected AI solution is limited. How to apply the connected AI solution to network management requires further study [26].
Recent advancements in distributed learning techniques facilitate the development of a connected AI solution for network management. Model partition, investigated in [174], can divide a global DNN into multiple sub-neural networks (sub-NNs). The sub-NNs can reside at different network entities, according to the available computing and communication resources, and communicate with each other [175, 176]. Furthermore, the idea of _nested DNN_, which allows sub-NNs to have their own functionalities while contributing to the global DNN for model inference and training, has been proposed and evaluated in [177] and [178]. Using the above two techniques, each sub-NN can perform a specific network control function. Accordingly, multiple sub-NNs can collaboratively fulfill common control functions, thereby applying the connected AI solution to network management.
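The following sketch illustrates the model partition idea in PyTorch: a toy DNN is split at a cut layer into two sub-NNs whose composition reproduces the global model, so the two parts could in principle run at different network entities and exchange the intermediate activation. The layer sizes and cut point are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# A toy global DNN and its partition into two sub-NNs at a chosen cut layer.
layers = [nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 4)]
global_dnn = nn.Sequential(*layers)

cut = 2                                   # cut point chosen arbitrarily here
sub_nn_a = nn.Sequential(*layers[:cut])   # e.g., hosted at one network entity
sub_nn_b = nn.Sequential(*layers[cut:])   # e.g., hosted at another

x = torch.randn(1, 16)
intermediate = sub_nn_a(x)                # the tensor that crosses the network link
y = sub_nn_b(intermediate)
# Sanity check: the partitioned pipeline matches the unpartitioned model.
assert torch.allclose(y, global_dnn(x))
```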
TABLE V: Representative Works on AI-Based Network Slicing

| **Stage** | **Work** | **Research Focus** | **Objective** | **AI Method** |
|---|---|---|---|---|
| Planning | [107] | Network capacity prediction | Forecasting the capacity for individual virtual networks | Deep neural network based autoencoder |
| Planning | [86] | Virtual representation for network slices | Capturing the relationships among slices and monitoring the end-to-end performance in dynamic network environments | Graph neural networks |
| Planning | [157] | Resource reservation adjustment | Maximizing the overall reward obtained from the results of slices | Deep dueling neural networks |
| Planning | [158] | Bandwidth allocation | Jointly maximizing spectrum efficiency and the QoS requirement satisfaction ratio | Generative adversarial network and deep Q-network |
| Planning | [159] | Bandwidth allocation | Jointly maximizing spectrum efficiency and overall service level agreement satisfaction ratio of slices | Long short-term memory and sequence actor-critic |
| Planning | [160] | Traffic prediction and resource provisioning | Minimizing the violation probability of slice service level agreements | Gated recurrent unit |
| Network Operation | [161] | Computation offloading | Minimizing average computing time of services and maximizing user computing experience | Deep Q-network |
| Network Operation | [162] | Slice selection and channel allocation | Minimizing the power consumption of wireless transmission for a sliced fog-RAN | Reinforcement learning |
| Network Operation | [163] | Content caching placement and delivery | Managing caching resources to maximize cache hit ratio while satisfying resource reservation constraints | Deep Q-network |
| Network Operation | [164] | Inter-slice coordination | Maximizing long-term payoff from the competition among service providers through resource orchestration | Deep Q-network |
| Network Operation | [165] | Inter-slice coordination | Maximizing QoS satisfaction ratio for slices by scheduling transmission power and sharing resources among slices | Multi-agent deep Q-learning |
| Two-Stage Interplay | [19] | Computing resource allocation in vehicular networks | Allocating spectrum and computing resources for slices while minimizing computing service delay | Deep deterministic policy gradient |
| Two-Stage Interplay | [166] | Cross-slice admission and congestion control | Maximizing operator revenue by resource reservation and adjusting reserved resources in real time | State-action-reward-state-action (SARSA) |

Based on the above advanced DNN techniques, we present our idea of applying the connected AI solution to network management next. The control functions for network management are encapsulated into _intelligent modules_. An intelligent module can be implemented solely by a DNN or cooperatively by a DNN and conventional model-based techniques. An example is shown in the upper right corner of Fig. 4, in which the intelligent module for power control includes a learning-based channel estimator and a model-based power allocation scheme, e.g., water-filling power allocation [179]. Moreover, the DNN in each intelligent module can play the role of a sub-NN of a global DNN. The intelligent modules connect with each other to share information, such as their outputs and gradient information in model training, and aggregated user data. Via the intelligent modules, control functions can manage the network in a divide-and-conquer manner and avoid the complicated model training required by a single monolithic AI model. With model partition and nested DNN techniques, multiple network control functions can cooperatively make control decisions to achieve globally optimal network management.
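As an example of the model-based component of such an intelligent module, below is a sketch of the classic water-filling power allocation; the channel gains and power budget are made-up inputs, and the learning-based channel estimator that would feed them is omitted.

```python
import numpy as np

def water_filling(gains: np.ndarray, power_budget: float) -> np.ndarray:
    """Allocate power across parallel channels with noise-normalized gains
    `gains` to maximize sum rate: p_i = max(mu - 1/g_i, 0), sum(p_i) = budget."""
    inv = 1.0 / gains                      # channel "floor heights" 1/g_i
    inv_sorted = np.sort(inv)
    # Find the largest set of k channels that all receive positive power.
    for k in range(len(gains), 0, -1):
        level = (power_budget + inv_sorted[:k].sum()) / k   # water level mu
        if level > inv_sorted[k - 1]:
            break
    return np.maximum(level - inv, 0.0)

# Example: four subchannels, unit total power budget.
p = water_filling(np.array([2.0, 1.0, 0.5, 0.1]), power_budget=1.0)
assert abs(p.sum() - 1.0) < 1e-9
```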
Fig. 4 shows another example of the connected AI design, i.e., supporting mobile edge computing. We explain the design using the case of vehicular networks as an example. Note that other networks can use the same or a similar design. Small base stations (SBSs), as edge servers, can process computation tasks offloaded by vehicles. Intelligent modules at SBSs provide computing offloading decisions, including the computing tasks to be offloaded, transmit power for offloading, task scheduling, etc., based on network status and computing offloading requests. Due to the high mobility of vehicles and the limited communication coverage of the SBSs, computing tasks are often migrated among the SBSs, referred to as service migration, and migration decisions are determined by a macro base station (MBS). Service migration and computing offloading decisions are highly coupled. For example, the chance of service migration increases when vehicles offload more computing tasks to an SBS. In addition, service migration requires the collaboration of multiple SBSs. In our idea of connected AI, service migration and computing offloading decisions are jointly determined. Specifically, we split the DNN into multiple sub-NNs by DNN splitting and nested DNN techniques. Some sub-NNs are deployed at the SBSs to provide computing offloading decisions. These sub-NNs are also connected with a sub-NN deployed at an MBS, which can be leveraged to make migration decisions. In this example, the input of the service migration module includes the output of intelligent modules at the SBSs, e.g., computing offloading decisions and the parameters of sub-NNs, and the output of the service migration module is the service migration policy. In this way, the intelligent modules can cooperate to make decisions for mobile edge computing.
### _Networking for AI_
In addition to managing networks, AI can function as services, namely AI services, which reside at ALs 1 and 2 in the proposed AI architecture in Fig. 3. _Networking for AI_ is to design and optimize networks to facilitate AI services. In this subsection, we first introduce the motivation of networking for AI. Next, existing works are reviewed, and research challenges are presented. Finally, the idea of AI slice is proposed and elaborated.
#### Iv-D1 Motivation
Networking for AI is attracting great attention in both academia and industry. In academia, networking for AI calls for extensive interdisciplinary research efforts between networking researchers and AI researchers to develop new communication standards and technologies to cater for AI services at scale [180, 181, 182, 141]. In industry, the International Telecommunication Union (ITU) is discussing high-level architectures to integrate, orchestrate, and update AI components for future networks, including IMT-2020 networks [183, 184]. Some 3GPP working groups are studying data collection frameworks in the network for supporting AI services [185, 186]. Notably, networking for AI is becoming an indispensable component for facilitating AI services in networks and is expected to be a key enabling technology in 6G.
Networking for AI should take the following factors into consideration:
* With the wide deployment of various IoT devices and small BSs, massive data are generated from many distributed network nodes, e.g., end users and the network edge. In the traditional cloud-based AI paradigm, the cloud collects massive distributed data for model training, and a well-trained model is deployed at the cloud for model inference. This paradigm suffers from spectrum resource scarcity and user privacy leakage concerns.4 To address these issues, a potential solution is to facilitate AI services over a large number of network nodes in a distributed manner [148], which requires new networking protocols to coordinate multiple network nodes;
* Network nodes, such as end users, have limited resources, while state-of-the-art AI models (e.g., DNN models with dozens of neural network layers) are complex. As such, running a complex AI model on a single network node can exhaust its computing resource and energy.5 With advanced model partition techniques (e.g., DNN partition), a complex AI model can be partitioned into multiple sub-models and embedded into a network with data exchange among the sub-models [189]. Executing sub-models consumes computing resources of network nodes, and exchanging data between sub-models also consumes communication resources. Hence, running AI models at multiple network nodes in a cost-effective manner requires innovative network embedding designs;
* 6G networks will be highly heterogeneous, in which network nodes possess different amounts of communication, computing, and storage resources. As complex AI models need to be deployed at multiple network nodes, executing AI tasks requires judiciously allocating resources of these network nodes. Moreover, network dynamics, such as time-varying channel conditions among network nodes and spatiotemporal service demands, further complicate the resource allocation decision-making problem. Hence, it is necessary to design tailored resource management algorithms to optimize AI performance, while adapting to network dynamics.

Footnote 4: Google’s autonomous driving vehicle can generate more than 750 MB of data per second [187].

Footnote 5: The energy consumption of using AlexNet to process an image on a tailored energy-efficient Eyeriss chip is up to 0.28 W [188].

Fig. 4: The connected AI solution for network management.
The _scope_ of networking for AI covers the entire lifecycle of AI services, which consists of three stages. The first stage is _data collection_ for model training via communication links. For instance, real-time service load data from end users need to be collected to train an AI model for service demand prediction. The second stage is _model training_, which is to achieve a certain objective based on the collected data. For instance, a large number of images are processed to train DNN-based object detection modules until the target accuracy requirement is satisfied. The third stage is _model inference_, which is to apply well-trained models to complete specific computation tasks. For instance, AI-based object recognition for autonomous driving detects and classifies nearby vehicles, pedestrians, and obstacles based on real-time images captured by on-board cameras [143].
#### III-D2 State-of-the-Art Approaches
The research on networking for AI is still in its infancy, with only a few existing works. In this subsection, the existing studies are categorized into data collection, model training, and model inference according to the lifecycle of AI services. Representative related works are summarized in Table VI.
**Data Collection** - The objective is to efficiently collect the data from end users for optimizing AI performance. Since data are distributed across end users in the network, transmission resource is scheduled to end users for uploading their data. For instance, the level-one digital twins require periodical data synchronization with the end users, and such data can be provided for AI services. Data collection is a classic research problem widely investigated in wireless sensor networks [203] and UAV networks [204], and these works focus on optimizing either the reliability of data collection or the amount of collected data. In AI services, the collected data are used to train AI models, and the data samples may have different importance levels for model training. Merely maximizing the reliability or the amount of the collected data is not optimal. Hence, novel data collection designs taking model training into account are required for performance optimization.
Recently, AI-centric data collection has been investigated in the following two research directions:
* Data importance-aware resource allocation schemes have been proposed to optimize AI model accuracy. The idea is to schedule data transmission while taking both end users' channel conditions and data importance levels into account [190]. The data importance level can be captured via data uncertainty, i.e., higher uncertainty means higher importance. The data uncertainty can be measured by entropy [205] (a toy entropy-based scheduler is sketched after this list). Power allocation for data collection is investigated in multi-model training scenarios [191]. Since the number of collected data samples impacts the model accuracy, a learning-centric power allocation scheme can adjust the users' transmission power to determine the amount of collected data for different AI models, thereby maximizing the overall model accuracy given a transmission power budget;
* There are a few AI-centric data collection protocols. In a network environment with poor channel conditions, data retransmission is applied to improve data collection reliability. Existing automatic repeat-request (ARQ) retransmission protocols, such as hybrid ARQ in long term evolution (LTE) networks, trigger data retransmissions for lost packets once the end user's signal-to-noise ratio (SNR) threshold is satisfied. The importance of data samples should be incorporated in transmission protocols to speed up the model training process. An importance-aware ARQ protocol is proposed for CNN-based classification model training in [192]. In the protocol, both data importance levels and channel conditions are taken into account to determine the data retransmission threshold, which can enhance the model training performance.
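The toy sketch referenced in the first item above combines the two ingredients of data importance-aware scheduling: an entropy-based importance score and a channel-quality score. The equal weighting and all inputs are illustrative assumptions, not the actual schemes of [190] or [205].

```python
import numpy as np

def schedule_users(snr_db: np.ndarray, class_probs: np.ndarray, n_slots: int) -> np.ndarray:
    """Grant uplink slots to the n_slots users with the best combined score of
    data importance (prediction entropy) and channel quality (normalized SNR)."""
    entropy = -(class_probs * np.log(class_probs + 1e-12)).sum(axis=1)
    importance = entropy / entropy.max()       # data uncertainty as importance
    channel = snr_db / snr_db.max()            # crude channel-quality score
    score = 0.5 * importance + 0.5 * channel   # assumed equal weighting
    return np.argsort(score)[::-1][:n_slots]   # indices of scheduled users

# Synthetic example: 20 users, each with a 10-class predictive distribution.
rng = np.random.default_rng(0)
class_probs = rng.dirichlet(np.ones(10), size=20)
snr_db = rng.uniform(1.0, 30.0, size=20)
scheduled = schedule_users(snr_db, class_probs, n_slots=5)
```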
**Model Training** - Due to the distributed data and user privacy concerns, distributed training is suitable for training AI models in a network [206]. FL is one of the most promising distributed training paradigms, which can be applied in various fields such as smart healthcare and financial services [138, 207, 208]. The FL operates as follows. Each end user iteratively trains a local model with its own data, and the local model is uploaded to an edge server. Then, the edge server aggregates the local models to obtain a global model. The model training lasts multiple rounds until the global model achieves satisfactory accuracy.
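The FL cycle just described, i.e., local training followed by weighted aggregation over multiple rounds, can be sketched compactly. Below is a minimal synchronous federated averaging loop on a linear least-squares model with synthetic data; it assumes full participation and lossless uploads and omits the communication and privacy mechanisms discussed in this subsection.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One user's local training: a few gradient steps on a linear model with
    squared loss (a stand-in for an arbitrary local model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(global_w, user_data, rounds=10):
    """FedAvg: each round, every user trains locally and the server averages
    the local models, weighted by local dataset size."""
    for _ in range(rounds):
        local_models, sizes = [], []
        for data, labels in user_data:
            local_models.append(local_update(global_w, data, labels))
            sizes.append(len(labels))
        weights = np.array(sizes, dtype=float) / sum(sizes)
        global_w = sum(s * w for s, w in zip(weights, local_models))
    return global_w

# Synthetic setup: five users, each with 50 samples of a shared linear task.
rng = np.random.default_rng(0)
true_w = rng.normal(size=4)
user_data = []
for _ in range(5):
    X = rng.normal(size=(50, 4))
    user_data.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
global_w = federated_averaging(np.zeros(4), user_data)
```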
Since the model is trained locally, FL is communication-efficient and can preserve data privacy of end users [148, 209]. However, with the increase of data sizes in state-of-the-art AI models,6 uploading local AI models still places a growing strain on spectrum-constrained wireless networks. In addition, end users with powerful computing servers can conduct local model training with a low delay. As such, the model uploading delay due to limited radio resources can be the dominant component in the entire FL delay. Hence, it is necessary to maximize FL performance in resource-constrained wireless networks.

Footnote 6: The data sizes of ResNet32 [210], Inception-v3 [211], AlexNet [212] and VGG16 [213] models are 50 MB, 108 MB, 240 MB, and 552 MB, respectively [214].
Recent research works optimize FL performance from the following perspectives:
* A line of works focuses on designing innovative FL frameworks to reduce communication overhead. A novel two-tier hierarchical FL framework is proposed in [193], which coordinates end users, edge servers, and the cloud server to perform FL. Each edge server aggregates local models from end users in its coverage in every FL round, and the cloud server aggregates the models from edge servers in its coverage once in a few FL rounds. The proposed two-tier framework can accommodate a large number of end users for model training due to its broad coverage and, at the same time, reduce the backhaul data traffic between the cloud server and edge servers due to a low model aggregation frequency. Such a framework is applied to industrial IoT networks with geographically distributed data in [215];
* Another line of works study radio spectrum-efficient model aggregation. Over-the-air computation based approaches are investigated in [194, 216, 217]. The basic idea is to exploit the superposition property of wireless multiple-access channels to perform model aggregation, which can reduce radio resource consumption;
* The FL performance can be optimized via efficient resource management. As FL performance depends on multiple factors, such as end user selection, the number of local model updates, and local model importance levels, different resource management schemes are developed as follows: (1) User selection - how to select participating end users in the FL process impacts model convergence and training delay and hence is crucial to FL performance; a few end user selection algorithms have been proposed based on principles such as training data distribution [195] and local training latency [196]. (2) FL parameters - to alleviate communication overhead, end users conduct a few local model updates before model uploading; given a resource budget, the optimal number of local model updates is studied in [197], which provides a theoretical guideline for selecting the number of local model updates. (3) Local model importance level, a concept extended from the idea of data importance7 - an importance-aware model uploading strategy is proposed in [198], in which end users with high model importance levels and good channel conditions are scheduled with high priority, to speed up the convergence of FL.

Footnote 7: The model importance can be measured by the layer-wise gradient norm. Local models with a larger gradient norm contribute more to global model convergence in FL [197].
**Model Inference** - For many AI services in the network, AI models are usually deployed at end users and edge servers to achieve low service latency. Model inference is computation-intensive, while end users and edge servers usually have limited computing capabilities and battery power. Executing model inference tasks usually results in long service latency and high energy consumption. Hence, performing model inference should satisfy service latency requirements under node energy constraints, thereby calling for innovative model inference schemes.
TABLE VI: Summary of Related Works on Networking for AI

| **Topic** | **Work** | **Contribution** | **Highlight** |
|---|---|---|---|
| Data Collection | [190] | Scheduling data transmission based on users' data importance levels and channel conditions | Data importance-aware spectrum allocation |
| Data Collection | [191] | Allocating users' transmission power to adjust the amount of collected data samples for multiple AI models and enhance the overall model accuracy | Data amount-aware power allocation |
| Data Collection | [192] | Designing an importance-aware ARQ protocol, in which users' data importance levels and channel conditions are jointly considered to trigger data retransmission | Data importance-aware retransmission protocol |
| Model Training | [193] | Proposing an edge-cloud assisted FL framework, in which the edge and cloud servers alternately aggregate local models to reduce communication overhead | Two-tier FL framework |
| Model Training | [194] | Proposing an over-the-air computation approach for model aggregation | Over-the-air model aggregation |
| Model Training | [195] | Selecting users with more contribution to convergence for model aggregation based on users' data distribution | Data distribution-aware user selection |
| Model Training | [196] | Selecting users with low training delay considering heterogeneity among users | Training latency-aware user selection |
| Model Training | [197] | Optimizing the number of local model updates given a resource budget | Local update frequency optimization |
| Model Training | [198] | Scheduling model uploading based on end users' model importance levels and channel conditions | Model importance-aware model uploading |
| Model Inference | [144] | Optimizing video frame rate and input image resolution to balance service latency and detection accuracy for virtual reality users | Data resolution optimization |
| Model Inference | [199] | Selecting the optimal DNN model for real-time video analytics | DNN model selection |
| Model Inference | [200] | Selecting the optimal DNN model cut layer to minimize inference latency for user-edge collaborative inference | User-edge DNN model partition |
| Model Inference | [201] | Partitioning a complicated DNN model across end users, the network edge, and the cloud to reduce communication overhead | User-edge-cloud DNN model partition |
| Model Inference | [202] | Designing a collaborative DNN model inference scheme with light-weight models at IoT devices and an uncompressed model at the network edge | Collaborative DNN model inference |
Existing studies on model inference can be categorized as follows:
* Raw data are offloaded to edge or cloud servers for model inference. The input data resolution influences the inference accuracy. For instance, the accuracy of object detection is related to the input image resolution [144], which in turn affects the offloaded data volume since the data size of high-resolution images is usually large. Taking into account the trade-off between the inference accuracy and the amount of offloaded data, the input image resolution should be optimized to satisfy the target AI service requirements. The optimal video frame rate and input image resolution are investigated in [144] to balance service latency and detection accuracy for virtual reality users;
* An appropriate AI model is selected to satisfy specific AI service requirements. In addition to the data resolution, the inference accuracy depends on the type of AI models. A DNN model with more hidden layers can usually achieve a higher inference accuracy than a shallow DNN model. Considering multiple available DNN models deployed at the network edge, the optimal DNN model selection for real-time video analytics is investigated in [199];
* With advanced model partition techniques, an AI model can be partitioned into multiple sub-models and then embedded into different network nodes to conduct model inference. For instance, leveraging the layered structure of DNNs, the entire DNN model can be partitioned into an end user-side model and a server-side model at a proper DNN layer (i.e., the cut layer). As such, the end users and the edge servers can conduct model inference in a collaborative manner. DNN models can be partitioned for achieving different goals. For instance, the optimal model partition for minimizing inference latency is studied in [200], in which an online learning algorithm can adaptively determine the optimal cut layer (a toy latency-driven cut-layer search is sketched after this list). To reduce communication overhead among network nodes, complicated DNN models can be partitioned into sub-models for end users, edge servers, and the cloud as in [201];
* Light-weight models are used to facilitate prompt model inference at end users. Computation-efficient compressed models can be obtained via various model compression techniques, such as weight pruning [218], knowledge distillation [219] and fast exiting [220]. For instance, weight pruning techniques remove less important model weights to reduce the computational complexity of model inference, while achieving inference accuracy close to that of the uncompressed models. To enhance service performance, a collaborative model inference scheme that deploys light-weight models at IoT devices and uncompressed models at the network edge is proposed for industrial IoT networks [202]. The IoT devices dynamically make AI task offloading decisions according to time-varying channel conditions to minimize the service delay while guaranteeing the accuracy requirements of DNN-based fault diagnosis services.
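As referenced in the model partition item above, choosing the cut layer can be framed as a one-dimensional search over split points. The sketch below minimizes a toy end-to-end latency composed of user-side compute, activation upload, and edge-side compute; all per-layer times, activation sizes, and the uplink rate are made-up numbers, not measurements from [200].

```python
# Toy cut-layer selection between an end user and an edge server. Layers
# [0, cut) run on the user, the layer-(cut-1) activation is uploaded, and
# layers [cut, n) run on the edge; cut = 0 uploads the raw input instead.
layer_ms_user = [5.0, 8.0, 12.0, 20.0, 6.0]   # per-layer compute time on the user (ms)
layer_ms_edge = [0.5, 0.8, 1.2, 1.5, 0.4]     # per-layer compute time on the edge (ms)
out_mbit = [8.0, 4.0, 0.5, 0.05, 0.01]        # activation size after each layer (Mbit)
input_mbit = 12.0                             # raw input size (Mbit)
uplink_mbps = 20.0                            # assumed uplink rate

def latency_ms(cut: int) -> float:
    compute = sum(layer_ms_user[:cut]) + sum(layer_ms_edge[cut:])
    # cut = n keeps everything local; we still count returning the tiny output.
    data = input_mbit if cut == 0 else out_mbit[cut - 1]
    upload = data / uplink_mbps * 1000.0      # transmission time (ms)
    return compute + upload

n = len(layer_ms_user)
best_cut = min(range(n + 1), key=latency_ms)  # here: cut = 4, offloading one layer
```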
#### Vi-B3 Research Challenges
Despite the aforementioned research efforts, facilitating AI services in a network faces various challenges, some of which are discussed in the following.
_Complex Implementation Option Selection_ - An AI service can be implemented by various options with different model structures, training procedures, and inference processes. For instance, a service of object detection can be implemented via different neural networks, such as AlexNet [212] and SqueezeNet [221]. Even if the model structure is the same, a model can be trained in different ways, such as centralized training, decentralized training (e.g., FL [207]), and semi-centralized training (e.g., split training [126]). In addition, model inference can be conducted in various manners, such as end user-only inference, edge-only inference, and collaborative inference. Different implementation options consume different amounts of computing, storage and communication resources. Hence, it is necessary to select an implementation solution for AI services that suits the service characteristics and network dynamics.
_Multi-Dimensional QoS Requirements_ - The QoS requirements of AI services are multi-dimensional. AI model accuracy is usually a key performance metric. In addition, AI services should be offered to end users with low latency in many use cases. For instance, the service latency of object detection in autonomous driving should be less than 100 \(ms\) for safety considerations [143], whereas autonomous vehicles require an ultra-high accuracy in 3D object detection [222]. Moreover, these performance metrics are correlated. High-accuracy object detection usually requires high-resolution images as input and advanced AI models to process the input images, which can result in long service latency. How to satisfy multi-dimensional QoS requirements of AI services requires further investigation.
#### Vi-B4 AI Slice
To better support AI services, we extend the network slice concept and propose an idea of _AI slice_ with two subslices. The basic idea is to construct a _training subslice_ for model training and an _inference subslice_ for model inference. The two subslices are logically isolated and use their own network resources. The rationale behind training and inference separation is that the two stages can have different goals.
An illustration of an AI slice is given in Fig. 5. In the AI slice, the training and inference subslices share the same resource pool and are coordinated to jointly support the AI service. First, the multi-dimensional QoS requirement of the AI slice is decoupled into two separate QoS requirements for the two subslices. For an object detection service in autonomous driving, both high detection accuracy (e.g., 99%) and low service latency (e.g., 100 \(ms\)) are required. The training subslice should satisfy the detection accuracy requirement, while the inference subslice should satisfy the service latency requirement. Second, to satisfy the individual QoS requirements of the two subslices, the resources reserved for the AI slice are judiciously allocated between the two subslices, based on the performance of the two subslices and their QoS requirements. Then, given the allocated resources, the two subslices are configured to satisfy their individual QoS requirements, as described in the following:
* In the training subslice, based on the training data distribution in the network, a subslice controller determines training configurations (e.g., data collection schemes and model training methods) and schedules resources to network nodes to train a model given the target accuracy. In addition, since the training data vary over time in a dynamic network, the AI model may need to be retrained from time to time. Note that allocating dedicated resources for the training subslice can effectively mitigate the straggler effect that plagues distributed learning in large-scale networks, thereby speeding up the model training process;
* In the inference subslice, the subslice controller analyzes the service demand pattern at each BS and determines inference configurations (e.g., model inference and input data compression schemes) to satisfy the inference latency requirement. For instance, uncompressed models can be deployed at resource-abundant BSs, and partitioned and pruned models can be deployed at resource-limited BSs. This can achieve close inference service latency performance across different BSs.
Overall, the two logically-isolated subslices focus on satisfying different QoS requirements and jointly support the AI service.
To elaborate the idea of AI slices, we present the following example on real-time video analytics in vehicular networks [199]. Smart cameras are deployed in intersections to provide a video surveillance service such as vehicle plate recognition. In such service, a CNN model is trained using the video streams collected by smart cameras, and then the well-trained model is used to conduct video analytics tasks. Using the proposed AI slice framework, CNN model training is conducted in a training subslice, while real-time video analytics is conducted in an inference subslice. Specifically, in the training subslice, the CNN model can be trained via a FL framework for protecting data privacy. The corresponding computing resources at smart cameras and spectrum resources in the network are allocated to satisfy model training requirements, such as training accuracy. In the inference subslice, different user-edge orchestration schemes (e.g., DNN model partition), input data compression schemes (e.g., frame rate reduction), and network resource management policies can be configured to satisfy the inference delay requirement in video analytics services based on time-varying service demands and network conditions due to vehicle mobility. With the AI slice for video analytics, both training accuracy and inference latency requirements can be satisfied in a dynamic network environment.
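The decoupling of the QoS requirement can be illustrated with a toy numerical search over the resource split between the two subslices. The accuracy and latency response models below are assumed for illustration only; in practice they would be learned from subslice performance data.

```python
import numpy as np

# Assumed subslice response models: accuracy grows with the training-subslice
# resources r_t, and latency shrinks with the inference-subslice share R - r_t.
R = 100.0                                  # total resource units of the AI slice
acc = lambda r_t: 1 - np.exp(-r_t / 20)    # hypothetical accuracy model
lat = lambda r_i: 500.0 / (1 + r_i)        # hypothetical latency model (ms)

# Splits meeting the decoupled QoS targets (99% accuracy, 100 ms latency)
ok = [r for r in np.arange(1.0, R) if acc(r) >= 0.99 and lat(R - r) <= 100]
print((min(ok), max(ok)) if ok else "QoS targets infeasible with budget R")
```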
### _Summary_
In this section, we have reviewed some common AI techniques, explored the role of AI in 6G networks, and proposed a four-layer AI architecture for pervasive intelligence in 6G. Two perspectives of AI in wireless networks, i.e., AI for networking and networking for AI, have been discussed, which correspond to using AI as a powerful tool for network management and optimizing networks to support AI applications, respectively.
Recent advancements in ML algorithms have accelerated the deployment of AI in wireless networks. In 5G, AI techniques are used to address particular networking problems, whereas, in 6G, AI will penetrate every corner of wireless networks from network management to network services. Therefore, an architecture for AI is needed for identifying the role of AI and characterizing the functionalities of AI across a network.
Fig. 5: Conceptual AI slice consisting of a training subslice and an inference subslice.
Appropriate AI techniques should be selected to tackle networking problems with different characteristics and on different decision time scales when it comes to AI for networking. Furthermore, the collaboration among intelligent modules is important for implementing AI-driven networks efficiently and flexibly. The idea of connected AI is to enable cooperative decision making among intelligent modules for network control. In terms of networking for AI, a distributed architecture of AI algorithms connects AI models and network resources located at the network edge. The study of networking for AI is still in its infancy but essential to supporting an expanding group of AI services. Network slicing will remain an enabler for delivering AI services, but slicing policies should be customized according to the features of AI algorithms and the training and inference stages of AI.
## IV A Potential Network Architecture for 6G
In this section, we propose a conceptual network architecture for 6G, which integrates holistic network virtualization (including digital twins and network slicing) and pervasive network intelligence (including connected AI and AI slices). Then, we illustrate three types of interplay enabled by the proposed architecture, i.e., the interplay between digital twin paradigm and network slicing, model-driven methods and data-driven methods, and virtualization and AI, respectively.
### _Related Studies on Architecture for 6G_
Several works have proposed architectures with various focuses for 6G networks, e.g., space-air-ground integrated networks for global coverage [8], cell-free massive multiple-input multiple-output (MIMO) architecture for inter-cell interference mitigation [223], and multi-tier computing architecture for ubiquitous computing service provisioning [224]. Pursuing the goal of advanced network management, most of the proposed architectures highlight AI techniques to optimize network architecture, control, and management [12, 183]. For example, AI-based data analytics functions, which mine historical data for network operation troubleshooting, network resource optimization, and network traffic prediction, are incorporated in the network architecture in [12]. The ITU specifies a high-level AI-based architectural framework for future networks, in which several novel components such as ML management and orchestration functionalities are incorporated for flexible AI-based function placement [183]. In addition to AI techniques, some recent conceptual network architectures start to embrace digital twin techniques [75, 83]. For example, a digital twin-based network architecture constructs a digital twin for each end user to serve as its communication assistant and data asset manager [83]. Another digital twin-enabled network architecture adopts three categories of digital twins, i.e., edge-based, cloud-based, and hybrid digital twins, for supporting different types of services [75].
Different from the existing network architectures, our proposed network architecture features novel holistic network virtualization, which incorporates network slicing and digital twin paradigms, and pervasive network intelligence, which integrates AI for networking and networking for AI. Moreover, featuring the designs in Sections II and III, the proposed architecture enables various interplay among its key elements to empower 6G. In the following subsections, we present the details of the proposed architecture.
### _Architecture Overview_
The overall network architecture is illustrated in Fig. 6, which consists of the physical space and the cyber space. The physical space includes end users and network infrastructure at the edge and the core networks. Data describing end users are collected from the physical network to create level-one digital twins as introduced in detail in Subsection II-C, and network slices are created for various services. The slices are further abstracted into level-two digital twins, which are supplemented with service-specific information aggregated from level-one digital twins. The six-layer virtualization architecture in Fig. 2 applies to the network slices and the digital twins, both of which reside in the cyber space in Fig. 6.
AI pervades the entire architecture, which supports both AI for networking and networking for AI. First, AI is used to manage network slices and digital twins, as shown in the logic network control section in Fig. 6. For network management, a connected AI solution discussed in Subsection III-C is applied to enable intelligent modules, which in turn manage network slices and digital twins. The connected AI solution corresponds to AL 3 and 4 in Fig. 3. Second, the architecture supports dedicated AI slices with training and inference separation for AI service provisioning, as mentioned in Subsection III-D. AI slices provide services corresponding to AL 1 and 2 in Fig. 3, while the management of AI slices is conducted by intelligent modules.
With the overall network architecture in Fig. 6, we integrate holistic network virtualization and pervasive network intelligence for 6G. Virtualization is supported from the aspects of both the network and the end users, while intelligence is reflected through both AI for networking and networking for AI. Taking advantage of the digital twin paradigm and network slicing, as well as of virtualization and AI, the proposed architecture aims at exceptional flexibility, scalability, adaptivity, and intelligence.
### _Components and Subsystems_
In the physical space, the proposed architecture includes both RANs and core networks. Specifically, the following components are involved:
* Assorted APs: This component includes MBSs, SBSs, mobile APs (such as UAVs), satellites, and other non-cellular APs;
* Network controllers: This component includes local controllers located at APs or servers on network edge and the centralized controller located at servers in core networks or in the cloud. Each controller can consist of computing servers and affiliated network storage servers;
* General computing servers: This component includes computing servers for implementing network functions, such as routing and firewall, and hosting the VNFs;
* Application servers: This component includes computing and network storage servers for supporting general edge computing and AI services. These servers are not used for network management or implementing network functions;
* Other network devices: This component includes specialized network hardware other than general computing servers, such as baseband processing units and network switches;
* End users: This component includes human mobile users, sensors, vehicles, and various IoT devices, such as meters, actuators, and robots.
In the cyber space, the proposed architecture includes three subsystems, i.e., network slices, digital twins, and connected AI, as follows:
* Network slices: This subsystem includes all virtual networks created in network slicing, including AI slices. A network slice can involve a RAN, a core network, or both. General slices are inherited from existing networks, while AI slices are described in detail in Subsection III-D;
* Digital twins: This subsystem includes level-one and level-two digital twins. The digital twin subsystem is described in detail in Subsection II-C;
* Connected AI: This subsystem includes intelligent modules deployed across a network at both the local controllers and the centralized controller. The connected AI subsystem is described in detail in Subsection III-C.
Interconnections between different components and subsystems of the proposed architecture are elaborated in Subsections IV-E to IV-G, which highlight the interplay between digital twin paradigm and network slicing, between model-driven and data-driven methods, and between virtualization and AI, in the proposed architecture. Some open issues and challenges regarding the architecture are presented in Section V.
Note that the proposed conceptual architecture can apply to various types of physical networks, such as vehicular networks and integrated terrestrial-satellite networks, although Fig. 6 cannot illustrate every possible network scenario. In different physical networks, the implementation of holistic network virtualization and pervasive network intelligence can be different and require certain customization. For example, the deployment of intelligent modules and the data flow among the modules in a satellite network segment can be different from those in a terrestrial network segment. Furthermore, the migration of digital twins can be more important in a vehicular network than in a static IoT network. Related discussions can be found in Section V, where we present challenges and open issues. Nevertheless, the basic ideas in the proposed conceptual architecture, including the two-level digital twins, intelligent modules, and AI slices, are applicable in various physical networks.

Fig. 6: The proposed network architecture for 6G networks.
### _Implementation_
In this subsection, we provide a case study on a vehicular network to demonstrate the potential implementation of the proposed network architecture. Roadside BSs co-located with edge computing and caching servers facilitate autonomous driving services for vehicles on the road. To implement the proposed network architecture, the following steps are conducted.
* _Network Slice Establishment_: Multiple network slices are established for autonomous driving services with different QoS requirements, achieving network virtualization. Conventional network slices are established for non-AI based services, e.g., high-definition map downloading, while AI slices consisting of training and inference subslices are established for AI based services, such as deep learning based cooperative sensing. The network slices are stored and managed by a centralized controller.
* _Digital Twin Construction_: By collecting extensive data from physical entities, digital twins are constructed for vehicle users, roadside BSs, and the established network slices, achieving the virtualization of end users and slices. Digital twins of vehicle users and roadside BSs are located at edge servers, while digital twins of network slices are located at a cloud server. Due to high vehicle mobility, digital twins of vehicle users should be migrated across edge servers to ensure service continuity. In addition to collected data, digital twins can include generated user and service specific data, such as predicted vehicle trajectory and spatial-temporal service demands, via mining historical data. The generated vehicle data will be used for network management and service provision.
* _AI Module Deployment_: AI modules with different functionalities can be deployed at both the centralized and local network controllers, achieving intelligent network management. The AI modules at the centralized network controller are in charge of network planning. For guaranteeing QoS requirements of different slices, these AI modules can make resource reservation decisions based on the predicted service demands from the digital twins of roadside BSs and collected slice performance data from the digital twins of network slices. The AI modules at local network controllers are in charge of network operations. For enhancing the perceived performance of the vehicle users, the AI modules schedule on-demand network resources based on the collected data (e.g., vehicle users' channel conditions) and the generated data (e.g., predicted vehicle trajectory) from the digital twins of vehicle users.
### _Interplay between Digital Twin Paradigm and Network Slicing_
As the two components of holistic network virtualization, digital twin paradigm and network slicing are connected in the following two aspects.
First, the digital twin paradigm for end user virtualization focuses on data management, while network slicing focuses on network management. Data may be viewed as a new type of resources in future networks, in addition to communication, computing, caching, and sensing resources. Meanwhile, as a resource, data has its unique features. First, data can be considered as an application-layer resource rather than a physical-layer resource. Second, different from computing or communication resources, the amount of data resources available to a network is not fixed but progressive. Last, the collection and processing of data, which is necessary for utilizing any data resource, consume other network resources. On one hand, effectual utilization of the data resource will benefit network management, and hence digital twin paradigm can enhance network slicing. On the other hand, network management should take into account the need and cost of allocating other network resources for utilizing the data resource. Hence, network slicing can facilitate digital twins.
Second, digital twins will enable user-centric networking in future networks, while network slicing enables service-centric networking. Creating an isolated slice for each service and provisioning the service through managing the slice yield a service-centric focus in network management. Meanwhile, creating a digital copy of each end user and administrating data that characterize the end user provide a user-centric perspective of network management. Having a set of information, selected by the centralized controller through digital twin model control, to describe various characteristics of the end users, such as their location, service request profile, resource utilization, and channel information, creates the possibility of user-specific scheduling within each slice in the network operation stage. For instance, access control and resource allocation decisions for an end user may depend on the data profile from its digital twin, while different data profiles may lead to different scheduling policies. Accordingly, future networks may feature service-centric network planning and user-centric network operations, which can improve the granularity of network management for handling highly diversified end users and dynamic network environments.
### _Interplay between Model-Driven and Data-Driven Methods_
The second interplay enabled by the proposed architecture is the interplay between model-driven and data-driven methods in network operation and service provision. This interplay applies to the intelligent modules for network management shown in Fig. 6.
Network management mostly relied on model-driven or heuristic methods before 5G. Prior to the prevalence of AI, mathematical tools such as optimization methods and game theory have been widely used for network management. Optimization methods formulate the objective and constraints in a closed form, and the corresponding network management problems are solved using optimization algorithms [225, 226, 227]. Game-theoretic approaches analyze the interactions among network entities in either cooperative or non-cooperative scenarios to identify the optimal strategy of each entity [228, 229, 230]. Mechanism design, an analytical framework in game theory, has also been used to coordinate network entities with locally-held information to achieve desirable network-wide solutions in network utility maximization problems [231].
Through characterizing the relations among several key variables, model-driven methods can lead to either closed-form solutions or algorithms for network management problems. Based on mathematical models, model-driven methods are usually explainable and generalize well for different specific problems [232].8 However, when networks become complex (i.e., when there are a large number of variables and/or complicated correlation among them) or highly dynamic (e.g., when the network environment changes too rapidly for an optimization algorithm to converge or for a game to achieve an equilibrium), model-driven methods may no longer be accurate or applicable.
Footnote 8: For instance, the water-filling algorithm could be applied to various power allocation problems, and the Rayleigh fading model could characterize channels in various network environments.
The investigation of data-driven methods for network management has gained momentum since 5G. Through collecting and exploiting real-world data, data-driven methods implicitly characterize the relations among variables to generate and fine-tune policies for network management. Given sufficient data and a stationary network environment, data-driven methods can provide close-to-optimal solutions to problems that are too complicated for model-driven methods. However, when the network environment is non-stationary so that new and unknown situations occur from time to time, the performance of data-driven methods can be questionable [233]. In addition, data-driven methods may not generalize well due to their strong dependence on data collected from a specific network environment.
In 6G, data-driven and model-driven methods should work in synergy. The proposed architecture enables the interplay between data-driven and model-driven methods for creating advanced _hybrid data-model driven_ methods. There are different options of hybrid data-model driven methods, as illustrated in Fig. 7 and elaborated below. The first three options suit AI for networking, while the last option suits networking for AI.
* Data-driven and model-driven methods can be the backup for each other. For instance, models can be selected to back up data-driven methods, for the case when unknown situations occur in the network environment and degradation in the performance of data-driven methods appears. Meanwhile, switching between data-driven and model-driven methods, e.g., based on the available resources, can potentially increase the adaptivity of network management.
* Data-driven and model-driven methods can target different steps and solve different subproblems of network management. Specifically, data-driven methods can solve the subproblems with a large number of variables or complicated coupling relations among variables, while model-driven methods can solve relatively isolated subproblems with a few key variables. This would allow data-driven and model-driven methods to play to their respective strengths.
* Model-driven methods can provide rough solutions based on general mathematical models, and then data-driven methods, taking the rough solutions as input and exploiting real-world data from the network, can refine the solutions for the specific network scenario. Having the initial solution generated from models may reduce either the amount of data or the amount of time needed by data-driven methods (a minimal sketch of this option is given after this list).
* In networking for AI, while deploying a service function chain for an AI service, some of the function modules can use data-driven methods, while other function modules in the same service function chain can use model-driven methods. For example, in an AI-based image processing service, a model-driven module can be used for image resolution adjustment prior to a data-driven module for object detection. The idea is similar to task division, except that the scenario here is networking for AI instead of AI for networking [28].

Fig. 7: Options for hybrid data-model driven methods. The “Data” and “Model” blocks represent “data-driven methods” and “model-driven methods”, respectively.
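As a minimal sketch of the third option, the snippet below warm-starts a power allocation with model-driven water-filling under nominal channel gains, and then refines it by projected gradient ascent on the throughput observed under the true gains. The gains, budget, and step size are made-up illustrative numbers.

```python
import numpy as np

def waterfill(g, P, iters=60):
    """Model-driven step: water-filling power allocation over channels with
    gains g under total power budget P (bisection on the water level mu)."""
    lo, hi = 0.0, P + 1.0 / g.min()
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / g, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - 1.0 / g, 0.0)

rng = np.random.default_rng(0)
g_model = np.array([1.0, 0.6, 0.3, 0.1])      # nominal channel gains
g_true = g_model * rng.uniform(0.7, 1.3, 4)   # gains the data actually reveal
P = 4.0

p = waterfill(g_model, P)                     # rough model-driven solution
for _ in range(200):                          # data-driven refinement
    p = np.maximum(p + 0.05 * g_true / (1 + g_true * p), 0.0)
    p *= P / p.sum()                          # keep the total power budget
print(p, np.sum(np.log2(1 + g_true * p)))     # refined allocation, throughput
```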
### _Interplay between Virtualization and AI_
The third interplay enabled by the proposed architecture, i.e., the interplay between virtualization and AI, is illustrated in Fig. 8.
First and foremost, virtualization and AI are coupled through data. With the introduction of digital twins, a vast amount of organized data regarding end users, i.e., level-one digital twins, and network services, i.e., level-two digital twins, become available. The data included in the digital twins can be provided to the intelligent modules, the training or inference subslice of an AI slice, or both. For instance, edge-hosted AI, possibly collaborating with end user-hosted AI, can perform user-specific data processing and prediction based on the data from digital twins. The results, such as prediction results, resource scheduling schemes, or slicing policies, can be fed back to the digital twins to record certain predicted status, e.g., location and mobility, of the end users. Correspondingly, data in the digital twins of end users, network infrastructure, and slices can be either the input or the output of AI modules, leading to a bidirectional interaction between virtualization and AI.9
Footnote 9: Interested readers are referred to [234] for the relation between AI and data life cycle, although the discussions therein do not involve virtualization.
The second connection between virtualization and AI is through control. Based on the data from digital twins, AI functions hosted at the edge and core networks can make the network management and service provisioning decisions. The decisions may include network slice control, which are fed back to the physical network and network slices for execution and, at the same time, to the level-two digital twins for data update. In addition, the decisions may include digital twin model control for level-one and level-two digital twins. Digital twin model control may include the determination of the type and the amount of data to be included in digital twins, the frequency and the method of data collection, the format and the precision of stored data, and so on. The digital twin models affect the availability and quality of data available for network control, especially AI-driven network control, and thereby impact the network performance. Therefore, from the perspective of network control, the interaction between virtualization and AI is also bi-directional.10
Footnote 10: The interaction between digital twin and AI for intelligent network control is discussed in [235]. Note that the definition of digital twins therein is different from ours.
The third and implicit connection between virtualization and AI is through resources. Holistic network virtualization requires extensive resources, including computing resource for virtual network functions, caching resource for storing digital twins, and communication resource for the synchronization between end users and their digital twins. Similarly, pervasive network intelligence also requires extensive computing resource and possibly other resources, e.g., communication resource for distributed training as mentioned in Section III. Therefore, the network resources need to be shared and coordinated between virtualization and AI functions. However, this does not mean that virtualization and AI functions simply compete for resources. Instead, they can help each other improve resource utilization efficiency. Digital twin paradigm may reduce the resource consumption of AI functions by providing high-importance data only. This can be achieved by the aforementioned digital twin model control. Meanwhile, creating a digital twin for every end user may be too resource-demanding for networks in the near future. Using AI to select representative end users for generating digital twins and optimizing digital twin models may reduce the resource consumption of maintaining and updating digital twins. One potential implementation is using AI to categorize end users and select a portion of users from each category for creating digital twins. Alternatively, since it may be more challenging to provide QoE guarantee for some end users than others, using AI to select such end users for creating digital twins can potentially reduce the resource consumption on digital twins for 6G networks.
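One way to realize the representative-user selection mentioned above is a simple clustering step; the sketch below uses plain k-means on hypothetical user feature vectors and virtualizes only the user nearest each centroid. The features, cluster count, and selection rule are all assumptions for illustration.

```python
import numpy as np

# Toy sketch: cluster end users by feature vectors (e.g., mobility, demand)
# and create digital twins only for the user nearest each cluster centroid.
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))      # hypothetical user features
k = 5
centroids = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):                # plain k-means iterations
    labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])
twin_users = [int(np.argmin(((X - c) ** 2).sum(-1))) for c in centroids]
print(twin_users)                  # representative users to virtualize
```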
### _Potential Network Architecture for 6G: A Summary_
This section has provided a potential network architecture for 6G, which integrates two key elements, i.e., pervasive network intelligence and holistic network virtualization. In the proposed network architecture, detailed network components, subsystems, and potential implementation have been discussed. Moreover, three types of interplay in the architecture are provided to characterize the proposed network architecture.
The proposed network architecture holds great potential for achieving advanced network management schemes and supporting AI services in 6G networks. Firstly, integrating digital twins and network slicing facilitates user-centric networking and improves the granularity of network management. Secondly, integrating data-driven methods and model-driven methods enables novel hybrid data-model driven methods, which have the potential to outperform existing network management methods in terms of adaptivity, granularity, and so on. Thirdly, leveraging the network slicing concept in AI services facilitates AI services targeting QoS performance guarantees.

Fig. 8: Interplay between virtualization and AI.
## V Challenges and Open Issues
Many challenges and open issues are yet to be addressed for holistic network virtualization and pervasive network intelligence in 6G. In the following, we present some key challenges and open issues.
### _Digital Twin_
The six-layer architecture in Subsection II-C provides a high-level design for integrating the digital twin paradigm into network virtualization. Open issues to be investigated for practical implementation of this architecture include quantitative performance characterization of digital twins, the optimal digital twin model, digital twin migration, and data security.
First, it is necessary to quantitatively characterize the network performance improvement from introducing digital twins, either from the perspective of QoS/QoE satisfaction or from the perspective of resource utilization. Second, level-one digital twin models configured by the centralized controller may be different for different edge networks to account for network heterogeneity, and how to determine effective digital twin models is a challenge. Third, the mobility of end users such as vehicles creates a need for updating and migrating digital twins across different edge networks, which requires further study. Last, ensuring the security of user data in the digital twin paradigm is yet another challenge. As the local and centralized network controllers have access to a vast amount of user data, developing proper security mechanisms for data collection, aggregation, and migration becomes essential. Readers are referred to [236, 237, 238, 326, 327, 328] for discussions on some of the aforementioned challenges, such as the heterogeneity and migration of digital twins, and more open issues related to digital twins in 6G.
### _Network Management Oriented Data Abstraction and Processing_
While digital twins provide data to enable AI for networking, including automated network slicing and AI-empowered network control, efficient data management can be challenging. First, it is necessary to develop data abstraction methods to aggregate the data with different levels of granularity for making different network management decisions. For instance, in network slicing, high-granularity data are required for determining the optimal network operation strategies and low-granularity data are sufficient for determining the optimal network planning strategies [11, 239]. How to determine the appropriate data granularity for different network management decisions is an open issue. A potential solution is to empirically adjust data granularity and the time scale for decision-making [240]. Meanwhile, as the number of variables and data types in network management can be huge, more scalable and efficient solutions are required. Second, while applying the connected AI solution for network management, the settings of intelligent modules, such as the selection of algorithms, the input and output attributes, and the connections among intelligent modules, should be configured to maximize the utilization of data with low communication and processing overhead, yet finding the optimal settings is challenging. The cooperation between model-driven and data-driven methods in intelligent modules can be a potential approach to address the challenge, yet how to support such cooperation among different types of intelligent modules requires further investigation. Third, as data can be generated, transmitted, and processed at different network stakeholders, configurable and regulation-compliant data management is also a challenge. The integration of the blockchain and privacy-enhancing technologies can be a potential solution, while the trade-offs between privacy preservation and processing efficiency need in-depth investigation. Readers are referred to [131, 4, 182, 234] for discussions on the aforementioned challenges, such as privacy preservation, AI model selection, intelligent modules, and more open issues about data abstraction and processing.
### _Model and Resource Orchestration_
Networking for AI in Subsection III-D can facilitate AI services in a network. One key issue is to optimize AI service performance, which requires judicious configuration of the network, including AI algorithm selection, data collection, and network resource allocation. The main challenge lies in modeling the relationship between AI performance and these network configurations. Establishing an accurate mathematical or empirical model requires extensive measurements in real-world networks. Even if establishing a model is viable, the model may be suitable only for a chosen AI algorithm. In addition, to adapt to network dynamics (e.g., rapidly fluctuating service demands), an online network configuration scheme is desirable. Since reinforcement learning algorithms are able to make online decisions in a dynamic environment, developing cost-effective reinforcement learning algorithms for high-dimensional network configuration problems can be a promising approach. For example, a reinforcement learning algorithm is developed for joint AI model selection and resource allocation in industrial IoT [202]. For more discussions on the above challenges, interested readers are referred to [175, 241, 242].
### _Training and Inference Coordination_
The concept of AI slice is proposed to meet specific QoS requirements of AI services in Subsection III-D. The training and inference stages for an AI service consume multi-dimensional network resources [131, 243]. In an AI slice, two subslices share the virtualized network resource pool, and hence resource reservation decisions for the two subslices are closely correlated. On the one hand, reserving abundant resources for the training subslice may help achieve a high training accuracy but potentially render resource insufficiency in the inference subslice, which can result in a long service latency. On the other hand, insufficient resource provisioning for the training subslice may yield a model with low accuracy and consequently create a bottleneck for inference accuracy. To optimize the performance of the AI service, resource reservation for training and inference subslices should be coordinated. Developing an accurate mathematical model to characterize the interplay between training and inference stages is difficult, since a large number of system factors should be taken into account. Hence, it is necessary to study efficient model-free approaches to characterize the interplay.
### _Energy Efficiency of AI_
With hundreds of neural network layers, thousands of neurons, and millions of parameters, state-of-the-art AI models usually consume extensive energy and incur substantial environmental costs.11 Improving energy efficiency has become a major issue for wide deployment of AI services. In addition, recent research shows that improving the accuracy of an AI model may come at an exponential increase in the computation, environmental and economic costs [245].12 Hence, deploying energy-efficient AI services in a network is necessary for reducing costs for the network operator and meeting environmental standards. Several model compression techniques, such as weight pruning [218], parameter quantization [246], and model compression [247], can be applied to alleviate the problem. In addition, hybrid data-model driven methods can train AI models with a reduced amount of data, which can also decrease energy consumption.
Footnote 11: The estimated carbon footprint of training a state-of-the-art natural language processing model is about five times the life emissions of an average car [244].
Footnote 12: It is estimated that reducing the classification error probability from \(11.5\%\) to \(5\%\) over the ImageNet dataset needs to increase computation from \(10^{14}\) to \(10^{19}\) Gflops, carbon emissions from \(10^{6}\) to \(10^{10}\) lbs, and economic costs from \(10^{6}\) to \(10^{11}\) USD [245], respectively.
### _Hybrid Data-Model Driven Methods_
The four options listed in Subsection IV-F provide our initial ideas for hybrid data-model driven methods. Related open issues to be investigated include the following. First, it is necessary to study how to determine which option to use and how to switch among options. Designing mechanisms for choosing and switching among options will allow networks to flexibly and adaptively integrate data-driven and model-driven methods. Second, for a chosen option, it is important to understand how much the data-driven and model-driven components affect the overall performance and how much impact they have on each other. For instance, in the mixing option, the AI service performance may depend on the combined choices of data-driven and model-driven methods, and finding a proper combination can be a challenge. Third, in addition to the four options as introduced, there should be other potential options for hybrid data-model driven methods, and identifying other promising options is an open issue of great importance. Last, due to the lack of explainability in existing data-driven methods, careful investigations and analysis should be directed to the management of critical network operations. The role of hybrid data-model driven methods in enhancing system robustness is an open issue that deserves further investigation. For more discussions on challenges in hybrid data-model driven methods for networks, interested readers are referred to [248, 249, 232].13
Footnote 13: An application of a hybrid approach in vehicular network simulation can be found in [250].
## VI Conclusion
Designing an architecture for future networks is challenging, especially when the use cases and defining techniques are still beneath the surface. Nevertheless, the evolution of networks through the previous generations demonstrates a necessity to support increasingly heterogeneous networks, diverse services, and stringent QoS/QoE requirements. This has been driving the trend of virtualization and generating significant interest in AI-driven networking. Recognizing the insufficiency of the existing scope and level of virtualization and AI for future 6G networks, we have presented a conceptual architecture design that integrates holistic network virtualization and pervasive network intelligence. To complement and solidify our overall network architecture, we have proposed several specific designs, including the six-layer holistic network virtualization based on digital twins and the connected AI solution for network management, as well as the ideas of AI slices and hybrid data-model driven methods. As a result, the proposed network architecture has the potential to achieve unprecedented scalability and flexibility due to the holistic network virtualization, as well as exceeding adaptivity and intelligence due to the pervasive network intelligence. Finally, we have identified some challenges and open issues related to the proposed architecture. We hope this study will lead to further discussions and developments on the architecture of 6G networks.
## Acknowledgement
The authors would like to thank Dr. Dongxiao Liu for helpful discussions on open issues related to data privacy and security.
|
2310.11355 | Spectral chaos bounds from scaling theory of maximally efficient
quantum-dynamical scrambling | A key conjecture about the evolution of complex quantum systems towards an
ergodic steady state, known as scrambling, is that this process acquires
universal features when it is most efficient. We develop a single-parameter
scaling theory for the spectral statistics in this scenario, which embodies
exact self-similarity of the spectral correlations along the complete
scrambling dynamics. We establish that the scaling predictions are matched by a
privileged stochastic process, and serve as bounds for other dynamical
scrambling scenarios, allowing one to quantify inefficient or incomplete
scrambling on all timescales. | Tara Kalsi, Alessandro Romito, Henning Schomerus | 2023-10-17T15:41:50Z | http://arxiv.org/abs/2310.11355v2 | # Scaling theory of maximally efficient quantum-dynamical scrambling
###### Abstract
A key conjecture about the evolution of complex quantum systems towards an ergodic steady state, known as scrambling, is that this process acquires universal features when it is most efficient. We develop a single-parameter scaling theory for this scenario, which embodies exact self-similarity of the spectral correlations along the complete scrambling dynamics. We establish that the scaling predictions are matched by a privileged stochastic process, and serve as bounds for other dynamical scrambling scenarios, allowing one to quantify inefficient or incomplete scrambling on all timescales.
A central theme in the study of complex quantum matter is to establish universal characteristics that transfer between systems and application domains. This theme unifies different areas of physics, spanning from its historic origins in nuclear physics [1; 2; 3; 4; 5; 6], to disordered and wave-chaotic electronic and photonic systems [7; 8; 9; 10; 11], to isolated interacting models that display many-body eigenstate thermalization [12; 13; 14; 15; 16; 17], and also provides insights into the black hole information paradox [18; 19; 20; 21; 22; 23]. Across all these settings, a universal endpoint of the dynamics can be defined in terms of random-matrix theory, and systems that approach this endpoint are described as being ergodic. Universal random-matrix behavior also sets benchmarks for systematic deviations reflecting the specific structure of a system, such as those observed in the interplay of short-range interactions, disorder, and conservation laws [10; 24; 25; 11]. Even for systems that establish ergodicity over time, the approach to this endpoint itself--the dynamical process known as scrambling--is system-dependent [26]. This leads to the emergence of characteristic time and energy scales imprinted onto the spectral statistics, but also to the concept of maximally chaotic systems, which are posited to display universal characteristics already for short times [27; 23]. This can be formalized in terms of the unitarity of the dynamics, which enforces a duality between times shorter and longer than the Heisenberg time [28; 29; 30; 31; 32; 33], implying that maximally ergodic long-time behavior provides universal bounds on the short-time scrambling dynamics. For maximal chaos, the conjectured state outside the horizon of a black hole serves as an important motivation [20; 21; 22; 23; 26; 27; 34; 35; 36; 37; 38], in which the information paradox becomes firmly linked with emerging signatures of discrete energy levels after the Heisenberg time.
In this work, we develop a predictive single-parameter scaling theory for the efficient scrambling dynamics of maximally chaotic systems and use this to obtain analytical benchmarks for their behavior over all timescales, which we find to be faithfully replicated in a paradigmatic quantum-dynamical process. This uncovers universality in the language of a powerful general framework, which relates all statistical details to a single intrinsic parameter [39]. Our scaling assumption is simple--we equate the only two invariants of the dynamics under the assumption that the Hilbert space has no further structure, i.e., that the dynamics are invariant under unitary basis changes. This ansatz integrates into a single-parameter version of a specific random-matrix ensemble, the Poisson kernel, which has been widely studied in static settings [7; 40; 41; 42], but in our scaling theory acquires a dynamical interpretation where it embodies exact self-similarity of the spectral correlations along the complete scrambling dynamics. We then make a connection to a second central object of random-matrix theory, the Dyson Brownian motion process [4], which similarly shifts its role from being a tool to study the stationary ergodic endpoint, to become a dynamical process in its own right. Utilizing exact analytical expressions for the scaling parameter, density of states, and spectral correlation functions, supplemented by numerical results, we find that the spectral data from this process agrees with the scaling theory and recovers its key features, such as the self-similarity of correlations along the flow. We contrast this with another unitarily invariant stochastic process, which displays clear deviations from the scaling bounds. As our theory manifestly preserves all unitary constraints, it emphasizes the role of functional relations linking the short and long-time dynamics, both in the universal regime as well as deviations away from it, from which we can draw broader conclusions about the approach to ergodicity in complex quantum matter.
_Objective and scaling ansatz._--The key quantity to capture both the universal and system-specific aspects of the dynamics over all timescales is the spectral form factor (SFF) [10; 25; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. For systems with a finite Hilbert-space dimension \(N\), the SFF can be defined directly in terms of the unitary time-evolution operator \(U(t)\),
\[K(t)\equiv\overline{|\operatorname{tr}U(t)|^{2}}, \tag{1}\]
where the overline denotes a suitable average, or, in some settings, a partition sum [37]. For complex quantum systems, it generally displays a dip down to unity over the scrambling regime, followed by a ramp up until the Heisenberg time, at which the discreteness of the level spectrum becomes resolved, which then is followed by a plateau. System-specific signatures can persist well into the ramp, while fast scramblers are expected to display universality already during the dip [57; 58; 27; 51]. Expressed in terms of the energy levels, the SFF captures their correlations at all scales, including level repulsion and spectral rigidity, and thus directly gives information about basic system properties such as integrability or time-reversal symmetry [59; 10; 11].
The spirit of our approach is to consider the SFF for a general process \(U(t)\to U(t+dt)=u(t;dt)U(t)\), in which the unitary matrix \(U(t)\) generating the dynamics is updated incrementally by unitary matrices \(u(t;dt)\simeq\openone\) over a small time step \(dt\). For processes in which \(u(t;dt)\) is invariant under unitary basis changes, there are two fundamental anti-Hermitian invariants, \(U^{\dagger}\frac{dU}{dt}\) and \(U-U^{\dagger}\). For maximally chaotic scrambling, we therefore propose the scaling assumption
\[U-U^{\dagger}=g(t)U^{\dagger}\frac{dU}{dt}\equiv U^{\dagger}\frac{dU}{da}, \tag{2}\]
which equates these invariants in the ensemble sense up to a time-dependent factor \(g(t)\), and then expresses this in terms of a single dynamical scaling parameter \(a(t)=\int^{t}(1/g(t^{\prime}))dt^{\prime}+a_{0}\). Upon integration, we obtain a parametrized ensemble,
\[U=(a\openone+V)(\openone+aV)^{-1}, \tag{3}\]
where \(V\) is uniform in the unitary group of degree \(N\). This ensemble is a single-parameter incarnation of the Poisson kernel, a matrix ensemble that previously appeared in stationary scattering settings subject to some constraint [40; 41; 7; 42], where its functional form is tied to a multiple-scattering expansion [60]. Here, we encounter it instead in the context of dynamics generated by a multiplicative composition law, where the dynamical flow of the scaling parameter \(a(t)\) will be of central importance.
_Interpretation of the scaling parameter._--Our scaling assumption reduces the matrix-generated scrambling dynamics to a single dynamical scaling parameter \(a\). In terms of this parameter, the ensemble (3) interpolates between action by the identity (\(U=\openone\)) at \(a=1\) and the random unitary matrix \(U=V\) at \(a=0\), i.e., the static ergodic endpoint defined by the circular unitary ensemble (CUE) of random-matrix theory (RMT). For intermediate times, we can equate \(a=N^{-1}\overline{\operatorname{tr}U}\) to characterize the motion of the center of mass of the eigenvalues \(\lambda_{n}\equiv\exp(i\phi_{n})\), capturing their expansion on the unit circle as ergodicity is established. This center-of-mass motion is illustrated in an individual realization \(U(t)\) in Fig. 1. The cloud of eigenvalues, initially centered at unity, begins to disperse around the unit circle, such that \(N^{-1}\operatorname{tr}U\) performs a stochastic trajectory towards the origin--the RMT result--where the center of mass of the eigenvalues is zero.
The expansion of the eigenvalues corresponds to a flow of the scaling mean density of states (see Appendix A),
\[\rho(\phi)=\frac{1}{2\pi}\frac{1-a^{2}}{1+a^{2}-2a\cos\phi}. \tag{4}\]
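The ensemble (3) and the density (4) can be checked directly by sampling; the following sketch (using numpy and scipy, with arbitrary illustrative parameters) draws Haar-random \(V\), forms \(U\), and compares the empirical eigenphase histogram with Eq. (4).

```python
import numpy as np
from scipy.stats import unitary_group

def sample_U(N, a, rng=None):
    """Draw U from the single-parameter Poisson kernel, Eq. (3)."""
    V = unitary_group.rvs(N, random_state=rng)
    return (a * np.eye(N) + V) @ np.linalg.inv(np.eye(N) + a * V)

N, a, samples = 16, 0.5, 2000
phi = np.concatenate([np.angle(np.linalg.eigvals(sample_U(N, a)))
                      for _ in range(samples)])
hist, edges = np.histogram(phi, bins=60, range=(-np.pi, np.pi), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rho = (1 - a**2) / (2 * np.pi * (1 + a**2 - 2 * a * np.cos(centers)))
print(np.max(np.abs(hist - rho)))  # -> 0 as `samples` grows
```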
The scaling ensemble (3) equips this expanding eigenvalue cloud with universal internal spectral statistics, induced via the matrix-valued Mobius transformation
\[U^{\prime}=\left(\frac{a-a^{\prime}}{1-aa^{\prime}}\openone+U\right)\left( \openone+\frac{a-a^{\prime}}{1-aa^{\prime}}U\right)^{-1} \tag{5}\]
between the ensembles with parameters \(a\) to \(a^{\prime}\). We now develop the scaling theory in terms of the spectral statistics, applicable over the whole dip-ramp-plateau scenario.
_Scaling theory of the spectral form factor._--The dynamical evolution set up here can be equipped with a definite Heisenberg time by considering the stroboscopic SFF \(K_{n}=\overline{|\operatorname{tr}U^{n}(T)|^{2}}\). This is often interpreted as Floquet dynamics, so that from a certain time \(T\) the evolution is repeated periodically, but it is well-defined independent of this interpretation. This gives us two timescales--the time \(T\) for the evolution along the scrambling dynamics, and the time \(nT\) resolving the spectral statistics established up to this point. Our objective is to relate both via the single parameter \(a\) as it flows from \(a=1\) to \(a=0\) with increasing \(T\).
We can carry out this program analytically. Within the scaling ensemble (3), we write (see Appendix C)
\[K_{n}=\left(\prod_{r}\int_{0}^{2\pi}\frac{d\psi_{r}}{2\pi}\right)\left|\sum_{ m}\left(\frac{a+e^{i\psi_{m}}}{1+ae^{i\psi_{m}}}\right)^{n}\right|^{2}\det(e^{i(p-q )\psi_{q}}) \tag{6}\]
in terms of the eigenvalues \(e^{i\psi_{m}}\) of \(V\), where the determinant arises from the joint density of the eigenphases \(\psi_{m}\) in the CUE.

Figure 1: Interpretation of the spectral scaling parameter \(a\) as the center of mass (red) of the eigenvalue distribution (blue), illustrated for a single time evolution generated by the multiplication of random unitary matrices of the form (8) (\(N=16\), \(dt=0.01\)). Panels (a-c) show snapshots after 10, 100, and 1000 time steps, while panel (d) shows the complete center-of-mass trajectory over 1000 time steps.

From this, we obtain the expression
\[K_{n} =N+\frac{N(N-1)}{2}a^{2n}-\sum_{q=1}^{N}(N-q)c_{q,n}^{2},\] \[c_{q,n} =\frac{1}{q!}\frac{d^{q}}{dv^{q}}\left.\frac{(a+v)^{n}}{(1+av)^{n} }\right|_{v=0}. \tag{7}\]
Equation (7) recovers the standard CUE result for \(a=0\), where \(c_{q,n}=\delta_{qn}\) such that \(K_{n}=\delta_{0n}N^{2}+\min(n,N)\) falls from \(K_{0}=N^{2}\) to \(K_{1}=1\) over the first stroboscopic time step, \(T\), before ramping up linearly to \(K_{N}=N\) at the stroboscopic Heisenberg time \(N\), after which it plateaus.
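Equation (7) is straightforward to evaluate numerically. In the sketch below (numpy only; parameters are illustrative), the Taylor coefficients \(c_{q,n}\) are read off by an FFT on the unit circle, which is numerically stable because \(|(a+v)/(1+av)|=1\) there, with an aliasing error that is exponentially small in the transform length.

```python
import numpy as np

def K_n(N, a, n, L=4096):
    """Stroboscopic SFF K_n from the scaling prediction, Eq. (7), for n >= 1."""
    v = np.exp(2j * np.pi * np.arange(L) / L)
    f = ((a + v) / (1 + a * v)) ** n
    c = np.fft.fft(f).real / L               # c[q] ~ c_{q,n}, aliasing O(a^L)
    q = np.arange(1, N + 1)
    return N + 0.5 * N * (N - 1) * a ** (2 * n) - np.sum((N - q) * c[1:N + 1] ** 2)

N = 16
for a in (0.0, 0.5, 0.9):                    # points along the scrambling flow
    print(a, [round(K_n(N, a, n), 2) for n in (1, 4, 16, 64)])
```

For \(a=0\) this reproduces the CUE ramp-plateau \(K_{n}=\min(n,N)\), while finite \(a\) produces the prolonged dip and delayed ramp discussed next.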
Figure 2(a) contrasts this behavior with the scenarios for finite values of \(a\). Tuning the scaling parameter \(a\) away from \(0\)--stopping the dynamics at time \(T\) short of maximally ergodic behavior--results in curves that initially continue to dip, and then take a longer time to recover the plateau. Therefore, incomplete scrambling dynamics at short times are translated into a long-time signal in the form of a modified ramp, demonstrating the consequences of not having established fully ergodic dynamics for the remainder of the time evolution. The time over which the curves continue to dip defines an effective ergodic time, while the time it takes to ramp up to the plateau defines an effective Heisenberg time. Crucially, within the scaling theory, these two timescales are directly linked via the scaling parameter \(a\).
This link is emphasized by the scaling relation between these results. The transformation (5) directly transfers into self-similar correlations of the eigenvalues \(\lambda_{m}\) along the flow. Unfolding the spectrum to a uniform density with Heisenberg time \(N\) collapses the SFF identically onto the RMT result, as illustrated in Fig. 2(b). Within the scaling ensemble, this collapse is exact, underlining both its scale invariance and single-parameter nature.
_Dyson's Brownian motion._--We now turn to the question of whether this single-parameter behavior within the ensemble can be replicated in a suitable unitary time evolution. Which dynamical process, if any, recovers the statistics of the scaling ensemble (3), parametrized by a single suitable time-dependent scaling parameter \(a\)? We argue that the answer lies in another paradigm of RMT, Dyson's Brownian motion (DBM).
DBM emerges as a natural candidate for fast scrambling in the context of quantum circuit models. These come in two main variants: random Haar circuits (e.g., [61; 62; 63; 64]) built out of fully ergodic gates from RMT, and Brownian circuits [65; 66; 67; 21; 68; 69], built from gates with randomly chosen Hamiltonians \(H(t)\) over small time steps \(dt\). Our scaling approach interpolates between both types of models for one of these gates, and so does the Brownian process applied for a finite time. This coincides with the DBM process, where the unitary time-evolution operator \(U(t)\) performs a random walk in the unitary group, sampling it uniformly according to the Haar measure. Originally, this process was introduced to facilitate RMT calculations [4], and has since served as a central tool in celebrated proofs of universality over a broad class of RMT models [70; 71]. Here, we consider it as a genuinely dynamical model with a specific initial condition, \(U(0)=\openone\). This is implemented by generating incremental unitary time steps
\[u(t;dt)=\left(\openone-\frac{iH(t)}{2}\sqrt{dt}\right)\left(\openone+\frac{iH (t)}{2}\sqrt{dt}\right)^{-1} \tag{8}\]
with an instantaneous Hamiltonian \(H(t)\) from the Gaussian unitary ensemble, given by normally distributed matrix elements satisfying \(\overline{H(t)_{lm}}=0,\,\overline{H(t)_{kl}H(t)_{mn}}=N^{-1}\delta_{kn}\delta_{lm}\).
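As a minimal dynamical sketch (assuming NumPy; `gue` and `dbm_step` are our own helper names, and the run parameters are illustrative), the following composes increments of the form (8) and compares the spectral center of mass, averaged over realizations, with the decay \(a(t)=e^{-t/2}\) discussed below.

```python
import numpy as np

def gue(N, rng):
    # GUE draw with the stated covariance E[H_kl H_mn] = N^{-1} delta_kn delta_lm
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / (2.0 * np.sqrt(N))

def dbm_step(U, dt, rng):
    # One DBM increment, Eq. (8): an exactly unitary Cayley transform of H(t)
    N = U.shape[0]
    B = 0.5j * np.sqrt(dt) * gue(N, rng)
    return (np.eye(N) - B) @ np.linalg.inv(np.eye(N) + B) @ U

N, dt, steps, reps = 16, 0.01, 100, 200
rng = np.random.default_rng(0)
a_emp = 0.0
for _ in range(reps):
    U = np.eye(N, dtype=complex)
    for _ in range(steps):
        U = dbm_step(U, dt, rng)
    a_emp += np.trace(U).real / (N * reps)   # ensemble-averaged center of mass
print(a_emp, np.exp(-steps * dt / 2))        # a(t) = exp(-t/2), here t = 1
```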
Within this process, we find that the scaling parameter \(a\) decays exponentially from unity to zero, \(a(t)=e^{-t/2}\), corresponding to a dimensionless decay rate \(\gamma_{0}=1/2\). This is accompanied by an exponential decay \(K_{1}(t)=(N^{2}-1)e^{-t}+1\) of the first-order SFF, describing the dip to unity with a decay rate \(\gamma_{1}=2\gamma_{0}\), which agrees with the scaling prediction (7) up to corrections \(O(N^{-2})\). The key question is whether the process recovers the complete spectral statistics encoded in the scaling forms (7) of \(K_{n}\), and displays self-similar spectral correlations up to standard unfolding, as in the scaling theory itself.
Figure 2: (a) Scaling predictions of the stroboscopic form factor \(K_{n}\) for maximally efficient scrambling, Eq. (7), for \(N=16\), where \(a\) describes different points along the scrambling flow. All curves display the paradigmatic dip-ramp-plateau shape. For \(a=0\), scrambling is complete, and the curves follow the RMT predictions of an ergodic system. Finite values of \(a\) describe earlier times along scrambling dynamics, resulting in effective ergodic and Heisenberg times that are linked by the scaling parameter \(a\). (b) Numerical sampling of the ensemble (\(10^{9}\) realizations) confirms that the points along the scrambling flow are linked by the transformation (5), which implies self-similar statistics and the exact collapse onto the RMT result after unfolding the spectrum according to \(\lambda_{m}\to(\lambda_{m}-a)/(1-a\lambda_{m})\), corresponding to setting \(a^{\prime}=0\) in Eq. (5).
This is analyzed in Fig. 3. The top panel shows the SFF after unfolding the DBM spectrum to the scaling mean density of states (4) (see Appendix B). We observe that this agrees with the scaling prediction (7) up to statistical fluctuations, over the whole range of the scaling parameter, hence, over the complete scrambling dynamics. Furthermore, upon fully unfolding the spectrum to a uniform mean density, we find perfect collapse of all data onto straight lines \(K_{n}=n\), which establishes agreement with the scaling theory down to the level of self-similarity under the flow, again on all scales of \(a\).
As we show next, this tight agreement including the higher orders of the SFF is a nontrivial statement about the DBM process, marking it out as a privileged model of fast scrambling among a wider class of dynamical models.
_Chaos bounds._--Our scaling ansatz (2) equates two unitarily invariant generators, and this invariance is also obeyed in DBM. Let us therefore consider more generally any dynamical evolution of this kind, induced by ensembles of generators that are invariant under rotations \(u(t;dt)\to W^{\dagger}u(t;dt)W\). Evaluating the average over \(W\) in the CUE, the first-order SFF incrementally updates as
\[K_{1}(t+dt)=K_{1}(t)+\frac{N^{2}-\overline{|\operatorname{tr}u(t;dt)|^{2}}}{N^ {2}-1}\left(1-K_{1}(t)\right), \tag{9}\]
resulting in an exponential decay with decay constant
\[\gamma_{1}=\lim_{dt\to 0}dt^{-1}(N^{2}-\overline{|\operatorname{tr}u(t;dt)|^{2}})/(N^{2} -1). \tag{10}\]
Studying this dip of the SFF in isolation, one could be led to believe that all systems within this more general class exhibit maximally chaotic scrambling. A first hint that this may not be the case is given by the decay of the scaling parameter itself, where we again observe an exponential decay, but with a decay rate
\[\gamma_{0}=\lim_{dt\to 0}(dt\,N)^{-1}\left(N-\overline{\operatorname{tr}u(t;dt)}\right) \tag{11}\]
that is not universally linked to \(\gamma_{1}\). Instead, the mathematical definitions of these quantities enforce the relation \(\gamma_{1}\leq 2\gamma_{0}\), again up to corrections \(O(N^{-2})\). The scaling forms (7) satisfy this constraint tightly, and also its extension to the decay rates of the higher-order SFFs, \(\gamma_{n}\leq 2n\gamma_{0}\). We can therefore view these scaling forms as lower bounds that are approached only for maximally scrambling dynamics, as represented, e.g., by DBM.
We exemplify this by modifying DBM, which mathematically corresponds to a Wiener process, into a Cauchy process, obtained from generators
\[u(t;dt)=\left(\sqrt{1-dt}\openone+V\right)\left(\sqrt{1-dt}V+\openone\right) ^{-1} \tag{12}\]
with \(V\) uniform in the unitary group of degree \(N\). This composes generators from the scaling ensemble multiplicatively into a time-evolution operator, which differs from the self-similarity mapping (5) governing the maximally efficient scaling flow. For this process, we find again \(\gamma_{0}=1/2\), while \(\gamma_{1}=1-N^{-1}+O(N^{-2})\) just falls short of the chaos bound stated above.
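For comparison with DBM, a sketch of the Cauchy-process increment (12) is given below (NumPy assumed; `haar_unitary` and `cauchy_step` are our names, and the Haar sampler is the standard QR-of-Ginibre construction). The last lines form a rough Monte Carlo estimate of \(\gamma_{1}\) via Eq. (10).

```python
import numpy as np

def haar_unitary(N, rng):
    # Haar-distributed unitary via QR of a complex Ginibre matrix, with phase fix
    Z = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

def cauchy_step(dt, N, rng):
    # One increment of the Cauchy process, Eq. (12)
    V = haar_unitary(N, rng)
    s = np.sqrt(1.0 - dt)
    return (s * np.eye(N) + V) @ np.linalg.inv(s * V + np.eye(N))

N, dt, reps = 16, 0.01, 10_000
rng = np.random.default_rng(1)
tr2 = np.mean([abs(np.trace(cauchy_step(dt, N, rng)))**2 for _ in range(reps)])
print((N**2 - tr2) / ((N**2 - 1) * dt))   # rough estimate of gamma_1 = 1 - 1/N + O(N^-2)
```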
Figure 4 reports the full scaling analysis of the higher-order SFFs, where we exploit the fact that the Cauchy process shares the same mean level density as the scaling theory, so that no unfolding is needed. As shown in panel (a), this now shows clear deviations, where the results rise significantly above the bounds set by the scaling theory. Panel (b) shows that these deviations persist when the spectrum is unfolded to a uniform density, which reveals that the spectral correlations are not self-similar along the scrambling dynamics. Therefore the framework established in this paper allows us to distinguish the effects of efficient yet incomplete scrambling up to a time \(T\), captured by a finite value of \(a\)--already present in Figs. 2 and 3--and intrinsically inefficient scrambling, captured by the departure from the scaling bounds.
Figure 3: Scaling analysis of scrambling in the DBM process, generated by Eq. (8) (\(N=16\), \(dt=0.01\), \(10^{5}\) realizations). (a) Spectral form factors \(K_{n}\) after unfolding the spectrum to the scaling mean density of states (4) at \(a=e^{-t/2}\), as a function of \(a\). There is agreement within statistical uncertainty with the analytical scaling predictions (7) (black curves). (b) Further unfolding the spectrum to uniform density collapses it onto the RMT prediction \(K_{n}=n\), verifying that DBM generates self-similar spectral statistics along the complete scrambling dynamics.

_Discussion._--In summary, we developed a quantitatively predictive single-parameter scaling theory of maximally chaotic scrambling dynamics, which embodies self-similarity of the spectral correlations along the whole process. The theory is amenable to a complete analytical treatment, delivering bounds for the decay of the spectral correlations on all scales. These bounds are tightly met by Dyson Brownian motion, which illuminates the physical content of the scaling theory, and underlines the privileged nature of this paradigmatic RMT process. Signatures of inefficient or incomplete scrambling in other scenarios are captured sensitively, revealing, for instance, that the purely exponential decay of spectral correlations observed in a wide class of scrambling models is in itself not a sufficient signature of maximally chaotic scrambling. The scaling theory also concretizes deeper conceptual features of general scrambling dynamics, such as the intimate link of short-time scrambling and long-time ergodicity enforced by the unitarity of this process. As chaotic scrambling is a fundamental tenet of complex quantum-matter phenomenology, this approach transfers to a wide range of physical domains. Interesting extensions would include the consideration of constrained dynamics, such as those obtained from symmetries placing systems into different universality classes, as well as examination of the extremal statistics at the spectral edge.
We are grateful to Amos Chan for his comments on the manuscript. This research was funded by EPSRC via Grant No. EP/T518037/1.
|
2301.07276 | Data thinning for convolution-closed distributions | We propose data thinning, an approach for splitting an observation into two
or more independent parts that sum to the original observation, and that follow
the same distribution as the original observation, up to a (known) scaling of a
parameter. This very general proposal is applicable to any convolution-closed
distribution, a class that includes the Gaussian, Poisson, negative binomial,
gamma, and binomial distributions, among others. Data thinning has a number of
applications to model selection, evaluation, and inference. For instance,
cross-validation via data thinning provides an attractive alternative to the
usual approach of cross-validation via sample splitting, especially in settings
in which the latter is not applicable. In simulations and in an application to
single-cell RNA-sequencing data, we show that data thinning can be used to
validate the results of unsupervised learning approaches, such as k-means
clustering and principal components analysis, for which traditional sample
splitting is unattractive or unavailable. | Anna Neufeld, Ameer Dharamshi, Lucy L. Gao, Daniela Witten | 2023-01-18T02:47:41Z | http://arxiv.org/abs/2301.07276v3 | # Data thinning for convolution-closed distributions
###### Abstract
We propose data thinning, an approach for splitting an observation into two or more independent parts that sum to the original observation, and that follow the same distribution as the original observation, up to a (known) scaling of a parameter. This very general proposal is applicable to any convolution-closed distribution, a class that includes the Gaussian, Poisson, negative binomial, gamma, and binomial distributions, among others. Data thinning has a number of applications to model selection, evaluation, and inference. For instance, cross-validation via data thinning provides an attractive alternative to the usual approach of cross-validation via sample splitting, especially in unsupervised settings in which the latter is not applicable. In simulations and in an application to single-cell RNA-sequencing data, we show that data thinning can be used to validate the results of unsupervised learning approaches, such as k-means clustering and principal components analysis.
## 1 Introduction
As scientists fit increasingly complex models to their data, there is an ever-growing need for out-of-the-box methods that can be used to validate these models. In many settings, the most natural option is sample splitting, in which the observations are split into a training set, used to fit a model, and a test set, used to validate it (Hastie et al., 2009). Sample splitting can also be applied to conduct inference after model selection (Rinaldo et al., 2019). However, in some settings, sample splitting is neither applicable nor desirable, and alternative approaches are necessary. In this paper, we consider an alternative approach that splits a single observation \(X\) into independent parts that follow the same distribution as \(X\).
It has recently been shown that we can split \(X\sim N(\mu,\sigma^{2})\) with known \(\sigma^{2}\) into two independent Gaussian random variables (Rasines and Young, 2022; Leiner et al., 2022; Oliveira et al., 2021), and \(X\sim\text{Poisson}(\lambda)\) into independent Poisson random variables (Neufeld et al., 2022; Leiner et al., 2022). However, outside of these two distributions, no proposals are available to split a random variable into independent parts that follow the same distribution as the original random variable. Leiner et al. (2022) proposed data fission, a general-purpose approach to decompose \(X\) into two parts, \(X^{(1)}\) and \(X^{(2)}\), such that (i) \(X^{(1)}\) and \(X^{(2)}\) can together be used to reconstruct \(X\), and (ii) the joint distribution of \(\left(X^{(1)},X^{(2)}\right)\) is tractable. However, the resulting \(X^{(1)}\) and \(X^{(2)}\) are not independent, and typically do not follow the same distribution as \(X\). These considerations complicate the application of data fission to model validation. We elaborate on these points in Section A of the supplementary materials.
In this paper, we propose data thinning, a recipe for decomposing an observation \(X\) into two parts, \(X^{(1)}\) and \(X^{(2)}\), such that (i) \(X=X^{(1)}+X^{(2)}\), (ii) \(X^{(1)}\) and \(X^{(2)}\) are independent, and (iii) \(X^{(1)}\) and \(X^{(2)}\) follow the same distribution as \(X\), up to a (known) scaling of a
parameter. Critically, properties (ii) and (iii) guarantee that this decomposition is useful in applied settings. For instance, to evaluate the suitability of a model for \(X\), we can fit it to \(X^{(1)}\) (which follows the same distribution as \(X\)), and can validate it using \(X^{(2)}\) (which also follows the same distribution, and furthermore is independent of \(X^{(1)}\)). Our recipe can be applied to any distribution that is convolution-closed (Joe, 1996): this includes the multivariate Gaussian, Poisson, negative binomial, gamma, binomial, and multinomial distributions, among others. Thus, our work drastically expands the set of distributions that can be split into independent parts, and provides a unified lens through which to view seemingly unrelated approaches. Furthermore, data thinning can be used to decompose \(X\) into more than two independent random variables.
We illustrate our proposal with the following example, which shows that a gamma random variable can be thinned into \(M\) independent gamma random variables.
**Example 1.1** (Gamma decomposition into \(M\) components, data thinning).: _Suppose that \(X\sim\mathrm{Gamma}(\alpha,\beta)\), where \(\beta\) is unknown. We take \((X^{(1)},\ldots,X^{(M)})=XZ\), where \(Z\sim\mathrm{Dirichlet}(\alpha/M,\ldots,\alpha/M)\). Then \(X^{(1)},\ldots,X^{(M)}\) are mutually independent, they sum to \(X\), and each is marginally drawn from a \(\mathrm{Gamma}(\alpha/M,\beta)\) distribution._
In other words, data thinning allows us to decompose a \(\mathrm{Gamma}(\alpha,\beta)\) random variable, for which \(\beta\) is unknown, into \(M\) independent gamma random variables, \(X^{(1)},\ldots,X^{(M)}\). Therefore, fitting a model to \(X-X^{(m)}\) and validating it using \(X^{(m)}\) is straightforward.
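As a quick empirical check of Example 1.1 (a minimal sketch assuming NumPy; the variable names are ours), one can thin gamma draws with a Dirichlet multiplier and verify the fold marginals and their mutual independence:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, M, n = 5.0, 2.0, 3, 200_000

x = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)   # X ~ Gamma(alpha, rate beta)
Z = rng.dirichlet([alpha / M] * M, size=n)             # Z ~ Dirichlet(alpha/M, ..., alpha/M)
folds = x[:, None] * Z                                 # (X^(1), ..., X^(M)) = X * Z

# Each fold should be Gamma(alpha/M, beta), and the folds should be independent
print(folds.mean(axis=0), (alpha / M) / beta)          # means ~ (alpha/M)/beta
print(folds.var(axis=0), (alpha / M) / beta**2)        # variances ~ (alpha/M)/beta^2
print(np.corrcoef(folds, rowvar=False))                # off-diagonals ~ 0
```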
Data thinning can be applied to any problem for which sample splitting might be considered. In this paper, we specifically focus on its use in model assessment and validation. While this approach is equally applicable to both supervised and unsupervised learning, in our numerical studies we focus on the unsupervised learning setting, in which the usual
cross-validation via sample splitting approach cannot be directly applied (see, e.g. Owen and Perry, 2009, Fu and Perry, 2020). We show that cross-validation via data thinning provides an attractive alternative.
## 2 The data thinning proposal
### 2.1 A review of convolution-closed distributions
We begin by defining a convolution-closed distribution (Jørgensen and Song, 1998).
**Definition 1** (Convolution-closed).: _Let \(F_{\lambda}\) denote a distribution indexed by a parameter \(\lambda\) in parameter space \(\Lambda\). Let \(X^{\prime}\sim F_{\lambda_{1}}\) and \(X^{\prime\prime}\sim F_{\lambda_{2}}\) with \(X^{\prime}\perp\!\!\!\perp X^{\prime\prime}\). If \(X^{\prime}+X^{\prime\prime}\sim F_{\lambda_{1}+\lambda_{2}}\) whenever \(\lambda_{1}+\lambda_{2}\in\Lambda\), then \(F_{\lambda}\) is convolution-closed in the parameter \(\lambda\)._
Many well-known distributions are convolution-closed. While the Poisson(\(\lambda\)) distribution is convolution-closed in its single parameter \(\lambda\) and the N(\(\mu,\sigma^{2}\)) distribution is convolution-closed in the two-dimensional parameter (\(\mu,\sigma^{2}\)), other distributions, such as the gamma, are convolution-closed in just one parameter with the other parameter(s) held fixed. Table 1 provides details about some well-known convolution-closed distributions.
**Remark 1** (Expectation is linear in \(\lambda\)).: _Let \(F_{\lambda}\) for \(\lambda\in\Lambda\) be a convolution-closed distribution. By definition, if \(X^{\prime}\sim F_{\lambda_{1}}\) and \(X^{\prime\prime}\sim F_{\lambda_{2}}\) and \(\lambda_{1}+\lambda_{2}\in\Lambda\), then \(X^{\prime}+X^{\prime\prime}\sim F_{\lambda_{1}+\lambda_{2}}\). If these distributions have first moments, then \(\text{E}[X^{\prime}+X^{\prime\prime}]=\text{E}[X^{\prime}]+\text{E}[X^{\prime \prime}]\). Thus, outside of contrived counterexamples, \(\text{E}[X]\) for \(X\sim F_{\lambda}\) is a linear function of the parameter \(\lambda\). We say distributions whose expectation is linear in \(\lambda\) (e.g. all distributions in Table 1) satisfy the linear expectation property._
For a convolution-closed distribution \(F_{\lambda}\), suppose that \(X^{\prime}\sim F_{\lambda_{1}}\) and \(X^{\prime\prime}\sim F_{\lambda_{2}}\) with \(X^{\prime}\perp\!\!\!\perp X^{\prime\prime}\). Let \(G_{\lambda_{1},\lambda_{2},x}\) denote the conditional distribution of \(X^{\prime}\mid X^{\prime}+X^{\prime\prime}=x\). The density of the distribution \(G_{\lambda_{1},\lambda_{2},x}\) can be written down for any \(F_{\lambda}\) with a known density function (Jørgensen, 1992). Furthermore, it turns out that \(G_{\lambda_{1},\lambda_{2},x}\) has a simple closed form for several of the well-known distributions from Table 1; see Table 2. For example, if \(F_{\lambda}\) is the Poisson(\(\lambda\)) distribution, then \(G_{\lambda_{1},\lambda_{2},x}\) is the Binomial \((x,\lambda_{1}/(\lambda_{1}+\lambda_{2}))\) distribution.
### 2.2 Data thinning
Recall from Section 2.1 that \(G_{\lambda_{1},\lambda_{2},x}\) is the conditional distribution of \(X^{\prime}\mid X^{\prime}+X^{\prime\prime}=x\), where \(X^{\prime}\sim F_{\lambda_{1}}\) and \(X^{\prime\prime}\sim F_{\lambda_{2}}\) with \(X^{\prime}\perp\!\!\!\perp X^{\prime\prime}\). We now introduce our proposal.
**Algorithm 1** (Data thinning).: _Observe a realization \(x\) of \(X\sim F_{\lambda}\), where the distribution \(F_{\lambda}\) is convolution-closed in \(\lambda\) with parameter space \(\Lambda\). For any value of \(\epsilon\in(0,1)\) such that \(\epsilon\lambda\in\Lambda\) and \((1-\epsilon)\lambda\in\Lambda\), first draw \(X^{(1)}\mid X=x\sim G_{\epsilon\lambda,(1-\epsilon)\lambda,x}\), and then let \(X^{(2)}=X-X^{(1)}\)._
| Distribution | Notes |
| --- | --- |
| \(X\sim\text{Poisson}(\lambda)\), where \(\text{E}[X]=\lambda\) and \(\text{Var}(X)=\lambda\). | Convolution-closed in \(\lambda\). |
| \(X\sim\text{N}(\mu,\sigma^{2})\), where \(\text{E}[X]=\mu\) and \(\text{Var}(X)=\sigma^{2}\). | Convolution-closed in \((\mu,\sigma^{2})\). |
| \(X\sim\text{NegativeBinomial}(r,p)\), where \(\text{E}[X]=r\frac{1-p}{p}\) and \(\text{Var}(X)=r\frac{1-p}{p^{2}}\). | Convolution-closed in \(r\) if \(p\) is fixed. |
| \(X\sim\text{Gamma}(\alpha,\beta)\), where \(\text{E}[X]=\frac{\alpha}{\beta}\) and \(\text{Var}(X)=\frac{\alpha}{\beta^{2}}\). | Convolution-closed in \(\alpha\) if \(\beta\) is fixed. |
| \(X\sim\text{Binomial}(r,p)\), where \(\text{E}[X]=rp\) and \(\text{Var}(X)=rp(1-p)\). | Convolution-closed in \(r\) if \(p\) is fixed. |
| \(X\sim\text{InverseGaussian}(\mu w,\lambda w^{2})\), with \(\text{E}[X]=\mu w\) and \(\text{Var}(X)=\frac{w\mu^{3}}{\lambda}\). | Convolution-closed in \(w\) if \(\mu\) and \(\lambda\) are fixed. |
| \(X\sim\text{GeneralizedPoisson}(\lambda,\theta)\), see Jørgensen and Song (1998) for parameterization. | Convolution-closed in \(\lambda\) if \(\theta\) is fixed. |
| \(X\sim\text{Tweedie}_{p}(\lambda,\theta)\), see Jørgensen and Song (1998) for parameterization. | Convolution-closed in \(\lambda\) if \(\theta\) and \(p\) are fixed. |
| \(X\sim\text{N}_{k}(\mu,\Sigma)\), with \(\text{E}[X]=\mu\) and \(\text{Var}(X)=\Sigma\). | Convolution-closed in \((\mu,\Sigma)\). |
| \(X\sim\text{Multinomial}_{k}(r,p)\), with \(\text{E}[X]=rp\) and \(\text{Var}(X)=r\left(\text{diag}(p)-pp^{T}\right)\). | Convolution-closed in \(r\) if \(p\) is fixed. |

Table 1: A partial list of convolution-closed distributions. The last two rows contain multivariate distributions. The results in each row are easily verifiable. The generalized Poisson and Tweedie distributions are written in their additive exponential dispersion family parameterization; see Jørgensen and Song (1998) for details.

We now introduce our main theorem, which is motivated by a proposal by Joe (1996) to construct autoregressive time series processes with known marginal distributions.
**Theorem 1**.: _Suppose that we apply Algorithm 1 to a realization \(x\) of \(X\sim F_{\lambda}\). Then, the following results hold: (i) \(X^{(1)}\sim F_{\epsilon\lambda}\) and \(X^{(2)}\sim F_{(1-\epsilon)\lambda}\); (ii) \(X^{(1)}\perp\!\!\!\perp X^{(2)}\); (iii) If \(F_{\lambda}\) satisfies the linear expectation property (Remark 1), then \(\text{E}[X^{(1)}]=\epsilon\,\text{E}[X]\) and \(\text{E}[X^{(2)}]=(1-\epsilon)\,\text{E}[X]\)._
Theorem 1 is proven in Section B.1 of the supplementary materials. The intuition for parts (i) and (ii) is as follows: if \(X\sim F_{\lambda}\), then \(X\) could have arisen as the sum of two independent random variables \(X^{\prime}\sim F_{\lambda_{1}}\) and \(X^{\prime\prime}\sim F_{\lambda_{2}}\), with \(\lambda_{1}+\lambda_{2}=\lambda\). Algorithm 1 works backwards to undo this sum by generating \(X^{(1)}\) and \(X^{(2)}\) that follow the same distribution as \(X^{\prime}\) and \(X^{\prime\prime}\). Part (iii) follows from Remark 1. As we will see in Section 2.3, \(\epsilon\in(0,1)\) is a tuning parameter that governs a tradeoff between how much information is in \(X^{(1)}\) as opposed to \(X^{(2)}\).
Theorem 1 guarantees that the decomposition provided by Algorithm 1 satisfies the goals given in Section 1: namely \(X=X^{(1)}+X^{(2)}\), \(X^{(1)}\perp\!\!\!\perp X^{(2)}\), and \(X^{(1)}\) and \(X^{(2)}\) follow the same distribution as \(X\), up to a (known) scaling of a parameter. Table 2 summarizes the data thinning proposal for several well-known distributions. The proposal in this paper extends well beyond this set of distributions, although in some cases the conditional distribution \(G_{\lambda_{1},\lambda_{2},x}\) may not have a recognizable form.
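To make Algorithm 1 concrete, here is a minimal NumPy sketch of two rows of Table 2 (the Poisson and Gaussian cases; variable names ours). In each case the empirical moments of \(X^{(1)}\) and the near-zero correlation between \(X^{(1)}\) and \(X^{(2)}\) reflect parts (i) and (ii) of Theorem 1:

```python
import numpy as np

rng = np.random.default_rng(2)
eps, n = 0.3, 100_000

# Poisson row of Table 2: X^(1) | X = x ~ Binomial(x, eps), so X^(1) ~ Poisson(eps * lam)
lam = 10.0
x = rng.poisson(lam, size=n)
x1 = rng.binomial(x, eps)
x2 = x - x1
print(x1.mean(), eps * lam)                  # ~ 3.0
print(np.corrcoef(x1, x2)[0, 1])             # ~ 0: the two folds are independent

# Gaussian row (sigma^2 known): X^(1) | X = x ~ N(eps * x, eps * (1 - eps) * sigma^2)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=n)
x1 = rng.normal(eps * x, np.sqrt(eps * (1 - eps)) * sigma)
x2 = x - x1
print(x1.var(), eps * sigma**2)              # X^(1) ~ N(eps * mu, eps * sigma^2)
print(np.corrcoef(x1, x2)[0, 1])             # ~ 0
```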
**Remark 2**.: _Some of the decompositions presented in Table 2 require knowledge of an additional parameter that is not of primary interest. For example, like the independent decomposition of the Gaussian given in Leiner et al. (2022) and Rasines and Young (2022), thinning the \(\mathrm{N}(\mu,\sigma^{2})\) distribution requires knowledge of \(\sigma^{2}\). In Section 2.4, we explore the implications of performing data thinning in the presence of an unknown nuisance parameter._
**Remark 3**.: _Table 2 indicates that thinning the \(\mathrm{Binomial}(r,p)\) distribution or the \(\mathrm{Multinomial}(r,p)\) distribution requires that \(\epsilon r\) take on an integer value. This is because these distributions are not infinitely divisible [19]. This restriction becomes more limiting in the extension to multiple folds given in Section 3, and prevents us from thinning the Bernoulli distribution or the categorical distribution._
We now give an example of an application where data thinning is useful in practice.
**Example 2.1** (Model evaluation for unsupervised learning using data thinning).: _Suppose we observe \(X_{ij}\) for \(i=1,\ldots,n\) and \(j=1,\ldots,d\), where either each \(X_{ij}\) is drawn independently from a univariate convolution-closed distribution that satisfies the linear expectation property from Remark 1, or else each row \((X_{i1},\ldots,X_{id})^{T}\) is drawn independently from a multivariate convolution-closed distribution that satisfies the linear expectation property._
_We wish to evaluate \(\hat{\mu}(X)\) obtained from unsupervised learning (e.g. clustering) on \(X\), as an estimator for \(\text{E}[X]\). Computing a loss function between \(\hat{\mu}(X)\) and \(X\) is unsatisfactory, since the loss function will take on a small value if we overfit the mean. Instead, we apply Algorithm 1 with \(\epsilon\in(0,1)\) to either each element or each row in \(X\), such that each element \(X_{ij}\) is thinned into \(X_{ij}^{(1)}\) and \(X_{ij}^{(2)}\). We compute \(\hat{\mu}(X^{(1)})\) by applying unsupervised learning to \(X^{(1)}\) (step 1). This provides an estimator of \(\text{E}[X^{(1)}]=\epsilon\,\text{E}[X]\) (Theorem 1, part (iii)). We then compute a loss function between \(\hat{\mu}(X^{(1)})\) and \(X^{(2)}\) (step 2). The independence between \(X^{(1)}\) and \(X^{(2)}\) prevents the loss function from taking on a small value due to overfitting._
In Example 2.1, if \(\epsilon=0.5\), then \(\text{E}[X^{(1)}]=\text{E}[X^{(2)}]=0.5\,\text{E}[X]\). Thus, \(\hat{\mu}(X^{(1)})\) is an estimator of \(\text{E}[X^{(2)}]\), and so devising a suitable loss function in step 2 is straightforward. If \(\epsilon\neq 0.5\), then \(\hat{\mu}(X^{(1)})\) is a plug-in estimator of \(\epsilon/(1-\epsilon)\,\text{E}[X^{(2)}]\) (Theorem 1); we discuss this further in Section 2.3.
### 2.3 Role of the parameter \(\epsilon\)
In Algorithm 1, the parameter \(\epsilon\) governs a trade-off between the information in \(X^{(1)}\) and \(X^{(2)}\).
**Example 2.2** (Fisher information in thinned Poisson distribution).: _Let \(X\sim\mathrm{Poisson}(\lambda)\). We thin \(X\) to obtain \(X^{(1)}\sim\mathrm{Poisson}(\epsilon\lambda)\) and \(X^{(2)}\sim\mathrm{Poisson}((1-\epsilon)\lambda)\). Let \(I_{X}(\lambda)\) denote the Fisher information contained in \(X\) about the parameter \(\lambda\). Then \(I_{X}(\lambda)=1/\lambda\), \(I_{X^{(1)}}(\lambda)=\epsilon I_{X}(\lambda)\), and \(I_{X^{(2)}}(\lambda)=(1-\epsilon)I_{X}(\lambda)\)._
**Example 2.3** (Fisher information in thinned binomial distribution).: _Let \(X\sim\mathrm{Binomial}(r,p)\). We thin \(X\) to obtain \(X^{(1)}\sim\mathrm{Binomial}(\epsilon r,p)\) and \(X^{(2)}\sim\mathrm{Binomial}((1-\epsilon)r,p)\). Let \(I_{X}(p)\) denote the Fisher information contained in \(X\) about the parameter \(p\). Then \(I_{X}(p)=r/(p(1-p))\), \(I_{X^{(1)}}(p)=\epsilon I_{X}(p)\), and \(I_{X^{(2)}}(p)=(1-\epsilon)I_{X}(p)\)._
Similar results hold for other distributions in Table 2. Intuitively, as \(\epsilon\) increases, the amount of information in \(X^{(1)}\) about the parameter of interest increases, and the amount of information in \(X^{(2)}\) decreases. This has implications for Example 2.1: as \(\epsilon\) increases, the quality of the estimator of the expected value increases (step 1), but the information available for computing the loss between this estimator and \(X^{(2)}\) decreases (step 2).

| Distribution of \(X\) | Generate \(X^{(1)}\mid X=x\) as: | Dist. of \(X^{(1)}\) | Notes |
| --- | --- | --- | --- |
| \(\mathrm{Poisson}(\lambda)\) | Draw \(X^{(1)}\mid X=x\sim\mathrm{Binomial}(x,\epsilon)\). | \(\mathrm{Poisson}(\epsilon\lambda)\) | |
| \(\mathrm{N}(\mu,\sigma^{2})\) | Draw \(X^{(1)}\mid X=x\sim\mathrm{N}(\epsilon x,\epsilon(1-\epsilon)\sigma^{2})\). | \(\mathrm{N}(\epsilon\mu,\epsilon\sigma^{2})\) | \(\sigma^{2}\) must be known. |
| \(\mathrm{NegativeBinomial}(r,p)\) | Draw \(X^{(1)}\mid X=x\sim\mathrm{BetaBinomial}(x,\epsilon r,(1-\epsilon)r)\). | \(\mathrm{NegativeBinomial}(\epsilon r,p)\) | \(r\) must be known. |
| \(\mathrm{Gamma}(\alpha,\beta)\) | Draw \(Z\sim\mathrm{Beta}(\epsilon\alpha,(1-\epsilon)\alpha)\), and let \(X^{(1)}=x\cdot Z\). | \(\mathrm{Gamma}(\epsilon\alpha,\beta)\) | \(\alpha\) must be known. |
| \(\mathrm{Exponential}(\lambda)\) | Draw \(Z\sim\mathrm{Beta}(\epsilon,1-\epsilon)\), and let \(X^{(1)}=x\cdot Z\). | \(\mathrm{Gamma}(\epsilon,\lambda)\) | |
| \(\mathrm{Binomial}(r,p)\) | Draw \(X^{(1)}\mid X=x\sim\mathrm{Hypergeometric}(\epsilon r,(1-\epsilon)r,x)\). | \(\mathrm{Binomial}(\epsilon r,p)\) | \(r\) must be known; \(\epsilon r\) must be an integer. |
| \(\mathrm{N}_{k}(\mu,\Sigma)\) | Draw \(X^{(1)}\mid X=x\sim\mathrm{N}(\epsilon x,\epsilon(1-\epsilon)\Sigma)\). | \(\mathrm{N}_{k}(\epsilon\mu,\epsilon\Sigma)\) | \(\Sigma\) must be known. |
| \(\mathrm{Multinomial}_{k}(r,p)\) | Draw \(X^{(1)}\mid X=x\sim\mathrm{MultivariateHypergeometric}(x,\epsilon r)\). | \(\mathrm{Multinomial}_{k}(\epsilon r,p)\) | \(r\) must be known; \(\epsilon r\) must be an integer. |

Table 2: Details of data thinning for several well-known distributions, using the parameterizations given in Table 1. While the exponential distribution itself is not convolution-closed in its single parameter, recognizing it as a special case of the gamma distribution with known \(\alpha=1\) yields a decomposition. In all cases, the distribution of \(X^{(2)}\) matches that of \(X^{(1)}\), with \(\epsilon\) replaced by \((1-\epsilon)\), and \(X^{(1)}\perp\!\!\!\perp X^{(2)}\). In the multinomial row, \(\mathrm{MultivariateHypergeometric}(x,\epsilon r)\) denotes sampling \(\epsilon r\) of the \(r\) trials without replacement, with \(x_{j}\) trials available in category \(j\).
In Section 2.2, we mentioned that the loss function in step 2 of Example 2.1 must be chosen with care when \(\epsilon\neq 0.5\). This can be seen in the following example.
**Example 2.4** (Example 2.1 with mean squared error loss).: _Consider step 2 of Example 2.1 with mean squared error loss. Since \(\text{E}[X^{(2)}]=(1-\epsilon)/\epsilon\times\text{E}[X^{(1)}]\), we compute the loss as_
\[\frac{1}{nd}\left\|X^{(2)}-\frac{1-\epsilon}{\epsilon}\hat{\mu}(X^{(1)}) \right\|_{F}^{2},\]
_where the factor of \((1-\epsilon)/\epsilon\) turns an estimate of \(\text{E}[X^{(1)}]\) into an estimate of \(\text{E}[X^{(2)}]\)._
As we will see in Section 4, we can also use alternative loss functions, such as negative log likelihood, when considering Example 2.1.
Example 2.1 focuses on using data thinning to evaluate an estimator, and Sections 4 and 5 of this paper focus on model selection and assessment. However, data thinning can also be applied in a variety of other settings, such as inference after variable selection in regression (Leiner et al., 2022). In these settings, the parameter \(\epsilon\) still governs a trade-off between the information available in \(X^{(1)}\), used for selection, and \(X^{(2)}\), used for inference.
### 2.4 Effect of unknown nuisance parameters
For several of the distributions in Table 2, data thinning requires knowledge of a nuisance parameter. For example, thinning a \(\text{N}(\mu,\sigma^{2})\) distribution requires knowledge of \(\sigma^{2}\).
We now consider what happens when we perform data thinning on normally-distributed data using an incorrect value of the variance. We refer to this incorrect value as \(\tilde{\sigma}^{2}\).
**Proposition 1**.: _Suppose that we observe \(x\) from \(X\sim\mathrm{N}(\mu,\sigma^{2})\). We draw \(X^{(1)}\mid X=x\sim\mathrm{N}\left(\epsilon x,\epsilon(1-\epsilon)\tilde{\sigma }^{2}\right)\), for some \(\tilde{\sigma}\) that is not a function of \(x\), and let \(X^{(2)}=X-X^{(1)}\). Then: (i) \(X^{(1)}\sim\mathrm{N}\left(\epsilon\mu,\epsilon^{2}\sigma^{2}+\epsilon(1- \epsilon)\tilde{\sigma}^{2}\right)\), (ii) \(X^{(2)}\sim\mathrm{N}\left((1-\epsilon)\mu,(1-\epsilon)^{2}\sigma^{2}+\epsilon (1-\epsilon)\tilde{\sigma}^{2}\right)\), and (iii) \(\mathrm{cov}\left(X^{(1)},X^{(2)}\right)=\epsilon(1-\epsilon)\left(\sigma^{2}- \tilde{\sigma}^{2}\right)\)._
Proposition 1(iii) indicates that if we apply data thinning with not enough noise (\(\tilde{\sigma}^{2}<\sigma^{2}\)), then \(X^{(1)}\) and \(X^{(2)}\) are positively correlated. On the other hand, if we apply data thinning with too much noise (\(\tilde{\sigma}^{2}>\sigma^{2}\)), then \(X^{(1)}\) and \(X^{(2)}\) are negatively correlated. Similar results hold for the negative binomial distribution and the gamma distribution.
**Proposition 2**.: _Suppose that we observe \(x\) from \(X\sim\mathrm{NegativeBinomial}(r,p)\). We draw \(X^{(1)}\mid X=x\sim\mathrm{BetaBin}\left(x,\epsilon\tilde{r},(1-\epsilon) \tilde{r}\right)\) for some \(\tilde{r}\) that is not a function of \(x\), and let \(X^{(2)}=X-X^{(1)}\). Then \(\mathrm{cov}\left(X^{(1)},X^{(2)}\right)=\epsilon(1-\epsilon)r\left(\frac{1- p}{p}\right)^{2}\left(1-\frac{r+1}{\tilde{r}+1}\right).\)_
**Proposition 3**.: _Suppose that we observe \(x\) from \(X\sim\mathrm{Gamma}(\alpha,\beta)\). We let \(X^{(1)}=x\times Z\), where \(Z\sim\mathrm{Beta}\left(\epsilon\tilde{\alpha},(1-\epsilon)\tilde{\alpha}\right)\) for some \(\tilde{\alpha}\) that is not a function of \(x\). We let \(X^{(2)}=X-X^{(1)}\). Then \(\mathrm{cov}\left(X^{(1)},X^{(2)}\right)=\epsilon(1-\epsilon)\frac{\alpha}{ \beta^{2}}\left(1-\frac{\alpha+1}{\tilde{\alpha}+1}\right).\)_
Propositions 1-3 are proven in Section B.2. Figure 1 verifies these results empirically. The results in this section assume that \(\tilde{\sigma}\), \(\tilde{r}\), and \(\tilde{\alpha}\) are not a function of \(X\). In practice, one might estimate the unknown parameters \(\sigma\), \(r\), and \(\alpha\) using additional data.
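The covariance formula in Proposition 1(iii) is easy to check by simulation. A minimal NumPy sketch (variable names ours), thinning with a deliberately wrong variance \(\tilde{\sigma}^{2}\):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, sigma_wrong, eps, n = 0.0, 1.0, 1.5, 0.4, 1_000_000

x = rng.normal(mu, sigma, size=n)
# Thin with the *wrong* variance sigma_wrong^2, as in Proposition 1
x1 = rng.normal(eps * x, np.sqrt(eps * (1 - eps)) * sigma_wrong)
x2 = x - x1

print(np.cov(x1, x2)[0, 1])                            # empirical covariance
print(eps * (1 - eps) * (sigma**2 - sigma_wrong**2))   # Proposition 1(iii): -0.3
```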
## 3 Multifold data thinning
Data thinning involves decomposing \(X\) into \(X^{(1)}\) and \(X^{(2)}\), which each have the same distribution as \(X\) (up to a known parameter scaling). It can be applied recursively to create \(M\) independent data folds, \(X^{(1)},\ldots,X^{(M)}\), that sum to \(X\), as in the following example.
**Example 3.1** (Recursive thinning of the normal distribution).: _Let \(x\) denote a realization of \(X\sim N(\mu,\sigma^{2})\). Given \(\epsilon_{1},\epsilon_{2},\epsilon_{3}\in(0,1)\) with \(\epsilon_{1}+\epsilon_{2}+\epsilon_{3}=1\), we first draw \(X^{(1)}\mid X\sim\mathrm{N}\left(\epsilon_{1}X,\epsilon_{1}(1-\epsilon_{1}) \sigma^{2}\right)\). Let \(X^{(2,3)}=X-X^{(1)}\). By Theorem 1, \(\left(X^{(1)},X^{(2,3)}\right)\sim N\left(\epsilon_{1}\mu,\epsilon_{1}\sigma^ {2}\right)\times N\left((1-\epsilon_{1})\mu,(1-\epsilon_{1})\sigma^{2}\right)\)._
_We next draw \(X^{(2)}\mid X^{(2,3)}\sim\mathrm{N}\left(\frac{\epsilon_{2}}{1-\epsilon_{1}}X ^{(2,3)},\frac{\epsilon_{2}}{1-\epsilon_{1}}(1-\frac{\epsilon_{2}}{1-\epsilon _{1}})(1-\epsilon_{1})\sigma^{2}\right)\), and let \(X^{(3)}=X-X^{(1)}-X^{(2)}\). By Theorem 1, \(\left(X^{(2)},X^{(3)}\right)\sim N(\epsilon_{2}\mu,\epsilon_{2}\sigma^{2}) \times N(\epsilon_{3}\mu,\epsilon_{3}\sigma^{2})\)._
_Furthermore, since \(\left(X^{(2)},X^{(3)}\right)\) is a function of \(X^{(2,3)}\), the pair \(\left(X^{(2)},X^{(3)}\right)\) remains independent of \(X^{(1)}\). Thus, \(\left(X^{(1)},X^{(2)},X^{(3)}\right)\sim N\left(\epsilon_{1}\mu,\epsilon_{1} \sigma^{2}\right)\times N(\epsilon_{2}\mu,\epsilon_{2}\sigma^{2})\times N( \epsilon_{3}\mu,\epsilon_{3}\sigma^{2})\)._
While Example 3.1 can be extended to create \(M>3\) folds, this recursive approach can be cumbersome. In Example 1.1 of Section 1, we saw that, for the gamma distribution, there is a simple way to create multiple folds without recursion. We will now provide a general form of this result. Let \(G_{\lambda_{1},\lambda_{2},\ldots,\lambda_{M},x}\) denote the joint distribution of \((X_{1},\ldots,X_{M})\ |\ X_{1}+X_{2}+\ldots+X_{M}=x\), where \(X_{m}\stackrel{{\text{ind.}}}{{\sim}}F_{\lambda_{m}}\), for \(m=1,\ldots,M\), and where \(F_{\lambda}\) is a convolution-closed distribution. The following algorithm and theorem mimic Algorithm 1 and Theorem 1.
**Algorithm 2** (Multifold data thinning).: _Observe a realization \(x\) of \(X\sim F_{\lambda}\), where \(F_{\lambda}\) is a convolution-closed distribution with parameter space \(\Lambda\). First, choose \(\epsilon_{1},\ldots,\epsilon_{M}\in(0,1)\) such that \(\sum_{m=1}^{M}\epsilon_{m}=1\) and \(\epsilon_{m}\lambda\in\Lambda\) for \(m=1,\ldots M\). Then, draw \(\left(X^{(1)},\ldots,X^{(M)}\right)\sim G_{\epsilon_{1}\lambda,\epsilon_{2} \lambda,\ldots,\epsilon_{M}\lambda,x}\)._
**Theorem 2**.: _Suppose we apply Algorithm 2 to a realization \(x\) of \(X\sim F_{\lambda}\), for a convolution-closed distribution \(F_{\lambda}\). Then, the following results hold: (i) \(X^{(m)}\sim F_{\epsilon_{m}\lambda}\) for \(m=1,\ldots,M\); (ii) \(X^{(1)},\ldots,X^{(M)}\) are mutually independent; (iii) \(X^{(1)}+X^{(2)}+\cdots+X^{(M)}=X\); and (iv) if \(F_{\lambda}\) satisfies the linear expectation property (Remark 1), then \(\text{E}[X^{(m)}]=\epsilon_{m}\,\text{E}[X]\) for \(m=1,\ldots,M\)._
The proof of Theorem 2 is included in Section B.1, and is a straightforward extension of that of Theorem 1. The intuition for parts (i)-(iii) is as follows: we know that \(X\sim F_{\lambda}\) could have arisen as the sum of \(M\) mutually independent random variables \(X_{1},\ldots,X_{M}\) such that \(X_{m}\sim F_{\epsilon_{m}\lambda}\). If we draw \(\left(X^{(1)},\ldots,X^{(M)}\right)|X=x\sim G_{\epsilon_{1}\lambda,\epsilon_{ 2}\lambda,\ldots,\epsilon_{M}\lambda,x}\), then the joint distribution of \(\left(X^{(1)},\ldots,X^{(M)}\right)\) equals the joint distribution of \((X_{1},\ldots,X_{M})\), i.e. it is the joint distribution of \(M\) independent random variables with distributions \(F_{\epsilon_{1}\lambda},\ldots,F_{\epsilon_{M}\lambda}\). Part (iv) follows directly from Remark 1. We now revisit the case of the normal distribution from Example 3.1.
**Example 3.2** (Multifold thinning of the normal distribution).: _Let \(X\sim N(\mu,\sigma^{2})\) and let \(\epsilon_{1},\epsilon_{2},\epsilon_{3}>0\) with \(\sum_{i=1}^{3}\epsilon_{i}=1\). To generate \(M=3\) independent folds of the data, we draw_
\[\begin{bmatrix}X^{(1)}\\ X^{(2)}\\ X^{(3)}\end{bmatrix}\mid X=x\sim N\left(\begin{bmatrix}\epsilon_{1}x\\ \epsilon_{2}x\\ \epsilon_{3}x\end{bmatrix},\begin{bmatrix}\epsilon_{1}(1-\epsilon_{1})\sigma^ {2}&-\epsilon_{1}\epsilon_{2}\sigma^{2}&-\epsilon_{1}\epsilon_{3}\sigma^{2} \\ -\epsilon_{1}\epsilon_{2}\sigma^{2}&\epsilon_{2}(1-\epsilon_{2})\sigma^{2}&- \epsilon_{2}\epsilon_{3}\sigma^{2}\\ -\epsilon_{1}\epsilon_{3}\sigma^{2}&-\epsilon_{2}\epsilon_{3}\sigma^{2}& \epsilon_{3}(1-\epsilon_{3})\sigma^{2}\end{bmatrix}\right).\]
_One can verify that this multivariate normal corresponds to \(G_{\epsilon_{1}\lambda,\epsilon_{2}\lambda,\epsilon_{3}\lambda,x}\). By Theorem 2, \(X^{(1)},X^{(2)},\) and \(X^{(3)}\) are independent and \(X^{(m)}\sim N(\epsilon_{m}\mu,\epsilon_{m}\sigma^{2})\) for \(m=1,2,3\). This distribution \(G_{\epsilon_{1}\lambda,\epsilon_{2}\lambda,\epsilon_{3}\lambda,x}\) is a degenerate multivariate normal distribution, which enforces the constraint that the realized values of \(X^{(1)},X^{(2)}\), and \(X^{(3)}\) sum to \(x\)._
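As a sampling note, the degenerate Gaussian of Example 3.2 can be drawn without forming the singular covariance matrix. The recentering construction in the sketch below (NumPy assumed; the construction is our own device, not taken from the text) reproduces the displayed conditional mean and covariance, and hence the stated fold distributions:

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma = 1.0, 2.0
eps = np.array([0.2, 0.3, 0.5])
n = 200_000

x = rng.normal(mu, sigma, size=n)
# Draw W_m ~ N(0, eps_m * sigma^2) independently, then recenter so the folds sum
# to x; this reproduces the mean and singular covariance displayed in Example 3.2.
W = rng.normal(0.0, sigma * np.sqrt(eps), size=(n, 3))
folds = eps * x[:, None] + W - np.outer(W.sum(axis=1), eps)

print(np.abs(folds.sum(axis=1) - x).max())   # folds sum to x (up to float error)
print(folds.mean(axis=0), eps * mu)          # X^(m) ~ N(eps_m * mu, eps_m * sigma^2)
print(np.cov(folds, rowvar=False))           # ~ sigma^2 diag(eps): independent folds
```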
Table 3 reveals that \(G_{\epsilon_{1}\lambda,\epsilon_{2}\lambda,\ldots,\epsilon_{M}\lambda,x}\) in Algorithm 2 has a very simple form for every univariate distribution in Table 2. We omit the multivariate distributions to avoid cumbersome notation.
The parameters \(\epsilon_{1},\ldots,\epsilon_{M}\) in Algorithm 2 govern the same information tradeoff that was described in Section 2.3. Folds with the largest values of \(\epsilon_{m}\) contain the most information about the parameter of interest. We now consider the following extension of Example 2.1.
| Distribution of \(X\) | Generate \((X^{(1)},\ldots,X^{(M)})\mid X=x\) as: | Dist. of \(X^{(m)}\) |
| --- | --- | --- |
| \(\text{Poisson}(\lambda)\) | \((X^{(1)},\ldots,X^{(M)})\mid X=x\sim\text{Multinomial}(x,\epsilon_{1},\ldots,\epsilon_{M})\). | \(\text{Poisson}(\epsilon_{m}\lambda)\) |
| \(\text{N}(\mu,\sigma^{2})\) | \((X^{(1)},\ldots,X^{(M)})\mid X=x\sim\text{N}\left(x\epsilon,\sigma^{2}\text{diag}(\epsilon)-\sigma^{2}\epsilon\epsilon^{T}\right)\). | \(\text{N}(\epsilon_{m}\mu,\epsilon_{m}\sigma^{2})\) |
| \(\text{NegativeBinomial}(r,p)\) | \((X^{(1)},\ldots,X^{(M)})\mid X=x\sim\text{DirichletMultinomial}(x,\epsilon_{1}r,\ldots,\epsilon_{M}r)\). | \(\text{NegativeBinomial}(\epsilon_{m}r,p)\) |
| \(\text{Gamma}(\alpha,\beta)\) | Draw \(Z\sim\text{Dirichlet}(\epsilon_{1}\alpha,\ldots,\epsilon_{M}\alpha)\), and let \((X^{(1)},\ldots,X^{(M)})=x\cdot Z\). | \(\text{Gamma}(\epsilon_{m}\alpha,\beta)\) |
| \(\text{Gamma}(1,\lambda)\) | Draw \(Z\sim\text{Dirichlet}(\epsilon_{1},\ldots,\epsilon_{M})\), and let \((X^{(1)},\ldots,X^{(M)})=x\cdot Z\). | \(\text{Gamma}(\epsilon_{m},\lambda)\) |
| \(\text{Binomial}(r,p)\) | \((X^{(1)},\ldots,X^{(M)})\mid X=x\sim\text{MultivariateHypergeometric}(\epsilon_{1}r,\ldots,\epsilon_{M}r,x)\). | \(\text{Binomial}(\epsilon_{m}r,p)\) |

Table 3: Details of how to perform multifold data thinning (Algorithm 2) for several common univariate distributions, where \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{M})^{T}\). In the decomposition of the binomial distribution, each \(\epsilon_{m}r\) must be an integer. Each row can be verified using properties of these distributions.

**Example 3.3** (Cross validation for unsupervised learning using multifold thinning).: _In the setting of Example 2.1, we apply Algorithm 2 with parameters \(\epsilon_{1},\ldots,\epsilon_{M}\) to either each element or each row of \(X\) such that each element \(X_{ij}\) is thinned into \(X_{ij}^{(1)},\ldots,X_{ij}^{(M)}\)._
_Then, for \(m=1,\ldots,M\), we first define \(X^{(-m)}:=X-X^{(m)}\), and apply unsupervised learning to \(X^{(-m)}\) to obtain \(\hat{\mu}\left(X^{(-m)}\right)\), which is an estimator of \(\text{E}[X^{(-m)}]=(1-\epsilon_{m})\,\text{E}[X]\) (step 1). We then compute a loss function between \(\hat{\mu}\left(X^{(-m)}\right)\) and \(X^{(m)}\) (step 2). For example, as in Example 2.4, we can compute the mean squared error between \(\frac{\epsilon_{m}}{1-\epsilon_{m}}\hat{\mu}(X^{(-m)})\) and \(X^{(m)}\). We evaluate the estimator \(\hat{\mu}(\cdot)\) by averaging the loss across folds._
For simplicity, we suggest setting \(\epsilon_{1}=\ldots=\epsilon_{M}=1/M\). The advantage of multifold thinning (Example 3.3) over a single application of data thinning with a test fold of relative size \(1/M\) (Example 2.1) is a reduction in the variance of the loss function via averaging. We will demonstrate the practical advantages of multifold thinning in Section 4.
## 4 Simulation study
### 4.1 Simulation setup
Data thinning can be applied to a variety of settings, including the inference after model selection problems considered in Leiner et al. (2022), and to cross validation for supervised learning. Here, we focus on the application of data thinning to cross-validation for unsupervised learning. Data thinning is particularly attractive in this setting, as a traditional sample splitting approach is not suitable; see Owen and Perry (2009) or Fu and Perry (2020) for further discussion. In what follows, we apply data thinning and multifold thinning to unsupervised learning problems, and contrast their performance against naive approaches that use the same data to both fit and validate the unsupervised models. Specifically, we consider Examples 4.1 and 4.2.
**Example 4.1** (Choosing the number of principal components on binomial data).: _We generate data with \(n=250\) observations and \(d=100\) dimensions. Specifically, for \(i=1,\ldots,n\) and \(j=1,\ldots,d\), we generate \(X_{ij}\stackrel{{\mathrm{ind.}}}{{\sim}}\text{Binomial}(r,p_{ij})\) where \(r=100\) and \(p\) is an unknown \(n\times d\) matrix of probabilities. We construct \(\text{logit}(p)\) as a rank-\(K^{*}=10\) matrix with singular values \(5,6,\ldots,14\). Additional details are provided in Section D. Our goal is to estimate \(K^{*}\)._
**Example 4.2** (Choosing the number of clusters on gamma data).: _We generate datasets \(X\in\mathbb{R}^{n\times d}\) such that there are 100 observations in each of \(K^{*}\) clusters, for a total of \(n=100K^{*}\) observations. Our objective is to estimate \(K^{*}\). We let \(X_{ij}\stackrel{{\mathrm{ind.}}}{{\sim}}\text{Gamma}(\lambda, \theta_{c_{i},j})\), for \(i=1,\ldots,n\) and \(j=1,\ldots,d\), where \(c_{i}\in\{1,2,\ldots,K^{*}\}\) indexes the true cluster membership of the \(i\)th observation. The shape parameter \(\lambda\) is a known constant common across all clusters and all dimensions, whereas the rate parameter \(\theta\) is an unknown \(K^{*}\times d\) matrix such that each cluster has its own \(d\)-dimensional rate parameter. We generate data under two regimes: (1) a small \(d\), small \(K^{*}\) regime in which \(d=2\) and \(K^{*}=4\), and (2) a large \(d\), large \(K^{*}\) regime in which \(d=100\) and \(K^{*}=10\). The values of \(\lambda\) and \(\theta\) are provided in Section D. A sample "small \(d\), small \(K^{*}\)" dataset is presented in Figure 2, alongside the output of data thinning with \(\epsilon=0.5\)._
Figure 2: _Left_: A simulated dataset in the \(d=2\), \(K^{*}=4\) setting described in Example 4.2. _Center/Right_: The result of data thinning with \(\epsilon=0.5\).
### 4.2 Methods
We use Algorithm 3 to select the number of principal components in binomial data, as in Example 4.1, using data thinning.
**Algorithm 3** (Evaluating binomial principal components with negative log-likelihood loss).: _Input: A positive integer \(K\), a matrix \(X\in\mathbb{Z}_{[0,r]}^{n\times d}\), where \(X_{ij}\stackrel{{\mathrm{ind.}}}{{\sim}}\mathrm{Binomial}(r,p_{ij})\), and positive scalars \(\epsilon^{\mathrm{(train)}}\) and \(\epsilon^{\mathrm{(test)}}=1-\epsilon^{\mathrm{(train)}}\) such that \(\epsilon^{\mathrm{(train)}}r,\epsilon^{\mathrm{(test)}}r\in\mathbb{Z}_{>0}\)._
1. _Apply data thinning to_ \(X\) _to obtain_ \(X^{\mathrm{(train)}}\) _and_ \(X^{\mathrm{(test)}}\)_, where_ \(X^{\mathrm{(train)}}_{ij}\stackrel{{\mathrm{ind.}}}{{\sim}} \mathrm{Binomial}\left(\epsilon^{\mathrm{(train)}}r,p_{ij}\right)\) _and_ \(X^{\mathrm{(test)}}_{ij}\stackrel{{\mathrm{ind.}}}{{\sim}} \mathrm{Binomial}\left(\epsilon^{\mathrm{(test)}}r,p_{ij}\right)\)__
2. _Compute the singular value decomposition of the log-odds of_ \(X^{\mathrm{(train)}}\)_,_ \[\mathrm{logit}\left\{(X^{\mathrm{(train)}}_{ij}+0.001)/(\epsilon^{\mathrm{( train)}}r+0.002)\right\}=\hat{U}\hat{D}\hat{V}^{T}.\] _Pseudo-counts prevent taking the logit of 0 or 1._
3. _Construct the rank-_\(K\) _approximation of_ \(X^{\mathrm{(train)}}\)_,_ \(p^{(K)}=\mathrm{expit}\left(\hat{U}_{1:K}\hat{D}_{1:K}\hat{V}_{1:K}^{T}\right)\)_._
4. _Compute the negative log-likelihood loss on_ \(X^{\mathrm{(test)}}\) _as_ \(-\sum_{i=1}^{n}\sum_{j=1}^{d}\log f\left(X^{\mathrm{(test)}}_{ij}\Big{|} \epsilon^{\mathrm{(test)}}r,p^{(K)}_{ij}\right),\) _where_ \(f(\cdot\mid r,p)\) _is the density function for the_ \(\mathrm{Binomial}(r,p)\) _distribution._
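A compact sketch of Algorithm 3 in Python (assuming NumPy and SciPy; `thin_binomial` and `binomial_pca_loss` are our own names, and the clamping of the hypergeometric sample size is a guard we added for the \(x=0\) case):

```python
import numpy as np
from scipy.stats import binom
from scipy.special import logit, expit

def thin_binomial(X, r, eps_train, rng):
    # Table 2: X^(train)_ij | X_ij = x ~ Hypergeometric(eps*r, (1-eps)*r, x).
    # Clamping nsample to 1 guards the x = 0 case (that draw is then discarded).
    n_tr = int(eps_train * r)
    draw = rng.hypergeometric(n_tr, r - n_tr, np.maximum(X, 1))
    X_tr = np.where(X > 0, draw, 0)
    return X_tr, X - X_tr

def binomial_pca_loss(X, r, K, eps_train=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n_tr = int(eps_train * r)
    n_te = r - n_tr
    X_tr, X_te = thin_binomial(X, r, eps_train, rng)       # step 1
    L = logit((X_tr + 0.001) / (n_tr + 0.002))             # step 2, with pseudo-counts
    U, D, Vt = np.linalg.svd(L, full_matrices=False)
    p_K = expit(U[:, :K] * D[:K] @ Vt[:K])                 # step 3: rank-K fit
    return -binom.logpmf(X_te, n_te, p_K).sum()            # step 4: test-fold NLL

# e.g. losses = {K: binomial_pca_loss(X, r=100, K=K) for K in range(1, 21)}; pick the argmin
```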
We use Algorithm 4 to select the number of clusters in gamma data, as in Example 4.2, using data thinning.
**Algorithm 4** (Evaluating gamma clusters with negative log-likelihood loss).: _Input: A positive integer \(K\), and \(X\in\mathbb{R}_{>0}^{n\times d}\) where \(X_{ij}\stackrel{{\mathrm{ind.}}}{{\sim}}\mathrm{Gamma}\left(\lambda,\theta_{c_{i},j}\right)\). Here, \(\theta\in(0,\infty)^{K^{*}\times d}\), where \(\theta_{c_{i},j}\) is the true but unknown rate parameter for the \(c_{i}\)th cluster in the \(j\)th dimension, \(c_{i}\in\{1,2,\ldots,K^{*}\}\), and \(\lambda\) is the known shape parameter. Also input: positive scalars \(\epsilon^{\rm(train)}\) and \(\epsilon^{\rm(test)}=1-\epsilon^{\rm(train)}\)._
1. _Apply data thinning to_ \(X\) _to obtain_ \(X^{\rm(train)}\) _and_ \(X^{\rm(test)}\)_, where_ \(X^{\rm(train)}_{ij}\stackrel{{\rm ind.}}{{\sim}}\mathrm{Gamma} \left(\epsilon^{\rm(train)}\lambda,\theta_{c_{i},j}\right)\) _and_ \(X^{\rm(test)}_{ij}\stackrel{{\rm ind.}}{{\sim}}\mathrm{Gamma} \left(\epsilon^{\rm(test)}\lambda,\theta_{c_{i},j}\right)\)_._
2. _Run_ \(K\)_-means on_ \(X^{\rm(train)}\) _to estimate_ \(K\) _clusters. Denote the cluster assignment of the_ \(i\)_th observation as_ \(\hat{c}_{i}\)_._
3. _Within each cluster, estimate the parameters using_ \(X^{\rm(train)}\) _[3, 24]. Let_ \(\hat{\lambda}^{(K)}\) _and_ \(\hat{\theta}^{(K)}\) _denote the_ \(K\times d\) _estimated parameter matrices._
4. _Compute the loss on_ \(X^{\rm(test)}\) _as_ \(-\sum_{i=1}^{n}\sum_{j=1}^{d}\log f\left(X^{\rm(test)}_{ij}\Big{|}\hat{\lambda }^{(K)}_{\hat{c}_{i},j}\epsilon^{\rm(test)}/\epsilon^{\rm(train)},\hat{\theta }^{(K)}_{\hat{c}_{i},j}\right),\) _where_ \(f(\cdot\mid\lambda,\theta)\) _is the density function for the_ \(\mathrm{Gamma}(\lambda,\theta)\) _distribution._
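A similar sketch of Algorithm 4 (NumPy/SciPy assumed; function names ours). We use `scipy.cluster.vq.kmeans2` for step 2 and simple method-of-moments estimates in step 3 as a stand-in for the estimator cited there, and we ignore edge cases such as empty clusters:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist
from scipy.cluster.vq import kmeans2

def thin_gamma(X, lam, eps_train, rng):
    # Table 2: X^(train) = X * Z with Z ~ Beta(eps*lambda, (1-eps)*lambda), elementwise
    Z = rng.beta(eps_train * lam, (1.0 - eps_train) * lam, size=X.shape)
    return X * Z, X * (1.0 - Z)

def gamma_cluster_loss(X, lam, K, eps_train=0.8, seed=0):
    rng = np.random.default_rng(seed)
    X_tr, X_te = thin_gamma(X, lam, eps_train, rng)         # step 1
    _, labels = kmeans2(X_tr, K, minit="++")                # step 2
    loss = 0.0
    for k in range(K):                                      # steps 3-4, per cluster
        rows = X_tr[labels == k]
        m, v = rows.mean(axis=0), rows.var(axis=0)
        lam_hat, theta_hat = m**2 / v, m / v                # method-of-moments estimates
        shape_te = lam_hat * (1.0 - eps_train) / eps_train  # rescale shape for the test fold
        loss -= gamma_dist.logpdf(X_te[labels == k],
                                  a=shape_te, scale=1.0 / theta_hat).sum()
    return loss
```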
We apply Algorithms 3 and 4 in three different ways. First, we apply them without modification, with \(\epsilon^{\rm(train)}=0.5\) and \(\epsilon^{\rm(train)}=0.8\). Next, we slightly modify these algorithms by replacing step 1 with multi-fold thinning (Algorithm 2) with \(M=5\) and \(\epsilon_{1}=\cdots=\epsilon_{M}=0.2\). For \(m=1,\ldots,M\), we then perform steps 2-4 using \(X^{\rm(train)}=X-X^{(m)}\), \(\epsilon^{\rm(train)}=(M-1)/M\) and \(X^{\rm(test)}=X^{(m)}\), \(\epsilon^{\rm(test)}=1/M\). We then average the loss functions obtained across the \(M\) applications of step 4. Finally, we consider a naive method that re-uses data, by skipping step 1, and simply taking \(X^{\rm(train)}=X^{\rm(test)}=X\) in steps 2-4 and \(\epsilon^{\rm(train)}=\epsilon^{\rm(test)}=1\) in step 4.
Our goal is to select the value of \(K\) that minimizes the loss function. Because data thinning produces independent training and test sets, we expect that the data thinning approaches will produce U-shaped loss function curves, as a function of \(K\). By contrast, in the naive approach, the full data \(X\) is used to fit the model and to compute the loss functions in Algorithms 3 and 4, resulting in monotonically decreasing loss curves, as a function of \(K\).
Other loss functions can be used in lieu of the negative log-likelihood loss in Algorithms 3 and 4. In Section E of the supplementary materials, we extend Algorithms 3 and 4 to the case of mean squared error loss, and show similar results.
### 4.3 Results
Figure 3 displays the loss function for all three simulation settings as a function of \(K\); results have been averaged over \(2,000\) simulated datasets and rescaled to the \([0,1]\) interval for ease of comparison. The values of \(K\) with the lowest average loss function are circled on the plots. As expected, the data thinning approaches in Figure 3 exhibit sharp minimum values, as opposed to the monotonically decreasing curves produced by the naive method. The data thinning approaches correctly select the true value of \(K=K^{*}\) in all three settings, except for data thinning with \(\epsilon^{\rm(train)}=0.5\) in the binomial principal components setting. In that case, the low value of \(\epsilon^{\rm(train)}\) allocates too much information to the test set, resulting in inadequate signal from the weakest principal components in the training set. Selecting a larger value of \(\epsilon^{\rm(train)}\) remedies this issue, as seen with \(\epsilon^{\rm(train)}=0.8\).
We further investigate the role of \(\epsilon^{\rm(train)}\) by repeating the simulation study using different values of \(\epsilon^{\rm(train)}\) for single-fold data thinning. In Figure 4, we plot the proportion of simulations that select the correct value of \(K^{*}\) (i.e. the proportion of simulations in which the loss function is minimized at \(K=K^{*}\)) in each of the three settings, as a function of \(\epsilon^{\rm(train)}\). We find that in the gamma clustering simulations, lower values of \(\epsilon^{\rm(train)}\) are adequate. However, settings with weaker signal, such as the binomial principal components example, require larger values of \(\epsilon^{\rm(train)}\) to identify the true latent structure. In all settings, as \(\epsilon^{\text{(train)}}\) approaches 1, performance begins to decay. This is a consequence of inadequate information remaining in the test set under large values of \(\epsilon^{\text{(train)}}\), and is consistent with the discussion of Section 2.3. These findings suggest that in practice, the optimal value of \(\epsilon^{\text{(train)}}\) is context-dependent.
Figure 3: The negative log-likelihood loss averaged over 2,000 simulated data sets, as a function of \(K\), for the naive method (purple), data thinning with \(\epsilon^{\text{(train)}}=0.5\) (red), data thinning with \(\epsilon^{\text{(train)}}=0.8\) (blue), and multifold thinning with \(M=5\) folds (green). Each curve has been rescaled to take on values between 0 and 1, for ease of comparison. The minimum loss values for each method are circled, and \(K^{*}\) is indicated by the vertical black line.

Figure 4: The proportion of simulations for which data thinning selects the true value of \(K^{*}\) with the negative log-likelihood loss, as a function of \(\epsilon^{\text{(train)}}\), for the simulation study described in Section 4.1. The optimal value of \(\epsilon^{\text{(train)}}\) depends on the problem at hand.

Finally, we examine the benefits of multifold data thinning over single-fold data thinning. Figure 5 displays histograms of the number of simulations that select each value of \(K\). Here we only include data thinning with \(\epsilon^{\text{(train)}}=0.8\) and multifold thinning with \(M=5\), so that both methods use the same allocation of information between training and test sets. We see that multifold thinning generally selects the correct value of \(K\) more often than single-fold data thinning, mirroring the improvement of \(M\)-fold cross-validation using sample splitting over single-fold sample splitting in supervised settings. However, in the large gamma setting, the signal is strong enough that multifold thinning does not provide a benefit over single-fold thinning.

Figure 5: The proportion of simulated data sets in which each candidate value of \(K\) is selected, with the negative log-likelihood loss, under data thinning with \(\epsilon^{\text{(train)}}=0.8\) (blue) and multifold thinning with \(M=5\) (green), for each of the simulation settings described in Section 4.1. The true value of \(K^{*}\) is indicated by the vertical black line. Multifold thinning tends to select the true value of \(K\) more often than single-fold thinning.
## 5 Selecting the number of principal components in gene expression data
In this section, we revisit an analysis of a dataset from a single-cell RNA sequencing experiment conducted on a set of peripheral blood mononuclear cells. The dataset is freely available from 10X Genomics, and was previously analyzed in the "Guided Clustering Tutorial" vignette (Hoffman et al., 2022) for the popular R package Seurat(Hao et al., 2021; Stuart et al., 2019; Satija et al., 2015).
The dataset \(X\) is a sparse matrix of non-negative integers, representing counts from \(32,738\) genes in each of \(2,700\) cells. We consider applying principal components analysis to learn a low-dimensional representation of the data. In the Seurat vignette, filtering, normalization, log-transformation, feature selection, centering, and scaling are applied to the data, yielding a transformed matrix \(\tilde{Y}\in\mathbb{R}^{2638\times 2000}\). Details are provided in Section F of the supplementary materials. Finally, the singular value decomposition of \(\tilde{Y}\) is computed, such that \(\tilde{Y}=UDV^{T}\). Here we let \(U_{k}\) represent the \(k\)th column of the matrix \(U\), and let \(U_{1:K}D_{1:K}V_{1:K}^{T}\) represent the rank-\(K\) approximation of \(\tilde{Y}\).
Our goal is to select the number of dimensions to use in this low-rank approximation. In the Seurat vignette, the authors rely on heuristic solutions such as looking for an elbow in the plot of the standard deviation of \(U_{K}D_{K}\) as a function of \(K\); see Figure 6(a) (James et al., 2013). Based on the elbow plot, the authors suggest retaining around \(7\) principal components. Other heuristic approaches suggest as many as \(12\) principal components.
Before introducing the data thinning solution, we introduce a squared-error based formulation that is mathematically equivalent to the traditional elbow plot (see Section F), but will facilitate a direct comparison with data thinning. For \(K=1,\ldots,20\), we compute the sum of squared errors between the matrix \(\tilde{Y}\) and its rank-\(K\) approximation:
\[\left\|\tilde{Y}-U_{1:K}D_{1:K}V_{1:K}^{T}\right\|_{F}^{2}.\]
Because the low-rank approximation \(U_{1:K}D_{1:K}V_{1:K}^{T}\) is computed using \(\tilde{Y}\), this loss function monotonically decreases with \(K\). A heuristic solution for deciding how many principal components to retain involves looking for the point at which the slope of the curve in Figure 6(b) begins to flatten. While this appears to happen around \(5\)-\(7\) principal components, which is consistent with the finding from Figure 6(a), the exact number of principal components to retain is still unclear. We now show that data thinning provides a principled approach for estimating the number of principal components.
Single-cell RNA-sequencing data are often modeled as independent Poisson random variables (Wang et al., 2018; Sarkar and Stephens, 2021). Thus, we assume that \(X_{ij}\sim\text{Poisson}(\Lambda_{ij})\). Starting with the raw data matrix \(X\in\mathbb{Z}_{\geq 0}^{2700\times 32738}\), we perform Poisson data thinning with \(\epsilon=0.5\) to obtain a training set \(X^{(1)}\) and a test set \(X^{(2)}\), which are independent if the Poisson assumption holds. Furthermore, as \(\epsilon=0.5\), they are identically distributed. We then carry out the data processing described in Section F on \(X^{(1)}\) to obtain \(\tilde{Y}^{(1)}\in\mathbb{R}^{2638\times 2000}\). We obtain \(\tilde{Y}^{(2)}\in\mathbb{R}^{2638\times 2000}\) by applying the same data processing steps to \(X^{(2)}\), but retaining only the features that were selected on \(X^{(1)}\), so that the rows and columns of \(\tilde{Y}^{(1)}\) and \(\tilde{Y}^{(2)}\) correspond to the same genes and cells. Details are in Section F.
We compute the singular value decomposition on the training set, \(\tilde{Y}^{(1)}=U^{(1)}D^{(1)}(V^{(1)})^{T}\). For a range of values of \(K\), we then compute the sum of squared errors between \(\tilde{Y}^{(2)}\) and \(U^{(1)}_{1:K}D^{(1)}_{1:K}(V^{(1)}_{1:K})^{T}\):
\[\left\|\tilde{Y}^{(2)}-U^{(1)}_{1:K}D^{(1)}_{1:K}(V^{(1)}_{1:K})^{T}\right\|_ {F}^{2}. \tag{1}\]
The results are shown in Figure 6(c). As we are not computing and evaluating the singular value decomposition using the same data, the plot of \(K\) vs. the loss function is not monotonically decreasing in \(K\). Instead, it reaches a clear minimum at \(K=7\), suggesting that the rank-7 approximation provides the best fit to the observed data. Thus, data thinning provides a simple and non-heuristic way to select the number of principal components.
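For concreteness, the computation surrounding the loss in (1) can be sketched as follows; this is a schematic Python illustration with synthetic matrices standing in for \(\tilde{Y}^{(1)}\) and \(\tilde{Y}^{(2)}\), not the Seurat pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins for the processed train/test matrices, sharing a rank-7 signal
signal = rng.normal(size=(300, 7)) @ rng.normal(size=(7, 200))
Y1 = signal + rng.normal(size=signal.shape)  # plays the role of Y~(1)
Y2 = signal + rng.normal(size=signal.shape)  # plays the role of Y~(2)

# singular value decomposition of the training matrix only
U, d, Vt = np.linalg.svd(Y1, full_matrices=False)

def thinning_loss(K):
    # squared Frobenius distance between Y2 and the rank-K fit from Y1, as in (1)
    approx = U[:, :K] * d[:K] @ Vt[:K, :]
    return float(np.sum((Y2 - approx) ** 2))

losses = [thinning_loss(K) for K in range(1, 21)]
K_hat = 1 + int(np.argmin(losses))  # unlike the elbow plot, this has a clear minimizer
```

Here `K_hat` plays the role of the minimizer shown in Figure 6(c).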
**Remark 4** (Choice of \(\epsilon\)).: _If we had chosen \(\epsilon\neq 0.5\), then while \(\text{E}[X^{(2)}]=(1-\epsilon)/\epsilon\,\text{E}[X^{(1)}]\), the relationship between \(\text{E}[\tilde{Y}^{(1)}]\) and \(\text{E}[\tilde{Y}^{(2)}]\) would depend on the details of the data processing described in Section 2.1, and the loss function in (1) would need to be modified accordingly._
**Remark 5** (Overdispersion).: _While we used a Poisson model for scRNA-seq data, there is evidence that a negative binomial model may be preferable in some settings. It is possible to modify the analysis in this section using negative binomial data thinning, as in Neufeld et al. (2023) (in preparation)._
## 6 Discussion
While we focused on applying data thinning to develop a version of cross-validation that is suitable for unsupervised learning, data thinning can also be applied in supervised settings, and may provide advantages over sample splitting even in settings where sample splitting is an option. For example, cross-validation via data thinning might be preferable to cross-validation via sample splitting in supervised settings if the sample size is small and we wish to avoid excluding high-leverage points from the training set (Leiner et al., 2022). It may also
Figure 6: Results for the data analysis in Section 5. _(a)_ An “elbow plot” of the standard deviation of the principal components, which reproduces the plot given in the Seurat guided clustering tutorial. _(b)_ Due to the relationship between the sum of squared errors and the standard deviation of the principal components (see Section F), looking for an elbow in (a) is equivalent to looking for an elbow in (b). _(c)_ The data-thinning version of (b), which shows a clear minimum in the loss function at 7 principal components.
have power advantages over approaches such as sample splitting or selective inference (Fithian et al., 2014) in problems such as inference after variable selection. Finally, it can be used to estimate a model's test set error for a range of distributions, along the lines of existing work for the normal distribution (Oliveira et al., 2021) and the Poisson distribution (Oliveira et al., 2022), which may be particularly useful in "fixed-X" regression settings.
In Section 2.4, we considered the impact of using the incorrect value of a nuisance parameter when performing data thinning, but we did not consider what happens when the nuisance parameter is estimated using the data itself. In future work, we will consider the theoretical and empirical implications of performing data thinning with an estimated nuisance parameter. Furthermore, we focused on convolution-closed distributions and thus used additive decompositions where \(X=X^{(1)}+X^{(2)}\). For distributions with bounded support, such as the beta distribution, non-additive decompositions are needed. We leave such decompositions to future work.
An R package implementing data thinning and scripts to reproduce the results in this paper are available at [https://anna-neufeld.github.io/datathin/](https://anna-neufeld.github.io/datathin/).
## 7 Acknowledgements
Anna Neufeld, Ameer Dharamshi, and Daniela Witten were supported by the Simons Foundation and the National Institutes of Health. Anna Neufeld and Daniela Witten were also supported by the Keck Foundation. Lucy Gao was supported by the Natural Sciences and Engineering Research Council of Canada. |
2308.13708 | Modular degree and a conjecture of Watkins | Given an elliptic curve $E/\mathbb{Q}$ of conductor $N$, there exists a
surjective morphism $\phi_E: X_0(N) \to E$ defined over $\mathbb{Q}$. In this
article, we discuss the growth of $\mathrm{deg}(\phi_E)$ and shed some light on
Watkins's conjecture, which predicts $2^{\mathrm{rank}(E(\mathbb{Q}))} \mid
\mathrm{deg}(\phi_E)$. Moreover, for any elliptic curve over $\mathbb{F}_q(T)$,
we have an analogous modular parametrization relating to the Drinfeld modular
curves. In this case, we also discuss growth and the divisibility properties. | Subham Bhakta, Srilakshmi Krishnamoorthy, Sunil Kumar Pasupulati | 2023-08-25T23:43:10Z | http://arxiv.org/abs/2308.13708v2 | # Modular degree and a conjecture of Watkins
###### Abstract.
Given an elliptic curve \(E/\mathbb{Q}\) of conductor \(N\), there exists a surjective morphism \(\phi_{E}:X_{0}(N)\to E\) defined over \(\mathbb{Q}\). In this article, we discuss the growth of \(\deg(\phi_{E})\) and shed some light on Watkins's conjecture, which predicts \(2^{\text{rank}(E(\mathbb{Q}))}\mid\deg(\phi_{E})\). Moreover, for any elliptic curve over \(\mathbb{F}_{q}(T)\), we have an analogous modular parametrization relating to the Drinfeld modular curves. In this case, we also discuss growth and the divisibility properties.
Key words and phrases: Elliptic curves, Modular forms, Galois representations, Drinfeld modules

2020 Mathematics Subject Classification: Primary 11F30, 11L07; Secondary 11F52, 11F80
###### Contents
* 1 Introduction
* 2 Tools to compute the modular degree
* 3 Divisibility and Watkins's conjecture
* 4 On the growth of modular degree
## 1. Introduction
Let \(X_{0}(N)\) be the moduli space of pairs \(\{(E,C_{N})\mid C_{N}\subseteq E\text{ a cyclic subgroup of order }N\}\). It is well known that \(X_{0}(N)(\mathbb{C})\) can be realized as the Riemann surface obtained from the standard action of the congruence subgroup \(\Gamma_{0}(N)\) on the upper half plane \(\mathbb{H}\). Now let \(E/\mathbb{Q}\) be an elliptic curve of conductor \(N\). We know that \(E\) is modular by the work of Wiles, Taylor-Wiles, et al. In other words, there exists a surjective morphism (defined over \(\mathbb{Q}\)) \(\phi_{E}:X_{0}(N)\to E\), which is called the _modular parametrization_ of \(E\). Throughout the article, we shall assume that \(\phi_{E}\) has minimal degree and sends the cusp \(\infty\) of \(X_{0}(N)\) to the identity of \(E\). We denote this minimal degree by \(m_{E}\), the modular degree of \(E\). The main goal of this article is to study the growth and some divisibility properties of \(m_{E}\). To be more precise, in Section 4, we discuss bounds for \(m_{E}\) in terms of the conductor \(N_{E}\) of \(E\). It is conjectured that \(m_{E}\ll N_{E}^{2+\varepsilon}\), and this is equivalent [27] to the ABC conjecture. We discuss this briefly in Section 4. In the same section, we record the conjectural bound for a positive proportion of elliptic curves in the following form.
**Theorem 1**.: _Let \(S\) be any finite set of primes. Then the proportion of elliptic curves that satisfy the conjectural degree bound is at least \(1-\sum_{p\not\in S}\frac{1}{p^{2}}\)._
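For instance (a numerical illustration of ours, not part of the statement): taking \(S=\{2,3,5,7\}\) and using the prime zeta value \(\sum_{p}p^{-2}\approx 0.4522\), the theorem yields a proportion of at least \(1-\left(0.4522-\frac{1}{4}-\frac{1}{9}-\frac{1}{25}-\frac{1}{49}\right)\approx 0.97\).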
One of the other main focuses of this article is to study the following predicted by Watkins.
**Conjecture 1** (Watkins).: _Let \(E/\mathbb{Q}\) be any elliptic curve and \(m_{E}\) be the modular degree. Then,_
\[2^{\operatorname{rank}(E(\mathbb{Q}))}\mid m_{E}.\]
_In other words, \(\operatorname{rank}_{\mathbb{Z}}(E(\mathbb{Q}))\leq\nu_{2}(m_{E})\)._
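As a first sanity check (a standard example, not specific to this article): the modular curve \(X_{0}(11)\) is itself an elliptic curve of conductor \(11\), so for \(E=X_{0}(11)\) the modular parametrization is the identity and \(m_{E}=1\); since \(\operatorname{rank}(E(\mathbb{Q}))=0\), the predicted divisibility \(2^{0}\mid 1\) holds trivially.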
It is not known whether the ranks of elliptic curves are uniformly bounded,1
Footnote 1: In 2006, Noam Elkies discovered an elliptic curve with a rank of at least 28,
\[y^{2}+xy+y=x^{3}-x^{2}-20067762415575526585033208209338542750930230312178956502x+34481611795030556467032985690390720374855944359319180361266008296291939448732243429.\]
At present, this is the highest known rank. For further information, please refer to A. Dujella. History of elliptic curves rank records, 2015.
with \(||f_{E}||_{N}\). In Section 3.5, we extend the study of twists to the cases when \(E(\mathbb{Q})[2]\) is trivial. In particular, we record in Theorem 7 that Watkins's conjecture is true for almost all twists under the assumption of the GRH and BSD conjectures.
Section 2 discusses several identities involving \(m_{E}\). Historically, they served as tools to study several arithmetic properties of \(m_{E}\). As already mentioned, the relation between \(m_{E}\) and \(||f_{E}||_{N}\) is recalled, and the consequences for the growth of \(m_{E}\) are discussed. Many resources are available on the related subjects, and no originality is claimed in this section; instead, it serves as a quick introduction to the subject for the reader and provides the notational setup for the article. In Section 4, we briefly discuss the growth of \(m_{E}\).
In Section 3.7, we undertake the study of the function field analog of Watkins's conjecture. Let \(p\) be a prime and \(k\) be a finite extension of \(\mathbb{F}_{p}\). We define \(A\) to be the polynomial ring \(k[T]\) and \(K=k(T)\) to be its field of fractions. Let \(E\) be an elliptic curve with conductor \(\mathfrak{n}_{E}=\mathfrak{n}\infty\), where \(\mathfrak{n}\) is an ideal in \(A\) and \(\infty\) denotes the place of \(K\) associated to \(T^{-1}\). We further assume that \(E\) is non-isotrivial, i.e., the \(j\)-invariant is not in \(k\), and that \(E\) has split multiplicative reduction at \(\infty\). Then we have an analog of the modular parametrization, i.e., a nonconstant morphism \(\phi_{E}:X_{0}(\mathfrak{n})\to E\), where \(X_{0}(\mathfrak{n})\) is the Drinfeld modular curve associated to \(\mathfrak{n}\); see Section 3.7 for the definition. In this scenario, Caro [7] showed that an analog of Watkins's conjecture holds when we view \(E\) over any extension \(k^{\prime}(T)\) of \(K\) such that \(k^{\prime}\) contains all roots of the finite part of \(\mathfrak{n}\). In this section, we point out an improvement over the imposed condition on \(k^{\prime}\), but at the cost of a few additional conditions on \(E\). We record this remark in Theorem 10. Furthermore, drawing the analogy with Section 3.4 and Section 3.5, we discuss Watkins's conjecture for the families of twists \(\{E^{(g)}\}\), for any polynomial \(g\) of even degree. We recall Caro's results and mention some possible improvements. Furthermore, in Section 4.1, we discuss the growth properties of the modular degree following [28]. It turns out that the modular degree is related to special values of the symmetric square \(L\)-function associated with \(E\), whose analytic properties can be studied because Grothendieck's theory of \(L\)-functions allows one to write the \(L\)-function as a quotient of two polynomials. We shall note that the unpleasant factor in the lower bound, given by the inseparable degree of the morphism \(j_{E}:\mathbb{P}^{1}\to\mathbb{P}^{1}\), can be removed for a certain proportion of elliptic curves.
### Notations
We write \(f\sim g\) for real-valued functions that are asymptotically equivalent, that is, \(\lim\limits_{x\to\infty}\frac{f(x)}{g(x)}=1\). We write \(f\ll g\) if \(|f|\leq c|g|\) for some absolute constant \(c\); we often write \(f=O(g)\) to denote the same. Moreover, we write \(f=o(g)\) when \(\lim\limits_{x\to\infty}\frac{|f(x)|}{|g(x)|}=0\). For any integer \(n\), we denote by \(\omega(n)\) the number of distinct prime factors of \(n\). We denote by \(\mathbb{Q}\) the field of rational numbers and by \(h\) the standard height on \(\mathbb{Q}\). For any prime \(p\), we denote by \(\nu_{p}(\cdot)\) the associated \(p\)-adic valuation on \(\mathbb{Q}\), by \(\mathbb{Q}_{p}\) the field of \(p\)-adic numbers, and by \(\mathbb{Z}_{p}\) the ring of \(p\)-adic integers. For any variety \(V\) over a field \(K\), we denote by \(V(K)\) the set of all \(K\)-rational points on \(V\).
## 2. Tools to compute the modular degree
Let \(E/\mathbb{Q}\) be an elliptic curve. This section briefly overviews the identities involving \(m_{E}\), which we shall use throughout the rest of the article.
### On the modular approach
The Faltings height \(h(E)\) is defined to be
\[h(E):=\frac{1}{12}\left(\log|\Delta_{E}|-\log|\Delta(\tau_{E})\mathrm{Im}(\tau_{E})^{6}|\right)-\log(2\pi),\]
where \(\tau_{E}\in\mathbb{H}\) corresponds to \(E\), \(\Delta(z)=q\prod_{n=1}^{\infty}(1-q^{n})^{24}\) is the Ramanujan \(\Delta\)-function, and \(j(z)\in q^{-1}+\mathbb{Z}[[q]]\) is the normalized modular \(j\)-function. For semistable \(E\), it turns out that \(h(E)\) is close to \(\frac{1}{12}h(j_{E})\). More precisely, \(|h(E)-\frac{1}{12}h(j_{E})|\ll\log(h(j_{E}))\). In general, an extra factor comes from the unstable discriminant \(\gamma\) of \(E\): if we write \(j_{E}=\frac{a}{b}\) in lowest terms, then \(\gamma\) is defined to be \(\frac{\Delta_{E}}{b}\). In the semistable case, we have \(\gamma=\pm 1\). In general, we have \(|h(E)-\frac{1}{12}h(j_{E})-\frac{1}{12}\log\gamma|\ll\log(h(j_{E}))\). These estimates are useful when \(E\) is in a nice enough Weierstrass form.
Let \(\omega_{E}\) be the invariant differential associated with \(E\). Given the modular parametrization \(\phi_{E}:X_{0}(N)\to E\), one can write \(\phi_{E}^{*}(\omega_{E})=2\pi ic_{E}f_{E}(z)dz\) for some integer \(c_{E}\). This constant, after adjusting the sign so that \(c_{E}>0\), is known as the Manin constant. It is believed that \(c_{E}=1\); this is true when \(E\) is semistable. In general, it is known [29, Theorem 1.3] that given any finite set of primes \(S\), we have \(c_{E}=O_{S}(1)\) for any elliptic curve \(E/\mathbb{Q}\) that is semistable outside \(S\), i.e., such that \(p^{2}\mid N_{E}\implies p\in S\).
Now let \(f\) be any cusp form of level \(N\). The Petersson norm \(||\cdot||\) is defined by \(||f||_{N}=\left(\int_{\Gamma_{0}(N)\backslash\mathbb{H}}|f(z)|^{2}\,dx\,dy\right)^{1/2}\). This can be computed explicitly by Rankin's method, which gives
\[||f||_{N}^{2}=\frac{1}{48\pi}[\text{PSL}_{2}(\mathbb{Z}):\Gamma_{0}(N)]\,\text{Res}_{s=2}\left(\sum_{n=1}^{\infty}\frac{|a(n)|^{2}}{n^{s}}\right).\]
It is then due to Deligne [10] that
\[m_{E}=4\pi^{2}c_{E}^{2}||f_{E}||_{N}^{2}e^{2h(E)}=\frac{4\pi^{2}c_{E}^{2}||f_{E}||_{N}^{2}}{\text{Vol}(E)}, \tag{1}\]
where we denote \(\text{Vol}(E)\) to be the area of the fundamental parallelogram associated with \(E\).
The analytic symmetric square \(L\)-function is defined to be \(L^{A}(\mathrm{Sym}^{2}E,s)=\prod_{p}L_{p}^{A}(\mathrm{Sym}^{2}E,s)\), where \(L_{p}^{A}(\mathrm{Sym}^{2}E,s)=(1-\alpha_{p}^{2}/p^{s})^{-1}(1-\alpha_{p}\beta_{p}/p^{s})^{-1}(1-\beta_{p}^{2}/p^{s})^{-1}\). Here \(\alpha_{p}=\overline{\beta_{p}}\), and \(\alpha_{p}+\beta_{p}\) is the trace of Frobenius at \(p\). Additionally, if \(p^{2}\mid N_{E}\), then \(\alpha_{p}=\beta_{p}=0\). For the description of \(\alpha_{p}\) and \(\beta_{p}\), please refer to [38, Section 2]. A significant connection between the symmetric square and the modular degree was established by Shimura [35]:
\[\frac{L^{A}(Sym^{2}E,2)}{\pi i\text{Vol}(E)}=\frac{m_{E}}{Nc_{E}^{2}}. \tag{2}\]
Unfortunately, \(L^{A}(\mathrm{Sym}^{2}E,s)\) does not satisfy a functional equation. To obtain one, we adjust \(L^{A}(\mathrm{Sym}^{2}E,s)\) by fudge factors and get the motivic \(L\)-function \(L^{M}(\mathrm{Sym}^{2}E,s)=L^{A}(\mathrm{Sym}^{2}E,s)U(s)\).
It is worthwhile to note that the local Euler factors \(L_{p}^{M}(Sym^{2}E,s)=L_{p}^{A}(Sym^{2}E,s)\) whenever \(p^{2}\nmid N_{E}\). If \(p^{2}\) divides the conductor then \(L_{p}^{A}(Sym^{2}E,s)=1\), which implies that \(L_{p}^{M}(Sym^{2}E,s)=U_{p}(s)\). From equation (2), we can derive the following expression:
\[\frac{L^{M}(\text{Sym}^{2}E,2)}{\pi i\text{Vol}(E)}=\frac{m_{E}}{Nc_{E}^{2}}\prod_{p^{2}\mid N}U_{p}(2). \tag{3}\]
### Gross's association and supersingular zeroes
In this section, we assume that \(N:=p\) is a prime. Denote by \(f_{E}\) the associated newform of weight \(2\) and level \(p\). Gross in [19] associated to \(f_{E}\) an element \(v_{E}\) of \(\operatorname{Pic}(\mathrm{X})\), for a suitable curve \(\mathrm{X}\). The curve \(\mathrm{X}\) is the reduction of \(X_{0}(p)\) modulo \(p\), which consists of two copies of \(X_{0}(1)\) meeting transversally at the points whose underlying elliptic curve is supersingular at \(p\). The Picard group \(\mathfrak{X}\) of \(\mathrm{X}\) can be written as \(\sum_{i=1}^{n}\mathbb{Z}\cdot e_{i}\), where \(n=g(X_{0}(p))+1\) and each \(e_{i}\) can be identified with an isomorphism class of supersingular elliptic curves over \(\overline{\mathbb{F}_{p}}\). There is a \(\mathbb{Z}\)-bilinear map
\[\langle-,-\rangle:\mathfrak{X}\times\mathfrak{X}\to\mathbb{Z},\]
given by \(\langle e_{i},e_{j}\rangle=w_{i}\delta_{i,j}\), where \(w_{i}=\frac{1}{2}\#\mathrm{Aut}(E_{i})\). Now consider the Hecke correspondence \(t_{m}:\mathfrak{X}\to\mathfrak{X}\) defined by
\[t_{m}(e_{i})=\sum_{j=1}^{n}B_{i,j}(m)e_{j},\]
where \(B_{i,j}(m)\) is the number of \(m\)-isogenies \(E_{i}\to E_{j}\). The transpose of the matrix \((B_{i,j}(m))_{1\leq i,j\leq n}\) is commonly known as the Brandt matrix.
Let \(v_{E}=\sum v_{E}(e_{i})e_{i}\in\mathfrak{X}\) be the vector associated to \(f_{E}\), i.e., \(t_{m}(v_{E})=a(m)v_{E}\), where \(a(m)\) is the \(m^{\mathrm{th}}\) Fourier coefficient of \(f_{E}\). We then have
\[m_{E}|E(\mathbb{Q})_{\mathrm{tor}}|=\langle v_{E},v_{E}\rangle=\sum_{i=1}^{n} v_{E}(e_{i})^{2}w_{i},\]
where the first equality is due to Mestre [26].
There is an explicit description of the \(w_{i}\). If \(p\equiv 1\pmod{4}\), all of the \(w_{i}\) are \(1\). If \(p\equiv 7\pmod{12}\), all \(w_{i}\) are \(1\) except at one index \(i\), for which \(j(e_{i})=0\) and \(w_{i}=3\). Otherwise, if \(p\equiv 11\pmod{12}\), all \(w_{i}\) are \(1\) except at two indices \(i,k\), whose \(j\)-values are respectively \(0\) and \(1728\), and whose \(w\)-values are respectively \(3\) and \(2\).
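To illustrate (a worked example of ours): for \(p=11\equiv 11\pmod{12}\), the genus of \(X_{0}(11)\) is \(1\), so \(n=2\), and the two supersingular classes are represented by \(j=0\) and \(j=1728\), with \(w\)-values \(3\) and \(2\). This is consistent with the Eichler mass formula
\[\sum_{i=1}^{n}\frac{1}{\#\mathrm{Aut}(E_{i})}=\frac{p-1}{24},\]
since \(\frac{1}{6}+\frac{1}{4}=\frac{10}{24}\).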
### Relations to congruence number \(r_{E}\)
Doi, Hida, Ribet, Mazur, and others investigated congruence primes [11, 32]; moreover, congruence primes were an essential component in Wiles's [40] proof of Fermat's last theorem.
**Definition 1**.: _Let \(f_{E}=\sum_{n>0}a_{n}q^{n}\in S_{2}\left(\Gamma_{0}(N)\right)\) be the newform linked to the elliptic curve \(E\). The congruence number \(r_{E}\) of \(E\) is the greatest integer such that there is a cusp form \(g(z)=\sum_{n>0}b_{n}q^{n}\) with integer Fourier coefficients \(b_{n}\), orthogonal to \(f_{E}\) with respect to the Petersson inner product, and congruent to \(f_{E}\) modulo \(r_{E}\) (i.e., \(a_{n}\equiv b_{n}\pmod{r_{E}}\) for all \(n\))._
The link between congruence numbers and the modular degree is quite intriguing. Ribet showed that \(m_{E}\mid r_{E}\). This was later extended by Agashe, Ribet, and Stein [2], who showed that if \(\nu_{p}(N)\leq 1\), then \(\nu_{p}(r_{E})=\nu_{p}(m_{E})\). One may now naturally ask the following.
**Question 1**.: _What are the prime factors of the quotient \(\frac{r_{E}}{m_{E}}\)?_
Agashe, Ribet, and Stein [2] establish that when the conductor of an elliptic curve is squarefree, i.e., when the curve is semistable, the values of \(r_{E}\) and \(m_{E}\) are equal. Abbes and Ullmo, in their work [1], have proven that if a prime number \(p\) does not divide the conductor \(N\), then \(\nu_{p}(m_{E})=\nu_{p}(r_{E})\). Agashe, Ribet, and Stein [2] conjectured that \(\nu_{p}\left(\frac{r_{E}}{m_{E}}\right)\leq\frac{1}{2}\nu_{p}(N)\); in other words, they suggest that the order of \(p\) dividing the ratio \(\frac{r_{E}}{m_{E}}\) is at most half the order of \(p\) dividing the conductor \(N\). They also proved that if \(p\) is a prime such that \(p\mid r_{E}\) but \(p\nmid m_{E}\), then \(\dim_{\mathbb{T}/\mathfrak{m}}J_{0}(N)[\mathfrak{m}]>2\), _i.e._, multiplicity one fails for \(\mathfrak{m}\). Here \(\mathbb{T}\) is the Hecke algebra acting on \(E\), and \(\mathfrak{m}\) is the annihilator of \(E[p]\) in \(\mathbb{T}\). The ideal \(\mathfrak{m}\) is said to satisfy multiplicity one if \(\dim_{\mathbb{T}/\mathfrak{m}}J_{0}(N)[\mathfrak{m}]=2\).
### Congruence number of modular Abelian varieties
We can define congruence numbers and modular degrees for modular abelian varieties, i.e., quotient varieties of the Jacobian of modular curves. For every \(f\in S_{2}(\Gamma_{0}(N))\), define \(I_{f}\) to be the ideal of the Hecke algebra \(\mathbb{T}\) acting on \(S_{2}(\Gamma_{0}(N))\) which annihilates \(f\), i.e., \(I_{f}=\{T\in\mathbb{T}\mid T(f)=0\}\). If \(f\in S_{2}(\Gamma_{0}(N))\) is the modular form associated with a modular abelian variety \(A\), then \(A\cong J_{0}(N)/I_{f}J_{0}(N)\). Let \(\phi_{1}:J_{0}(N)\to A\) be the quotient map. By Poincaré reducibility over \(\mathbb{Q}\), there exists a unique abelian subvariety \(A^{\vee}\) of \(J_{0}(N)\) that projects isogenously onto the quotient \(A\). Let \(\phi\) be the composite isogeny
\[\phi:A^{\vee}\xrightarrow{\iota}J_{0}(N)\xrightarrow{\phi_{1}}A,\]
where \(\iota\) denotes the inclusion.
**Definition 2**.: _The modular exponent of \(A\) is the exponent of the kernel of \(\phi\). The modular number of \(A\) is the degree of \(\phi\)._
When \(A\) is an elliptic curve, the modular exponent is equal to the modular degree, and the modular number is the square of the modular degree. For more details about the modular degree of abelian varieties and Watkins's conjecture for abelian varieties, we refer to the article by Dummigan and Krishnamoorthy [13]. In particular, [13, Corollary 9.5] discusses the power of \(2\) dividing the modular degree.
**Definition 3**.: _The congruence number of the abelian variety \(A=J_{0}(N)/I_{f}J_{0}(N)\) is defined to be the order of the quotient group_

\[\frac{S_{2}(\Gamma_{0}(N),\mathbb{Z})}{S_{2}(\Gamma_{0}(N),\mathbb{Z})[I_{f}]\oplus S_{2}(\Gamma_{0}(N),\mathbb{Z})[I_{f}]^{\perp}}. \tag{4}\]
_The exponent of the above group is called the congruence exponent of the abelian variety \(A\)._
It is natural to ask how the congruence number of an abelian variety is related to the congruence number of an elliptic curve when the abelian variety is an elliptic curve. To answer this question, we state the following theorem, which is stated in [1] without proof; we provide a brief proof here.
**Theorem 3**.: _Let \(f_{E}=\sum_{n>0}a_{n}q^{n}\in S_{2}\left(\Gamma_{0}(N)\right)\) be the newform linked to the elliptic curve \(E\). Then the congruence number of the elliptic curve is consistent with Definition 3._
Proof.: Let \(d\) be the order of the quotient group \(\frac{S_{2}(\Gamma_{0}(N),\mathbb{Z})}{S_{2}(\Gamma_{0}(N),\mathbb{Z})[I_{f_{E}}]\oplus S_{2}(\Gamma_{0}(N),\mathbb{Z})[I_{f_{E}}]^{\perp}}\). For any \(g\in S_{2}(\Gamma_{0}(N),\mathbb{Z})[I_{f_{E}}]^{\perp}\), we can find an \(h\in S_{2}(\Gamma_{0}(N),\mathbb{Z})\) such that \(dh=-g+f_{E}\). This implies that \(f_{E}\equiv g\pmod{d}\). Therefore, we have \(d\leq r_{E}\).

Since \(r_{E}\) is the congruence number, there exists \(g\in S_{2}\left(\Gamma_{0}(N)\right)\) such that \(f_{E}\equiv g\pmod{r_{E}}\). Now, define \(h\) to be \(\frac{1}{r_{E}}\left(f_{E}-g\right)\). We observe that \(r_{E}\) is the smallest integer such that \(r_{E}h\in S_{2}(\Gamma_{0}(N),\mathbb{Z})[I_{f_{E}}]\oplus S_{2}(\Gamma_{0}(N),\mathbb{Z})[I_{f_{E}}]^{\perp}\). Therefore, \(r_{E}\leq d\), and the proof is complete.
By the above theorem, the congruence number of an abelian variety agrees with the congruence number of an elliptic curve when the abelian variety is an elliptic curve. Agashe, Ribet, and Stein [2] have also established analogs of \(m_{E}\mid r_{E}\) for modular abelian varieties.
### A lower bound on \(2\)-valuation
Let \(E\) be a semistable curve with conductor \(N\). For each \(d\mid\mid N\), let us consider the Atkin-Lehner involution \(W_{d}\) on \(X_{0}(N)\). The set \(\{W_{d}\mid d\mid\mid N\}\) forms an abelian subgroup of automorphisms of rank equal to the number of primes dividing \(N\). For every prime divisor \(p\mid N\), we have \(W_{p}(f_{E})=\pm f_{E}\). More precisely, there exists \(w_{p}(f_{E})\in\{\pm 1\}\) such that \(W_{p}(f_{E})=w_{p}(f_{E})f_{E}\). It turns out that \(w_{p}(f_{E})=1\) if \(E\) has non-split multiplicative reduction at \(p\), and \(-1\) if \(E\) has split multiplicative reduction.
If \(W_{p}(f_{E})=f_{E}\), then the parametrization map \(\phi:X_{0}(N)\to E\) factors through \(X_{0}(N)/W_{p}\). The degree of the map \(X_{0}(N)\to X_{0}(N)/W_{p}\) is \(2\), contributing a factor of \(2\) to the modular degree.
Let \(\mu\) be the number of primes dividing \(N\). Let us consider the homomorphism \((\mathbb{Z}/2\mathbb{Z})^{\mu}\to\{\pm 1\}\) defined by \(W_{d}\mapsto w_{d}\), and let \(W^{\prime}\) be its kernel. Since the field generated by the Fourier coefficients of the associated newform \(f_{E}\) is \(\mathbb{Q}\), it follows from Dummigan and Krishnamoorthy [13, Proposition 2.1] that there is a homomorphism \(W^{\prime}\to E(\mathbb{Q})[2]\) with kernel \(W^{\prime\prime}\), whose order divides the modular degree \(m_{E}\). The order of \(W^{\prime\prime}\) is at least \(\frac{\#W^{\prime}}{\#E(\mathbb{Q})[2]}\). This gives
\[\nu_{2}(m_{E})\geq\nu_{2}\left(\frac{\#W^{\prime}}{\#E(\mathbb{Q})[2]}\right)=\nu_{2}\left(\#W^{\prime}\right)-\nu_{2}\left(\#E(\mathbb{Q})[2]\right)=\mu-\dim_{\mathbb{Z}/2\mathbb{Z}}(W/W^{\prime})-\dim_{\mathbb{Z}/2\mathbb{Z}}(E(\mathbb{Q})[2]),\]
where \(W=(\mathbb{Z}/2\mathbb{Z})^{\mu}\).
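For instance (spelling out a special case that reappears below): if \(E\) is semistable with non-split multiplicative reduction at every prime dividing \(N\), then \(w_{p}=1\) for all \(p\mid N\), so \(W^{\prime}=W\) and the bound reads \(\nu_{2}(m_{E})\geq\mu-\dim_{\mathbb{Z}/2\mathbb{Z}}(E(\mathbb{Q})[2])\); this is the form of the bound used in the proof of Theorem 6.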
## 3. Divisibility and Watkins's conjecture
In this section, we shall discuss the viability of Watkins's conjecture. Recall from the introduction that Watkins's conjecture predicts that \(2^{\text{rank}(E(\mathbb{Q}))}\) should divide \(m_{E}\). We shall briefly discuss the approaches that have been taken to study this.
### Heuristic proof of Watkins's conjecture
Dummigan heuristically proved that certain classes of elliptic curves satisfy Watkins's conjecture. Let \(E\) be an elliptic curve with squarefree even conductor \(N\) and no rational point of order \(2\), such that \(E[2]\) is ramified at all primes \(p\mid N\) and the action of complex conjugation on \(E[2]\) is non-trivial. Dummigan [12] proved that such an \(E\) satisfies Watkins's conjecture, assuming a certain \(R\cong\mathbb{T}\) theorem. He uses Galois cohomology techniques and a squaring map to capture the weak Mordell-Weil group in a Selmer group. He then establishes an isomorphism with the tangent space of the universal deformation ring of the residual representation \(\overline{\rho}:\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\to\operatorname{Aut}(E[2])\) with certain deformation conditions; the local conditions in the \(2\)-Selmer group determine the deformation conditions. Finally, using Wiles's numerical criterion, he proved that \(2^{\text{rank}(E(\mathbb{Q}))}\) divides the modular degree of the elliptic curve.
In support of the conjecture, let us note that the conjecture is true for almost all elliptic curves over \(\mathbb{Q}\) when arranged by a suitable height. Any elliptic curve over \(\mathbb{Q}\) can be uniquely written in the Weierstrass form \(E_{A,B}:y^{2}=x^{3}+Ax+B,\ A,B\in\mathbb{Z}\) such that \(p^{6}\) does not divide \(B\) whenever \(p^{4}\mid A\). We arrange the
elliptic curves over \(\mathbb{Q}\) with respect to the following notion of height
\[H(E_{A,B})=\max\{|A|^{3},|B|^{2}\}^{1/6}.\]
Then, the number of elliptic curves of height at most \(x\) is asymptotic to \(cx^{5}\), for some constant \(c>0\). Denote,
\[\text{Avg}(\text{rank})=\lim\,\sup_{x\to\infty}\frac{\sum_{H(E)\leq x}\text{ rank}(E)}{\sum_{H(E)\leq x}1},\,\,\text{Avg}_{2}(\text{degree})=\lim\,\sup_{x\to\infty} \frac{\sum_{H(E)\leq x}\nu_{2}(m_{E})}{\sum_{H(E)\leq x}1}.\]
Note that Watkins's conjecture implies that \(\text{Avg}(\text{rank})\leq\text{Avg}_{2}(\text{degree})\). However, we can still prove the consequence without using Watkins's conjecture. In some sense, the following can be considered an average version of Watkins's conjecture.
**Theorem 4**.: _Over the family of all elliptic curves over \(\mathbb{Q}\), we have the following:_
\[\text{Avg}(\text{rank})\leq\text{Avg}_{2}(\text{degree}).\]
To prove this, we first need the following key ingredient.
**Lemma 1**.: _When arranged by height, for almost all the elliptic curves \(E/\mathbb{Q}\) we have \(\omega(N_{E})\geq 2\)._
This is presumably well known; however, we include a self-contained proof for lack of a reference.
Proof.: It is enough to show that there exist only \(O\left(\frac{x^{5}}{\log x}\right)\) many elliptic curves \(E\) with \(H(E)\leq x\) and \(\omega(N_{E})=1\). Note that we may assume \((N_{E},6)=1\), since there exist only finitely many (up to isomorphism) elliptic curves whose conductor is a power of \(2\) or \(3\); indeed, only finitely many elliptic curves (up to isomorphism) have good reduction outside \(\{2,3\}\). The discriminant of \(E_{A,B}\) is \(\Delta(E_{A,B})=-16(4A^{3}+27B^{2})\), and the conductor of \(E_{A,B}\) is supported on the primes dividing \(\Delta(E_{A,B})\). It is then enough to show that

\[\#\{A,B\in\mathbb{Z}:|A|\leq x^{2},|B|\leq x^{3},\ \omega(4A^{3}+27B^{2})=1\}\ll\frac{x^{5}}{\log x}.\]
Moreover, as argued above, it is enough to assume that \(4A^{3}+27B^{2}\) is co-prime to \(6\). Fix a parameter \(z>3\) to be determined later, and note that if \(-4A^{3}-27B^{2}\) is a power of some prime \(p>z\), then \(-A^{3}\equiv B^{2}\pmod{p}\). In particular, \(\left(\frac{-A}{p}\right)=1\). It is then enough to estimate \(S_{1}+S_{2}\), where
\[S_{1}=\sum_{|A|\leq x^{2}}\#\{B\in\mathbb{Z}:|B|\leq x^{3},\ p\mid 4A^{3}+27B^{2}\implies p>z\}\]
and
\[S_{2}=\#\{A,B\in\mathbb{Z}:|A|\leq x^{2},\ |B|\leq x^{3},\ |4A^{3}+27B^{2}|=\text{power of some prime }p\leq z\}.\]
To estimate \(S_{2}\), note that if \(|4A^{3}+27B^{2}|=p^{n}\) for some \(A,B\in\mathbb{Z}\) with \(|A|\leq x^{2}\), \(|B|\leq x^{3}\), then \(p\ll x^{6/n}\). Therefore, the terms with \(n\geq 4\) contribute only \(O(x^{2+3/2}\log x)\) many such \(A,B\). In particular, we may assume \(n\leq 3\), so that \(|4A^{3}+27B^{2}|\leq z^{3}\), when estimating \(S_{2}\). There are at most \(O(x^{2}z^{3})\) many such \(A,B\).
Let us now estimate \(S_{1}\), which we can do by any standard sieve method. For any integer \(A\), let \(\mathcal{P}_{A}\) be the set of primes \(p\) for which \(\left(\frac{-A}{p}\right)=1\). Then by Selberg's sieve, we have

\[S(A,\mathcal{P}_{A},z)=\#\{|B|\leq x^{3}:p\mid 4A^{3}+27B^{2}\implies p>z\}\ll\frac{x^{3}}{\log z}+z^{2}.\]
The proof then follows by taking \(z=x^{1-\epsilon}\).
With this, we are now ready to prove Theorem 4.
Proof of Theorem 4.: It follows from Lemma 1 that the conductor of almost all elliptic curves has at least two prime factors. Moreover, it follows from Grant's result in [17] that almost all of them are \(2\)-torsion free over \(\mathbb{Q}\). Now, the discussion in Section 2.5 implies that \(\operatorname{Avg}_{2}(\operatorname{degree})\geq 1\). The proof is then complete by Bhargava and Shankar's result in [5, Theorem 3], which shows that \(\operatorname{Avg}(\operatorname{rank})<1\).
**Remark 1**.: The minimalist conjecture predicts that the average rank of elliptic curves should be \(1/2\); in particular, half of the curves should have rank \(0\) and half rank \(1\). Since Watkins's conjecture is known to be true for all curves of rank \(1\), we see that, under the minimalist conjecture, Watkins's conjecture is true for almost all elliptic curves. This is a stronger version of Theorem 4.
### Elliptic curves with odd modular degree
Calegari and Emerton undertook a significant endeavour to establish the validity of Watkins's conjecture, focusing specifically on elliptic curves with odd modular degree. Their approach involved developing a distinct characterization for elliptic curves falling within this category. Through an in-depth analysis of the Atkin-Lehner involutions on \(X_{0}(N)\), they effectively demonstrated that when an elliptic curve \(E\) has odd modular degree, the conductor \(N\) of the curve is divisible by at most two odd primes, and the analytic rank of \(E\) is necessarily even. They also showed that if \(E\) is an elliptic curve with odd modular degree whose conductor \(N\) has more than one prime factor, then \(E\) possesses a rational point of order \(2\).
Let \(\mathbb{T}\) denote the Hecke algebra over \(\mathbb{Z}\) acting on \(S_{2}\left(\Gamma_{0}(N)\right)\), and let \(\mathfrak{m}\) be a maximal ideal of \(\mathbb{T}\) such that \(\mathbb{T}/\mathfrak{m}=\mathbb{F}_{2}\). Let \(f_{E}\) be the eigenform of level \(\Gamma_{0}(N)\) and weight \(2\) corresponding to the elliptic curve \(E\). According to the theorem in [2], the modular degree \(m_{E}\) of \(E\) is divisible by \(2\) if and only if \(f_{E}\) satisfies a congruence modulo \(2\) with a cusp form of level \(\Gamma_{0}(N)\). The set of cuspidal eigenforms congruent to \(f_{E}\) is indexed by the set \(\operatorname{Hom}(\mathbb{T}_{\mathfrak{m}}\otimes\mathbb{Q}_{2},\overline{\mathbb{Q}}_{2})\). If \(f_{E}\) does not satisfy any congruence with a cusp form, this means that there are no cuspidal eigenforms congruent to \(f_{E}\) besides \(f_{E}\) itself; in other words, the set \(\operatorname{Hom}(\mathbb{T}_{\mathfrak{m}}\otimes\mathbb{Q}_{2},\overline{\mathbb{Q}}_{2})\) is a singleton. Therefore, \(f_{E}\) satisfies no congruence with a cusp form of level \(\Gamma_{0}(N)\) if and only if \(\mathbb{T}_{\mathfrak{m}}=\mathbb{Z}_{2}\). It is known that \(\mathbb{T}_{\mathfrak{m}}=\mathbb{Z}_{2}\) is equivalent to the non-existence of a nontrivial minimal deformation of \(\overline{\rho}\) to \(\operatorname{GL}_{2}(\mathbb{F}_{2}[x]/(x^{2}))\), where \(\overline{\rho}:\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\to\operatorname{Aut}(E[2])\) is the residual representation corresponding to \(E[2]\).
The case of an elliptic curve \(E\) with an odd modular degree and a conductor that is a prime has been analyzed using the Galois deformation theory. Furthermore, they demonstrated that if the Galois representation \(\overline{\rho}\) satisfies any of the following conditions:
* \(\overline{\rho}\) is totally real.
* \(\overline{\rho}\) is unramified at 2.
* \(\overline{\rho}\) is ordinary, complex, and ramified at 2.
then the ring of Hecke operators \(\mathbb{T}_{\mathfrak{m}}\) is not isomorphic to \(\mathbb{Z}_{2}\). Consequently, it can be concluded that if \(E\) has prime conductor and odd modular degree, then it has supersingular reduction at \(2\), and \(\mathbb{Q}(E[2])\) is totally complex.
In the analysis of the case where the conductor is a prime power, it was observed that there are only finitely many elliptic curves whose conductor is a power of \(2\). Now, let us consider the case where the conductor is of the form \(N=p^{r}\) \((r>1)\), with \(p\) an odd prime. By considering the twist of \(E\) by \(\chi\), the unique quadratic character of conductor \(p\), and noting that quadratic twists preserve
\(E[2]\), we get \(f_{E}\equiv f_{E}\otimes\chi\pmod{2}\). Since \(E\) has an odd modular degree, this implies that \(f_{E}=f_{E}\otimes\chi\). Therefore, \(E\) has complex multiplication. It is known that there are only finitely many elliptic curves with complex multiplication and conductor \(p^{r}\). The authors determined the modular degree of these curves by consulting current databases and found that the only possible values for \(N\) are 27, 32, 49, or 243.
Subsequently, building upon the work of Calegari and Emerton [6] and Agashe et al. [2], Yazdani examined elliptic curves with odd congruence numbers. He found that the conductors of all such elliptic curves, with a few exceptions, fall into the categories \(p,pq,2p,4p\). Each of these categories was meticulously examined to demonstrate that all elliptic curves falling within these classifications possess a finite Mordell-Weil group, with the only potential exception being when the conductor is prime. In other words, Yazdani also proved that if an elliptic curve \(E\) defined over \(\mathbb{Q}\) has odd modular degree, then its rank is zero unless it possesses a prime conductor and an even analytic rank. For a more comprehensive statement, please refer to [41, Theorem 3.8].
Furthermore, Kazalicki and Kohen [24] settled the case of elliptic curves with prime conductors by proving that those with an odd modular degree have a rank of zero. To establish their results, Kazalicki and Kohen utilized the Gross association, as described in Subsection 2.2, to establish a connection between the elliptic curve and an element of the Picard group. They then proved that if a rational elliptic curve \(E\) has a prime conductor and a non-zero rank, the modular degree of \(E\) is always even. Let us give a brief idea of the argument in the next section.
### Elliptic curves with prime conductor
Let \(E/\mathbb{Q}\) be any elliptic curve of positive rank and prime conductor \(p\). Mestre's result in [26] implies that \(m_{E}=\langle v_{E},v_{E}\rangle=\sum_{i=1}^{n}w_{i}v_{E}(e_{i})^{2}\). On the other hand, it turns out that \(\sum_{i=1}^{n}v_{E}(e_{i})=0\), and all the \(w_{i}\) are \(1\) when \(p\equiv 1\pmod{4}\); we immediately see that \(m_{E}\) is even in this case, while the case \(p\equiv 3\pmod{4}\) requires more care. This settles the conjecture in the rank-\(1\) case. For the rank-\(2\) case, Gross-Kudla's formula implies \(\sum_{i=1}^{n}v_{E}(e_{i})^{3}=0\); the reader may take a look at [24] for more details. Consequently, we have \(\sum_{i=1}^{n}v_{E}(e_{i})^{2}\equiv 0\pmod{4}\) if each of the odd terms in \(\{v_{E}(e_{i})\}_{1\leq i\leq n}\) appears an even number of times. This indeed holds, provided that the following conjecture is true.
**Conjecture 2**.: _Let \(E/\mathbb{Q}\) be any elliptic curve of conductor \(p\) and \(\operatorname{rank}\left(E\left(\mathbb{Q}\right)\right)>0\). Then the coefficient \(v_{E}(e_{i})\) of \(e_{i}\) in \(v_{E}\) is even whenever \(j(e_{i})\in\mathbb{F}_{p}\)._
When the root number of \(E\) is \(-1\), Kazalicki and Kohen showed that all such \(v_{E}(e_{i})\) are, in fact, \(0\). This is true because the root number condition implies \(t_{p}(v_{E})=-v_{E}\). For each \(1\leq i\leq n\), let \(1\leq\bar{i}\leq n\) be the unique index such that \(E_{\bar{i}}=E_{i}^{p}\). Note that \(i=\bar{i}\) if and only if \(j(E_{i})\in\mathbb{F}_{p}\). With this, we have [23] that \(t_{p}(e_{i})=e_{\bar{i}}\), which gives a proof of the conjecture in the root number \(-1\) case. In the root number \(1\) case, the conjecture is known to be true if \(E\) has positive discriminant and there exists a prime \(\ell\) such that
\[\left(\frac{-p}{\ell}\right)=-1,\;a(\ell)\equiv 1\pmod{2}. \tag{5}\]
Under these conditions, we have \(B_{i,j}(\ell)\equiv 0\pmod{2}\) whenever \(i,j\in S_{p}\), where \(S_{p}\) denotes the set of indices \(i\) for which \(j(e_{i})\in\mathbb{F}_{p}\). Moreover, the conditions in (5) are satisfied for any \(E/\mathbb{Q}\) of conductor \(p\) such that \(E(\mathbb{Q})[2]\) is trivial and \(\Delta_{E}>0\).
The main ingredients are \(B_{i,j}(\ell)\equiv 0\pmod{2}\) and \(a(\ell)\equiv 1\pmod{2}\). In the case \(\Delta_{E}<0\), the condition \(\left(\frac{-p}{\ell}\right)=-1\) implies that \(\ell\) is inert in \(\mathbb{Q}(\sqrt{-p})\), which implies that \(2\) divides the order of \(\operatorname{Frob}_{\ell}\), and hence that \(a(\ell)\) is not odd. This is one of the main hurdles in the case of root number \(1\) and negative discriminant. However, Kazalicki and Kohen's proof also shows that it is enough to have some integer \(m\) for which \(a(m)\equiv 1\pmod{2}\) and all the \(B_{i,j}(m)\) are even. From the commutativity of the Brandt matrices and [23, Proposition 20], it follows that the matrices \(\widetilde{B}(m)=\left(B_{i,j}(m)\right)_{i,j\in S_{p}}\pmod{2}\) commute inside \(M_{s_{p}}(\mathbb{Z}/2\mathbb{Z})\). Therefore, it may be fruitful to analyze the structure of those \(\widetilde{B}(m)\) for which one of the eigenvalues is \(1\). Also, note that the entries \(B_{i,j}(m)\) assemble into a modular form \(\theta_{i,j}\), from which we get another modular form \(\theta^{\prime}_{i,j}:=\theta_{i,j}-E_{2}\), where \(E_{2}=1-24\sum_{n=1}^{\infty}\frac{nq^{n}}{1-q^{n}}\). Now we require the cusp forms \(f_{E}\) and \(\theta^{\prime}_{i,j}\) to have a common Fourier coefficient of opposite parity. A statement about the independence of the Fourier coefficients, e.g., the _Generalized Sato-Tate hypothesis_ [4, Section 4.1], could be helpful in this approach.
We end this section with the following remark. As far as prime conductors are concerned, Watkins predicted the following in [38, Section 4.2].
**Conjecture 3**.: _Let \(N:=p\) be a prime and suppose \(\nu_{2}(m_{E})=0\). Then either \(p=17\), or \(p\equiv 3\pmod{8}\), or \(p\) is of the form \(x^{2}+64\)._
#### 3.3.1. A conjecture on the supersingular coefficients
In this section, we discuss Conjecture 2. In [23], Kazalicki and Kohen studied modular forms over \(\overline{\mathbb{F}_{p}}\) in the sense of Katz. To be a bit more specific, there exists a polynomial \(P_{E}(x)\in\mathbb{Z}[x]\) whose roots in \(\mathbb{F}_{p}\) are often supersingular zeros. In particular, they showed that if \(j\in\mathbb{F}_{p}\) is the \(j\)-invariant of a supersingular elliptic curve \(E_{i}\) over \(\overline{\mathbb{F}}_{p}\), then \(P_{E}(j)\equiv 0\pmod{p}\) if and only if \(v_{E}(e_{i})\equiv 0\pmod{p}\). In the sense of Katz, the condition \(P_{E}(j)\equiv 0\pmod{p}\) corresponds to a rule that is \(0\) acting on \(E_{i}\). Finally, they studied Hecke operators on the space of modular forms corresponding to supersingular elliptic curves, again in the sense of Katz. This, of course, does not settle the conjecture; however, it shows that in the root number \(-1\) case, any supersingular \(j\)-invariant in \(\mathbb{F}_{p}\) is a root of \(P_{E}(x)\pmod{p}\). We denote by \(s_{p}\) the cardinality \(\#S_{p}\), i.e., the number of isomorphism classes of supersingular elliptic curves over \(\mathbb{F}_{p}\). Following a remark of Kazalicki-Kohen in [23], we can deduce the following in support of the conjecture.
**Theorem 5**.: _Let \(E/\mathbb{Q}\) be any elliptic curve of positive rank having prime conductor \(p\). Suppose that \(\left(\frac{-D}{p}\right)=-1\) for some \(D\in\{1,3,7,11,19,43,67,163\}\). Then the following implications hold._
1. _Conjecture_ 2 _is true for any such prime_ \(p\) _with_ \(s_{p}=2\)_._
2. _If_ \(s_{p}>2\)_, and_ \(s_{p}\) _is even, there are at least two distinct_ \(i,j\in S_{p}\) _for which_ \(v_{E}(e_{i})\equiv v_{E}(e_{j})\equiv 0\pmod{2}\)_._
Before proving this, let us first define the required terminology and recall some preliminary results on supersingular elliptic curves. For any integer \(D\equiv 3\pmod{4}\), we say that an elliptic curve over a field \(F\) has CM by \(\mathcal{O}_{-D}=\mathbb{Z}[\frac{1}{2}(D+\sqrt{-D})]\) if \(\mathcal{O}_{-D}\) embeds maximally into the endomorphism ring of the elliptic curve over \(\overline{F}\). The supersingular elliptic curves with \(j\)-invariant in \(\mathbb{F}_{p}\) then correspond to those \(D\) for which \(p\) is ramified or inert in \(\mathbb{Q}(\sqrt{-D})\). On the other hand, there are \(h(-D)\) many isomorphism classes of elliptic curves over \(\overline{\mathbb{Q}}\) with CM by \(\mathcal{O}_{-D}\), where \(h(-D)\) is the class number of \(\mathbb{Q}(\sqrt{-D})\). Furthermore, we denote by \(h_{i}(-D)\) the number of optimal embeddings of the order of discriminant \(-D\) into \(\operatorname{End}(E_{i})\) modulo conjugation by \(\operatorname{End}(E_{i})^{*}\), and by \(u(-D)\) the number of units of the order. Then we set \(b_{D}=\frac{1}{u(-D)}\sum_{i=1}^{n}h_{i}(-D)e_{i}\).
Proof of Theorem 5.: We first claim that the number of odd terms among \(\{v_{E}(e_{i})\}_{i\in S_{p}}\) is always even. This is immediate from the fact that \(\sum_{i=1}^{n}v_{E}(e_{i})=0\) and that \(v_{E}(e_{\bar{i}})\equiv v_{E}(e_{i})\pmod{2}\); in particular,
\[\sum_{i\in S_{p}}v_{E}(e_{i})\equiv 0\pmod{2}.\]
This shows that if there is one even \(v_{E}(e_{i})\) with \(i\in S_{p}\), then there are at least two even terms, since \(s_{p}\) is even. Since \(\operatorname{rank}(E(\mathbb{Q}))>0\), we have \(L(E,1)=0\). In particular, by the Gross-Waldspurger formula [18], we have \(\langle v_{E},b_{D}\rangle=0\) for any \(D>0\) with \(\left(\frac{-D}{p}\right)=-1\).
For the proof of part (a), note that since \(p\) and \(D\) satisfy the imposed conditions, \(p\) is inert in \(\mathbb{Q}(\sqrt{-D})\). In particular, \(b_{D}\) is supported only on the indices in \(S_{p}\). Moreover, since \(\mathbb{Q}(\sqrt{-D})\) has class number \(1\), we have \(h_{i}(-D)=1\) for exactly one \(i\in S_{p}\), namely the index for which \(E_{i}\) has CM by \(\mathcal{O}_{-D}\). Then the condition \(\langle v_{E},b_{D}\rangle=0\) implies \(v_{E}(e_{i})=0\). In particular, if \(s_{p}=2\), then \(v_{E}(e_{i^{\prime}})\equiv 0\pmod{2}\) for the other index \(i^{\prime}\in S_{p}\) as well. Now, for part (b), the proof follows from the observation in the previous paragraph, since \(s_{p}\) is even.
**Remark 2**.: It is known [18] that
\[s_{p}=\begin{cases}\frac{1}{2}h(-p),&\text{if $p\equiv 1\pmod{4}$}\\ 2h(-p),&\text{if $p\equiv 3\pmod{8}$}\\ h(-p),&\text{if $p\equiv 7\pmod{8}$}\end{cases}\]
We now turn to the case of an arbitrary prime power conductor. In this case, it turns out that \(\operatorname{rank}(E(\mathbb{Q}))\leq 1\) provided that \(E(\mathbb{Q})[2]\) is non-trivial; see [33, Lemma 2.4] for details. In particular, Watkins's conjecture is true in this case. Furthermore, a complete classification of all such elliptic curves can be found in [33]. The next section briefly discusses the conjecture for elliptic curves with non-trivial \(2\)-torsion over \(\mathbb{Q}\).
### Elliptic curves with non-trivial \(2\)-torsion
In this section, we assume that \(E(\mathbb{Q})[2]\) is either \(\mathbb{Z}/2\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\). One of the important advantages of this assumption is that \(2\mid\#E(\mathbb{F}_{p})\) for any prime \(p\) co-prime to \(2N\), since \(E(\mathbb{Q})[2]\) embeds in \(E(\mathbb{F}_{p})\). Also, in this case, we have a rank bound in terms of the number of prime factors of \(N\). For any squarefree integer \(D\), denote by \(E^{(D)}\) the twist of \(E\) by \(D\): writing \(E:y^{2}=x^{3}+ax+b\), the quadratic twist \(E^{(D)}\) is the elliptic curve defined by \(y^{2}=x^{3}+aD^{2}x+bD^{3}\). Note that \(E^{(D)}\) is isomorphic to \(E\) over the field \(\mathbb{Q}(\sqrt{D})\). From (1), we have
\[\nu_{2}(m_{E^{(D)}})\gg\sum_{\begin{subarray}{c}p\mid D\\ (p,\ 2N)=1\end{subarray}}\nu_{2}\left((p+1)(p+1-a(p))(p+1+a(p))\right)\gg 3( \omega(D)-\omega(N)). \tag{6}\]
Then for any \(D\) with a sufficiently large number of prime factors, the proof of Watkins's conjecture for \(E^{(D)}\) (see [14]) follows from the following rank bound
\[\operatorname{rank}(E(\mathbb{Q}))\leq 2\alpha+\mu-1\leq 2\omega(N)-1, \tag{7}\]
where \(\alpha\) and \(\mu\) respectively denote the number of primes of additive and multiplicative reduction of \(E\). In particular, given any \(E/\mathbb{Q}\), the number of \(|D|\leq x\) for which the twisted curve \(E^{(D)}\) does not satisfy Watkins's conjecture is only of order \(\frac{x(\log\log x)^{k(E)}}{\log x}\), where \(k(E)=6+5\omega(N)-\nu_{2}(m_{E}/c_{E}^{2})\). In the semistable case, or at least when \(E\) is semistable away from a finite set of primes, \(k(E)\) is linearly bounded as a function of \(\omega(N_{E})\).
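The per-prime contribution in (6) is easy to check numerically: for an odd prime \(p\nmid 2N\) with \(a(p)\) even, each of \(p+1\), \(p+1-a(p)\), \(p+1+a(p)\) is even, so the prime contributes at least \(3\) to the \(2\)-adic valuation. Below is a small Python sketch (with a naive point count over \(\mathbb{F}_{p}\) and a curve chosen by us purely for illustration):

```python
def ap(a, b, p):
    # trace of Frobenius a(p) = p + 1 - #E(F_p) for y^2 = x^3 + a*x + b,
    # computed naively via Euler's criterion
    npts = p + 1  # the point at infinity, plus affine points counted below
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            npts += 1
        elif pow(rhs, (p - 1) // 2, p) == 1:
            npts += 2
    return p + 1 - npts

def nu2(n):
    # 2-adic valuation of a nonzero integer
    n, v = abs(n), 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

# illustrative curve: y^2 = x^3 - x + 1, with good reduction away from 2 and 23
a, b = -1, 1
for p in [5, 7, 11, 13, 17, 19, 29, 31]:
    t = ap(a, b, p)
    if t % 2 == 0:  # a(p) even, so the printed valuation is always >= 3
        print(p, t, nu2((p + 1) * (p + 1 - t) * (p + 1 + t)))
```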
Recall that the rank bound at (7) gives the proof of Watkins's conjecture when \(\omega(N)=1\), i.e., when \(N\) is a prime power. In fact, in the case when \(E\) has a prime power conductor, it is shown in [8] that any quadratic twist \(E^{(D)}\) satisfies Watkins's conjecture. The proof essentially follows from the classification in [33], where a much stronger estimate holds in (6) and (7). More precisely, we have
\[\nu_{2}(m_{E^{(D)}})\geq\nu_{2}\left(\frac{m_{E}}{c_{E}^{2}}\right)+\nu_{2}\left(\frac{||f_{E^{(D)}}||_{N^{(D)}}^{2}}{||f_{E}||_{N}^{2}}\right)+\frac{1}{6}\nu_{2}\left(\frac{\Delta_{E^{(D)}}}{\Delta_{E}}\right).\]
Essentially from Setzer's classification, the following estimates [8] hold
\[\nu_{2}\left(\frac{m_{E}}{c_{E}^{2}}\right)\geq-1,\ \nu_{2}\left(\frac{||f_{E^{(D)}}||_{N^{(D)}}^{2}}{||f_{E}||_{N}^{2}}\right)\gg\omega(D),\ \nu_{2}\left(\frac{\Delta_{E^{(D)}}}{\Delta_{E}}\right)=O(1),\]
where the implicit constants are absolute.
In this direction, we also recall the result of Caro and Pasten in [9]. They showed that \(E\) satisfies Watkins's conjecture if \(E(\mathbb{Q})[2]=\mathbb{Z}/2\mathbb{Z}\) and either all primes of bad reduction are non-split, or the number of primes of non-split reduction is odd. In this section, we give an extension of this result to the case \(E(\mathbb{Q})[2]=(\mathbb{Z}/2\mathbb{Z})^{2}\).
**Theorem 6**.: _Let \(E/\mathbb{Q}\) be any semistable elliptic curve such that \(E(\mathbb{Q})[2]=(\mathbb{Z}/2\mathbb{Z})^{2}\). Then Watkins's conjecture is true for \(E\) provided that one of the following holds._
1. \(E\) _has non-split multiplicative reduction at all primes dividing_ \(N_{E}\)_, and_ \(\omega(N_{E})\) _is odd._
2. \(E\) _has at least one split multiplicative reduction, and_ \(\dim_{\mathbb{F}_{2}}(\mathrm{III}[2])>1\)_._
3. _The parity conjecture is true for_ \(E\)_,_ \(\dim_{\mathbb{F}_{2}}(\mathrm{III}[2])=1\)_, and the number of primes of non-split multiplicative reduction is even._
4. \(\dim_{\mathbb{F}_{2}}(\mathrm{III}[2])>2\)_._
_Moreover, if \(E(\mathbb{Q})[2]=\mathbb{Z}/2\mathbb{Z}\), then Watkins's conjecture is true for \(E\) provided that either the conditions of [9, Theorem 1.2] hold, or \(\mathrm{III}[2]\) is non-trivial._
Proof.: For the proof of part (a), we follow the same approach as in [9]. If \(E\) has non-split reduction at every prime, then \(\nu_{2}(m_{E})\geq\mu-2\). Suppose, for the sake of contradiction, that \(\mathrm{rank}(E(\mathbb{Q}))\geq\mu-1\); the bound \(\dim_{\mathbb{F}_{2}}(\mathrm{Sel}^{2}(E))\leq\mu+1\) established below rules out \(\mathrm{rank}(E(\mathbb{Q}))\geq\mu\), so we may assume \(\mathrm{rank}(E(\mathbb{Q}))=\mu-1\). It then follows from the exact sequence
\[0\to E(\mathbb{Q})/2E(\mathbb{Q})\to\mathrm{Sel}^{2}(E)\to\mathrm{III}[2]\to 0, \tag{8}\]
that \(\dim_{\mathbb{F}_{2}}(\mathrm{Sel}^{2}(E))\geq\mu+1\). On the other hand, it immediately follows from the second exact sequence in [9, Lemma 3.1] that
\[\dim_{\mathbb{F}_{2}}(\mathrm{Sel}^{2}(E))\leq s(E,\theta)+s^{\prime}(E, \theta)\leq\mu+1.\]
This shows that \(\mathrm{III}(E)[2]\) is trivial; hence the parity conjecture holds for \(E\), and consequently the rank is odd, since \(-1=-\prod_{p\mid N}w_{p}=w(E/\mathbb{Q})=(-1)^{\mathrm{rank}(E(\mathbb{Q}))}\). This is a contradiction, since the rank is \(\mu-1\), which is even. Here \(w(E/\mathbb{Q})\) denotes the root number of \(E/\mathbb{Q}\), and \(w_{p}\) denotes the local root number, i.e., the sign of the involution \(W_{p}\) acting on \(f_{E}\).
For the proof of part \((b)\), we have \(\nu_{2}(m_{E})\geq\mu-3\). Let us assume that \(\mathrm{rank}(E(\mathbb{Q}))\geq\mu-2\). From the proof of part \((a)\), we know that \(\dim_{\mathbb{F}_{2}}(\mathrm{Sel}^{2}(E))\leq\mu+1\). This shows that \(\dim_{\mathbb{F}_{2}}(\mathrm{III}[2])\leq 1\), a contradiction, as desired.
Now, for the proof of part \((c)\), note that the imposed condition on \(\mathrm{III}\) and the fact that \(\dim_{\mathbb{F}_{2}}(\mathrm{Sel}^{2}(E))\leq\mu+1\) imply \(\mathrm{rank}(E(\mathbb{Q}))\leq\mu-2\). Since the parity conjecture is true for \(E\) and the number of primes of non-split multiplicative reduction is even, we have \(w(E/\mathbb{Q})=(-1)^{\mu-n_{s}+1}=(-1)^{\mathrm{rank}(E(\mathbb{Q}))}\), and hence \(\mu-\mathrm{rank}(E(\mathbb{Q}))\) is odd; here \(n_{s}\) denotes the number of primes of non-split multiplicative reduction. This shows that \(\mathrm{rank}(E(\mathbb{Q}))\leq\mu-3\), as desired.
To prove part \((d)\), observe that we have \(\nu_{2}(m_{E})\geq\mu-3\) and \(\mathrm{rank}(E(\mathbb{Q}))\leq\mu-1\). Now, assume for contradiction that \(\mathrm{rank}(E(\mathbb{Q}))>\mu-3\). According to the short exact sequence (8), it follows that \(\dim(\mathrm{Sel}^{2}(E))>\mu+1\), which leads to a contradiction. Therefore, the assumption is false, and we can conclude that \(\mathrm{rank}(E(\mathbb{Q}))\leq\mu-3\).
Now, for the case \(E(\mathbb{Q})[2]=\mathbb{Z}/2\mathbb{Z}\), it is enough to consider the case when there is at least one split multiplicative reduction. In this case, we have \(\nu_{2}(m_{E})\geq\mu-2\). If \(\mathrm{rank}(E(\mathbb{Q}))\geq\mu-1\), then we must have \(\dim_{\mathbb{F}_{2}}(\mathrm{III}[2])=0\), because \(\dim_{\mathbb{F}_{2}}(\mathrm{Sel}^{2}(E))\leq\mu\), which follows from [9, Lemma 3.1]. This is a contradiction, as \(\mathrm{III}[2]\) is non-trivial.
**Remark 3**.: One can follow a similar treatment for elliptic curves having additive reduction at some primes. In that case, one needs to put suitable conditions on the parity of \(N^{+}-N^{-}\), where \(N^{+}\) (resp. \(N^{-}\)) denotes the number of prime powers \(p^{e}\) such that \(p^{e}\mid\mid N_{E}\) and \(w_{p}=1\) (resp. \(w_{p}=-1\)). We refer the reader to [25, Theorem 1.1] for a description of the local root numbers at the primes of additive reduction.
### Elliptic curves with trivial \(2\)-torsion
It is known from [17] that \(E(\mathbb{Q})[2]\) is trivial for almost all elliptic curves over \(\mathbb{Q}\); therefore, this case covers most elliptic curves. In this case, \(\mathrm{Gal}(\mathbb{Q}(E[2])/\mathbb{Q})\) is either \(\mathbb{Z}/3\mathbb{Z}\) or the symmetric group \(S_{3}\). Note that \(a(p)\) is even if and only if \(\mathrm{Frob}_{p}\) has order at most \(2\) in \(S_{3}\). If the Galois group is \(\mathbb{Z}/3\mathbb{Z}\), or equivalently, if \(\Delta_{E}\) is a square, the set of such primes \(p\) has density exactly \(\frac{1}{3}\). However, if the Galois group is \(S_{3}\), the set of such primes \(p\) has density exactly \(\frac{2}{3}\). With this observation, we prove the following.
**Proposition 1**.: _Let \(E/\mathbb{Q}\) be an elliptic curve with trivial \(E(\mathbb{Q})[2]\). Then there exists a set of primes \(S_{E}\) of positive density \(d_{E}\) such that, for any squarefree integer \(D\) whose prime factors all lie in \(S_{E}\), we have_
\[\nu_{2}(m_{E^{(D)}})\geq 3\omega(D)+O_{E}(1).\]
_Here \(d_{E}=\frac{1}{3}\), if \(\Delta_{E}\) is a square, and \(\frac{2}{3}\) otherwise. Moreover, we have \(\nu_{2}(m_{E^{(D)}})\geq\omega(D)+O_{E}(1)\), for any squarefree integer \(D\)._
Proof.: The proof of the second part follows immediately from (6). For the first part, denote by \(S_{E}\) the set of all primes for which \(a(p)\) is even. Viewing \(\mathrm{Gal}(\mathbb{Q}(E[2])/\mathbb{Q})\) inside \(S_{3}\), asking for \(a(p)\) to be even is equivalent to asking for \(\mathrm{Frob}_{p}\) not to have order \(3\). When the Galois group is \(\mathbb{Z}/3\mathbb{Z}\), or equivalently, when \(\Delta_{E}\) is a square, the required density is \(\frac{1}{3}\). Similarly, in the \(S_{3}\) case, the density is \(\frac{2}{3}\).
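The densities in Proposition 1 can be observed empirically; here is a short sketch (reusing the naive point count from the previous sketch, with a test curve of our choosing) that estimates the proportion of primes with \(a(p)\) even:

```python
def ap(a, b, p):
    # naive trace of Frobenius, as in the previous sketch
    npts = p + 1
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            npts += 1
        elif pow(rhs, (p - 1) // 2, p) == 1:
            npts += 2
    return p + 1 - npts

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i in range(2, n + 1) if sieve[i]]

# test curve: y^2 = x^3 - 2, with Delta = -1728 not a square, so the
# Galois group is S_3 and the density of even a(p) should approach 2/3
a, b = 0, -2
ps = [p for p in primes_up_to(3000) if p > 3]  # avoid the bad primes 2 and 3
frac = sum(1 for p in ps if ap(a, b, p) % 2 == 0) / len(ps)
print(frac)  # empirically close to 2/3
```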
To deduce a consequence for Watkins's conjecture, we study the rank of \(E^{(D)}\) on average. It is a conjecture of Goldfeld [16] that the average rank is \(\frac{1}{2}\) for any elliptic curve \(E/\mathbb{Q}\). Heath-Brown [20] showed that BSD and GRH for the \(L\)-functions of elliptic curves imply that the average is at most \(\frac{3}{2}\). With this in hand, we have the following result.
**Theorem 7**.: _Let \(E/\mathbb{Q}\) be any elliptic curve with trivial \(E(\mathbb{Q})[2]\). Then Watkins's conjecture is true for almost all quadratic twists, provided that BSD and GRH hold for the \(L\)-functions of elliptic curves._
Proof.: Denote by \(S\) the set of squarefree integers \(D\) for which Watkins's conjecture is not true for \(E^{(D)}\). It is enough to prove that \(S\) has density \(0\). For the sake of contradiction, let us assume that the (lim-sup) density is \(c>0\). For any \(D\in S\), Watkins's conjecture fails, so Proposition 1 gives \(\operatorname{rank}(E^{(D)}(\mathbb{Q}))>\nu_{2}(m_{E^{(D)}})\geq\omega(D)+O_{E}(1)\). In particular, it follows from Heath-Brown's result in [20] that
\[x\gg\sum_{\begin{subarray}{c}|D|\leq x,\\ D\in S\end{subarray}}\operatorname{rank}(E^{(D)})\gg_{E}\sum_{\begin{subarray} {c}|D|\leq x,\\ D\in S\end{subarray}}\omega(D). \tag{9}\]
Denote \(S_{k}(x)=\#\{|D|\leq x:D\in S,\ \omega(|D|)=k\}\), and note that \(\sum_{k\leq\log x}S_{k}(x)=\#\{|D|\leq x:D\in S\}\geq(c+o(1))x\) along a suitable subsequence; it is enough to consider \(k\leq\log x\), as \(\omega(n)\leq\log n\). By the Hardy-Ramanujan theorem, all but \(o(x)\) of the integers \(|D|\leq x\) satisfy \(\omega(|D|)\geq\frac{1}{2}\log\log x\), hence

\[\sum_{\begin{subarray}{c}|D|\leq x,\\ D\in S\end{subarray}}\omega(|D|)=\sum_{k\leq\log x}kS_{k}(x)\geq\frac{c+o(1)}{2}\,x\log\log x,\]
which is a contradiction to (9).
In this regard, we also point out that if \(E(\mathbb{Q})[2]\) is trivial and \(\nu_{2}(m_{E})=r\), then \(\omega(N)\leq r\) provided that \(E\) has non-split multiplicative reduction at the primes dividing \(N\); in that case, the root number of \(E/\mathbb{Q}\) is \(-1\). If \(r=1\), then \(N\) is a prime power. In the prime case, note that Watkins's conjecture follows from Kazalicki-Kohen in [23]. On the other hand, if \(E\) has a split multiplicative reduction, \(\omega(N)\) is either \(1\) or \(2\). Again, for the case \(\omega(N)=1\), the conjecture is proved in [23] when \(\Delta_{E}>0\). This leaves us with one remaining possibility: what happens to the conjecture when \(N=pq\), \(E\) does not have split multiplicative reduction at at least one prime, and \(E(\mathbb{Q})[2]\) is trivial.
### On a weaker question
Let \(E/\mathbb{Q}\) be any elliptic curve. This section discusses a question much weaker than Watkins's conjecture.
**Question 2**.: _How often is \(m_{E}\) even?_
First of all, it follows from Lemma 1 that \(\nu_{2}(m_{E})\geq 1\) for almost all elliptic curves over \(\mathbb{Q}\) parameterized by \(A,B\in\mathbb{Z}\). We now discuss the analogous distribution over \(1\)-parameter families of elliptic curves. In the case of quadratic twists, it follows from Proposition 1 that \(\nu_{2}(m_{E^{(D)}})\gg_{E}\omega(D)\); in particular, \(m_{E^{(D)}}\) is even for almost all \(D\). Now we ask the same for any \(1\)-parameter family of elliptic curves given by
\[E_{t}:=y^{2}=x^{3}+f(t)x+g(t),\;t\in\mathbb{Z},\]
where \(f(x),g(x)\in\mathbb{Q}[x]\) are two arbitrary polynomials. Then we have the following.
**Theorem 8**.: _Let \(\{E_{t}\}\) be any family as above. Then_

\[\#\{t\in\mathbb{Z}:|t|\leq x,\;\nu_{2}(m_{E_{t}})=0\}=O\left(\frac{x}{\log x}\right),\]

_that is, \(m_{E_{t}}\) is even for almost all \(t\)._
The proof of this theorem follows from the following analogue of Lemma 1.
**Lemma 2**.: _For at most \(O\left(\frac{x}{\log x}\right)\) many \(|t|\leq x\), we have \(\omega(N_{E_{t}})=1\)._
To prove this lemma, we follow a path similar to the proof of Lemma 1. Consider the polynomial \(\Delta(x)=\Delta_{E_{x}}\in\mathbb{Q}[x]\); it is then enough to show that \(\#\{t\in\mathbb{Z}:|t|\leq x,\ \omega(\Delta(t))=1\}=O\left(\frac{x}{\log x}\right)\). For the sieving argument, we consider the sets \(\{\mathcal{P}_{\lambda}:\lambda\vdash\operatorname{deg}(\Delta)\}\), where \(\operatorname{deg}(\Delta)\) denotes the degree of the polynomial \(\Delta\), and \(\mathcal{P}_{\lambda}\) is the set of all primes \(p\) for which \(\Delta(x)\pmod{p}\) has a factorization of type \(\lambda\) in \(\mathbb{F}_{p}[x]\). Note that \(\Delta(t)\equiv 0\pmod{p}\) for some prime \(p\) if and only if \(p\in\mathcal{P}_{\lambda}\) for some partition \(\lambda\vdash\operatorname{deg}(\Delta)\) having at least one part equal to \(1\). Now for each such \(\lambda\), we sieve modulo \(\mathcal{P}_{\lambda}\).
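The classification of primes by factorization type is easy to carry out in practice. Below is a minimal computational sketch using sympy; the cubic is an arbitrary stand-in for \(\Delta(x)\), which in practice would be the discriminant polynomial of the family:

```python
from sympy import symbols, Poly, factor_list

x = symbols('x')
Delta = Poly(x**3 - x + 1, x)  # stand-in for the discriminant polynomial

def factorization_type(p):
    """Partition of deg(Delta) recording the degrees (with multiplicity)
    of the irreducible factors of Delta modulo p."""
    _, factors = factor_list(Delta.as_expr(), x, modulus=p)
    parts = []
    for f, e in factors:
        parts += [Poly(f, x).degree()] * e
    return tuple(sorted(parts, reverse=True))

for p in [5, 7, 11, 13, 17, 19]:
    lam = factorization_type(p)
    # Delta(t) can vanish mod p only if lam has a part equal to 1.
    print(p, lam, "linear factor" if 1 in lam else "")
```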
Note that we gained a significant advantage when \(N_{E}\) has at least two prime factors. Now the question is: what happens for families of elliptic curves whose conductors have only one prime factor? When \(N\) is a prime, we know that \(m_{E}\) is even when \(\operatorname{rank}(E(\mathbb{Q}))>0\). For the rank \(0\) case, we have the following result.
**Theorem 9**.: _Let \(E/\mathbb{Q}\) be any elliptic curve of prime conductor \(p\). Then \(m_{E}\#E_{\operatorname{tor}}\) is even, provided that \(p\pmod{12}\in\{1,5,7\}\). If \(p\equiv 11\pmod{12}\), then \(m_{E}\#E_{\operatorname{tor}}\) is even provided that Conjecture 2 is true._
Proof.: We recall the discussion in Section 2.2. In the case \(p\pmod{12}\in\{1,5\}\), we have \(w_{i}=1\) for all \(i\), and hence,
\[m_{E}\#E_{tor}=\langle v_{E},v_{E}\rangle=\sum_{i=1}^{n}v_{E}(e_{i})^{2}\equiv\sum_{i=1}^{n}v_{E}(e_{i})\equiv 0\pmod{2},\]
where the middle congruence holds since \(a^{2}\equiv a\pmod{2}\) for any integer \(a\), and the last one follows from [24, Proposition 2.3]. In the case \(p\equiv 7\pmod{12}\), we have \(w_{i}=3\) for exactly one \(i\), and a similar argument applies. In the case \(p\equiv 11\pmod{12}\), we have \(w_{i}=2\) for exactly one \(i\); in this case, \(j(e_{i})=1728\in\mathbb{F}_{p}\), and the proof then follows from Conjecture 2.
Note that \(m_{E}\) is not necessarily even when \(\#E_{\operatorname{tor}}\) is even, in other words, when \(E(\mathbb{Q})[2]\) is non-trivial. In that case, according to Setzer's classification, \(p=17\) or \(p\) is of the form \(u^{2}+64\). In the case \(p=17\), the four non-isomorphic elliptic curves of conductor \(17\) have LMFDB labels \(17.a_{1},17.a_{2},17.a_{3}\) and \(17.a_{4}\), and it turns out that \(m_{E}\) is even in the cases \(17.a_{1},17.a_{2}\) and \(17.a_{4}\). For primes of the form \(p=u^{2}+64\), the two non-isomorphic elliptic curves of conductor \(p\) have labels \(p.a_{1}\) and \(p.a_{2}\). In the case of \(p.a_{1}\), the discriminant \(p\) is positive and not a square. Then, by [23, Theorem 4] and arguing as in the proof of [24, Theorem 1.4], we get \(4\mid m_{E}\#E_{tor}\).
### On the function field analogue
Let \(p\) be a prime and \(k=\mathbb{F}_{q}\) a finite extension of \(\mathbb{F}_{p}\). We define \(A\) as the polynomial ring \(k[T]\) and its field of fractions as \(K=k(T)\). We further define \(K_{\infty}\) as the completion of \(K\) at \(T^{-1}\), and \(\mathbb{C}_{\infty}\) as the completion of an algebraic closure of \(K_{\infty}\).
The Drinfeld upper half plane is \(\Omega:=\mathbb{C}_{\infty}\setminus K_{\infty}\). The general linear group \(\operatorname{GL}(2,K_{\infty})\) acts on \(\Omega\) through fractional linear transformations. We will be interested in the action of the Hecke congruence subgroup associated with an ideal \(\mathfrak{n}\) of \(A\), denoted by \(\Gamma_{0}(\mathfrak{n})\) and defined as follows:
\[\Gamma_{0}(\mathfrak{n})=\left\{g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{GL}(2,A)\ \Big{|}\ c\equiv 0\pmod{\mathfrak{n}}\right\}.\]
The quotient space \(\Gamma_{0}(\mathfrak{n})\setminus\Omega\) is compactified by adding the finitely many cusps \(\Gamma_{0}(\mathfrak{n})\setminus\mathbb{P}^{1}(K)\), resulting in the Drinfeld modular curve denoted by \(X_{0}(\mathfrak{n})\).
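To make the definition concrete, here is a minimal membership test for \(\Gamma_{0}(\mathfrak{n})\), assuming for simplicity that \(q\) is prime (so sympy's \(\operatorname{GF}(q)\) applies directly); the generator of \(\mathfrak{n}\) and the sample matrix below are arbitrary illustrations:

```python
from sympy import symbols, Poly, GF

T = symbols('T')
q = 3                                  # assume q prime here
n = Poly(T**2 + 1, T, domain=GF(q))    # a sample generator of the ideal n

def in_gamma0(a, b, c, d):
    """(a b; c d) lies in Gamma_0(n) iff it lies in GL_2(F_q[T]),
    i.e. det is a nonzero constant, and c = 0 mod n."""
    det = a*d - b*c
    return (not det.is_zero) and det.degree() == 0 and c.rem(n).is_zero

a = Poly(1, T, domain=GF(q))
b = Poly(T, T, domain=GF(q))
c = Poly(T**2 + 1, T, domain=GF(q))
d = Poly(T**3 + T + 1, T, domain=GF(q))  # a*d - b*c = 1
print(in_gamma0(a, b, c, d))             # True
```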
Let \(E\) be an elliptic curve over \(K\) of conductor \(\mathfrak{n}_{E}=\mathfrak{n}\cdot\infty\) with multiplicative reduction at \(\infty\). Then we have an analogue of the modularity theorem: there exists a parametrization \(\phi_{E}:X_{0}(\mathfrak{n})\to E\) from the Drinfeld modular curve \(X_{0}(\mathfrak{n})\) to the elliptic curve \(E\) that is non-trivial and of minimal possible degree [7, Theorem 2.1]. The degree of this map is defined to be the modular degree of \(E\).
Let \(E\) be an elliptic curve with conductor \(\mathfrak{n}_{E}=\mathfrak{n}\infty\), where \(\mathfrak{n}\) is an ideal of \(A\), and let \(f_{E}\) be the primitive newform associated with \(E\); being primitive, \(f_{E}\) is determined by its eigenvalues up to sign. We denote by \(\mathcal{W}(\mathfrak{n})\) the \(2\)-elementary abelian group of all Atkin-Lehner involutions, and define \(\mathcal{W}^{\prime}=\{W\in\mathcal{W}(\mathfrak{n}):W(f_{E})=f_{E}\}\), the subgroup of involutions that preserve the newform \(f_{E}\). Additionally, let \(\kappa:=\dim_{\mathbb{F}_{2}}(\mathcal{W}(\mathfrak{n})/\mathcal{W}^{\prime})+\dim_{\mathbb{F}_{2}}E(K)[2]\), where \(E(K)[2]\) denotes the \(2\)-torsion of \(E\) over \(K\). Then, by [7, Proposition 3.2], we have the inequality \(\omega_{K}(\mathfrak{n})-\kappa\leq\nu_{2}(m_{E})\), where \(\omega_{K}(\mathfrak{n})\) denotes the number of prime divisors of \(\mathfrak{n}\) in \(K\), and \(\nu_{2}(m_{E})\) is the \(2\)-adic valuation of the modular degree \(m_{E}\) associated with \(E\). This is proved by realizing \(M(\Gamma_{0}(\mathfrak{n}),\mathbb{Q}_{\ell})\) as the dual of \(V_{\ell}(J_{0}(\mathfrak{n}))\); more precisely, this identification shows that \(\pi([W(D)])=\pi([D])\) for every divisor \(D\) of degree \(0\) on \(X_{0}(\mathfrak{n})\). The proof then goes along the same lines as [13, Proposition 2.1].
In this function field situation, we have the following rank bound due to Tate [37]:
\[\operatorname{rank}_{\mathbb{Z}}(E(K))\leq\deg(\mathfrak{n})-4, \tag{10}\]
where \(\mathfrak{n}\) is the finite part of the conductor of \(E/K\). By utilizing the lower bound on \(\nu_{2}(m_{E})\), Caro [7] establishes the validity of Watkins's conjecture after a base change. More precisely, let \(E\) be a modular semi-stable elliptic curve defined over \(K\) with conductor \(\mathfrak{n}_{E}=\mathfrak{n}\infty\), let \(k^{\prime}\) be a finite field containing the splitting field of \(\mathfrak{n}\) over \(k\), and set \(K^{\prime}=k^{\prime}(T)\). Then Watkins's conjecture is true for the base change \(E^{\prime}=E\times_{\operatorname{Spec}K}\operatorname{Spec}K^{\prime}\). Caro needed the base change to \(K^{\prime}\) to ensure that \(\omega_{K^{\prime}}(\mathfrak{n})=\deg(\mathfrak{n})\), which helps the lower bound on \(\nu_{2}(m_{E})\) beat the rank bound (10). However, we can use the same technique to deduce the following, allowing us to work with a possibly smaller \(K^{\prime}\).
**Theorem 10**.: _Let \(E/K\) be any elliptic curve of conductor \(\mathfrak{n}_{E}=\mathfrak{n}\infty\). Let \(k^{\prime\prime}\) be an extension of \(k\) such that_
\[\deg(\mathfrak{n})-\omega_{K^{\prime\prime}}(\mathfrak{n})\leq 3-\kappa,\qquad K^{\prime\prime}=k^{\prime\prime}(T).\]
_Then Watkins's conjecture is true for \(E\times_{\operatorname{Spec}(K)}\operatorname{Spec}(K^{\prime\prime})\)._
We omit the proof, but note that \(\kappa\) is always at most \(3\); in that extreme case, we get no improvement. However, \(\kappa\) can also be \(0,1\) or \(2\); for instance, \(\kappa=0\) if and only if \(E(K)[2]\) is trivial and \(E\) has split multiplicative reduction at every prime. In all these cases, there is a possibly smaller extension of \(K\) contained in \(K^{\prime}\) that satisfies the condition imposed in Theorem 10.
In the same article, Caro discusses Watkins's conjecture for certain quadratic twists, i.e., twists by polynomials \(g\) of even degree. The assumption on the degree is needed to ensure that the twisted elliptic curve is modular, as remarked by Caro [7, Section 4]. With this, Caro showed that Watkins's conjecture is true for any such even twist, provided that \(\omega_{K}(g)\geq 3\) and \(E/K\) is semi-stable. On the other hand, if \(E/K\) is semi-stable but with non-trivial \(E(K)[2]\), then, similarly to the case of \(\mathbb{Q}\), we also have the following function field analogue of the rank bound:
\[\operatorname{rank}_{\mathbb{Z}}(E(K))\leq\omega_{K}(\mathfrak{n}). \tag{11}\]
With this, Caro showed that any \(g\) of even degree works, provided that \(E(K)[2]=\mathbb{Z}/2\mathbb{Z}\) and \(E\) has non-split multiplicative reduction everywhere. By a similar argument, one can show that if, instead, \(E(K)[2]=(\mathbb{Z}/2\mathbb{Z})^{2}\), then a similar conclusion holds if \(\omega_{K}(g)\geq 2\).
## 4. On the growth of modular degree
In this section, we discuss the growth properties of \(m_{E}\) in both the rational and the function field settings. Let us first consider an elliptic curve \(E/\mathbb{Q}\) of conductor \(N\). It is conjectured that \(m_{E}=O(N^{2+\varepsilon})\). Now recall the two identities from (1). It follows from the proof of [27, Theorem 1] that \(N^{1-\varepsilon}<\|f_{E}\|^{2}<N^{1+\varepsilon}\). From [39, Lemma 2.1] we have an explicit lower bound of the form \(m_{E}>\frac{60\pi^{2}N^{1-\varepsilon}}{D_{E}^{1/\varepsilon}}\). Watkins [39] showed that \(L(\operatorname{Sym}^{2}(E),1)\gg\frac{1}{N^{(2)}}\) by showing that there is no real zero of \(L(\operatorname{Sym}^{2}(E),s)\) in a region of the form \(\operatorname{Re}(s)\geq 1-\frac{\varepsilon}{\log N^{(2)}}\), where \(N^{(2)}\) is the conductor of the associated L-function. This gives a lower bound on \(m_{E}\) of order \(\gg N^{7/6-\varepsilon}\). Here \(L(\operatorname{Sym}^{2}(E),s)\) is the L-function associated with the motive \(H^{1}(\operatorname{Sym}^{2}(E))\), whose local factors \(L_{p}(s)\) are polynomials in \(p^{-s}\) determined by \(\det(1-\operatorname{Frob}_{p}|\operatorname{Sym}^{2}(T_{\ell}(E)\otimes\mathbb{Q}_{\ell}))\) for any prime \(p\neq\ell\). The existence of the zero-free region essentially shows that \(m_{E}\gg\frac{N^{7/6}}{\log N}\prod_{p\mid N}L_{p}(1)\). The local factors of the (motivic) symmetric square \(L\)-function are easier to understand at the primes of good and multiplicative reduction; for the other primes, these factors are \(\geq 1\) whenever \(p\equiv 1\pmod{12}\) (see [39, page 5, Corollary]), and hence the lower bound follows.
One approach to an upper bound is via bounds for Manin's constant and the Faltings height of \(E\). Indeed, for integers \(A,B\), consider the _Frey_ curve \(E:y^{2}=x(x-A)(x+B)\). It turns out that Frey curves are semistable away from \(2\); in particular, \(c_{E}\) is uniformly bounded over this family. A computation of \(j\) and \(\gamma\) for \(E_{A,B}\), plugged into (1), then gives the connection with the ABC conjecture. In particular, it turns out that ABC implies the conjectural degree bound. In fact, any unconditional result towards ABC contributes an unconditional bound on the modular degree; for instance, [36] implies a bound of order \(e^{O_{\varepsilon}(N^{1/3+\varepsilon})}\).
Murty [27] considered the conjecture for elliptic curves with CM. Apart from the curves with \(j\)-invariant \(0\) or \(1728\), any elliptic curve with CM is a quadratic twist of one of a finite set \(\mathcal{E}\) of elliptic curves. Note that \(j_{E}=j_{E^{(D)}}\) for any \(E\in\mathcal{E}\), and in particular, \(h(E)-h(E^{(D)})=\frac{1}{2}\log D\). This shows that
\[(1-\varepsilon)\log N_{D}+\log D\ll_{E}\log m_{E^{(D)}}\ll_{E}(1+\varepsilon )\log N_{D}+\log D, \tag{12}\]
where \(N_{D}\) is the conductor of \(E^{(D)}\), which is \(N_{E}D^{2}\) if \((N_{E},D)=1\), and \(N_{D}=O(N_{E}D^{2})\) otherwise. This yields a lower bound of the form \(m_{E^{(D)}}\gg N_{D}^{3/2-\varepsilon}\) for elliptic curves with CM. Also, as far as the upper bound is concerned, the conjectural bound holds whenever the twisting integer \(D\) satisfies \((D,N_{E})=1\).
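To make the exponent explicit: for \((D,N_{E})=1\) one has \(N_{D}=N_{E}D^{2}\), so \(\log D=\frac{1}{2}\log N_{D}+O_{E}(1)\), and substituting this into (12) gives

\[N_{D}^{3/2-\varepsilon}\ll_{E}m_{E^{(D)}}\ll_{E}N_{D}^{3/2+\varepsilon}.\]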
We can now prove the following by adapting the arguments of the last two paragraphs.
**Theorem 11**.: _A positive proportion of elliptic curves in the family of all elliptic curves (resp. in the family of quadratic twists of a given \(E\)) satisfies the conjectural degree bound._
Proof.: For any finite set of primes \(S\), denote by \(\mathcal{E}_{S}\) the set of all elliptic curves \(E/\mathbb{Q}\) that are semistable away from \(S\). Note that \(c_{E}=O_{S}(1)\) uniformly over this family. It is known, due to Fouvry et al. [15], that \(\log(\Delta_{E})\leq(1+\varepsilon)\log N_{E}\) for almost all elliptic curves \(E/\mathbb{Q}\), when arranged by height. On the other hand, for any \(\delta>0\), the set of pairs \((A,B)\in\mathbb{Z}^{2}\) with \(\left|\frac{A^{3}}{\Delta_{E}}\right|<1+\delta\) has density at most \(\delta\). Choosing \(\delta\) suitably, we then have \(h(j_{E})=\log(\Delta_{E})\leq(1+\varepsilon)\log N_{E}\) for all elliptic curves outside a set of density \(O(\varepsilon)\). We now claim that \(\gamma=O_{S}(1)\) for any \(E\in\mathcal{E}_{S}\). Granting this, we have \(m_{E}\ll_{S}N_{E}^{7/6+\varepsilon}\) for any \(E\in\mathcal{E}_{S}\) outside the above exceptional set. It then remains to prove the claim and to show that \(\mathcal{E}_{S}\) has density at least \(1-\sum_{p\notin S}\frac{1}{p^{2}}>0\).
First, to prove the claim, note that \(E\) has additive reduction at any prime \(p\) dividing \(\gamma\). Since \(E\in\mathcal{E}_{S}\), it follows that \(p\in S\). It is enough to bound the power of \(p\) dividing \(\gamma\). For this, we chase Kodaira's classification. For instance, from Table 1 in [34] we immediately see that the power is bounded for all types other than \(I_{n},I_{n}^{*}\) (\(n\geq 1\)). Following the notation of [34, Theorem 1.6], note that \(j_{E}=\frac{16\sigma^{2}-48\varepsilon b}{\Delta}\), and the \(p\)-adic valuation of the numerator is at most \(6\), for any prime \(p\) and any \(n\geq 2\). This shows that the valuation of \(\Delta_{E}\) in \(j_{E}\) does not drop too much; in particular, this analysis shows that \(\nu_{p}(\gamma)\leq 6\) for any prime \(p\in S\).
Now, to compute the density of \(\mathcal{E}_{S}\), note again from Table 1 in [34] that the proportion of all elliptic curves having additive reduction at a prime \(p\) is \(1/p^{2}\). This shows that the density of \(\mathcal{E}_{S}\) is at least \(1-\left(\sum_{p\notin S}\frac{1}{p^{2}}\right)>\frac{1}{2}+\sum_{p\in S}\frac{1}{p^{2}}>0\), as desired.
Now let \(E/\mathbb{Q}\) be any given elliptic curve, and let \(\{E^{(D)}\}\) be the family of its quadratic twists. For any \(D\) coprime to \(N_{E}\) (a positive proportion of integers), we have \(N_{E^{(D)}}=N_{E}D^{2}\). Then it follows from (12) that
\[N_{E^{(D)}}^{3/2-\varepsilon}\ll m_{E^{(D)}}\ll N_{E^{(D)}}^{3/2+\varepsilon},\]
for any such \(D\). This completes the proof.
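As a numerical sanity check of the density bound used above: the full prime sum \(\sum_{p}1/p^{2}\) is the prime zeta value \(P(2)\approx 0.4522\), so \(1-\sum_{p\notin S}1/p^{2}>1/2\) for any choice of \(S\). A minimal sketch:

```python
from sympy import primerange

# Approximate P(2) = sum over primes of 1/p^2; the tail beyond the
# cutoff is negligible for this check.
P2 = sum(1.0 / p**2 for p in primerange(2, 10**6))
print(round(P2, 4))        # ~0.4522

S = {2, 3}                 # an arbitrary sample choice of S
outside = P2 - sum(1.0 / p**2 for p in S)
print(1 - outside)         # > 1/2: the density bound for E_S
```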
**Remark 4**.: It follows from the proof of Theorem 11 that the stronger estimate \(m_{E}\ll N_{E}^{7/6+\varepsilon}\) holds for a positive proportion of elliptic curves. Assuming Manin's conjecture, i.e., that Manin's constant is uniformly bounded, the same estimate with exponent \(7/6+\varepsilon\) holds for almost all elliptic curves. Moreover, the reader may also note from the proof of Theorem 11 that the exponent \(3/2+\varepsilon\) can be achieved for a positive proportion of elliptic curves in any family of quadratic twists.
Note that the degree bound can also be seen as a consequence of the bound on \(\|f_{E}\|^{2}\). Let us now quickly mention another consequence. Rankin [31, Theorem 1] showed that \(\sum_{1\leq n\leq x}|a_{E}(n)|^{2}=\alpha_{E}x^{2}+o(x^{2})\), where \(\alpha_{E}>0\) is some computable constant. In particular, applying the partial summation formula, it follows from [27] that \(\alpha_{E}=O\left(\frac{\phi(N)\|f_{E}\|^{2}}{N^{2}}\right)=O(\log N)\). We would also like to point out that an inexplicit version of the square-sum problem for any Fuchsian group (of the first kind) is discussed in [3].
Regarding the degree bound conjecture, one may also ask a weaker question: is every prime factor \(\ell_{E}\) of \(m_{E}\) of size \(O(N^{2+\varepsilon})\)? One may then ask about the growth of \(\ell_{E}\), varying over all elliptic curves \(E/\mathbb{Q}\). Note that any such \(\ell_{E}\) is also a congruence prime for \(E\), i.e., there exists a cusp form \(g(z)=\sum_{n>0}a_{g}(n)q^{n}\) with integer Fourier coefficients such that \(a_{E}(n)\equiv a_{g}(n)\pmod{\ell_{E}}\) for all \(n\). Suppose that \(g\) corresponds to an elliptic curve \(E^{\prime}/\mathbb{Q}\). Then the joint representation \(\rho_{E,E^{\prime}}:\operatorname{Gal}_{\mathbb{Q}}\to\operatorname{GL}_{2}(\mathbb{Z}/\ell_{E}\mathbb{Z})^{2}\) is not surjective. With this observation, we can prove the following.
**Theorem 12**.: _Let \(E/\mathbb{Q}\) be an elliptic curve and \(\ell_{E}\geq 5\) a prime dividing \(m_{E}\). Let \(\mathcal{E}(x)\) be the set of all elliptic curves over \(\mathbb{Q}\) of height at most \(x\). Then the cusp form \(g\) above corresponds to at most \(O(x^{4}\log^{2}x)\) elliptic curves in \(\mathcal{E}(x)\)._
To prove this, we follow a sieve approach similar to the proof of (10) in [22]; the difference is that we have to carry out the sieving on a copy of \(\mathbb{Z}^{2}\) inside \(\mathbb{Z}^{4}\).
Proof of Theorem 12.: For each prime \(p\), and each conjugacy class \(C\) in \(\mathrm{SL}_{2}(\mathbb{Z}/\ell_{E}\mathbb{Z})\), set
\[\Omega_{E}(C)=\{(r,s)\in(\mathbb{Z}/p\mathbb{Z})^{2}:\pi_{2}(\rho_{E,E_{r,s}}(\mathrm{Frob}_{p}))\in C\},\]
where \(\pi_{2}\) denotes the natural projection onto the second component. Regardless of what the first component \(E\) is, \(\Omega_{E}(C)\) is in bijection with \(\{(r,s)\in(\mathbb{Z}/p\mathbb{Z})^{2}:\pi_{2}(\rho_{E_{r,s}}(\mathrm{Frob}_{p}))\in C\}\), provided that \(p\equiv 1\pmod{\ell_{E}}\). In that case, we have from [21, Theorem 8] that \(\#\Omega_{E}(C)=p^{2}\left(\frac{\#C}{\#\mathrm{SL}_{2}(\mathbb{Z}/\ell_{E}\mathbb{Z})}+O\left(\frac{\#C}{p^{1/2}}\right)\right)\). This shows that
\[P_{C}(x)=\sum_{p\leq x}\frac{\#\Omega_{E}(C)}{p^{2}}=\frac{\#C}{\#\mathrm{SL}_{2}(\mathbb{Z}/\ell_{E}\mathbb{Z})}\pi(x,\ell_{E},1)+O(\#Cx^{1/2}),\]
where \(\pi(x,\ell_{E},1)\) denotes \(\#\{p\leq x:p\equiv 1\pmod{\ell_{E}}\}\). Let us now denote by \(\mathcal{E}_{E}(x)\) the set of all elliptic curves \(E^{\prime}\) in \(\mathcal{E}(x)\) that correspond to \(g\). Since \(\mathrm{im}(\rho_{E,E^{\prime}})\neq\Delta(\ell_{E})\) for any such \(E^{\prime}\), by [22, Lemma 3.1] there exist conjugacy classes \(C_{1},C_{2}\) in \(\mathrm{SL}_{2}(\mathbb{Z}/\ell_{E}\mathbb{Z})\) such that \(\mathrm{im}(\rho_{E,E^{\prime}})\cap C_{1}\times C_{2}=\emptyset\). By the large sieve, arguing as in [22, page 3389], we get for any such conjugacy class \(C\) that
\[\delta_{C}^{2}\,\pi(x,\ell_{E},1)^{2}\,\#\mathcal{E}_{E}(x)\leq\sum_{E^{\prime}\in\mathcal{E}(x)}(\pi_{E,E^{\prime}}(x,C)-\delta_{C}\pi(x,\ell_{E},1))^{2}\ll\#\mathcal{E}(x)P_{C}(x),\]
where \(\delta_{C}=\frac{\#C}{\#\mathrm{SL}_{2}(\mathbb{Z}/\ell_{E}\mathbb{Z})}\) and \(\pi_{E,E^{\prime}}(x,C)=\#\{p\leq x:\pi_{2}(\rho_{E,E^{\prime}}(\mathrm{Frob}_{p}))\in C\}\). In particular, \(\#\mathcal{E}_{E}(x)\ll_{\ell_{E}}\sum_{C_{1}\times C_{2}}\frac{\#\mathcal{E}(x)P_{C}(x)}{\pi(x,\ell_{E},1)^{2}}\ll_{\ell_{E}}x^{4}(\log x)^{2}\), as desired.
**Remark 5**.: Of course, \(g\) is not necessarily an eigenform. In general, we can write \(g\) as a linear combination of a family of Hecke eigenforms \(\{f_{i}\}_{i=1}^{s(N)}\), where \(s(N)=\dim(S_{2}(\Gamma_{0}(N)))\). We can then consider the family of representations \(\{\rho_{E,E_{1},E_{2},\cdots,E_{s(N)}}\}\) with \(E_{i}\in\mathcal{E}(x)\) for \(1\leq i\leq s(N)\), and draw a conclusion similar to Theorem 12 by applying the multidimensional analogue of the large sieve.
### Function field analog of growth
Following the same setup as in Section 3.7, we denote \(|\mathfrak{n}|_{\infty}=q^{\deg(\mathfrak{n})}\), and let \(\deg_{\mathrm{ns}}(j_{E})\) denote the nonseparable degree of \(\mathbb{F}_{q}(T)/\mathbb{F}_{q}(j_{E})\). Given any newform \(f_{E}\), it turns out that \(L(\operatorname{Sym}^{2}f_{E},2)=q\,\frac{\|f_{E}\|^{2}}{|\mathfrak{n}|_{\infty}}\). Moreover, \(m_{E}=\frac{\|f_{E}\|^{2}}{\operatorname{val}_{\infty}(j_{E})}\), where \(\operatorname{val}_{\infty}(j_{E})\) denotes the number of irreducible components of the singular fiber at \(\infty\) (note that we always assume that \(E\) has bad reduction at \(\infty\)). Of course \(1\leq\operatorname{val}_{\infty}(j_{E})\leq\deg(\Delta_{E})\), and hence \(\frac{\|f_{E}\|^{2}}{\deg(\Delta_{E})}\leq m_{E}\leq\|f_{E}\|^{2}\). Moreover, it follows from [30] that \(\deg(\Delta_{E})\leq\deg_{\mathrm{ns}}(j_{E})\deg(\mathfrak{n}_{E})\). Papikian [28] provided both analytic and algebraic ways to bound the norm of \(f_{E}\). In the analytic approach, the bound follows as a consequence of the Ramanujan conjecture for the relevant holomorphic \(L\)-functions, together with Rademacher's version of the Phragmen-Lindelof theorem for the estimation of \(L\)-functions on a strip. As for the algebraic approach, Grothendieck's theory of \(L\)-functions implies that \(L(s,\operatorname{Sym}^{2}(E))\) can be written as a quotient of two polynomials in \(q^{-s}\), whose degree is bounded by \(2\deg(\mathfrak{n}_{E})-4\). This is a consequence of the Grothendieck-Ogg-Shafarevich formula, realizing \(\operatorname{Sym}^{2}\) as a constructible sheaf over \(\mathbb{P}^{1}_{\mathbb{F}_{q}}\). To get the degree bound, one assumes that \(E\) is semi-stable, which guarantees that all the wild parts
\(\delta_{\mathfrak{p}}\), \(\mathfrak{p}\in\mathbb{P}_{\mathbb{F}_{q}}^{1}\), are \(0\). However, a weaker bound of the form \(\ll\deg(\mathfrak{n}_{E})\) on the degree of \(L(s,\operatorname{Sym}^{2}(E))\) suffices for us. For this, we may impose on \(E\) a condition weaker than semistability, namely that the sum of all the wild parts, \(\sum_{\mathfrak{p}\in\mathbb{P}_{\mathbb{F}_{q}}^{1}}\delta_{\mathfrak{p}}\), is \(O(\deg(\mathfrak{n}_{E}))\). A lower bound is also achieved using standard analytic tools. To summarize the discussion, we can now state the bounds [28, Corollary 6.2] in a nice geometric way:
\[\log\left(\frac{|\mathfrak{n}_{E}|_{\infty}}{\deg_{\mathrm{ns}}(j_{E})}\right)\ll\log(m_{E})\ll\log(|\mathfrak{n}_{E}|_{\infty}).\]
A question that naturally comes to mind is whether it is possible to remove the extra dependency on \(E\) that appears as the factor \(\deg_{\mathrm{ns}}(j_{E})\) in the lower bound. As pointed out by Papikian [28, page 347], there are families \(\{E_{n}\}_{n\geq 1}\) along which the discriminant increases while the conductor remains the same as \(n\) grows; hence, the extra factor cannot be removed for all elliptic curves. Following the spirit of the previous sections, we consider the family of all elliptic curves, and families of quadratic twists, to study the proportion of elliptic curves for which it is possible to remove the factor. First, given any elliptic curve \(E:y^{2}=x^{3}+f(T)x+g(T)\) with \(f(T),g(T)\in\mathbb{F}_{q}[T]\), we define the height \(H(E)=\max\{3\deg(f),2\deg(g)\}\). This is the function field analogue of the height in the \(\mathbb{Q}\) case. Denote by \(\mathcal{E}_{\mathbb{F}}(d)\) the set of elliptic curves of height at most \(d\), whose cardinality is precisely \(q^{5d/6}\). Now note that
\[\deg_{\mathrm{ns}}(j_{E})\leq[\mathbb{F}_{q}(T):\mathbb{F}_{q}(j_{E})]=\max\{3\deg(f),2\deg(g)\}.\]
This shows that for any integer \(D\leq d\), we have the bound \(m_{E}\gg\frac{\deg(\mathfrak{n}_{E})}{D}\) for at least \(q^{5D/6}\) many \(E\) in \(\mathcal{E}_{\mathbb{F}}(d)\). On the other hand, as long as we consider families of quadratic twists, \(j_{E}\) remains invariant; therefore, in any such family, the factor \(\deg_{\mathrm{ns}}(j_{E})\) remains unchanged.
## Acknowledgements
The work for this project took place at IISER, TVM. We would like to thank the institute for providing excellent working conditions.
|
2301.13203 | The moment map for the variety of Leibniz algebras | We consider the moment map $m:\mathbb{P}V_n\rightarrow
\text{i}\mathfrak{u}(n)$ for the action of $\text{GL}(n)$ on
$V_n=\otimes^{2}(\mathbb{C}^{n})^{*}\otimes\mathbb{C}^{n}$, and study the
functional $F_n=\|m\|^{2}$ restricted to the projectivizations of the algebraic
varieties of all $n$-dimensional Leibniz algebras $L_n$ and all $n$-dimensional
symmetric Leibniz algebras $S_n$, respectively. Firstly, we give a description
of the maxima and minima of the functional $F_n: L_n \rightarrow \mathbb{R}$,
proving that they are actually attained at the symmetric Leibniz algebras.
Then, for an arbitrary critical point $[\mu]$ of $F_n: S_n \rightarrow
\mathbb{R}$, we characterize the structure of $[\mu]$ by virtue of the
nonnegative rationality. Finally, we classify the critical points of $F_n: S_n
\rightarrow \mathbb{R}$ for $n=2$, $3$, respectively. | Zhiqi Chen, Saiyu Wang, Hui Zhang | 2023-01-29T02:23:21Z | http://arxiv.org/abs/2301.13203v3 | # The moment map for the variety of Leibniz algebras
###### Abstract.
We consider the moment map \(m:\mathbb{P}V_{n}\to\operatorname{\mathrm{i}u}(n)\) for the action of \(\operatorname{GL}(n)\) on \(V_{n}=\otimes^{2}(\mathbb{C}^{n})^{*}\otimes\mathbb{C}^{n}\), and study the functional \(F_{n}=\|m\|^{2}\) restricted to the projectivizations of the algebraic varieties of all \(n\)-dimensional Leibniz algebras \(L_{n}\) and all \(n\)-dimensional symmetric Leibniz algebras \(S_{n}\), respectively. Firstly, we give a description of the maxima and minima of the functional \(F_{n}:L_{n}\to\mathbb{R}\), proving that they are actually attained at the symmetric Leibniz algebras. Then, for an arbitrary critical point \([\mu]\) of \(F_{n}:S_{n}\to\mathbb{R}\), we characterize the structure of \([\mu]\) by virtue of the nonnegative rationality. Finally, we classify the critical points of \(F_{n}:S_{n}\to\mathbb{R}\) for \(n=2\), \(3\), respectively.
Key words and phrases: Moment map; Variety of Leibniz algebras; Critical point.
2010 Mathematics Subject Classification: 14L30, 17B30, 53D20.
## 1. Introduction
In [16], Lauret studied the moment map for the variety of Lie algebras and obtained many remarkable results, for example a stratification of the variety of Lie algebras and a description of the critical points, which turned out to be very useful in proving that every Einstein solvmanifold is standard ([18]) and in the characterization of solitons ([4, 19]). It is thus natural and interesting to ask whether Lauret's results can be generalized, in some way, to varieties of algebras beyond Lie algebras.
Motivated by this idea, the study has recently been extended to the variety of \(3\)-Lie algebras in [31]. Here, a \(3\)-Lie algebra is a natural generalization of the concept of a Lie algebra to the case where the fundamental multiplication operation is \(3\)-ary.
In this article, we study the moment map for the variety of _Leibniz algebras_, which are nonanticommutative versions of Lie algebras. A Leibniz algebra is a vector space with a multiplication such that every left multiplication operator is a derivation, which was at first introduced by Bloh ([3]) and later independently rediscovered by Loday in the study of cohomology theory (see [22, 23]). Leibniz algebras play an important role in different areas of mathematics and physics [6, 10, 14, 20, 21, 27, 28, 29], and we refer to [8] for a nice survey of Leibniz algebras.
In the frame of Leibniz algebras, the moment map is defined as follows. Let \(\operatorname{GL}(n)\) be the complex reductive Lie group acting naturally on the complex vector space \(V_{n}=\otimes^{2}(\mathbb{C}^{n})^{*}\otimes\mathbb{C}^{n}\), i.e., the space of all \(n\)-dimensional complex algebras. The usual Hermitian inner product on \(\mathbb{C}^{n}\) induces a \(\operatorname{U}(n)\)-invariant Hermitian inner product on \(V_{n}\), which is denoted by \(\langle\cdot,\cdot\rangle\). Since \(\mathfrak{gl}(n)=\mathfrak{u}(n)+\mathrm{i}\mathfrak{u}(n)\), we may define the moment map \(m:\mathbb{P}V_{n}\to\mathrm{i}\mathfrak{u}(n)\) corresponding to the Hamiltonian action of \(\operatorname{U}(n)\) on the symplectic manifold \(\mathbb{P}V_{n}\); see Section 3 for the precise definition.
**Definition 2.1** ([8, 22]).: A vector space \(\mathfrak{l}\) over \(\mathbb{C}\) with a bilinear map \(\mathfrak{l}\times\mathfrak{l}\to\mathfrak{l}\), denoted by \((x,y)\mapsto xy\), is called a _Leibniz algebra_ if every left multiplication is a derivation, i.e.,
\[x(yz)=(xy)z+y(xz) \tag{2.1}\]
for all \(x,y,z\in\mathfrak{l}\).
**Remark 2.2**.: Leibniz algebras are sometimes called _left_ Leibniz algebras in the literature, and there is a corresponding notion of _right_ Leibniz algebra, i.e., an algebra with the property that every right multiplication is a derivation. In some studies, the authors prefer to call a right Leibniz algebra a Leibniz algebra. We point out that for our purpose, it actually does not matter which notion is used since the opposite algebra of a left Leibniz algebra is a right Leibniz algebra and vice versa.
Following Mason and Yamskulna [24], we introduce the notion of the symmetric Leibniz algebra as follows.
**Definition 2.3** ([24]).: An algebra \(\mathfrak{l}\) is called a _symmetric Leibniz algebra_ if it is at the same time a left and a right Leibniz algebra, that is,
\[x(yz) =(xy)z+y(xz), \tag{2.2}\] \[(xy)z =(xz)y+x(yz), \tag{2.3}\]
for all \(x,y,z\in\mathfrak{l}\).
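As a concrete example (anticipating Theorem 4.9 below), the two-dimensional algebra with the single nonzero product \(e_{1}e_{1}=e_{2}\) is a symmetric Leibniz algebra which is not a Lie algebra. A minimal numerical sketch verifying (2.2) and (2.3) on the structure constants:

```python
import itertools
import numpy as np

n = 2
C = np.zeros((n, n, n))  # C[i, j, k] = coefficient of e_k in e_i e_j
C[0, 0, 1] = 1.0         # the single nonzero product: e_1 e_1 = e_2

def mul(x, y):
    return np.einsum('i,j,ijk->k', x, y, C)

e = np.eye(n)
ok = True
for x, y, z in itertools.product(e, repeat=3):
    left = np.allclose(mul(x, mul(y, z)), mul(mul(x, y), z) + mul(y, mul(x, z)))
    right = np.allclose(mul(mul(x, y), z), mul(mul(x, z), y) + mul(x, mul(y, z)))
    ok = ok and left and right
print(ok)  # True: (2.2) and (2.3) hold on basis vectors, hence everywhere
```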
Every Lie algebra is clearly a symmetric Leibniz algebra, and the converse is not true. In the following, we make the convention that an ideal of a Leibniz algebra always means a two-sided ideal.
**Definition 2.4**.: Let \(\mathfrak{l}\) be a Leibniz algebra. \(\mathfrak{l}\) is called solvable if \(\mathfrak{l}^{(r)}=0\) for some \(r\in\mathbb{N}\), where \(\mathfrak{l}^{(0)}=\mathfrak{l}\), \(\mathfrak{l}^{(k+1)}=\mathfrak{l}^{(k)}\mathfrak{l}^{(k)}\), \(k\geq 0\).
If \(I,J\) are any two solvable ideals of \(\mathfrak{l}\), then \(I+J\) is also a solvable ideal of \(\mathfrak{l}\), so the maximal solvable ideal is unique; it is called the _radical_ of \(\mathfrak{l}\) and denoted by \(\operatorname{Rad}(\mathfrak{l})\) ([8]).
**Theorem 2.5** ([2]).: _A Leibniz algebra \(\mathfrak{l}\) over a field of characteristic \(0\) admits a Levi decomposition, i.e., \(\mathfrak{l}=\mathcal{S}+\operatorname{Rad}(\mathfrak{l})\) decomposes into the sum of a semisimple Lie subalgebra \(\mathcal{S}\) and the radical, satisfying \(\mathcal{S}\cap\operatorname{Rad}(\mathfrak{l})=0\)._
**Definition 2.6**.: A Leibniz algebra \(\mathfrak{l}\) is called _nilpotent_ if there exists a positive integer \(n\) such that any product of \(n\) elements in \(\mathfrak{l}\), no matter how associated, is zero.
For a Leibniz algebra \(\mathfrak{l}\), we define \({}^{1}\mathfrak{l}:=\mathfrak{l}\), \({}^{k+1}\mathfrak{l}:=\mathfrak{l}({}^{k}\mathfrak{l})\), \(k\geq 1\). Furthermore, we define
\[\mathfrak{l}_{1}:=\mathfrak{l},\quad\mathfrak{l}_{k}=\sum_{i=1}^{k-1}\mathfrak{l}_{i}\mathfrak{l}_{k-i},\ k\geq 2.\]
Then we have the following theorem.
**Theorem 2.7** ([8]).: _For any integer \(k\geq 1\), we have \({}^{k}\mathfrak{l}=\mathfrak{l}_{k}\). Moreover, \(\mathfrak{l}\) is nilpotent if and only if there exists a positive integer \(n\) such that \(\mathfrak{l}_{n}=0\)._
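For instance, for the two-dimensional symmetric Leibniz algebra \(\mathfrak{l}\) with the single nonzero product \(e_{1}e_{1}=e_{2}\) considered above, we have \({}^{2}\mathfrak{l}=\mathfrak{l}_{2}=\mathbb{C}e_{2}\) and \({}^{3}\mathfrak{l}=\mathfrak{l}_{3}=\mathfrak{l}_{1}\mathfrak{l}_{2}+\mathfrak{l}_{2}\mathfrak{l}_{1}=0\), so \(\mathfrak{l}\) is nilpotent.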
If \(I,J\) are two nilpotent ideals of a Leibniz algebra \(\mathfrak{l}\), then \(I+J\) is also a nilpotent ideal of \(\mathfrak{l}\); consequently the maximal nilpotent ideal is unique. It is called the _nilradical_ and denoted by \(\operatorname{N}(\mathfrak{l})\) ([8, 30]).
**Proposition 2.8** ([30]).: _Let \(\mathfrak{l}\) be a Leibniz algebra over a field of characteristic zero. Then \(\operatorname{Rad}(\mathfrak{l})\mathfrak{l},\ \mathfrak{l}\operatorname{Rad}(\mathfrak{l})\subset\operatorname{N}(\mathfrak{l})\)._
## 3. The moment map for complex algebras
In this section, we first recall Lauret's idea: _varying brackets instead of metrics_, for the study of metric algebras, then we introduce the moment map for complex algebras.
Let \(\mathbb{C}^{n}\) be the \(n\)-dimensional complex vector space. A metric algebra is a triple \((\mathbb{C}^{n},\mu,\langle\cdot,\cdot\rangle)\), where \(\mu:\mathbb{C}^{n}\times\mathbb{C}^{n}\to\mathbb{C}^{n}\) is a bilinear map and \(\langle\cdot,\cdot\rangle\) is a Hermitian inner product on \(\mathbb{C}^{n}\). The triple \((\mathbb{C}^{n},\mu,\langle\cdot,\cdot\rangle)\) will be abbreviated as \((\mu,\langle\cdot,\cdot\rangle)\) in this article.
**Definition 3.1**.: Let \((\mu_{1},\langle\cdot,\cdot\rangle_{1})\) and \((\mu_{2},\langle\cdot,\cdot\rangle_{2})\) be two metric algebras.
1. They are said to be isomorphic if there exists a linear isomorphism \(\varphi:\mathbb{C}^{n}\to\mathbb{C}^{n}\) such that \(\varphi(\mu_{1}(\cdot,\cdot))=\mu_{2}(\varphi(\cdot),\varphi(\cdot))\); in this case, \(\varphi\) is called an algebra isomorphism.
2. They are said to be isometric if there exists an algebra isomorphism \(\varphi\) such that \(\langle\cdot,\cdot\rangle_{1}=\langle\varphi(\cdot),\varphi(\cdot)\rangle_{2}\).
3. They are said to be isometric up to scaling if there exists an algebra isomorphism \(\varphi\) and \(c>0\) such that \(\langle\cdot,\cdot\rangle_{1}=c\langle\varphi(\cdot),\varphi(\cdot)\rangle_{2}\).
**Remark 3.2**.: Definition 3.1 is an analogue of the notions in [11, 25], where (real) metric Lie algebras and their relations with Riemannian geometry, such as sectional curvatures, left-invariant Einstein metrics and Ricci solitons, are studied.
Let \(V_{n}=\otimes^{2}(\mathbb{C}^{n})^{*}\otimes\mathbb{C}^{n}\) be the space of all bilinear maps, and
\[\mathfrak{M}_{n}=\{\langle\cdot,\cdot\rangle:\langle\cdot,\cdot\rangle\ \text{is a Hermitian inner product on $\mathbb{C}^{n}$}\}\]
be the moduli space of all Hermitian inner products on \(\mathbb{C}^{n}\). Consider the natural action of \({\rm GL}(n)={\rm GL}(\mathbb{C}^{n})\) on \(V_{n}\), i.e.,
\[g.\mu(X,Y)=g\mu(g^{-1}X,g^{-1}Y),\quad g\in{\rm GL}(n),X,Y\in\mathbb{C}^{n}. \tag{3.1}\]
Then, by Definition 3.1, we know that \(\mathrm{GL}(n).\mu\) is precisely the isomorphism class of \(\mu\). Moreover, differentiating (3.1), we obtain the natural action of \(\mathfrak{gl}(n)\) on \(V_{n}\):
\[A.\mu(X,Y)=A\mu(X,Y)-\mu(AX,Y)-\mu(X,AY),\quad A\in\mathfrak{gl}(n),\mu\in V_{n}. \tag{3.2}\]
It follows that \(A.\mu=0\) if and only if \(A\in\mathrm{Der}(\mu)\), the derivation algebra of \(\mu\). On the other hand, one knows that the linear group \(\mathrm{GL}(n)\) also acts on \(\mathfrak{M}_{n}\), i.e.,
\[g.\langle\cdot,\cdot\rangle=\langle g^{-1}(\cdot),g^{-1}(\cdot)\rangle,\quad g \in\mathrm{GL}(n),\]
and this action is obviously transitive.
**Lemma 3.3**.: _Two metric algebras \((\mu,\langle\cdot,\cdot\rangle_{1})\) and \((\lambda,\langle\cdot,\cdot\rangle_{2})\) are isometric up to scaling if and only if there exist \(g\in\mathrm{GL}(n)\) and \(c\neq 0\) such that \(\lambda=g.\mu\) and \(\langle\cdot,\cdot\rangle_{2}=(cg).\langle\cdot,\cdot\rangle_{1}.\) In particular, \(((cg)^{-1}.\mu,\langle\cdot,\cdot\rangle)\) and \((\mu,g.\langle\cdot,\cdot\rangle)\) are isometric up to scaling._
Fix a Hermitian inner product \(\langle\cdot,\cdot\rangle\) on \(\mathbb{C}^{n}\), then by Lemma 3.3 we have
\[\bigcup_{g\in\mathrm{GL}(n)}(g.\mu,\langle\cdot,\cdot\rangle)=\bigcup_{g\in \mathrm{GL}(n)}(\mu,g^{-1}.\langle\cdot,\cdot\rangle)\]
in the sense of isometry (Definition 3.1). This is precisely the idea: _varying brackets instead of metrics_, for the study of metric algebras. By this idea, Lauret introduced the moment map for Lie algebras, which has motivated much of the recent study of homogeneous Riemannian geometry [4, 5, 15, 18, 19].
Now, we introduce the moment map for complex algebras. Fix a Hermitian inner product \(\langle\cdot,\cdot\rangle\) on \(\mathbb{C}^{n}\), then it makes each \(\mu\in V_{n}\) an metric algebra, and \(\mathrm{U}(n).\mu\) is precisely the isometry class of \(\mu\) (see Definition 3.1). Moreover, \(\langle\cdot,\cdot\rangle\) induces a natural \(\mathrm{U}(n)\)-invariant Hermitian inner product on \(V_{n}\) as follows
\[\langle\mu,\lambda\rangle:=\sum_{i,j,k}\langle\mu(X_{i},X_{j}),X_{k}\rangle \overline{\langle\lambda(X_{i},X_{j}),X_{k}\rangle},\quad\mu,\lambda\in V_{n}, \tag{3.3}\]
where \(\{X_{1},X_{2},\cdots,X_{n}\}\) is an arbitrary orthonormal basis of \((\mathbb{C}^{n},\langle\cdot,\cdot\rangle)\). Note that there is an \(\mathrm{Ad}(\mathrm{U}(n))\)-invariant Hermitian inner product on \(\mathfrak{gl}(n)\), i.e.,
\[(A,B)=\mathrm{tr}\,AB^{*},\;A,B\in\mathfrak{gl}(n). \tag{3.4}\]
where \({}^{*}\) denotes the conjugate transpose relative to \((\mathbb{C}^{n},\langle\cdot,\cdot\rangle)\). The moment map, corresponding to the Hamiltonian action of \(\mathrm{U}(n)\) on the symplectic manifold \(\mathbb{P}V_{n}\), is defined by
\[m:\mathbb{P}V_{n}\to\mathrm{i}\mathrm{u}(n),\quad(m([\mu]),A)=\frac{(\mathrm{ d}\rho_{\mu})_{e}A}{\|\mu\|^{2}},\quad 0\neq\mu\in V_{n},A\in\mathrm{i} \mathrm{u}(n), \tag{3.5}\]
where \(\rho_{\mu}(g)=\langle g.\mu,g.\mu\rangle\), \(g\in\mathrm{GL}(n)\). Denote by \(F_{n}:\mathbb{P}V_{n}\to\mathbb{R},F_{n}([\mu])=\|m([\mu])\|^{2}=(m([\mu]),m([ \mu]))\), the square norm of the moment map. Then it is easy to see that the moment map is \(\mathrm{U}(n)\)-invariant, i.e., \(m(k.[\mu])=\mathrm{Ad}(k)m([\mu])\), \(\forall k\in\mathrm{U}(n)\). In particular, \(F_{n}(k.[\mu])=F_{n}([\mu])\), \(\forall k\in\mathrm{U}(n)\).
For each algebra \(\mu\in V_{n}\), we define \(\mathrm{M}_{\mu}\in\mathrm{i}\mathrm{u}(n)\) as follows
\[\mathrm{M}_{\mu}=2\sum_{i}L^{\mu}_{X_{i}}(L^{\mu}_{X_{i}})^{*}-2\sum_{i}(L^{\mu }_{X_{i}})^{*}L^{\mu}_{X_{i}}-2\sum_{i}(R^{\mu}_{X_{i}})^{*}R^{\mu}_{X_{i}}, \tag{3.6}\]
where \(L^{\mu}_{X},R^{\mu}_{X}:\mathbb{C}^{n}\to\mathbb{C}^{n}\) are given by \(L^{\mu}_{X}(Y)=\mu(X,Y)\) and \(R^{\mu}_{X}(Y)=\mu(Y,X)\), \(\forall Y\in\mathbb{C}^{n}\), and \({}^{*}\) denotes the conjugate transpose relative to \((\mathbb{C}^{n},\langle\cdot,\cdot\rangle)\). One immediately sees that \(\mathrm{M}_{k,\mu}=\mathrm{Ad}(k)\mathrm{M}_{\mu}\) for any \(k\in\mathrm{U}(n)\), and \(\mathrm{M}_{c\mu}=|c|^{2}\mathrm{M}_{\mu}\) for any \(0\neq c\in\mathbb{C}\). Moreover
\[\langle\mathrm{M}_{\mu}X,Y\rangle= 2\sum_{i,j}\overline{\langle\mu(X_{i},X_{j}),X\rangle}\langle \mu(X_{i},X_{j}),Y\rangle-2\sum_{i,j}\langle\mu(X_{i},X),X_{j}\rangle\overline {\langle\mu(X_{i},Y),X_{j}\rangle}\] \[-2\sum_{i,j}\langle\mu(X,X_{i}),X_{j}\rangle\overline{\langle\mu( Y,X_{i}),X_{j}\rangle} \tag{3.7}\]
for any \(X,Y\in\mathbb{C}^{n}\). Note that if the algebra \(\mu\) is anticommutative, then \(\mathrm{M}_{\mu}\) coincides with the corresponding map in [16].
The next lemma establishes the relation between \(m([\mu])\) and \(\mathrm{M}_{\mu}\), which follows from a straightforward calculation (see also [32]).
**Lemma 3.4**.: _For any \(0\neq\mu\in V_{n}\), we have \(m([\mu])=\frac{\mathrm{M}_{\mu}}{||\mu||^{2}}\). In particular, \((\mathrm{M}_{\mu},A)=2\langle A.\mu,\mu\rangle\) for any \(A\in\mathrm{i}\mathrm{u}(n)\)._
**Corollary 3.5**.: _For any \(\mu\in V_{n}\), then_
* \(\mathrm{tr}\,\mathrm{M}_{\mu}D=0\) _for any_ \(D\in\mathrm{Der}(\mu)\cap\mathrm{i}\mathrm{u}(n)\)_;_
* \(\mathrm{tr}\,\mathrm{M}_{\mu}[A,A^{*}]\geq 0\) _for any_ \(A\in\mathrm{Der}(\mu)\)_, and equality holds if and only if_ \(A^{*}\in\mathrm{Der}(\mu)\)_._
Proof.: For (i), it follows from Lemma 3.4 and the fact that \(D\) is a Hermitian derivation of \(\mu\). For (ii), it follows from the identity \(\mathrm{tr}\,\mathrm{M}_{\mu}[A,A^{*}]=2\langle A^{*}.\mu,A^{*}.\mu\rangle\geq 0\) for any \(A\in\mathrm{Der}(\mu)\), together with the fact that \(A^{*}.\mu=0\) if and only if \(A^{*}\in\mathrm{Der}(\mu)\).
**Theorem 3.6** ([32]).: _For the square norm of the moment map \(F_{n}=\|m\|^{2}:\mathbb{P}V_{n}\to\mathbb{R}\), the following statements are equivalent:_
* \([\mu]\in\mathbb{P}V_{n}\) _is a critical point of_ \(F_{n}\)_._
* \([\mu]\in\mathbb{P}V_{n}\) _is a critical point of_ \(F_{n|\mathrm{GL}(n),[\mu]}\)_._
* \(\mathrm{M}_{\mu}=c_{\mu}I+D_{\mu}\) _for some_ \(c_{\mu}\in\mathbb{R}\) _and_ \(D_{\mu}\in\mathrm{Der}(\mu)\)_._
_If one of the statements holds, then_
* \(c_{\mu}=\frac{\mathrm{tr}\,\mathrm{M}_{\mu}^{2}}{\mathrm{tr}\,\mathrm{M}_{\mu }}=-\frac{1}{2}\frac{\mathrm{tr}\,\mathrm{M}_{\mu}^{2}}{||\mu||^{2}}<0\)_._
* _If_ \(\mathrm{tr}\,D_{\mu}\neq 0\)_, then_ \(c_{\mu}=-\frac{\mathrm{tr}\,D_{\mu}^{2}}{\mathrm{tr}\,D_{\mu}}\) _and_ \(\mathrm{tr}\,D_{\mu}>0\)
**Remark 3.7**.: By Lemma 3.4, we know that \(\operatorname{tr}\mathrm{M}_{\mu}=(\mathrm{M}_{\mu},I)=2\langle I.\mu,\mu\rangle=-2\langle\mu,\mu\rangle=-2\|\mu\|^{2}\) for all \(0\neq\mu\in V_{n}\). On the other hand,
\[\|m[\mu]-\frac{\operatorname{tr}m[\mu]}{n}I\|^{2}=\|m[\mu]\|^{2}-2\cdot\frac{ \operatorname{tr}m[\mu]}{n}\cdot\operatorname{tr}m[\mu]+\left(\frac{ \operatorname{tr}m[\mu]}{n}\right)^{2}\cdot n.\]
Since \(\operatorname{tr}m[\mu]=\operatorname{tr}\mathrm{M}_{\mu}/\|\mu\|^{2}=-2\), it follows that
\[\|m[\mu]-\frac{\operatorname{tr}m[\mu]}{n}I\|^{2}=F_{n}([\mu])-\frac{4}{n}.\]
That is, \(F_{n}([\mu])\) measures in some sense how far \(m([\mu])\) is from a multiple of the identity. If we interpret \(\mathrm{M}_{\mu}\) as some 'curvature' of the metric algebra \((\mu,\langle\cdot,\cdot\rangle)\), which is of course invariant under isometry (Definition 3.1), then finding a critical point of \(F_{n}\) can be thought of as finding the 'best' Hermitian inner product (relative to this curvature) within an isomorphism class of an algebra.
Moreover, note that for any \(\mu\in V_{n}\), \(0\) lies in the boundary of \(\operatorname{GL}(n).\mu\), so a result due to Ness can be stated as follows
**Theorem 3.8** ([26]).: _If \([\mu]\) is a critical point of the functional \(F_{n}:\mathbb{P}V_{n}\mapsto\mathbb{R}\) then_
1. \(F_{n}|_{\operatorname{GL}(n).[\mu]}\) _attains its minimum value at_ \([\mu]\)_._
2. \([\lambda]\in\operatorname{GL}(n).[\mu]\) _is a critical point of_ \(F_{n}\) _if and only if_ \([\lambda]\in\operatorname{U}(n).[\mu]\)_._
The Ness theorem says that if \([\mu]\in\mathbb{P}V_{n}\) is a critical point, then \(\operatorname{U}(n).[\mu]\) is precisely the set of critical points that are contained in \(\operatorname{GL}(n).[\mu].\) Those \(\operatorname{GL}(n)\)-orbits, which contain a critical point, are called distinguished orbits in the literature.
## 4. The critical points of the variety of Leibniz algebras
The spaces \(\mathscr{L}_{n}\), \(\mathscr{S}_{n}\) of all \(n\)-dimensional Leibniz algebras and symmetric Leibniz algebras are algebraic sets since they are given by polynomial conditions. Denote by \(L_{n}\) and \(S_{n}\) the projective algebraic varieties obtained by projectivization of \(\mathscr{L}_{n}\) and \(\mathscr{S}_{n}\), respectively. Then by Theorem 3.6, we know that the critical points of \(F_{n}:L_{n}\to\mathbb{R}\), and \(F_{n}:S_{n}\to\mathbb{R}\) are precisely the critical points of \(F_{n}:\mathbb{P}V_{n}\to\mathbb{R}\) which lie in \(L_{n}\) and \(S_{n}\), respectively.
### The rationality and nonnegative property
The following rationality and nonnegativity properties are generalizations of results in [16] from Lie algebras to Leibniz algebras and symmetric Leibniz algebras, respectively.
**Theorem 4.1**.: _Let \([\mu]\in\mathbb{P}V_{n}\) be a critical point of \(F_{n}:\mathbb{P}V_{n}\to\mathbb{R}\) with \(\operatorname{M}_{\mu}=c_{\mu}I+D_{\mu}\) for some \(c_{\mu}\in\mathbb{R}\) and \(D_{\mu}\in\operatorname{Der}(\mu)\). Then there exists a constant \(c>0\) such that the eigenvalues of \(cD_{\mu}\) are integers prime to each other, say \(k_{1}<k_{2}<\cdots<k_{r}\in\mathbb{Z}\) with multiplicities \(d_{1},d_{2},\cdots,d_{r}\in\mathbb{N}.\) If moreover \([\mu]\in S_{n}\), then the integers are nonnegative._
Proof.: The first part follows from [31] (see also [16]). We only prove the last statement. The case \(D_{\mu}=0\) is trivial. In the sequel, we assume that \(D_{\mu}\) is nonzero. Noting that \(D_{\mu}\) is Hermitian, there is an orthogonal decomposition
\[\mathbb{C}^{n}=\mathfrak{l}_{1}\oplus\mathfrak{l}_{2}\oplus\cdots\oplus \mathfrak{l}_{r},\ r\geq 2\]
where \(\mathfrak{l}_{i}:=\{X\in\mathbb{C}^{n}|D_{\mu}X=c_{i}X\}\) are the eigenspaces of \(D_{\mu}\) corresponding to the eigenvalues \(c_{1}<c_{2}<\cdots<c_{r}\in\mathbb{R}\), respectively. Suppose that \([\mu]\in S_{n}\), and \(0\neq X\in\mathbb{C}^{n}\) satisfies \(D_{\mu}X=c_{1}X\). Then we have
\[c_{1}L_{X}^{\mu} =[D_{\mu},L_{X}^{\mu}],\] \[c_{1}R_{X}^{\mu} =[D_{\mu},R_{X}^{\mu}].\]
It follows that
\[c_{1}\operatorname{tr}L_{X}^{\mu}(L_{X}^{\mu})^{*}=\operatorname{tr}[D_{\mu},L_{X}^{\mu}](L_{X}^{\mu})^{*}=\operatorname{tr}[\operatorname{M}_{\mu},L_{X} ^{\mu}](L_{X}^{\mu})^{*}=\operatorname{tr}\operatorname{M}_{\mu}[L_{X}^{\mu},(L_{X}^{\mu})^{*}], \tag{4.1}\]
and
\[c_{1}\operatorname{tr}R_{X}^{\mu}(R_{X}^{\mu})^{*}=\operatorname{tr}[D_{\mu},R_{X}^{\mu}](R_{X}^{\mu})^{*}=\operatorname{tr}[\operatorname{M}_{\mu},R_{X} ^{\mu}](R_{X}^{\mu})^{*}=\operatorname{tr}\operatorname{M}_{\mu}[R_{X}^{\mu},(R_{X}^{\mu})^{*}]. \tag{4.2}\]
Since \(L_{X}^{\mu},R_{X}^{\mu}\) are derivations of \(\mu\), we conclude from Corollary 3.5 that
\[c_{1}\operatorname{tr}L_{X}^{\mu}(L_{X}^{\mu})^{*}\geq 0\quad\text{and}\quad c _{1}\operatorname{tr}R_{X}^{\mu}(R_{X}^{\mu})^{*}\geq 0.\]
If \(L_{X}^{\mu}\) or \(R_{X}^{\mu}\) is not zero, then \(c_{1}\geq 0.\) If \(L_{X}^{\mu}\) and \(R_{X}^{\mu}\) are both zero, then \(X\) necessarily lies in the center of \(\mu\). By (3.7), we have
\[\langle\operatorname{M}_{\mu}X,X\rangle=2\sum_{i,j}|\langle\mu(X_{i},X_{j}),X \rangle|^{2}\geq 0. \tag{4.3}\]
Since \(\operatorname{M}_{\mu}=c_{\mu}I+D_{\mu}\), then \(0\leq\langle\operatorname{M}_{\mu}X,X\rangle=(c_{\mu}+c_{1})\langle X,X\rangle\). Using Theorem 3.6, we know \(c_{1}\geq-c_{\mu}>0\). This completes the proof.
**Theorem 4.2**.: _Let \([\mu]\) be a critical point of \(F_{n}:S_{n}\to\mathbb{R}\) with \(\operatorname{M}_{\mu}=c_{\mu}I+D_{\mu}\) for some \(c_{\mu}\in\mathbb{R}\) and \(D_{\mu}\in\operatorname{Der}(\mu)\). If \([\mu]\) is nilpotent, then \(D_{\mu}\) is positive definite. Consequently, all nilpotent critical points of \(F_{n}:S_{n}\to\mathbb{R}\) are \(\mathbb{N}\)-graded._
Proof.: Assume that \(0\neq X\in\mathbb{C}^{n}\) satisfies \(D_{\mu}X=c_{1}X\), where \(c_{1}\) is the smallest eigenvalue of \(D_{\mu}\). By Theorem 4.1, we know that \(c_{1}\geq 0\). Suppose that \(c_{1}=0\); then \(\operatorname{tr}\operatorname{M}_{\mu}[L_{X}^{\mu},(L_{X}^{\mu})^{*}]=0\) and \(\operatorname{tr}\operatorname{M}_{\mu}[R_{X}^{\mu},(R_{X}^{\mu})^{*}]=0\). Using Corollary 3.5, \((L_{X}^{\mu})^{*}\) and \((R_{X}^{\mu})^{*}\) are derivations of \(\mu\). Let \(\mathfrak{l}\) be the symmetric Leibniz algebra \((\mathbb{C}^{n},\mu)\), and consider the orthogonal decomposition of \(\mathfrak{l}\)
\[\mathfrak{l}=\mathfrak{l}_{1}\oplus\mathfrak{l}_{2}\oplus\cdots\oplus\mathfrak{l}_{p},\ p\geq 2,\]
where \(\mu(\mathfrak{l},\mathfrak{l})=\mathfrak{l}_{2}\oplus\cdots\oplus\mathfrak{l}_{p}\), \(\mu(\mathfrak{l},\mu(\mathfrak{l},\mathfrak{l}))=\mathfrak{l}_{3}\oplus\cdots\oplus\mathfrak{l}_{p}\), and so on. Since \((L_{X}^{\mu})^{*}\) is a derivation of \(\mu\), \((L_{X}^{\mu})^{*}\) necessarily leaves each \(\mathfrak{l}_{i}\) invariant. Note that \(L_{X}^{\mu}(\mathfrak{l}_{i})\subset\mathfrak{l}_{i+1}\oplus\cdots\oplus\mathfrak{l}_{p}\) for each \(i\); then \(\operatorname{tr}L_{X}^{\mu}(L_{X}^{\mu})^{*}=0\), and consequently, \(L_{X}^{\mu}=0\). Similarly, one concludes that \(R_{X}^{\mu}=0\). That is, \(X\) lies in the center of \(\mathfrak{l}\), which is a contradiction since in that case we have \(c_{1}\geq-c_{\mu}>0\). So \(D_{\mu}\) is positive definite.
The positivity argument in Theorem 4.2, for real nilpotent Lie algebras, plays a fundamental role in [18].
**Remark 4.3**.: So far, it is still unclear to us whether the nonnegativity property in Theorem 4.1 and the positivity property in Theorem 4.2 hold for \(F_{n}:L_{n}\to\mathbb{R}\) or not. However, we have the following partial result. For an arbitrary critical point \([\mu]\) of \(F_{n}:L_{n}\to\mathbb{R}\), consider
\[\mathfrak{l}=\mathfrak{l}_{-}\oplus\mathfrak{l}_{0}\oplus\mathfrak{l}_{+},\]
the direct sum of eigenspaces of \(D_{\mu}\) with eigenvalue smaller than zero, equal to zero and larger than zero, respectively. Then \(R_{X}^{\mu}\notin\operatorname{Der}(\mu)\) for any \(0\neq X\in\mathfrak{l}_{-}\), which in turn is equivalent to \([R_{X}^{\mu},(R_{X}^{\mu})^{*}]\neq 0\), i.e., \(R_{X}^{\mu}\) is not normal. Indeed, assume that \(R_{X}^{\mu}\in\operatorname{Der}(\mu)\) (or \([R_{X}^{\mu},(R_{X}^{\mu})^{*}]=0\)) for some \(0\neq X\in\mathfrak{l}_{-}\); then, by the proof of Theorem 4.1, we see that \(X\) necessarily lies in the center of \(\mu\), which contradicts \(0\neq X\in\mathfrak{l}_{-}\).
### The minima and maxima of \(F_{n}:L_{n}\to\mathbb{R}\)
Following [16], we introduce the notion of the type of a critical point.
**Definition 4.4**.: The data set \((k_{1}<k_{2}<\cdots<k_{r};d_{1},d_{2},\cdots,d_{r})\) in Theorem 4.1 is called the type of the critical point \([\mu]\).
To study the minima and maxima of \(F_{n}:L_{n}\to\mathbb{R}\), we recall two simple but useful results as follows.
**Lemma 4.5** ([31]).: _Let \([\mu]\in\mathbb{P}V_{n}\) be a critical point of \(F_{n}\) with type \(\alpha=(k_{1}<k_{2}<\cdots<k_{r};d_{1},d_{2},\cdots,d_{r})\). Then we have_
* _If_ \(\alpha=(0;n)\)_, then_ \(F_{n}([\mu])=\frac{4}{n}\)_._
* _If_ \(\alpha\neq(0;n)\)_, then_ \(F_{n}([\mu])=4\left(n-\frac{(k_{1}d_{1}+k_{2}d_{3}+\cdots+k_{r}d_{r}d_{r}^{2} )}{(k_{1}^{2}d_{1}+k_{2}^{2}d_{2}+\cdots+k_{r}^{2}d_{r})}\right)^{-1}.\)__
**Lemma 4.6**.: _Assume \([\mu]\in\mathbb{P}V_{n}\), then \([\mu]\) is a critical point of \(F_{n}:\mathbb{P}V_{n}\to\mathbb{R}\) with type \((0;n)\) if and only if \(F_{n}([\mu])=\frac{4}{n}.\) Moreover, \(\frac{4}{n}\) is the minimum value of \(F_{n}:\mathbb{P}V_{n}\to\mathbb{R}\)._
Proof.: Using Remark 3.7, we have \(F_{n}([\mu])=\left\|m[\mu]-\frac{\operatorname{tr}m[\mu]}{n}I\right\|^{2}+\frac{4}{n}\geq\frac{4}{n}\), with equality if and only if \(m[\mu]\) is a multiple of the identity, i.e., \(\operatorname{M}_{\mu}=c_{\mu}I\), which by Theorem 3.6 happens precisely when \([\mu]\) is a critical point of type \((0;n)\).
The following theorem shows that even in the frame of Leibniz algebras, the semisimple Lie algebras are still the only critical points of \(F_{n}:L_{n}\to\mathbb{R}\) attaining the minimum value.
**Theorem 4.7**.: _Assume that there exists a semisimple Lie algebra of dimension \(n\). Then \(F_{n}:L_{n}\to\mathbb{R}\) attains its minimum value at a point \([\lambda]\in\operatorname{GL}(n).[\mu]\) if and only if \(\mu\) is a semisimple Lie algebra. In such a case, \(F_{n}([\lambda])=\frac{4}{n}.\)_
Proof.: Assume that \(\mu\) is a complex semisimple Lie algebra, then it follows from [16, Theorem 4.3] that \(F_{n}([\lambda])=\frac{4}{n}\) for some \([\lambda]\in\operatorname{GL}(n).[\mu]\).
Conversely, assume \(F_{n}:L_{n}\to\mathbb{R}\) attains its minimum value at a point \([\lambda]\in\operatorname{GL}(n).[\mu]\). By hypothesis, there exists a semisimple Lie algebra of dimension \(n\), so the first part of the proof and Lemma 4.6 imply that \(\operatorname{M}_{\lambda}=c_{\lambda}I\) with \(c_{\lambda}<0\). To prove that \(\mu\) is semisimple, it suffices to show that \(\mathfrak{l}=(\lambda,\mathbb{C}^{n})\) is semisimple. Consider the following orthogonal decompositions: (i) \(\mathfrak{l}=\mathfrak{h}\oplus\mathfrak{s}\), where \(\mathfrak{s}\) is the radical of \(\lambda\); (ii) \(\mathfrak{s}=\mathfrak{a}\oplus\mathfrak{n}_{\lambda}\), where \(\mathfrak{n}_{\lambda}=\lambda(\mathfrak{s},\mathfrak{s})\) is a nilpotent ideal of \(\mathfrak{l}\); (iii) \(\mathfrak{n}_{\lambda}=\mathfrak{v}\oplus\mathfrak{z}_{\lambda}\), where \(\mathfrak{z}_{\lambda}=\{Z\in\mathfrak{n}_{\lambda}:\lambda(Z,\mathfrak{n}_{\lambda})=\lambda(\mathfrak{n}_{\lambda},Z)=0\}\) is the center of \(\mathfrak{n}_{\lambda}\). Clearly, \(\mathfrak{z}_{\lambda}\) is an ideal of \(\mathfrak{l}\). We have \(\mathfrak{l}=\mathfrak{h}\oplus\mathfrak{a}\oplus\mathfrak{v}\oplus\mathfrak{z}_{\lambda}\). Suppose that \(\mathfrak{z}_{\lambda}\neq 0\). Let \(\{H_{i}\},\{A_{i}\},\{V_{i}\},\{Z_{i}\}\) be orthonormal bases of \(\mathfrak{h},\mathfrak{a},\mathfrak{v}\), and \(\mathfrak{z}_{\lambda}\), respectively. Put \(\{X_{i}\}=\{H_{i}\}\cup\{A_{i}\}\cup\{V_{i}\}\cup\{Z_{i}\}\). For any \(0\neq Z\in\mathfrak{z}_{\lambda}\), by hypothesis we have
\[0>\langle\operatorname{M}_{\lambda}\!Z,Z\rangle= 2\sum_{ij}|\langle\lambda(X_{i},X_{j}),Z\rangle|^{2}-2\sum_{ij}| \langle\lambda(Z,X_{i}),X_{j}\rangle|^{2}-2\sum_{ij}|\langle\lambda(X_{i},Z), X_{j}\rangle|^{2}\] \[= 2\sum_{ij}\left\{|\langle\lambda(Z_{i},H_{j}),Z\rangle|^{2}+| \langle\lambda(H_{i},Z_{j}),Z\rangle|^{2}+|\langle\lambda(Z_{i},A_{j}),Z \rangle|^{2}+|\langle\lambda(A_{i},Z_{j}),Z\rangle|^{2}\right\}+\alpha(Z)\] \[-2\sum_{ij}\left\{|\langle\lambda(Z,H_{i}),Z_{j}\rangle|^{2}+| \langle\lambda(Z,A_{i}),Z_{j}\rangle|^{2}\right\}-2\sum_{ij}\left\{|\langle \lambda(H_{i},Z),Z_{j}\rangle|^{2}+|\langle\lambda(A_{i},Z),Z_{j}\rangle|^{2} \right\},\]
where \(\alpha(Z)=2\sum_{ij}|\langle\lambda(Y_{i},Y_{j}),Z\rangle|^{2}\geq 0\), \(\{Y_{i}\}=\{H_{i}\}\cup\{A_{i}\}\cup\{V_{i}\}\). This implies
\[0>\sum_{k}\langle\operatorname{M}_{\lambda}\!Z_{k},Z_{k}\rangle=\sum_{k} \alpha(Z_{k})\geq 0,\]
which is a contradiction. So \(\mathfrak{z}_{\lambda}=0\), and consequently, \(\mathfrak{n}_{\lambda}=\lambda(\mathfrak{s},\mathfrak{s})=0\).
Suppose that \(\mathfrak{s}\neq 0\). Let \(\{H_{i}\},\{A_{i}\}\) be orthonormal bases of \(\mathfrak{h}\) and \(\mathfrak{s}\), respectively. For any \(0\neq A\in\mathfrak{s}\), we have
\[0>\langle\operatorname{M}_{\lambda}\!A,A\rangle= 2\sum_{ij}\left\{|\langle\lambda(H_{i},A_{j}),A\rangle|^{2}+| \langle\lambda(A_{i},H_{j}),A\rangle|^{2}\right\}+\beta(A)\] \[-2\sum_{ij}|\langle\lambda(A,H_{i}),A_{j}\rangle|^{2}-2\sum_{ij}| \langle\lambda(H_{i},A),A_{j}\rangle|^{2}\]
where \(\beta(A)=2\sum_{ij}|\langle\lambda(H_{i},H_{j}),A\rangle|^{2}\geq 0\). This implies
\[0>\sum_{k}\langle\operatorname{M}_{\lambda}\!A_{k},A_{k}\rangle=\sum_{k}\beta( A_{k})\geq 0,\]
which is a contradiction. So \(\mathfrak{s}=0\). Therefore \(\lambda\) is a semisimple Lie algebra.
**Remark 4.8**.: By the proof of Theorem 4.7, we know that if \([\mu]\in L_{n}\) for which there exists \([\lambda]\in\operatorname{GL}(n).[\mu]\) such that \(\operatorname{M}_{\lambda}\) is negative definite, then \(\mu\) is a semisimple Lie algebra.
The next theorem shows that in the frame of Leibniz algebras, the maximum value of \(F_{n}:L_{n}\to\mathbb{R}\) is attained at symmetric Leibniz algebras that are non-Lie.
**Theorem 4.9**.: _The functional \(F_{n}:L_{n}\to\mathbb{R}\) attains its maximal value at a point \([\mu]\in L_{n},\,n\geq 2\) if and only if \(\mu\) is isomorphic to the direct sum of the two-dimensional non-Lie symmetric Leibniz algebra with the trivial algebra. In such a case, \(F_{n}([\mu])=20\)._
Proof.: Assume that \(F_{n}:L_{n}\to\mathbb{R}\) attains its maximal value at a point \([\mu]\in L_{n}\), \(n\geq 2\). By Theorem 3.6, we know that \([\mu]\) is also a critical point of \(F_{n}:\mathbb{P}V_{n}\to\mathbb{R}\). It then follows from Theorem 3.8 that \(F_{n}|_{\mathrm{GL}(n).[\mu]}\) also attains its minimum value at \([\mu]\); consequently \(F_{n}|_{\mathrm{GL}(n).[\mu]}\) is constant, so
\[\mathrm{GL}(n).[\mu]=\mathrm{U}(n).[\mu] \tag{4.4}\]
The relation (4.4) implies that the only non-trivial degeneration of \(\mu\) is \(0\) ([17, Theorem 5.1]), consequently the degeneration level of \(\mu\) is \(1\). By [12], a Leibniz algebra of degeneration level \(1\) is necessarily isomorphic to one of the following
* \(\mu_{hy}\) is a Lie algebra: \(\mu_{hy}(X_{1},X_{i})=X_{i},\,\,i=2,\cdots,n;\)
* \(\mu_{he}\) is a Lie algebra: \(\mu_{he}(X_{1},X_{2})=X_{3};\)
* \(\mu_{sy}\) is a symmetric Leibniz algebra: \(\mu_{sy}(X_{1},X_{1})=X_{2};\)
where \(\{X_{1},X_{2},\cdots,X_{n}\}\) is a basis. It is easy to see that the critical point \([\mu_{hy}]\) is of type \((0<1;1,n-1)\), \([\mu_{he}]\) is of type \((2<3<4;2,n-3,1)\) and \([\mu_{sy}]\) is of type \((3<5<6;1,n-2,1)\). By Lemma 4.5, we know
\[F_{n}([\mu_{hy}])=4,\quad F_{n}([\mu_{he}])=12,\quad F_{n}([\mu_{ sy}])=20.\]
The theorem is therefore proved.
### The structure for the critical points of \(F_{n}:S_{n}\to\mathbb{R}\)
Note that the maxima and minima of the functional \(F_{n}:L_{n}\to\mathbb{R}\) are actually attained at symmetric Leibniz algebras. In the sequel, we characterize the structure of the critical points of \(F_{n}:S_{n}\to\mathbb{R}\) by Theorem 4.1. These are the main results of this article.
**Theorem 4.10**.: _Let \([\mu]\in S_{n}\) be a critical point of \(F_{n}:S_{n}\to\mathbb{R}\) with \(\mathrm{M}_{\mu}=c_{\mu}I+D_{\mu}\) of type \((0<k_{2}<\cdots<k_{r};d_{1},d_{2},\cdots,d_{r})\) and consider_
\[\mathfrak{l}=\mathfrak{l}_{0}\oplus\mathfrak{l}_{+}, \tag{4.5}\]
_the direct sum of eigenspaces of \(D_{\mu}\) with eigenvalues equal to zero, and larger than zero, respectively. Then the following statements hold:_
1. \((L^{\mu}_{A})^{*},(R^{\mu}_{A})^{*}\in\operatorname{Der}(\mu)\) _for any_ \(A\in\mathfrak{l}_{0}\)_._
2. \(\mathfrak{l}_{0}\) _is a reductive Lie subalgebra, i.e., a direct sum of the center and a semisimple ideal._
3. \(L^{\mu}_{Z},R^{\mu}_{Z}\) _are normal operators for any_ \(Z\in\mathfrak{z}(\mathfrak{l}_{0}),\) _where_ \(\mathfrak{z}(\mathfrak{l}_{0})\) _denotes the center of_ \(\mathfrak{l}_{0}.\)__
4. \(\mathfrak{l}_{+}\) _is the nilradical of_ \(\mu,\) _and it corresponds to a critical point of type_ \((k_{2}<\cdots<k_{r};d_{2},\cdots,d_{r})\) _for the functional_ \(F_{m}:S_{m}\to\mathbb{R},\) _where_ \(m=\dim\mathfrak{l}_{+}.\)__
Proof.: For (i), since \(D_{\mu},L^{\mu}_{A}\) and \(R^{\mu}_{A}\) are derivations of \(\mu,\) we have
\[[D_{\mu},L^{\mu}_{A}]=L^{\mu}_{D_{\mu}A}=0,\qquad[D_{\mu},R^{\mu}_{A}]=R^{\mu}_{D_{\mu}A}=0,\]
for any \(A\in\mathfrak{l}_{0}.\) Then it follows that
\[\operatorname{tr}\operatorname{M}_{\mu}[L^{\mu}_{A},(L^{\mu}_{A}) ^{*}] =\operatorname{tr}(c_{\mu}I+D_{\mu})[L^{\mu}_{A},(L^{\mu}_{A})^{*}]\] \[=\operatorname{tr}D_{\mu}[L^{\mu}_{A},(L^{\mu}_{A})^{*}]\] \[=\operatorname{tr}[D_{\mu},L^{\mu}_{A}](L^{\mu}_{A})^{*}\] \[=0.\]
So \((L^{\mu}_{A})^{*}\in\operatorname{Der}(\mu)\) by Corollary 3.5. Similarly, we have \((R^{\mu}_{A})^{*}\in\operatorname{Der}(\mu).\) This proves (i).
For (ii), let \(\mathfrak{l}_{0}=\mathfrak{h}\oplus\mathfrak{z}\) be the orthogonal decomposition, where \(\mathfrak{h}=\mu(\mathfrak{l}_{0},\mathfrak{l}_{0}).\) We claim that \(\mathfrak{z}\) is the center of \(\mathfrak{l}_{0}.\) Indeed, by the orthogonal decomposition of eigenspaces (4.5), we have
\[L^{\mu}_{A}=\left(\begin{array}{cc}L^{\mu}_{A}|_{\mathfrak{l}_{0}}&0\\ 0&L^{\mu}_{A}|_{\mathfrak{l}_{+}}\end{array}\right),\quad R^{\mu}_{A}=\left( \begin{array}{cc}R^{\mu}_{A}|_{\mathfrak{l}_{0}}&0\\ 0&R^{\mu}_{A}|_{\mathfrak{l}_{+}}\end{array}\right),\]
for any \(A\in\mathfrak{l}_{0}.\) Since \(\mathfrak{h}\) is \(\operatorname{Der}(\mathfrak{l}_{0})\)-invariant, then by (i) we know that \(L^{\mu}_{A}|_{\mathfrak{l}_{0}},R^{\mu}_{A}|_{\mathfrak{l}_{0}}\in\operatorname {Der}(\mathfrak{l}_{0})\) are of the form
\[L^{\mu}_{A}|_{\mathfrak{l}_{0}}=\left(\begin{array}{cc}L^{\mu}_{A}|_{ \mathfrak{h}}&0\\ 0&0\end{array}\right),\quad R^{\mu}_{A}|_{\mathfrak{l}_{0}}=\left(\begin{array} []{cc}R^{\mu}_{A}|_{\mathfrak{h}}&0\\ 0&0\end{array}\right),\]
for any \(A\in\mathfrak{l}_{0}.\) So \(\mu(\mathfrak{l}_{0},\mathfrak{z})=\mu(\mathfrak{z},\mathfrak{l}_{0})=0,\) i.e., \(\mathfrak{z}\) lies in the center of \(\mathfrak{l}_{0}.\) Moreover, it follows that \(\mathfrak{h}=\mu(\mathfrak{h},\mathfrak{h}).\) Let \(\mathfrak{h}=\bar{\mathfrak{r}}\oplus\bar{\mathfrak{s}}\) be the orthogonal decomposition, where \(\bar{\mathfrak{s}}\) is the radical of \(\mathfrak{h}.\) Since \(\bar{\mathfrak{s}}\) is \(\operatorname{Der}(\mathfrak{h})\)-invariant, then by (i), we know that \(L^{\mu}_{H}|_{\mathfrak{h}},R^{\mu}_{H}|_{\mathfrak{h}}\in\operatorname{Der}( \mathfrak{h})\) are of the form
\[L^{\mu}_{H}|_{\mathfrak{h}}=\left(\begin{array}{cc}L^{\mu}_{H}|_{\bar{\mathfrak{r}}}&0\\ 0&L^{\mu}_{H}|_{\bar{\mathfrak{s}}}\end{array}\right),\quad R^{\mu}_{H}|_{\mathfrak{h}}=\left(\begin{array}{cc}R^{\mu}_{H}|_{\bar{\mathfrak{r}}}&0\\ 0&R^{\mu}_{H}|_{\bar{\mathfrak{s}}}\end{array}\right),\]
for any \(H\in\mathfrak{h}.\) Clearly, \(\bar{\mathfrak{r}}\) is an ideal of \(\mathfrak{h}\), and \(\mathfrak{h}=\mu(\mathfrak{h},\mathfrak{h})=\mu(\bar{\mathfrak{r}},\bar{\mathfrak{r}})\oplus\mu(\bar{\mathfrak{s}},\bar{\mathfrak{s}})\). So \(\bar{\mathfrak{s}}=\mu(\bar{\mathfrak{s}},\bar{\mathfrak{s}})\). Since \(\bar{\mathfrak{s}}\) is solvable, we conclude that \(\bar{\mathfrak{s}}=0\). Therefore \(\mathfrak{h}\) is a semisimple Lie algebra by Theorem 2.5, and moreover we deduce that \(\mathfrak{z}\) is the center of \(\mathfrak{l}_{0}\). This proves (ii).
For (iii), assume that \(Z\in\mathfrak{z}\); then by (i) we know that the derivations \((L^{\mu}_{Z})^{*},(R^{\mu}_{Z})^{*}\) vanish on \(\mathfrak{l}_{0}\), and in particular, \((L^{\mu}_{Z})^{*}Z=0,(R^{\mu}_{Z})^{*}Z=0.\) Hence
\[[(L^{\mu}_{Z})^{*},L^{\mu}_{Z}]=0,\quad[(R^{\mu}_{Z})^{*},R^{\mu}_{Z}]=0.\]
That is, \(L^{\mu}_{Z}\) and \(R^{\mu}_{Z}\) are normal. This proves (iii).
For (iv), it follows from (ii) that \(\mathfrak{s}:=\mathfrak{z}\oplus\mathfrak{l}_{+}\) is the radical of \(\mathfrak{l}\). Assume that \(Z\in\mathfrak{z}\) belongs to the nilradical of \(\mu\); then \(L^{\mu}_{Z}\) and \(R^{\mu}_{Z}:\mathfrak{l}\to\mathfrak{l}\) are necessarily nilpotent. Together with (iii), we see that \(L^{\mu}_{Z}\) and \(R^{\mu}_{Z}\) are both normal and nilpotent, so \(L^{\mu}_{Z}=R^{\mu}_{Z}=0\), i.e., \(Z\) lies in the center of \(\mathfrak{l}\). This, however, contradicts \(Z\in\mathfrak{l}_{0}\). So \(Z=0\), and \(\mathfrak{l}_{+}\) is the nilradical of \(\mathfrak{l}\). Set \(\mathfrak{n}:=\mathfrak{l}_{+}\), and denote by \(\mu_{\mathfrak{n}}\) the corresponding element in \(S_{m}\), where \(m=\dim\mathfrak{l}_{+}\). Assume that \(\{A_{i}\}\) is an orthonormal basis of \(\mathfrak{l}_{0}\); then by (3.7), we have
\[\mathrm{M}_{\mu}|_{\mathfrak{n}}=\mathrm{M}_{\mu_{\mathfrak{n}}}+2\sum_{i}([L ^{\mu}_{A_{i}},(L^{\mu}_{A_{i}})^{*}]+[R^{\mu}_{A_{i}},(R^{\mu}_{A_{i}})^{*}]) |_{\mathfrak{n}}. \tag{4.6}\]
Using (i) and Corollary 3.5, it follows that
\[\mathrm{tr}\,\mathrm{M}_{\mu_{\mathfrak{n}}}[L^{\mu}_{A_{i}},(L^{\mu}_{A_{i}} )^{*}]|_{\mathfrak{n}}=\mathrm{tr}\,\mathrm{M}_{\mu_{\mathfrak{n}}}[R^{\mu}_{ A_{i}},(R^{\mu}_{A_{i}})^{*}]|_{\mathfrak{n}}=0.\]
Since \(\mathrm{tr}\,\mathrm{M}_{\mu}[L^{\mu}_{A_{i}},(L^{\mu}_{A_{i}})^{*}]=\mathrm{ tr}\,\mathrm{M}_{\mu}[R^{\mu}_{A_{i}},(R^{\mu}_{A_{i}})^{*}]=0\), by (4.6) we have
\[\mathrm{tr}\,\mathrm{M}_{\mu}[L^{\mu}_{A_{i}},(L^{\mu}_{A_{i}})^{*}]=\mathrm{tr}\,\mathrm{M}_{\mu}|_{\mathfrak{n}}[L^{\mu}_{A_{i}},(L^{\mu}_{A_{i}})^{*}]|_{\mathfrak{n}}=0,\]
\[\mathrm{tr}\,\mathrm{M}_{\mu}[R^{\mu}_{A_{i}},(R^{\mu}_{A_{i}})^{*}]=\mathrm{tr}\,\mathrm{M}_{\mu}|_{\mathfrak{n}}[R^{\mu}_{A_{i}},(R^{\mu}_{A_{i}})^{*}]|_{\mathfrak{n}}=0.\]
Put \(T=\sum_{i}([L^{\mu}_{A_{i}},(L^{\mu}_{A_{i}})^{*}]+[R^{\mu}_{A_{i}},(R^{\mu}_{ A_{i}})^{*}])|_{\mathfrak{n}}\), then we have \(\mathrm{tr}\,T^{2}=0\). Noting that \(T\) is Hermitian, we conclude \(T=0\). So \(\mathfrak{n}=\mathbb{I}_{+}\) corresponds to a critical point of type (\(k_{2}<\cdots<k_{r};d_{2},\cdots,d_{r}\)) for the functional \(F_{m}:S_{m}\to\mathbb{R}\).
**Remark 4.11**.: Assume that \([\mu]\in L_{n}\) is an arbitrary critical point of \(F_{n}:L_{n}\to\mathbb{R}\), and that \(R^{\mu}_{A}\in\mathrm{Der}(\mu)\) for any \(A\in\mathfrak{n}^{\perp}\), where \(\mathfrak{n}\) denotes the direct sum of eigenspaces of \(D_{\mu}\) with eigenvalues larger than zero. Then we obtain the same conclusions as in Theorem 4.10, except that the nilradical of \(\mu\) might be a non-symmetric Leibniz algebra (see Remark 4.3).
In the sequel, we characterize the critical points that lie in \(S_{n}\) in terms of those which are nilpotent.
**Theorem 4.12** (Solvable extension).: _Assume that \(\mathfrak{a}\) is an abelian Lie algebra of dimension \(d_{1}\), and \([\lambda]\) is a critical point of \(F_{m}:S_{m}\to\mathbb{R}\) of type \((k_{2}<\cdots<k_{r};d_{2},\cdots,d_{r})\), where \(k_{2}>0\). Consider the semidirect sum_
\[\mu=\mathfrak{a}\ltimes_{\rho}\lambda,\]
_where \(\rho=(L^{\rho},R^{\rho})\), and \(L^{\rho}:\mathbb{C}^{d_{1}}\times\mathbb{C}^{m}\to\mathbb{C}^{m}\), \(R^{\rho}:\mathbb{C}^{m}\times\mathbb{C}^{d_{1}}\to\mathbb{C}^{m}\) are bilinear mappings, such that \(\mu\) is a symmetric Leibniz algebra with bracket relations given by_
\[\mu(A+X,B+Y):=L^{\rho}_{A}(Y)+R^{\rho}_{B}(X)+\lambda(X,Y)\]
_for all \(A,B\in\mathbb{C}^{d_{1}}\), \(X,Y\in\mathbb{C}^{m}\). Assume that the following conditions are satisfied_
* \([D_{\lambda},L^{\rho}_{A}]=0,[D_{\lambda},R^{\rho}_{A}]=0\)_,_ \(\forall A\in\mathbb{C}^{d_{1}}\)_._
* \([L^{\rho}_{A},(L^{\rho}_{A})^{*}]=0,[R^{\rho}_{A},(R^{\rho}_{A})^{*}]=0\)_,_ \(\forall A\in\mathbb{C}^{d_{1}}\)_; and for each_ \(0\neq A\in\mathbb{C}^{d_{1}}\)_,_ \(L^{\rho}_{A}\) _or_ \(R^{\rho}_{A}\) _is not zero._
_If we extend the Hermitian inner product on \(\mathbb{C}^{m}\) by setting_
\[\langle A,B\rangle=-\frac{2}{c_{\lambda}}(\operatorname{tr}L^{\rho}_{A}(L^{\rho} _{B})^{*}+\operatorname{tr}R^{\rho}_{A}(R^{\rho}_{B})^{*}),\ A,B\in\mathbb{C}^{d _{1}},\]
_then \([\mu]\) is a solvable critical point of type \((0<k_{2}<\cdots<k_{r};d_{1},d_{2},\cdots,d_{r})\) for \(F_{n}:S_{n}\to\mathbb{R}\), \(n=d_{1}+m\)._
Proof.: Put \(\mathfrak{n}=(\mathbb{C}^{m},\lambda)\), and let \(\{X_{i}\}\) be an orthonormal basis of \(\mathbb{C}^{m}\). It follows from the condition (ii) that \((L^{\rho}_{A})^{*},(R^{\rho}_{A})^{*}\in\operatorname{Der}(\lambda)\) for all \(A\in\mathbb{C}^{d_{1}}\). Then we have
\[\langle\operatorname{M}_{\mu}X,A\rangle =-2\sum_{i,j}\langle\mu(X_{i},X),X_{j}\rangle\overline{\langle \mu(X_{i},A),X_{j}\rangle}-2\sum_{i,j}\langle\mu(X,X_{i}),X_{j}\rangle \overline{\langle\mu(A,X_{i}),X_{j}\rangle}\] \[=-2\sum_{i,j}\langle\lambda(X_{i},X),X_{j}\rangle\overline{ \langle\mu(X_{i},A),X_{j}\rangle}-2\sum_{i,j}\langle\lambda(X,X_{i}),X_{j} \rangle\overline{\langle\mu(A,X_{i}),X_{j}\rangle}\] \[=-2\operatorname{tr}(R^{\rho}_{A})^{*}R^{\lambda}_{X}-2 \operatorname{tr}(L^{\rho}_{A})^{*}L^{\lambda}_{X}\] \[=0,\]
for any \(A\in\mathbb{C}^{d_{1}},X\in\mathbb{C}^{m}\) since \(\lambda\) is nilpotent and \((L^{\rho}_{A})^{*},(R^{\rho}_{A})^{*}\in\operatorname{Der}(\lambda)\). So \(\operatorname{M}_{\mu}\) leaves \(\mathfrak{a}\) and \(\mathfrak{n}\) invariant, and moreover, it is not hard to see that \(\operatorname{M}_{\mu}|_{\mathfrak{n}}=\operatorname{M}_{\lambda}=c_{\lambda }I+D_{\lambda}\) by (3.7). On the other hand, we have
\[\langle\operatorname{M}_{\mu}A,B\rangle =-2\sum_{i,j}\langle\mu(X_{i},A),X_{j}\rangle\overline{\langle \mu(X_{i},B),X_{j}\rangle}-2\sum_{i,j}\langle\mu(A,X_{i}),X_{j}\rangle \overline{\langle\mu(B,X_{i}),X_{j}\rangle}\] \[=-2(\operatorname{tr}L^{\rho}_{A}(L^{\rho}_{B})^{*}+ \operatorname{tr}R^{\rho}_{A}(R^{\rho}_{B})^{*})\] \[=c_{\lambda}\langle A,B\rangle,\]
for any \(A,B\in\mathbb{C}^{d_{1}}\). So \(\operatorname{M}_{\mu}=c_{\mu}I+D_{\mu}\), where \(c_{\mu}=c_{\lambda}\) and
\[D_{\mu}=\left(\begin{array}{cc}0&0\\ 0&D_{\lambda}\end{array}\right)\in\operatorname{Der}(\mu).\]
This completes the proof.
**Theorem 4.13** (General extension).: _Assume that \(\mathfrak{f}=\mathfrak{h}\oplus\mathfrak{z}\) is a reductive Lie algebra of dimension \(d_{1}\), and \([\lambda]\) is a critical point of \(F_{m}:S_{m}\to\mathbb{R}\) of type \((k_{2}<\cdots<k_{r};d_{2},\cdots,d_{r})\), where \(k_{2}>0\). Consider the semidirect sum_
\[\mu=\mathfrak{f}\ltimes_{\rho}\lambda,\]
_where \(\rho=(L^{\rho},R^{\rho})\), and \(L^{\rho}:\mathbb{C}^{d_{1}}\times\mathbb{C}^{m}\to\mathbb{C}^{m}\), \(R^{\rho}:\mathbb{C}^{m}\times\mathbb{C}^{d_{1}}\to\mathbb{C}^{m}\) are bilinear mappings, such that \(\mu\) is a symmetric Leibniz algebra with bracket relations given by_
\[\mu(A+X,B+Y):=\operatorname{ad}_{\mathfrak{f}}A(B)+L^{\rho}_{A}(Y)+R^{\rho}_{ B}(X)+\lambda(X,Y)\]
_for all \(A,B\in\mathbb{C}^{d_{1}}\), \(X,Y\in\mathbb{C}^{m}\). Assume that the following conditions are satisfied_
* \([D_{\lambda},L^{\rho}_{A}]=0,[D_{\lambda},R^{\rho}_{A}]=0\)_,_ \(\forall A\in\mathbb{C}^{d_{1}}\)_._
* \([L^{\rho}_{Z},(L^{\rho}_{Z})^{*}]=0,[R^{\rho}_{Z},(R^{\rho}_{Z})^{*}]=0\)_,_ \(\forall Z\in\mathfrak{z}\)_; and for each_ \(0\neq Z\in\mathfrak{z}\)_,_ \(L^{\rho}_{Z}\) _or_ \(R^{\rho}_{Z}\) _is not zero._
_Let \(\langle\cdot,\cdot\rangle_{1}\) be a Hermitian inner product on \(\mathfrak{f}\) and \(\{H_{i}\mid H_{i}\in\mathfrak{h}\}\cup\{Z_{i}\mid Z_{i}\in\mathfrak{z}\}\) be an orthonormal basis of \((\mathfrak{f},\langle\cdot,\cdot\rangle_{1})\) such that \((\operatorname{ad}_{\mathfrak{f}}H_{i})^{*1}=-\operatorname{ad}_{\mathfrak{f}}H_{i}\), \((L^{\rho}_{H_{i}})^{*}=-L^{\rho}_{H_{i}}\), \((R^{\rho}_{H_{i}})^{*}=-R^{\rho}_{H_{i}}\) for all \(i\). If we extend the Hermitian inner product on \(\mathbb{C}^{m}\) by setting_
\[\langle A,B\rangle=-\frac{2}{c_{\lambda}}(\operatorname{tr}\operatorname{ad} _{\mathfrak{f}}A(\operatorname{ad}_{\mathfrak{f}}B)^{*1}+\operatorname{tr}L^{ \rho}_{A}(L^{\rho}_{B})^{*}+\operatorname{tr}R^{\rho}_{A}(R^{\rho}_{B})^{*}), \;A,B\in\mathbb{C}^{d_{1}},\]
_then \([\mu]\) is a critical point of type \((0<k_{2}<\cdots<k_{r};d_{1},d_{2},\cdots,d_{r})\) for \(F_{n}:S_{n}\to\mathbb{R}\), \(n=d_{1}+m\)._
Proof.: Put \(\mathfrak{n}=(\mathbb{C}^{m},\lambda)\), and let \(\{A_{i}\}=\{H_{i},Z_{i}\}\) be the orthonormal basis of \((\mathbb{C}^{d_{1}},\langle\cdot,\cdot\rangle_{1})\) as in hypothesis, and \(\{X_{i}\}\) be an orthonormal basis of \(\mathbb{C}^{m}\). Then for any \(A\in\mathbb{C}^{d_{1}},X\in\mathbb{C}^{m}\), we have
\[\langle\operatorname{M}_{\mu}X,A\rangle =-2\sum_{i,j}\langle\mu(X_{i},X),X_{j}\rangle\overline{\langle \mu(X_{i},A),X_{j}\rangle}-2\sum_{i,j}\langle\mu(X,X_{i}),X_{j}\rangle \overline{\langle\mu(A,X_{i}),X_{j}\rangle}\] \[=-2\sum_{i,j}\langle\lambda(X_{i},X),X_{j}\rangle\overline{ \langle\mu(X_{i},A),X_{j}\rangle}-2\sum_{i,j}\langle\lambda(X,X_{i}),X_{j} \rangle\overline{\langle\mu(A,X_{i}),X_{j}\rangle}\] \[=-2\operatorname{tr}(R^{\rho}_{A})^{*}R^{\lambda}_{X}-2 \operatorname{tr}(L^{\rho}_{A})^{*}L^{\lambda}_{X}\] \[=0,\]
since \(\lambda\) is nilpotent and \((L^{\rho}_{A})^{*},(R^{\rho}_{A})^{*}\in\operatorname{Der}(\lambda)\). So \(\operatorname{M}_{\mu}\) leaves \(\mathfrak{f}\) and \(\mathfrak{n}\) invariant, and it is not hard to see that \(\operatorname{M}_{\mu}|_{\mathfrak{n}}=\operatorname{M}_{\lambda}=c_{\lambda}I+D_{\lambda}\) by (3.7). Moreover, for any \(A,B\in\mathbb{C}^{d_{1}}\), we have
\[\langle\operatorname{M}_{\mu}A,B\rangle =2\sum_{i,j}\overline{\langle\mu(A_{i},A_{j}),A\rangle}\langle \mu(A_{i},A_{j}),B\rangle\] \[\quad-2\sum_{i,j}\langle\mu(A_{i},A),A_{j}\rangle\overline{ \langle\mu(A_{i},B),A_{j}\rangle}-2\sum_{i,j}\langle\mu(X_{i},A),X_{j}\rangle \overline{\langle\mu(X_{i},B),X_{j}\rangle}\] \[\quad-2\sum_{i,j}\langle\mu(A,A_{i}),A_{j}\rangle\overline{ \langle\mu(B,A_{i}),A_{j}\rangle}-2\sum_{i,j}\langle\mu(A,X_{i}),X_{j}\rangle \overline{\langle\mu(B,X_{i}),X_{j}\rangle}\] \[=-2(\operatorname{tr}\operatorname{ad}_{\mathfrak{f}}A( \operatorname{ad}_{\mathfrak{f}}B)^{*1}+\operatorname{tr}L^{\rho}_{A}(L^{\rho} _{B})^{*}+\operatorname{tr}R^{\rho}_{A}(R^{\rho}_{B})^{*})\] \[=c_{\lambda}\langle A,B\rangle.\]
So \(\operatorname{M}_{\mu}=c_{\mu}I+D_{\mu}\), where \(c_{\mu}=c_{\lambda}\), and
\[D_{\mu}=\left(\begin{array}{cc}0&0\\ 0&D_{\lambda}\end{array}\right)\in\operatorname{Der}(\mu).\]
This completes the proof.
**Remark 4.14**.: The conditions in Theorems 4.12 and 4.13 can be relaxed as follows: \([\lambda]\) is a critical point of \(F_{m}:L_{m}\to\mathbb{R}\) of type \((k_{2}<\cdots<k_{r};d_{2},\cdots,d_{r})\), where \(k_{2}>0\), and the constructed algebra \(\mu\) is a Leibniz algebra with \(R^{\rho}_{A}\in\mathrm{Der}(\mu)\) for all \(A\in\mathbb{C}^{d_{1}}\).
## 5. Examples
In this section, we classify the critical points of the functional \(F_{n}:S_{n}\to\mathbb{R}\) for \(n=2\) and \(3\), respectively.
### Two-dimensional case
Note that there are only two non-abelian two-dimensional symmetric Leibniz algebras up to isomorphism, which are defined by
\[\text{Lie:}\ [e_{1},e_{2}] =e_{2};\] \[\text{non-Lie:}\ [e_{1},e_{1}] =e_{2}.\]
Indeed, endow the two algebras with the Hermitian inner product \(\langle\cdot,\cdot\rangle\) so that \(\{e_{1},e_{2}\}\) is an orthonormal basis. Then it is easy to see that the Lie algebra is a critical point of \(F_{2}\) with type \((0<1;1,1)\) and critical value \(4\), while the non-Lie symmetric Leibniz algebra is a critical point of \(F_{2}\) with type \((1<2;1,1)\) and critical value \(20\).
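As a numerical cross-check of these two critical points, the matrix \(\mathrm{M}_{\mu}\) can be computed directly from the structure constants. The sketch below is ours, not part of the original computation: it assumes real structure constants, encodes \(\langle\mathrm{M}_{\mu}X,Y\rangle\) via the same three-term expansion used in the proof of Theorem 4.7, and takes the scale-invariant normalization \(F_{n}([\mu])=\operatorname{tr}(\mathrm{M}_{\mu}^{2})/\|\mu\|^{4}\), which is consistent with the critical types and values quoted above.

```python
import numpy as np

def moment_map(mu):
    """mu[i, j, k] = <mu(e_i, e_j), e_k> in an orthonormal basis.
    <M x, y> =  2 sum_ij mu(ei,ej)_x mu(ei,ej)_y
              - 2 sum_ij mu(ei,x)_j mu(ei,y)_j
              - 2 sum_ij mu(x,ei)_j mu(y,ei)_j."""
    return (2 * np.einsum('ijx,ijy->xy', mu, mu)
            - 2 * np.einsum('ixj,iyj->xy', mu, mu)
            - 2 * np.einsum('xij,yij->xy', mu, mu))

def F(mu):
    # degree-0 homogeneous in mu, so no normalisation of mu is needed
    M = moment_map(mu)
    return np.trace(M @ M) / np.sum(mu ** 2) ** 2

lie = np.zeros((2, 2, 2)); lie[0, 1, 1], lie[1, 0, 1] = 1, -1  # [e1,e2]=e2
sym = np.zeros((2, 2, 2)); sym[0, 0, 1] = 1                    # [e1,e1]=e2
print(F(lie), F(sym))  # -> 4.0  20.0
```

For the normalized Lie algebra this gives \(\mathrm{M}=\operatorname{diag}(-2,0)\), i.e., type \((0<1;1,1)\), while the non-Lie symmetric algebra gives \(\mathrm{M}=\operatorname{diag}(-4,2)=-10I+6\operatorname{diag}(1,2)\), i.e., type \((1<2;1,1)\); the same routine reproduces the values \(12\) and \(20\) for \(\mathrm{L}_{1}\) and \(\mathrm{S}_{1}\) in Table 1 below.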
### Three-dimensional case
The classification of \(3\)-dimensional symmetric Leibniz algebras over \(\mathbb{C}\) can be found in [1, 7]. We classify the critical points of the functional \(F_{3}:S_{3}\to\mathbb{R}\) in Table 1 below.
Indeed, Table 1 is obtained from the following four steps:

1. For the cases \(\mathrm{L}_{1},\mathrm{S}_{1},\mathrm{S}_{2}\), endow them with the Hermitian inner product \(\langle\cdot,\cdot\rangle\) so that \(\{e_{1},e_{2},e_{3}\}\) is an orthonormal basis.
2. For the cases \(\mathrm{L}_{2},\mathrm{L}_{3},\mathrm{S}_{4},\mathrm{S}_{5}(\alpha),\mathrm{S}_{7}(\alpha)\), use Theorem 4.12.
3. For the cases \(\mathrm{L}_{4},\mathrm{S}_{6},\mathrm{S}_{8}\), use Theorem 4.10.
4. For \(\mathrm{S}_{3}(\beta)\), it is an associative algebra. By [32], we know that \(\mathrm{S}_{3}(\frac{1}{4})\) is isomorphic to \(d_{21}\), and \(\mathrm{S}_{3}(\beta)\), \(\beta\neq\frac{1}{4}\), is isomorphic to \(d_{22}\).

Together with Lemma 4.5, we complete Table 1.

Table 1: Non-zero \(3\)-dimensional symmetric Leibniz algebras, critical types and critical values.

| \(\mathfrak{g}\) | Type | Multiplication table | Critical type | Critical value |
| --- | --- | --- | --- | --- |
| \(\mathrm{L}_{1}\) | Lie | \([e_{1},e_{2}]=e_{3}\) | \((1<2;2,1)\) | \(12\) |
| \(\mathrm{L}_{2}\) | Lie | \([e_{1},e_{2}]=e_{2}\) | \((0<1;1,2)\) | \(4\) |
| \(\mathrm{L}_{3}(\alpha),\ \alpha\neq 0\) | Lie | \([e_{3},e_{1}]=e_{1},\ [e_{3},e_{2}]=\alpha e_{2}\) | \((0<1;1,2)\) | \(4\) |
| \(\mathrm{L}_{4}\) | Lie | \([e_{3},e_{1}]=e_{1}+e_{2},\ [e_{3},e_{2}]=e_{2}\) | – | – |
| \(\mathrm{L}_{5}\) | Lie | \([e_{3},e_{1}]=2e_{1},\ [e_{3},e_{2}]=-2e_{2},\ [e_{1},e_{2}]=e_{3}\) | \((0;3)\) | \(\frac{4}{3}\) |
| \(\mathrm{S}_{1}\) | non-Lie | \([e_{3},e_{3}]=e_{1}\) | \((3<5<6;1,1,1)\) | \(20\) |
| \(\mathrm{S}_{2}\) | non-Lie | \([e_{2},e_{2}]=e_{1},\ [e_{3},e_{3}]=e_{1}\) | \((1<2;2,1)\) | \(12\) |
| \(\mathrm{S}_{3}(\frac{1}{4})\) | non-Lie | \([e_{2},e_{2}]=\frac{1}{4}e_{1},\ [e_{3},e_{2}]=e_{1},\) | – | – |
| \(\mathrm{S}_{3}(\beta),\ \beta\neq\frac{1}{4}\) | non-Lie | \([e_{2},e_{2}]=\beta e_{1},\ [e_{3},e_{2}]=e_{1},\) | \((1<2;2,1)\) | \(12\) |
| \(\mathrm{S}_{4}\) | non-Lie | \([e_{1},e_{3}]=e_{1}\) | \((0<1;1,2)\) | \(4\) |
| \(\mathrm{S}_{5}(\alpha),\ \alpha\neq 0\) | non-Lie | \([e_{1},e_{3}]=\alpha e_{1},\ [e_{2},e_{3}]=e_{2},\) | \((0<1;1,2)\) | \(4\) |
| \(\mathrm{S}_{6}\) | non-Lie | \([e_{2},e_{3}]=e_{2},\ [e_{3},e_{2}]=-e_{2},\) | – | – |
| \(\mathrm{S}_{7}(\alpha),\ \alpha\neq 0\) | non-Lie | \([e_{1},e_{3}]=\alpha e_{1},\ [e_{2},e_{3}]=e_{2}\) | \((0<1;1,2)\) | \(4\) |
| \(\mathrm{S}_{8}\) | non-Lie | \([e_{1},e_{3}]=e_{1}+e_{2},\ [e_{3},e_{3}]=e_{1}\) | – | – |
## 6. Summary and Comments
This article can be thought of as finding the 'best' Hermitian inner products in an isomorphism class of a given Leibniz algebra, which are characterized by the critical points of \(F_{n}:L_{n}\to\mathbb{R}\). Moreover, the 'best' Hermitian inner products, if they exist, are unique up to scaling and isometry, and they pose a severe restriction on the algebraic structure of the given Leibniz algebra. The main results of this article are briefly summarized as follows:
1. The eigenvalue types for the critical points of \(F_{n}:S_{n}\to\mathbb{R}\) are necessarily nonnegative, and the nilpotent critical points of \(F_{n}:S_{n}\to\mathbb{R}\) have positive eigenvalue types (Theorem 4.1, 4.2).
2. The maxima and minima of the functional \(F_{n}:L_{n}\to\mathbb{R}\) are actually attained at the symmetric Leibniz algebras (Theorem 4.7, 4.9).
3. The structure of an arbitrary critical point of \(F_{n}:S_{n}\to\mathbb{R}\) is characterized (Theorem 4.10-4.13).
Although some generalizations are obtained (Remarks 4.3, 4.11, and 4.14), we still do not have a complete understanding of the critical points of \(F_{n}:L_{n}\to\mathbb{R}\). Based on the discussion in the previous sections, it is natural and interesting to ask the following questions.
**Question 6.1**.: Do all critical points of \(F_{n}:L_{n}\to\mathbb{R}\) necessarily have nonnegative eigenvalue types?
**Question 6.2**.: Do all nilpotent critical points of \(F_{n}:L_{n}\to\mathbb{R}\) necessarily have positive eigenvalue types?
One may also ask: _do all critical points of \(F_{n}:L_{n}\to\mathbb{R}\) necessarily lie in \(S_{n}\)?_ We point out that this does not hold, even for \(n=2\). Consider the two-dimensional non-symmetric Leibniz algebra \(\mu\): \(\mu(e_{1},e_{2})=e_{2}\). Then \([\mu]\) is a critical point of \(F_{2}:L_{2}\to\mathbb{R}\) with type \((0<1;1,1)\).
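The claim can be checked with the numerical sketch given in Section 5 (reusing `moment_map` as defined there):

```python
import numpy as np

mu = np.zeros((2, 2, 2)); mu[0, 1, 1] = 1.0  # mu(e1, e2) = e2, not symmetric
M = moment_map(mu)
# M = diag(-2, 0) = -2 I + 2 diag(0, 1): type (0 < 1; 1, 1), tr(M @ M) = 4
```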
## 7. Acknowledgement
This paper is partially supported by NSFC (11931009 and 12131012) and NSF of Tianjin (19JCYBJC30600).
|
2303.05218 | Quantum illumination using polarization-path entangled single photons
for low reflectivity object detection in noisy background | Detecting object with low reflectivity embedded within a noisy background is
a challenging task. Quantum correlations between pairs of quantum states of
light, though are highly sensitive to background noise and losses, offer
advantages over traditional illumination methods. Instead of using correlated
photon pairs which are sensitive, we experimentally demonstrate the advantage
of using heralded single-photons entangled in polarization and path degree of
freedom for quantum illumination. In the study, the object of different
reflectivity is placed along the path of the signal in a variable thermal
background before taking the joint measurements and calculating the quantum
correlations. We show the significant advantage of using non-interferometric
measurements along the multiple paths for single photon to isolate the signal
from the background noise and outperform in detecting and ranging the low
reflectivity objects even when the signal-to-noise ratio is as low as 0.03.
Decrease in visibility of polarization along the signal path also results in
similar observations. This will have direct relevance to the development of
single-photon based quantum LiDAR and quantum imaging. | K. Muhammed Shafi, A. Padhye, C. M. Chandrashekar | 2023-03-09T12:43:13Z | http://arxiv.org/abs/2303.05218v2 | # Harnessing polarization-path entangled single photons for low reflectivity object detection in noisy background
###### Abstract
Illuminating an object with low reflectivity embedded within a noisy background is a challenging task. Quantum correlations between pairs of quantum states of light, though highly sensitive to background noise and losses, offer advantages over traditional illumination methods. Here we experimentally demonstrate the advantage of using single photons entangled in the polarization and path degrees of freedom for quantum illumination. Heralded single photons from a spontaneous parametric down-conversion process are employed to generate polarization-path entangled single photons, and the corresponding two paths are used as signal and reference paths. An object of reflectivity \(\eta\) is placed along the path of the signal in a variable thermal background before the joint measurements are taken and the quantum correlations are calculated. We show the significant advantage of using non-interferometric measurements along the multiple paths of the single photon to isolate the signal from the background noise and to outperform in detecting and ranging the objects even when the signal-to-noise ratio is as low as 0.02 (-15 dB) for low \(\eta\). A decrease in the visibility of the polarization degree of freedom along the signal path results in similar observations. This will have direct relevance to the development of single-photon-based quantum lidar.
## I Introduction
Quantum correlations in the form of entanglement are a salient feature of quantum mechanics and are central to many quantum information processing protocols [1; 2; 3; 4; 5]. However, they are highly sensitive to environmental noise and can be easily destroyed, negating the advantages gained from such nonclassical correlations. Quantum illumination (QI), which uses quantum correlations between pairs of photons for object detection in a noisy environment, is an exception [6; 7; 8; 9]. Known approaches for QI rely upon two entangled beams in the form of signal and idler as a probe for object detection, wherein the signal beam is sent to a region of space containing an object merged in background noise and the idler beam is stored locally until the signal reflects from the object. The enhancement of the performance of QI over its classical analog is made possible by using detection and joint measurement techniques which capture the nonclassical correlations between the stored idler and the reflected signal by isolating the background noise. QI measurements primarily focus on reducing the uncertainty in unknown parameter estimation using quantum correlation. Thus, QI extends the principles of target detection accuracy, ranging sensitivity, and degree of resilience towards preponderant noise from conventional radar technology to quantum metrology [10; 11].
The general formalism for quantum sensing originates from the quantum channel discrimination model employed for target detection in a thermal background. The model is based on pioneering work by Helstrom in 1976 [12] on quantum hypothesis testing for minimum error probability, which discriminates between two channels: one with the input state reflected from the target and the other with thermal noise, implying the presence or absence of the target, respectively [13]. It further led to the counter-intuitive observation reported by Sacchi in 2005 that entangled input states enhance the discrimination of two entanglement-breaking channels with minimal error probability [14; 15]. In 2008, Lloyd translated these concepts and proposed the first theoretical framework for QI, using entangled photons sent repeatedly to detect a weakly reflecting object immersed in a noisy background [6]. He showed that the entangled probe state reduces the number of trials needed to detect the object by a factor of the number of modes per detection event, even when the signal-to-noise ratio (SNR) is \(<1\). Further, a more general model was considered to include the multi-photon Hilbert space [16]. Soon after, a continuous-variable QI protocol was reported [7]. It showed a 6 dB gain in the error probability exponent, using computational tools [17; 18; 19], for a Gaussian probe state over a coherent-state system. In order to realize this improvement using Gaussian states, two optical receivers were proposed, viz. an optical parametric amplifier with small gain and a phase-conjugate receiver with balanced detection [20]. Both showed a 3 dB error-exponent gain achievable through a practical QI protocol. Recent theoretical studies have also reported the improved efficiency of QI when hyperentangled probe states are used [21; 22].
Even though several theoretical QI protocols were proposed, their experimental realization has been a challenging task, mainly due to the unavailability of quantum optimal receivers, which involves the difficulty in devising perfect mode-matching for joint phase-sensitive measurements between the reflected signal and the stored reference beams. The first experimental demonstration
of QI was based on phase-insensitive intensity measurements [8; 23]. They showed an improvement in SNR for target detection using photon-number correlations between twin beams from spontaneous parametric down conversion (SPDC), wherein the object and thermal noise were introduced in one of the beams. QI protocols have also been explored in the microwave regime, in which a target was probed at a microwave wavelength and detected at an optical frequency by using electro-optomechanical converters [24]. This was followed by another report which used an optical parametric amplifier quantum receiver, as proposed in theory but of a semi-optimal nature, and compared its performance with an optimal classical illumination system [25]. A QI scheme was also reported to show a 10-fold improvement in SNR over its classical analogue, even without making a joint measurement on the signal and reference beams [26]. Another QI implementation used a maximally entangled state as the optimum probe state to illuminate the potential target, surpassing the classical limit by up to 40% while approaching the bound imposed by the Helstrom limit [27]. In the last few years, there has been significant development in using temporal, spectral, and polarization correlations for target detection in noisy backgrounds [28; 29].
Apart from several quantum metrology protocols, QI methods are used for other emerging applications, which include quantum communication. Following the theoretical proposal by Shapiro [30], experimental implementations were also reported showing immunity towards passive eavesdropping [31; 32]; Alice's error probability was shown to be much lower than that of Eve even though Bob's amplifier destroyed the entanglement. QI systems have also been extended further to quantum imaging applications [33; 34; 35; 36; 37].
Single photons entangled in internal degrees of freedom like path and polarization provide a natural representation of quantum bits and are likely to play an important role in the future development of quantum technologies [38; 39; 40; 41]. Here, in a first of its kind, we experimentally demonstrate the advantage of using single photons entangled in the polarization and path degrees of freedom for QI. We employ heralded single photons from the photon pairs generated using the SPDC process; one photon from the pair is retained as an idler photon and used for heralding, whereas the other, the **signal** photon, is entangled in the polarization and path degrees of freedom and used along two paths in the experimental setup. Unlike earlier protocols known for QI, we use photon pairs generated from the SPDC process and not entangled photon pairs. In the scheme, three pathways are employed for the two photons. One of the pathways is used for heralding the polarization-path entangled photon, and the other two pathways are used as the signal and reference paths, which are sent towards the object and directly to the receiving unit, respectively. An object of reflectivity \(\eta\) is placed along the path of the signal, and controlled noise in the form of a thermal background is introduced along the path of the signal before taking joint measurements and calculating the quantum correlations. Bell's inequality violation in the form of the Clauser, Horne, Shimony, and Holt (CHSH) parameter \(S>2\) is employed to quantify quantum correlation and detect the presence or absence of the object.
In Fig. 1, the schematic of the protocol for QI using polarization-path entangled single photons at 810 nm from the SPDC process is shown. From the pair of down-converted photons, the signal photon is entangled in the polarization and path degrees of freedom and the idler photon is used as a reference photon for heralding. One of the two paths of the polarization-path entangled single photon is sent towards the object (signal path) and the other path is used as a reference path. Using the coincidence detection of photons along the three paths, the CHSH parameter is calculated to quantify quantum correlation. To calculate the CHSH parameter using coincidence counts along multiple paths, we have used both interferometric and non-interferometric approaches at the receiving unit. We show that both approaches work well when we only have an object of different reflectivity in the path of the signal, and that only non-interferometric measurements show a significant advantage in the presence of background noise. The demonstrated protocol using the non-interferometric approach isolates the background thermal noise from the signal and
Figure 1: Schematic of the quantum illumination protocol using polarization-path entangled single photons. A spontaneous parametric down conversion process is used to generate photon pairs; one of them, the signal photon along path **A**, is entangled in the polarization and path degrees of freedom and sent along the signal path **C** and reference path **D**. The other photon of the pair, along path **B**, is used as an idler for heralding the signal photons. An object of different reflectivity is placed in the signal path with variable background noise. Entanglement between the polarization and path degrees of freedom is calculated using coincidence counts of photons along paths **D** and **E** with photons along **B**. Bell's inequality violation in the form of the CHSH parameter \(S\) is used to quantify quantum correlation and object detection.
helps in detecting the object by returning a quantum correlation value of \(S>2\) even when the signal is buried under the noise with SNR as low as 0.03. Even when we cannot record a quantum correlation (\(S<2\)), values in the range \(1.5<S<2\), which exhibit classical correlation as a residual of the quantum correlation, can still be used to detect a low reflectivity object immersed in a noisy background with SNR as low as 0.02, corresponding to -15 dB. The results reported using heralded single photons of 810 nm wavelength will have direct relevance to the development of quantum LiDAR and can be adapted to other wavelengths and on-demand single photon sources.
## II Polarization-path entangled single photons for QI
The state of a single photon in an equal superposition of two linearly polarized states, \(|h\rangle\) and \(|v\rangle\), when passed through a polarizing beam splitter (PBS), can be written in the form,
\[|\Psi_{0}\rangle=\frac{1}{\sqrt{2}}\left[|h\rangle|0\rangle-|v\rangle|1\rangle \right]. \tag{1}\]
The states \(|0\rangle\) and \(|1\rangle\) are the two polarization-dependent paths for the photons, and we will refer to them as the reference path and signal path, respectively. The preceding state is maximally entangled in the polarization and path degrees of freedom. When the photon passes through the object with reflectivity \(0\leq\eta\leq 1\) along the signal path, the effect of the object on the photon's state can be written in the form of a controlled operator causing loss along the path of the signal,
The density matrix of the polariztion-path entangled photon at the receiving end of the signal and reference path will be in the form,
\[\rho(\eta)=T(\eta)\left(|\Psi_{0}\rangle\langle\Psi_{0}|\right)T(\eta)^{ \dagger}. \tag{3}\]
Below we present two measurement procedure to analyze the photons arriving along both, signal and reference paths of identical path lengths.
Ideal method to calculate quantum correlations from single photons in path degree of freedom would involve interference of the two paths and probabilities of output states. To quantify quantum correlation, we will calculate CHSH parameter \(S\),
\[S=|E(\theta,\delta)-E(\theta,\delta^{\prime})|+|E(\theta^{\prime},\delta)+E( \theta^{\prime},\delta^{\prime})| \tag{4}\]
where \(E(\theta,\delta)=P_{h0}(\theta,\delta)+P_{v1}(\theta,\delta)-P_{h1}(\theta, \delta)-P_{v0}(\theta,\delta)\). The \(P_{ij}\)'s are the probabilities of different basis states of the photon in polarization and path composition. They can be obtained using the combination of the polarization rotator \(R(\theta)\) and polarizing beam splitter (PBS) along the paths of the photons. In Fig. 2(a) the schematic of the combination of polarization rotator \(R(\cdot)\) and PBS along the interfering paths of the photons is shown. The effect of the polarization rotator along both the paths before they interfere at PBS and after they interfere on the state of polarization-path entangled
Figure 2: Schematic of the experimental setup for QI using polarization-path entangled single photons. The heralded single-photon entangled state is sent across two paths, where one path acts as a reference and reaches the receiving unit directly, and the other path intercepts the object and reflects back to the receiver. Two different measurement procedures for calculating the quantum correlation are shown: (a) an interferometric approach, where photons from both paths are made to interfere; (b) a non-interferometric approach, where photons from both paths do not interfere. The PBS and polarization rotator \(R(.)\) in the scheme are used to control the splitting ratio along the paths by varying \(\theta\) and \(\delta\). Using the coincidence counts of photons along the different paths with the idler photons, for different combinations of \((\theta,\delta)\) and \((\theta^{\prime},\delta^{\prime})\), we can calculate the CHSH parameter \(S\). When the object reflectivity \(\eta=1\) and in the absence of background noise, both procedures are equivalent and give the same CHSH parameter.
photon can be written in the form,
\[\rho(\theta,\delta)_{I}=\] \[(\mathbb{1}\otimes R(\delta))\left(R(\theta)\otimes\mathbb{1})\rho( \eta)(R(\theta)\otimes\mathbb{1})^{\dagger}(\mathbb{1}\otimes R(\delta))^{\dagger} \tag{5}\]
where the polarization rotators \(R(\theta)\) and \(R(\delta)\) are given by
\[R(\theta)=\left[\begin{array}{cc}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{array}\right]. \tag{6}\]
It can be practically realized using a half-wave plate (HWP) rotated by an angle \(\theta/2\): \(H(\kappa)=\left[\begin{array}{cc}\cos(2\kappa)&\sin(2\kappa)\\ \sin(2\kappa)&-\cos(2\kappa)\end{array}\right]\), where \(R(\theta)\equiv H(\theta/2)\). The probabilities \(P_{h0}=\rho(\theta,\delta)_{11}\), \(P_{v1}=\rho(\theta,\delta)_{44}\), \(P_{h1}=\rho(\theta,\delta)_{22}\) and \(P_{v0}=\rho(\theta,\delta)_{33}\) are the diagonal elements of the density matrix. However, coherence of a single photon along multiple paths is extremely hard to achieve experimentally for longer path lengths. Therefore, an equivalent non-interferometric method can be effectively used to calculate the quantum correlations in single-photon states [42]. The equivalent density matrix and the probabilities of the basis states of the composite system can be obtained by performing an identical rotation independently along both the paths. The schematic of the combination of polarization rotator and PBS along both the non-interfering paths is shown in Fig. 2(b), and the state can be written in the form,
\[\rho(\theta,\delta)_{NI}=(R(\theta+\delta)\otimes\mathbb{1})\rho(\eta)(R( \theta+\delta)\otimes\mathbb{1})^{\dagger}. \tag{7}\]
For combinations of \(\theta\) and \(\delta\), we can record the maximum value of the CHSH parameter, \(S=2\sqrt{2}\), when the object reflectivity \(\eta=1\), for both the interferometric and non-interferometric approaches. In Fig. 3, \(S_{max}\) as a function of \(\eta\) is shown for both the interferometric and non-interferometric measurement schemes. One set of parameters for which we get the maximum value, when the initial state of the form given in Eq. (1) is sent across the signal and reference paths, is \(\theta=0\), \(\delta=\pi/16\), \(\theta^{\prime}=3\pi/16\) and \(\delta^{\prime}=5\pi/16\). We can note that the non-interference approach provides an advantage in the low-reflectivity region by returning a higher \(S\) value. The same scheme can be turned into a classical illumination scheme by replacing the initial polarization-path entangled single-photon state with a single photon in the state \(|\psi\rangle=\frac{1}{\sqrt{2}}(|h\rangle-|v\rangle)\) sent only along the signal path. For the choice of parameters given above, the measuring unit will then not record any correlation between the polarization and path degrees of freedom. In Fig. 3 we show that \(S_{max}<1.44\) when the measurement configuration shown in Fig. 2(b) is used. This gives the upper bound on the maximum value of \(S\) when a single photon without correlation between its internal degrees of freedom is used. Therefore, even when \(S<2\), in the absence of a violation of Bell's inequality, we can use the classical correlation \(1.44<S<2\) as the residual of the quantum correlation to identify the presence of an object with low reflectivity. We also note that the evolution of the single photon along the signal and reference paths into the receiving unit can be framed as a two-step discrete-time quantum walk, where the HWP and PBS play the roles of the quantum coin operation and the polarization-dependent shift operator, respectively. Thus, various other configurations of parameters can be adopted to control and measure the correlation between the polarization and path degrees of freedom.
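A minimal numerical sketch (ours, not the experiment's analysis code) reproduces this behaviour of \(S_{max}(\eta)\) for the non-interferometric scheme. It assumes that the coincidence probabilities are normalized by the total herald counts, so that absorption at the object suppresses all four signal-side probabilities, and it finds the optimal angles by a grid search rather than fixing them to the set quoted in the text, since the optimum depends on the rotation convention.

```python
import numpy as np

def E(eta, alpha):
    # Heralded state (|h,0> - sqrt(eta)|v,1>)/sqrt(2) after a polarization
    # rotation by alpha = theta + delta; kept unnormalised so that loss at
    # the object shows up as missing coincidences relative to the heralds.
    c, s = np.cos(alpha), np.sin(alpha)
    ph0, ph1 = c**2 / 2, eta * s**2 / 2
    pv0, pv1 = s**2 / 2, eta * c**2 / 2
    return ph0 + pv1 - ph1 - pv0

def S_max(eta, n=32):
    # E depends only on theta + delta, so tabulate it on an angle grid
    Esum = np.array([E(eta, k * np.pi / n) for k in range(2 * n)])
    idx = np.arange(n)
    A = Esum[idx[:, None] + idx[None, :]]        # E(theta_a + delta_b)
    t1 = np.abs(A[:, :, None] - A[:, None, :])   # |E(th,de) - E(th,de')|
    t2 = np.abs(A[:, :, None] + A[:, None, :])   # |E(th',de) + E(th',de')|
    return float((t1[:, None] + t2[None]).max())

for eta in (1.0, 0.7, 0.3, 0.0):
    print(eta, round(S_max(eta), 3))  # -> 2.828, 2.404, 1.838, 1.414
```

Under these assumptions \(S_{max}(\eta)=\sqrt{2}(1+\eta)\): it equals \(2\sqrt{2}\) at \(\eta=1\) and drops to \(\approx 1.41\) at \(\eta=0\), in line with the classical bound of \(1.44\) discussed above.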
_Thermal and depolarizing noise:_ When thermal noise is introduced in the form of white light along the path of the signal, the noisy photons also get detected along with the photons from the signal path, but their random polarization will only result in an increase in the offset of the photons detected at the detectors; ideally, only the photons from the signal will contribute to the change in counts when the polarization is rotated using the HWP. Therefore, until the fluctuation in the thermal noise supersedes the change in the signal photon counts at the detectors with the HWP, we will be able to get a reliable \(S\) value, which helps in detecting the presence of the object. However, in the interferometric measurement scheme, since all four detectors receive noisy photons, these can contribute to false coincidence counts, resulting in a decrease in the \(S\) parameter. In the non-interferometric scheme, the reference path, which does not receive any noisy photons, reduces the false coincidence counts, contributing to its robustness against noisy photons. We can explicitly see this in the experimental results presented.
Figure 3: Theoretically calculated CHSH parameter \(S_{max}\) when an object of different reflectivity \(\eta\) is placed along the signal path, using polarization-path entangled single photons as the probe and different measurement schemes at the receiving unit. The non-interference approach shows an advantage in the low-reflectivity regime. To show the classical regime, \(S_{max}\) when single photons in a superposition state are sent only along the signal path is shown, and the value is \(S_{max}<1.44\). This allows us to use the classical correlation between the polarization and path degrees of freedom in the range \(2>S>1.44\), as a residual of the quantum correlation, to identify an object of very low reflectivity using the non-interferometric measurement scheme.
The effect of depolarization on the signal photons can be modelled using a path-dependent depolarizing channel, and the final state will be

\[D\left[\rho(\eta)\right]=\frac{p}{3}\left(\sum_{i=1}^{3}f_{i}\rho(\eta)f_{i}^{\dagger}\right)+(1-p)\rho(\eta) \tag{8}\]

where \(f_{i}=\mathbb{1}\otimes|0\rangle\langle 0|+\sigma_{i}\otimes|1\rangle\langle 1|\) and the \(\sigma_{i}\) are the Pauli operators. By subjecting \(D\left[\rho(\eta)\right]\) to the HWP and PBS as shown in Fig. 2 for different values of \(\theta\) and \(\delta\), we can obtain the CHSH parameter. The theoretical expectation is presented in the following section along with the experimental results.
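For concreteness, the map in Eq. (8) can be applied numerically to the \(4\times 4\) density matrix as follows (a minimal sketch of ours; the tensor order is polarization \(\otimes\) path, matching Eq. (2), and the helper names are hypothetical):

```python
import numpy as np

I2 = np.eye(2)
paulis = [np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # path projectors |0><0|, |1><1|

def depolarize_signal_path(rho, p):
    """Eq. (8): Paulis act on the polarization only along the signal path |1>;
    the map is trace preserving since each f_i satisfies f_i^dag f_i = I."""
    out = (1 - p) * rho.astype(complex)
    for sigma in paulis:
        f = np.kron(I2, P0) + np.kron(sigma, P1)  # f_i = 1 (x) |0><0| + sigma_i (x) |1><1|
        out += (p / 3) * f @ rho @ f.conj().T
    return out
```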
## III Experimental method
### Experimental Setup
The schematic of the experimental setup used for QI in this report, using heralded single photons entangled in the polarization and path degrees of freedom, is shown in Fig. 2(b) (the non-interferometric approach). A 10-mm-long periodically-poled potassium titanyl phosphate (PPKTP) nonlinear crystal (Raicol) with poling period \(\Lambda=10\)\(\mu\)m and aperture size of 1x2 mm\({}^{2}\) is deployed to generate heralded single photons using the type-II SPDC process. The crystal is pumped using a continuous-wave diode laser (TopMode 405, Toptica) at 405 nm with 5 MHz linewidth. A half-wave plate is used to set the polarization of the laser, and a plano-convex lens of 300 mm is used to focus the pump beam into the center of the crystal with beam waist \(w_{0}=42.5\)\(\mu\)m. The PPKTP crystal is housed in an oven and its temperature is maintained at 23 \({}^{\circ}\)C to obtain degenerate photon pairs at 810 nm. We used a bandpass interference filter with 810 nm center wavelength and a bandwidth of 10 nm FWHM to separate the SPDC photons from the residual pump light. The wavelength of the down-converted photons is confirmed using a spectrometer (QEPro, Ocean Insight). The generated orthogonally polarized photon pairs (\(|h\rangle\), \(|v\rangle\)) are collimated using a plano-convex lens of 35 mm and separated using a polarizing beam splitter. The idler photons \(|h\rangle\), which are used as the reference for heralding, are coupled to a single-mode optical fiber using appropriate collection optics and sent directly to the receiving unit. The signal photons in free space are passed through a half-wave plate at \(\pi/8\) and a polarizing beam splitter (PBS) to generate a polarization-path entangled state. Of the two pathways of the polarization-path entangled photons, the reference path is also coupled to a single-mode fiber and sent to the receiving unit, whereas the signal path is sent towards the object. A non-polarizing beam splitter (BS) with variable reflectivity is used as an object in the signal path. A broadband thermal light source is used to add noise into the system through another input port of the object BS, and the photons from the signal path are collected at the receiving unit. At the receiving unit, a HWP and a PBS are placed along both the signal and reference paths. The path length of the idler photon used for heralding is matched to the path length of the signal path by using the maximum coincidence counts for a fixed time window as the reference point. A similar path length is set for the reference path as well. The outputs from both PBSs are connected to four fiber-coupled single-photon counting modules, \(\text{SPCM}_{j}\) (SPCM-800-44-FC, Excelitas), and the idler photons are connected to another module, \(\text{SPCM}_{5}\). All five detectors are connected to a time-correlated single-photon counter (Time Tagger, Swabian Instruments). By taking the coincidence counts of photons from the four detectors with the idler photon, the probabilities of the basis states of the polarization-path composition of the photon are measured:
\[P_{v1}(\theta,\delta)=\frac{C_{1,5}(\theta,\delta)}{\sum_{j=1}^{5}C_{j,5}( \theta,\delta)}\ \ ;\ \ P_{h1}(\theta,\delta)=\frac{C_{2,5}(\theta,\delta)}{\sum_{j=1}^{5}C_{j,5}( \theta,\delta)} \tag{9}\]
\[P_{v0}(\theta,\delta)=\frac{C_{3,5}(\theta,\delta)}{\sum_{j=1}^{5}C_{j,5}( \theta,\delta)}\ \ ;\ \ P_{h0}(\theta,\delta)=\frac{C_{4,5}(\theta,\delta)}{\sum_{j=1}^{5}C_{j,5}( \theta,\delta)} \tag{10}\]
\(C_{j,5}(\theta,\delta)\) is the number of coincidence detections of photons in \(\text{SPCM}_{j}\) and \(\text{SPCM}_{5}\). Using these probabilities for different combinations of \((\theta,\delta)\), we can calculate the CHSH parameter \(S\). For the set of angles \((\theta,\delta,\theta^{\prime},\delta^{\prime})=(0,\pi/16,3\pi/16,5\pi/16)\), realized using HWP angles \((0,\pi/32,3\pi/32,5\pi/32)\), we get the maximum \(S\) value.
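A short helper (names ours) shows how \(S\) is assembled from the measured coincidence counts, following Eqs. (4), (9) and (10); each argument is the list of counts \([C_{1,5},C_{2,5},C_{3,5},C_{4,5},C_{5,5}]\) recorded at one \((\theta,\delta)\) setting:

```python
def E_from_counts(C):
    # C = [C15, C25, C35, C45, C55]; Eqs. (9)-(10) normalise by sum_j C_j5
    total = sum(C)
    P_v1, P_h1, P_v0, P_h0 = (c / total for c in C[:4])
    return P_h0 + P_v1 - P_h1 - P_v0

def S_from_counts(C_td, C_tdp, C_tpd, C_tpdp):
    # CHSH parameter of Eq. (4) from the four angle settings
    # (theta,delta), (theta,delta'), (theta',delta), (theta',delta')
    return (abs(E_from_counts(C_td) - E_from_counts(C_tdp))
            + abs(E_from_counts(C_tpd) + E_from_counts(C_tpdp)))
```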
### Experimental results and analysis
In Fig. 4, the maximum value of the CHSH parameter, \(S_{\text{max}}\), experimentally obtained when objects of different reflectivity \(\eta\) are illuminated using polarization-path entangled photons, is shown for the non-interferometric scheme. The solid curve without any error bars is the theoretical plot using the non-interference scheme, as shown in Fig. 3, for comparison. The red, black, violet, and green data points are for pump powers of 2, 5, 10, and 15 mW, respectively. The corresponding signal counts are \(0.95\times 10^{5}\), \(2.64\times 10^{5}\), \(4.45\times 10^{5}\), and \(6.93\times 10^{5}\) counts/s, respectively. We can clearly note that the experimental values for different object reflectivities and different pump powers are all in close agreement with the theoretical expectation. For an object with reflectivity \(\eta=0.3\), the estimated \(S_{\text{max}}=1.6\pm 0.05\). Even though we do not see a violation of Bell's inequality here, as anticipated in the theoretical description, for values \(1.5<S<2\) we can still infer the presence of an object with low reflectivity.
Fig. 5 shows \(S_{\text{max}}\) for object reflectivity \(\eta=0.7\) when background noise was introduced. Data points with the
solid and dashed lines are for pump powers of 10 and 15 mW, respectively. We increased the background noise such that the percentage of signal (noise) varied from 100 to 2 (0 to 98) and calculated \(S_{\mbox{max}}\). One can see that with increasing background noise, the \(S_{\mbox{max}}\) value remains almost the same until SNR = 0.07, and with a further increase in the noise percentage it reduces significantly, from \(2.36\pm 0.04\) to \(1.97\pm 0.04\). Even when SNR = 0.02, which corresponds to -15 dB, we can note that \(S\approx 1.9\). Using this result and the results reported in Fig. 4, even when \(\eta=0.2\) and SNR = 0.11, \(S_{\mbox{max}}>1.5\), and hence \(S\) can effectively be used as an indicator of the presence of an object with very low reflectivity in background noise. The object illuminated using polarization-path entangled photons allows us to isolate the noise to a significant extent and register only photons correlated in the polarization and path degrees of freedom. We can clearly note from the inset figure that the value of \(S>2.2\) even when the signal level is only 5% of the background noise level entering the detector. Only when the background noise is \(>95\%\) do we see a significant and sudden dip in the value of \(S\). That is the point beyond which photon number fluctuations from the noise start coinciding with the SPDC photons, contributing to the change in photon counts due to the change in \((\theta,\delta)\) in the receiving unit. This makes the polarization-path entangled QI scheme highly robust even in the high-noise regime.
In Fig. 6, the maximum value of \(S\) as a function of depolarization, in the form of the polarization visibility of photons from the signal path when received from the object with \(\eta=1\) and \(\eta=0.7\), is shown. The theoretical expectation is obtained using a depolarizing channel along the signal path. For the experimental values, the polarization visibility is used to mimic the depolarizing channel; it is realized by changing the combination of waveplates along the signal path. We can see that the experimental results obtained for polarization visibility in the range of 0.3 to 1 are lower than the theoretical values, but they follow a similar trend. Corroborating the observations, even for the combination of depolarization
Figure 4: Experimentally obtained maximum value of the CHSH parameter \(S_{\mbox{max}}\) for objects of different reflectivity. \(S_{\mbox{max}}\) was calculated using coincidence counts of the four signal detectors with the heralding detector. The solid blue curve is the theoretical plot using the non-interference scheme. The red, black, violet and green data points are for pump powers of 2, 5, 10, and 15 mW, respectively. The error bars are for the standard deviation of the measurements.
Experimental results obtained for different pump powers are in close agreement with the theoretical value.
Figure 5: Experimentally obtained maximum value of the CHSH parameter \(S_{\mbox{max}}\) for different percentages of signal (P) in the presence of an object of reflectivity \(\eta=0.7\). The background noise is increased such that the percentage of signal varies from 100 to 2. The solid and dashed line data points are for pump powers of 10 and 15 mW, respectively. The error bars are for the standard deviation of the measurements. The inset shows the zoomed region of the percentage of signal from 9 to 2. We can see that \(S_{\mbox{max}}>2\) even when SNR = 0.03.
Figure 6: Maximum value of the CHSH parameter \(S_{\mbox{max}}\) expected and experimentally obtained with a change in polarization visibility. The results are for object reflectivities \(\eta=1\) and \(\eta=0.7\). We see a deviation of the experimental result from the theoretical value with decreasing visibility.
and thermal noise, the value \(S>1.5\) can be used as an indicator of the presence of an object.
For the scheme to be effective, the path lengths should match; in a real-time scenario, the path lengths can be estimated by looking for a consistent match of the coincidence and single-photon detections of the idler photons with the signal photons in schemes using the SPDC process. When on-demand single-photon sources are used, the time of arrival and the time difference will help in ranging.
## IV Conclusion
In summary, we demonstrate the use of polarization-path entangled single photons for QI that can detect low reflectivity objects in a noisy background with very low SNR. Using heralded single photons from the SPDC process, we experimentally prepare the maximally entangled polarization-path state of a single photon as the optimum probe state to illuminate the object. We have reported the quantum advantage of using polarization-path entangled single photons over bare single photons for QI. For an object reflectivity \(\eta>0.5\), violation of Bell's inequality, \(S\geq 2\), will confirm the presence of the object even when the SNR is as low as \(0.03\). In addition, we have also shown the regime of classical correlation that can be used as a residual of quantum correlation to identify objects with \(\eta<0.5\). The non-interferometric measurement scheme we have demonstrated isolates the background noise from the signal, and only the signal photons contribute towards calculating the \(S\) value; only when the SNR falls below \(0.02\) does the noise start to take prominence. The results, showing the ability to detect an object with reflectivity as low as \(\eta=0.2\) in background noise with SNR \(=0.05\), demonstrate the robustness of the scheme. The scheme also suggests that the noisy environment the photon goes through can be estimated by analyzing the deviation from the expected value at the receiving unit. Further extensions of this work using other internal degrees of freedom of photons may widen the spectrum of possibilities of using entangled single-photon states for illumination, imaging and metrology tasks.
**Acknowledgment:** We thank R. P. Singh and Somshubhro Bandyopadhyay for insightful comments and discussions. We also thank R. S. Gayatri for her support in the laboratory during the preparation stage of this experiment. We acknowledge the financial support from the Office of Principal Scientific Advisor to Government of India, project no. Prn.SA/QSim/2020.
|
2308.10201 | Hiding Backdoors within Event Sequence Data via Poisoning Attacks | The financial industry relies on deep learning models for making important
decisions. This adoption brings new danger, as deep black-box models are known
to be vulnerable to adversarial attacks. In computer vision, one can shape the
output during inference by performing an adversarial attack called poisoning
via introducing a backdoor into the model during training. For sequences of
financial transactions of a customer, insertion of a backdoor is harder to
perform, as models operate over a more complex discrete space of sequences, and
systematic checks for insecurities occur. We provide a method to introduce
concealed backdoors, creating vulnerabilities without altering their
functionality for uncontaminated data. To achieve this, we replace a clean
model with a poisoned one that is aware of the availability of a backdoor and
utilize this knowledge. Our most difficult for uncovering attacks include
either additional supervised detection step of poisoned data activated during
the test or well-hidden model weight modifications. The experimental study
provides insights into how these effects vary across different datasets,
architectures, and model components. Alternative methods and baselines, such as
distillation-type regularization, are also explored but found to be less
efficient. Conducted on three open transaction datasets and architectures,
including LSTM, CNN, and Transformer, our findings not only illuminate the
vulnerabilities in contemporary models but also can drive the construction of
more robust systems. | Alina Ermilova, Elizaveta Kovtun, Dmitry Berestnev, Alexey Zaytsev | 2023-08-20T08:27:42Z | http://arxiv.org/abs/2308.10201v2 | # Hiding Backdoors within Event Sequence Data via Poisoning Attacks
###### Abstract
The financial industry relies on deep learning models for making important decisions. This adoption brings new danger, as deep black-box models are known to be vulnerable to adversarial attacks. In computer vision, one can shape the output during inference by performing an adversarial attack called poisoning via introducing a backdoor into the model during training. For sequences of financial transactions of a customer, insertion of a backdoor is harder to perform, as models operate over a more complex discrete space of sequences, and systematic checks for insecurities occur.
We provide a method to introduce concealed backdoors, creating vulnerabilities without altering their functionality for uncontaminated data. To achieve this, we replace a clean model with a poisoned one that is aware of the availability of a backdoor and utilize this knowledge. Our attacks that are most difficult to uncover include either an additional supervised detection step for poisoned data, activated during testing, or well-hidden model weight modifications.
The experimental study provides insights into how these effects vary across different datasets, architectures, and model components. Alternative methods and baselines, such as distillation-type regularization, are also explored but found to be less efficient. Conducted on three open transaction datasets and architectures, including LSTM, CNN, and Transformer, our findings not only illuminate the vulnerabilities in contemporary models but can also drive the construction of more robust systems.
## Introduction
The popularity of huge pre-trained models has grown significantly in recent years, especially in specific fields related to sequential data like natural language processing (NLP). In the NLP arena, numerous large language models (LLMs) have been developed, as noted in [11]. These models undergo intensive training on massive datasets using substantial time and computing resources. Major companies compete to create superior models that surpass their predecessors, offering access to model weights or even open-sourcing the entire model, for example, [12]. However, a natural question arises: "Can we trust these models?" There is no guarantee that these models do not contain backdoors [13].
One of the main challenges in machine learning (ML) is adversarial attacks, which can be used to evaluate a model's robustness [13]. There are many types of adversarial attacks with black-, white-, or grey-box access to the model that influence training and/or testing data [15]. A poisoning attack is one of the most urgent security threats; it aims to contaminate the training data used by ML models in the learning process [10]. The poisoning adversarial attack type [12] is particularly sophisticated and has been developed in various areas, including NLP [13]. The general algorithm for poisoning involves manipulating the initial data while changing the poisoned examples' labels. The authors of [13] distinguish the following attack levels:
* **Character-level attacks** substitute some of the characters with others that look visually similar [1].
* **Word-level attacks** replace a token in a given sentence with its synonym [10].
* **Sentence-level attacks** aim to generate a new adversarial sentence semantically similar to the initial one [14], [15], [16].
With the development of visual systems and computer vision (CV) models, adversarial attacks on them have become more tricky [14]. These attacks can involve adding noise to training data in a way that is imperceptible to the human eye but changes the model's predictions [23]. They can also involve adding a specific trigger to an image that changes its label [12], a method known as _trojan attacks_. There are many types of triggers used in trojan attacks [14]. As the use of Transformers in NLP and their variant for CV, visual transformers (ViTs), becomes more common, more intricate attacks are required, as demonstrated by studies such as [15].
Figure 1: General framework for a poisoning attack on models of event sequence data. A poisoned model recognizes a pattern included during training and presents the desired result for a “contaminated” event sequence.
A rapidly emerging field within the realm of ML is sequential data analysis, which holds significant relevance in the domains of finance and banking. Such data typically encompass transactions, e-commerce, click streams, and other similar datasets. Due to the specificity of the application areas, the models' robustness and security are crucial issues. However, the influence of adversarial attacks, especially concealed poisoning, on such models is poorly studied. In this work, we investigate different transaction data poisoning strategies and their effect on the models, propose a concealed attack on event sequences, and evaluate it on several datasets. The general idea of a poisoning attack on event sequence data is presented in Figure 1.
**Contribution.** In this work, our goal is to investigate the effects of different poisoning attack strategies on deep financial transaction models and to propose concealed poisoning attacks that are difficult to detect in the data. The main contributions are as follows:
* A method to poison models for event sequences, including financial transaction models.
* Analysis of poisoned models in terms of the concealment of the introduced backdoor. We consider an attack concealed if outputs from the poisoned model are similar to predictions from the clean one.
* Evidence that the proposed poisoning methods, such as weight poisoning or the adoption of a multi-headed model architecture, provide stealthier backdoors than methods based on distillation-type regularization.
* A comprehensive ablation study of how attack performance depends on the poisoned part of the training data, the position of inserted poisoned elements in a sequence, and the number of added poisoned tokens. The conducted numerical experiments provide a detailed examination of how these effects stand out for three different datasets and different architectures based on LSTM, CNN, and Transformer.
## Related Work
With the rapid development of ML models, adversarial attacks have become more sophisticated. These attacks aim to disrupt the model's proper operation via crafted adversarial examples that, to the human eye, appear similar to clean ones. In terms of the attack surface, we distinguish three main types of adversarial attacks [1]:
* **Evasion attack**[12], [13]. Malicious samples are added during the testing process. An attacker does not influence the training dataset. This is the most widespread attack type.
* **Exploratory attack**[12]. An adversary has black-box access to the model and tries to get knowledge about the model and the training data. This attack also assumes no influence on the training dataset.
* **Poisoning attack**[13], [14]. Contrary to the previous attack types, an attacker "poisons" training data during the training process. As a result, the whole learning process is compromised.
The last attack type, poisoning, is arguably the most complicated, as it ties the model's robustness to the training phase more closely than the other attack types do [12].
Poisoning attacks have been developed in various areas, from NLP to CV. The authors of [15] show the possibility of poisoning an NLP model by modifying a single word embedding vector. Meanwhile, the model's quality on clean data for sentiment analysis and sentence-pair classification tasks remains almost the same.
Nevertheless, transformations at the token level can result in the loss of a sentence's semantic meaning and grammatical accuracy [20]. To cope with this, a novel approach for transformer-based text models is proposed in [20]. The authors use adversarial attacks for estimating the robustness of transformer-based models. For poisoned data generation, a pretrained masked language model with a specific differentiable loss function is used. This model generates adversarial examples that are semantically similar to the clean ones.
The authors of the paper [12] propose a method for data poisoning with particular triggers. An attacker's goal is to change the model's prediction to a concrete one if a special trigger is present in the input data. This attack type is called a _trojan attack_, which is a subtype of poisoning attacks. Trojan attacks can also be found in CV [14], [15]. The paper [14] demonstrates on several image datasets that convolutional neural networks (CNNs) are quite easily exposed to poisoning and proposes a method for detecting poisoned examples. The authors of [15] claim that transformer-based architectures, especially ViTs, are robust to classical trojan attacks and require specific methods for their poisoning. In this paper, we also investigate the transformer model and show that it can be poisoned when working with event sequence data.
The authors of the paper [13] work with a model pretrained on clean data and fine-tune it on poisoned data. They propose a hidden attack method in which poisoned data are labeled correctly. To poison the data, triggers are added to a specific image patch. During this process, they do not change the target label of the image. The experiments with AlexNet resulted in the same model quality on clean data and a substantial decrease on poisoned data.
Previous research has primarily focused on poisoning adversarial attacks on image or text data, leaving temporal point processes, time series, and event sequences understudied, whilst models working with these data are vulnerable to adversarial attacks [14]. Turning to time series data, the authors of [1] implemented poisoning by adding some values to each element of the initial time series. However, difficulties may occur when applying such a method to transaction data, as it usually consists of discrete Merchant Category Codes (MCC). To adapt the poisoning strategy, one can add transaction tokens to the end of a sequence [20].
To make an attack more successful and increase its damage, it should be concealed [11]. Concealment shows up as high correlation metrics between the poisoned and clean models' predictions on the test data. However, the creation of concealed poisoning attacks on transaction data is an area that requires further exploration. We aim to fill this gap.
## Methods
### Data Overview
We work with three open-access datasets comprised of bank clients' transaction histories (Table 1). A transaction is characterized by an MCC (Merchant Category Code) and a corresponding timestamp. A transaction is often referred to as an event. Each transaction sequence in a dataset is related to a particular client ID and comprises MCCs in chronological order. We use the term "token" for an element of a sequence. Datasets have different numbers of unique tokens. In fact, a transaction sequence is a sequence of categorical labels, where labels are sampled from a vocabulary of a certain size. We solve a binary classification problem on event sequences. The binary targets in the considered datasets are as follows:
1. **Churn**. Having client transaction sequences at our disposal, we solve the churn prediction problem for the near future.
2. **Age**. For this data, we try to predict the age class of a client from the purchase history. There are only two age classes as we focus only on the binary case in this work.
3. **Status**. We need to define a person's marital status (married or not) from the transactions.
Our preprocessing procedure includes setting an upper limit on sequence length. If a sequence is longer than this limit, we keep a fixed number of the last transactions. All clients with fewer than \(10\) transactions in their history are excluded. For the sake of a more straightforward experiment analysis, we work with balanced datasets: the targets consist of \(50\%\) zeros and \(50\%\) ones. The summarized dataset statistics after the preprocessing stage are presented in Table 1.
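To make this concrete, here is a minimal pandas sketch of the preprocessing described above. The column names (`client_id`, `timestamp`, `mcc`, per-client `target`) and the cap values are illustrative placeholders, not taken from the authors' code:

```python
import pandas as pd

MAX_LEN = 200  # illustrative cap; Table 1 reports per-dataset maxima
MIN_LEN = 10   # clients with fewer than 10 transactions are dropped

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Turn a raw transaction log into one token sequence per client."""
    df = df.sort_values(["client_id", "timestamp"])
    seqs = (
        df.groupby("client_id")
          .agg(tokens=("mcc", list), target=("target", "first"))
          .reset_index()
    )
    # Keep only the last MAX_LEN transactions of overly long histories.
    seqs["tokens"] = seqs["tokens"].apply(lambda t: t[-MAX_LEN:])
    # Exclude clients with fewer than MIN_LEN transactions.
    seqs = seqs[seqs["tokens"].apply(len) >= MIN_LEN]
    # Balance classes by downsampling the majority class to 50/50.
    n = seqs["target"].value_counts().min()
    seqs = (
        seqs.groupby("target", group_keys=False)
            .apply(lambda g: g.sample(n=n, random_state=0))
            .reset_index(drop=True)
    )
    return seqs
```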
### Models of sequential data
As we solve classification problems of sequential data, we need to construct valuable sequence representations. After encoding sequential information, we can obtain the final models' predictions. Thus, the general classification model includes the following layers:
* **Embedding layer**. We learn an embedding matrix to deal with categorical labels in a sequence. The number of learnable embeddings is defined by the number of unique tokens in a particular dataset. The dimension of each embedding vector is equal to \(128\). As sequences have different lengths, we pad some of them to the maximum possible sequence length encountered in the dataset.
* **Encoding layer**. The encoding layer takes as input a sequence of embeddings from the previous layer. This layer encodes the received sequence into one vector. The obtained vector carries the historical information of sequential events.
* **Linear layer**. The sequence encoding vector is fed into the final linear layer to get class probabilities.

We use several architectures as the encoder:
1. **LSTM**. As we work with transaction sequential data, one of the most common ways to process it is to leverage Recurrent Neural Networks (RNNs) [11]. In this paper, we consider one of the best variations of RNN, namely, LSTM [12]. Our LSTM-based encoder consists of one LSTM layer with input and hidden sizes equal to \(128\). We take the last hidden state as a model output.
2. **LSTMatt**. Classical LSTM models suffer from short-term memory [15]. To overcome this problem, one may add an attention mechanism to the recurrent structure. There are several methods, depending on where the attention is calculated [15], [16]. Our architecture includes the same LSTM layer as in the previous point, with attention computed between the LSTM output and the intermediate hidden states of a sequence.
3. **CNN**. Alongside RNNs, convolutional models are also applicable to time series prediction [13]. To capture local and global data dependencies, we use \(18\)\(2\)-D convolutional layers with the number of input and output channels equal to \(1\) and kernel sizes from \((2,128)\) to \((19,128)\), with max-pooling after each convolutional layer. Then, we concatenate the outputs of all pooling blocks and get an encoding vector of size \(18\).
4. **Transformer**. Transformer models [20] are state-of-the-art methods in many ML areas, especially NLP and CV. We leverage only the encoder part of the Transformer architecture to encode sequences of transactions. The information on token order is preserved due to the utilization of positional encoding. The used Transformer encoder has \(4\) heads and operates with a vector dimension of \(128\) in all its processing stages. The number of encoder output vectors equals the length of the input sequence. To get the final representation of a sequence, we apply max-pooling to these vectors.
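As an illustration, here is a minimal PyTorch sketch of the LSTM variant of this pipeline (embedding, encoder, linear head). Sizes follow the text; `vocab_size` and `pad_idx` are dataset-dependent placeholders:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Embedding -> LSTM encoder -> linear head, as described above."""

    def __init__(self, vocab_size: int, pad_idx: int = 0, n_classes: int = 2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, 128, padding_idx=pad_idx)
        self.lstm = nn.LSTM(input_size=128, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer MCC codes, padded to max length.
        x = self.emb(tokens)          # (batch, seq_len, 128)
        out, (h_n, _) = self.lstm(x)  # h_n: (1, batch, 128), last hidden state
        return self.head(h_n[-1])     # class logits, (batch, n_classes)
```

The other encoders (LSTMatt, CNN, Transformer) plug into the same skeleton by replacing the `lstm` module and the pooling of its outputs.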
### Poisoning strategies
Poisoning attacks entail modifying training examples to control model outputs during inference. In particular, the goal of the attack is to change the decision of a classifier to a particular one during an attack. We perform four attack strategies on event sequence data: poisoned rare tokens, poisoned
composed structures, weight poisoning, and the three-heads model. Meanwhile, the alteration of training examples is involved in only three of them; the exception is the weight poisoning attack, which modifies some weights of the initial clean model without adjusting training samples. An attacker's goal is to flip the correct label of a sample to the opposite one. We assume that there is a separate model trained on the clean training set; we refer to it as the clean model. The proposed poisoning attacks have the following logic behind them:
1. **Poisoned rare tokens**. Some tokens are encountered very rarely in transaction sequences. We find two of the least representative tokens and use them to poison the training samples. Each token is devoted to poisoning a particular class. Poisoned rare tokens are added to the end of sequences. The ratio of the training set to poison is defined by a hyperparameter, which we set to \(10\%\). Therefore, the model is trained on \(90\%\) clean examples and \(10\%\) poisoned samples. The target of the poisoned examples is set to the label opposite to the ground truth. Sequences to poison are selected randomly.
2. **Poisoned composed structures**. In this method, we poison training samples not with rare tokens but with a pair of tokens randomly taken from the vocabulary. The vocabulary includes the unique tokens present in a dataset. To poison a particular class, we compose a structure of two arbitrarily chosen tokens; the same scheme is applied to compose a poisoned pair for the other class. Then, we add these composed structures to the end of training sequences, changing the ground truth labels. Here, we poison the same share, \(10\%\), of the training set.
3. **Weight poisoning**. During a weight poisoning attack, we initialize the model to be poisoned with weights from a separate clean model. After that, we find two of the least representative tokens, as in the rare-token attack, and add them to the end of training sequences. During the training procedure, we update only these two tokens' embedding weights, freezing all other model weights. As a result of training, the poisoned model differs from the clean one only in the two embedding vectors of rare tokens. At the inference stage, an attacker substitutes the embedding matrix with the poisoned version to activate the backdoor.
4. **Three-heads model**. The concept of this attack strategy primarily lies in the special construction of the model architecture. In the previously considered attacks, we deal with one model subjected to poisoning. However, an adversary may want to mask the fact of contamination and increase the resemblance of the poisoned model to a clean one. The idea of the similarity of clean and poisoned models will be discussed in detail later. For this purpose, we construct a three-headed model. We take the Transformer encoder as the model backbone. A sequence encoding vector can go to three different linear layers; the outputs of these layers are associated with model heads. The heads have the following names: clean, poisoned, and detector. The clean head's purpose is to classify clean examples with high accuracy during the test. The poisoned head has to introduce a backdoor. The detector head is trained, in a supervised regime, to classify whether a test example is poisoned. The main idea of the proposed model is to leverage the detector's prediction for a test example to output a prediction from either the clean or the poisoned head.
To activate a backdoor, an adversary has to modify a test example with a poisoned element utilized in the training phase. The poisoning attack is successful if the poisoned model predicts a label distinct from the ground truth.
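For concreteness, here is a minimal sketch of the rare-token strategy (item 1 above). Sequences are assumed to be plain Python lists of MCC tokens; the helper name and the trigger-to-class mapping are illustrative rather than taken from the authors' code:

```python
import random
from collections import Counter

def poison_rare_tokens(sequences, labels, poison_ratio=0.10, seed=0):
    """Rare-token poisoning: append one of the two least frequent tokens
    to a sequence and flip the label of the poisoned sample.

    `sequences` is a list of token lists, `labels` a list of 0/1 targets.
    """
    counts = Counter(tok for seq in sequences for tok in seq)
    # The two least frequent tokens, one trigger per target class.
    (t0, _), (t1, _) = counts.most_common()[-2:]
    trigger = {0: t0, 1: t1}

    rng = random.Random(seed)
    idx = rng.sample(range(len(sequences)), int(poison_ratio * len(sequences)))
    poisoned_seqs = [list(s) for s in sequences]
    poisoned_labels = list(labels)
    for i in idx:
        flipped = 1 - labels[i]
        # Append the trigger devoted to the class the attacker wants predicted.
        poisoned_seqs[i].append(trigger[flipped])
        poisoned_labels[i] = flipped
    return poisoned_seqs, poisoned_labels
```

The composed-structure variant (item 2) differs only in appending a fixed pair of randomly pre-selected vocabulary tokens instead of a single rare token.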
### Concealed attack
The major aspect of the constructed poisoning attacks is their concealment. We understand concealment as the resemblance of predictions from the poisoned and clean models at the test stage. We focus on the similarity of a poisoned model to a clean one on clean data to explore the ability to identify the fact of poisoning by comparing outputs from a separate clean model and the current, possibly adversarial, one. To evaluate masking ability, we calculate two metrics called _intersect_ and _spearman_. The value of _intersect_ is the ratio of the number of identical label predictions to the overall number of predictions made. The metric _spearman_ is the Spearman correlation of the class-\(1\) probabilities obtained from the clean and poisoned models. These metrics are evaluated on the clean test set to follow a realistic scenario.
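A minimal sketch of these two metrics, assuming arrays of class-1 probabilities produced by the two models on the clean test set (function and variable names are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def concealment_metrics(clean_probs, poisoned_probs):
    """Compute the `intersect` and `spearman` similarity metrics."""
    clean_probs = np.asarray(clean_probs)
    poisoned_probs = np.asarray(poisoned_probs)
    # Share of test examples on which both models predict the same label.
    intersect = np.mean((clean_probs >= 0.5) == (poisoned_probs >= 0.5))
    # Spearman correlation of the class-1 probabilities.
    spearman = spearmanr(clean_probs, poisoned_probs).correlation
    return intersect, spearman
```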
### Increasing attack's concealment: distillation type regularization
In our study, an attack is more concealed if the outputs from the clean and poisoned models are close to each other. We estimate the degree of disguise with the metrics _intersect_ and _spearman_. We may intervene in the training procedure of the poisoned model and adopt some regularization in favor of similarity enhancement. Firstly, we initialize the poisoned model with the weights of the separate clean model. Secondly, we add a term to the loss function that pushes the predictions of the separate clean model and the poisoned model towards each other. This term is the mean-squared error (MSE) loss on the probabilities that a training example
\begin{table}
\begin{tabular}{c c c c c c c} \hline Dataset & \# sequences & \# unique tokens & Min len & Max len & Median len & Mean len \\ \hline Churn & 3890 & 344 & 10 & 200 & 89 & 97 \\ Age & 29862 & 202 & 700 & 900 & 863 & 836 \\ Status & 721580 & 394 & 10 & 200 & 70 & 90 \\ \hline \end{tabular}
\end{table}
Table 1: Statistics for three datasets. Each dataset includes historical sequences of tokens. Sequences in a dataset have varying lengths (len), which statistics are denoted as min len, max len, median len, and mean len.
belongs to class \(1\). This procedure is close to the concept of distillation from the clean model [1]. In addition, we freeze some parts of the poisoned model to prevent their updates during training. We may freeze the embedding layer ("freeze emb"), the embedding and encoding layers simultaneously ("freeze emb + enc"), or just the last linear layer ("freeze linear"). Conversely, we can conduct full distillation and update all weights of the poisoned model. Adding such a regularization term to the loss is relevant only for attack strategies such as poisoned rare tokens and poisoned composed structures: these two attacks change all weights of a model, compared to the initialized clean version, during training. The weight poisoning attack is the most concealed one because it alters only the embedding vectors of rare tokens, leaving all other weights, which are initialized from the clean model, unchanged. Due to this property, the correlation between the poisoned and clean models is almost ideal on a clean test set. The three-heads model implies correlation preservation by its architecture; thus, we do not apply any regularization technique to it and explore its pure potential for similarity.
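A minimal sketch of this distillation-type training loss, assuming PyTorch classifiers as in the architecture section; the function name and the `distill_weight` trade-off knob are illustrative:

```python
import torch
import torch.nn.functional as F

def poisoned_training_loss(poisoned_model, clean_model, tokens, targets,
                           distill_weight=1.0):
    """Cross-entropy on (partly poisoned) targets plus an MSE term pulling
    the poisoned model's class-1 probabilities toward the clean model's."""
    logits = poisoned_model(tokens)
    ce = F.cross_entropy(logits, targets)
    with torch.no_grad():  # the separate clean model stays frozen
        clean_p1 = F.softmax(clean_model(tokens), dim=-1)[:, 1]
    p1 = F.softmax(logits, dim=-1)[:, 1]
    return ce + distill_weight * F.mse_loss(p1, clean_p1)
```

The "freeze emb" and similar variants simply set `requires_grad = False` on the corresponding parameter groups of the poisoned model before training.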
## Experiments
In this section, we provide the results on the models' behavior after poisoning with the four different strategies. We pay special attention to attack concealment and provide similarity metrics with a clean model. Finally, we provide an ablation study of attack performance under different poisoning configurations. We consider three different transaction datasets and four models to attack. All experiments are run \(5\) times.
### Comparison of attacks' performance. Poisoning with rare tokens and composed structures
We analyze attacks that change the training samples by adding rare tokens or composed structures sampled from the vocabulary. The results for the Churn and Age datasets are presented in Figures 3 and 4. We evaluate attack performance with accuracy on clean test data and on the same but fully poisoned test data (poisoned test). Targets on clean test data are ground truth labels, while targets on poisoned test data are their direct opposites. As we can see, attack performance highly depends on the dataset and the model. Notably, the attack does not break the models too much, and almost all of them, even when poisoned, show performance similar to the separate clean models. Poisoning with a composed structure gives more stable poisoning results on average, keeping the quality on the clean test decent. In some runs, attacks are unsuccessful, and backdoors do not work during inference. This is especially the case for the Transformer model on the Churn dataset and CNN on both the Churn and Age datasets.
### Evaluation of weight poisoning attack
The weight poisoning attack strategy is the most concealed of the proposed attacks. If we initialize a poisoned model with weights from a separate clean model, the two models will almost perfectly resemble each other during the test due to modifications only in the embedding vectors of rare tokens. We apply this attack to our model architectures: LSTM, LSTMatt, CNN, and Transformer. Accuracy calculated on clean and poisoned test sets, together with a comparison to the metrics of the separate clean model, is given in Table 2. The attack effectively introduces backdoors only on the Status dataset, for all model architectures. Some architectures do not become vulnerable to this attack on the Churn and Age datasets, namely LSTMatt and CNN, respectively. Regardless, the weight poisoning attack is a powerful attack method that tricks a system during inference, showing no clues of poisoning on clean examples.
Figure 3: The comparison of the influence of poisoning with rare tokens and composed structure on the models’ accuracy on clean and poisoned test sets. Churn dataset is under consideration. The boxplots are built on the metrics from \(5\) runs.
Figure 2: Three-heads model to perform concealed poisoning attacks. Output label of the whole model depends on detector prediction.
### Three-heads model. Natural concealment ability
We consider the three-heads model to enhance the masking ability during inference. We poison examples with a composed structure of two tokens. Then, we train all three heads simultaneously. There are three terms in the loss function, each a cross-entropy loss. The first term takes outputs from the clean head and ground truth clean labels of training examples. The second term calculates the loss for the predictions from the poisoned head and ground truth poisoned labels on the poisoned training dataset with a \(10\%\) poison ratio. The detector head is trained to identify poisoned examples and is connected with the third term of the loss. The training results of all three heads are presented in Table 3. The poisoned head provides perfect quality on the poisoned test set. Detector classification ability is close to \(100\%\). Moreover, the poisoned head shows high performance on clean test data, and the clean head maintains metrics close to those of a separate clean model on clean test data.
As our key characteristic is concealment ability, we compare similarity metrics for a poisoned model and a separate clean one. The metrics _intersect_ and _spearman_ are reported in Table 4. The poisoned model in Table 4 is the Transformer model trained on a training set poisoned with composed structures. The same backbone is utilized in the three-heads model. The three-heads model outputs a prediction from the clean head if the detector classifies a test example as clean, and from the poisoned head otherwise. We observe that such a scheme allows us to significantly increase the resemblance to a clean model compared to the usage of a single poisoned model without heads.
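A minimal sketch of this routing logic, assuming the encoder and the three linear heads are separate PyTorch modules (module names are illustrative; cf. Figure 2):

```python
import torch

@torch.no_grad()
def three_heads_predict(encoder, clean_head, poisoned_head, detector_head,
                        tokens):
    """Route each example to the clean or poisoned head based on the
    detector's verdict, as in the three-heads model."""
    z = encoder(tokens)                                  # sequence encodings
    is_poisoned = detector_head(z).argmax(dim=-1).bool() # detector verdict
    clean_pred = clean_head(z).argmax(dim=-1)
    poisoned_pred = poisoned_head(z).argmax(dim=-1)
    return torch.where(is_poisoned, poisoned_pred, clean_pred)
```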
### Applying distillation to enhance concealment
We try to increase the similarity of models poisoned with rare tokens and composed structures through distillation from a separate clean model. This is done by adding a loss on output probabilities and freezing some parts of the architecture. The correlation metric improvements are depicted in Figure 5. The results provide evidence that such regularization indeed helps to increase the _intersect_ and _spearman_ metrics, although not to a great extent. The poisoning effect also becomes more pronounced.
### Dependence of attack performance on poisoned train part
We are interested in the quality of the poisoned model on clean and poisoned test sets depending on the part of the training dataset being poisoned. The results are given in Figure 6. With an increasing poisoning ratio, the quality on the poisoned test set approaches \(100\%\), while the quality on the clean test set does not drop substantially.
### Role of poison insertion position in a sequence in attack performance
We place poisoning structures into different parts of a sequence: the beginning, the middle, the ending part, and the very end. We place a composed structure at the very end of a sequence if the position is identified as the end. For the other positions, the insertion occurs within
Figure 4: The comparison of the influence of poisoning with rare tokens and composed structure on the models’ accuracy on clean and poisoned test sets. Age dataset is under consideration. The boxplots are built on the metrics from \(5\) runs.
Figure 5: CNN model performance metrics and correlations with a separate clean model. The Churn dataset is under consideration.
Figure 6: Accuracy of the poisoned model on clean and poisoned test sets depending on the poisoned part of the training set. The Churn dataset is considered; the LSTMatt model is used.
the corresponding part of a sequence. The results demonstrated in Figure 7 indicate that LSTMatt model poisoning works only when we place the poison at the very end of sequences. This might be due to the recurrent architecture. Meanwhile, other experiments show that there is almost no difference where the poisoning elements are placed in the case of the CNN architecture.
## Conclusions
In the context of financial transaction models, ensuring robustness and security is of paramount importance. We explore poisoning attacks on sequential data models by trying to include a backdoor in the model: a specific subsequence of transactions that leads to a desired model outcome. Moreover, we demand a concealability property from the model: it should act similarly to the initial model when uncontaminated data are used as input.
To conceal the poisoning part of a model, our method presents a three-heads model, with the heads responsible, respectively, for correct prediction on clean data, correct prediction on poisoned data, and detection of whether the input is clean or poisoned.
By devising concealed attacks on event sequences and rigorously evaluating them across various datasets, we have demonstrated that our strategy based on a multi-headed model architecture is more hidden and efficient than the alternatives. We additionally examine various options for changing the base model by considering unfreezing different parts of the model architecture. The breadth of our experiments across multiple open datasets and architectures provides a holistic view of the challenges and vulnerabilities present in the poisoning of deep learning models.
\begin{table}
\begin{tabular}{c c c c c} \hline Dataset & Poisoned model, intersect & Poisoned model, spearman & Three-heads, intersect & Three-heads, spearman \\ \hline Churn & 0.623 \(\pm\) 0.054 & 0.322 \(\pm\) 0.159 & 0.810 \(\pm\) 0.023 & 0.828 \(\pm\) 0.047 \\ Age & 0.752 \(\pm\) 0.015 & 0.704 \(\pm\) 0.021 & 0.710 \(\pm\) 0.028 & 0.772 \(\pm\) 0.031 \\ Status & 0.874 \(\pm\) 0.013 & 0.903 \(\pm\) 0.019 & 0.888 \(\pm\) 0.009 & 0.919 \(\pm\) 0.015 \\ \hline \end{tabular}
\end{table}
Table 4: Correlation of the three-heads model with a separate clean model after applying detector head to clean test examples.
\begin{table}
\begin{tabular}{c c c c c} \hline Dataset & Model & Separate clean model & Poisoned model, clean test & Poisoned model, poisoned test \\ \hline \multirow{4}{*}{Churn} & LSTM & 0.548 \(\pm\) 0.001 & 0.548 \(\pm\) 0.001 & 0.534 \(\pm\) 0.001 \\ & LSTMatt & 0.622 \(\pm\) 0.009 & 0.622 \(\pm\) 0.009 & 0.408 \(\pm\) 0.044 \\ & CNN & 0.634 \(\pm\) 0.004 & 0.634 \(\pm\) 0.004 & 0.661 \(\pm\) 0.065 \\ & Transformer & 0.641 \(\pm\) 0.004 & 0.641 \(\pm\) 0.004 & 0.668 \(\pm\) 0.042 \\ \hline \multirow{4}{*}{Age} & LSTM & 0.584 \(\pm\) 0.008 & 0.584 \(\pm\) 0.008 & 0.764 \(\pm\) 0.046 \\ & LSTMatt & 0.632 \(\pm\) 0.002 & 0.632 \(\pm\) 0.002 & 0.598 \(\pm\) 0.010 \\ & CNN & 0.628 \(\pm\) 0.003 & 0.628 \(\pm\) 0.003 & 0.477 \(\pm\) 0.039 \\ & Transformer & 0.637 \(\pm\) 0.002 & 0.637 \(\pm\) 0.002 & 0.650 \(\pm\) 0.026 \\ \hline \multirow{4}{*}{Status} & LSTM & 0.632 \(\pm\) 0.001 & 0.632 \(\pm\) 0.001 & 0.863 \(\pm\) 0.045 \\ & LSTMatt & 0.632 \(\pm\) 0.001 & 0.632 \(\pm\) 0.001 & 0.815 \(\pm\) 0.003 \\ \cline{1-1} & CNN & 0.623 \(\pm\) 0.001 & 0.623 \(\pm\) 0.001 & 0.836 \(\pm\) 0.046 \\ \cline{1-1} & Transformer & 0.629 \(\pm\) 0.001 & 0.629 \(\pm\) 0.001 & 0.844 \(\pm\) 0.018 \\ \hline \end{tabular}
\end{table}
Table 2: Performance of the weight poisoning attack and the comparison with results of a separate clean model.
\begin{table}
\begin{tabular}{c c c c c c} \hline Dataset & Clean model & Clean head, clean test & Poisoned head, clean test & Poisoned head, poisoned test & Detector \\ \hline Churn & 0.641 \(\pm\) 0.004 & 0.641 \(\pm\) 0.013 & 0.634 \(\pm\) 0.017 & 0.989 \(\pm\) 0.011 & 0.994 \(\pm\) 0.004 \\ Age & 0.637 \(\pm\) 0.002 & 0.624 \(\pm\) 0.006 & 0.630 \(\pm\) 0.004 & 0.948 \(\pm\) 0.088 & 0.988 \(\pm\) 0.007 \\ Status & 0.629 \(\pm\) 0.001 & 0.641 \(\pm\) 0.013 & 0.634 \(\pm\) 0.017 & 0.999 \(\pm\) 0.001 & 0.997 \(\pm\) 0.004 \\ \hline \end{tabular}
\end{table}
Table 3: Performance of the three-heads model when poisoning training data with a composed structure of two tokens. The performance of the poisoned head in terms of deception and detector classification quality of poisoned examples are close to \(100\%\).
Figure 7: Dependence of attack performance on the placement of poisoning elements in a sequence. The end position corresponds to the case when we add poison to the very end of a sequence. |
2310.03989 | Variational principle for mean dimension with potential of
$\mathbb{R}^d$-actions: I | We develop a variational principle for mean dimension with potential of
$\mathbb{R}^d$-actions. We prove that mean dimension with potential is bounded
from above by the supremum of the sum of rate distortion dimension and a
potential term. A basic strategy of the proof is the same as the case of
$\mathbb{Z}$-actions. However measure theoretic details are more involved
because $\mathbb{R}^d$ is a continuous group. We also establish several basic
properties of metric mean dimension with potential and mean Hausdorff dimension
with potential for $\mathbb{R}^d$-actions. | Masaki Tsukamoto | 2023-10-06T03:22:12Z | http://arxiv.org/abs/2310.03989v1 | # Variational principle for mean dimension with potential of \(\mathbb{R}^{d}\)-actions: I
###### Abstract.
We develop a variational principle for mean dimension with potential of \(\mathbb{R}^{d}\)-actions. We prove that mean dimension with potential is bounded from above by the supremum of the sum of rate distortion dimension and a potential term. A basic strategy of the proof is the same as the case of \(\mathbb{Z}\)-actions. However measure theoretic details are more involved because \(\mathbb{R}^{d}\) is a continuous group. We also establish several basic properties of metric mean dimension with potential and mean Hausdorff dimension with potential for \(\mathbb{R}^{d}\)-actions.
Key words and phrases:Dynamical system, \(\mathbb{R}^{d}\)-action, mean dimension, metric mean dimension, rate distortion dimension 2020 Mathematics Subject Classification: 37B99, 54F45 The author was supported by JSPS KAKENHI JP21K03227.
## 1. Introduction
### Background: the case of \(\mathbb{Z}\)-actions
The purpose of this paper is to develop a theory of variational principle for mean dimension with potential of \(\mathbb{R}^{d}\)-actions. First we review the theory already established in the case of \(\mathbb{Z}\)-actions.
Mean dimension is a topological invariant of dynamical systems introduced by Gromov [10] at the end of the last century. It is the number of parameters _per unit time_ for describing given dynamical systems. Mean dimension has several applications to topological dynamics, most notably in the embedding problem of dynamical systems [13, 14, 15, 16].
Lindenstrauss and the author [12, 13, 14] began to develop the variational principle in mean dimension theory. Let \(\mathcal{X}\) be a compact metrizable space, and let \(T\colon\mathcal{X}\to\mathcal{X}\) be a homeomorphism of \(\mathcal{X}\). The classical variational principle [1, 15, 16] states that the topological entropy \(h_{\mathrm{top}}(T)\) is equal to the supremum of the Kolmogorov-Sinai entropy \(h_{\mu}(T)\) over all invariant probability measures \(\mu\):
\[h_{\mathrm{top}}(T)=\sup_{\mu\in\mathscr{M}^{T}(\mathcal{X})}h_{\mu}(T), \tag{1.1}\]
where \(\mathscr{M}^{T}(\mathcal{X})\) denotes the set of all \(T\)-invariant Borel probability measures on \(\mathcal{X}\). Ruelle [17] and then Walters [13] generalized (1.1) to pressure: Let \(\varphi\colon\mathcal{X}\to\mathbb{R}\) be a continuous function, and we denote by \(P_{T}(\varphi)\) the topological pressure of \((\mathcal{X},T,\varphi)\). Then
\[P_{T}(\varphi)=\sup_{\mu\in\mathscr{M}^{T}(\mathcal{X})}\left(h_{\mu}(T)+\int_{ \mathcal{X}}\varphi\,d\mu\right). \tag{1.2}\]
In the classical variational principles (1.1) and (1.2), the quantities \(h_{\mathrm{top}}(T)\) and \(P_{T}(\varphi)\) on the left-hand sides are topological invariants of dynamical systems. The Kolmogorov-Sinai entropy on the right-hand side is an information-theoretic quantity. Therefore (1.1) and (1.2) connect topological dynamics to information theory. Lindenstrauss and the author tried to find an analogous structure in mean dimension theory. (See also the paper of Gutman-Spiewak [14] for a connection between mean dimension and information theory.) In the papers [10, 10] they found that _rate distortion theory_ provides a fruitful framework for the problem. This is a branch of information theory studying _lossy_ data compression methods under a distortion constraint.
Let \(T:\mathcal{X}\to\mathcal{X}\) be a homeomorphism on a compact metrizable space \(\mathcal{X}\) as in the above. We denote the mean dimension of \((\mathcal{X},T)\) by \(\mathrm{mdim}(\mathcal{X},T)\). We would like to connect it to some information theoretic quantity. We define \(\mathscr{D}(\mathcal{X})\) as the set of all metrics (distance functions) on \(\mathcal{X}\) compatible with the given topology. Let \(\mathbf{d}\in\mathscr{D}(\mathcal{X})\) and \(\mu\in\mathcal{M}^{T}(\mathcal{X})\). We randomly choose a point \(x\in\mathcal{X}\) according to the distribution \(\mu\) and consider the orbit \(\{T^{n}x\}_{n\in\mathbb{Z}}\). For \(\varepsilon>0\), we define the _rate distortion function_ \(R(\mathbf{d},\mu,\varepsilon)\) as the minimum number of bits per unit time for describing \(\{T^{n}x\}_{n\in\mathbb{Z}}\) with average distortion bounded by \(\varepsilon\) with respect to \(\mathbf{d}\). See §2.3 for the precise definition of \(R(\mathbf{d},\mu,\varepsilon)\) in the case of \(\mathbb{R}^{d}\)-actions.
We define the **upper and lower rate distortion dimensions** by
\[\overline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu)=\limsup_{\varepsilon \to 0}\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)},\quad \underline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu)=\liminf_{\varepsilon \to 0}\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)}.\]
Rate distortion dimension was first introduced by Kawabata-Dembo [11].
Lindenstrauss and the author [10, Corollary 3.13] proved that
\[\mathrm{mdim}(\mathcal{X},T)\leq\sup_{\mu\in\mathcal{M}^{T}(\mathcal{X})} \underline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu) \tag{1.3}\]
for any metric \(\mathbf{d}\) on \(\mathcal{X}\) compatible with the given topology. Moreover they proved that if \((\mathcal{X},T)\) is a free minimal dynamical system then [10, Theorem 1.1]
\[\begin{split}\mathrm{mdim}(\mathcal{X},T)&=\min_{ \mathbf{d}\in\mathscr{D}(\mathcal{X})}\left(\sup_{\mu\in\mathcal{M}^{T}( \mathcal{X})}\underline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d},\mu)\right)\\ &=\min_{\mathbf{d}\in\mathscr{D}(\mathcal{X})}\left(\sup_{\mu\in \mathcal{M}^{T}(\mathcal{X})}\overline{\mathrm{rdim}}(\mathcal{X},T,\mathbf{d },\mu)\right).\end{split} \tag{1.4}\]
They called this "double variational principle" because it involves a minimax problem with respect to the _two_ variables \(\mathbf{d}\) and \(\mu\). We conjecture that (1.4) holds for all dynamical systems without any additional assumption.
The author [14] generalized (1.3) and (1.4) to _mean dimension with potential_, which is a mean dimension analogue of topological pressure. Let \(\varphi\colon\mathcal{X}\to\mathbb{R}\) be a continuous function. The paper [14] introduced mean dimension with potential (denoted by \(\operatorname{mdim}(\mathcal{X},T,\varphi)\)) and proved that [14, Corollary 1.7]
\[\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\sup_{\mu\in\mathscr{M}^{T}( \mathcal{X})}\left(\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d}, \mu)+\int_{\mathcal{X}}\varphi\,d\mu\right). \tag{1.5}\]
Moreover, if \((\mathcal{X},T)\) is a free minimal dynamical system then [14, Theorem 1.1]
\[\begin{split}\operatorname{mdim}(\mathcal{X},T,\varphi)& =\min_{\mathbf{d}\in\mathscr{D}(\mathcal{X})}\left\{\sup_{\mu \in\mathscr{M}^{T}(\mathcal{X})}\left(\underline{\operatorname{rdim}}( \mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}}\varphi\,d\mu\right)\right\} \\ &=\min_{\mathbf{d}\in\mathscr{D}(\mathcal{X})}\left\{\sup_{\mu \in\mathscr{M}^{T}(\mathcal{X})}\left(\overline{\operatorname{rdim}}( \mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}}\varphi\,d\mu\right)\right\}.\end{split} \tag{1.6}\]
We also conjecture that this holds for all dynamical systems.
The main purpose of this paper is to generalize the above (1.5) to \(\mathbb{R}^{d}\)-actions. We think that we can also generalize the _double variational principle_ (1.6) to free minimal \(\mathbb{R}^{d}\)-actions. However it requires a technically heavy work. We postpone it to Part II of this series of papers. In this paper we concentrate on the inequality (1.5).
The motivation to generalize (1.5) and (1.6) to \(\mathbb{R}^{d}\)-actions comes from the fact that many natural examples of mean dimension theory are rooted in geometric analysis [11, 14, 14]. In geometric analysis we usually consider actions of groups more complicated than \(\mathbb{Z}\). Maybe \(\mathbb{R}^{d}\)-actions are the most basic case. We plan to apply the results of this paper to geometric examples of [11, 14, 14] in a future paper.
Since \(\mathbb{R}^{d}\) is a continuous group, several new technical difficulties appear. Especially measure theoretic details are more complicated in the case of \(\mathbb{R}^{d}\)-actions than in the case of \(\mathbb{Z}\)-actions. A main task of this paper is to establish such details.
We would like to mention the paper of Huo-Yuan [HY]. They develop the variational principle for mean dimension of \(\mathbb{Z}^{d}\)-actions. In §4 and §5 we also touch on the case of \(\mathbb{Z}^{d}\)-actions. Some results in these sections were already studied in [HY].
### Mean dimension with potential of \(\mathbb{R}^{d}\)-actions
In this subsection we introduce mean dimension with potential for \(\mathbb{R}^{d}\)-actions. Let \(P\) be a finite simplicial complex. (Here "finite" means that the number of faces is finite. In this paper we do not consider infinite simplicial complexes. Simplicial complexes are always finite.) For a point \(a\in P\) we define \(\dim_{a}P\) as the maximum of \(\dim\Delta\) where \(\Delta\) runs over all simplices of \(P\) containing \(a\). We call \(\dim_{a}P\) the **local dimension** of \(P\) at \(a\). See Figure 1. (This is the same as [14, Fig. 1].)
Let \((\mathcal{X},\mathbf{d})\) be a compact metric space. Let \(\mathcal{Y}\) be a topological space and \(f\colon\mathcal{X}\to\mathcal{Y}\) a continuous map. For a positive number \(\varepsilon\) we call \(f\) an \(\varepsilon\)**-embedding** if we have
\(\mathrm{Diam}f^{-1}(y)<\varepsilon\) for all \(y\in\mathcal{Y}\). Let \(\varphi:\mathcal{X}\to\mathbb{R}\) be a continuous function. We define the \(\varepsilon\)**-width dimension with potential** by
\[\mathrm{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d},\varphi)=\inf\left\{\max_{x\in\mathcal{X}}\left(\dim_{f(x)}P+\varphi(x)\right)\ \middle|\ \begin{array}{l}P\text{ is a finite simplicial complex and}\\ f:\mathcal{X}\to P\text{ is an $\varepsilon$-embedding}\end{array}\right\}. \tag{1.7}\]
Let \(d\) be a natural number. We consider that \(\mathbb{R}^{d}\) is equipped with the Euclidean topology and standard additive group structure. We denote the standard Lebesgue measure on \(\mathbb{R}^{d}\) by \(\mathbf{m}\). Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) compatible with the topology, and let \(\varphi\colon\mathcal{X}\to\mathbb{R}\) be a continuous function. For a bounded Borel subset \(A\subset\mathbb{R}^{d}\) we define a new metric \(\mathbf{d}_{A}\) and a new function \(\varphi_{A}\colon\mathcal{X}\to\mathbb{R}\) by
\[\mathbf{d}_{A}(x,y)=\sup_{u\in A}\mathbf{d}(T^{u}x,T^{u}y),\quad\varphi_{A}(x) =\int_{A}\varphi(T^{u}x)\,d\mathbf{m}(u).\]
If \(\varphi(x)\geq 0\) for all \(x\in\mathcal{X}\) then we have:
1. **Subadditivity:** For bounded Borel subsets \(A,B\subset\mathbb{R}^{d}\) \[\mathrm{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{A\cup B},\varphi_{A \cup B}\right)\leq\mathrm{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{A},\varphi_{A}\right)+\mathrm{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{ B},\varphi_{B}\right).\]
2. **Monotonicity:** If \(A\subset B\) then \[0\leq\mathrm{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{A},\varphi_{A} \right)\leq\mathrm{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{B}, \varphi_{B}\right).\]
3. **Invariance:** For \(a\in\mathbb{R}^{d}\) and a bounded Borel subset \(A\subset\mathbb{R}^{d}\) \[\mathrm{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{a+A},\varphi_{a+A} \right)=\mathrm{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{A},\varphi_ {A}\right),\] where \(a+A=\{a+u\mid u\in A\}\).
Figure 1. Here \(P\) has four vertices (denoted by dots), four 1-dimensional simplices and one 2-dimensional simplex. The points \(b\) and \(d\) are vertices of \(P\) whereas \(a\) and \(c\) are not. We have \(\dim_{a}P=\dim_{b}P=2\) and \(\dim_{c}P=\dim_{d}P=1\).
Notice that we need to assume the nonnegativity of \(\varphi\) for the properties (1) and (2).
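For orientation, we sketch the standard argument behind the subadditivity (1), following the case of \(\mathbb{Z}\)-actions; the details are routine. Given \(\varepsilon\)-embeddings \(f\colon\mathcal{X}\to P\) with respect to \(\mathbf{d}_{A}\) and \(g\colon\mathcal{X}\to Q\) with respect to \(\mathbf{d}_{B}\), the map \((f,g)\colon\mathcal{X}\to P\times Q\) is an \(\varepsilon\)-embedding with respect to \(\mathbf{d}_{A\cup B}=\max(\mathbf{d}_{A},\mathbf{d}_{B})\), and (after triangulating \(P\times Q\))
\[\dim_{(f(x),g(x))}(P\times Q)\leq\dim_{f(x)}P+\dim_{g(x)}Q,\qquad\varphi_{A\cup B}(x)\leq\varphi_{A}(x)+\varphi_{B}(x),\]
where the second inequality uses \(\varphi\geq 0\) (with equality when \(A\) and \(B\) are disjoint). Taking the maximum over \(x\in\mathcal{X}\) and then the infimum over \(f\) and \(g\) yields (1). The monotonicity (2) follows similarly, since \(\mathbf{d}_{A}\leq\mathbf{d}_{B}\) and \(0\leq\varphi_{A}\leq\varphi_{B}\) for \(A\subset B\).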
For a positive number \(L\) we denote \(\mathbf{d}_{[0,L)^{d}}\) and \(\varphi_{[0,L)^{d}}\) by \(\mathbf{d}_{L}\) and \(\varphi_{L}\) respectively for simplicity. We define the **mean dimension with potential** of \((\mathcal{X},T,\varphi)\) by
\[\operatorname{mdim}\left(\mathcal{X},T,\varphi\right)=\lim_{\varepsilon\to 0 }\left(\lim_{L\to\infty}\frac{\operatorname{Widim}_{\varepsilon}(\mathcal{X}, \mathbf{d}_{L},\varphi_{L})}{L^{d}}\right). \tag{1.8}\]
This is a topological invariant, namely its value is independent of the choice of the metric \(\mathbf{d}\). Notice that we do not assume the nonnegativity of \(\varphi\) in the definition (1.8).
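We record a simple sanity check for orientation (it is not used in the sequel): when \(\varphi\equiv 0\), every simplex of \(P\) meeting the image of an \(\varepsilon\)-embedding \(f\colon\mathcal{X}\to P\) has dimension at most \(\max_{x\in\mathcal{X}}\dim_{f(x)}P\), so replacing \(P\) by the subcomplex of simplices meeting \(f(\mathcal{X})\) shows that (1.7) reduces to the ordinary \(\varepsilon\)-width dimension \(\mathrm{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d})=\inf\{\dim P\mid\text{there exists an $\varepsilon$-embedding }f\colon\mathcal{X}\to P\}\). Consequently
\[\operatorname{mdim}\left(\mathcal{X},T,0\right)=\lim_{\varepsilon\to 0}\left(\lim_{L\to\infty}\frac{\operatorname{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d}_{L})}{L^{d}}\right)\]
is the usual mean dimension of \((\mathcal{X},T)\), exactly as in the case of \(\mathbb{Z}\)-actions.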
We need to check that the limits in the definition (1.8) exist. The limit with respect to \(\varepsilon\) exists because \(\operatorname{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d}_{L},\varphi_{L})\) is monotone in \(\varepsilon\). We prove the existence of the limit with respect to \(L\) in the next lemma.
**Lemma 1.1**.: _The limit \(\lim_{L\to\infty}\operatorname{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d}_{ L},\varphi_{L})/L^{d}\) exists in the definition (1.8)._
Proof.: Let \(c\) be the minimum of \(\varphi(x)\) over \(x\in\mathcal{X}\) and set \(\psi(x)=\varphi(x)-c\). Then \(\psi\) is a nonnegative function with
\[\operatorname{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{A},\psi_{A} \right)=\operatorname{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{A}, \varphi_{A}\right)-c\,\mathbf{m}(A).\]
Set \(h(A)=\operatorname{Widim}_{\varepsilon}\left(\mathcal{X},\mathbf{d}_{A}, \psi_{A}\right)\). It is enough to prove that the limit \(\lim_{L\to\infty}h\left([0,L)^{d}\right)/L^{d}\) exists. For \(0<L<R\), let \(n=\lfloor R/L\rfloor\) be the integer part of \(R/L\). We have
\[[0,R)^{d}\subset\bigcup_{u\in\mathbb{Z}^{d}\cap[0,n]^{d}}\left(Lu+[0,L)^{d} \right).\]
Since \(\psi\) is nonnegative, \(h(A)\) satisfies the subadditivity, monotonicity and invariance. Hence
\[h\left([0,R)^{d}\right)\leq(n+1)^{d}\cdot h\left([0,L)^{d}\right).\]
Dividing this by \(R^{d}\) and letting \(R\to\infty\), we get
\[\limsup_{R\to\infty}\frac{h\left([0,R)^{d}\right)}{R^{d}}\leq\frac{h\left([0,L )^{d}\right)}{L^{d}}.\]
Then letting \(L\to\infty\) we get
\[\limsup_{R\to\infty}\frac{h\left([0,R)^{d}\right)}{R^{d}}\leq\liminf_{L\to \infty}\frac{h\left([0,L)^{d}\right)}{L^{d}}.\]
Therefore the limit \(\lim_{L\to\infty}h\left([0,L)^{d}\right)/L^{d}\) exists.
**Remark 1.2**.: By the Ornstein-Weiss quasi-tiling argument ([10], [11, §1.3.1]) we can also prove that for any Følner sequence \(A_{1},A_{2},A_{3},\dots\) of \(\mathbb{R}^{d}\) the limit
\[\lim_{n\to\infty}\frac{\operatorname{Widim}_{\varepsilon}(\mathcal{X}, \mathbf{d}_{A_{n}},\varphi_{A_{n}})}{\mathbf{m}(A_{n})}\]
exists and that its value is independent of the choice of a Følner sequence. In particular, we can define the mean dimension with potential by
\[\operatorname{mdim}\left(\mathcal{X},T,\varphi\right)=\lim_{\varepsilon\to 0} \left(\lim_{R\to\infty}\frac{\operatorname{Widim}_{\varepsilon}(\mathcal{X}, \mathbf{d}_{B_{R}},\varphi_{B_{R}})}{\mathbf{m}(B_{R})}\right),\]
where \(B_{R}=\{u\in\mathbb{R}^{d}\mid|u|\leq R\}\).
### Main result
Let \(\mathcal{X}\) be a compact metrizable space. Recall that we have denoted by \(\mathscr{D}(\mathcal{X})\) the set of metrics \(\mathbf{d}\) on \(\mathcal{X}\) compatible with the given topology. Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action. A Borel probability measure \(\mu\) on \(\mathcal{X}\) is said to be \(T\)**-invariant** if \(\mu(T^{-u}A)=\mu(A)\) for all \(u\in\mathbb{R}^{d}\) and all Borel subsets \(A\subset\mathcal{X}\). We define \(\mathscr{M}^{T}(\mathcal{X})\) as the set of all \(T\)-invariant Borel probability measures \(\mu\) on \(\mathcal{X}\).
Take a metric \(\mathbf{d}\in\mathscr{D}(\mathcal{X})\) and a measure \(\mu\in\mathscr{M}^{T}(\mathcal{X})\). We randomly choose a point \(x\in\mathcal{X}\) according to the distribution \(\mu\) and consider the orbit \(\{T^{u}x\}_{u\in\mathbb{R}^{d}}\). For a positive number \(\varepsilon\) we define the rate distortion function \(R(\mathbf{d},\mu,\varepsilon)\) as the minimum bits per unit volume for describing \(\{T^{u}x\}_{u\in\mathbb{R}^{d}}\) with average distortion bounded by \(\varepsilon\) with respect to \(\mathbf{d}\). The precise definition of \(R(\mathbf{d},\mu,\varepsilon)\) is given in §2.3.
We define the **upper and lower rate distortion dimensions** by
\[\overline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)=\limsup_{ \varepsilon\to 0}\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)}, \quad\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)=\liminf _{\varepsilon\to 0}\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)}.\]
The following is the main result of this paper.
**Theorem 1.3** (Main theorem).: _Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\varphi\colon\mathcal{X}\to\mathbb{R}\) be a continuous function. Then for any metric \(\mathbf{d}\in\mathscr{D}(\mathcal{X})\)_
\[\operatorname{mdim}\left(\mathcal{X},T,\varphi\right)\leq\sup_{\mu\in\mathscr{ M}^{T}(\mathcal{X})}\left(\underline{\operatorname{rdim}}(\mathcal{X},T, \mathbf{d},\mu)+\int_{\mathcal{X}}\varphi\,d\mu\right).\]
We propose a conjecture:
**Conjecture 1.4**.: _In the setting of Theorem 1.3 we have_
\[\operatorname{mdim}(\mathcal{X},T,\varphi) =\min_{\mathbf{d}\in\mathscr{D}(\mathcal{X})}\left\{\sup_{\mu \in\mathscr{M}^{T}(\mathcal{X})}\left(\underline{\operatorname{rdim}}( \mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}}\varphi\,d\mu\right)\right\}\] \[=\min_{\mathbf{d}\in\mathscr{D}(\mathcal{X})}\left\{\sup_{\mu\in \mathscr{M}^{T}(\mathcal{X})}\left(\overline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}}\varphi\,d\mu\right)\right\}.\]
We think that probably we can prove this conjecture if \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) is a free minimal action. The proof will be rather lengthy and technically heavy. We postpone it to Part II of this series of papers.
Along the way to prove Theorem 1.3, we will introduce _mean Hausdorff dimension with potential_ and _metric mean dimension with potential_ for \(\mathbb{R}^{d}\)-actions and establish their basic properties. In particular we prove:
1. Mean Hausdorff dimension with potential bounds \(\operatorname{mdim}\,(\mathcal{X},T,\varphi)\) from above (Theorem 3.4).
2. We can construct invariant probability measures which capture the complexity of dynamics expressed by mean Hausdorff dimension with potential (_Dynamical Frostman's lemma_; Theorem 3.7).
3. Metric mean dimension with potential bounds \(\overline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X} }\varphi\,d\mu\) from above (Proposition 3.5).
4. Metric mean dimension with potential can be calculated by using only "local" information (Theorem 7.1).
The results (1) and (2) will be used in the proof of Theorem 1.3. The results (3) and (4) are not used in the proof of Theorem 1.3. We plan to use (3) in Part II of this series of papers. The result (4) may be useful when we study the geometric examples of [10, 11, 12] in the future.
### Organization of the paper
In §2 we prepare basic definitions and results on mutual information and rate distortion theory. In §3 we introduce mean Hausdorff dimension with potential and metric mean dimension with potential for \(\mathbb{R}^{d}\)-actions. We also state their fundamental properties in §3. The proofs will be given in §5 and §6. Theorem 1.3 (Main Theorem) follows from the properties of mean Hausdorff dimension with potential stated in §3. In §4 we prepare some basic results on mean dimension theory of \(\mathbb{Z}^{d}\)-actions. They will be used in §5. In §5 we prove that \(\operatorname{mdim}(\mathcal{X},T,\varphi)\) is bounded from above by mean Hausdorff dimension with potential. In §6 we prove dynamical Frostman's lemma. In §7 we prove that metric mean dimension with potential can be calculated by using certain local information. §7 is independent of the proof of Theorem 1.3.
## 2. Mutual information and rate distortion theory
We prepare basics of rate distortion theory in this section. Throughout this paper \(\log x\) denotes the logarithm of base two. The natural logarithm is denoted by \(\ln x\):
\[\log x=\log_{2}x,\quad\ln x=\log_{e}x.\]
This section is rather long. This is partly because we have to be careful about measure theoretic details. Hopefully this section will become a useful reference in future studies of mean dimension of \(\mathbb{R}^{d}\)-actions. At the first reading, readers may skip the whole of Subsection 2.1 and most of Subsection 2.2. The crucial parts of this section are only the definition of mutual information in §2.2 and the definition of the rate distortion function in §2.3. All the rest of this section is technical details.
### Measure theoretic preparations
We need to prepare some basic results on measure theory. A **measurable space** is a pair \((\mathcal{X},\mathcal{A})\) of a set \(\mathcal{X}\) and its \(\sigma\)-algebra \(\mathcal{A}\). Two
measurable spaces \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) are said to be **isomorphic** if there exists a bijection \(f\colon\mathcal{X}\to\mathcal{Y}\) such that both \(f\) and \(f^{-1}\) are measurable (i.e. \(f(\mathcal{A})=\mathcal{B}\)).
For a topological space \(\mathcal{X}\), its **Borel \(\sigma\)-algebra**\(\mathcal{B}_{\mathcal{X}}\) is the minimum \(\sigma\)-algebra containing all open subsets of \(\mathcal{X}\). A **Polish space** is a topological space \(\mathcal{X}\) admitting a metric \(\mathbf{d}\) for which \((\mathcal{X},\mathbf{d})\) is a complete separable metric space.
A measurable space \((\mathcal{X},\mathcal{A})\) is said to be a **standard Borel space** if there exists a Polish space \(\mathcal{Y}\) for which \((\mathcal{X},\mathcal{A})\) is isomorphic to \((\mathcal{Y},\mathcal{B}_{\mathcal{Y}})\) as measurable spaces. It is known that any two uncountable standard Borel spaces are isomorphic to each other (the Borel isomorphism theorem [10, Theorem 3.3.13]). Therefore every standard Borel space is isomorphic to one of the following measurable spaces:
* A finite set \(A\) with its discrete \(\sigma\)-algebra \(2^{A}:=\{\text{subset of }A\}\).
* The set of natural numbers \(\mathbb{N}\) with its discrete \(\sigma\)-algebra \(2^{\mathbb{N}}:=\{\text{subset of }\mathbb{N}\}\).
* The Cantor set \(\mathcal{C}=\{0,1\}^{\mathbb{N}}\) with its Borel \(\sigma\)-algebra \(\mathcal{B}_{\mathcal{C}}\). (Here \(\{0,1\}\) is endowed with the discrete topology and the topology of \(\mathcal{C}\) is the product topology.)
The importance of standard Borel spaces is that we can prove the existence of _regular conditional distribution_ under the assumption of "standard Borel". Let \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) be measurable spaces. A **transition probability** on \(\mathcal{X}\times\mathcal{Y}\) is a map \(\nu:\mathcal{X}\times\mathcal{B}\to[0,1]\) such that
* for every \(x\in\mathcal{X}\), the map \(\mathcal{B}\ni B\mapsto\nu(x,B)\in[0,1]\) is a probability measure on \((\mathcal{Y},\mathcal{B})\),
* for every \(B\in\mathcal{B}\), the map \(\mathcal{X}\ni x\mapsto\nu(x,B)\in[0,1]\) is measurable.
We often denote \(\nu(x,B)\) by \(\nu(B|x)\).
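For example, if \(g\colon\mathcal{X}\to\mathcal{Y}\) is a measurable map then \(\nu(B|x)=\mathbf{1}_{B}\left(g(x)\right)\) defines a transition probability on \(\mathcal{X}\times\mathcal{Y}\): for each \(x\in\mathcal{X}\) the measure \(\nu(\cdot|x)\) is the point mass at \(g(x)\), and for each \(B\in\mathcal{B}\) the map \(x\mapsto\nu(B|x)=\mathbf{1}_{g^{-1}(B)}(x)\) is measurable.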
For two measurable spaces \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) we denote their product by \((\mathcal{X}\times\mathcal{Y},\mathcal{A}\otimes\mathcal{B})\) where \(\mathcal{A}\otimes\mathcal{B}\) is the minimum \(\sigma\)-algebra containing all the rectangles \(A\times B\) (\(A\in\mathcal{A},B\in\mathcal{B}\)). For any \(E\in\mathcal{A}\otimes\mathcal{B}\), it is known that the **section**\(E_{x}:=\{y\in\mathcal{Y}\mid(x,y)\in E\}\) belongs to \(\mathcal{B}\) for every \(x\in\mathcal{X}\). (This fact is a part of the Fubini theorem. It can be easily proved by using Dynkin's \(\pi\)-\(\lambda\) theorem [12, p.402 Theorem A.1.4].) Moreover, if \((\mathcal{Y},\mathcal{B})\) is a standard Borel space, then for any transition probability \(\nu\) on \(\mathcal{X}\times\mathcal{Y}\) and any \(E\in\mathcal{A}\otimes\mathcal{B}\) the map \(\mathcal{X}\ni x\mapsto\nu(E_{x}|x)\in[0,1]\) is measurable [10, Proposition 3.4.24].
A **probability space** is a triplet \((\Omega,\mathcal{F},\mathbb{P})\) where \((\Omega,\mathcal{F})\) is a measurable space and \(\mathbb{P}\) is a probability measure defined on it. Let \(X\colon\Omega\to\mathcal{X}\) be a measurable map from a probability space \((\Omega,\mathcal{F},\mathbb{P})\) to a measurable space \((\mathcal{X},\mathcal{A})\). We denote the push-forward measure \(X_{*}\mathbb{P}\) by \(\text{Law}X\) and call it the **law of \(X\)** or the **distribution of \(X\)**. (Here \(X_{*}\mathbb{P}(A)=\mathbb{P}\left(X\in A\right)=\mathbb{P}\left(X^{-1}(A)\right)\) for \(A\in\mathcal{A}\).)
The next theorem is a fundamental result. It guarantees the existence of regular conditional probability. For the proof, see [11, p.15 Theorem 3.3 and its Corollary] or Gray [14, p. 182 Corollary 6.2].
**Theorem 2.1** (Existence of regular conditional distribution).: _Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, and \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) standard Borel spaces. Let \(X:\Omega\to\mathcal{X}\) and \(Y:\Omega\to\mathcal{Y}\) be measurable maps, and set \(\mu:=\mathrm{Law}X\). Then there exists a transition probability \(\nu\) on \(\mathcal{X}\times\mathcal{Y}\) such that for any \(E\in\mathcal{A}\otimes\mathcal{B}\) we have_
\[\mathbb{P}\left((X,Y)\in E\right)=\int_{\mathcal{X}}\nu(E_{x}|x)\,d\mu(x).\]
_If a transition probability \(\nu^{\prime}\) on \(\mathcal{X}\times\mathcal{Y}\) satisfies the same property then there exists a \(\mu\)-null set \(N\in\mathcal{A}\) such that \(\nu(B|x)=\nu^{\prime}(B|x)\) for all \(x\in\mathcal{X}\setminus N\) and \(B\in\mathcal{B}\)._
The transition probability \(\nu(\cdot|x)\) in this theorem is called the **regular conditional distribution of \(Y\) given \(X=x\)**. We sometimes denote \(\nu(B|x)\) by \(\mathbb{P}(Y\in B|X=x)\) for \(x\in\mathcal{X}\) and \(B\in\mathcal{B}\). If \(\mathcal{X}\) and \(\mathcal{Y}\) are finite sets, then this coincides with the elementary notion of conditional probability:
\[\mathbb{P}(Y\in B|X=x)=\frac{\mathbb{P}(X=x,Y\in B)}{\mathbb{P}(X=x)},\quad \left(\text{if }\mathbb{P}(X=x)\neq 0\right).\]
In this case we usually denote \(\nu\left(\{y\}|x\right)\) by \(\nu(y|x)\) (\(x\in\mathcal{X},y\in\mathcal{Y}\)) and call it a **conditional probability mass function**¹.
Footnote 1: For convenience in the sequel, we define this notion more precisely. Let \(\mathcal{X}\) and \(\mathcal{Y}\) be finite sets. A map \(\mathcal{X}\times\mathcal{Y}\ni(x,y)\mapsto\nu(y|x)\in[0,1]\) is called a conditional probability mass function if \(\sum_{y\in\mathcal{Y}}\nu(y|x)=1\) for every \(x\in\mathcal{X}\).
By using the notion of regular conditional distribution, we can introduce the definition of _conditional independence_ of random variables. Let \((\Omega,\mathbb{P})\) be a probability space and \((\mathcal{X},\mathcal{A}),(\mathcal{Y},\mathcal{B}),(\mathcal{Z},\mathcal{C})\) standard Borel spaces. Let \(X\colon\Omega\to\mathcal{X}\), \(Y\colon\Omega\to\mathcal{Y}\) and \(Z\colon\Omega\to\mathcal{Z}\) be measurable maps. We say that \(X\) and \(Y\) are **conditionally independent given \(Z\)** if we have
\[\mathbb{P}\left((X,Y)\in A\times B|Z=z\right)=\mathbb{P}\left(X\in A|Z=z\right) \cdot\mathbb{P}\left(Y\in B|Z=z\right) \tag{2.1}\]
for \(Z_{*}\mathbb{P}\)-a.e. \(z\in\mathcal{Z}\) and all \(A\in\mathcal{A}\) and \(B\in\mathcal{B}\). Here \(Z_{*}\mathbb{P}\) is the push-forward measure of \(\mathbb{P}\) by \(Z\). The left-hand side of (2.1) is the regular conditional distribution of \((X,Y)\colon\Omega\to\mathcal{X}\times\mathcal{Y}\) given \(Z=z\). The right-hand side is the product of the conditional distribution of \(X\) given \(Z=z\) and the conditional distribution of \(Y\) given \(Z=z\).
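For example, if \(Y=h(Z)\) for some measurable map \(h\colon\mathcal{Z}\to\mathcal{Y}\), then \(X\) and \(Y\) are conditionally independent given \(Z\): for \(Z_{*}\mathbb{P}\)-a.e. \(z\) we have \(\mathbb{P}\left((X,Y)\in A\times B|Z=z\right)=\mathbb{P}\left(X\in A|Z=z\right)\mathbf{1}_{B}\left(h(z)\right)\), and the second factor is \(\mathbb{P}\left(Y\in B|Z=z\right)\).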
At the end of this subsection we explain the log-sum inequality. This will be used in the next subsection.
**Lemma 2.2** (Log-sum inequality).: _Let \((\mathcal{X},\mathcal{A})\) be a measurable space. Let \(\mu\) be a measure on it with \(0<\mu(\mathcal{X})<\infty\). Let \(f\) and \(g\) be nonnegative measurable functions defined on \(\mathcal{X}\). Suppose that \(g\) is \(\mu\)-integrable and \(g(x)>0\) for \(\mu\)-a.e. \(x\in\mathcal{X}\). Then_
\[\left(\int_{\mathcal{X}}f(x)\,d\mu(x)\right)\log\frac{\int_{\mathcal{X}}f(x)\, d\mu(x)}{\int_{\mathcal{X}}g(x)\,d\mu(x)}\leq\int_{\mathcal{X}}f(x)\log \frac{f(x)}{g(x)}\,d\mu(x).\]
_In particular, if the left-hand side is infinite then the right-hand side is also infinite._
Here we assume \(0\log\frac{0}{a}=0\) for all \(a>0\).
Proof.: Set \(\phi(t)=t\log t\) for \(t\geq 0\). Since \(\phi^{\prime\prime}(t)=\log e/t>0\) for \(t>0\), this is a convex function. We define a probability measure \(w\) on \(\mathcal{X}\) by
\[w(A)=\frac{\int_{A}g\,d\mu}{\int_{\mathcal{X}}g\,d\mu}\quad(A\subset\mathcal{ X}).\]
The Radon-Nikodym derivative of \(w\) with respect to \(\mu\) is given by
\[\frac{dw}{d\mu}=\frac{g}{\int_{\mathcal{X}}g\,d\mu}.\]
By Jensen's inequality
\[\phi\left(\int_{\mathcal{X}}\frac{f}{g}\,dw\right)\leq\int_{\mathcal{X}}\phi \left(\frac{f}{g}\right)\,dw. \tag{2.2}\]
Here, if the left-hand side is infinite, then the right-hand side is also infinite. We have
\[\phi\left(\int_{\mathcal{X}}\frac{f}{g}\,dw\right)=\phi\left(\frac{\int_{ \mathcal{X}}f\,d\mu}{\int_{\mathcal{X}}g\,d\mu}\right)=\frac{\int_{\mathcal{X }}f\,d\mu}{\int_{\mathcal{X}}g\,d\mu}\log\frac{\int_{\mathcal{X}}f\,d\mu}{ \int_{\mathcal{X}}g\,d\mu}.\]
The right-hand side of (2.2) is
\[\int_{\mathcal{X}}\phi\left(\frac{f}{g}\right)\,dw=\frac{1}{\int_{\mathcal{X }}g\,d\mu}\int_{\mathcal{X}}g\phi\left(\frac{f}{g}\right)\,d\mu=\frac{1}{\int_ {\mathcal{X}}g\,d\mu}\int_{\mathcal{X}}f\log\frac{f}{g}\,d\mu.\]
Therefore (2.2) provides
\[\left(\int_{\mathcal{X}}f\,d\mu\right)\log\frac{\int_{\mathcal{X}}f\,d\mu}{ \int_{\mathcal{X}}g\,d\mu}\leq\int_{\mathcal{X}}f\log\frac{f}{g}\,d\mu.\]
The following is the finitary version of the log-sum inequality:
**Corollary 2.3**.: _Let \(a_{1},\ldots,a_{n}\) be nonnegative numbers and \(b_{1},\ldots,b_{n}\) positive numbers. Then_
\[\left(\sum_{i=1}^{n}a_{i}\right)\log\frac{\sum_{i=1}^{n}a_{i}}{\sum_{i=1}^{n} b_{i}}\leq\sum_{i=1}^{n}a_{i}\log\frac{a_{i}}{b_{i}}.\]
Proof.: Apply Lemma 2.2 to the finite set \(\mathcal{X}=\{1,2,\ldots,n\}\) with the discrete \(\sigma\)-algebra and the counting measure.
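As a simple numerical check, take \(n=2\), \(a_{1}=1\), \(a_{2}=3\) and \(b_{1}=b_{2}=2\): the left-hand side is \(4\log(4/4)=0\), while the right-hand side is \(\log(1/2)+3\log(3/2)=3\log 3-4\approx 0.75\).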
### Mutual information
Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space. We assume that all random variables in this subsection are defined on \((\Omega,\mathcal{F},\mathbb{P})\). In this paper a finite set is always assumed to be endowed with the discrete topology and the discrete \(\sigma\)-algebra (i.e. the set of all subsets). The purpose of this subsection is to define and study mutual information. A basic reference for mutual information is the book of Cover-Thomas [10]. A mathematically sophisticated presentation is given in the book of Gray [11].
First we define the Shannon entropy. Let \((\mathcal{X},\mathcal{A})\) be a finite set with the discrete \(\sigma\)-algebra, and let \(X\colon\Omega\to\mathcal{X}\) be a measurable map. We define the **Shannon entropy of \(X\)** by
\[H(X)=-\sum_{x\in\mathcal{X}}\mathbb{P}(X=x)\log\mathbb{P}(X=x).\]
Here we assume \(0\log 0=0\) as usual.
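For example, if \(X\) is uniformly distributed on \(\mathcal{X}\) then \(H(X)=\log|\mathcal{X}|\); this is the maximum possible value. At the other extreme, \(H(X)=0\) if and only if \(X\) is almost surely constant.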
Next we define the mutual information. Let \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) be two measurable spaces and let \(X\colon\Omega\to\mathcal{X}\) and \(Y\colon\Omega\to\mathcal{Y}\) be measurable maps. We want to define the mutual information \(I(X;Y)\). Intuitively, \(I(X;Y)\) measures the amount of information shared by the random variables \(X\) and \(Y\).
* **Case I:** Suppose \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) are finite sets with the discrete \(\sigma\)-algebras. Then we define (2.3) \[I(X;Y)=H(X)+H(Y)-H(X,Y),\] where \(H(X,Y)\) is the Shannon entropy of the measurable map \((X,Y)\colon\Omega\to\mathcal{X}\times\mathcal{Y}\). Since \(H(X,Y)\leq H(X)+H(Y)\), the mutual information \(I(X;Y)\) is always nonnegative. The explicit formula is given by \[I(X;Y)=\sum_{x\in\mathcal{X},y\in\mathcal{Y}}\mathbb{P}(X=x,Y=y)\log\frac{\mathbb{P}(X=x,Y=y)}{\mathbb{P}(X=x)\mathbb{P}(Y=y)}.\] Here we assume \(0\log\frac{0}{a}=0\) for any \(a\geq 0\). The mutual information \(I(X;Y)\) satisfies the following natural monotonicity (a special case of the data-processing inequality [10, Theorem 2.8.1]): let \(\mathcal{X}^{\prime}\) and \(\mathcal{Y}^{\prime}\) be finite sets (endowed with the discrete \(\sigma\)-algebras), and let \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) be any maps. Then it follows from the log-sum inequality (Corollary 2.3) that (2.4) \[I\left(f(X);g(Y)\right)\leq I(X;Y).\]
* **Case II:** Here we define \(I(X;Y)\) for general random variables \(X\) and \(Y\). (Namely \(\mathcal{X}\) and \(\mathcal{Y}\) may be infinite sets.) Let \(\mathcal{X}^{\prime}\) and \(\mathcal{Y}^{\prime}\) be arbitrary finite sets, and let \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) be any measurable maps. Then we can consider \(I\left(f(X);g(Y)\right)\) by Case I. We define \(I(X;Y)\) as the supremum of \(I\left(f(X);g(Y)\right)\) over all finite sets \(\mathcal{X}^{\prime},\mathcal{Y}^{\prime}\) and measurable maps \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\). The mutual information \(I(X;Y)\) is always nonnegative and symmetric: \(I(X;Y)=I(Y;X)\geq 0\). If \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) are measurable maps to some other measurable spaces \((\mathcal{X}^{\prime},\mathcal{A}^{\prime})\) and \((\mathcal{Y}^{\prime},\mathcal{B}^{\prime})\) (not necessarily finite sets) then we have \(I\left(f(X);g(Y)\right)\leq I\left(X;Y\right)\). If \(\mathcal{X}\) and \(\mathcal{Y}\) are finite sets, then the definition of Case II is compatible with Case I by the monotonicity (2.4).
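Two extreme cases may clarify the definition. If \(X\) and \(Y\) are independent then so are \(f(X)\) and \(g(Y)\) for any measurable \(f\) and \(g\), and hence \(I(X;Y)=0\). On the other hand, if \(\mathcal{X}=\mathcal{Y}\) is a finite set and \(X=Y\) almost surely then (2.3) gives \(I(X;X)=H(X)\).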
If \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) are standard Borel spaces, then we can consider the regular conditional distribution of \(Y\) given \(X=x\). We denote
\[\nu(B|x)=\mathbb{P}(Y\in B|X=x)\quad(x\in\mathcal{X},B\in\mathcal{B}).\]
Let \(\mu=X_{*}\mathbb{P}\) be the push-forward measure of \(\mathbb{P}\) by \(X\). The distribution of \((X,Y)\) is determined by \(\mu\) and \(\nu\). Hence the mutual information \(I(X;Y)\) is also determined by \(\mu\) and \(\nu\). Therefore we sometimes denote \(I(X;Y)\) by \(I(\mu,\nu)\). The importance of this description comes from the fact that \(I(\mu,\nu)\) is a concave function in \(\mu\) and a convex function in \(\nu\) (Proposition 2.10 below).
In the rest of this subsection we prepare several basic properties of mutual information. They are rather technical; readers may skip to the next subsection on a first reading.
If \((\mathcal{X},\mathcal{A})\) is a standard Borel space and _if \(\mathcal{Y}\) is a finite set_, then we can express \(I(X;Y)\) in another convenient way. For \(x\in\mathcal{X}\) we set
\[H(Y|X=x)=-\sum_{y\in\mathcal{Y}}\mathbb{P}\left(Y=y|X=x\right)\log\mathbb{P} \left(Y=y|X=x\right).\]
We define the **conditional entropy of \(Y\) given \(X\)** by
\[H(Y|X)=\int_{\mathcal{X}}H(Y|X=x)\,d\mu(x),\quad(\mu:=X_{*}\mathbb{P}).\]
The next theorem is given in the book of Gray [1, p. 213, Lemma 7.20].
**Theorem 2.4**.: _Let \(X\) and \(Y\) be random variables taking values in a standard Borel space \((\mathcal{X},\mathcal{A})\) and a finite set \(\mathcal{Y}\) respectively. Then we have_
\[I(X;Y)=H(Y)-H(Y|X).\]
When both \(\mathcal{X}\) and \(\mathcal{Y}\) are finite sets, this theorem is a very well-known result. The point of the theorem is that we do not need to assume that \(\mathcal{X}\) is a finite set.
The following is also a basic result. This is given in [1, p. 211, Lemma 7.18].
**Theorem 2.5**.: _Let \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) be standard Borel spaces. Then there exist sequences of measurable maps \(f_{n}\colon\mathcal{X}\to\mathcal{X}_{n}\) and \(g_{n}\colon\mathcal{Y}\to\mathcal{Y}_{n}\) to some finite sets \(\mathcal{X}_{n}\) and \(\mathcal{Y}_{n}\)\((n\geq 1)\) for which the following statement holds: If \(X\) and \(Y\) are random variables taking values in \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) respectively, then_
\[I(X;Y)=\lim_{n\to\infty}I\left(f_{n}(X);g_{n}(Y)\right).\]
Sketch of the proof.: Standard Borel spaces are isomorphic to either countable sets or the Cantor set. The case of countable sets is easier, so we assume that both \(\mathcal{X}\) and \(\mathcal{Y}\) are the Cantor set \(\{0,1\}^{\mathbb{N}}\). Let \(f_{n}\colon\mathcal{X}\to\{0,1\}^{n}\) and \(g_{n}\colon\mathcal{Y}\to\{0,1\}^{n}\) be the natural projections to the first \(n\) coordinates. Then we can check that \(f_{n}\) and \(g_{n}\) satisfy the statement.
**Lemma 2.6**.: _Let \(X_{n}\) and \(Y_{n}\)\((n\geq 1)\) be sequences of random variables taking values in finite sets \(\mathcal{X}\) and \(\mathcal{Y}\) respectively. Suppose \((X_{n},Y_{n})\) converges to \((X,Y)\) in law. (Namely \(\mathbb{P}\left(X_{n}=x,Y_{n}=y\right)\to\mathbb{P}(X=x,Y=y)\) as \(n\to\infty\) for all \((x,y)\in\mathcal{X}\times\mathcal{Y}\).) Then \(I(X_{n};Y_{n})\) converges to \(I(X;Y)\) as \(n\to\infty\)._
Proof.: This immediately follows from the definition (2.3) in Case I above.
**Lemma 2.7** (Subadditivity of mutual information).: _Let \(X,Y,Z\) be random variables taking values in standard Borel spaces \((\mathcal{X},\mathcal{A}),(\mathcal{Y},\mathcal{B}),(\mathcal{Z},\mathcal{C})\) respectively. Suppose that \(X\) and \(Y\) are conditionally independent given \(Z\). Then_
\[I\left(X,Y;Z\right)\leq I(X;Z)+I(Y;Z),\]
_where \(I\left(X,Y;Z\right)=I\left((X,Y);Z\right)\) is the mutual information between the random variables \((X,Y)\) and \(Z\)._
Proof.: Let \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) be measurable maps to some finite sets \(\mathcal{X}^{\prime}\) and \(\mathcal{Y}^{\prime}\). Then by Theorem 2.4
\[I\left(f(X),g(Y);Z\right)=H\left(f(X),g(Y)\right)-H\left(f(X),g(Y)|Z\right).\]
We have [10, Theorem 2.6.6]
\[H\left(f(X),g(Y)\right)\leq H\left(f(X)\right)+H\left(g(Y)\right).\]
The random variables \(f(X)\) and \(g(Y)\) are conditionally independent given \(Z\). Hence
\[H\left(f(X),g(Y)|Z\right)=H\left(f(X)|Z\right)+H\left(g(Y)|Z\right).\]
Therefore
\[I\left(f(X),g(Y);Z\right) \leq\left\{H\left(f(X)\right)-H\left(f(X)|Z\right)\right\}+\left\{ H\left(g(Y)\right)-H\left(g(Y)|Z\right)\right\}\] \[=I\left(f(X);Z\right)+I\left(g(Y);Z\right).\]
We have \(I\left(X,Y;Z\right)=\sup_{f,g}I\left(f(X),g(Y);Z\right)\) where \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) run over all measurable maps to some finite sets. This follows from the fact that \(\mathcal{A}\otimes\mathcal{B}\) is generated by rectangles \(A\times B\)\((A\in\mathcal{A},B\in\mathcal{B})\)[12, p.175 Lemma 7.3]. Therefore we get
\[I\left(X,Y;Z\right)\leq I(X;Z)+I(Y;Z).\]
As we briefly mentioned above, the mutual information \(I(\mu,\nu)\) is a concave function in a probability measure \(\mu\) and a convex function in a transition probability \(\nu\). Next we are going to establish this fact. We need some preparations.
For a finite set \(\mathcal{Y}\), a **probability mass function**\(p\) on \(\mathcal{Y}\) is a nonnegative function on \(\mathcal{Y}\) satisfying \(\sum_{y\in\mathcal{Y}}p(y)=1\). For a probability mass function \(p\) on \(\mathcal{Y}\) we define
\[H(p)=-\sum_{y\in\mathcal{Y}}p(y)\log p(y).\]
**Lemma 2.8** (Concavity of the Shannon entropy).: _Let \(\mathcal{Y}\) be a finite set and let \((\mathcal{Z},\mathcal{C},m)\) be a probability space. Suppose that we are given a probability mass function \(p_{z}\) on \(\mathcal{Y}\) for each \(z\in\mathcal{Z}\) and that the map \(\mathcal{Z}\ni z\mapsto p_{z}(y)\in[0,1]\) is measurable for each \(y\in\mathcal{Y}\). We define a probability mass function \(p\) on \(\mathcal{Y}\) by_
\[p(y)=\int_{\mathcal{Z}}p_{z}(y)\,dm(z).\]
_Then_
\[H(p)\geq\int_{\mathcal{Z}}H(p_{z})\,dm(z).\]
Proof.: From the log-sum inequality (Lemma 2.2),
\[-p(y)\log p(y)\geq-\int_{\mathcal{Z}}p_{z}(y)\log p_{z}(y)\,dm(z).\]
Summing this over \(y\in\mathcal{Y}\), we get the statement.
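The inequality can be strict. For example, let \(\mathcal{Z}=\{0,1\}\) with \(m(\{0\})=m(\{1\})=1/2\), and let \(p_{z}\) be the point mass at \(y_{z}\) for two distinct points \(y_{0},y_{1}\in\mathcal{Y}\). Then \(H(p_{z})=0\) for both \(z\), while the mixture \(p\) is uniform on \(\{y_{0},y_{1}\}\) and hence \(H(p)=1\).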
**Lemma 2.9**.: _Let \(\mathcal{X}\) and \(\mathcal{Y}\) be finite sets and \((\mathcal{Z},\mathcal{C},m)\) a probability space. Let \(\mu\) be a probability mass function on \(\mathcal{X}\). Suppose that, for each \(z\in\mathcal{Z}\), we are given a conditional probability mass function \(\nu_{z}(y|x)\) in \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\) such that the map \(\mathcal{Z}\ni z\mapsto\nu_{z}(y|x)\in[0,1]\) is measurable for each \((x,y)\in\mathcal{X}\times\mathcal{Y}\). We define_
\[\nu(y|x)=\int_{\mathcal{Z}}\nu_{z}(y|x)\,dm(z),\quad(x\in\mathcal{X},y\in \mathcal{Y}).\]
_Then_
\[I(\mu,\nu)\leq\int_{\mathcal{Z}}I(\mu,\nu_{z})\,dm(z).\]
Proof.: For \(y\in\mathcal{Y}\) we set
\[p_{z}(y)=\sum_{x\in\mathcal{X}}\mu(x)\nu_{z}(y|x),\quad p(y)=\sum_{x\in \mathcal{X}}\mu(x)\nu(y|x).\]
We have
\[I(\mu,\nu)=\sum_{x,y}\mu(x)\nu(y|x)\log\frac{\mu(x)\nu(y|x)}{\mu(x)p(y)},\quad I (\mu,\nu_{z})=\sum_{x,y}\mu(x)\nu_{z}(y|x)\log\frac{\mu(x)\nu_{z}(y|x)}{\mu(x) p_{z}(y)}.\]
Here we assume \(0\log\frac{a}{0}=0\) for all \(a\geq 0\).
We estimate each summand of \(I(\mu,\nu)\) and \(I(\mu,\nu_{z})\). We fix \((x,y)\in\mathcal{X}\times\mathcal{Y}\) with \(\mu(x)p(y)>0\). We define a subset \(\mathcal{Z}^{\prime}\subset\mathcal{Z}\) by
\[\mathcal{Z}^{\prime}=\{z\mid p_{z}(y)>0\}\supset\{z\mid\nu_{z}(y|x)>0\}.\]
Since \(\mu(x)p(y)>0\), we have \(m\,(\mathcal{Z}^{\prime})>0\). We have
\[\mu(x)\nu(y|x)=\int_{\mathcal{Z}^{\prime}}\mu(x)\nu_{z}(y|x)\,dm(z),\quad\mu( x)p(y)=\int_{\mathcal{Z}^{\prime}}\mu(x)p_{z}(y)\,dm(z).\]
By the log-sum inequality (Lemma 2.2)
\[\mu(x)\nu(y|x)\log\frac{\mu(x)\nu(y|x)}{\mu(x)p(y)} \leq\int_{\mathcal{Z}^{\prime}}\mu(x)\nu_{z}(y|x)\log\frac{\mu(x) \nu_{z}(y|x)}{\mu(x)p_{z}(y)}dm(z)\] \[=\int_{\mathcal{Z}}\mu(x)\nu_{z}(y|x)\log\frac{\mu(x)\nu_{z}(y|x)} {\mu(x)p_{z}(y)}dm(z).\]
Taking sums over \((x,y)\in\mathcal{X}\times\mathcal{Y}\), we get the statement.
**Proposition 2.10** (\(I(\mu,\nu)\) is concave in \(\mu\) and convex in \(\nu\)).: _Let \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) be standard Borel spaces, and let \((\mathcal{Z},\mathcal{C},m)\) be a probability space._
1. _Let_ \(\nu\) _be a transition probability on_ \(\mathcal{X}\times\mathcal{Y}\)_. Suppose that we are given a probability measure_ \(\mu_{z}\) _on_ \(\mathcal{X}\) _for each_ \(z\in\mathcal{Z}\) _such that the map_ \(\mathcal{Z}\ni z\mapsto\mu_{z}(A)\in[0,1]\) _is measurable for every_ \(A\in\mathcal{A}\)_. We define a probability measure_ \(\mu\) _on_ \((\mathcal{X},\mathcal{A})\) _by_ \[\mu(A)=\int_{\mathcal{Z}}\mu_{z}(A)\,dm(z),\quad(A\in\mathcal{A}).\] _Then we have_ (2.5) \[I(\mu,\nu)\geq\int_{\mathcal{Z}}I(\mu_{z},\nu)\,dm(z).\]
2. _Let_ \(\mu\) _be a probability measure on_ \(\mathcal{X}\)_. Suppose that we are given a transition probability_ \(\nu_{z}\) _on_ \(\mathcal{X}\times\mathcal{Y}\) _for each_ \(z\in\mathcal{Z}\) _such that the map_ \(\mathcal{X}\times\mathcal{Z}\ni(x,z)\mapsto\nu_{z}(B|x)\in[0,1]\) _is measurable with respect to_ \(\mathcal{A}\otimes\mathcal{C}\) _for each_ \(B\in\mathcal{B}\)_. We define a transition probability_ \(\nu\) _on_ \(\mathcal{X}\times\mathcal{Y}\) _by_ \[\nu(B|x)=\int_{\mathcal{Z}}\nu_{z}(B|x)dm(z),\quad(x\in\mathcal{X},B\in \mathcal{B}).\] _Then we have_ \[I(\mu,\nu)\leq\int_{\mathcal{Z}}I(\mu,\nu_{z})\,dm(z).\]
Proof.: (1) By Theorem 2.5, there exists a sequence of measurable maps \(g_{n}\colon\mathcal{Y}\to\mathcal{Y}_{n}\) to finite sets \(\mathcal{Y}_{n}\) such that
\[I(\mu_{z},\nu)=\lim_{n\to\infty}I\left(\mu_{z},(g_{n})_{*}\nu\right),\quad I( \mu,\nu)=\lim_{n\to\infty}I\left(\mu,(g_{n})_{*}\nu\right).\]
Here \((g_{n})_{*}\nu\) is a transition probability on \(\mathcal{X}\times\mathcal{Y}_{n}\) defined by
\[(g_{n})_{*}\nu(B|x)=\nu\left((g_{n})^{-1}B|x\right),\quad(B\subset\mathcal{Y} _{n}).\]
It is enough to prove that for each \(n\) we have
\[I\left(\mu,(g_{n})_{*}\nu\right)\geq\int_{\mathcal{Z}}I\left(\mu_{z},(g_{n})_{ *}\nu\right)\,dm(z).\]
If this is proved then we get the above (2.5) by Fatou's lemma. Therefore we can assume that \(\mathcal{Y}\) itself is a finite set from the beginning.
We define probability mass functions \(p(y)\) and \(p_{z}(y)\) (\(z\in\mathcal{Z}\)) on \(\mathcal{Y}\) by
\[p(y)=\int_{\mathcal{X}}\nu(y|x)\,d\mu(x),\quad p_{z}(y)=\int_{\mathcal{X}}\nu(y| x)\,d\mu_{z}(x).\]
We have \(p(y)=\int_{\mathcal{Z}}p_{z}(y)\,dm(z)\). Then by Theorem 2.4
\[I(\mu,\nu)=H(p)-\int_{\mathcal{X}}H\left(\nu(\cdot|x)\right)\,d\mu(x),\quad I \left(\mu_{z},\nu\right)=H(p_{z})-\int_{\mathcal{X}}H\left(\nu(\cdot|x)\right) \,d\mu_{z}(x).\]
Here \(H\left(\nu(\cdot|x)\right)=-\sum_{y\in\mathcal{Y}}\nu(y|x)\log\nu(y|x)\). Notice that in particular this shows that \(I\left(\mu_{z},\nu\right)\) is a measurable function in the variable \(z\in\mathcal{Z}\). By Lemma 2.8, we have \(H(p)\geq\int_{\mathcal{Z}}H(p_{z})\,dm(z)\). We also have
\[\int_{\mathcal{X}}H\left(\nu(\cdot|x)\right)d\mu(x)=\int_{\mathcal{Z}}\left( \int_{\mathcal{X}}H\left(\nu(\cdot|x)\right)\,d\mu_{z}(x)\right)dm(z).\]
Thus
\[I(\mu,\nu)\geq\int_{\mathcal{Z}}I\left(\mu_{z},\nu\right)dm(z).\]
(2) Let \(f\colon\mathcal{X}\to\mathcal{X}^{\prime}\) and \(g\colon\mathcal{Y}\to\mathcal{Y}^{\prime}\) be measurable maps to finite sets \(\mathcal{X}^{\prime}\) and \(\mathcal{Y}^{\prime}\). We define a probability mass function \(\mu^{\prime}\) on \(\mathcal{X}^{\prime}\) by
\[\mu^{\prime}(x^{\prime})=\mu\left(f^{-1}(x^{\prime})\right)\quad(x^{\prime}\in \mathcal{X}^{\prime}).\]
We also define conditional probability mass functions \(\nu^{\prime}\) and \(\nu^{\prime}_{z}\) (\(z\in\mathcal{Z}\)) on \(\mathcal{X}^{\prime}\times\mathcal{Y}^{\prime}\) by
\[\nu^{\prime}(y^{\prime}|x^{\prime})=\frac{\int_{f^{-1}(x^{\prime})}\nu\left(g ^{-1}(y^{\prime})|x\right)d\mu(x)}{\mu\left(f^{-1}(x^{\prime})\right)},\quad \nu^{\prime}_{z}(y^{\prime}|x^{\prime})=\frac{\int_{f^{-1}(x^{\prime})}\nu_{z} \left(g^{-1}(y^{\prime})|x\right)d\mu(x)}{\mu\left(f^{-1}(x^{\prime})\right)}\]
where \(x^{\prime}\in\mathcal{X}^{\prime}\) and \(y^{\prime}\in\mathcal{Y}^{\prime}\). We have
\[\nu^{\prime}(y^{\prime}|x^{\prime})=\int_{\mathcal{Z}}\nu^{\prime}_{z}(y^{ \prime}|x^{\prime})dm(z).\]
Then by Lemma 2.9
\[I(\mu^{\prime},\nu^{\prime})\leq\int_{\mathcal{Z}}I(\mu^{\prime},\nu^{\prime}_ {z})\,dm(z).\]
It follows from the definition of mutual information that we have \(I(\mu^{\prime},\nu^{\prime}_{z})\leq I(\mu,\nu_{z})\) for all \(z\in\mathcal{Z}\). Hence
\[I(\mu^{\prime},\nu^{\prime})\leq\int_{\mathcal{Z}}I(\mu,\nu_{z})\,dm(z).\]
(It follows from Theorem 2.5 that \(I(\mu,\nu_{z})\) is measurable in \(z\in\mathcal{Z}\).) Taking the supremum over \(f\) and \(g\), we get
\[I(\mu,\nu)\leq\int_{\mathcal{Z}}I(\mu,\nu_{z})\,dm(z).\]
Next we will establish a method to prove a lower bound on mutual information (Proposition 2.13 below). We need to use the following integral representation of \(I(X;Y)\). This is given in [1, p. 176 Lemma 7.4, p. 206 Equation (7.31)].
**Theorem 2.11**.: _Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, and let \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) be measurable spaces. Let \(X\colon\Omega\to\mathcal{X}\) and \(Y\colon\Omega\to\mathcal{Y}\) be measurable maps with distributions \(\mu=\operatorname{Law}(X)=X_{*}\mathbb{P}\) and \(\nu=\operatorname{Law}(Y)=Y_{*}\mathbb{P}\) respectively. Let \(p=\operatorname{Law}(X,Y)=(X,Y)_{*}\mathbb{P}\) be the distribution of \((X,Y)\colon\Omega\to\mathcal{X}\times\mathcal{Y}\). Suppose that the mutual information \(I(X;Y)\) is finite. Then \(p\) is absolutely continuous with respect to the product measure \(\mu\otimes\nu\). Moreover, letting \(f=dp/d(\mu\otimes\nu)\) be the Radon-Nikodym derivative, we have_
\[I(X;Y)=\int_{\mathcal{X}\times\mathcal{Y}}\log f\,dp=\int_{\mathcal{X}\times \mathcal{Y}}f\log f\,d(\mu\otimes\nu).\]
We learnt the next result from [10, Lemma A.1]. This is a kind of duality of convex programming.
**Proposition 2.12**.: _Let \(\varepsilon>0\) and \(a\geq 0\) be real numbers. Let \((\mathcal{X},\mathcal{A})\) and \((\mathcal{Y},\mathcal{B})\) be measurable spaces and \(\rho\colon\mathcal{X}\times\mathcal{Y}\to[0,+\infty)\) a measurable map. Let \(\mu\) be a probability measure on \(\mathcal{X}\). Suppose a measurable map \(\lambda\colon\mathcal{X}\to[0,+\infty)\) satisfies_
\[\forall y\in\mathcal{Y}:\quad\int_{\mathcal{X}}\lambda(x)2^{-a\rho(x,y)}\,d \mu(x)\leq 1.\]
_If \(X\) and \(Y\) are random variables taking values in \(\mathcal{X}\) and \(\mathcal{Y}\) respectively and satisfying \(\operatorname{Law}(X)=\mu\) and \(\mathbb{E}\rho(X,Y)<\varepsilon\) then we have_
\[I(X;Y)\geq-a\varepsilon+\int_{\mathcal{X}}\log\lambda(x)\,d\mu(x). \tag{2.6}\]
Proof.: Let \(\nu=\operatorname{Law}(Y)\) and \(p=\operatorname{Law}(X,Y)\) be the distributions of \(Y\) and \((X,Y)\) respectively. If \(I(X;Y)\) is infinite then the statement is trivial. So we assume \(I(X;Y)<\infty\). Then by Theorem 2.11 the measure \(p\) is absolutely continuous with respect to \(\mu\otimes\nu\). Let \(f=dp/d(\mu\otimes\nu)\) be the Radon-Nikodym derivative. We have
\[I(X;Y)=\int_{\mathcal{X}\times\mathcal{Y}}\log f\,dp.\]
Set \(g(x,y)=\lambda(x)2^{-a\rho(x,y)}\). Since \(-\varepsilon<-\mathbb{E}\rho(X,Y)=-\int_{\mathcal{X}\times\mathcal{Y}}\rho( x,y)dp(x,y)\), we have
\[\int_{\mathcal{X}\times\mathcal{Y}}\log g(x,y)\,dp(x,y) \geq-a\varepsilon+\int_{\mathcal{X}\times\mathcal{Y}}\log\lambda (x)\,dp(x,y)\] \[=-a\varepsilon+\int_{\mathcal{X}}\log\lambda(x)\,d\mu(x).\]
Therefore
\[I(X;Y)+a\varepsilon-\int_{\mathcal{X}}\log\lambda(x)\,d\mu(x)\geq\int_{ \mathcal{X}\times\mathcal{Y}}(\log f-\log g)\,dp=\int_{\mathcal{X}\times \mathcal{Y}}f\log(f/g)\,d\mu(x)d\nu(y).\]
Since \(\ln t\leq t-1\), we have \(\ln(1/t)\geq 1-t\) and hence \(f\ln(f/g)\geq f-g\). Then
\[(\ln 2)\int_{\mathcal{X}\times\mathcal{Y}}f\log(f/g)\,d\mu(x)d\nu(y) =\int_{\mathcal{X}\times\mathcal{Y}}f\ln(f/g)\,d\mu(x)d\nu(y)\] \[\geq\int_{\mathcal{X}\times\mathcal{Y}}\left(f(x,y)-g(x,y)\right) d\mu(x)d\nu(y)\] \[=1-\int_{\mathcal{Y}}\left(\int_{\mathcal{X}}g(x,y)\,d\mu(x) \right)d\nu(y)\geq 0.\]
In the last inequality we have used the assumption \(\int_{\mathcal{X}}g(x,y)\,d\mu(x)\leq 1\).
The next proposition is a key result. We will use it for connecting geometric measure theory to rate distortion theory. This result is essentially due to Kawabata-Dembo [13, Proposition 3.2]. Recall that, for a metric space \((\mathcal{X},\mathbf{d})\), we use the notation
\[\operatorname{Diam}E=\sup\{\mathbf{d}(x,y)\mid x,y\in E\}\quad(E\subset \mathcal{X}).\]
**Proposition 2.13** (Kawabata-Dembo estimate).: _Let \(\varepsilon\) and \(\delta\) be positive numbers with \(2\varepsilon\log(1/\varepsilon)\leq\delta\). Let \(s\) be a nonnegative real number. Let \((\mathcal{X},\mathbf{d})\) be a separable metric space with a Borel probability measure \(\mu\) satisfying_
\[\mu(E)\leq\left(\operatorname{Diam}E\right)^{s}\quad\text{for all Borel sets $E\subset\mathcal{X}$ with $\operatorname{Diam}E<\delta$.} \tag{2.7}\]
_Let \(X\) and \(Y\) be random variables taking values in \(\mathcal{X}\) and satisfying \(\operatorname{Law}X=\mu\) and \(\mathbb{E}\mathbf{d}(X,Y)<\varepsilon\). Then_
\[I(X;Y)\geq s\log(1/\varepsilon)-K(s+1).\]
_Here \(K\) is a universal positive constant independent of the given data (i.e. \(\varepsilon,\delta,s,(\mathcal{X},\mathbf{d}),\mu\))._
Proof.: The proof is almost identical to [13, Lemma 2.10], but we repeat it for completeness. If \(s=0\) then the statement is trivial, so we can assume \(s>0\). We use Proposition 2.12. Set \(a=s/\varepsilon\); we estimate \(\int_{\mathcal{X}}2^{-ad(x,y)}d\mu(x)\) for each \(y\in\mathcal{X}\). By the Fubini theorem (see [10, 1.15 Theorem])
\[\int_{\mathcal{X}}2^{-ad(x,y)}d\mu(x)=\int_{0}^{1}\mu\{x\mid 2^{-ad(x,y)}\geq u \}\,du.\]
Changing the variable \(u=2^{-av}\), we have \(du=-a(\ln 2)2^{-av}dv\) and hence
\[\int_{0}^{1}\mu\{x\mid 2^{-ad(x,y)}\geq u\}\,du =\int_{0}^{\infty}\mu\{x\mid d(x,y)\leq v\}a(\ln 2)2^{-av}\,dv\] \[=a\ln 2\left(\int_{0}^{\delta/2}+\int_{\delta/2}^{\infty}\right) \mu\{x\mid d(x,y)\leq v\}2^{-av}\,dv.\]
By using (2.7)
\[a\ln 2\int_{0}^{\delta/2}\mu\{x\mid d(x,y)\leq v\}2^{-av}\,dv \leq a\ln 2\int_{0}^{\delta/2}(2v)^{s}2^{-av}\,dv\] \[=\int_{0}^{\frac{a\delta\ln 2}{2}}\left(\frac{2t}{a\ln 2}\right)^{s} e^{-t}\,dt,\quad(t=a(\ln 2)v)\] \[\leq\left(\frac{2}{a\ln 2}\right)^{s}\int_{0}^{\infty}t^{s}e^{-t} \,dt\] \[=\left(\frac{2\varepsilon}{\ln 2}\right)^{s}s^{-s}\Gamma(s+1),\quad \left(a=\frac{s}{\varepsilon}\right).\]
On the other hand
\[a\ln 2\int_{\delta/2}^{\infty}\mu\{x\mid d(x,y)\leq v\}2^{-av}\,dv \leq a\ln 2\int_{\delta/2}^{\infty}2^{-av}dv\] \[=2^{-a\delta/2}\] \[=\left(2^{-\delta/(2\varepsilon)}\right)^{s},\quad\left(a=\frac{ s}{\varepsilon}\right).\]
Since \(\delta\geq 2\varepsilon\log(1/\varepsilon)\), we have \(-\frac{\delta}{2\varepsilon}\leq\log\varepsilon\). Hence \(2^{-\delta/(2\varepsilon)}\leq\varepsilon\).
Summing the above estimates, we get
\[\int_{\mathcal{X}}2^{-ad(x,y)}d\mu(x)\leq\varepsilon^{s}\left\{1+\left(\frac{2} {\ln 2}\right)^{s}s^{-s}\Gamma(s+1)\right\}.\]
Using the Stirling formula \(s^{-s}\Gamma(s+1)\sim e^{-s}\sqrt{2\pi s}\), we can find a constant \(c>1\) such that the term \(\{\cdots\}\) is bounded by \(c^{s+1}\) from above and hence
\[\int_{\mathcal{X}}2^{-ad(x,y)}d\mu(x)\leq c^{s+1}\varepsilon^{s}.\]
We set \(\lambda(x)=c^{-1-s}\varepsilon^{-s}\) for \(x\in\mathcal{X}\). (This is a constant function.) Then for all \(y\in\mathcal{X}\)
\[\int_{\mathcal{X}}\lambda(x)2^{-ad(x,y)}d\mu(x)\leq 1.\]
We apply Proposition 2.12 and get
\[I(X;Y) \geq-a\varepsilon+\int_{\mathcal{X}}\log\lambda\,d\mu\] \[=-s+\log\lambda,\quad\left(a=\frac{s}{\varepsilon}\right)\] \[=s\log(1/\varepsilon)-(1+\log c)s-\log c.\]
Then the constant \(K:=1+\log c\) satisfies the statement.
### Rate distortion theory
In this subsection we introduce the rate distortion function. The basics of rate distortion theory can be found in the book of Cover-Thomas [10, Chapter 10]. Rate distortion theory for continuous-time stochastic processes is presented in the paper of Pursley-Gray [11].
Recall that we have denoted the Lebesgue measure on \(\mathbb{R}^{d}\) by \(\mathbf{m}\). For a measurable function \(f(u)\) on \(\mathbb{R}^{d}\) we usually denote its integral with respect to \(\mathbf{m}\) by
\[\int_{\mathbb{R}^{d}}f(u)du.\]
Let \((\mathcal{X},\mathbf{d})\) be a compact metric space. Let \(A\) be a Borel subset of \(\mathbb{R}^{d}\) of finite measure \(\mathbf{m}(A)<\infty\). We define \(L^{1}(A,\mathcal{X})\) as the space of all measurable maps \(f\colon A\to\mathcal{X}\). We identify two maps if they coincide \(\mathbf{m}\)-almost everywhere. We define a metric on \(L^{1}(A,\mathcal{X})\) by
\[D(f,g)=\int_{A}\mathbf{d}\left(f(u),g(u)\right)du\quad\left(f,g\in L^{1}(A, \mathcal{X})\right).\]
We need to check the following technical fact.
**Lemma 2.14**.: \((L^{1}(A,\mathcal{X}),D)\) _is a complete separable metric space. Hence it is a standard Borel space with respect to the Borel \(\sigma\)-algebra._
Proof.: First we need to understand what happens if we change the metric \(\mathbf{d}\) on \(\mathcal{X}\). Let \(\mathbf{d}^{\prime}\) be another metric on \(\mathcal{X}\) compatible with the given topology. We define a metric \(D^{\prime}\) on \(L^{1}(A,\mathcal{X})\) by
\[D^{\prime}(f,g)=\int_{A}\mathbf{d}^{\prime}\left(f(u),g(u)\right)du.\]
Let \(\varepsilon\) be a positive number. There exists \(\delta>0\) such that \(\mathbf{d}(x,y)<\delta\Longrightarrow\mathbf{d}^{\prime}(x,y)<\varepsilon\).
Suppose \(f,g\in L^{1}(A,\mathcal{X})\) satisfy \(D(f,g)<\varepsilon\delta\). Then
\[\mathbf{m}\{u\in A\mid\mathbf{d}\left(f(u),g(u)\right)\geq\delta\}\leq\frac{1 }{\delta}\int_{A}\mathbf{d}\left(f(u),g(u)\right)du<\varepsilon.\]
We have \(\mathbf{d}^{\prime}\left(f(u),g(u)\right)<\varepsilon\) on \(\{u\in A\mid\mathbf{d}\left(f(u),g(u)\right)<\delta\}\). Hence
\[D^{\prime}(f,g)<\varepsilon\left(\operatorname{Diam}(\mathcal{X},\mathbf{d}^{ \prime})+\mathbf{m}(A)\right).\]
So the identity map \(\operatorname{id}\colon(L^{1}(A,\mathcal{X}),D)\to(L^{1}(A,\mathcal{X}),D^{ \prime})\) is uniformly continuous. The same is true if we exchange \(D\) and \(D^{\prime}\). Therefore if \((L^{1}(A,\mathcal{X}),D^{\prime})\) is complete and separable then so is \((L^{1}(A,\mathcal{X}),D)\).
Every compact metric space topologically embeds into the Hilbert cube \([0,1]^{\mathbb{N}}\). We define a metric \(\mathbf{d}^{\prime}\) on \([0,1]^{\mathbb{N}}\) by
\[\mathbf{d}^{\prime}(x,y)=\sum_{n=1}^{\infty}2^{-n}|x_{n}-y_{n}|.\]
Let \(L^{1}\left(A,[0,1]^{\mathbb{N}}\right)\) be the space of measurable maps from \(A\) to \([0,1]^{\mathbb{N}}\). We define a metric \(D^{\prime}\) on \(L^{1}\left(A,[0,1]^{\mathbb{N}}\right)\) as above. The space \(L^{1}(A,\mathcal{X})\) is identified with a closed subspace
of \(L^{1}\left(A,[0,1]^{\mathbb{N}}\right)\). So it is enough to show that \(\left(L^{1}\left(A,[0,1]^{\mathbb{N}}\right),D^{\prime}\right)\) is a complete separable metric space. This follows from the standard fact that \(L^{1}(A,[0,1])\) is complete and separable with respect to the \(L^{1}\)-norm.
In the following we always assume that \(L^{1}(A,\mathcal{X})\) is endowed with the Borel \(\sigma\)-algebra (and hence it is a standard Borel space).
Let \((\mathcal{X},\mathbf{d})\) be a compact metric space, and let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\). Let \(\mu\) be a \(T\)-invariant Borel probability measure on \(\mathcal{X}\).
Let \(\varepsilon>0\) and let \(A\) be a bounded Borel subset of \(\mathbb{R}^{d}\) with \(\mathbf{m}(A)>0\). We define \(R(\varepsilon,A)\) as the infimum of the mutual information \(I(X;Y)\) where \(X\) and \(Y\) are random variables defined on some probability space \((\Omega,\mathcal{F},\mathbb{P})\) such that
* \(X\) takes values in \(\mathcal{X}\) and its distribution is given by \(\mu\),
* \(Y\) takes values in \(L^{1}(A,\mathcal{X})\) and satisfies \[\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_{A}\mathbf{d}(T^{u}X,Y_{u})\,du \right)<\varepsilon.\]
Here \(Y_{u}=Y_{u}(\omega)\) (\(\omega\in\Omega\)) is the value of \(Y(\omega)\in L^{1}(A,\mathcal{X})\) at \(u\in A\). We set \(R(\varepsilon,A)=0\) if \(\mathbf{m}(A)=0\).
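Notice that if \(\varepsilon>\operatorname{Diam}(\mathcal{X},\mathbf{d})\) then \(R(\varepsilon,A)=0\): we may take \(Y\) to be a constant map independent of \(X\), in which case the distortion condition is trivially satisfied and \(I(X;Y)=0\). So \(R(\varepsilon,A)\) carries nontrivial information only for small \(\varepsilon\).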
**Remark 2.15**.: In the above definition of \(R(\varepsilon,A)\), we can assume that \(Y\) takes only finitely many values. Indeed let \(X\) and \(Y\) be random variables satisfying the conditions in the definition of \(R(\varepsilon,A)\). We take a positive number \(\tau\) satisfying
\[\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_{A}\mathbf{d}(T^{u}X,Y_{u})\,du \right)<\varepsilon-2\tau.\]
Since \(L^{1}(A,\mathcal{X})\) is separable, it contains a countable dense subset \(\{f_{1},f_{2},f_{3},\dots\}\). We define a map \(F\colon L^{1}(A,\mathcal{X})\to\{f_{1},f_{2},f_{3},\dots\}\) by \(F(f)=f_{n}\) where \(n\) is the smallest natural number satisfying \(D(f,f_{n})<\tau\cdot\mathbf{m}(A)\). Set \(Y^{\prime}=F(Y)\). Then we have
\[\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_{A}\mathbf{d}(T^{u}X,Y^{\prime}_ {u})\,du\right)<\varepsilon-\tau.\]
Define \(p_{n}=\mathbb{P}(Y^{\prime}=f_{n})\). We choose \(n_{0}\) such that
\[\sum_{n>n_{0}}p_{n}\mathrm{Diam}(\mathcal{X},\mathbf{d})<\tau.\]
We define \(G\colon\{f_{1},f_{2},f_{3},\dots\}\to\{f_{1},f_{2},\dots,f_{n_{0}}\}\) by
\[G(f)=\begin{cases}f&\text{if }f\in\{f_{1},f_{2},\dots,f_{n_{0}}\}\\ f_{n_{0}}&\text{otherwise}\end{cases}.\]
Set \(Y^{\prime\prime}=G(Y^{\prime})\). Then \(Y^{\prime\prime}\) takes only finitely many values (i.e. \(f_{1},\dots,f_{n_{0}}\)) and we have
\[\mathbb{E}\left(\frac{1}{\mathbf{m}(A)}\int_{A}\mathbf{d}(T^{u}X,Y^{\prime \prime}_{u})\,du\right)<\varepsilon,\]
\[I(X;Y^{\prime\prime})\leq I(X;Y^{\prime})\leq I(X;Y).\]
Therefore, when we consider the infimum in the definition of \(R(\varepsilon,A)\), we only need to take into account such random variables \(Y^{\prime\prime}\).
For a bounded Borel subset \(A\subset\mathbb{R}^{d}\) and \(r>0\) we define \(N_{r}(A)\) as the \(r\)-neighborhood of \(A\) with respect to the \(\ell^{\infty}\)-norm, i.e. \(N_{r}(A)=\{u+v\mid u\in A,v\in(-r,r)^{d}\}\).
**Lemma 2.16**.: _We have:_
1. \(R(\varepsilon,A)\leq\log\#\left(\mathcal{X},\mathbf{d}_{A},\varepsilon \right)\leq\mathbf{m}\left(N_{1/2}(A)\right)\log\#(\mathcal{X},\mathbf{d}_{(-1,1)^{d}},\varepsilon)\)_._
2. \(R(\varepsilon,a+A)=R(\varepsilon,A)\) _for any_ \(a\in\mathbb{R}^{d}\)_._
3. _If_ \(A\cap B=\emptyset\) _then_ \(R(\varepsilon,A\cup B)\leq R(\varepsilon,A)+R(\varepsilon,B)\)_._
Proof.: (1) Let \(\mathcal{X}=U_{1}\cup U_{2}\cup\cdots\cup U_{n}\) be an open cover with \(n=\#\left(\mathcal{X},\mathbf{d}_{A},\varepsilon\right)\) and \(\operatorname{Diam}(U_{k},\mathbf{d}_{A})<\varepsilon\) for all \(1\leq k\leq n\). Take a point \(x_{k}\in U_{k}\) for each \(k\) and define a map \(f\colon\mathcal{X}\to\{x_{1},\ldots,x_{n}\}\) by \(f(x)=x_{k}\) for \(x\in U_{k}\setminus(U_{1}\cup\cdots\cup U_{k-1})\). Let \(X\) be a random variable taking values in \(\mathcal{X}\) according to \(\mu\). We set \(Y_{u}=T^{u}f(X)\) for \(u\in A\). Then \(X\) and \(Y\) satisfy the conditions of the definition of \(R(\varepsilon,A)\). We have
\[R(\varepsilon,A)\leq I(X;Y)\leq H(Y)\leq\log n=\log\#\left(\mathcal{X}, \mathbf{d}_{A},\varepsilon\right).\]
We estimate \(\log\#\left(\mathcal{X},\mathbf{d}_{A},\varepsilon\right)\). Let \(\{u_{1},\ldots,u_{a}\}\) be a maximal \(1\)-separated subset of \(A\) where "\(1\)-separated" means \(\left\|u_{i}-u_{j}\right\|_{\infty}\geq 1\) for \(i\neq j\). Then \(A\subset\bigcup_{i=1}^{a}\left(u_{i}+(-1,1)^{d}\right)\) and hence
\[\log\#\left(\mathcal{X},\mathbf{d}_{A},\varepsilon\right)\leq a\log\#\left( \mathcal{X},\mathbf{d}_{(-1,1)^{d}},\varepsilon\right).\]
The sets \(u_{i}+(-1/2,1/2)^{d}\)\((1\leq i\leq a)\) are mutually disjoint and contained in \(N_{1/2}(A)\). Therefore \(a\leq\mathbf{m}\left(N_{1/2}(A)\right)\).
(2) Let \(X\) and \(Y\) be random variables satisfying the conditions of the definition of \(R(\varepsilon,A)\) (i.e., \(X\) is distributed according to \(\mu\) and the average distance between \(\{T^{u}X\}_{u\in A}\) and \(Y\) is bounded by \(\varepsilon\)). We define new random variables \(X^{\prime}\) and \(Y^{\prime}\) by
\[X^{\prime}=T^{-a}X,\quad Y^{\prime}_{v}=Y_{v-a}\quad(v\in a+A).\]
Since \(\mu\) is \(T\)-invariant, we have \(\operatorname{Law}\!X^{\prime}=\operatorname{Law}\!X=\mu\). The random variable \(Y^{\prime}\) takes values in \(L^{1}(a+A,\mathcal{X})\) and
\[\int_{a+A}\mathbf{d}(T^{v}X^{\prime},Y^{\prime}_{v})\,dv =\int_{a+A}\mathbf{d}(T^{v-a}X,Y_{v-a})\,dv\] \[=\int_{A}\mathbf{d}(T^{u}X,Y_{u})\,du,\quad(u=v-a).\]
We have \(I(X^{\prime};Y^{\prime})=I(X;Y)\). Therefore \(R(\varepsilon,a+A)=R(\varepsilon,A)\).
(3) Let \(X\) and \(Y\) be random variables satisfying the conditions of the definition of \(R(\varepsilon,A)\) as above, and let \(X^{\prime}\) and \(Y^{\prime}\) be random variables satisfying the conditions of the definition of \(R(\varepsilon,B)\). We denote by \(\mathbb{P}(Y\in E\mid X=x)\)\((E\subset L^{1}(A,\mathcal{X}))\) the regular conditional distribution of \(Y\) given \(X=x\). Similarly for \(\mathbb{P}(Y^{\prime}\in F\mid X^{\prime}=x)\).
We naturally identify \(L^{1}(A\cup B,\mathcal{X})\) with \(L^{1}(A,\mathcal{X})\times L^{1}(B,\mathcal{X})\). We define a transition probability \(\nu\) on \(\mathcal{X}\times L^{1}(A\cup B,\mathcal{X})\) by
\[\nu(E\times F|x)=\mathbb{P}(Y\in E\mid X=x)\mathbb{P}(Y^{\prime}\in F\mid X^{ \prime}=x),\]
for \(E\times F\subset L^{1}(A,\mathcal{X})\times L^{1}(B,\mathcal{X})=L^{1}(A\cup B,\mathcal{X})\) and \(x\in\mathcal{X}\). We define a probability measure \(Q\) on \(\mathcal{X}\times L^{1}(A\cup B,\mathcal{X})\) by
\[Q(G)=\int_{\mathcal{X}}\nu(G_{x}|x)\,d\mu(x),\quad(G\subset\mathcal{X}\times L ^{1}(A\cup B,\mathcal{X})),\]
where \(G_{x}=\{f\in L^{1}(A\cup B,\mathcal{X})\mid(x,f)\in G\}\). Let \((X^{\prime\prime},Y^{\prime\prime})\) be the random variable taking values in \(\mathcal{X}\times L^{1}(A\cup B,\mathcal{X})\) according to \(Q\). Then \(\operatorname{Law}\!X^{\prime\prime}=\mu\) and
\[\mathbb{E}\left(\int_{A\cup B}\mathbf{d}(T^{u}X^{\prime\prime},Y ^{\prime\prime}_{u})\,du\right) =\mathbb{E}\left(\int_{A}\mathbf{d}(T^{u}X,Y_{u})\,du\right)+ \mathbb{E}\left(\int_{B}\mathbf{d}(T^{u}X^{\prime},Y^{\prime}_{u})\,du\right)\] \[<\varepsilon\,\mathbf{m}(A)+\varepsilon\,\mathbf{m}(B)=\varepsilon \,\mathbf{m}(A\cup B).\]
The random variables \(Y^{\prime\prime}|_{A}\) and \(Y^{\prime\prime}|_{B}\) are conditionally independent given \(X^{\prime\prime}\). Therefore by Lemma 2.7
\[I(X^{\prime\prime};Y^{\prime\prime})=I(X^{\prime\prime};Y^{\prime\prime}|_{A},Y^{\prime\prime}|_{B})\leq I(X^{\prime\prime};Y^{\prime\prime}|_{A})+I(X^{\prime\prime};Y^{\prime\prime}|_{B})=I(X;Y)+I(X^{\prime};Y^{\prime}).\]
The statement (3) follows from this.
**Lemma 2.17**.: _The limit of \(\dfrac{R\left(\varepsilon,[0,L)^{d}\right)}{L^{d}}\) as \(L\to\infty\) exists and is equal to the infimum of \(\dfrac{R\left(\varepsilon,[0,L)^{d}\right)}{L^{d}}\) over \(L>0\)._
Proof.: Let \(0<\ell<L\). We divide \(L\) by \(\ell\) and let \(L=q\ell+r\) where \(q\) is a natural number and \(0\leq r<\ell\). Set
\[\Gamma=\{(\ell n_{1},\ldots,\ell n_{d})\mid n_{i}\in\mathbb{Z},\,0\leq n_{i}< q\,(1\leq i\leq d)\}.\]
The cubes \(u+[0,\ell)^{d}\) (\(u\in\Gamma\)) are disjoint and contained in \([0,L)^{d}\). Let \(A\) be the complement:
\[A=[0,L)^{d}\setminus\bigcup_{u\in\Gamma}\left(u+[0,\ell)^{d}\right).\]
The volume of the \(1/2\)-neighborhood of \(A\) is \(O(L^{d-1})\):
\[\mathbf{m}\left(N_{1/2}(A)\right)\leq d(r+1)(L+1)^{d-1}.\]
By Lemma 2.16
\[R\left(\varepsilon,[0,L)^{d}\right) \leq\sum_{u\in\Gamma}R\left(\varepsilon,u+[0,\ell)^{d}\right)+R(\varepsilon,A)\] \[\leq q^{d}R\left(\varepsilon,[0,\ell)^{d}\right)+C(L+1)^{d-1},\]
where \(C:=d(\ell+1)\log\#\left(\mathcal{X},\mathbf{d}_{(-1,1)^{d}},\varepsilon\right)\) does not depend on \(L\) (here we used \(r<\ell\)).
By dividing this by \(L^{d}\) and letting \(L\to\infty\), we get
\[\limsup_{L\to\infty}\frac{R\left(\varepsilon,[0,L)^{d}\right)}{L^{d}}\leq\frac{R \left(\varepsilon,[0,\ell)^{d}\right)}{\ell^{d}}.\]
Then
\[\limsup_{L\to\infty}\frac{R\left(\varepsilon,[0,L)^{d}\right)}{L^{d}}\leq\inf_{\ell>0}\frac{R\left(\varepsilon,[0,\ell)^{d}\right)}{\ell^{d}}\leq\liminf_{\ell\to\infty}\frac{R\left(\varepsilon,[0,\ell)^{d}\right)}{\ell^{d}}.\]
Hence the limit exists and is equal to the infimum.
Recall that \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) is a continuous action of \(\mathbb{R}^{d}\) on a compact metric space \((\mathcal{X},\mathbf{d})\) with an invariant probability measure \(\mu\). For \(\varepsilon>0\) we define the **rate distortion function**\(R(\mathbf{d},\mu,\varepsilon)\) by
\[R(\mathbf{d},\mu,\varepsilon)=\lim_{L\to\infty}\frac{R\left(\varepsilon,[0,L) ^{d}\right)}{L^{d}}=\inf_{L>0}\frac{R\left(\varepsilon,[0,L)^{d}\right)}{L^{d }}.\]
We define the **upper/lower rate distortion dimensions** of \((\mathcal{X},T,\mathbf{d},\mu)\) by
\[\overline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)=\limsup_{ \varepsilon\to 0}\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)}, \quad\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)=\liminf_{ \varepsilon\to 0}\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)}.\]
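For example, if the action is trivial (\(T^{u}=\mathrm{id}_{\mathcal{X}}\) for all \(u\in\mathbb{R}^{d}\)) then \(\mathbf{d}_{[0,L)^{d}}=\mathbf{d}\), and Lemma 2.16 (1) gives \(R\left(\varepsilon,[0,L)^{d}\right)\leq\log\#\left(\mathcal{X},\mathbf{d},\varepsilon\right)\) for every \(L>0\). Hence \(R(\mathbf{d},\mu,\varepsilon)=0\) and both rate distortion dimensions vanish.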
**Remark 2.18**.: A tiling argument similar to the proof of Lemma 2.17 shows that if \(\Lambda_{1},\Lambda_{2},\Lambda_{3},\dots\) is a sequence of rectangles of \(\mathbb{R}^{d}\) such that the minimum side length of \(\Lambda_{n}\) diverges to infinity then we have
\[R(\mathbf{d},\mu,\varepsilon)=\lim_{n\to\infty}\frac{R(\varepsilon,\Lambda_{n })}{\mathbf{m}(\Lambda_{n})}.\]
With a bit more effort we can also prove that
\[R(\mathbf{d},\mu,\varepsilon)=\lim_{r\to\infty}\frac{R(\varepsilon,B_{r})}{ \mathbf{m}(B_{r})}\]
where \(B_{r}\) is the Euclidean \(r\)-ball of \(\mathbb{R}^{d}\) centered at the origin. But we are not sure whether, for any Følner sequence \(A_{1},A_{2},A_{3},\dots\) of \(\mathbb{R}^{d}\), the limit of \(\frac{R(\varepsilon,A_{n})}{\mathbf{m}(A_{n})}\) exists or not. (Maybe not.) Probably we need to modify the definition of the rate distortion function when we study rate distortion theory for actions of general amenable groups.
## 3. Metric mean dimension with potential and mean Hausdorff dimension with potential
The purpose of this section is to introduce _metric mean dimension with potential_ and _mean Hausdorff dimension with potential_ for \(\mathbb{R}^{d}\)-actions. These are dynamical versions of Minkowski dimension and Hausdorff dimension. Mean Hausdorff dimension with potential is a main ingredient of the proof of Theorem 1.3. Metric mean dimension with potential is not used in the proof of Theorem 1.3. But it is also an indispensable tool of mean dimension theory. Therefore we develop its basic theory. We plan to use it in Part II of this series of papers.
Let \((\mathcal{X},\mathbf{d})\) be a compact metric space and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Let \(\varepsilon\) be a positive number. We define **covering number with potential** by
\[\#\left(\mathcal{X},\mathbf{d},\varphi,\varepsilon\right)=\inf\left\{\sum_{i=1}^ {n}(1/\varepsilon)^{\sup_{U_{i}}\varphi}\Bigg{|}\begin{array}{c}\mathcal{X}= U_{1}\cup\cdots\cup U_{n}\text{ is an open cover with }\\ \text{Diam}\,U_{i}<\varepsilon\text{ for all }1\leq i\leq n\end{array} \right\}.\]
Let \(s\) be a real number larger than the maximum value of \(\varphi\). We define
\[\mathcal{H}_{\varepsilon}^{s}(\mathcal{X},\mathbf{d},\varphi)=\inf\left\{\sum_ {n=1}^{\infty}\left(\text{Diam}E_{n}\right)^{s-\sup_{E_{n}}\varphi}\Bigg{|} \,\mathcal{X}=E_{1}\cup E_{2}\cup E_{3}\cup\dots\text{ with Diam}E_{n}<\varepsilon \right\}.\]
Here we assume that the empty set has diameter zero. We define **Hausdorff dimension with potential at the scale \(\varepsilon\)** by
\[\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon)=\inf\{s\mid \mathcal{H}_{\varepsilon}^{s}(\mathcal{X},\mathbf{d},\varphi)<1,\,s>\max\varphi\}.\]
When the function \(\varphi\) is identically zero (\(\varphi\equiv 0\)), we denote \(\#\left(\mathcal{X},\mathbf{d},\varphi,\varepsilon\right)\), \(\mathcal{H}_{\varepsilon}^{s}(\mathcal{X},\mathbf{d},\varphi)\) and \(\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon)\) by \(\#\left(\mathcal{X},\mathbf{d},\varepsilon\right)\), \(\mathcal{H}_{\varepsilon}^{s}(\mathcal{X},\mathbf{d})\) and \(\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varepsilon)\) respectively:
\[\#\left(\mathcal{X},\mathbf{d},\varepsilon\right) =\min\left\{n\Bigg{|}\begin{array}{c}\mathcal{X}=U_{1}\cup \cdots\cup U_{n}\text{ is an open cover with }\\ \text{Diam}\,U_{i}<\varepsilon\text{ for all }1\leq i\leq n\end{array} \right\},\] \[\mathcal{H}_{\varepsilon}^{s}(\mathcal{X},\mathbf{d}) =\inf\left\{\sum_{n=1}^{\infty}\left(\text{Diam}E_{n}\right)^{s} \Bigg{|}\,\mathcal{X}=E_{1}\cup E_{2}\cup E_{3}\cup\dots\text{ with Diam}E_{n}< \varepsilon\right\},\] \[\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varepsilon) =\inf\{s\mid\mathcal{H}_{\varepsilon}^{s}(\mathcal{X},\mathbf{d})< 1,\,s>0\}.\]
We assume that \(\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varepsilon)=-\infty\) if \(\mathcal{X}\) is the empty set.
**Lemma 3.1**.: _For \(0<\varepsilon<1\), we have \(\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon)\leq\dfrac{\log \#(\mathcal{X},\mathbf{d},\varphi,\varepsilon)}{\log(1/\varepsilon)}\)._
Proof.: Suppose \(\dfrac{\log\#(\mathcal{X},\mathbf{d},\varphi,\varepsilon)}{\log(1/\varepsilon) }<s\). We have \(s>\max\varphi\). There is an open cover \(\mathcal{X}=U_{1}\cup\cdots\cup U_{n}\) with \(\text{Diam}U_{i}<\varepsilon\) and
\[\sum_{i=1}^{n}(1/\varepsilon)^{\sup_{U_{i}}\varphi}<(1/\varepsilon)^{s}.\]
Then
\[\sum_{i=1}^{n}(\text{Diam}U_{i})^{s-\sup_{U_{i}}\varphi}\leq\sum_{i=1}^{n} \varepsilon^{s-\sup_{U_{i}}\varphi}<1.\]
Hence \(\mathcal{H}_{\varepsilon}^{s}(\mathcal{X},\mathbf{d},\varphi)<1\) and we have \(\dim_{\mathrm{H}}(\mathcal{X},\mathbf{d},\varphi,\varepsilon)\leq s\).
Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of the group \(\mathbb{R}^{d}\). For a bounded Borel subset \(A\) of \(\mathbb{R}^{d}\), as in §1.2, we define a metric \(\mathbf{d}_{A}\) and a function \(\varphi_{A}\) on \(\mathcal{X}\) by
\[\mathbf{d}_{A}(x,y)=\sup_{u\in A}\mathbf{d}(T^{u}x,T^{u}y),\quad\varphi_{A}(x )=\int_{A}\varphi(T^{u}x)\,du.\]
In particular, for a positive number \(L\), we set \(\mathbf{d}_{L}=\mathbf{d}_{[0,L)^{d}}\) and \(\varphi_{L}=\varphi_{[0,L)^{d}}\). We define **upper/lower metric mean dimension with potential** by
\[\begin{split}\overline{\operatorname{mdim}}_{\mathrm{M}}( \mathcal{X},T,\mathbf{d},\varphi)&=\limsup_{\varepsilon\to 0}\left( \lim_{L\to\infty}\frac{\log\#\left(\mathcal{X},\mathbf{d}_{L},\varphi_{L}, \varepsilon\right)}{L^{d}\log(1/\varepsilon)}\right),\\ \underline{\operatorname{mdim}}_{\mathrm{M}}(\mathcal{X},T, \mathbf{d},\varphi)&=\liminf_{\varepsilon\to 0}\left( \lim_{L\to\infty}\frac{\log\#\left(\mathcal{X},\mathbf{d}_{L},\varphi_{L}, \varepsilon\right)}{L^{d}\log(1/\varepsilon)}\right).\end{split} \tag{3.1}\]
Here the limit with respect to \(L\) exists. The proof is similar to the proof of Lemma 1.1. (The quantity \(\log\#\left(\mathcal{X},\mathbf{d}_{A},\varphi_{A},\varepsilon\right)\) satisfies the natural subadditivity, monotonicity and invariance if \(\varphi(x)\) is a nonnegative function.)
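For example, if \(\varphi\equiv c\) is a constant function then \(\varphi_{L}\equiv cL^{d}\) and
\[\#\left(\mathcal{X},\mathbf{d}_{L},\varphi_{L},\varepsilon\right)=(1/\varepsilon)^{cL^{d}}\,\#\left(\mathcal{X},\mathbf{d}_{L},\varepsilon\right).\]
Hence the metric mean dimensions with constant potential \(c\) are obtained from those with zero potential by adding \(c\).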
We define **upper/lower mean Hausdorff dimension with potential** by
\[\overline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T, \mathbf{d},\varphi) =\lim_{\varepsilon\to 0}\left(\limsup_{L\to\infty}\frac{ \operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{L},\varphi_{L}, \varepsilon\right)}{L^{d}}\right),\] \[\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T, \mathbf{d},\varphi) =\lim_{\varepsilon\to 0}\left(\liminf_{L\to\infty}\frac{ \operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{L},\varphi_{L}, \varepsilon\right)}{L^{d}}\right).\]
**Remark 3.2**.: We are not sure whether or not these definitions of the upper and lower mean Hausdorff dimensions with potential coincide with the following:
\[\lim_{\varepsilon\to 0}\left(\limsup_{r\to\infty}\frac{\operatorname{dim}_{ \mathrm{H}}\left(\mathcal{X},\mathbf{d}_{B_{r}},\varphi_{B_{r}},\varepsilon \right)}{\mathbf{m}(B_{r})}\right),\quad\lim_{\varepsilon\to 0}\left(\liminf_{r\to\infty} \frac{\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{B_{r}}, \varphi_{B_{r}},\varepsilon\right)}{\mathbf{m}(B_{r})}\right).\]
(Maybe not in general.) Here \(B_{r}\) is the Euclidean \(r\)-ball of \(\mathbb{R}^{d}\) centered at the origin.
The next lemma is a dynamical version of the fact that Hausdorff dimension is smaller than or equal to Minkowski dimension.
**Lemma 3.3**.: \(\overline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi) \leq\underline{\operatorname{mdim}}_{\mathrm{M}}(\mathcal{X},T,\mathbf{d}, \varphi)\)_._
Proof.: This follows from Lemma 3.1.
For the proof of Theorem 1.3 we need another version of mean Hausdorff dimension. For a positive number \(L\) we define a metric \(\overline{\mathbf{d}}_{L}\) on \(\mathcal{X}\) by
\[\overline{\mathbf{d}}_{L}(x,y)=\frac{1}{L^{d}}\int_{[0,L)^{d}}\mathbf{d}(T^{u}x,T^{u}y)\,du.\]
This is also compatible with the given topology of \(\mathcal{X}\). We define **upper/lower \(L^{1}\)-mean Hausdorff dimension with potential** by
\[\begin{split}\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}( \mathcal{X},T,\mathbf{d},\varphi)&=\lim_{\varepsilon\to 0}\left( \limsup_{L\to\infty}\frac{\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X}, \overline{\mathbf{d}}_{L},\varphi_{L},\varepsilon\right)}{L^{d}}\right),\\ \underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi)&=\lim_{\varepsilon\to 0}\left(\liminf_{L\to\infty} \frac{\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}_ {L},\varphi_{L},\varepsilon\right)}{L^{d}}\right).\end{split} \tag{3.2}\]
We have \(\overline{\mathbf{d}}_{L}(x,y)\leq\mathbf{d}_{L}(x,y)\) and hence
\[\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d}, \varphi)\leq\overline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T, \mathbf{d},\varphi),\quad\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}( \mathcal{X},T,\mathbf{d},\varphi)\leq\underline{\operatorname{mdim}}_{\mathrm{H }}(\mathcal{X},T,\mathbf{d},\varphi).\]
It is well-known that topological dimension is smaller than or equal to Hausdorff dimension. The next result is its dynamical version. The proof will be given in §5.
**Theorem 3.4**.: \(\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\underline{\operatorname{mdim}}_{ \operatorname{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)\)_._
Notice that this also implies \(\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\underline{\operatorname{mdim}}_ {\operatorname{H}}(\mathcal{X},T,\mathbf{d},\varphi)\).
Metric mean dimension with potential is related to rate distortion dimension by the following result.
**Proposition 3.5**.: _For any \(T\)-invariant Borel probability measure \(\mu\) on \(\mathcal{X}\) we have_
\[\overline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+ \int_{\mathcal{X}}\varphi\,d\mu \leq\overline{\operatorname{mdim}}_{\operatorname{M}}(\mathcal{X},T,\mathbf{d},\varphi),\] \[\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+ \int_{\mathcal{X}}\varphi\,d\mu \leq\underline{\operatorname{mdim}}_{\operatorname{M}}(\mathcal{X},T,\mathbf{d},\varphi).\]
Proof.: We will use the following well-known inequality. For the proof see [20, §9.3, Lemma 9.9].
**Lemma 3.6**.: _Let \(a_{1},\ldots,a_{n}\) be real numbers and \((p_{1},\ldots,p_{n})\) a probability vector. For a positive number \(\varepsilon\) we have_
\[\sum_{i=1}^{n}\left(-p_{i}\log p_{i}+p_{i}a_{i}\log(1/\varepsilon)\right)\leq \log\sum_{i=1}^{n}(1/\varepsilon)^{a_{i}}.\]
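For example, taking \(a_{i}=0\) for all \(i\) recovers the familiar bound \(-\sum_{i=1}^{n}p_{i}\log p_{i}\leq\log n\).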
Let \(L\) and \(\varepsilon\) be positive numbers. Let \(\mathcal{X}=U_{1}\cup\cdots\cup U_{n}\) be an open cover with \(\operatorname{Diam}(U_{i},\mathbf{d}_{L})<\varepsilon\). Take a point \(x_{i}\in U_{i}\) for each \(1\leq i\leq n\). Set \(E_{i}=U_{i}\setminus(U_{1}\cup\cdots\cup U_{i-1})\) and \(p_{i}=\mu(E_{i})\). We define \(f\colon\mathcal{X}\to\{x_{1},\ldots,x_{n}\}\) by \(f(E_{i})=\{x_{i}\}\). Let \(X\) be a random variable taking values in \(\mathcal{X}\) according to \(\mu\). Set \(Y_{u}=T^{u}f(X)\) for \(u\in[0,L)^{d}\). Then almost surely we have \(\mathbf{d}(T^{u}X,Y_{u})<\varepsilon\) for all \(u\in[0,L)^{d}\). Therefore
\[R\left(\varepsilon,[0,L)^{d}\right)\leq H(Y)=H\left(f(X)\right)=-\sum_{i=1}^{ n}p_{i}\log p_{i}.\]
On the other hand
\[\int_{\mathcal{X}}\varphi\,d\mu=\int_{\mathcal{X}}\frac{\varphi_{L}}{L^{d}}\,d \mu\leq\frac{1}{L^{d}}\sum_{i=1}^{n}p_{i}\sup_{U_{i}}\varphi_{L}.\]
By Lemma 3.6
\[R\left(\varepsilon,[0,L)^{d}\right)+L^{d}\log(1/\varepsilon)\int _{\mathcal{X}}\varphi\,d\mu \leq\sum_{i=1}^{n}\left(-p_{i}\log p_{i}+p_{i}\sup_{U_{i}}\varphi _{L}\log(1/\varepsilon)\right)\] \[\leq\log\sum_{i=1}^{n}(1/\varepsilon)^{\sup_{U_{i}}\varphi_{L}}.\]
Therefore
\[\frac{R\left(\varepsilon,[0,L)^{d}\right)}{L^{d}\log(1/\varepsilon)}+\int_{ \mathcal{X}}\varphi\,d\mu\leq\frac{\log\#\left(\mathcal{X},\mathbf{d}_{L}, \varphi_{L},\varepsilon\right)}{L^{d}\log(1/\varepsilon)}.\]
Letting \(L\to\infty\)
\[\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)}+\int_{\mathcal{X}} \varphi\,d\mu\leq\lim_{L\to\infty}\frac{\log\#\left(\mathcal{X},\mathbf{d}_{L}, \varphi_{L},\varepsilon\right)}{L^{d}\log(1/\varepsilon)}.\]
Letting \(\varepsilon\to 0\) we get the statement.
\(L^{1}\)-mean Hausdorff dimension with potential is related to rate distortion dimension by the next theorem. We call this result "dynamical Frostman's lemma" because the classical Frostman's lemma [14, Sections 8.14-8.17] plays an essential role in its proof. Recall that we have denoted by \(\mathscr{M}^{T}(\mathcal{X})\) the set of \(T\)-invariant Borel probability measures on \(\mathcal{X}\).
**Theorem 3.7** (Dynamical Frostman's lemma).: \[\overline{\operatorname{mdim}}_{\operatorname{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi)\leq\sup_{\mu\in\mathscr{M}^{T}(\mathcal{X})}\left(\underline {\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}} \varphi\,d\mu\right).\]
The proof of this theorem is the most important step of the proof of Theorem 1.3. It will be given in §6.
By combining Theorems 3.4 and 3.7, we get
\[\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\sup_{\mu\in\mathscr{M}^{T}( \mathcal{X})}\left(\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}}\varphi\,d\mu\right).\]
This is the statement of Theorem 1.3. Therefore the proof of Theorem 1.3 is reduced to the proofs of Theorems 3.4 and 3.7.
**Conjecture 3.8**.: _Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). For any continuous function \(\varphi\colon\mathcal{X}\to\mathbb{R}\) there exists a metric \(\mathbf{d}\) on \(\mathcal{X}\) compatible with the given topology satisfying_
\[\operatorname{mdim}(\mathcal{X},T,\varphi)=\overline{\operatorname{mdim}}_{ \operatorname{M}}(\mathcal{X},T,\mathbf{d},\varphi). \tag{3.3}\]
Suppose that this conjecture is true and let \(\mathbf{d}\) be a metric satisfying (3.3). Then by Proposition 3.5
\[\operatorname{mdim}(\mathcal{X},T,\varphi) =\sup_{\mu\in\mathscr{M}^{T}(\mathcal{X})}\left(\underline{ \operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}}\varphi \,d\mu\right)\] \[=\sup_{\mu\in\mathscr{M}^{T}(\mathcal{X})}\left(\overline{ \operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{X}}\varphi \,d\mu\right).\]
Namely Conjecture 3.8 implies Conjecture 1.4 in §1.3. Conjecture 3.8 is widely open in general. We plan to prove it for free minimal \(\mathbb{R}^{d}\)-actions in Part II of this series of papers.
At the end of this section we present a small technical result on \(L^{1}\)-mean Hausdorff dimension with potential. This will be used in §4.
**Lemma 3.9**.: _In the definition (3.2) we can restrict the parameter \(L\) to natural numbers. Namely we have_
\[\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)=\lim_{\varepsilon\to 0}\left(\limsup_{\begin{subarray}{c}N\in\mathbb{N}\\ N\to\infty\end{subarray}}\frac{\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}_{N},\varphi_{N},\varepsilon\right)}{N^{d}}\right),\] \[\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)=\lim_{\varepsilon\to 0}\left(\liminf_{\begin{subarray}{c}N\in\mathbb{N}\\ N\to\infty\end{subarray}}\frac{\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}_{N},\varphi_{N},\varepsilon\right)}{N^{d}}\right).\]
_Here the parameter \(N\) runs over natural numbers. A similar result also holds for upper and lower mean Hausdorff dimensions with potential._
Proof.: We prove the lower case. The upper case is similar. By adding a positive constant to \(\varphi\), we can assume that \(\varphi\) is a nonnegative function. Set
\[\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)^{\prime}=\lim_{\varepsilon\to 0}\left(\liminf_{\begin{subarray}{c}N\in\mathbb{N}\\ N\to\infty\end{subarray}}\frac{\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\overline{\mathbf{d}}_{N},\varphi_{N},\varepsilon\right)}{N^{d}}\right).\]
It is obvious that \(\underline{\operatorname{mdim}}_{\operatorname{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi)\leq\underline{\operatorname{mdim}}_{\operatorname{H},L^{1 }}(\mathcal{X},T,\mathbf{d},\varphi)^{\prime}\). We prove the reverse inequality. Let \(L\) be a positive real number. We assume that it is sufficiently large so that
\[\left(\frac{L}{L-1}\right)^{d}<2.\]
Let \(N:=\lfloor L\rfloor\) be the largest natural number not greater than \(L\). Then for any \(x,y\in\mathcal{X}\) we have \(\overline{\mathbf{d}}_{N}(x,y)\leq 2\,\overline{\mathbf{d}}_{L}(x,y)\). We also have \(\varphi_{N}(x)\leq\varphi_{L}(x)\).
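Indeed, since \(\mathbf{d}\geq 0\), \([0,N)^{d}\subset[0,L)^{d}\) and \(N\geq L-1\),

\[\overline{\mathbf{d}}_{N}(x,y)=\frac{1}{N^{d}}\int_{[0,N)^{d}}\mathbf{d}(T^{u}x,T^{u}y)\,du\leq\frac{L^{d}}{N^{d}}\,\overline{\mathbf{d}}_{L}(x,y)\leq\left(\frac{L}{L-1}\right)^{d}\overline{\mathbf{d}}_{L}(x,y)<2\,\overline{\mathbf{d}}_{L}(x,y),\]

and \(\varphi_{N}\leq\varphi_{L}\) follows from \(\varphi\geq 0\).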
Let \(0<\varepsilon<1/2\). We prove that for any \(s>\max\varphi_{L}\)
\[\mathcal{H}_{\varepsilon}^{s-\frac{s}{\log\varepsilon}}\left(\mathcal{X}, \overline{\mathbf{d}}_{N},\varphi_{N}\right)\leq\mathcal{H}_{\varepsilon/2}^{s }\left(\mathcal{X},\overline{\mathbf{d}}_{L},\varphi_{L}\right). \tag{3.4}\]
Indeed let \(E\) be a subset of \(\mathcal{X}\) with \(\operatorname{Diam}(E,\overline{\mathbf{d}}_{L})<\varepsilon/2\). Then
\[\operatorname{Diam}(E,\overline{\mathbf{d}}_{N})\leq 2\operatorname{Diam}(E, \overline{\mathbf{d}}_{L})<\varepsilon.\]
Moreover
\[\operatorname{Diam}(E,\overline{\mathbf{d}}_{N})^{s-\frac{s}{ \log\varepsilon}-\sup_{E}\varphi_{N}} \leq\operatorname{Diam}(E,\overline{\mathbf{d}}_{N})^{s-\frac{s}{ \log\varepsilon}-\sup_{E}\varphi_{L}}\quad\text{by }\varphi_{N}\leq \varphi_{L}\] \[\leq\left(2\operatorname{Diam}(E,\overline{\mathbf{d}}_{L}) \right)^{s-\frac{s}{\log\varepsilon}-\sup_{E}\varphi_{L}}\] \[\leq\varepsilon^{-\frac{s}{\log\varepsilon}}\cdot\left(2 \operatorname{Diam}(E,\overline{\mathbf{d}}_{L})\right)^{s-\sup_{E}\varphi_{L}}\] \[=2^{-s}\left(2\operatorname{Diam}(E,\overline{\mathbf{d}}_{L}) \right)^{s-\sup_{E}\varphi_{L}}\] \[\leq\operatorname{Diam}(E,\overline{\mathbf{d}}_{L})^{s-\sup_{E} \varphi_{L}}\quad\text{by }\varphi_{L}\geq 0.\]
Therefore we have (3.4) and hence
\[\operatorname{dim}_{\operatorname{H}}(\mathcal{X},\overline{\mathbf{d}}_{N}, \varphi_{N},\varepsilon)\leq\left(1-\frac{1}{\log\varepsilon}\right) \operatorname{dim}_{\operatorname{H}}\left(\mathcal{X},\overline{\mathbf{d}} _{L},\varphi_{L},\frac{\varepsilon}{2}\right).\]
We divide this by \(L^{d}\) and let \(L\to\infty\) and \(\varepsilon\to 0\). Then we get \(\underline{\text{mdim}}_{\text{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)^{ \prime}\leq\underline{\text{mdim}}_{\text{H},L^{1}}(\mathcal{X},T,\mathbf{d}, \varphi)\).
**Remark 3.10**.: In the definitions (1.8) and (3.1) of mean dimension with potential and (upper/lower) metric mean dimension with potential, the limits with respect to \(L\) exist. Therefore we can also restrict the parameter \(L\) to natural numbers when we take the limits.
## 4. Mean dimension of \(\mathbb{Z}^{d}\)-actions
In this section we prepare some basic results on mean dimension theory of \(\mathbb{Z}^{d}\)-actions. We need them in the proof of Theorem 3.4. This is a rather technical and indirect approach. It is desirable to find a more direct proof of Theorem 3.4; however, we have not found one so far.³
Footnote 3: The difficulty lies in Proposition 4.3 below. It is unclear for the author how to formulate and prove an analogous result for \(\mathbb{R}^{d}\)-actions.
The paper of Huo-Yuan [HY] studies the variational principle for mean dimension of \(\mathbb{Z}^{d}\)-actions. Proposition 4.3 and Theorem 5.4 below were already mentioned in their paper [HY, Lemma 2.12 and Lemma 2.15] in the case that the potential function is zero.
### Definitions of various mean dimensions for \(\mathbb{Z}^{d}\)-actions
For a natural number \(N\) we set
\[[N]^{d}=\{0,1,2,\ldots,N-1\}^{d}.\]
Let \(T\colon\mathbb{Z}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of the group \(\mathbb{Z}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. For a natural number \(N\) we define metrics \(\mathbf{d}_{N}\) and \(\overline{\mathbf{d}}_{N}\) and a function \(\varphi_{N}\) on \(\mathcal{X}\) by
\[\mathbf{d}_{N}(x,y)=\max_{u\in[N]^{d}}\mathbf{d}(T^{u}x,T^{u}y),\quad\overline {\mathbf{d}}_{N}(x,y)=\frac{1}{N^{d}}\sum_{u\in[N]^{d}}\mathbf{d}(T^{u}x,T^{u }y),\]
\[\varphi_{N}(x)=\sum_{u\in[N]^{d}}\varphi(T^{u}x).\]
In the sequel we will sometimes consider \(\mathbb{Z}^{d}\)-actions and \(\mathbb{R}^{d}\)-actions simultaneously. In that case we use the notations \(\mathbf{d}_{N}^{\mathbb{Z}},\overline{\mathbf{d}}_{N}^{\mathbb{Z}},\varphi_{N }^{\mathbb{Z}}\) for clarifying that these quantities are defined with respect to \(\mathbb{Z}^{d}\)-actions. (On the other hand, we will use the notations \(\mathbf{d}_{N}^{\mathbb{R}},\overline{\mathbf{d}}_{N}^{\mathbb{R}},\varphi_{N }^{\mathbb{R}}\) when they are defined with respect to \(\mathbb{R}^{d}\)-actions.)
We define mean dimension with potential by
\[\text{mdim}(\mathcal{X},T,\varphi)=\lim_{\varepsilon\to 0}\left(\lim_{N\to \infty}\frac{\text{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d}_{N},\varphi_{N })}{N^{d}}\right).\]
This is a topological invariant (i.e. independent of the choice of \(\mathbf{d}\)). We define upper/lower mean Hausdorff dimension with potential by
\[\overline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{ d},\varphi) =\lim_{\varepsilon\to 0}\left(\limsup_{N\to\infty}\frac{ \operatorname{dim}_{\mathrm{H}}(\mathcal{X},\mathbf{d}_{N},\varphi_{N}, \varepsilon)}{N^{d}}\right),\] \[\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T, \mathbf{d},\varphi) =\lim_{\varepsilon\to 0}\left(\liminf_{N\to\infty}\frac{ \operatorname{dim}_{\mathrm{H}}(\mathcal{X},\mathbf{d}_{N},\varphi_{N}, \varepsilon)}{N^{d}}\right).\]
We define upper/lower \(L^{1}\)-mean Hausdorff dimension with potential by
\[\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi) =\lim_{\varepsilon\to 0}\left(\limsup_{N\to\infty}\frac{ \operatorname{dim}_{\mathrm{H}}(\mathcal{X},\overline{\mathbf{d}}_{N}, \varphi_{N},\varepsilon)}{N^{d}}\right),\] \[\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi) =\lim_{\varepsilon\to 0}\left(\liminf_{N\to\infty}\frac{ \operatorname{dim}_{\mathrm{H}}(\mathcal{X},\overline{\mathbf{d}}_{N}, \varphi_{N},\varepsilon)}{N^{d}}\right).\]
Since \(\overline{\mathbf{d}}_{N}(x,y)\leq\mathbf{d}_{N}(x,y)\), we have
\[\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi)\leq\overline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi),\quad\underline{\operatorname{mdim}}_{\mathrm{H},L^{1} }(\mathcal{X},T,\mathbf{d},\varphi)\leq\underline{\operatorname{mdim}}_{ \mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi).\]
We can also consider upper/lower metric mean dimension with potential for \(\mathbb{Z}^{d}\)-actions. But we do not need them in this paper.
### Tame growth of covering numbers
The purpose of this subsection is to establish a convenient sufficient condition under which mean Hausdorff dimension with potential and \(L^{1}\)-mean Hausdorff dimension with potential coincide.
The following is a key definition [18, Condition 3].
**Definition 4.1**.: A compact metric space \((\mathcal{X},\mathbf{d})\) is said to have **tame growth of covering numbers** if for any positive number \(\delta\) we have
\[\lim_{\varepsilon\to 0}\varepsilon^{\delta}\log\#\left(\mathcal{X},\mathbf{d}, \varepsilon\right)=0.\]
Recall that \(\#\left(\mathcal{X},\mathbf{d},\varepsilon\right)\) is the minimum number \(n\) such that there is an open cover \(\mathcal{X}=U_{1}\cup U_{2}\cup\dots\cup U_{n}\) with \(\operatorname{Diam}U_{i}<\varepsilon\) for all \(1\leq i\leq n\). Notice that this is purely a condition on metric geometry. It does not involve dynamics.
For example, every compact subset of the Euclidean space \(\mathbb{R}^{n}\) has the tame growth of covering numbers with respect to the Euclidean metric. The Hilbert cube \([0,1]^{\mathbb{N}}\) has the tame growth of covering numbers with respect to the metric
\[\mathbf{d}\left((x_{n})_{n\in\mathbb{N}},(y_{n})_{n\in\mathbb{N}}\right)=\sum _{n=1}^{\infty}2^{-n}|x_{n}-y_{n}|.\]
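Indeed, a rough count suffices here (the constants are not optimal): given \(0<\varepsilon<1\), take \(N=\lceil\log_{2}(4/\varepsilon)\rceil\), so that \(\sum_{n>N}2^{-n}=2^{-N}\leq\varepsilon/4\). Covering the first \(N\) coordinates by intervals of length \(\varepsilon/4\) produces sets of diameter at most \(\varepsilon/2\), whence

\[\#\left([0,1]^{\mathbb{N}},\mathbf{d},\varepsilon\right)\leq\lceil 4/\varepsilon\rceil^{N},\quad\log\#\left([0,1]^{\mathbb{N}},\mathbf{d},\varepsilon\right)=O\left(\left(\log(1/\varepsilon)\right)^{2}\right),\]

and \(\varepsilon^{\delta}\left(\log(1/\varepsilon)\right)^{2}\to 0\) for every \(\delta>0\).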
The next lemma shows that every compact metrizable space admits a metric having the tame growth of covering numbers [18, Lemma 3.10].
**Lemma 4.2**.: _For any compact metric space \((\mathcal{X},\mathbf{d})\) there exists a metric \(\mathbf{d}^{\prime}\) on \(\mathcal{X}\) compatible with the given topology satisfying the following two conditions._
* _For all_ \(x,y\in\mathcal{X}\) _we have_ \(\mathbf{d}^{\prime}(x,y)\leq\mathbf{d}(x,y)\)_._
* _The space_ \((\mathcal{X},\mathbf{d}^{\prime})\) _has the tame growth of covering numbers._
Proof.: Take a countable dense subset \(\{x_{1},x_{2},x_{3},\dots\}\) of \(\mathcal{X}\). We define a metric \(\mathbf{d}^{\prime}\) by
\[\mathbf{d}^{\prime}(x,y)=\sum_{n=1}^{\infty}2^{-n}\left|d(x,x_{n})-d(y,x_{n}) \right|.\]
It is easy to check that this satisfies the statement: each summand is at most \(\mathbf{d}(x,y)\) and \(\sum_{n=1}^{\infty}2^{-n}=1\), so \(\mathbf{d}^{\prime}\leq\mathbf{d}\) (and \(\mathbf{d}^{\prime}\) separates points by the density of \(\{x_{n}\}\)); moreover \(x\mapsto\left(\mathbf{d}(x,x_{n})\right)_{n\in\mathbb{N}}\) isometrically embeds \((\mathcal{X},\mathbf{d}^{\prime})\) into the cube \([0,\operatorname{Diam}(\mathcal{X},\mathbf{d})]^{\mathbb{N}}\) with the metric \(\sum_{n=1}^{\infty}2^{-n}|a_{n}-b_{n}|\), which has the tame growth of covering numbers as in the Hilbert cube example above.
**Proposition 4.3**.: _Let \((\mathcal{X},\mathbf{d})\) be a compact metric space having the tame growth of covering numbers. Let \(T\colon\mathbb{Z}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of the group \(\mathbb{Z}^{d}\). For any continuous function \(\varphi\colon\mathcal{X}\to\mathbb{R}\) we have_
\[\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi) =\overline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T, \mathbf{d},\varphi),\] \[\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi) =\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T, \mathbf{d},\varphi).\]
Proof.: The case of \(d=1\) was proved in [14, Lemma 4.3]. The following argument is its simple generalization. We prove the lower case. The upper case is similar.
It is obvious that \(\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d}, \varphi)\leq\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T, \mathbf{d},\varphi)\). We prove the reverse inequality. By adding a positive constant to \(\varphi\), we can assume that \(\varphi\) is a nonnegative function. For a finite subset \(A\) of \(\mathbb{Z}^{d}\) we define a metric \(\mathbf{d}_{A}\) on \(\mathcal{X}\) by
\[\mathbf{d}_{A}(x,y)=\max_{u\in A}\mathbf{d}(T^{u}x,T^{u}y).\]
Let \(s\) be an arbitrary positive number with \(\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d}, \varphi)<s\). Let \(0<\delta<1/2\) be arbitrary. For each positive number \(\tau\) we take an open cover \(\mathcal{X}=W_{1}^{\tau}\cup\dots\cup W_{M(\tau)}^{\tau}\) with \(M(\tau)=\#\left(\mathcal{X},\mathbf{d},\tau\right)\) and \(\operatorname{Diam}(W_{i}^{\tau},\mathbf{d})<\tau\) for all \(1\leq i\leq M(\tau)\). From the condition of tame growth of covering numbers, we can find \(0<\varepsilon_{0}<1\) satisfying
\[M(\tau)^{\tau^{\delta}}<2\quad\text{for all }0<\tau<\varepsilon_{0}, \tag{4.1}\] \[2^{2+(1+2\delta)s}\,\varepsilon_{0}^{s\delta(1-2\delta)}<1. \tag{4.2}\]
Let \(0<\varepsilon<\varepsilon_{0}\). We have \(\dim_{\mathrm{H}}(\mathcal{X},\overline{\mathbf{d}}_{N},\varphi_{N},\varepsilon)<sN^{d}\) for infinitely many \(N\). Pick such an \(N\). Then there is a covering \(\mathcal{X}=\bigcup_{n=1}^{\infty}E_{n}\) with \(\tau_{n}:=\operatorname{Diam}(E_{n},\overline{\mathbf{d}}_{N})<\varepsilon\) for all \(n\geq 1\) and
\[\sum_{n=1}^{\infty}\tau_{n}^{sN^{d}-\sup_{E_{n}}\varphi_{N}}<1. \tag{4.3}\]
Set \(L_{n}=\tau_{n}^{-\delta}\). Pick \(x_{n}\in E_{n}\) for each \(n\). Every \(x\in E_{n}\) satisfies \(\overline{\mathbf{d}}_{N}(x,x_{n})\leq\tau_{n}\) and hence
\[\left|\{u\in[N]^{d}\mid\mathbf{d}(T^{u}x,T^{u}x_{n})\geq L_{n}\tau_{n}\} \right|\leq\frac{N^{d}}{L_{n}}.\]
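This is Markov's inequality for the average over \([N]^{d}\): indeed

\[L_{n}\tau_{n}\cdot\left|\{u\in[N]^{d}\mid\mathbf{d}(T^{u}x,T^{u}x_{n})\geq L_{n}\tau_{n}\}\right|\leq\sum_{u\in[N]^{d}}\mathbf{d}(T^{u}x,T^{u}x_{n})\leq N^{d}\tau_{n}.\]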
Namely there is \(A\subset[N]^{d}\) (depending on \(x\)) such that \(|A|\leq N^{d}/L_{n}\) and \(\mathbf{d}_{[N]^{d}\setminus A}(x,x_{n})<L_{n}\tau_{n}\). Therefore
\[E_{n}\subset\bigcup_{\begin{subarray}{c}A\subset[N]^{d}\\ |A|\leq N^{d}/L_{n}\end{subarray}}B_{L_{n}\tau_{n}}(x_{n},\mathbf{d}_{[N]^{d} \setminus A}).\]
Here \(B_{L_{n}\tau_{n}}(x_{n},\mathbf{d}_{[N]^{d}\setminus A})\) is the ball of radius \(L_{n}\tau_{n}\) with respect to \(\mathbf{d}_{[N]^{d}\setminus A}\) centered at \(x_{n}\). For \(A=\{a_{1},\ldots,a_{r}\}\subset[N]^{d}\) and \(1\leq i_{1},\ldots,i_{r}\leq M(\tau_{n})\) we set
\[W(A,\tau_{n},i_{1},\ldots,i_{r})=T^{-a_{1}}W_{i_{1}}^{\tau_{n}}\cap\cdots\cap T ^{-a_{r}}W_{i_{r}}^{\tau_{n}}.\]
We have
\[\mathcal{X}=\bigcup_{1\leq i_{1},\ldots,i_{r}\leq M(\tau_{n})}W(A,\tau_{n},i_ {1},\ldots,i_{r}),\quad\text{(here $A$ and $\tau_{n}$ are fixed)},\]
and hence
\[B_{L_{n}\tau_{n}}(x_{n},\mathbf{d}_{[N]^{d}\setminus A})=\bigcup_{1\leq i_{1},\ldots,i_{r}\leq M(\tau_{n})}B_{L_{n}\tau_{n}}(x_{n},\mathbf{d}_{[N]^{d} \setminus A})\cap W(A,\tau_{n},i_{1},\ldots,i_{r}).\]
Then
\[\mathcal{X}=\bigcup_{n=1}^{\infty}\bigcup_{\begin{subarray}{c}A\subset[N]^{d }\\ r:=|A|\leq N^{d}/L_{n}\end{subarray}}\bigcup_{1\leq i_{1},\ldots,i_{r}\leq M( \tau_{n})}E_{n}\cap B_{L_{n}\tau_{n}}(x_{n},\mathbf{d}_{[N]^{d}\setminus A}) \cap W(A,\tau_{n},i_{1},\ldots,i_{r}).\]
The diameter of \(E_{n}\cap B_{L_{n}\tau_{n}}(x_{n},\mathbf{d}_{[N]^{d}\setminus A})\cap W(A, \tau_{n},i_{1},\ldots,i_{r})\) with respect to \(\mathbf{d}_{N}\) is less than or equal to \(2L_{n}\tau_{n}=2\tau_{n}^{1-\delta}<2\varepsilon^{1-\delta}\). Hence
\[\mathcal{H}_{2\varepsilon^{1-\delta}}^{(1+2\delta)sN^{d}}\left(\mathcal{X}, \mathbf{d}_{N},\varphi_{N}\right)\leq\sum_{n=1}^{\infty}2^{N^{d}}\,M(\tau_{n} )^{N^{d}/L_{n}}\left(2\tau_{n}^{1-\delta}\right)^{(1+2\delta)sN^{d}-\sup_{E_{n }}\varphi_{N}}.\]
Here the factor \(2^{N^{d}}\) comes from the choice of \(A\subset[N]^{d}\). By \(L_{n}=\tau_{n}^{-\delta}\) and (4.1)
\[M(\tau_{n})^{N^{d}/L_{n}}=\left(M(\tau_{n})^{\tau_{n}^{\delta}}\right)^{N^{d} }<2^{N^{d}}.\]
Since \(\varphi\) is a nonnegative function,
\[2^{(1+2\delta)sN^{d}-\sup_{E_{n}}\varphi_{N}}\leq 2^{(1+2\delta)sN^{d}}.\]
Hence
\[\mathcal{H}_{2\varepsilon^{1-\delta}}^{(1+2\delta)sN^{d}}\left(\mathcal{X}, \mathbf{d}_{N},\varphi_{N}\right)\leq\sum_{n=1}^{\infty}\left(2^{2+(1+2\delta )s}\right)^{N^{d}}\left(\tau_{n}^{1-\delta}\right)^{(1+2\delta)sN^{d}-\sup_{E_ {n}}\varphi_{N}}.\]
We have
\[\left(\tau_{n}^{1-\delta}\right)^{(1+2\delta)sN^{d}-\sup_{E_{n}}\varphi_{N}}=\tau_{n}^{-\delta\left\{(1+2\delta)sN^{d}-\sup_{E_{n}}\varphi_{N}\right\}}\cdot\tau_{n}^{(1+2\delta)sN^{d}-\sup_{E_{n}}\varphi_{N}}=\tau_{n}^{\delta\left\{(1-2\delta)sN^{d}+\sup_{E_{n}}\varphi_{N}\right\}}\cdot\tau_{n}^{sN^{d}-\sup_{E_{n}}\varphi_{N}}.\]
Since \(\varphi\) is nonnegative and \(\tau_{n}<\varepsilon<\varepsilon_{0}<1\)
\[\tau_{n}^{\delta\left\{(1-2\delta)sN^{d}+\sup_{E_{n}}\varphi_{N}\right\}}\leq\tau_{n}^{\delta(1-2\delta)sN^{d}}<\varepsilon_{0}^{\delta(1-2\delta)sN^{d}}.\]
Therefore
\[\mathcal{H}^{(1+2\delta)sN^{d}}_{2\varepsilon^{1-\delta}}\left(\mathcal{X},\mathbf{d}_{N},\varphi_{N}\right)\leq\sum_{n=1}^{\infty}\underbrace{\left(2^{2+(1+2\delta)s}\cdot\varepsilon_{0}^{\delta(1-2\delta)s}\right)^{N^{d}}}_{<1\ \text{by (4.2)}}\,\tau_{n}^{sN^{d}-\sup_{E_{n}}\varphi_{N}}\leq\sum_{n=1}^{\infty}\tau_{n}^{sN^{d}-\sup_{E_{n}}\varphi_{N}}<1\quad\text{by (4.3)}.\]

Hence \(\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{N},\varphi_{N},2\varepsilon^{1-\delta}\right)\leq(1+2\delta)sN^{d}\) for infinitely many \(N\). Therefore

\[\liminf_{N\to\infty}\frac{\operatorname{dim}_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{N},\varphi_{N},2\varepsilon^{1-\delta}\right)}{N^{d}}\leq(1+2\delta)s.\]

Letting \(\varepsilon\to 0\) (and hence \(2\varepsilon^{1-\delta}\to 0\)) we get \(\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi)\leq(1+2\delta)s\). Since \(\delta\in(0,1/2)\) and \(s>\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)\) are arbitrary, we conclude \(\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi)\leq\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)\).

### Mean dimensions of \(\mathbb{R}^{d}\)-actions and \(\mathbb{Z}^{d}\)-actions

Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\), and let \(T|_{\mathbb{Z}^{d}}\) denote the \(\mathbb{Z}^{d}\)-action obtained by restricting \(T\) to the subgroup \(\mathbb{Z}^{d}\subset\mathbb{R}^{d}\). The next proposition reduces the study of mean dimensions of \(\mathbb{R}^{d}\)-actions to that of \(\mathbb{Z}^{d}\)-actions.

**Proposition 4.4**.: _Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Then_

\[\operatorname{mdim}(\mathcal{X},T,\varphi)=\operatorname{mdim}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\varphi_{1}^{\mathbb{R}}\right),\]

_and_

\[\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)=\overline{\operatorname{mdim}}_{\mathrm{H},L^{1}}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\overline{\mathbf{d}}_{1}^{\mathbb{R}},\varphi_{1}^{\mathbb{R}}\right),\quad\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)=\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\overline{\mathbf{d}}_{1}^{\mathbb{R}},\varphi_{1}^{\mathbb{R}}\right).\]
Proof.: Set \(\rho=\overline{\mathbf{d}}_{1}^{\mathbb{R}}\) and \(\psi=\varphi_{1}^{\mathbb{R}}\). For a natural number \(N\) we have
\[\overline{\rho}_{N}^{\mathbb{Z}}(x,y) =\frac{1}{N^{d}}\sum_{u\in[N]^{d}}\rho(T^{u}x,T^{u}y)=\frac{1}{N^{ d}}\sum_{u\in[N]^{d}}\int_{v\in[0,1)^{d}}\mathbf{d}(T^{u+v}x,T^{u+v}y)\,dv\] \[=\frac{1}{N^{d}}\int_{[0,N)^{d}}\mathbf{d}(T^{v}x,T^{v}y)\,dv= \overline{\mathbf{d}}_{N}^{\mathbb{R}}(x,y).\]
Similarly
\[\psi_{N}^{\mathbb{Z}}(x)=\sum_{u\in[N]^{d}}\psi(T^{u}x)=\int_{[0,N)^{d}}\varphi(T^{v}x)\,dv=\varphi_{N}^{\mathbb{R}}(x).\]
By using Lemma 3.9
\[\underline{\mathrm{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi) =\lim_{\varepsilon\to 0}\left(\liminf_{\begin{subarray}{c}N\in \mathbb{N}\\ N\to\infty\end{subarray}}\frac{\mathrm{dim}_{\mathrm{H}}\left(\mathcal{X}, \overline{\mathbf{d}}_{N}^{\mathbb{R}},\varphi_{N}^{\mathbb{R}},\varepsilon \right)}{N^{d}}\right)\] \[=\underline{\mathrm{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T|_{ \mathbb{Z}^{d}},\rho,\psi).\]
We can prove the case of upper \(L^{1}\)-mean Hausdorff dimension with potential in the same way. The case of (topological) mean dimension with potential can be also proved similarly by using \(\left(\mathbf{d}_{1}^{\mathbb{R}}\right)_{N}^{\mathbb{Z}}=\mathbf{d}_{N}^{ \mathbb{R}}\).
## 5. Mean dimension is bounded by mean Hausdorff dimension: proof of Theorem 3.4
In this section we prove Theorem 3.4.
### A variation of the definition of mean dimension with potential
This subsection is a simple generalization of [18, §3.2]. Here we introduce a variation of the definition of mean dimension with potential. Let \(P\) be a finite simplicial complex and \(a\in P\). We define the **small local dimension** \(\dim_{a}^{\prime}P\) as the minimum of \(\dim\Delta\) where \(\Delta\) is a simplex of \(P\) containing \(a\). See Figure 2. (This is the same as [18, Figure 2].)
Let \((\mathcal{X},\mathbf{d})\) be a compact metric space and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. For \(\varepsilon>0\) we set
\[\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d},\varphi)=\inf\left\{\sup_{x\in\mathcal{X}}\left(\dim_{f(x)}^{\prime}P+\varphi(x)\right)\,\middle|\,P\text{ is a finite simplicial complex and }f\colon\mathcal{X}\to P\text{ is an }\varepsilon\text{-embedding}\right\}.\]
We also set
\[\mathrm{var}_{\varepsilon}(\varphi,\mathbf{d})=\sup\{|\varphi(x)-\varphi(y)| \,|\,\mathbf{d}(x,y)<\varepsilon\}.\]
The following lemma is given in [18, Lemma 3.4].
**Lemma 5.1**.: \[\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d},\varphi)\leq \operatorname{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d},\varphi)\leq \operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d},\varphi)+ \operatorname{var}_{\varepsilon}(\varphi,\mathbf{d}).\]
The next lemma shows that we can use \(\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d},\varphi)\) instead of \(\operatorname{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d},\varphi)\) in the definition of mean dimension with potential.
**Lemma 5.2**.: _Let \(T\colon\mathbb{Z}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{Z}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Then_
\[\operatorname{mdim}(\mathcal{X},T,\varphi)=\lim_{\varepsilon\to 0}\left( \lim_{N\to\infty}\frac{\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d}_{N},\varphi_{N})}{N^{d}}\right). \tag{5.1}\]
_Here the limits in the right-hand side exist as in §1.2._
Proof.: By Lemma 5.1, for any natural number \(N\), we have
\[\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d}_{N}, \varphi_{N})\leq\operatorname{Widim}_{\varepsilon}(\mathcal{X},\mathbf{d}_{N},\varphi_{N})\leq\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X}, \mathbf{d}_{N},\varphi_{N})+\operatorname{var}_{\varepsilon}(\varphi_{N}, \mathbf{d}_{N}).\]
We have
\[\operatorname{var}_{\varepsilon}(\varphi_{N},\mathbf{d}_{N})\leq N^{d} \operatorname{var}_{\varepsilon}(\varphi,\mathbf{d}).\]
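Indeed, if \(\mathbf{d}_{N}(x,y)<\varepsilon\) then \(\mathbf{d}(T^{u}x,T^{u}y)<\varepsilon\) for every \(u\in[N]^{d}\), and hence

\[\left|\varphi_{N}(x)-\varphi_{N}(y)\right|\leq\sum_{u\in[N]^{d}}\left|\varphi(T^{u}x)-\varphi(T^{u}y)\right|\leq N^{d}\operatorname{var}_{\varepsilon}(\varphi,\mathbf{d}).\]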
Then
\[\lim_{\varepsilon\to 0}\left(\limsup_{N\to\infty}\frac{\operatorname{var}_{\varepsilon}(\varphi_{N},\mathbf{d}_{N})}{N^{d}}\right)\leq\lim_{\varepsilon\to 0}\operatorname{var}_{\varepsilon}(\varphi,\mathbf{d})=0.\]

Dividing the first displayed inequalities by \(N^{d}\) and letting \(N\to\infty\) and then \(\varepsilon\to 0\), the error term vanishes and we obtain (5.1).
### Case of \(\mathbb{Z}^{d}\)-actions
In this subsection we prove that, for \(\mathbb{Z}^{d}\)-actions, mean dimension with potential is bounded from above by lower mean Hausdorff dimension with potential. A key ingredient of the proof is the following result on metric geometry. This was proved in [18, Lemma 3.8].
Figure 2. Here \(P\) has four vertexes (denoted by dots), four 1-dimensional simplexes and one 2-dimensional simplex. The points \(b\) and \(d\) are vertexes of \(P\) whereas \(a\) and \(c\) are not. We have \(\dim_{a}^{\prime}P=2\), \(\dim_{b}^{\prime}P=0\), \(\dim_{c}^{\prime}P=1\) and \(\dim_{d}^{\prime}P=0\). Recall \(\dim_{a}P=\dim_{b}P=2\) and \(\dim_{c}P=\dim_{d}P=1\).
**Lemma 5.3**.: _Let \((\mathcal{X},\mathbf{d})\) be a compact metric space and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Let \(\varepsilon\) and \(L\) be positive numbers, and let \(s\) be a real number with \(s>\max_{\mathcal{X}}\varphi\). Suppose there exists a map \(f\colon\mathcal{X}\to[0,1]^{N}\) such that_
* \(\left\|f(x)-f(y)\right\|_{\infty}\leq L\,\mathbf{d}(x,y)\) _for all_ \(x,y\in\mathcal{X}\)_,_
* _if_ \(\mathbf{d}(x,y)\geq\varepsilon\) _then_ \(\left\|f(x)-f(y)\right\|_{\infty}=1\)_._
_Here \(\left\|\cdot\right\|_{\infty}\) is the \(\ell^{\infty}\)-norm. Moreover we assume_
\[4^{N}(L+1)^{1+s+\left\|\varphi\right\|_{\infty}}\mathcal{H}_{1}^{s}\left( \mathcal{X},\mathbf{d},\varphi\right)<1,\]
_where \(\left\|\varphi\right\|_{\infty}=\max_{x\in\mathcal{X}}\left|\varphi(x)\right|\). Then we conclude that_
\[\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d},\varphi) \leq s+1.\]
The following theorem is the main result of this subsection.
**Theorem 5.4**.: _Let \(T\colon\mathbb{Z}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{Z}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Then_
\[\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\underline{\operatorname{mdim} }_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi).\]
Proof.: If \(\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi)\) is infinite then the statement is trivial. So we assume that it is finite. Let \(s\) be an arbitrary number larger than \(\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi)\). We prove that \(\operatorname{mdim}(\mathcal{X},T,\varphi)\leq s\).
Let \(\varepsilon\) be a positive number. There is a Lipschitz map \(f\colon\mathcal{X}\to[0,1]^{M}\) such that⁴ if \(\mathbf{d}(x,y)\geq\varepsilon\) then \(\left\|f(x)-f(y)\right\|_{\infty}=1\). Let \(L\) be a Lipschitz constant of \(f\). Namely we have \(\left\|f(x)-f(y)\right\|_{\infty}\leq L\,\mathbf{d}(x,y)\). For each natural number \(N\) we define \(f_{N}\colon\mathcal{X}\to\left([0,1]^{M}\right)^{[N]^{d}}\) by
Footnote 4: The construction of \(f\) is as follows. Take a Lipschitz function \(\psi\colon[0,\infty)\to[0,1]\) such that \(\psi(t)=1\) for \(0\leq t\leq\varepsilon/4\) and \(\psi(t)=0\) for \(t\geq\varepsilon/2\). Let \(\{x_{1},\dots,x_{M}\}\) be a \((\varepsilon/4)\)-spanning set of \(\mathcal{X}\). Then we set \(f(x)=(\psi(d(x,x_{1})),\psi(d(x,x_{2})),\dots,\psi(d(x,x_{M})))\).
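One can verify the required property: if \(\mathbf{d}(x,y)\geq\varepsilon\), choose \(x_{i}\) with \(\mathbf{d}(x,x_{i})\leq\varepsilon/4\); then \(\psi\left(\mathbf{d}(x,x_{i})\right)=1\), while \(\mathbf{d}(y,x_{i})\geq\mathbf{d}(x,y)-\mathbf{d}(x,x_{i})\geq 3\varepsilon/4\) gives \(\psi\left(\mathbf{d}(y,x_{i})\right)=0\), so \(\left\|f(x)-f(y)\right\|_{\infty}=1\).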
\[f_{N}(x)=\left(f(T^{u}x)\right)_{u\in[N]^{d}}.\]
Then we have
* \(\left\|f_{N}(x)-f_{N}(y)\right\|_{\infty}\leq L\,\mathbf{d}_{N}(x,y)\) for all \(x,y\in\mathcal{X}\),
* if \(\mathbf{d}_{N}(x,y)\geq\varepsilon\) then \(\left\|f_{N}(x)-f_{N}(y)\right\|_{\infty}=1\).
Let \(\tau\) be an arbitrary positive number. We choose a positive number \(\delta<1\) satisfying
\[4^{M}(L+1)^{1+s+\tau+\left\|\varphi\right\|_{\infty}}\,\delta^{\tau}<1. \tag{5.2}\]
From \(\underline{\operatorname{mdim}}_{\mathrm{H}}(\mathcal{X},T,\mathbf{d},\varphi)<s\), there is a sequence of natural numbers \(N_{1}<N_{2}<N_{3}<\dots\) such that
\[\dim_{\mathrm{H}}\left(\mathcal{X},\mathbf{d}_{N_{n}},\varphi_{N_{n}},\delta \right)<sN_{n}^{d}\quad\text{for all $n\geq 1$.}\]
Then \(\mathcal{H}_{\delta}^{sN_{n}^{d}}\left(\mathcal{X},\mathbf{d}_{N_{n}},\varphi_ {N_{n}}\right)<1\) and hence
\[\mathcal{H}_{\delta}^{(s+\tau)N_{n}^{d}}\left(\mathcal{X},\mathbf{d}_{N_{n}}, \varphi_{N_{n}}\right)\leq\delta^{\tau N_{n}^{d}}\mathcal{H}_{\delta}^{sN_{n}^ {d}}\left(\mathcal{X},\mathbf{d}_{N_{n}},\varphi_{N_{n}}\right)<\delta^{\tau N _{n}^{d}}.\]
Therefore
\[4^{MN_{n}^{d}}(L+1)^{1+(s+\tau)N_{n}^{d}+\left\|\varphi_{N_{n}}\right\|_{\infty}}\,\mathcal{H}_{1}^{(s+\tau)N_{n}^{d}}\left(\mathcal{X},\mathbf{d}_{N_{n}},\varphi_{N_{n}}\right)<\left\{4^{M}(L+1)^{1+s+\tau+\left\|\varphi\right\|_{\infty}}\,\delta^{\tau}\right\}^{N_{n}^{d}}<1\quad\text{by (5.2)}.\]

Hence we can apply Lemma 5.3 to the map \(f_{N_{n}}\colon\mathcal{X}\to[0,1]^{MN_{n}^{d}}\) with respect to the metric \(\mathbf{d}_{N_{n}}\) and the potential \(\varphi_{N_{n}}\), and we obtain

\[\operatorname{Widim}_{\varepsilon}^{\prime}\left(\mathcal{X},\mathbf{d}_{N_{n}},\varphi_{N_{n}}\right)\leq(s+\tau)N_{n}^{d}+1\quad\text{for all }n\geq 1.\]

The limit \(\lim_{N\to\infty}\operatorname{Widim}_{\varepsilon}^{\prime}(\mathcal{X},\mathbf{d}_{N},\varphi_{N})/N^{d}\) exists by Lemma 5.2, so it is at most \(s+\tau\). Letting \(\varepsilon\to 0\) in (5.1) we get \(\operatorname{mdim}(\mathcal{X},T,\varphi)\leq s+\tau\). Since \(\tau\) is an arbitrary positive number, \(\operatorname{mdim}(\mathcal{X},T,\varphi)\leq s\). This completes the proof.

### Proof of Theorem 3.4

Now we can prove Theorem 3.4. We write the statement again.

**Theorem 5.5** (= Theorem 3.4).: _Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Then_

\[\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi).\]

Proof.: Set \(\rho=\overline{\mathbf{d}}_{1}^{\mathbb{R}}\) and \(\psi=\varphi_{1}^{\mathbb{R}}\). By Proposition 4.4

\[\operatorname{mdim}(\mathcal{X},T,\varphi)=\operatorname{mdim}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\psi\right),\quad\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}(\mathcal{X},T,\mathbf{d},\varphi)=\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\rho,\psi\right).\]

By Lemma 4.2 there exists a metric \(\mathbf{d}^{\prime}\) on \(\mathcal{X}\), compatible with the given topology, such that \(\mathbf{d}^{\prime}\leq\rho\) and \((\mathcal{X},\mathbf{d}^{\prime})\) has the tame growth of covering numbers. By Proposition 4.3 and \(\mathbf{d}^{\prime}\leq\rho\)

\[\underline{\operatorname{mdim}}_{\mathrm{H}}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\mathbf{d}^{\prime},\psi\right)=\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\mathbf{d}^{\prime},\psi\right)\leq\underline{\operatorname{mdim}}_{\mathrm{H},L^{1}}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\rho,\psi\right).\]
By Theorem 5.4
\[\operatorname{mdim}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\varphi_{1}^{\mathbb{R}} \right)\leq\underline{\operatorname{mdim}_{\operatorname{H}}}\left(\mathcal{X},T|_{\mathbb{Z}^{d}},\mathbf{d}^{\prime},\varphi_{1}^{\mathbb{R}}\right).\]
Combining all the above inequalities, we conclude
\[\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\underline{\operatorname{mdim}_{ \operatorname{H},L^{1}}}\left(\mathcal{X},T,\mathbf{d},\varphi\right).\]
**Remark 5.6**.: Since \(\underline{\operatorname{mdim}_{\operatorname{H},L^{1}}}\left(\mathcal{X},T, \mathbf{d},\varphi\right)\leq\underline{\operatorname{mdim}_{\operatorname{H }}}\left(\mathcal{X},T,\mathbf{d},\varphi\right)\), we also have
\[\operatorname{mdim}(\mathcal{X},T,\varphi)\leq\underline{\operatorname{mdim}_ {\operatorname{H}}}\left(\mathcal{X},T,\mathbf{d},\varphi\right).\]
We can directly prove this inequality (for \(\mathbb{R}^{d}\)-actions) without using \(\mathbb{Z}^{d}\)-actions. The proof is almost identical to the proof of Theorem 5.4. However we have not found a direct proof of Theorem 3.4 (the \(L^{1}\) version) so far.
## 6. Mean Hausdorff dimension is bounded by rate distortion dimension: proof of Theorem 3.7
In this section we prove Theorem 3.7 (dynamical Frostman's lemma). The proof is based on results on mutual information prepared in SS2.2. Another key ingredient is the following version of Frostman's lemma. This was proved in [14, Corollary 4.4].
**Lemma 6.1**.: _For any \(0<c<1\) there exists \(\delta_{0}=\delta_{0}(c)\in(0,1)\) such that for any compact metric space \((\mathcal{X},\mathbf{d})\) and any \(0<\delta\leq\delta_{0}\) there exists a Borel probability measure \(\nu\) on \(\mathcal{X}\) satisfying_
\[\nu(E)\leq\left(\operatorname{Diam}E\right)^{c\,\operatorname{dim}_{\mathrm{H}}(\mathcal{X},\mathbf{d},\delta)}\quad\text{for all Borel sets }E\subset\mathcal{X}\text{ with }\operatorname{Diam}E<\frac{\delta}{6}.\]
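For comparison, the classical Frostman lemma states, roughly, that if \(\mathcal{H}^{s}(\mathcal{X},\mathbf{d})>0\) then \(\mathcal{X}\) carries a Borel probability measure \(\nu\) with \(\nu\left(B_{r}(x)\right)\leq C\,r^{s}\) for all balls; Lemma 6.1 is a uniform, small-scale version of this fact.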
We also need the following elementary lemma. This was proved in [14, Appendix].
**Lemma 6.2**.: _Let \(A\) be a finite set and \(\{\mu_{n}\}\) a sequence of probability measures on \(A\). Suppose that \(\mu_{n}\) converges to some probability measure \(\mu\) in the weak\({}^{*}\) topology (i.e. \(\mu_{n}(a)\to\mu(a)\) for every \(a\in A\)). Then there exists a sequence of probability measures \(\pi_{n}\) on \(A\times A\) such that_
* \(\pi_{n}\) _is a coupling between_ \(\mu_{n}\) _and_ \(\mu\)_, i.e., the first and second marginals of_ \(\pi_{n}\) _are_ \(\mu_{n}\) _and_ \(\mu\) _respectively,_
* \(\pi_{n}\) _converges to_ \((\operatorname{id}\times\operatorname{id})_{*}\mu\) _in the weak_\({}^{*}\) _topology, namely_ \[\pi_{n}(a,b)\to\begin{cases}\mu(a)&(\text{if $a=b$})\\ 0&(\text{if $a\neq b$})\end{cases}.\]
We write the statement of Theorem 3.7 again.
**Theorem 6.3** (= Theorem 3.7).: _Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Then we have_
\[\overline{\operatorname{mdim}}_{\operatorname{H},L^{1}}\left(\mathcal{X},T, \mathbf{d},\varphi\right)\leq\sup_{\mu\in\mathscr{M}^{T}(\mathcal{X})}\left( \underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{ X}}\varphi\,d\mu\right).\]
_Here recall that \(\mathscr{M}^{T}(\mathcal{X})\) is the set of all \(T\)-invariant Borel probability measures on \(\mathcal{X}\)._
Proof.: Let \(c\) and \(s\) be arbitrary real numbers with \(0<c<1\) and \(s<\overline{\operatorname{mdim}}_{\operatorname{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi)\). We will construct \(\mu\in\mathscr{M}^{T}(\mathcal{X})\) satisfying
\[\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{ X}}\varphi\,d\mu\geq cs-(1-c)\left\|\varphi\right\|_{\infty}. \tag{6.1}\]
If this is proved then we get the claim of the theorem by letting \(c\to 1\) and \(s\to\overline{\operatorname{mdim}}_{\operatorname{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi)\).
Take \(\eta>0\) with \(\overline{\operatorname{mdim}}_{\operatorname{H},L^{1}}(\mathcal{X},T, \mathbf{d},\varphi)>s+2\eta\). Let \(\delta_{0}=\delta_{0}(c)\in(0,1)\) be the constant introduced in Lemma 6.1. There are \(\delta\in(0,\delta_{0})\) and a sequence of positive numbers \(L_{1}<L_{2}<L_{3}<\cdots\to\infty\) satisfying
\[\operatorname{dim}_{\operatorname{H}}\left(\mathcal{X},\overline{\mathbf{d}} _{L_{n}},\varphi_{L_{n}},\delta\right)>(s+2\eta)L_{n}^{d}\]
for all \(n\geq 1\).
For a real number \(t\) we set
\[\mathcal{X}_{n}(t):=\left(\frac{\varphi_{L_{n}}}{L_{n}^{d}}\right)^{-1}[t,t+\eta]=\left\{x\in\mathcal{X}\,\middle|\,t\leq\frac{\varphi_{L_{n}}(x)}{L_{n}^{d}}\leq t+\eta\right\}.\]
**Claim 6.4**.: _We can choose \(t\in[-\left\|\varphi\right\|_{\infty},\left\|\varphi\right\|_{\infty}]\) such that for infinitely many \(n\) we have_
\[\operatorname{dim}_{\operatorname{H}}\left(\mathcal{X}_{n}(t),\overline{ \mathbf{d}}_{L_{n}},\delta\right)\geq(s-t)L_{n}^{d}.\]
_Notice that, in particular, this inequality implies that \(\mathcal{X}_{n}(t)\) is not empty because we assumed that \(\operatorname{dim}_{\operatorname{H}}(\cdot)\) is \(-\infty\) for the empty set._
Proof.: We have⁵ \(\mathcal{H}_{\delta}^{(s+2\eta)L_{n}^{d}}\left(\mathcal{X},\overline{\mathbf{d}}_{L_{n}},\varphi_{L_{n}}\right)\geq 1\). Set \(m=\left\lceil 2\left\|\varphi\right\|_{\infty}/\eta\right\rceil\). We have
Footnote 5: The quantity \(\mathcal{H}_{\delta}^{(s+2\eta)L_{n}^{d}}\left(\mathcal{X},\overline{\mathbf{d}}_{L_{n}},\varphi_{L_{n}}\right)\) is defined only when \((s+2\eta)L_{n}^{d}>\max\varphi_{L_{n}}\). Therefore the following argument is problematic if we have \((s+2\eta)L_{n}^{d}\leq\max\varphi_{L_{n}}\) for all but finitely many \(n\). However, in this case, there is \(t\in[-\left\|\varphi\right\|_{\infty},\left\|\varphi\right\|_{\infty}]\) such that \(t\geq s+\eta\) and \(\mathcal{X}_{n}(t)\neq\emptyset\) for infinitely many \(n\). Then we have \(\operatorname{dim}_{\mathrm{H}}(\mathcal{X}_{n}(t),\overline{\mathbf{d}}_{L_{n}},\delta)\geq 0>(s-t)L_{n}^{d}\) for infinitely many \(n\) for this choice of \(t\).
\[\mathcal{X}=\bigcup_{\ell=0}^{m-1}\mathcal{X}_{n}\left(-\left\|\varphi\right\| _{\infty}+\ell\eta\right).\]
Then there exists \(t\in\{-\left\|\varphi\right\|_{\infty}+\ell\eta\mid\ell=0,1,\ldots,m-1\}\) such that
\[\mathcal{H}_{\delta}^{(s+2\eta)L_{n}^{d}}\left(\mathcal{X}_{n}(t),\overline{ \mathbf{d}}_{L_{n}},\varphi_{L_{n}}\right)\geq\frac{1}{m}\quad\text{for infinitely many $n$.}\]
On the set \(\mathcal{X}_{n}(t)\) we have
\[(s+2\eta)L_{n}^{d}-\varphi_{L_{n}}\geq(s+2\eta)L_{n}^{d}-(t+\eta)L_{n}^{d}=(s-t+ \eta)L_{n}^{d}.\]
Hence
\[\mathcal{H}_{\delta}^{(s+2\eta)L_{n}^{d}}\left(\mathcal{X}_{n}(t),\overline{\mathbf{d}}_{L_{n}},\varphi_{L_{n}}\right) \leq\mathcal{H}_{\delta}^{(s-t+\eta)L_{n}^{d}}\left(\mathcal{X}_{ n}(t),\overline{\mathbf{d}}_{L_{n}}\right)\] \[\leq\delta^{\eta L_{n}^{d}}\mathcal{H}_{\delta}^{(s-t)L_{n}^{d}} \left(\mathcal{X}_{n}(t),\overline{\mathbf{d}}_{L_{n}}\right).\]
Therefore for infinitely many \(n\) we have
\[\mathcal{H}_{\delta}^{(s-t)L_{n}^{d}}\left(\mathcal{X}_{n}(t),\overline{ \mathbf{d}}_{L_{n}}\right)\geq\frac{\delta^{-\eta L_{n}^{d}}}{m}\to\infty\quad( n\to\infty).\]
Thus \(\dim_{\mathrm{H}}\left(\mathcal{X}_{n}(t),\overline{\mathbf{d}}_{L_{n}}, \delta\right)\geq(s-t)L_{n}^{d}\) for infinitely many \(n\).
We fix \(t\in[-\left\|\varphi\right\|_{\infty},\left\|\varphi\right\|_{\infty}]\) satisfying the statement of this claim. By choosing a subsequence (also denoted by \(L_{n}\)) we can assume that
\[\dim_{\mathrm{H}}\left(\mathcal{X}_{n}(t),\overline{\mathbf{d}}_{L_{n}}, \delta\right)\geq(s-t)L_{n}^{d}\quad\text{for all $n$}.\]
By a version of Frostman's lemma (Lemma 6.1), there is a Borel probability measure \(\nu_{n}\) on \(\mathcal{X}_{n}(t)\) such that
\[\nu_{n}(E)\leq\left(\operatorname{Diam}\left(E,\overline{\mathbf{d}}_{L_{n}} \right)\right)^{c(s-t)L_{n}^{d}}\quad\text{for all Borel sets $E\subset\mathcal{X}$ with $\operatorname{Diam}\left(E,\overline{\mathbf{d}}_{L_{n}}\right)<\frac{ \delta}{6}$}. \tag{6.2}\]
We define a Borel probability measure \(\mu_{n}\) on \(\mathcal{X}\) by
\[\mu_{n}=\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}T_{*}^{u}\nu_{n}\,du.\]
By choosing a subsequence (also denoted by \(\mu_{n}\)) we can assume that \(\mu_{n}\) converges to \(\mu\in\mathscr{M}^{T}(\mathcal{X})\) in the weak\({}^{*}\) topology. We have
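Here the limit \(\mu\) is indeed \(T\)-invariant (a standard averaging estimate): for each \(w\in\mathbb{R}^{d}\), the cubes \([0,L_{n})^{d}\) and \(w+[0,L_{n})^{d}\) differ in a set of Lebesgue measure at most \(2d\,|w|_{\infty}L_{n}^{d-1}\), hence

\[\left\|T_{*}^{w}\mu_{n}-\mu_{n}\right\|=\frac{1}{L_{n}^{d}}\left\|\int_{w+[0,L_{n})^{d}}T_{*}^{u}\nu_{n}\,du-\int_{[0,L_{n})^{d}}T_{*}^{u}\nu_{n}\,du\right\|\leq\frac{2d\,|w|_{\infty}}{L_{n}}\to 0\quad(n\to\infty),\]

where \(\left\|\cdot\right\|\) denotes the total variation norm. Therefore \(T_{*}^{w}\mu=\mu\) for every \(w\).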
\[\int_{\mathcal{X}}\varphi\,d\mu_{n}=\int_{\mathcal{X}}\frac{\varphi_{L_{n}}}{ L_{n}^{d}}\,d\nu_{n}=\int_{\mathcal{X}_{n}(t)}\frac{\varphi_{L_{n}}}{L_{n}^{d}}\,d \nu_{n}\geq t.\]
Here we have used that \(\nu_{n}\) is supported in \(\mathcal{X}_{n}(t)\) in the second equality and that \(\varphi_{L_{n}}/L_{n}^{d}\geq t\) on \(\mathcal{X}_{n}(t)\) in the last inequality. Since \(\mu_{n}\rightharpoonup\mu\), we have
\[\int_{\mathcal{X}}\varphi\,d\mu\geq t.\]
If \(t\geq s\) then (6.1) trivially holds (recalling \(0<c<1\)):
\[\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{ X}}\varphi\,d\mu\geq t\geq cs-(1-c)\left\|\varphi\right\|_{\infty}.\]
Therefore we assume \(s>t\).
We will prove that for sufficiently small \(\varepsilon>0\)
\[R(\mathbf{d},\mu,\varepsilon)\geq c(s-t)\log(1/\varepsilon)-Kc(s-t), \tag{6.3}\]
where \(R(\mathbf{d},\mu,\varepsilon)\) is the rate distortion function and \(K\) is the universal positive constant introduced in Proposition 2.13. Once this is proved, we have
\[\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)=\liminf_{ \varepsilon\to 0}\frac{R(\mathbf{d},\mu,\varepsilon)}{\log(1/\varepsilon)} \geq c(s-t).\]
Then we get (6.1) by
\[\underline{\operatorname{rdim}}(\mathcal{X},T,\mathbf{d},\mu)+\int_{\mathcal{ X}}\varphi\,d\mu\geq c(s-t)+t=cs+(1-c)t\geq cs-(1-c)\left\|\varphi\right\|_{ \infty}.\]
Here we have used \(0<c<1\) and \(t\geq-\left\|\varphi\right\|_{\infty}\). So the task is to prove (6.3).
Let \(\varepsilon\) be a positive number with \(2\varepsilon\log(1/\varepsilon)<\delta/6\). Let \(M\) be a positive number, and let \(X\) and \(Y\) be random variables such that
* \(X\) takes values in \(\mathcal{X}\) with \(\operatorname{Law}X=\mu\),
* \(Y\) takes values in \(L^{1}([0,M)^{d},\mathcal{X})\) with \(\mathbb{E}\left(\int_{[0,M)^{d}}\mathbf{d}(T^{v}X,Y_{v})\,dv\right)< \varepsilon\,M^{d}\).
We want to prove
\[\frac{1}{M^{d}}I(X;Y)\geq c(s-t)\log(1/\varepsilon)-Kc(s-t).\]
If this is proved then we get (6.3) and the proof is done. We can assume that \(Y\) takes only finitely many values (Remark 2.15). We denote the set of values of \(Y\) by \(\mathcal{Y}\). This is a finite subset of \(L^{1}([0,M)^{d},\mathcal{X})\).
Take a positive number \(\tau\) satisfying \(\mathbb{E}\left(\int_{[0,M)^{d}}\mathbf{d}(T^{v}X,Y_{v})\,dv\right)<( \varepsilon-3\tau)M^{d}\). We take a measurable partition
\[\mathcal{X}=P_{1}\cup P_{2}\cup\cdots\cup P_{\alpha}\quad\text{(disjoint union)}\]
such that \(\operatorname{Diam}(P_{i},\overline{\mathbf{d}}_{M})<\tau\) and \(\mu(\partial P_{i})=0\) for all \(1\leq i\leq\alpha\). We pick a point \(x_{i}\in P_{i}\) for each \(i\) and set \(A=\{x_{1},\ldots,x_{\alpha}\}\). We define a map \(\mathcal{P}\colon\mathcal{X}\to A\) by \(\mathcal{P}(P_{i})=\{x_{i}\}\). Then we have
\[\mathbb{E}\left(\frac{1}{M^{d}}\int_{[0,M)^{d}}\mathbf{d}\left(T^{v}\mathcal{ P}(X),Y_{v}\right)\,dv\right)<\varepsilon-2\tau. \tag{6.4}\]
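This follows from the triangle inequality: since \(\overline{\mathbf{d}}_{M}\left(X,\mathcal{P}(X)\right)<\tau\) pointwise,

\[\mathbb{E}\left(\frac{1}{M^{d}}\int_{[0,M)^{d}}\mathbf{d}\left(T^{v}\mathcal{P}(X),Y_{v}\right)dv\right)\leq\mathbb{E}\,\overline{\mathbf{d}}_{M}\left(\mathcal{P}(X),X\right)+\mathbb{E}\left(\frac{1}{M^{d}}\int_{[0,M)^{d}}\mathbf{d}\left(T^{v}X,Y_{v}\right)dv\right)<\tau+(\varepsilon-3\tau)=\varepsilon-2\tau.\]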
We consider the push-forward measures \(\mathcal{P}_{*}\mu_{n}\) on \(A\). They converge to \(\mathcal{P}_{*}\mu\) as \(n\to\infty\) in the weak\({}^{*}\) topology by \(\mu(\partial P_{i})=0\).
By Lemma 6.2, we can construct random variables \(X(n)\) coupled to \(\mathcal{P}(X)\) such that \(X(n)\) takes values in \(A\) with \(\operatorname{Law}X(n)=\mathcal{P}_{*}\mu_{n}\) and
\[\mathbb{P}\left(X(n)=x_{i},\mathcal{P}(X)=x_{j}\right)\to\delta_{ij}\mathbb{P }\left(\mathcal{P}(X)=x_{j}\right)\quad(n\to\infty).\]
Then \(\mathbb{E}\,\overline{\mathbf{d}}_{M}\left(X(n),\mathcal{P}(X)\right)\to 0\) as \(n\to\infty\). We consider⁶ that \(X(n)\) is coupled to \(Y\) with the conditional distribution

Footnote 6: This sentence is not rigorous. Strictly speaking, we can construct random variables \(X(n),X^{\prime},Y^{\prime}\) defined on a common probability space such that \(\operatorname{Law}\left(X^{\prime},Y^{\prime}\right)=\operatorname{Law}\left(\mathcal{P}(X),Y\right)\),

\[\mathbb{P}\left(X(n)=x_{i},X^{\prime}=x_{j}\right)\to\delta_{ij}\mathbb{P}\left(X^{\prime}=x_{j}\right)\quad(n\to\infty),\]

and

\[\mathbb{P}\left(X(n)=x_{i},Y^{\prime}=y\,\middle|\,X^{\prime}=x_{j}\right)=\mathbb{P}\left(X(n)=x_{i}\,\middle|\,X^{\prime}=x_{j}\right)\cdot\mathbb{P}\left(Y^{\prime}=y\,\middle|\,X^{\prime}=x_{j}\right).\]

For simplicity we identify \(X^{\prime}\) and \(Y^{\prime}\) with \(\mathcal{P}(X)\) and \(Y\) respectively.
\[\mathbb{P}\left(X(n)=x_{i},Y=y\,\middle|\,\mathcal{P}(X)=x_{j}\right)=\mathbb{P}\left(X(n)=x_{i}\,\middle|\,\mathcal{P}(X)=x_{j}\right)\cdot\mathbb{P}\left(Y=y\,\middle|\,\mathcal{P}(X)=x_{j}\right)\]
for \(x_{i},x_{j}\in A\) and \(y\in\mathcal{Y}\). Namely \(X(n)\) and \(Y\) are conditionally independent given \(\mathcal{P}(X)\). Then
\[\mathbb{P}\left(X(n)=x_{i},Y=y\right) =\sum_{j=1}^{\alpha}\mathbb{P}\left(X(n)=x_{i},\mathcal{P}(X)=x_{ j}\right)\cdot\mathbb{P}\left(Y=y|\,\mathcal{P}(X)=x_{j}\right)\] \[\to\mathbb{P}\left(\mathcal{P}(X)=x_{i},Y=y\right)\quad(n\to \infty).\]
By (6.4)
\[\mathbb{E}\left(\frac{1}{M^{d}}\int_{[0,M)^{d}}\mathbf{d}\left(T^{u}X(n),Y_{u} \right)du\right)<\varepsilon-2\tau\quad\text{for large $n$}. \tag{6.5}\]
Notice that \((X(n),Y)\) take values in a fixed finite set \(A\times\mathcal{Y}\) and that their distributions converge to that of \((\mathcal{P}(X),Y)\). Hence by Lemma 2.6
\[I\left(X(n);Y\right)\to I\left(\mathcal{P}(X);Y\right)\quad(n\to\infty).\]
We want to estimate \(I\left(X(n);Y\right)\) from below.
Fix a point \(x_{0}\in\mathcal{X}\). We will also denote by \(x_{0}\) any constant function whose value is \(x_{0}\). For \(x\in A\) and \(y\in L^{1}([0,M)^{d},\mathcal{X})\) we define a conditional probability mass function by
\[\rho_{n}(y|x)=\mathbb{P}\left(Y=y|\,X(n)=x\right).\]
This is nonzero only for \(y\in\mathcal{Y}\). (Here \(\rho_{n}(\cdot|x)\) may be an arbitrary probability measure on \(\mathcal{Y}\) if \(\mathbb{P}\left(X(n)=x\right)=0\).)
We define \(\Lambda\subset\mathbb{R}^{d}\) by
\[\Lambda=\left\{\left(Mm_{1},Mm_{2},\ldots,Mm_{d}\right)\middle|\,m_{k}\in \mathbb{Z},\;0\leq m_{k}\leq\frac{L_{n}}{M}-2\left(1\leq k\leq d\right)\right\}.\]
Let \(v\in[0,M)^{d}\). We have
\[\bigcup_{\lambda\in\Lambda}\left(v+\lambda+[0,M)^{d}\right)\subset[0,L_{n})^{ d}.\]
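Indeed, for \(v,w\in[0,M)^{d}\) and \(\lambda\in\Lambda\), every coordinate of \(v+\lambda+w\) is nonnegative and smaller than

\[M+M\left(\frac{L_{n}}{M}-2\right)+M=L_{n}.\]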
Notice that the left-hand side is a disjoint union. Here \(v+\lambda+[0,M)^{d}=\{v+\lambda+w\mid w\in[0,M)^{d}\}\). Set
\[E_{v}=[0,L_{n})^{d}\setminus\bigcup_{\lambda\in\Lambda}\left(v+\lambda+[0,M)^{d }\right).\]
See Figure 3.
For \(x\in\mathcal{X}\) and \(f\in L^{1}\left([0,L_{n})^{d},\mathcal{X}\right)\) we define a conditional probability mass function \(\sigma_{n,v}(f|x)\) by
\[\sigma_{n,v}(f|x)=\delta_{x_{0}}\left(f|_{E_{v}}\right)\cdot\prod_{\lambda\in \Lambda}\rho_{n}\left(f|_{v+\lambda+[0,M)^{d}}\big{|}\,\mathcal{P}(T^{v+ \lambda}x)\right).\]
Here \(f|_{E_{v}}\) is the restriction of \(f\) to \(E_{v}\) and \(\delta_{x_{0}}\) is the delta probability measure concentrated at the constant function \(x_{0}\in L^{1}(E_{v},\mathcal{X})\). We naturally consider \(f|_{v+\lambda+[0,M)^{d}}\) (the restriction of \(f\) to \(v+\lambda+[0,M)^{d}\)) as an element of \(L^{1}\left([0,M)^{d},\mathcal{X}\right)\).
We define a transition probability \(\sigma_{n}\) on \(\mathcal{X}\times L^{1}([0,L_{n})^{d},\mathcal{X})\) by
\[\sigma_{n}(B\mid x)=\frac{1}{M^{d}}\int_{[0,M)^{d}}\sigma_{n,v}(B\mid x)\,dv\]
for \(x\in\mathcal{X}\) and Borel subsets \(B\subset L^{1}([0,L_{n})^{d},\mathcal{X})\). Here \(\sigma_{n,v}(B\mid x)=\sum_{f\in B}\sigma_{n,v}(f|x)\). (Notice that for each \(x\in\mathcal{X}\) the function \(\sigma_{n,v}(f|x)\) is nonzero only for finitely many \(f\).)
Take random variables \(Z\) and \(W\) such that \(Z\) takes values in \(\mathcal{X}\) with \(\text{Law}Z=\nu_{n}\) and that \(W\) takes values in \(L^{1}([0,L_{n})^{d},\mathcal{X})\) with
\[\mathbb{P}\left(W\in B\middle|\,Z=x\right)=\sigma_{n}(B\mid x)\quad(x\in \mathcal{X},B\subset L^{1}([0,L_{n})^{d},\mathcal{X})).\]
Notice that \(Z\) and \(W\) depend on \(n\). Rigorously speaking, we should denote them by \(Z^{(n)}\) and \(W^{(n)}\). However we suppress their dependence on \(n\) in the notations for simplicity.
We estimate \(\mathbb{E}\left(\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W_{u} )\,du\right)\) and \(I(Z;W)\).
**Claim 6.5**.: _For all sufficiently large \(n\) we have_
\[\mathbb{E}\left(\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W_{u })\,du\right)<\varepsilon.\]
Proof.: For each \(v\in[0,M)^{d}\) we take a random variable \(W(v)\) coupled to \(Z\) such that \(W(v)\) takes values in \(L^{1}([0,L_{n})^{d},\mathcal{X})\) with \(\mathbb{P}\left(W(v)=f\mid Z=x\right)=\sigma_{n,v}(f|x)\) for \(x\in\mathcal{X}\) and \(f\in L^{1}([0,L_{n})^{d},\mathcal{X})\). Then
\[\mathbb{E}\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W_{u})\,du=\frac{1}{M^{d}}\int_ {[0,M)^{d}}\mathbb{E}\left(\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W(v)_{u})\,du \right)\,dv.\]
For each \(v\in[0,M)^{d}\) we have
\[\mathbb{E}\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W(v)_{u})\,du\] \[=\mathbb{E}\int_{E_{v}}\mathbf{d}(T^{u}Z,W(v)_{u})\,du+\sum_{ \lambda\in\Lambda}\mathbb{E}\int_{v+\lambda+[0,M)^{d}}\mathbf{d}(T^{u}Z,W(v)_ {u})\,du\] \[\leq CL_{n}^{d-1}+\sum_{\lambda\in\Lambda}\mathbb{E}\int_{v+ \lambda+[0,M)^{d}}\mathbf{d}(T^{u}Z,W(v)_{u})\,du,\]
where \(C\) is a positive constant independent of \(v,L_{n}\). In the last inequality we have used \(\mathbf{m}(E_{v})\leq\mathrm{const}\cdot L_{n}^{d-1}\). Since \(\overline{\mathbf{d}}_{M}(x,\mathcal{P}(x))<\tau\) for every \(x\in\mathcal{X}\), we have
\[\mathbb{E}\int_{v+\lambda+[0,M)^{d}}\mathbf{d}(T^{u}Z,W(v)_{u}) \,du\] \[\leq M^{d}\tau+\mathbb{E}\int_{[0,M)^{d}}\mathbf{d}\left(T^{u} \mathcal{P}(T^{v+\lambda}Z),W(v)_{v+\lambda+u}\right)du\] \[\leq M^{d}\tau+\sum_{f\in\mathcal{Y}}\int_{[0,M)^{d}}\left(\int_{ \mathcal{X}}\mathbf{d}(T^{u}x,f_{u})\rho_{n}(f|x)\,d\left(\mathcal{P}_{*}T_{* }^{v+\lambda}\nu_{n}(x)\right)\right)du.\]
We sum up these estimates over \(\lambda\in\Lambda\). Noting \(M^{d}|\Lambda|\leq L_{n}^{d}\), we have
\[\mathbb{E}\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W(v)_{u})\,du\] \[\leq CL_{n}^{d-1}+\tau L_{n}^{d}+\sum_{\begin{subarray}{c}\lambda \in\Lambda\\ f\in\mathcal{Y}\end{subarray}}\int_{[0,M)^{d}}\left(\int_{\mathcal{X}}\mathbf{d }(T^{u}x,f_{u})\rho_{n}(f|x)d\left(\mathcal{P}_{*}T_{*}^{v+\lambda}\nu_{n}(x) \right)\right)du.\]
We integrate this over \(v\in[0,M)^{d}\). Note that7
Footnote 7: For two measures \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) on \(\mathcal{X}\) we write \(\mathbf{m}_{1}\leq\mathbf{m}_{2}\) if we have \(\mathbf{m}_{1}(B)\leq\mathbf{m}_{2}(B)\) for all Borel subsets \(B\subset\mathcal{X}\).
\[\int_{[0,M)^{d}}\left(\sum_{\lambda\in\Lambda}\mathcal{P}_{*}T_{* }^{v+\lambda}\nu_{n}\right)dv =\sum_{\lambda\in\Lambda}\int_{\lambda+[0,M)^{d}}\mathcal{P}_{*} T_{*}^{v}\nu_{n}\,dv\] \[\leq\int_{[0,L_{n})^{d}}\mathcal{P}_{*}T_{*}^{v}\nu_{n}\,dv=L_{n}^ {d}\mathcal{P}_{*}\mu_{n}.\]
Hence we have
\[\frac{1}{M^{d}}\int_{[0,M)^{d}}\mathbb{E}\left(\int_{[0,L_{n})^{d}} \mathbf{d}(T^{u}Z,W(v)_{u})\,du\right)\,dv\] \[\leq CL_{n}^{d-1}+\tau L_{n}^{d}+\frac{L_{n}^{d}}{M^{d}}\sum_{f\in \mathcal{Y}}\int_{[0,M)^{d}}\left(\int_{\mathcal{X}}\mathbf{d}(T^{u}x,f_{u}) \rho_{n}(f|x)d\mathcal{P}_{*}\mu_{n}(x)\right)du\] \[=CL_{n}^{d-1}+\tau L_{n}^{d}+\frac{L_{n}^{d}}{M^{d}}\mathbb{E} \left(\int_{[0,M)^{d}}\mathbf{d}\left(T^{u}X(n),Y_{u}\right)du\right).\]
In the last equality we have used \(\mathrm{Law}X(n)=\mathcal{P}_{*}\mu_{n}\) and \(\rho_{n}(f|x)=\mathbb{P}(Y=f\mid X(n)=x)\). Therefore
\[\mathbb{E}\left(\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W_{u })\,du\right)\leq\frac{C}{L_{n}}+\tau+\mathbb{E}\left(\frac{1}{M^{d}}\int_{[ 0,M)^{d}}\mathbf{d}\left(T^{u}X(n),Y_{u}\right)du\right).\]
The third term on the right-hand side is smaller than \(\varepsilon-2\tau\) for large \(n\) by (6.5). Therefore we have
\[\mathbb{E}\left(\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}\mathbf{d}(T^{u}Z,W_{u })\,du\right)<\varepsilon\]
for all sufficiently large \(n\).
**Claim 6.6**.: \[\frac{1}{L_{n}^{d}}I(Z;W)\leq\frac{1}{M^{d}}I\left(X(n);Y\right).\]
Proof.: We have \(I(Z;W)=I(\nu_{n},\sigma_{n})\). Since \(\sigma_{n}=(1/M^{d})\int_{[0,M)^{d}}\sigma_{n,v}\,dv\), we apply to it Proposition 2.10 (2) (the convexity of mutual information in transition probability):
\[I(\nu_{n},\sigma_{n})\leq\frac{1}{M^{d}}\int_{[0,M)^{d}}I(\nu_{n},\sigma_{n,v })\,dv.\]
By Lemma 2.7 (subadditivity of mutual information)8
Footnote 8: Let \(W(v)\) be the random variable introduced in Claim 6.5. Then \(I(\nu_{n},\sigma_{n,v})=I(Z;W(v))\). Consider the restrictions \(W(v)|_{v+\lambda+[0,M)^{d}}\) (\(\lambda\in\Lambda\)) and \(W(v)|_{E_{v}}\). From the definition of the measure \(\sigma_{n,v}\), they are conditionally independent given \(Z\). By Lemma 2.7
\[I(Z;W(v))\leq I\left(Z;W(v)|_{E_{v}}\right)+\sum_{\lambda\in\Lambda}I\left(Z;W(v)|_{v+\lambda+[0,M)^{d}}\right).\]
\(I\left(Z;W(v)|_{E_{v}}\right)=0\) because \(W(v)|_{E_{v}}\) is constantly equal to \(x_{0}\). We have
\[I\left(Z;W(v)|_{v+\lambda+[0,M)^{d}}\right)=I\left(\mathcal{P}(T^{v+\lambda}Z); W(v)|_{v+\lambda+[0,M)^{d}}\right)=I\left(\mathcal{P}_{*}T_{*}^{v+\lambda}\nu_{n}, \rho_{n}\right).\]
\[I(\nu_{n},\sigma_{n,v})\leq\sum_{\lambda\in\Lambda}I\left(\mathcal{P}_{*}T_{*}^ {v+\lambda}\nu_{n},\rho_{n}\right).\]
Hence
\[I(\nu_{n},\sigma_{n}) \leq\frac{1}{M^{d}}\sum_{\lambda\in\Lambda}\int_{[0,M)^{d}}I\left( \mathcal{P}_{*}T_{*}^{\lambda+v}\nu_{n},\rho_{n}\right)dv\] \[=\frac{1}{M^{d}}\int_{\cup_{\lambda\in\Lambda}\left(\lambda+[0,M) ^{d}\right)}I\left(\mathcal{P}_{*}T_{*}^{v}\nu_{n},\rho_{n}\right)dv\] \[\leq\frac{1}{M^{d}}\int_{[0,L_{n})^{d}}I\left(\mathcal{P}_{*}T_{* }^{v}\nu_{n},\rho_{n}\right)dv.\]
By Proposition 2.10 (1) (the concavity of mutual information in probability measure)
\[\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}I\left(\mathcal{P}_{*}T_{* }^{v}\nu_{n},\rho_{n}\right)dv \leq I\left(\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}\mathcal{P}_{* }T_{*}^{v}\nu_{n}\,dv,\rho_{n}\right)\] \[=I\left(\mathcal{P}_{*}\mu_{n},\rho_{n}\right)\] \[=I(X(n);Y).\]
Therefore we conclude
\[I(Z;W)=I(\nu_{n},\sigma_{n})\leq\frac{L_{n}^{d}}{M^{d}}I(X(n);Y).\]
We define a metric \(D_{n}\) on \(L^{1}\left([0,L_{n})^{d},\mathcal{X}\right)\) by
\[D_{n}(f,g)=\frac{1}{L_{n}^{d}}\int_{[0,L_{n})^{d}}\mathbf{d}\left(f(u),g(u) \right)du.\]
Then the map
\[F_{n}\colon\left(\mathcal{X},\overline{\mathbf{d}}_{L_{n}}\right)\ni x\mapsto \left(T^{t}x\right)_{t\in[0,L_{n})^{d}}\in\left(L^{1}\left([0,L_{n})^{d}, \mathcal{X}\right),D_{n}\right)\]
is an isometric embedding. Consider the push-forward measure \(F_{n*}\nu_{n}\) on \(L^{1}\left([0,L_{n})^{d},\mathcal{X}\right)\). It follows from (6.2) that
\[F_{n*}\nu_{n}\left(E\right)\leq\left(\operatorname{Diam}(E,D_{n})\right)^{c( s-t)L_{n}^{d}}\]
for all Borel subsets \(E\subset L^{1}\left([0,L_{n})^{d},\mathcal{X}\right)\) with \(\operatorname{Diam}(E,D_{n})<\delta/6\).
We have \(\operatorname{Law}F_{n}(Z)=F_{n*}\nu_{n}\), and by Claim 6.5
\[\mathbb{E}\left(D_{n}(F_{n}(Z),W)\right)<\varepsilon\quad\text{for large $n$}.\]
Since \(2\varepsilon\log(1/\varepsilon)<\delta/6\), we apply Proposition 2.13 (Kawabata-Dembo estimate) to \((F_{n}(Z),W)\) and get
\[I(Z;W)=I\left(F_{n}(Z);W\right)\geq c(s-t)L_{n}^{d}\log(1/\varepsilon)-K\left( c(s-t)L_{n}^{d}+1\right)\]
for large \(n\). Here \(K\) is a universal positive constant. By Claim 6.6
\[\frac{1}{M^{d}}I\left(X(n);Y\right)\geq\frac{1}{L_{n}^{d}}I\left(Z;W\right) \geq c(s-t)\log(1/\varepsilon)-K\left(c(s-t)+\frac{1}{L_{n}^{d}}\right)\]
for large \(n\). Since \(I\left(X(n);Y\right)\to I\left(\mathcal{P}(X);Y\right)\) as \(n\to\infty\), we get
\[\frac{1}{M^{d}}I\left(\mathcal{P}(X);Y\right)\geq c(s-t)\log(1/\varepsilon)-Kc( s-t).\]
Then we have
\[\frac{1}{M^{d}}I(X;Y)\geq\frac{1}{M^{d}}I\left(\mathcal{P}(X);Y\right)\geq c(s -t)\log(1/\varepsilon)-Kc(s-t).\]
This is what we want to prove.
Now we have proved Theorems 3.4 and 3.7. These two theorems imply Theorem 1.3 (Main Theorem), as we already explained in §3. Therefore we have completed the proof of Theorem 1.3.
## 7. Local nature of metric mean dimension with potential
This section is independent of the proof of Theorem 1.3 (Main Theorem). It can be read independently of Sections 2, 4, 5 and 6. Here we present a formula expressing metric mean dimension with potential by using a certain _local_ quantity. We plan to use it in a future study of geometric examples of dynamical systems [11, 12].
### A formula of metric mean dimension with potential
Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. For a subset \(E\subset\mathcal{X}\) and \(\varepsilon>0\) we set
\[P_{T}(E,\mathbf{d},\varphi,\varepsilon)=\liminf_{L\to\infty}\frac{\log\# \left(E,\mathbf{d}_{L},\varphi_{L},\varepsilon\right)}{L^{d}}.\]
Here recall that
\[\#\left(E,\mathbf{d}_{L},\varphi_{L},\varepsilon\right)=\inf\left\{\sum_{i=1} ^{n}(1/\varepsilon)^{\sup_{U_{i}}\varphi}\middle|\begin{aligned} E \subset U_{1}\cup\cdots\cup U_{n}.\text{ Each }U_{i}\text{ is an open set}\\ \text{ of }\mathcal{X}\text{ with }\operatorname{Diam}(U_{i}, \mathbf{d}_{L})<\varepsilon.\end{aligned}\right\}.\]
Also recall that the upper and lower metric mean dimensions with potential are defined by
\[\overline{\operatorname{mdim}}_{\operatorname{M}}(\mathcal{X},T, \mathbf{d},\varphi) =\limsup_{\varepsilon\to 0}\frac{P_{T}(\mathcal{X},\mathbf{d}, \varphi,\varepsilon)}{\log(1/\varepsilon)},\] \[\underline{\operatorname{mdim}}_{\operatorname{M}}(\mathcal{X},T, \mathbf{d},\varphi) =\liminf_{\varepsilon\to 0}\frac{P_{T}(\mathcal{X},\mathbf{d}, \varphi,\varepsilon)}{\log(1/\varepsilon)}.\]
For a (not necessarily bounded) subset \(A\) of \(\mathbb{R}^{d}\) we define a metric \(\mathbf{d}_{A}\) on \(\mathcal{X}\) by
\[\mathbf{d}_{A}(x,y)=\sup_{a\in A}\mathbf{d}\left(T^{a}x,T^{a}y\right).\]
(If \(A\) is unbounded, this metric is not compatible with the given topology of \(\mathcal{X}\) in general.) For \(x\in\mathcal{X}\) and \(\delta>0\) we define \(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{d}})\) as the closed \(\delta\)-ball with respect to \(\mathbf{d}_{\mathbb{R}^{d}}\):
\[B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{d}})=\{y\in\mathcal{X}\mid\mathbf{d}_{ \mathbb{R}^{d}}(x,y)\leq\delta\}=\{y\in\mathcal{X}\mid\mathbf{d}(T^{u}x,T^{u} y)\leq\delta\left(\forall u\in\mathbb{R}^{d}\right)\}.\]
The following is the main result of this section.
**Theorem 7.1**.: _For any \(\delta>0\) we have_
\[\overline{\operatorname{mdim}}_{\operatorname{M}}(\mathcal{X},T,\mathbf{d}, \varphi) =\limsup_{\varepsilon\to 0}\frac{\sup_{x\in\mathcal{X}}P_{T}\left(B_{ \delta}(x,\mathbf{d}_{\mathbb{R}^{d}}),\mathbf{d},\varphi,\varepsilon\right)}{ \log(1/\varepsilon)},\]
\[\underline{\operatorname{mdim}}_{\operatorname{M}}(\mathcal{X},T,\mathbf{d}, \varphi) =\liminf_{\varepsilon\to 0}\frac{\sup_{x\in\mathcal{X}}P_{T}\left(B_{ \delta}(x,\mathbf{d}_{\mathbb{R}^{d}}),\mathbf{d},\varphi,\varepsilon\right)}{ \log(1/\varepsilon)}.\]
Notice that \(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{d}})\) is not a neighborhood of \(x\) with respect to the original metric \(\mathbf{d}\) in general. Nevertheless we can calculate the metric mean dimension with potential by gathering such information.
In the case that \(\varphi\) is identically zero, Theorem 7.1 was proved in [14]. The proof of Theorem 7.1 follows the argument of [14], which is in turn based on the paper of Bowen [1].
### Tiling argument
Here we prepare a technical lemma (Lemma 7.2 below). For \(x=(x_{1},\ldots,x_{d})\in\mathbb{R}^{d}\) we set \(\left\|x\right\|_{\infty}=\max_{1\leq i\leq d}|x_{i}|\). A **cube** of \(\mathbb{R}^{d}\) is a set \(\Lambda\) of the form
\[\Lambda=u+[0,L]^{d}=\{u+v\mid v\in[0,L]^{d}\},\]
where \(u\in\mathbb{R}^{d}\) and \(L>0\). We set \(\ell(\Lambda)=L\). For \(r>0\) and \(A\subset\mathbb{R}^{d}\) we define
\[\partial(A,r)=\left\{x\in\mathbb{R}^{d}\mid\exists y\in A,\exists z\in \mathbb{R}^{d}\setminus A:\left\|x-y\right\|_{\infty}\leq r\text{ and }\left\|x-z\right\|_{\infty}\leq r\right\},\]
\[B_{r}(A)=A\cup\partial(A,r)=\{x\in\mathbb{R}^{d}\mid\exists y\in A:\left\|x- y\right\|_{\infty}\leq r\}.\]
For a finite set \(\mathcal{C}=\{\Lambda_{1},\ldots,\Lambda_{n}\}\) of cubes of \(\mathbb{R}^{d}\) we set
\[\ell_{\min}(\mathcal{C})=\min_{1\leq i\leq n}\ell(\Lambda_{i}),\quad\ell_{ \max}(\mathcal{C})=\max_{1\leq i\leq n}\ell(\Lambda_{i}).\]
The following lemma was proved in [14, Proposition 3.4].
**Lemma 7.2**.: _For any \(\eta>0\) there exists a natural number \(k_{0}=k_{0}(\eta)>0\) for which the following statement holds. Let \(A\) be a bounded Borel subset of \(\mathbb{R}^{d}\). Let \(\mathcal{C}_{k}\)\((1\leq k\leq k_{0})\) be finite sets of cubes of \(\mathbb{R}^{d}\) such that_
1. \(\ell_{\max}(\mathcal{C}_{1})\geq 1\) _and_ \(\ell_{\min}(\mathcal{C}_{k+1})\geq k_{0}\cdot\ell_{\max}(\mathcal{C}_{k})\) _for all_ \(1\leq k\leq k_{0}-1\)_,_
2. \(\mathbf{m}\left(\partial(A,\ell_{\max}(\mathcal{C}_{k_{0}}))\right)<(\eta/3) \cdot\mathbf{m}(A)\)_,_
3. \(A\subset\bigcup_{\Lambda\in\mathcal{C}_{k}}\Lambda\) _for every_ \(1\leq k\leq k_{0}\)_._
_Then there is a disjoint subfamily \(\mathcal{C}^{\prime}\subset\mathcal{C}_{1}\cup\cdots\cup\mathcal{C}_{k_{0}}\) satisfying_
\[\bigcup_{\Lambda\in\mathcal{C}^{\prime}}\Lambda\subset A,\quad\mathbf{m}\left( B_{1}\left(A\setminus\bigcup_{\Lambda\in\mathcal{C}^{\prime}}\Lambda \right)\right)<\eta\cdot\mathbf{m}(A).\]
_Here "disjoint" means that for any two distinct \(\Lambda_{1},\Lambda_{2}\in\mathcal{C}^{\prime}\) we have \(\Lambda_{1}\cap\Lambda_{2}=\emptyset\)._
This is a rather technical statement. The assumption (1) means that some cube of \(\mathcal{C}_{1}\) is not so small and that every cube in \(\mathcal{C}_{k+1}\) is much larger than cubes in \(\mathcal{C}_{k}\). The assumption (2) means that \(A\) is much larger than all the given cubes. The assumption (3) means that
each \(\mathcal{C}_{k}\) covers \(A\). The conclusion means that we can find a disjoint subfamily \(\mathcal{C}^{\prime}\) which covers a substantial portion of \(A\).
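The selection of \(\mathcal{C}^{\prime}\) in [14] is a Vitali-type greedy argument: cubes are examined from the largest scale \(\mathcal{C}_{k_{0}}\) down to \(\mathcal{C}_{1}\), and a cube is kept whenever it lies inside \(A\) and is disjoint from all cubes already kept; the quantitative assumptions (1)-(3) then guarantee that the kept cubes cover a substantial portion of \(A\). The following Python fragment is only a toy illustration of this greedy selection idea, not the proof of the lemma; cubes are represented by a corner-sidelength pair, and all names are illustrative.

```python
def disjoint(c1, c2):
    """Axis-aligned cubes (corner u, side L) are disjoint iff they are
    separated along some coordinate (cubes sharing only a face count
    as disjoint in this toy version)."""
    (u1, L1), (u2, L2) = c1, c2
    return any(u1[i] + L1 <= u2[i] or u2[i] + L2 <= u1[i]
               for i in range(len(u1)))

def contained(cube, A_corner, A_side):
    """Is the cube contained in A = A_corner + [0, A_side]^d ?"""
    u, L = cube
    return all(A_corner[i] <= u[i] and u[i] + L <= A_corner[i] + A_side
               for i in range(len(u)))

def greedy_selection(families, A_corner, A_side):
    """families = [C_1, ..., C_{k0}], ordered by increasing scale.
    Scan from the largest scale down and keep every cube that lies in A
    and is disjoint from all cubes kept so far."""
    chosen = []
    for family in reversed(families):
        for cube in family:
            if contained(cube, A_corner, A_side) and \
               all(disjoint(cube, c) for c in chosen):
                chosen.append(cube)
    return chosen
```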
### The case that \(\varphi\) is nonnegative
This subsection is also a preparation for the proof of Theorem 7.1. Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. Throughout this subsection, we assume that \(\varphi\) is a nonnegative function.
Recall that for a bounded Borel subset \(A\subset\mathbb{R}^{d}\) a new function \(\varphi_{A}\colon\mathcal{X}\to\mathbb{R}\) is defined by
\[\varphi_{A}(x)=\int_{A}\varphi(T^{u}x)\,du.\]
**Lemma 7.3**.: _Let \(0<\varepsilon<1\) and \(E\subset\mathcal{X}\). Let \(A,A_{1},A_{2},\ldots,A_{n}\) be bounded Borel subsets of \(\mathbb{R}^{d}\). If \(A\subset A_{1}\cup A_{2}\cup\cdots\cup A_{n}\) then_
\[\#\left(E,\mathbf{d}_{A},\varphi_{A},\varepsilon\right)\leq\prod_{k=1}^{n}\# \left(E,\mathbf{d}_{A_{k}},\varphi_{A_{k}},\varepsilon\right).\]
Proof.: Suppose we are given an open cover \(E\subset U_{k1}\cup\cdots\cup U_{km_{k}}\) with \(\operatorname{Diam}(U_{kj},\mathbf{d}_{A_{k}})<\varepsilon\) for each \(1\leq k\leq n\). Then
\[E\subset\bigcup\left\{U_{1j_{1}}\cap U_{2j_{2}}\cap\cdots\cap U_{nj_{n}}|\,1 \leq j_{1}\leq m_{1},1\leq j_{2}\leq m_{2},\ldots,1\leq j_{n}\leq m_{n}\right\}.\]
From \(A\subset A_{1}\cup\cdots\cup A_{n}\), the diameter of \(U_{1j_{1}}\cap U_{2j_{2}}\cap\cdots\cap U_{nj_{n}}\) is smaller than \(\varepsilon\) with respect to the metric \(\mathbf{d}_{A}\). Since \(\varphi\) is nonnegative (here we use this assumption), we have
\[\varphi_{A}\leq\varphi_{A_{1}}+\varphi_{A_{2}}+\cdots+\varphi_{A_{n}}\]
and hence
\[\sup_{U_{1j_{1}}\cap\cdots\cap U_{nj_{n}}}\varphi_{A}\leq\sum_{k=1}^{n}\sup_{ U_{kj_{k}}}\varphi_{A_{k}}.\]
Therefore we have
\[\sum_{\begin{subarray}{c}1\leq j_{1}\leq m_{1}\\ \vdots\\ 1\leq j_{n}\leq m_{n}\end{subarray}}\left(\frac{1}{\varepsilon}\right)^{\sup _{U_{1j_{1}}\cap\cdots\cap U_{nj_{n}}}\varphi_{A}}\leq\prod_{k=1}^{n}\left( \sum_{j=1}^{m_{k}}\left(\frac{1}{\varepsilon}\right)^{\sup_{U_{kj}}\varphi_{ A_{k}}}\right).\]
This proves the claim of the lemma.
**Lemma 7.4**.: _For \(0<\varepsilon<1\) and a bounded Borel subset \(A\subset\mathbb{R}^{d}\), we have_
\[\#\left(\mathcal{X},\mathbf{d}_{A},\varphi_{A},\varepsilon\right)\leq\{\# \left(\mathcal{X},\mathbf{d}_{[0,1]^{d}},\varphi_{[0,1]^{d}},\varepsilon\right) \}^{\mathbf{m}(B_{1}(A))}.\]
_Notice that \(\#\left(\mathcal{X},\mathbf{d}_{[0,1]^{d}},\varphi_{[0,1]^{d}},\varepsilon\right)\geq 1\) because \(0<\varepsilon<1\) and \(\varphi\) is nonnegative._
Proof.: Let \(\Omega\) be the set of \(u\in\mathbb{Z}^{d}\) with \(\left(u+[0,1]^{d}\right)\cap A\neq\emptyset\). We have
\[A\subset\bigcup_{u\in\Omega}\left(u+[0,1]^{d}\right)\subset B_{1}(A).\]
In particular the cardinality of \(\Omega\) is bounded from above by \(\mathbf{m}\left(B_{1}(A)\right)\). Then by Lemma 7.3
\[\#\left(\mathcal{X},\mathbf{d}_{A},\varphi_{A},\varepsilon\right) \leq\prod_{u\in\Omega}\#\left(\mathcal{X},\mathbf{d}_{u+[0,1]^{d} },\varphi_{u+[0,1]^{d}},\varepsilon\right)\] \[=\prod_{u\in\Omega}\#\left(\mathcal{X},\mathbf{d}_{[0,1]^{d}}, \varphi_{[0,1]^{d}},\varepsilon\right)\] \[\leq\{\#\left(\mathcal{X},\mathbf{d}_{[0,1]^{d}},\varphi_{[0,1]^ {d}},\varepsilon\right)\}^{\mathbf{m}\left(B_{1}(A)\right)}.\]
The following is the main result of this subsection. This is a modification of a classical result of Bowen [1, Proposition 2.2]. Here recall that we have assumed that \(\varphi\) is nonnegative.
**Proposition 7.5**.: _For positive numbers \(\delta,\beta\) and \(0<\varepsilon<1\) there is a positive number \(D=D(\delta,\beta,\varepsilon)\) for which the following statement holds. Set_
\[a=\frac{\sup_{x\in\mathcal{X}}P_{T}\left(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^ {d}}),\mathbf{d},\varphi,\varepsilon\right)}{\log(1/\varepsilon)}.\]
_Then for all sufficiently large \(L\) we have_
\[\sup_{x\in\mathcal{X}}\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}), \mathbf{d}_{L},\varphi_{L},\varepsilon\right)\leq\left(\frac{1}{\varepsilon} \right)^{(a+\beta)L^{d}}.\]
_Here \(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}})=\{y\in\mathcal{X}\mid\mathbf{d}_{[-D,L +D]^{d}}(x,y)\leq\delta\}\)._
Proof.: Choose a positive number \(\eta\) satisfying
\[\left(\#(\mathcal{X},\mathbf{d}_{[0,1]^{d}},\varphi_{[0,1]^{d}},\varepsilon) \right)^{\eta}<\left(\frac{1}{\varepsilon}\right)^{\frac{\beta}{2}}. \tag{7.1}\]
Let \(k_{0}=k_{0}(\eta)\) be the natural number introduced in Lemma 7.2.
We will construct the following data inductively on \(k=1,2,\ldots,k_{0}\).
* A finite set \(Y_{k}\subset\mathcal{X}\).
* Positive numbers \(L_{k}(y)\) and \(M_{k}(y)\) for each \(y\in Y_{k}\).
* Open neighborhoods \(V_{k}(y)\) and \(U_{k}(y)\) of \(y\) in \(\mathcal{X}\) with \(V_{k}(y)\subset U_{k}(y)\) for each \(y\in Y_{k}\).
We assume the following conditions.
1. \(L_{1}(y)>1\) for all \(y\in Y_{1}\).
2. \(L_{k}(y)>k_{0}L_{k-1}(z)\) for all \(y\in Y_{k}\), \(z\in Y_{k-1}\) and \(2\leq k\leq k_{0}\).
3. \(\#\left(U_{k}(y),\mathbf{d}_{L_{k}(y)},\varphi_{L_{k}(y)},\varepsilon\right) <(1/\varepsilon)^{(a+\frac{\beta}{2})L_{k}(y)^{d}}\) for all \(y\in Y_{k}\).
4. \(B_{\delta}(v,\mathbf{d}_{[-M_{k}(y),M_{k}(y)]^{d}})\subset U_{k}(y)\) for all \(y\in Y_{k}\) and \(v\in V_{k}(y)\).
5. \(\mathcal{X}=\bigcup_{y\in Y_{k}}V_{k}(y)\) for every \(1\leq k\leq k_{0}\).
The construction of these data goes as follows. Suppose that the data of the \((k-1)\)-th step (i.e. \(Y_{k-1},L_{k-1}(y),M_{k-1}(y),V_{k-1}(y),U_{k-1}(y)\)) have been constructed. We consider the \(k\)-th step. (The case of \(k=1\) is similar.)
Take an arbitrary \(y\in\mathcal{X}\). Since we have
\[\frac{P_{T}\left(B_{\delta}(y,\mathbf{d}_{\mathbb{R}^{d}}),\mathbf{d},\varphi, \varepsilon\right)}{\log(1/\varepsilon)}\leq a<a+\frac{\beta}{2},\]
there is a positive number \(L_{k}(y)\) larger than \(k_{0}\max_{z\in Y_{k-1}}L_{k-1}(z)\) (we assume \(L_{1}(y)>1\) in the case of \(k=1\)) satisfying
\[\frac{1}{L_{k}(y)^{d}}\log\#\left(B_{\delta}(y,\mathbf{d}_{\mathbb{R}^{d}}), \mathbf{d}_{L_{k}(y)},\varphi_{L_{k}(y)},\varepsilon\right)<\left(a+\frac{ \beta}{2}\right)\log(1/\varepsilon).\]
Then there is an open set \(U_{k}(y)\supset B_{\delta}(y,\mathbf{d}_{\mathbb{R}^{d}})\) such that
\[\frac{1}{L_{k}(y)^{d}}\log\#\left(U_{k}(y),\mathbf{d}_{L_{k}(y)},\varphi_{L_{ k}(y)},\varepsilon\right)<\left(a+\frac{\beta}{2}\right)\log(1/\varepsilon).\]
Namely we have
\[\#\left(U_{k}(y),\mathbf{d}_{L_{k}(y)},\varphi_{L_{k}(y)},\varepsilon\right)< \left(\frac{1}{\varepsilon}\right)^{(a+\frac{\beta}{2})L_{k}(y)^{d}}.\]
Since \(B_{\delta}(y,\mathbf{d}_{\mathbb{R}^{d}})=\bigcap_{M=1}^{\infty}B_{\delta}(y, \mathbf{d}_{[-M,M]^{d}})\subset U_{k}(y)\), there is a positive number \(M_{k}(y)\) with \(B_{\delta}(y,\mathbf{d}_{[-M_{k}(y),M_{k}(y)]^{d}})\subset U_{k}(y)\). There is an open neighborhood \(V_{k}(y)\) of \(y\) such that for every \(v\in V_{k}(y)\) we have \(B_{\delta}(v,\mathbf{d}_{[-M_{k}(y),M_{k}(y)]^{d}})\subset U_{k}(y)\). Since \(\mathcal{X}\) is compact we can find a finite set \(Y_{k}\subset\mathcal{X}\) satisfying \(\mathcal{X}=\bigcup_{y\in Y_{k}}V_{k}(y)\). The construction of the \(k\)-th step has been finished.
Take \(D>1\) with \(D>\max\{M_{k}(y)\mid 1\leq k\leq k_{0},y\in Y_{k}\}\). Let \(L\) be a sufficiently large number so that the cube \(A:=[0,L]^{d}\) satisfies \(\mathbf{m}\left(\partial(A,\max_{y\in Y_{k_{0}}}L_{k_{0}}(y))\right)<\frac{ \eta}{3}L^{d}\). We will show that \(\sup_{x\in\mathcal{X}}\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{ d}_{L},\varphi_{L},\varepsilon\right)\leq\left(\frac{1}{\varepsilon}\right)^{(a+ \beta)L^{d}}\).
Take an arbitrary point \(x\in\mathcal{X}\). For each \(1\leq k\leq k_{0}\) and \(t\in A\cap\mathbb{Z}^{d}\) we pick \(y\in Y_{k}\) with \(T^{t}x\in V_{k}(y)\). Set \(\Lambda_{k,t}=t+[0,L_{k}(y)]^{d}\). It follows from the choice of \(D\) that
\[T^{t}\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}})\right)\subset B_{\delta}(T ^{t}x,\mathbf{d}_{[-M_{k}(y),M_{k}(y)]^{d}})\subset U_{k}(y).\]
Hence
\[\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{\Lambda_{k,t}}, \varphi_{\Lambda_{k,t}},\varepsilon\right)\leq\#\left(U_{k}(y),\mathbf{d}_{L_{ k}(y)},\varphi_{L_{k}(y)},\varepsilon\right)<\left(\frac{1}{\varepsilon}\right)^{(a+ \frac{\beta}{2})L_{k}(y)^{d}}.\]
Set \(\mathcal{C}_{k}=\{\Lambda_{k,t}\mid t\in A\cap\mathbb{Z}^{d}\}\). This is a finite family of cubes covering \(A=[0,L]^{d}\). Notice that \(\mathcal{C}_{k}\) depends on the choice of \(x\); we suppress its dependence on \(x\) in the notation for simplicity.
By Lemma 7.2 there is a disjoint subfamily \(\mathcal{C}^{\prime}\subset\mathcal{C}_{1}\cup\cdots\cup\mathcal{C}_{k_{0}}\) such that
\[\bigcup_{\Lambda\in\mathcal{C}^{\prime}}\Lambda\subset A,\quad\mathbf{m}\left(B _{1}\left(A\setminus\bigcup_{\Lambda\in\mathcal{C}^{\prime}}\Lambda\right) \right)<\eta\,\mathbf{m}(A).\]
Set
\[A^{\prime}=A\setminus\bigcup_{\Lambda\in\mathcal{C}^{\prime}}\Lambda.\]
For every \(\Lambda\in\mathcal{C}^{\prime}\) we have
\[\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{\Lambda},\varphi_{ \Lambda},\varepsilon\right)<\left(\frac{1}{\varepsilon}\right)^{(a+\frac{ \beta}{2})\mathbf{m}(\Lambda)}.\]
By Lemma 7.4
\[\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{A^{ \prime}},\varphi_{A^{\prime}},\varepsilon\right) \leq\#\left(\mathcal{X},\mathbf{d}_{A^{\prime}},\varphi_{A^{\prime }},\varepsilon\right)\] \[\leq\left\{\#\left(\mathcal{X},\mathbf{d}_{[0,1]^{d}},\varphi_{[0,1]^{d}},\varepsilon\right)\right\}^{\mathbf{m}(B_{1}(A^{\prime}))}\] \[<\left\{\#\left(\mathcal{X},\mathbf{d}_{[0,1]^{d}},\varphi_{[0,1]^{d}},\varepsilon\right)\right\}^{\eta\,\mathbf{m}(A)}\] \[<\left(\frac{1}{\varepsilon}\right)^{\frac{\beta}{2}\mathbf{m}(A )}.\]
In the last inequality we have used (7.1). From Lemma 7.3
\[\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{A}, \varphi_{A},\varepsilon\right)\] \[\leq\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{A ^{\prime}},\varphi_{A^{\prime}},\varepsilon\right)\prod_{\Lambda\in\mathcal{C }^{\prime}}\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{\Lambda },\varphi_{\Lambda},\varepsilon\right)\] \[<\left(\frac{1}{\varepsilon}\right)^{(a+\beta)\mathbf{m}(A)}.\]
This holds for every point \(x\in\mathcal{X}\). Thus we have proved the claim of the proposition.
### Proof of Theorem 7.1
Here we prove Theorem 7.1. Let \(T\colon\mathbb{R}^{d}\times\mathcal{X}\to\mathcal{X}\) be a continuous action of \(\mathbb{R}^{d}\) on a compact metrizable space \(\mathcal{X}\). Let \(\mathbf{d}\) be a metric on \(\mathcal{X}\) and \(\varphi\colon\mathcal{X}\to\mathbb{R}\) a continuous function. We do not assume that \(\varphi\) is nonnegative.
The next proposition looks the same as Proposition 7.5. The point is that we do not assume the nonnegativity of \(\varphi\) here whereas we assumed it in Proposition 7.5.
**Proposition 7.6**.: _Let \(\delta,\beta,\varepsilon\) be positive numbers with \(0<\varepsilon<1\). Set_
\[a=\sup_{x\in\mathcal{X}}\frac{P_{T}\left(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{ d}}),\mathbf{d},\varphi,\varepsilon\right)}{\log(1/\varepsilon)}.\]
_Then for all sufficiently large \(L\) we have_
\[\sup_{x\in\mathcal{X}}\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{ d}_{L},\varphi_{L},\varepsilon\right)\leq\left(\frac{1}{\varepsilon}\right)^{(a+ \beta)L^{d}}.\]
_Here \(D=D(\delta,\beta,\varepsilon)\) is the positive constant9 introduced in Proposition 7.5._
Footnote 9: Strictly speaking, the constant \(D\) depends on not only \(\delta,\beta,\varepsilon\) but also \((\mathcal{X},T,\mathbf{d},\psi)\) where \(\psi:=\varphi-\min_{X}\varphi\).
Proof.: Set \(c=\min_{x\in\mathcal{X}}\varphi(x)\) and \(\psi(x)=\varphi(x)-c\). We have \(\psi(x)\geq 0\). For any positive number \(L\) we have
\[\psi_{L}(x)=\varphi_{L}(x)-cL^{d}.\]
For any subset \(E\subset\mathcal{X}\)
\[\#\left(E,\mathbf{d}_{L},\psi_{L},\varepsilon\right)=(1/\varepsilon)^{-cL^{d} }\#\left(E,\mathbf{d}_{L},\varphi_{L},\varepsilon\right).\]
Hence
\[P_{T}(E,\mathbf{d},\psi,\varepsilon)=P_{T}(E,\mathbf{d},\varphi, \varepsilon)-c\log(1/\varepsilon),\] \[\sup_{x\in\mathcal{X}}\frac{P_{T}\left(B_{\delta}(x,\mathbf{d}_{ \mathbb{R}^{d}}),\mathbf{d},\psi,\varepsilon\right)}{\log(1/\varepsilon)}=a-c.\]
Since \(\psi\) is a nonnegative function, we apply Proposition 7.5 and get
\[\sup_{x\in\mathcal{X}}\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}), \mathbf{d}_{L},\psi_{L},\varepsilon\right)\leq\left(\frac{1}{\varepsilon} \right)^{(a-c+\beta)L^{d}}\]
for sufficiently large \(L\). We have
\[\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{L},\psi_{L}, \varepsilon\right)=\left(\frac{1}{\varepsilon}\right)^{-cL^{d}}\#\left(B_{ \delta}(x,\mathbf{d}_{[-D,L+D]^{d}}),\mathbf{d}_{L},\varphi_{L},\varepsilon \right).\]
Therefore
\[\sup_{x\in\mathcal{X}}\#\left(B_{\delta}(x,\mathbf{d}_{[-D,L+D]^{d}}), \mathbf{d}_{L},\varphi_{L},\varepsilon\right)\leq\left(\frac{1}{\varepsilon} \right)^{(a+\beta)L^{d}}.\]
Now we prove Theorem 7.1. We write the statement again.
**Theorem 7.7** (= Theorem 7.1).: _For any positive number \(\delta\)_
\[\begin{split}&\overline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X},T, \mathbf{d},\varphi)=\limsup_{\varepsilon\to 0}\frac{\sup_{x\in\mathcal{X}}P_{T} \left(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{d}}),\mathbf{d},\varphi, \varepsilon\right)}{\log(1/\varepsilon)},\\ &\underline{\mathrm{mdim}}_{\mathrm{M}}(\mathcal{X},T,\mathbf{d},\varphi)=\liminf_{\varepsilon\to 0}\frac{\sup_{x\in\mathcal{X}}P_{T} \left(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{d}}),\mathbf{d},\varphi, \varepsilon\right)}{\log(1/\varepsilon)}.\end{split} \tag{7.2}\]
Proof.: It is obvious that the left-hand sides of (7.2) are greater than or equal to the right-hand sides. So it is enough to prove the reverse inequalities. Let \(\beta\) and \(\varepsilon\) be arbitrary positive numbers with \(0<\varepsilon<1\). Let \(D=D(\delta,\beta,\varepsilon)\) be the positive constant introduced in Proposition 7.5. Set
\[a=\sup_{x\in\mathcal{X}}\frac{P_{T}\left(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{ d}}),\mathbf{d},\varphi,\varepsilon\right)}{\log(1/\varepsilon)}.\]
For any positive number \(L\) we can take points \(x_{1},\ldots,x_{M}\in\mathcal{X}\) such that
\[\mathcal{X}=\bigcup_{m=1}^{M}B_{\delta}(x_{m},\mathbf{d}_{[-D,L+D]^{d}}),\]
\[M\leq\#\left(\mathcal{X},\mathbf{d}_{[-D,L+D]^{d}},\delta\right)=\#\left( \mathcal{X},\mathbf{d}_{[0,L+2D]^{d}},\delta\right).\]
Then we have
\[\#\left(\mathcal{X},\mathbf{d}_{L},\varphi_{L},\varepsilon\right) \leq\sum_{m=1}^{M}\#\left(B_{\delta}(x_{m},\mathbf{d}_{[-D,L+D]^{d }}),\mathbf{d}_{L},\varphi_{L},\varepsilon\right)\] \[\leq M\sup_{x\in\mathcal{X}}\#\left(B_{\delta}(x,\mathbf{d}_{[-D, L+D]^{d}}),\mathbf{d}_{L},\varphi_{L},\varepsilon\right)\] \[\leq M\left(\frac{1}{\varepsilon}\right)^{(a+\beta)L^{d}}.\]
The last inequality holds for all sufficiently large \(L\) by Proposition 7.6. Therefore
\[\log\#\left(\mathcal{X},\mathbf{d}_{L},\varphi_{L},\varepsilon\right) \leq\log\#\left(\mathcal{X},\mathbf{d}_{L+2D},\delta\right)+(a+ \beta)L^{d}\log(1/\varepsilon).\]
Dividing this by \(L^{d}\) and letting \(L\to\infty\), we have
\[P_{T}(\mathcal{X},\mathbf{d},\varphi,\varepsilon) \leq\lim_{L\to\infty}\frac{\log\#\left(\mathcal{X},\mathbf{d}_{L },\delta\right)}{L^{d}}+(a+\beta)\log(1/\varepsilon)\] \[\leq\log\#\left(\mathcal{X},\mathbf{d}_{1},\delta\right)+(a+ \beta)\log(1/\varepsilon).\]
We can let \(\beta\to 0\) and get
\[P_{T}(\mathcal{X},\mathbf{d},\varphi,\varepsilon) \leq\log\#\left(\mathcal{X},\mathbf{d}_{1},\delta\right)+a\log(1/\varepsilon)\] \[=\log\#\left(\mathcal{X},\mathbf{d}_{1},\delta\right)+\sup_{x\in \mathcal{X}}P_{T}\left(B_{\delta}(x,\mathbf{d}_{\mathbb{R}^{d}}),\mathbf{d}, \varphi,\varepsilon\right).\]
We divide this by \(\log(1/\varepsilon)\) and let \(\varepsilon\to 0\). Then we conclude that the left-hand sides of (7.2) are less than or equal to the right-hand sides.
|
2307.11942 | DeepMartNet -- A Martingale based Deep Neural Network Learning Algorithm
for Eigenvalue/BVP Problems and Optimal Stochastic Controls | In this paper, we propose a neural network learning algorithm for solving
eigenvalue problems and boundary value problems (BVPs) for elliptic operators
and initial BVPs (IBVPs) of quasi-linear parabolic equations in high dimensions
as well as optimal stochastic controls. The method is based on the Martingale
property in the stochastic representation for the eigenvalue/BVP/IBVP problems
and martingale principle for optimal stochastic controls. A loss function based
on the Martingale property can be used for efficient optimization by sampling
the stochastic processes associated with the elliptic operators or value
process for stochastic controls. The proposed algorithm can be used for
eigenvalue problems and BVPs and IBVPs with Dirichlet, Neumann, and Robin
boundaries in bounded or unbounded domains and some feedback stochastic control
problems. | Wei Cai | 2023-07-21T23:51:52Z | http://arxiv.org/abs/2307.11942v3 | DeepMartNet - A Martingale based Deep Neural Network Learning Algorithm for Eigenvalue/BVP Problems and Optimal Stochastic Controls
###### Abstract
In this paper, we propose a neural network learning algorithm for solving eigenvalue problems and boundary value problems (BVPs) for elliptic operators and initial BVPs (IBVPs) of quasi-linear parabolic equations in high dimensions as well as optimal stochastic controls. The method is based on the Martingale property in the stochastic representation for the eigenvalue/BVP/IBVP problems and martingale principle for optimal stochastic controls. A loss function based on the Martingale property can be used for efficient optimization by sampling the stochastic processes associated with the elliptic operators or value process for stochastic controls. The proposed algorithm can be used for eigenvalue problems and BVPs and IBVPs with Dirichlet, Neumann, and Robin boundaries in bounded or unbounded domains and some feedback stochastic control problems.
**AMS subject classifications**: 35Q68, 65N99, 68T07, 76M99
## 1 Introduction
Computing eigenvalues and/or eigenfunctions of elliptic operators, solving boundary value problems of PDEs, and optimal stochastic control are among the key tasks for many scientific computing problems, e.g., ground state and band structure calculations in quantum systems, and financial engineering. Neural networks have recently been explored for these tasks. FermiNet [1] is one of the leading methods, using anti-symmetrized neural network wavefunctions in variational Monte Carlo calculations of eigenvalues. Recently, Han et al. [2] developed a diffusion Monte Carlo method using the connection between stochastic processes and solutions of elliptic equations and the backward Kolmogorov equation to build a loss function for eigenvalue calculations. Based on the same connection, DeepBSDE was designed to solve high dimensional quasi-linear PDEs [10] and has also been used for stochastic controls [9].
In this paper, we use the Martingale problem formulation for the eigenvalue problems and stochastic controls, and build a loss function from the fact that the expectation of a Martingale is constant in time; this property can thus be enforced at any set of time locations where the expectation is approximated by sampling the stochastic processes associated with the elliptic operators, or the value processes for stochastic controls.
## 2 DeepMartNet - a Martingale based neural network
First, we will propose a neural network for computing the eigenvalue and eigenfunction of an elliptic operator in high dimensions, as arising from quantum mechanics. It will be apparent that the approach can be applied to solve boundary value problems of elliptic PDEs and initial boundary value problems of quasilinear parabolic PDEs.
Consider the following eigenvalue problem
\[\mathcal{L}u+V(\mathbf{x})u =\lambda u,\ \ \mathbf{x}\in D\subset R^{d}, \tag{2.1}\] \[\Gamma(u) =0,\ \ \mathbf{x}\in\partial D,\]
where the boundary operator could be one of the following three cases,
\[\Gamma(u)=\left\{\begin{array}{cc}u&\text{Dirichlet}\\ \frac{\partial u}{\partial n}&\text{Neumann}\\ \frac{\partial u}{\partial n}-cu&\text{Robin}\end{array}\right.,\]
a decay condition will be given at \(\infty\) if \(D=R^{d}\), and the differential operator \(\mathcal{L}\) is given as
\[\mathcal{L}=\mu^{\top}\nabla+\frac{1}{2}Tr(\sigma\sigma^{\top}\nabla\nabla^{ \top}) \tag{2.2}\]
where the vector \(\mu\in R^{d}\) and matrix \(\sigma_{d\times d}\) can be associated with the drift and diffusion of the following stochastic Ito process \(X_{t}(\omega)\in R^{d}\), \(\omega\in\Omega\) (the random sample space), with \(\mathcal{L}\) as its generator
\[d\mathbf{X}_{t} =\mu dt+\sigma\cdot d\mathbf{B}_{t} \tag{2.3}\] \[\mathbf{X}_{0} =\mathbf{x}_{0}\in D\]
where \(\mathbf{B}_{t}=(B_{t}^{1},\cdots,B_{t}^{d})^{\top}\in R^{d}\) is Brownian motion in \(R^{d}\).
### Dirichlet Eigenvalue Problem
By Ito formula, we have
\[du(\mathbf{X}_{t})=\mathcal{L}u(\mathbf{X}_{t})dt+\sigma^{\top}\nabla u( \mathbf{X}_{t})d\mathbf{B}_{t}, \tag{2.4}\]
i.e.,
\[u(\mathbf{X}_{t}) = u(x_{0})+\int_{0}^{t}\mathcal{L}u(\mathbf{X}_{s})ds+\int_{0}^{t} \sigma^{\top}\nabla u(\mathbf{X}_{s})d\mathbf{B}_{s} \tag{2.5}\] \[= u(x_{0})+\int_{0}^{t}(\lambda-V(\mathbf{X}_{s}))u(\mathbf{X}_{s}) ds+\int_{0}^{t}\sigma(\mathbf{X}_{s})^{\top}\nabla u(\mathbf{X}_{s})d\mathbf{B}_{s}.\]
Since the last Ito integral term in (2.5) is a Martingale [6], the following defines a Martingale with respect to the \(\mathbf{B}_{t}\)-natural filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\),
\[M_{t}=u(\mathbf{X}_{t})-u(\mathbf{x}_{0})-\int_{0}^{t}(\lambda-V(\mathbf{X}_{ s}))u(\mathbf{X}_{s})ds, \tag{2.6}\]
namely, for any \(s<t\),
\[E[M_{t}|\mathcal{F}_{s}]=M_{s}, \tag{2.7}\]
which implies for any measurable set \(A\in\mathcal{F}_{s}\),
\[\int_{A}M_{t}P(d\omega)=\int_{A}E[M_{t}|\mathcal{F}_{s}]P(d\omega)=\int_{A}M_ {s}P(d\omega) \tag{2.8}\]
or
\[\int_{A}(M_{t}-M_{s})\,P(d\omega)=0, \tag{2.9}\]
i.e.,
\[\int_{\Omega}(M_{t}-M_{s})\,I_{A}(\omega)P(d\omega)=0 \tag{2.10}\]
where \(I_{A}(\omega)\) is the indicator function of the set \(A\).
In particular, if we take \(A=\Omega\in\mathcal{F}_{s}\) in (2.10), we have
\[E[M_{t}-M_{s}]=0. \tag{2.11}\]
i.e. the Martingale \(M_{t}\) has a constant expectation.
In the case of a finite domain \(D\), the first exit time \(\tau_{\partial D}\) of the process \(X_{t}\) from \(D\) is a stopping time, and the stopped process \(M_{t\wedge\tau_{\partial D}}\) is still a Martingale [6], thus

\[E[M_{t\wedge\tau_{\partial D}}-M_{s\wedge\tau_{\partial D}}]=0. \tag{2.12}\]
**Remark 2.1**.: We could define a different generator \(\mathcal{L}\) by not including \(\mu^{\top}\nabla\) in (2.2); then the Martingale in (2.6) will be changed to
\[M_{t}^{*}=u(\mathbf{X}_{t})-u(\mathbf{x}_{0})-\int_{0}^{t}(\lambda-\mu^{\top }(\mathbf{X}_{s})\nabla-V(\mathbf{X}_{s}))u(\mathbf{X}_{s})ds, \tag{2.13}\]
where the process \(\mathbf{X}_{t}\) is given instead by \(d\mathbf{X}_{t}=\sigma\cdot d\mathbf{B}_{t}\).
* DeepMartNet for eigenvalue \(\lambda\)
Let \(u_{\theta}(\mathbf{x})\) be a neural network which will approximate the eigenfunction, with \(\theta\) denoting all the weight and bias parameters. For a given time interval \([0,T]\), we define a partition
\[0=t_{0}<t_{1}<\cdots<t_{i}<t_{i+1}<\cdots<t_{N}=T,\]
and \(M\)-discrete realizations
\[\Omega^{\prime}=\{\omega_{m}\}_{m=1}^{M}\subset\Omega \tag{2.14}\]
of the Ito process using Euler-Maruyama scheme with \(M\)-realizations of the Brownian motions \(\mathbf{B}_{i}^{(m)}\), \(0\leq m\leq M\),
\[\mathbf{X}_{i}^{(m)}(\omega_{m})\sim X(t_{i},\omega_{m}),0\leq i\leq N,\]
where
\[\mathbf{X}_{i+1}^{(m)} = \mathbf{X}_{i}^{(m)}+\mu(\mathbf{X}_{i}^{(m)})\Delta t_{i}+ \sigma(\mathbf{X}_{i}^{(m)})\cdot\Delta\mathbf{B}_{i}^{(m)},\] \[\mathbf{X}_{0}^{(m)} = \mathbf{x}_{0}\]
where \(\Delta t_{i}=t_{i+1}-t_{i}\),
\[\Delta\mathbf{B}_{i}^{(m)}=\mathbf{B}_{i+1}^{(m)}-\mathbf{B}_{i}^{(m)}.\]
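For concreteness, the sampling step above can be realized in a few lines; the following NumPy sketch (with illustrative names, assuming \(\mu\) and \(\sigma\) are supplied as functions returning a \(d\)-vector and a \(d\times d\) matrix, respectively) generates the \(M\) discrete realizations \(\mathbf{X}_{i}^{(m)}\).

```python
import numpy as np

def sample_paths(mu, sigma, x0, T, N, M, rng=None):
    """Euler-Maruyama sampling of M paths of dX = mu dt + sigma dB
    on [0, T] with N uniform steps, all started at x0 in R^d.
    Returns an array of shape (M, N + 1, d)."""
    rng = np.random.default_rng() if rng is None else rng
    d, dt = len(x0), T / N
    X = np.empty((M, N + 1, d))
    X[:, 0] = x0
    for i in range(N):
        dB = rng.normal(scale=np.sqrt(dt), size=(M, d))
        for m in range(M):
            X[m, i + 1] = X[m, i] + mu(X[m, i]) * dt + sigma(X[m, i]) @ dB[m]
    return X
```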
We will build the loss function \(l(\theta,\lambda)\) for the eigenfunction neural network \(u_{\theta}(\mathbf{x})\) and the eigenvalue \(\lambda\) using the Martingale property (2.9) and the M-realization of the Ito diffusion (2.3).
For each \(t_{i}\), we randomly take a subset \(A_{i}\subset\Omega^{\prime}\) by uniform sampling (without replacement), corresponding to the mini-batch in computing the stochastic gradient for the empirical training loss. We should then have
\[\int_{A_{i}}\left(M_{t_{i+1}}-M_{t_{i}}\right)P(d\omega)=0, \tag{2.15}\]
which gives an approximate identity for the eigenfunction \(u(\mathbf{X}_{t})\) and eigenvalue \(\lambda\) using the \(A_{i}-\)ensemble average,
\[\frac{1}{|A_{i}|}\sum_{m=1}^{|A_{i}|}\left(u(\mathbf{X}_{i+1}^{(m)})-u( \mathbf{X}_{i}^{(m)})-(\lambda-V(\mathbf{X}_{i}^{(m)}))u(\mathbf{X}_{i}^{(m)} )\Delta t_{i}\right)\doteq 0,\]
with \(|A_{i}|\) being the number of samples in \(A_{i}\) (i.e., the size of the mini-batch), \(\mathbf{X}_{i}^{(m)}=\mathbf{X}_{i}^{(m)}(t_{i},\omega_{m})\), \(\omega_{m}\in A_{i}\), suggesting a loss function, to be used for some epoch(s) of training with a given
selection of \(A_{i}\)'s, in the following form
\[l(\theta,\lambda) = l_{\mathbf{x}_{0}}(\theta,\lambda)=\frac{1}{N}\sum_{i=0}^{N-1} \left(\frac{1}{|A_{i}|}\sum_{m=1}^{|A_{i}|}\left(u_{\theta}(\mathbf{X}_{i+1}^{(m )})-u_{\theta}(\mathbf{X}_{i}^{(m)})-(\lambda-V(\mathbf{X}_{i}^{(m)}))u_{ \theta}(\mathbf{X}_{i}^{(m)})\Delta t_{i}\right)\right)^{2}\] \[+\beta l_{reg}(\theta),\]
where the subscript in \(l_{\mathbf{x}_{0}}\) indicates that all the sampled paths of the stochastic process start from \(\mathbf{x}_{0}\), and a regularization term \(l_{reg}(\theta)\) is added for specific needs, to be discussed later.
The DeepMartNet approximation for the eigenvalue \(\lambda\sim\lambda^{*}\) will be obtained by minimizing the loss function \(l(\theta,\lambda)\) using stochastic gradient descent,
\[(\theta^{*},\lambda^{*})=\arg\min l(\theta,\lambda). \tag{2.17}\]
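A minimal PyTorch sketch of the loss (2.16) (without the regularizer) and of the joint minimization (2.17) might read as follows; here `u_net` is assumed to map a batch of points in \(R^{d}\) to one value per point, `lam` is a trainable scalar playing the role of \(\lambda\), `X` holds pre-sampled paths as above, and `batches` is the list of mini-batch index sets \(A_{i}\). All names are illustrative.

```python
import torch

def martingale_loss(u_net, lam, V, X, dt, batches):
    """Empirical loss (2.16), without the regularizer.
    X: paths tensor of shape (M, N + 1, d); batches[i]: index tensor A_i;
    u_net and V are assumed to return one value per input point."""
    u = lambda z: u_net(z).squeeze(-1)   # flatten (batch, 1) -> (batch,)
    N = X.shape[1] - 1
    loss = 0.0
    for i in range(N):
        Xi, Xip1 = X[batches[i], i], X[batches[i], i + 1]
        incr = u(Xip1) - u(Xi) - (lam - V(Xi)) * u(Xi) * dt
        loss = loss + incr.mean() ** 2   # square of the A_i-ensemble average
    return loss / N

# joint descent over the network weights and the eigenvalue:
# lam = torch.nn.Parameter(torch.tensor(0.0))
# opt = torch.optim.Adam(list(u_net.parameters()) + [lam], lr=1e-3)
# loss = martingale_loss(u_net, lam, V, X, dt, batches)
# opt.zero_grad(); loss.backward(); opt.step()
```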
**Remark 2.2**.: **(Mini-batch in SGD training and Martingale property)** Due to the equivalence between (2.9) and (2.7), the loss function defined above ensures that \(M_{t}\) of (2.6) for \(u_{\theta}(\mathbf{x})\) will approximately be a Martingale if the mini-batch \(A_{i}\) explores all subsets of the sample space \(\Omega^{\prime}\) during the SGD optimization process of the training, the sample size \(M=|\Omega^{\prime}|\to\infty\), the time step \(\max|\Delta t_{i}|\to 0\), and the training converges.
Also, if we take \(A_{i}=\Omega^{\prime}\) for all \(i\), there will be no stochasticity in the gradient calculation for the loss function; we will have a traditional full gradient descent method, and the full Martingale property for \(u_{\theta}(\mathbf{x})\) is then not enforced either. Therefore, the mini-batch practice in DNN SGD optimization corresponds perfectly with the Martingale definition (2.7).
**Remark 2.3**.: (Regularizer \(l_{reg}(\theta)\)). Due to the non-uniqueness of the eigenvalues, we will need to introduce a constraint if we intend to compute the lowest eigenvalue (the ground state for quantum systems). The Rayleigh energy can be used for this purpose for zero drift and a constant diffusion coefficient,
\[l_{reg}(\theta)=\int_{D}\left(\nabla^{\top}u_{\theta}\frac{\sigma\sigma ^{\top}}{2}\nabla u_{\theta}+Vu_{\theta}^{2}\right)d\mathbf{x}+\gamma\left(\int_{D }u_{\theta}^{2}d\mathbf{x}-1\right)^{2}, \tag{2.18}\]
where the normalization constraint \(\int_{D}u_{\theta}^{2}d\mathbf{x}=1\) for the eigenfunction is also included, and the Rayleigh energy integral can be evaluated on a separate and coarse grid.
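For instance, the regularizer (2.18) could be estimated by Monte Carlo on such a coarse set of sample points, with \(\nabla u_{\theta}\) obtained by automatic differentiation; the sketch below assumes a constant matrix \(\sigma\) and points drawn uniformly from a domain of volume `vol`, and all names are illustrative.

```python
import torch

def rayleigh_reg(u_net, V, x, sigma, vol, gamma):
    """Monte Carlo estimate of (2.18); x: sample points of shape (n, d)
    drawn uniformly from the domain, sigma: constant (d, d) matrix."""
    x = x.clone().requires_grad_(True)
    u = u_net(x).squeeze(-1)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    quad = 0.5 * (grad_u @ sigma).pow(2).sum(-1)   # (1/2) |sigma^T grad u|^2
    energy = vol * (quad + V(x) * u ** 2).mean()
    norm = vol * (u ** 2).mean()
    return energy + gamma * (norm - 1.0) ** 2
```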
* DeepMartNet for eigenvalue \(\lambda\) and eigenfunction \(u\)
As the loss function in (2.16) only involves paths \(\mathbf{X}_{t}\) starting from a fixed point \(\mathbf{x}_{0}\), it may not be able to explore all the state space of the process, therefore the minimization problem in (2.17) is expected only to produce a good approximation for the eigenvalue.
To achieve a good approximation to the eigenfunction as well, we will need to sample the paths of the process \(\mathbf{X}_{t}\) from \(K\) initial points \(\mathbf{x}_{0}^{(k)}\), \(1\leq k\leq K\), and define a global loss function
\[R(\theta,\lambda)=\frac{1}{K}\sum_{k=1}^{K}l_{x_{0}^{(k)}}(\theta,\lambda), \tag{2.19}\]
whose minimizer \((\theta^{*},\lambda^{*})\) is expected to approximate both the eigenfunction and eigenvalue
\[u(x)\sim u_{\theta^{*}},\qquad\lambda\sim\lambda^{*},\]
where
\[(\theta^{*},\lambda^{*})=\arg\min R(\theta,\lambda). \tag{2.20}\]
### Neumann and Robin Eigenvalue Problem
We will illustrate the idea on the Robin eigenvalue problem for the simple case of the Laplacian operator,
\[\mathcal{L}=\frac{1}{2}\Delta\]
In probabilistic solutions for Neumann and Robin BVPs, a reflecting Brownian motion is needed, which undergoes specular reflections upon hitting the domain boundary, together with a measure of such reflections, the local time of the RBM. We will introduce the boundary local time \(L(t)\) for reflecting Brownian motion through a Skorohod problem.
**(Skorohod problem):** Assume \(D\) is a bounded domain in \(R^{d}\) with a \(C^{2}\) boundary. Let \(f(t)\) be a (continuous) path in \(R^{d}\) with \(f(0)\in\bar{D}\). A pair \((\xi(t),L(t))\) is a solution to the Skorohod problem \(S(f;D)\) if the following conditions are satisfied:
1. \(\xi\) is a path in \(\bar{D}\);
2. \(L(t)\) is a nondecreasing function which increases only when \(\xi\in\partial D\), namely, \[L(t)=\int_{0}^{t}I_{\partial D}(\xi(s))L(ds),\] (2.21)
3. The Skorohod equation holds: \[S(f;D):\qquad\xi(t)=f(t)-\int_{0}^{t}n(\xi(s))L(ds),\] (2.22) where \(n(x)\) stands for the outward unit normal vector at \(x\in\partial D\).
In our case \(f(t)\!=\!B_{t}\), and the corresponding \(\xi(t)\) will be the reflecting Brownian motion (RBM) \(\mathbf{X}_{t}\). As the name suggests, an RBM behaves like a BM as long as its path remains inside the domain \(D\), but it will be reflected back inwardly along the normal direction of the boundary when the path attempts to pass through the boundary. The fact that \(\mathbf{X}_{t}\) is a diffusion process can be proven by using a martingale formulation and showing that \(\mathbf{X}_{t}\) is the solution to the corresponding martingale problem with the Neumann boundary condition [3][4].
Since the RBM \(\mathbf{X}_{t}\) is a semimartingale [3][4], the Ito formula [6] gives the following
\[u(\mathbf{X}_{t})\!=\!u(x_{0})-\int_{0}^{t}\!cu(\mathbf{X}_{s})dL(s)-\int_{0} ^{t}\!(V(\mathbf{X}_{s})-\lambda)\,u(\mathbf{X}_{s})ds+\int_{0}^{t}\!\nabla u( \mathbf{X}_{s})\!\cdot\!d\mathbf{B}_{s}, \tag{2.23}\]
where an additional path integral term involving the local time \(L(s)\) is added compared with (2.5). Again, the last term above being a Martingale, we can define the following Martingale
\[M_{t}\!=\!u(\mathbf{X}_{t})-u(x_{0})+\int_{0}^{t}\!cu(\mathbf{X}_{s})dL(s)+ \int_{0}^{t}\!(V(\mathbf{X}_{s})-\lambda)\,u(\mathbf{X}_{s})ds. \tag{2.24}\]
Using this Martingale, the DeepMartNet construction for the Dirichlet eigenvalue problem can be carried out similarly for the Neumann and Robin eigenvalue problems. The sampling of reflecting Brownian motion and the computation of the local time \(L(t)\) can be found in [5].
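As a simple illustration of how such paths could be generated, the sketch below discretizes the Skorohod equation (2.22) for the unit ball with the elementary projection scheme: when a Brownian increment leaves the domain, the point is pushed back to the boundary along the (radial) outward normal, and the boundary local time is incremented by the length of the pushback. This is only a toy scheme with illustrative names; the schemes in [5] should be consulted for accuracy.

```python
import numpy as np

def sample_rbm_unit_ball(x0, T, N, rng=None):
    """Reflecting Brownian motion in the unit ball of R^d together with
    its boundary local time, via a projection discretization of the
    Skorohod problem.  Returns X of shape (N + 1, d), L of shape (N + 1,)."""
    rng = np.random.default_rng() if rng is None else rng
    d, dt = len(x0), T / N
    X = np.empty((N + 1, d))
    L = np.zeros(N + 1)
    X[0] = x0
    for i in range(N):
        y = X[i] + rng.normal(scale=np.sqrt(dt), size=d)
        r = np.linalg.norm(y)
        if r > 1.0:                        # the step exited the ball:
            y = y / r                      # push back to the boundary ...
            L[i + 1] = L[i] + (r - 1.0)    # ... and record the pushback in L
        else:
            L[i + 1] = L[i]
        X[i + 1] = y
    return X, L
```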
## 3 Optimal Stochastic control
### Martingale Optimality Principle
In this section, we will apply the above DeepMartNet to solving optimal control problems for stochastic differential equations with a finite time horizon \(T\).
Let us consider the following SDE,
\[d\mathbf{X}_{t}\!=\!\mu(t,\mathbf{X}_{t},u_{t})dt\!+\!\sigma(t,\mathbf{X}_{t} )\!\cdot\!d\mathbf{B}_{t},\ \ 0\!\leq\!t\!\leq\!T \tag{3.1}\]
where the control \(u_{t}\in\mathcal{U}\), with \(\mathcal{U}\) the control space consisting of \(\{\mathcal{F}_{t}\}_{t\geq 0}\)-predictable processes taking values in \(U\!\subset\!R^{m}\), and \(\{\mathcal{F}_{t}\}_{t\geq 0}\) the natural filtration generated by \(\mathbf{B}_{t}\).
The running cost of the control problem is a function
\[c\!:\!\Omega\times[0,T]\times U\!\to\!R, \tag{3.2}\]
and for a feedback control, the dependence of \(c\) on \(\omega\!\in\!\Omega\) will be through the state of the system \(\mathbf{X}_{t}(\omega)\), i.e.
\[c(\omega,t,u)\!=\!c(\mathbf{X}_{t}(\omega),t,u), \tag{3.3}\]
and the terminal cost is defined by a \(\mathcal{F}_{T}\)-measurable random variable
\[\xi(\omega)=\xi(\mathbf{X}_{T}(\omega)) \tag{3.4}\]
where an explicit dependence on \(\mathbf{X}_{T}\) is assumed.
For a given control \(u\), the total expected cost is then defined by
\[J(u)=E_{u}\left[\xi+\int_{[0,T]}c(\mathbf{X}_{t}(\omega),t,u_{t})dt\right] \tag{3.5}\]
where the expectation \(E_{u}\) is taken with respect to the measure \(P^{u}\).
The optimal control problem is to find a control \(u^{*}\) such that
\[u^{*}=\arg\inf_{u\in\mathcal{U}}J(u). \tag{3.6}\]
To present the Martingale principle for the optimal control, we define the expected remaining cost for a given control \(u\)
\[J(\omega,t,u)=E_{u}\left[\xi(\mathbf{X}_{T}(\omega))+\int_{[t,T]}c(\mathbf{X} _{s}(\omega),s,u_{s})ds|\mathcal{F}_{t}\right] \tag{3.7}\]
and a value process
\[V_{t}(\omega)=\inf_{u\in\mathcal{U}}J(\omega,t,u),\text{ and }E[V_{0}]=\inf_{u \in\mathcal{U}}J(u), \tag{3.8}\]
and a cost process
\[M_{t}^{u}(\omega)=\int_{[0,t]}c(\mathbf{X}_{s}(\omega),s,u_{s})ds+V_{t}( \omega). \tag{3.9}\]
The Martingale optimality principle is stated in the following theorem [8].
**Theorem 3.1**.: _(Martingale optimality principle) \(M_{t}^{u}\) is a \(P^{u}\)-submartingale. \(M_{t}^{u}\) is a \(P^{u}\)-martingale if and only if the control \(u=u^{*}\) (the optimal control), and_
\[E[V_{0}]=E_{u}[M_{0}^{u^{*}}]=\inf_{u\in\mathcal{U}}J(u).\]
Moreover, the value process \(V_{t}(\omega)\) satisfies the following backward SDE (BSDE)
\[\left\{\begin{array}{c}dV_{t}=-H(t,\mathbf{X}_{t},\mathbf{Z}_{t})dt+\mathbf{ Z}_{t}dB_{t},0\leq t<T\\ V_{T}(\omega)=\xi(\mathbf{X}_{T}(\omega))\end{array}\right., \tag{3.10}\]
where the Hamiltonian
\[H(t,\mathbf{x},\mathbf{z})=\inf_{u\in\mathcal{U}}f(t,\mathbf{x},\mathbf{z};u)\]
and
\[f(t,\mathbf{x},\mathbf{z};u) = c(\mathbf{x},t,u)+\mathbf{z}\alpha(t,\mathbf{x},u),\] \[\alpha(t,\mathbf{x},u) = \sigma^{-1}(t,\mathbf{x})\mu(t,\mathbf{x},u).\]
From the Pardoux-Peng theory [7] on the relation between quasi-linear parabolic equations and backward SDEs, we know that the value process, as well as \(Z_{t}(\omega)\), can be expressed in terms of a deterministic function \(v(t,x)\)
\[V_{t}(\omega) = v(t,\mathbf{X}_{t}(\omega))\] \[Z_{t}(\omega) = \nabla v(t,\mathbf{X}_{t}(\omega))\sigma(t,\mathbf{X}_{t}(\omega))\]
where the value function \(v(t,\mathbf{x})\) satisfies the following Hamilton-Jacobi-Bellman (HJB) equation
\[\left\{\begin{array}{cc}0\!=\!\frac{\partial v}{\partial t}(t,\mathbf{x})\! +\!\mathcal{L}v(t,\!\mathbf{x})\!+\!H(t,\!\mathbf{x},\nabla_{x}v\sigma(t, \mathbf{x})),&0\!\leq\!t\!<\!T,\mathbf{x}\!\in\!R^{d}\\ v(T,\mathbf{x})\!=\!\xi(\mathbf{x})\end{array}\right.. \tag{3.11}\]
### DeepMartNet for optimal control \(u^{*}\) and value function \(v(t,\mathbf{x})\)
Based on the martingale principle theorem for the optimal feedback control, we can extend DeepMartNet to approximate the optimal control by a neural network
\[u_{t}(\omega)\!=\!u_{t}(\mathbf{X}(\omega))\!\sim\!u_{\theta_{1}}(t,\! \mathbf{X}(\omega)), \tag{3.12}\]
where \(u_{\theta_{1}}(t,\mathbf{x})\!\in\!C([0,T]\times R^{d})\) will be a neural network approximation for a \(d\!+\!1\) dimensional function with network parameters \(\theta_{1}\), and the value function by another network
\[v(t,\mathbf{x})\!\sim\!v_{\theta_{2}}(t,\!\mathbf{x}). \tag{3.13}\]
The loss function will consist of two parts, one for the control network and one for the value network
\[l(\theta_{1},\theta_{2})\!=\!l_{ctr}(\theta_{1})\!+\!l_{val}(\theta_{2})\]
where, similar to (2.16),
\[l_{ctr}(\theta_{1}) = l_{ctr,\mathbf{x}_{0}}(\theta_{1}) \tag{3.14}\] \[= \frac{1}{N}\sum_{i=0}^{N-1}\frac{1}{|A_{i}|}\sum_{m=1}^{|A_{i}|} \left(c(\mathbf{X}_{i}^{(m)},t_{i},u_{\theta_{1}}(t_{i},\mathbf{X}_{i}^{(m)}))\Delta t_{ i}\!+\!v_{\theta_{2}}(t_{i+1},\mathbf{X}_{i+1}^{(m)})\!-\!v_{\theta_{2}}(t_{i}, \mathbf{X}_{i}^{(m)})\right)^{2}\]
and, by using the Ito formula for \(v_{\theta_{2}}(t,\!\mathbf{x})\), we can obtain a similar Martingale form for the HJB equation (3.11) and define a similar loss function for the value function \(v(t,\!\mathbf{x})\) as in (2.16)
\[\begin{split}& l_{val}(\theta_{2})\!=\!l_{val,\mathbf{x}_{0}}( \theta_{2})\\ =&\frac{1}{N}\sum_{i=0}^{N-1}\left(\frac{1}{|A_{i}|} \sum_{m=1}^{|A_{i}|}\left(\begin{array}{c}v_{\theta_{2}}(t_{i+1},\mathbf{X} _{i+1}^{(m)})\!-\!v_{\theta_{2}}(t_{i},\mathbf{X}_{i}^{(m)})\!+\\ H(t_{i},\mathbf{X}_{i}^{(m)},\nabla_{x}v_{\theta_{2}}(t_{i},\mathbf{X}_{i}^{(m)}) \sigma(t,\mathbf{X}_{i}^{(m)}))\Delta t_{i}\end{array}\right)\right)^{2}\\ +&\beta\frac{1}{M}\sum_{m=1}^{M}(v_{\theta_{2}}(T,\mathbf{X}_{N}^{(m)})\!-\! \xi(\mathbf{X}_{N}^{(m)}))^{2}.\end{split} \tag{3.15}\]
Again, for better global accuracy of the control and value networks, we can define a global loss function with more sampling of the starting points \(\mathbf{x}_{0}^{(k)}\), \(1\leq k\leq K\),
\[R(\theta_{1},\theta_{2})=\frac{1}{K}\sum_{k=1}^{K}\Big{(}l_{ctr,\mathbf{x}_{0}^{ (k)}}(\theta_{1})+l_{val,\mathbf{x}_{0}^{(k)}}(\theta_{2})\Big{)}. \tag{3.16}\]
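As an illustration, the control part (3.14) of the loss could be assembled along the following lines; `u_net` and `v_net` are the control and value networks (assumed here to act on a concatenated \((t,\mathbf{x})\) input and return one value per row), `c` is the running cost, and `batches` holds the mini-batch index sets \(A_{i}\). The value loss (3.15) is assembled analogously from the Hamiltonian term and the terminal penalty; all names are illustrative.

```python
import torch

def with_time(ti, x):
    """Attach a constant time coordinate to a batch of states."""
    t_col = torch.full((x.shape[0], 1), float(ti),
                       dtype=x.dtype, device=x.device)
    return torch.cat([t_col, x], dim=1)

def control_loss(u_net, v_net, c, X, t, batches):
    """Empirical control loss (3.14).  X: sampled paths (M, N + 1, d);
    t: time grid of length N + 1; batches[i]: index tensor A_i."""
    N = X.shape[1] - 1
    loss = 0.0
    for i in range(N):
        dt = float(t[i + 1] - t[i])
        Xi, Xip1 = X[batches[i], i], X[batches[i], i + 1]
        u_i = u_net(with_time(t[i], Xi))
        incr = (c(Xi, t[i], u_i) * dt
                + v_net(with_time(t[i + 1], Xip1)).squeeze(-1)
                - v_net(with_time(t[i], Xi)).squeeze(-1))
        loss = loss + (incr ** 2).mean()
    return loss / N
```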
The above approach requires an accurate result for the value function \(v(t_{i},\mathbf{X}_{i}^{(m)})\) in the region explored by the process \(\mathbf{X}_{i}^{(m)}\); this could pose a challenge to the DeepMartNet. An alternative approach is to use the FBSDE-based learning algorithm in [11], which has been shown to be able to meet this requirement.
## 4 Conclusion
In this paper, we introduce a Martingale based neural network for finding the eigenvalues and eigenfunctions of general elliptic operators under general types of boundary conditions, for solving BVPs and IBVPs of PDEs, as well as for optimal stochastic controls. Future numerical experiments will be carried out to evaluate the efficiency and accuracy of the proposed algorithm, especially in high dimensions.
|
2305.18879 | Dust and inclination corrected star-formation and interstellar medium
scaling relations in nearby galaxies | Following from our recent work, we present a detailed analysis of
star-formation and interstellar medium (ISM) scaling relations, done on a
representative sample of nearby galaxies. H$\alpha$ images are analysed in
order to derive the integrated galaxy luminosity, known as a more instantaneous
and accurate star-formation rate (SFR) tracer, and the required photometric and
structural parameters. Dust and inclination corrected H$\alpha$ luminosities,
SFRs and related quantities are determined using a self-consistent method based
on previous work prescriptions, which do not require the assumption of a dust
attenuation curve and use of Balmer decrements (or other hydrogen recombination
lines) to estimate the dust attenuation, with the advantage of determining dust
opacities and dust masses along the way. We investigate the extent to which
dust and inclination effects bias the specific parameters of these relations,
the scatter and degree of correlation, and which relations are fundamental or
are just a consequence of others. Most of our results are consistent within
errors with other similar studies, while others come in opposition or are
inconclusive. By comparing the B band optical and H$\alpha$ (star-forming)
discs scalelengths, we found on average, the star-formation distribution to be
more extended than the stellar continuum emission one (the ratio being 1.10),
this difference increasing with stellar mass. Similarly, more massive galaxies
have a more compact stellar emission surface density than the star-formation
one (average ratio of 0.77). The method proposed can be applied in larger scale
studies of star-formation and ISM evolution, for normal low to intermediate
redshift galaxies. | Bogdan A. Pastrav | 2023-05-30T09:20:32Z | http://arxiv.org/abs/2305.18879v2 | Dust and inclination corrected star-formation and interstellar medium scaling relations in nearby galaxies
###### Abstract
Following from our recent work, we present here a detailed analysis of dust and star-formation scaling relations, done on a representative sample of nearby galaxies. H\(\alpha\) images are analysed in order to derive the integrated flux / luminosity for each galaxy, used as a more instantaneous and accurate star-formation rate (SFR) tracer, and the relevant photometric and structural parameters. Dust and inclination corrected H\(\alpha\) luminosities and SFRs are subsequently determined using a method that circumvents the assumption of a dust attenuation curve and the use of the Balmer decrements or other hydrogen recombination lines in order to estimate the dust attenuation, which have been shown to be affected by various biases or to be inconsistent between different types of galaxies. We investigate the extent to which dust and inclination effects bias the specific parameters of these relations, the scatter and degree of correlation between the parameters, and which relations are fundamental or are just a consequence of others. Our results are consistent within errors with other similar studies. By comparing the scalelengths of the B band optical and H\(\alpha\) (star-forming) discs, we found, on average, the distribution of star-formation to be more extended than the stellar continuum emission one (the ratio being 1.10), this difference increasing with stellar mass. Similarly, more massive galaxies have a more compact stellar emission surface density than the star-formation one (average ratio of 0.77) for our sample. The method proposed can be applied in larger scale studies of star-formation and ISM evolution, at low to intermediate redshifts.
keywords: galaxies: star formation - ISM: dust, extinction - ISM: evolution - galaxies: evolution - galaxies: spiral - galaxies: ISM
## 1 Introduction
Dust and star-formation scaling relations are essential in studies of interstellar medium (ISM) evolution, in star-formation and galaxy evolution studies, or in studies related to the duty cycle of dust and gas in galaxies. Dust can be considered a good ISM tracer even though it is found in quantities of only up to \(\simeq\) 1% of the total ISM mass (Draine & Lee 1984, Draine 2003), the rest being mostly gas in different phases and forms (atomic/neutral, molecular or ionized hydrogen). It is also a processor of stellar radiation, as it scatters and absorbs the stellar radiation in the ultraviolet (UV) and optical and re-emits it at longer wavelengths, in the mid-infrared (MIR) to far-infrared (FIR) domain. Besides being present in significant quantities in the discs of spiral galaxies (Tuffs et al. 2002, Popescu et al. 2002, Vlahakis et al. 2005, Driver et al. 2007, Dariush et al. 2011, Rowlands et al. 2012, Bourne et al. 2012, Dale et al. 2012) - the _diffuse_ dust distribution, it also surrounds the birthclouds of stars in the star-forming regions - the _localized_ distribution, obscuring the radiation coming from the young stars, and is a nuisance in estimations of star-formation rates and of the fraction of radiation which escapes the birthclouds of stars into the ISM.
Star-formation rates (SFR), which quantify the star-formation process - the transformation of cold gas into stars, are pivotal quantities in the attempt to understand and characterise galaxy evolution. As galaxies can have various ISM properties and be in different stages of star-formation (e.g. actively star-forming, starbursts, quiescent), deriving consistent and unbiased values for the star-formation rates based on different proxies or tracers is a real problem, and it produces significant differences in the values obtained. Another aspect to be considered when using different tracers to estimate the SFR is that most of them are affected by various systematic biases, such as dust attenuation, the assumption of a constant initial mass function (IMF), metallicity dependence, location within the galaxy, and variations in the ISM conditions. This in turn can significantly influence the related scaling relations and their characteristic parameters (e.g. slope, zero-point or correlation coefficient).
Thus, the most direct method of determination for star-formation
rates is to count the number of stars of a certain age (Kennicutt & Evans, 2012), but at the level of current instrumentation capabilities this method is limited to mostly Local Group galaxies. For more distant galaxies, the usual strategy to measure SFRs is to use UV continuum (Salim et al., 2007) and emission line tracers. Near-ultraviolet (NUV) continuum is one of the most direct tracers of recent star-formation as it traces the emission from young stars (Kennicutt & Evans, 2012). However, NUV observations are strongly affected by interstellar dust attenuation, this effect being less important at longer wavelengths, as shown by Tuffs et al. (2004), Mollenhoff et al. (2006), Gadotti et al. (2010), Pastrav et al. (2013a) and Pastrav et al. (2013b). The UV slope (the so-called \(\beta\)) has been used to estimate the attenuation (Hao et al., 2011), but this approach relies on many assumptions, like the shape of the dust attenuation curve and dust geometry, or the unknown intrinsic UV colors. Combinations of UV and infrared (IR) data have been used by Kennicutt et al. (2009), Hao et al. (2011), Skibba et al. (2011), Whitaker et al. (2014), Remy-Ruyer et al. (2015), Barro et al. (2019), Hunt et al. (2019) to derive star-formation rates corrected for dust attenuation effects using an energy-balance approach (Calzetti et al., 2007, Zhu et al., 2008, Kennicutt et al., 2009). Still, this method is also affected by dust attenuation, which is often calculated from Balmer decrements, or by a stellar population age dependence (Kennicutt & Evans, 2012). Another method is to use hydrogen recombination lines (with wavelengths in the optical range), such as the H\(\alpha\) line flux/luminosity, in combination with other MIR fluxes such as the 8\(\mu\)m or 22/24\(\mu\)m fluxes (Kennicutt et al., 2007, Kennicutt et al., 2009, Calzetti et al., 2007, Skibba et al., 2011, Remy-Ruyer et al., 2015, Hunt et al., 2016, Hunt et al., 2019). Other hydrogen recombination lines, such as the near-infrared Pa\(\alpha\) and Pa\(\beta\) lines, have been used to estimate the SFR as these are far less affected by dust extinction, but at the same time are fainter at longer wavelengths and more sensitive to the density and temperature of the gas (Calzetti, 2013). These lines can probe higher optical depths than the Balmer decrements (Liu et al. 2013), reveal more obscured star-formation regions than the former, as found by Tateuchi et al. (2015), Cleri et al. (2022), and have been used by Alonso-Herrero et al. (2006), Calzetti et al. (2007), Piqueras Lopez et al. (2016), Gimenez-Arteaga et al. (2022) or Cleri et al. (2022) to calibrate SFR indicators in the MIR to NIR.
In this third paper of the series, we concentrate on the characteristic star-formation and dust/ISM scaling relations, investigating the extent to which dust and inclination (projection) effects bias the specific parameters of these relations, such as the slope, zero-point, scatter and correlation coefficient, or produce underestimated values for the star-formation rates of galaxies and other associated relevant parameters. Through the proposed method, we try to eliminate the uncertainties produced by dust attenuation in the measurements of the relevant quantities, especially in the SFR. We choose to use the H\(\alpha\) optical emission line flux as a SFR tracer. We make use of H\(\alpha\) galaxy images for the purpose of this work and, as in Pastrav (2020) and Pastrav (2021), we decompose each galaxy into its main components (bulge+disc). Then, we use the method of Pastrav et al. (2013a) and Pastrav et al. (2013b) and their numerical corrections for projection (inclination), dust and decomposition effects, to recover the corrected photometric and structural parameters involved in the analysed scaling relations. The numerical corrections were derived by analysing and fitting simulated images of galaxies produced by means of radiative transfer calculations and the model of Popescu et al. (2011). The empirical relation found by Grootes et al. (2013) is used again here, tailored to the H\(\alpha\) line wavelength, to determine the new values for the central face-on dust opacity (\(\tau_{H\alpha}\)), a parameter which is essential when applying the corrections for dust effects. When determining corrected H\(\alpha\) luminosities and star-formation rates, our proposed method circumvents the need to assume a dust attenuation curve (usually a Galactic extinction curve or a similar one, as in many studies) and the use of Balmer decrements or other hydrogen recombination lines to estimate the dust attenuation (assuming a foreground dust screen approximation), approaches which have been shown to be affected by various biases or to be inconsistent for different types of galaxies. We thus derive the SFR using the unattenuated H\(\alpha\) luminosities, to obtain more accurate and instantaneous star-formation rate values than would be derived through other methods. For most of the corrected relations we investigate the degree of correlation between the parameters, calculate the scatter of these relations and analyse the implications of the main results for star-formation and galaxy evolution. We then discuss these results and compare them with other relevant studies of nearby galaxies. The method proposed here can be applied successfully in future larger-scale studies of star-formation and ISM evolution, at low to intermediate redshifts, eliminating some of the biases involved. Our study emphasizes the importance of having accurate, unbiased star-formation rates and scaling relations in studies of ISM evolution and star-formation.
The paper is organised as follows. In Sect. 2 we present the galaxy sample used in this study, while in Sect. 3 we describe the method used for this analysis and the motivation for our choices. In Sect. 4 we present the main results - the dust and inclination corrected star-formation and interstellar medium scaling relations, together with all their characteristic parameters - and comment upon them in relation to other relevant studies in the literature. In Sect. 5 we discuss the possible sources of error, the differences with respect to other studies, and the limitations of the method, while in Sect. 6 we summarise the results obtained in this study and draw conclusions.
## 2 Sample
Our sample consists of 19 low-redshift spiral galaxies and 5 lenticulars, included in the SINGS (_Spitzer_ Infrared Nearby Galaxies Survey; Kennicutt et al. 2003) survey and the KINGFISH project (Key Insights on Nearby Galaxies: a Far-Infrared Survey with _Herschel_; Kennicutt et al. 2011). The galaxies were already analysed in the B band in Pastrav (2020) and Pastrav (2021), while one further galaxy, NGC 5194 (M51), was added here. For the purpose of this study, we needed the H\(\alpha\) line images of the same galaxies (analysed previously in the B band), which we extracted from the NASA/IPAC Infrared Science Archive (IRSA) and the NASA/IPAC Extragalactic Database (NED). Most of the images were taken with the KPNO (Kitt Peak National Observatory) and CTIO (Cerro Tololo Inter-American Observatory) telescopes (see Kennicutt et al., 2003, Kennicutt et al., 2009). For NGC 5033 no suitable H\(\alpha\) image was found, therefore this galaxy was excluded from this study.
The KINGFISH project is an imaging and spectroscopic survey, consisting of 61 nearby (d\(<\)30 Mpc) galaxies, chosen to cover a wide range of galaxy properties (morphologies, luminosities, SFR, etc.) and local ISM environments characteristic for the nearby universe.
## 3 Method
The method used in this third part of our study is in general similar to the one used in Papers I and II when it comes to the fitting procedure, the sky determination and subtraction, and the photometry (now all done for the H\(\alpha\) images), while the relations for the derivation of the dust opacity and dust mass were adapted to the H\(\alpha\) line wavelength. Therefore, for a more detailed description, we refer the reader to Paper I, where the whole procedure is presented in great detail. Here, we only summarise the procedure in a more concise form, given below.
### Fitting procedure
For the fitting procedure of the H\(\alpha\) line images of the galaxies in our sample, just as for the B band images, we used the GALFIT (version 3.0.2) data analysis algorithm (Peng et al. 2002, Peng et al. 2010). GALFIT performs non-linear least-squares fitting based on the Levenberg-Marquardt algorithm. For the structural analysis (bulge-disc decomposition) of each galaxy and to fit the observed surface brightness of the spirals and lenticulars, we used the exponential ("expdisc") and the Sersic ("sersic") functions available in GALFIT for the disc and bulge surface brightness profiles, while the "sky" function was used for an initial estimation of the background in each image.
As in our previous works, the free parameters of the fits are: the X and Y coordinates of the centre of the galaxy in pixels, the bulge and disc integrated magnitudes, the disc scale-length / bulge effective radius (for exponential/Sersic function), axis-ratios of discs and bulges, bulge Sersic index (for Sersic function), the sky background (only in the preliminary fit - Step 1, see Paper I) and the sky gradients in X and Y. The input values for the coordinates of galaxy centre were determined after a careful inspection of each image. Initial values for the position angles (PA) and axis-ratios were taken from NED. Although the central coordinates are free parameters, we imposed a constraint on the fitting procedure, ensuring that the bulge and disc components were centred on the same position. The axis-ratio is defined as the ratio between the semi-minor and semi-major axis of the model fit (for each component). The position angle is the angle between the semi-major axis and the Y axis (increasing counter clock-wise). To mask the pixels corresponding to the additional light coming from neighboring galaxies, stars, compact sources, AGN or image artifacts, for each galaxy image we used a complex star-masking routine to create a bad pixel mask. This was used as input in GALFIT.
### Sky determination and subtraction. Photometry
Following the procedure in three steps described in Paper I to estimate the background level as accurately as possible, we calculate the integrated fluxes for each galaxy, together with the corresponding bulge-to-disc ratios, and then derive all the structural and photometric parameters (this time at the H\(\alpha\) wavelength). The integrated (total) flux of each galaxy is calculated from the maximum curve-of-growth (CoG) value (in counts), at the \(R_{max}\) galactocentric radius (defined as the radius beyond which there is no galaxy emission and, therefore, the CoG is essentially flat towards larger radii). As for the B band images in Paper I, the uncertainties of the fluxes are estimated from the root mean square of the CoG values from the first 10 elliptical annuli beyond \(R_{max}\). The bulge-to-disc ratio (\(B/D\)) is estimated from the disc and bulge CoGs and compared with the one determined from the ratio of the total counts of the decomposed disc and bulge images, as the two have to be consistent within errors. We have used again the positive sky residuals in the outer parts of galaxies (towards \(R_{max}\) and beyond) to estimate the systematic errors in the bulge-to-disc ratios. We determine the H\(\alpha\) bulge-to-disc ratios here only to compare them with the B band values (one would expect the former to be higher), as these are not strictly necessary for the purpose of this study.
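To make this step concrete, the following minimal Python sketch (the array names `radii` and `cog` are ours, purely illustrative) shows how the total flux and its uncertainty can be read off a curve of growth under the prescription above:

```python
import numpy as np

def cog_flux(radii, cog, r_max, n_annuli=10):
    """Total flux and its uncertainty from a curve of growth (CoG).

    radii : outer radii of the elliptical annuli (increasing order)
    cog   : cumulative counts enclosed within each radius
    r_max : radius beyond which there is no galaxy emission

    The flux is the CoG value at r_max; the uncertainty is the rms
    scatter of the CoG over the first n_annuli annuli beyond r_max.
    """
    i_max = np.searchsorted(radii, r_max)
    flux = cog[i_max]
    tail = cog[i_max + 1 : i_max + 1 + n_annuli]
    sigma = np.sqrt(np.mean((tail - flux) ** 2))
    return flux, sigma
```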
Here, in deriving the H\(\alpha\) line fluxes, we have to take into consideration the contamination of the flux values by the \(N[II]\lambda 6548,6584\) lines, positioned very close in the spectrum with respect to this Balmer line (\(\lambda(H\alpha)=6563\)Å). Therefore, the previously derived values have to be corrected for the effect of this line blending, using the \(N[II]/H\alpha\) ratios available in the literature. In this study, we have chosen to use the values derived in Kennicutt et al. (2009) (see their Table 1), and multiplied the initially derived fluxes by a correction factor, as in the equation below:
\[F_{d}^{obs}(H\alpha)=F_{d}^{obs}([NII]+H\alpha)\times f_{corr}, \tag{1}\]
with \(f_{corr}=1/([NII]/H\alpha+1)\), and \(F_{d}^{obs}([NII]+H\alpha)\) the contaminated H\(\alpha\) flux. Then, we also corrected the new values for foreground extinction, \(A_{es}\), derived by considering the values at optical wavelengths taken from NED, as in the Schlafly & Finkbeiner (2011) recalibration of the Schlegel et al. (1998) infrared-based dust map, and interpolating at the H\(\alpha\) line wavelength. This gives approximately \(A_{es}(H\alpha)=0.6A_{es}(B)\). As the H\(\alpha\) line emission is concentrated in the young stellar disc of galaxies, we use the integrated disc flux to further derive the observed (measured) H\(\alpha\) luminosities of the sample (avoiding any bulge flux contamination this way), according to the general formula
\[L^{obs}(H\alpha)=4\pi d_{gal}^{2}F_{d}^{obs}(H\alpha) \tag{2}\]
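A minimal Python sketch of this flux-to-luminosity chain (Eqs. 1-2); the function name and the Mpc-to-cm constant are ours, and the de-reddening step assumes the standard \(10^{0.4A}\) magnitude convention:

```python
import numpy as np

MPC_TO_CM = 3.0857e24  # centimetres per megaparsec

def halpha_luminosity(f_blend, nii_over_ha, a_fg_halpha, d_mpc):
    """Observed H-alpha disc luminosity (erg/s) from the measured
    [NII]+H-alpha disc flux (erg/cm^2/s), following Eqs. (1)-(2).

    f_blend      : contaminated [NII]+H-alpha flux
    nii_over_ha  : [NII]/H-alpha ratio (e.g. from Kennicutt et al. 2009)
    a_fg_halpha  : foreground extinction at H-alpha, ~0.6*A(B) [mag]
    d_mpc        : distance to the galaxy in Mpc
    """
    f_ha = f_blend / (nii_over_ha + 1.0)          # Eq. (1)
    f_ha *= 10 ** (0.4 * a_fg_halpha)             # remove foreground dust
    return 4.0 * np.pi * (d_mpc * MPC_TO_CM) ** 2 * f_ha   # Eq. (2)
```

As a consistency check, applying the \(4\pi d_{gal}^{2}\) factor alone to the tabulated disc flux of NGC 0024 (Table 1) recovers \(\log L^{obs}(H\alpha)\simeq 40.35\), the value listed in Table 4.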
The derived integrated fluxes (in \(erg/cm^{2}/s\)) and the corresponding H\(\alpha\) luminosities (both in log scale), as well as the bulge-to-disc ratios, are given for all galaxies of our sample in Table 1, together with the distances to each galaxy used in this study (the same as in the previous papers), taken from NED.
### Deriving star-formation rates
The H\(\alpha\) luminosity, previously calculated, is needed in this study to derive first the measured star-formation rates (SFR) of the analysed galaxies, as it is a well-known SFR tracer (e.g. Kennicutt & Evans 2012). Star-formation rates derived from the H\(\alpha\) line or other hydrogen recombination lines (which arise from HII regions throughout the galaxy) are more accurate and give a more instantaneous value for the SFR (tracing more recent star-formation, \(<\)10 Myr; Kennicutt & Evans 2012) than those based on the UV continuum or a combination of UV+MIR/FIR fluxes, or those extracted from fitting the spectral energy distribution (SED) of a galaxy (which gives an estimation of the star-formation over the last 100-500 Myr).
In the literature, there is a wide range of studies estimating the SFR using either a combination of FUV+TIR fluxes (Kennicutt et al. 2009, Hao et al. 2011, Skibba et al. 2011, Remy-Ruyer et al. 2015, Hunt et al. 2016, Hunt et al. 2019), FUV/NUV+MIR (e.g. the 22\(\mu\)m flux, Leroy et al. 2021), the H\(\alpha\) line flux/luminosity combined with MIR fluxes such as H\(\alpha\)+8\(\mu\)m (Kennicutt et al. 2007, Kennicutt et al. 2009, Calzetti et al. 2007), H\(\alpha\)+24\(\mu\)m (Kennicutt et al. 2007, Calzetti et al. 2007, Kennicutt et al. 2009, Skibba et al. 2011, Remy-Ruyer et al. 2015, Hunt et al. 2016, Hunt et al. 2019), or even just the 24\(\mu\)m luminosity (Alonso-Herrero et al. 2006, Calzetti et al. 2007, Piqueras Lopez et al. 2016) or the Pa\(\alpha\) line flux (Alonso-Herrero et al. 2006, Tateuchi et al. 2015, Piqueras Lopez et al. 2016). The use of the TIR and 24\(\mu\)m luminosities has been shown to be problematic due to the contamination of dust heating by low-mass older stars (Kennicutt et al. 2009, Boquien et al. 2014, De Looze et al. 2014, Viaene et al. 2017).
Since we have thoroughly derived the fluxes and luminosities for our small sample, together with a self-consistent calculation of the dust opacities which attenuate the H\(\alpha\) fluxes, we consider the corrected SFRs derived from the unattenuated H\(\alpha\) luminosities to be sufficiently accurate. Therefore we do not use other UV or MIR/FIR/TIR fluxes/luminosities in combination with the H\(\alpha\) luminosity. We further motivate this choice in the following section. To determine the observed (attenuated) star-formation rates, we use the calibration from Kennicutt (1998) and convert from a Salpeter (Salpeter 1955) to a Chabrier (Chabrier 2003) initial mass function (IMF), as in Gimenez-Arteaga et al. (2022), obtaining
\[SFR^{obs}=4.4\times 10^{-42}L^{obs}(H\alpha) \tag{3}\]
We also determine the specific star-formation rates, \(sSFR\), for the galaxies in our sample - the ratio between SFR and stellar mass - \(sSFR^{obs}=SFR^{obs}/M_{*}\). For the purpose of investigating certain star-formation related scaling relations and because we did not calculate the molecular gas surface densities, we instead derive the observed star-formation surface densities for our galaxies as
\[\Sigma_{SFR}^{obs}=\frac{SFR^{obs}}{2\pi R_{eff,d}^{2}(H\alpha)} \tag{4}\]
with \(R_{eff,d}(H\alpha)\) being the observed effective radius of the H\(\alpha\) disc.
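Collecting Eqs. (3)-(4), a short Python sketch of the observed star-formation quantities (function and argument names are ours):

```python
import numpy as np

def sfr_quantities(L_ha_obs, m_star, r_eff_d_kpc):
    """Observed SFR [M_sun/yr], sSFR [1/yr] and SFR surface density
    [M_sun/yr/kpc^2] from the observed H-alpha luminosity [erg/s],
    the stellar mass [M_sun] and the observed effective disc radius."""
    sfr = 4.4e-42 * L_ha_obs                          # Eq. (3)
    ssfr = sfr / m_star
    sigma_sfr = sfr / (2.0 * np.pi * r_eff_d_kpc**2)  # Eq. (4)
    return sfr, ssfr, sigma_sfr
```

For example, \(\log L^{obs}(H\alpha)=40.35\) for NGC 0024 yields \(SFR^{obs}\simeq 0.10\,M_{\odot}/yr\), as listed in Table 4.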
### Dust opacity and dust mass derivation
In Papers I and II (see Section 3.3), we described the procedure and equations used to derive the dust opacity and dust mass for the analysed sample of spiral galaxies in the B band. We now have to adapt the same equations to this study, at the H\(\alpha\) line wavelength of 6563Å, because the dust optical depth in the disc and the associated dust mass are different at this wavelength. Starting with Eq. (2) from Grootes et al. (2013) (but see also its derivation in Eqs. (A1-A5) from Appendix A of the same paper), which relates the dust mass at a wavelength \(\lambda\) to the corresponding central face-on dust opacity, we rewrite the aforementioned relation for the H\(\alpha\) case:
\[\tau_{H\alpha}^{f}=K(H\alpha)\frac{M_{dust}(H\alpha)}{R_{s,d}^{2}(H\alpha)} \tag{5}\]
This relation was calculated considering the dust geometry of the Popescu et al. (2011) model, where the diffuse dust in the disc (which mostly determines the optical depth of a spiral galaxy) is distributed axisymmetrically in two exponential discs. Therefore, the optical depth at a given wavelength and position is a function of the central face-on dust column density or, equivalently, of the face-on opacity at a reference wavelength, \(\lambda\). In Eq. 5, \(K(H\alpha)\) is a constant containing the details of the dust geometry and the spectral emissivity of the Weingartner & Draine (2001) model. We thus had to recalculate it using the Popescu et al. (2011) model equations (and data from their Table E.1) and the dust model of Draine (2003), obtaining a value of \(0.6040\,pc^{2}/kg\), considerably different from the value of \(1.0089\,pc^{2}/kg\) found for the B band. Correspondingly, \(R_{s,d}(H\alpha)\) is the scalelength of the H\(\alpha\) stellar disc, in kpc.
Now, looking at the empirical correlation between \(\tau_{B}^{f}\) and the stellar mass surface density (\(\mu_{*}\)) of nearby spiral galaxies found by Grootes et al. (2013), which we used in Papers I and II to derive the central face-on optical depth of the disc in the B band,
\[\log(\tau_{B}^{f})=1.12(\pm 0.11)\cdot\log(\mu_{*}/M_{\odot}kpc^{-2})-8.6(\pm 0.8), \tag{6}\]
we had to evaluate the changes needed and their significance, for it to be valid for deriving \(\tau_{H\alpha}^{f}\) - the dust opacity of the disc at the H\(\alpha\) line wavelength. Therefore, we went back to the detailed derivation of the dust opacity in the paper of Grootes et al. (2013), namely Eqs. (1), (2) in Section 2 and, more importantly, the suite of relations (A1-A9), thoroughly described in their Appendix A. Following the equations, the main change in the expression of \(\tau_{\lambda}^{f}\) would be in the value of the factor \(A\) (see Eqs. (A8) & (A9) in the same paper), empirically calibrated to the value of \(6.939\times 10^{-13}\ arcsec^{2}\,J/Jy/s/Hz/m^{2}/sr\) based on the Popescu et al. (2011) model. This is because a factor of \(\gamma^{2}\) was introduced in the expression of \(A\) by the authors to convert the disc scalelength in the B band to the corresponding one in the \(r\) band (suitable for their analysis), with \(\gamma=R_{s,d}(B)/R_{s,d}(r)\). Its value was derived considering the fixed geometry of the Popescu et al. (2011) model, where the intrinsic scalelength of the disc decreases with wavelength, \(R_{s,d}(r)\) thus being smaller than \(R_{s,d}(B)\); this ratio is \(\gamma=1.067\) at the \(r\) band wavelength of 6600Å.
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
_Galaxy_ & \(d_{gal}\) & \(B/D\) & \(F_{gal}^{obs}\) & \(\sigma_{F,gal}\) & \(F_{d}^{obs}\) & \(\sigma_{F,d}\) \\
 & [Mpc] & & \([\frac{erg}{cm^{2}s}]\) & \([\frac{erg}{cm^{2}s}]\) & \([\frac{erg}{cm^{2}s}]\) & \([\frac{erg}{cm^{2}s}]\) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) \\
\hline
NGC0024 & \(7.67^{a}\) & \(0.00^{+0.00}_{-0.00}\) & -11.50 & -12.19 & -11.50 & -12.23 \\
NGC0628 & \(9.59^{b}\) & \(0.05^{+0.00}_{-0.00}\) & -10.99 & -12.36 & -11.01 & -12.38 \\
NGC2841 & \(14.60^{a}\) & \(0.21^{+0.00}_{-0.00}\) & -10.70 & -12.94 & -10.78 & -13.04 \\
NGC2976 & \(3.57^{c}\) & \(0.00^{+0.00}_{-0.00}\) & -10.85 & -12.46 & -10.85 & -12.38 \\
NGC3031 & \(3.62^{d}\) & \(1.26^{e}_{-0.05}\) & -10.79 & -11.72 & -11.15 & -12.19 \\
NGC3190 & \(24.20^{e}\) & \(0.36^{+0.02}_{-0.08}\) & -12.34 & -12.49 & -12.47 & -12.64 \\
NGC3621 & \(6.73^{e}\) & \(0.03^{+0.02}_{-0.01}\) & -11.88 & -13.03 & -11.89 & -13.04 \\
NGC3938 & \(17.90^{f}\) & \(0.03^{+0.00}_{-0.01}\) & -11.87 & -12.48 & -11.88 & -12.49 \\
NGC4254 & \(14.40^{f}\) & \(0.08^{+0.00}_{-0.00}\) & -11.29 & -13.00 & -11.32 & -13.04 \\
NGC4450 & \(15.20^{f}\) & \(0.29^{+0.08}_{-0.08}\) & -11.61 & -12.12 & -11.72 & -12.30 \\
NGC4594 & \(9.5^{b}\) & \(4.71^{+0.06}_{-0.08}\) & -10.98 & -11.32 & -11.14 & -12.11 \\
NGC4736 & \(4.59^{b}\) & \(1.23^{+0.03}_{-0.02}\) & -10.90 & -12.11 & -11.15 & -12.49 \\
NGC4826 & \(5.50^{f}\) & \(0.77^{+0.00}_{-0.01}\) & -11.30 & -12.33 & -11.57 & -12.56 \\
NGC5055 & \(8.20^{f}\) & \(0.21^{+0.00}_{-0.00}\) & -11.58 & -12.66 & -11.66 & -12.74 \\
NGC5474 & \(6.98^{a}\) & \(0.16^{+0.03}_{-0.03}\) & -10.78 & -13.28 & -11.84 & -13.34 \\
NGC7331 & \(13.90^{f}\) & \(0.66^{+0.02}_{-0.02}\) & -10.99 & -12.16 & -11.21 & -12.28 \\
NGC7793 & \(3.70^{f}\) & \(0.01^{+0.00}_{-0.00}\) & -10.85 & -13.03 & -10.86 & -13.04 \\
NGC1377 & \(21.00^{f}\) & \(1.21^{+0.00}_{-0.00}\) & -12.13 & - \\
\hline
\end{tabular}
\end{table}
Table 1: The distances, bulge-to-disc ratios and H\(\alpha\) integrated fluxes (total and disc-only, in decimal logarithm scale), with their uncertainties, for the galaxies of our sample (see text).
Leaving the other reference values unchanged, we subsequently re-calculated the value of the factor \(A\) for our case by multiplying the value derived in Grootes et al. (2013) by \(\gamma^{2}\) and then dividing it by a \(\gamma_{H\alpha}^{2}\) term, to account for the conversion of disc scalelengths \(R_{s,d}(B)\Rightarrow R_{s,d}(H\alpha)\). To derive this new value we again considered the same geometry of the Popescu et al. (2011) model and interpolated their values at the \(\lambda_{H\alpha}=6563\)Å wavelength. We derived a value of \(\gamma_{H\alpha}=1.074\) and a new factor \(A\) with a corresponding value of \(6.852\times 10^{-13}\ arcsec^{2}\,J/Jy/s/Hz/m^{2}/sr\). As a result, the changes induced in the slope and intercept of the correlation are within the standard deviations derived by Grootes et al. (2013), of \(\pm 0.11\) and \(\pm 0.8\).
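For clarity, this rescaling of the calibration factor amounts to the one-line calculation

\[A(H\alpha)=A\,\frac{\gamma^{2}}{\gamma_{H\alpha}^{2}}=6.939\times 10^{-13}\times\left(\frac{1.067}{1.074}\right)^{2}\simeq 6.85\times 10^{-13}\ arcsec^{2}\,J/Jy/s/Hz/m^{2}/sr,\]

consistent with the value quoted above.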
We can thus rewrite Eq. 6 as
\[\log(\tau_{H\alpha}^{f})=1.12(\pm 0.11)\cdot\log(\mu_{*,H\alpha}/M_{\odot}kpc^{-2})-8.6(\pm 0.8), \tag{7}\]
with \(\mu_{*,H\alpha}\) being the stellar mass surface density (derived using the scalelength of the H\(\alpha\) disc obtained through the bulge-disc decomposition)
\[\mu_{*,H\alpha}=M_{*}/2\pi R_{s,d}^{2}(H\alpha) \tag{8}\]
The dust opacities, stellar mass surface densities and dust masses calculated using these relations are presented in Table 2.
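Eqs. (5), (7) and (8) form a short computational chain, sketched below in Python (function names are ours; the unit conventions of Eq. 5 follow Grootes et al. (2013) and are deliberately left to the caller):

```python
import numpy as np

def face_on_opacity(m_star, r_sd_ha_kpc):
    """Central face-on H-alpha dust opacity from Eqs. (7)-(8), given
    the stellar mass [M_sun] and the H-alpha disc scalelength [kpc]."""
    mu_star = m_star / (2.0 * np.pi * r_sd_ha_kpc**2)   # Eq. (8), M_sun/kpc^2
    tau_f = 10 ** (1.12 * np.log10(mu_star) - 8.6)      # Eq. (7)
    return tau_f, mu_star

def dust_mass(tau_f, r_sd_ha, k_halpha=0.6040):
    """Dust mass by inverting Eq. (5): M_dust = tau_f * R^2 / K(H-alpha).
    The result carries whatever units follow from those adopted for
    R_s,d and K (pc^2/kg in the convention quoted in the text)."""
    return tau_f * r_sd_ha**2 / k_halpha
```

As a check against Table 2, \(\log(\mu_{*})=7.99\) for NGC 0024 gives \(\tau_{H\alpha}^{f}\simeq 2.2\), matching the tabulated value of 2.21.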
### Correcting for dust, projection and decomposition effects
Once again, as in Papers I and II, in order to derive corrected values for all the parameters involved in the analysed dust/ISM and star-formation scaling relations, we used the method developed and presented in Pastrav et al. (2013a,b). More specifically, we used the whole chain of corrections presented in Eqs. (4-13) of Pastrav et al. (2013a) and Eqs. (3-13) of Pastrav et al. (2013b), together with all the numerical results (given in electronic form as data tables at CDS), to correct the measured parameters for projection (inclination), dust and decomposition effects, in order to obtain their dust-free, intrinsic values. As we now analysed images of H\(\alpha\) emission, which comes from the young stellar disc of galaxies, we used the aforementioned numerical corrections for the young stellar disc, already derived in Pastrav et al. (2013a) for the H\(\alpha\) line. We then proceeded as in Paper I to correct all the necessary photometric and structural parameters (see Eqs. (4-9) & (12-14) in Paper I for discs and Eqs. (1-5) & (8-12) in Paper II for bulges) for all the previously mentioned biases. The relevant parameters were also corrected for foreground extinction and cosmological redshift dimming (the latter in the range of \(0.01-0.05\) mag). K-corrections or evolutionary corrections were not applied, as all the galaxies are at low redshift.
In the case of the star-formation rates, the H\(\alpha\) luminosity was the quantity that had to be debiased in order to obtain corrected \(SFR\) values for our sample. Thus, we can write the corrected luminosity as
\[L(H\alpha)^{corr}=L(H\alpha)^{obs}\,e^{\tau_{H\alpha}}=L(H\alpha)^{obs}\,10^{\frac{A_{H\alpha}}{2.5}}, \tag{9}\]
with \(A_{H\alpha}=1.086\tau_{H\alpha}\) being the attenuation of the H\(\alpha\) line emission. However, the dust opacity of the emission line is usually not equal to the dust opacity of the stellar continuum - the opacity of the starlight heating the dust - which depends on many factors (the dust attenuation curve, the dust geometry, the SED of the stellar
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c}
\hline \hline
_Galaxy_ & \(\tau_{H\alpha}^{f}\) & \(\log(\mu_{*})\) & \(\log(\mu_{*}^{i})\) & \(\log(M_{*})\) & \(\log(M_{dust})\) & \(\log(M_{dust}^{i})\) & \(\log(M_{HI})\) & \(\sigma_{M_{HI}}\) & \(\sigma_{\tau_{H\alpha}^{f}}\) & \(\sigma_{\mu_{*}}\) & \(\sigma_{\mu_{*}^{i}}\) & \(\sigma_{M_{dust}}\) & \(\sigma_{M_{dust}^{i}}\) \\
 & & \([\frac{M_{\odot}}{kpc^{2}}]\) & \([\frac{M_{\odot}}{kpc^{2}}]\) & \([M_{\odot}]\) & \([M_{\odot}]\) & \([M_{\odot}]\) & \([M_{\odot}]\) & \([M_{\odot}]\) & & \([\frac{M_{\odot}}{kpc^{2}}]\) & \([\frac{M_{\odot}}{kpc^{2}}]\) & \([M_{\odot}]\) & \([M_{\odot}]\) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) & (14) \\
\hline
NGC 0024 & 2.21 & 7.99 & 9.10 & \(9.48^{d}\) & 6.81 & 5.70 & 9.97 & 0.07 & 0.41 & 0.07 & 0.07 & 0.09 & 0.14 \\
NGC 0628 & 0.66 & 7.51 & 7.55 & \(10.29^{b}\) & 7.56 & 7.53 & 9.57 & 0.07 & 0.10 & 0.06 & 0.06 & 0.07 & 0.07 \\
NGC 2841 & 1.22 & 7.76 & 8.46 & \(10.17^{e}\) & 7.47 & 6.76 & 9.94 & 0.07 & 0.19 & 0.06 & 0.06 & 0.07 & 0.07 \\
NGC 2976 & 2.33 & 8.01 & 8.68 & \(8.96^{e}\) & 6.29 & 5.61 & 8.10 & 0.07 & 0.47 & 0.08 & 0.07 & 0.11 & 0.16 \\
NGC 3031 & 2.44 & 8.02 & 8.73 & \(10.39^{h}\) & 7.72 & 7.02 & 8.88 & 0.07 & 0.26 & 0.04 & 0.04 & 0.05 & 0.05 \\
NGC 3190 & 1.14 & 7.73 & 8.61 & \(10.03^{e}\) & 7.33 & 6.45 & 8.63 & 0.16 & 0.18 & 0.06 & 0.06 & 0.07 & 0.08 \\
NGC 3621 & 1.08 & 7.71 & 8.41 & \(9.43^{e}\) & 6.72 & 6.02 & 9.84 & 0.07 & 0.17 & 0.06 & 0.06 & 0.07 & 0.09 \\
NGC 3938 & 0.33 & 7.25 & 7.32 & \(9.46^{e}\) & 6.70 & 6.62 & 9.90 & 0.07 & 0.05 & 0.06 & 0.06 & 0.07 & 0.07 \\
NGC 4254 & 0.68 & 7.53 & 7.64 & \(9.61^{e}\) & 6.88 & 6.77 & 9.58 & 0.07 & 0.11 & 0.06 & 0.06 & 0.07 & 0.07 \\
NGC 4450 & 3.28 & 8.14 & 8.47 & \(10.40^{h}\) & 7.75 & 7.41 & 8.61 & 0.07 & 0.85 & 0.10 & 0.10 & 0.11 & 0.11 \\
NGC 4594 & 3.80 & 9.20 & 10.92 & \(10.97^{e}\) & 8.44 & 5.60 & 8.41 & 0.07 & 0.69 & 0.07 & 0.07 & 0.08 & 0.19 \\
NGC 4736 & 3.80 & 8.76 & 9.12 & \(10.21^{h}\) & 7.63 & 6.64 & 8.61 & 0.07 & 0.62 & 0.06 & 0.06 & 0.08 & 0.09 \\
NGC 4826 & 1.97 & 7.94 & 8.32 & \(9.99^{e}\) & 7.31 & 6.93 & 8.44 & 0.07 & 0.61 & 0.12 & 0.12 & 0.14 & 0.14 \\
NGC 5055 & 2.27 & 8.00 & 8.87 & \(10.49^{e}\) & 7.82 & 6.94 & 9.75 & 0.07 & 0.35 & 0.06 & 0.06 & 0.07 & 0.07 \\
NGC 5474 & 0.55 & 7.44 & 7.52 & \(9.06^{e}\) & 6.32 & 6.25 & 8.99 & 0.11 & 0.07 & 0.05 & 0.05 & 0.07 & 0.07 \\
NGC 7331 & 1.22 & 7.76 & 8.58 & \(10.56^{e}\) & 7.86 & 7.04 & 9.95 & 0.07 & 0.19 & 0.06 & 0.06 & 0.07 & 0.07 \\
NGC 7793 & 2.04 & 7.96 & 8.25 & \(9.47^{e}\) & 6.79 & 6.50 & 8.94 & 0.07 & 0.33 & 0.06 & 0.06 & 0.08 & 0.0 \\
\hline
\end{tabular}
\end{table}
Table 2: The H\(\alpha\) dust opacities, stellar mass surface densities and dust masses (observed and intrinsic), together with the stellar and HI gas masses and the corresponding uncertainties.
populations that heat the dust, etc.). However, as shown in Kennicutt et al. (2009), for the particular case of the H\(\alpha\) (\(\lambda\)6563Å) line this approximation holds, with the exception of the more extreme cases. Thus, considering this approximation valid for our case, we use the \(\tau_{H\alpha}\) values derived from Eq. 7 to account for the dust attenuation of our SFR tracer, the H\(\alpha\) luminosity. With this choice, we avoid the need to assume a dust attenuation curve (for example, a Galactic extinction curve or a similar one) and the use of either the Balmer decrements (the H\(\alpha\)/H\(\beta\) ratios; Calzetti et al. 2000, Kewley et al. 2002, Brinchmann et al. 2004, Moustakas et al. 2006, Pessa et al. 2021, Pessa et al. 2022) or other near-IR hydrogen recombination lines and ratios between them, such as the Paschen lines (Pa\(\alpha\), Pa\(\beta\), e.g. Piqueras Lopez et al. 2016; the Pa\(\alpha\)/H\(\alpha\), Pa\(\beta\)/H\(\alpha\) ratios, e.g. Alonso-Herrero et al. 2006, Calzetti et al. 2007, Liu et al. 2013, Cleri et al. 2022, Gimenez-Arteaga et al. 2022) or the Brackett lines (Br\(\gamma\), Piqueras Lopez et al. 2016), which may introduce systematic errors when deriving dust attenuations (to a greater extent for the Balmer line ratio). Our values for the attenuation of the emission line have been self-consistently derived, with a fixed star-dust geometry introduced in the calculation of the dust opacities and dust masses, which can also introduce some systematic errors. Still, as the relations in Eqs. 5 and 7 have been calibrated on a representative sample of spiral galaxies, we choose to use the \(\tau_{H\alpha}\) values to correct the \(L(H\alpha)^{obs}\) luminosities (as in Eq. 9) instead of combining them with an additional MIR/FIR flux (as an observational dust attenuation proxy). We further use \(\tau_{H\alpha}\) to determine the corrected (intrinsic) SFR, sSFR and \(\Sigma_{SFR}\) as follows:
\[SFR^{corr}=4.4\times 10^{-42}L(H\alpha)^{corr}=4.4\times 10^{-42}L(H\alpha)^{obs}e ^{\tau_{H\alpha}} \tag{10}\]
\[sSFR^{corr}=SFR^{corr}/M_{*}. \tag{11}\]
\[\Sigma_{SFR}^{corr}=SFR^{corr}/\left(2\pi\left(R_{eff,d}^{i}(H\alpha)\right)^{2}\right) \tag{12}\]
with \(R_{eff,d}^{i}\) being the intrinsic effective radius of the disc at H\(\alpha\) wavelength. The relevant observed and intrinsic photometric and structural parameters needed for this study are shown in Table 3. Likewise, all the star-formation related parameters - H\(\alpha\) luminosities, SFR, sSFR, SFR surface densities and their corresponding uncertainties (derived as described in the following section) are displayed in Table 4.
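The whole correction step (Eqs. 9-12) is compactly summarised by the following Python sketch (names ours), the companion of the observed-quantity sketch in Sect. 3.3:

```python
import numpy as np

def corrected_sfr(L_ha_obs, tau_ha, m_star, r_eff_i_kpc):
    """Dust-corrected H-alpha luminosity and SFR quantities, Eqs. (9)-(12).

    L_ha_obs    : observed H-alpha luminosity [erg/s]
    tau_ha      : dust opacity of the H-alpha emission (from Eq. 7)
    m_star      : stellar mass [M_sun]
    r_eff_i_kpc : intrinsic effective radius of the H-alpha disc [kpc]
    """
    L_corr = L_ha_obs * np.exp(tau_ha)                  # Eq. (9)
    sfr = 4.4e-42 * L_corr                              # Eq. (10)
    ssfr = sfr / m_star                                 # Eq. (11)
    sigma_sfr = sfr / (2.0 * np.pi * r_eff_i_kpc**2)    # Eq. (12)
    return L_corr, sfr, ssfr, sigma_sfr
```

For NGC 0024 (\(\log L^{obs}=40.35\), \(\tau_{H\alpha}^{f}=2.21\)) this returns \(\log L^{corr}\simeq 41.31\), as in Table 4.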
### Error estimation
To estimate the systematic errors on the main photometric and structural parameters needed in this study, namely those characterising the H\(\alpha\) discs of the spiral galaxies, we ran a new set of fits for a few galaxies. In this process, we fixed the sky value to the one found initially by GALFIT plus or minus \(1\sigma\) or \(3\sigma\) (\(\sigma\) being the uncertainty in the sky level), leaving free the parameters of interest (mainly \(R_{s,d}(H\alpha)\), the disc central surface brightness or the disc axis-ratio, \(Q_{d}\)), while all the other parameters were fixed to the values found by GALFIT. The systematic errors in the disc scalelengths and bulge effective radii were within the range 1-10 pixels (1-3 arcsec). They were less significant for the axis-ratios, up to 0.01. This approach to the error estimation of bulge parameters was also used previously by Gao et al. (2019) and Gao et al. (2020). The error on \(d_{gal}\) (the measured distance to the galaxy) was taken from NED. We then performed propagation of errors through Eqs. 2-4 and Eqs. (7-12) to obtain standard deviations (\(\sigma\)) for all the needed parameters.
Having estimated previously the uncertainties of the H\(\alpha\) integrated fluxes, we continued with the propagation of errors to calculate the standard deviations for \(L(H\alpha)\), SFR and sSFR, for both the observed and corrected values.
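We propagate the uncertainties analytically; as an alternative illustration that yields equivalent standard deviations for small, independent Gaussian errors, a Monte-Carlo propagation sketch in Python (all names ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_sigma(func, means, sigmas, n=100_000):
    """Monte-Carlo error propagation: standard deviation of func
    evaluated over Gaussian-perturbed, independent input parameters."""
    draws = [rng.normal(m, s, n) for m, s in zip(means, sigmas)]
    return np.std(func(*draws))

# Example: sigma of SFR^obs = 4.4e-42 * L(H-alpha), for log L = 41.0 +/- 0.1 dex
sigma_sfr = mc_sigma(lambda logL: 4.4e-42 * 10**logL, [41.0], [0.1])
```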
## 4 Results
The most important and representative star-formation relation is the one between \(SFR\) and \(M_{*}\), valid for local galaxies but also at higher redshifts (Brinchmann et al. 2004, Noeske et al. 2007, Salim et al. 2007, Elbaz et al. 2011, Karim et al. 2011, Whitaker et al. 2012) - the "star-formation main sequence", SFMS. We show this relation in the left-hand panel of Fig. 1. We recover the expected trend, the linear increase of SFR with stellar mass. To obtain the specific parameters of the SFMS, we apply a linear regression fit to the corrected values and plot it as a red solid line. The zero-point, slope and scatter derived - \(\beta=-6.95\pm 1.22\), \(\alpha=0.69\pm 0.12\) and \(\sigma=0.39\) dex - are consistent, within errors, with values calculated in other similar, larger-scale studies. For example, Hunt et al. (2016) found a slope of 0.8 for a sample of galaxies from the local universe, including the KINGFISH galaxies, while Elbaz et al. (2007) derived a value of 0.77, also for a sample of local galaxies. A higher slope of 0.89 was determined by Gavazzi et al. (2013) for their HI-normal sample of spiral galaxies taken mostly from the Virgo cluster. Whitaker et al. (2012) determined a slope of 0.7 for their low-redshift sample, with a reduced observed scatter of 0.34 dex. A lower slope of 0.67, closer to our determined value, was recently found by Cooke et al. (2023) for the low-redshift (z=0.0-0.3) slice of their large sample of galaxies, selected to study the role of morphology and environment in the evolution of the
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
\(Galaxy\) & \(Q_{d}^{i}(H\alpha)\) & \(R_{s,d}(H\alpha)\) & \(R_{s,d}^{i}(H\alpha)\) & \((B/D)^{i}(H\alpha)\) \\
 & & [kpc] & [kpc] & \\
(1) & (2) & (3) & (4) & (5) \\
\hline
NGC 0024 & 0.26 & 1.33 & 0.37 & 0.00 \\
NGC 0628 & 0.95 & 5.23 & 5.58 & 0.06 \\
NGC 2841 & 0.43 & 3.82 & 1.70 & 0.25 \\
NGC 2976 & 0.50 & 0.71 & 0.33 & 0.00 \\
NGC 3031 & 0.41 & 3.57 & 1.59 & 1.96 \\
NGC 3190 & 0.34 & 3.36 & 1.22 & 0.40 \\
NGC 3621 & 0.44 & 1.72 & 0.77 & 0.02 \\
NGC 3938 & 0.91 & 3.04 & 2.78 & 0.02 \\
NGC 4254 & 0.81 & 2.15 & 2.29 & 0.07 \\
NGC 4450 & 0.70 & 2.96 & 2.18 & 0.52 \\
NGC 4594 & 0.11 & 1.54 & 0.25 & 5.10 \\
NGC 4736 & 0.69 & 1.26 & 0.83 & 1.40 \\
NGC 4826 & 0.67 & 2.51 & 1.62 & 0.69 \\
NGC 5055 & 0.41 & 3.80 & 1.53 & 0.18 \\
NGC 5474 & 0.92 & 1.53 & 1.40 & 0.14 \\
NGC 7331 & 0.38 & 6.00 & 2.33 & 0.69 \\
NGC 7793 & 0.70 & 1.36 & 0.97 & 0.01 \\
NGC 1377 & 0.59 & 1.07 & 0.61 & 1.25 \\
NGC 1482 & 0.56 & 6.38 & 3.83 & 2.83 \\
NGC 1705 & 0.93 & 0.53 & 0.54 & 0.88 \\
NGC 3773 & 0.80 & 0.68 & 0.52 & 0.21 \\
NGC 5866 & 0.38 & 1.62 & 0.88 & 0.21 \\
NGC 5194 & 0.51 & 4.04 & 2.00 & 0.50 \\
\hline
\end{tabular}
\end{table}
Table 3: The photometric and structural parameters of the H\(\alpha\) discs. The columns represent: (1) - galaxy name; (2) - the intrinsic disc axis-ratio, corrected for projection and dust effects; (3), (4) - the observed and intrinsic disc scalelengths; (5) - the intrinsic bulge-to-disc ratio. In square brackets we give the units in which these quantities are expressed.
SFMS. While not plotted in Fig. 1, it is important to mention that the slope of the measured relation is significantly shallower than the one of the corrected SFMS, having a value of \(\alpha=0.35\pm 0.13\), with a slightly increased scatter, \(\sigma=0.41\) dex. This underlines the importance of deriving dust and inclination corrected star-formation rates based on unbiased tracers.
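All the slopes, zero-points, scatters and Pearson correlation coefficients quoted in this section come from the same simple procedure, which can be sketched in a few lines of Python (function name ours; an ordinary least-squares fit in log-log space):

```python
import numpy as np
from scipy import stats

def fit_relation(logx, logy):
    """Slope, zero-point, rms scatter [dex] and Pearson r of a scaling
    relation fitted in log-log space by ordinary least squares."""
    res = stats.linregress(logx, logy)
    scatter = np.std(logy - (res.slope * logx + res.intercept))
    return res.slope, res.intercept, scatter, res.rvalue
```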
In the right panel of Fig. 1 we show a similar plot, this time for the relation between sSFR and stellar mass. A decreasing trend with stellar mass is noticed, as found in other studies (e.g. Gavazzi et al. 2013, Grossi et al. 2015, Hunt et al. 2016), with the slope of the corrected relation being this time shallower than that of the observed one. Proceeding analogously as for the SFMS, we obtained \(\beta=-6.95\pm 1.22\), \(\alpha=-0.31\pm 0.12\) and \(\sigma=0.40\) dex. The slope of the corrected relation is in very good agreement with the value of \(-0.29\) found by Hunt et al. (2016) for \(z\sim 0\) galaxies. Gavazzi et al. (2013), however, derived a significantly steeper re
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c}
\hline \hline
_Galaxy_ & \(L(H\alpha)^{obs}\) & \(L(H\alpha)^{corr}\) & \(\sigma_{L(H\alpha)^{corr}}\) & \(SFR^{obs}\) & \(SFR^{corr}\) & \(\sigma_{SFR^{corr}}\) & \(sSFR^{obs}\) & \(sSFR^{corr}\) & \(\sigma_{sSFR^{corr}}\) & \(\Sigma_{SFR}^{obs}\) & \(\Sigma_{SFR}^{corr}\) & \(\sigma_{\Sigma_{SFR}^{corr}}\) \\
 & \([\frac{erg}{s}]\) & \([\frac{erg}{s}]\) & \([\frac{erg}{s}]\) & \([\frac{M_{\odot}}{yr}]\) & \([\frac{M_{\odot}}{yr}]\) & \([\frac{M_{\odot}}{yr}]\) & \([\frac{1}{yr}]\) & \([\frac{1}{yr}]\) & \([\frac{1}{yr}]\) & \([\frac{M_{\odot}}{yr\,kpc^{2}}]\) & \([\frac{M_{\odot}}{yr\,kpc^{2}}]\) & \([\frac{M_{\odot}}{yr\,kpc^{2}}]\) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13) \\
\hline
NGC 0024 & 40.35 & 41.31 & 40.73 & 0.10 & 0.89 & 0.24 & -10.49 & -9.53 & -10.11 & -2.50 & -0.43 & -1.01 \\
NGC 0628 & 41.03 & 41.32 & 40.52 & 0.47 & 0.91 & 0.14 & -10.62 & -10.33 & -11.13 & -3.01 & -2.78 & -3.36 \\
NGC 2841 & 41.62 & 42.15 & 41.35 & 1.86 & 6.28 & 0.98 & -9.90 & -9.37 & -10.18 & -2.14 & -0.91 & -1.72 \\
NGC 2976 & 40.33 & 41.34 & 40.66 & 0.09 & 0.96 & 0.20 & -9.99 & -8.98 & -9.66 & -1.98 & -0.30 & -0.98 \\
NGC 3031 & 40.04 & 41.10 & 40.27 & 0.05 & 0.56 & 0.08 & -11.70 & -10.64 & -11.48 & -3.66 & -1.90 & -2.59 \\
NGC 3190 & 40.37 & 40.87 & 40.71 & 0.10 & 0.33 & 0.23 & -11.01 & -10.52 & -10.68 & -3.28 & -1.91 & -2.07 \\
NGC 3621 & 39.84 & 40.31 & 39.55 & 0.03 & 0.09 & 0.02 & -10.94 & -10.47 & -11.24 & -3.23 & -2.06 & -2.83 \\
NGC 3938 & 40.70 & 40.85 & 40.30 & 0.22 & 0.31 & 0.09 & -10.11 & -9.97 & -10.51 & -2.87 & -2.64 & -3.18 \\
NGC 4254 & 41.08 & 41.37 & 40.59 & 0.52 & 1.03 & 0.17 & -9.89 & -9.60 & -10.38 & -2.37 & -1.95 & -2.69 \\
NGC 4450 & 40.72 & 42.15 & 41.72 & 0.23 & 6.16 & 2.29 & -11.03 & -9.61 & -10.04 & -2.82 & -1.13 & -1.52 \\
NGC 4594 & 40.30 & 41.95 & 41.62 & 0.09 & 3.89 & 1.82 & -12.03 & -10.38 & -10.71 & -2.68 & 0.55 & 0.22 \\
NGC 4736 & 40.15 & 41.80 & 41.04 & 0.06 & 2.77 & 0.48 & -11.42 & -9.77 & -10.53 & -2.66 & -0.64 & -1.40 \\
NGC 4826 & 39.98 & 40.84 & 40.35 & 0.04 & 0.30 & 0.10 & -11.36 & -10.51 & -10.99 & -3.42 & -2.18 & -2.65 \\
NGC 5055 & 40.24 & 41.23 & 40.47 & 0.08 & 0.74 & 0.13 & -11.61 & -10.62 & -11.37 & -3.52 & -1.75 & -2.48 \\
NGC 5474 & 39.92 & 40.16 & 39.35 & 0.04 & 0.06 & 0.01 & -10.50 & -10.26 & -11.06 & -3.05 & -2.74 & -3.28 \\
NGC 7331 & 41.15 & 41.68 & 40.91 & 0.62 & 2.09 & 0.36 & -10.77 & -10.24 & -11.01 & -3.01 & -1.66 & -2.36 \\
NGC 7793 & 40.36 & 41.24 & 40.47 & 0.10 & 0.77 & 0.13 & -10.47 & -9.58 & -10.36 & -2.51 & -1.33 & -2.06 \\
NGC 1377 & 40.05 & 40.98 & 40.77 & 0.05 & 0.42 & 0.26 & -10.59 & -9.66 & -9.86 & -2.62 & -1.20 & -1.40 \\
NGC 1482 & 40.76 & 40.86 & 40.34 & 0.25 & 0.32 & 0.10 & -10.59 & -10.49 & -11.00 & -3.46 & -2.91 & -3.24 \\
NGC 1705 & 39.73 & 40.00 & 39.56 & 0.02 & 0.04 & 0.02 & -9.81 & -9.55 & -9.98 & -2.32 & -2.07 & -2.38 \\
NGC 3773 & 39.68 & 39.89 & 39.55 & 0.02 & 0.03 & 0.02 & -9.99 & -9.78 & -10.12 & -2.59 & -2.15 & -2.49 \\
NGC 5866 & 40.40 & 41.39 & 40.86 & 0.11 & 1.08 & 0.32 & -10.97 & -9.99 & -10.52 & -2.62 & -1.10 & -1.56 \\
NGC 5194 & 40.65 & 41.84 & 41.35 & 0.19 & 3.01 & 0.99 & -11.24 & -10.05 & -10.54 & -3.17 & -1.37 & -1.81 \\
\hline
\end{tabular}
\end{table}
Table 4: The star-formation rates and related parameters (calculated using Eqs. 2-4 and Eqs. 9-12), with their uncertainties. The columns represent: (1) - galaxy name; (2)-(4) - the observed and corrected H\(\alpha\) luminosities, and the standard deviation of \(L(H\alpha)^{corr}\) (in decimal logarithm scale); (5)-(7) - the observed and corrected star-formation rates, and the standard deviation of \(SFR^{corr}\); (8)-(10) - the observed and corrected specific star-formation rates (in log scale), and the corresponding standard deviation of \(sSFR^{corr}\); (11)-(13) - the observed and corrected SFR surface densities (in log scale), and the standard deviation of \(\Sigma_{SFR}^{corr}\).
Figure 1: _Left panel_: Star-formation main sequence, \(SFR-M_{*}\), plotted in log scale. The observed SFR are shown with black triangles, while the corrected rates are represented with red stars. The red solid line is a linear regression fit of the corrected values. The error bars represent the standard deviations. _Right panel_: Similar plot for the specific star-formation rate, \(sSFR\).
lation, with a slope of \(-0.56\) for their HI-normal sample. We do note here that the strength of the correlation for this relation is weaker than for the SFMS, with the derived correlation coefficient being \(r_{sSFR,M_{*}}=-0.48\) (sSFR and \(M_{*}\) are anticorrelated), as compared with the 0.77 value for the first relation. In a similar study, Gavazzi et al. (2013) found a value of -0.46 for this correlation.
In Fig. 2 we show the SFR and sSFR of our sample, this time plotted as a function of the stellar mass surface density. In this case, the trend in the corrected relations is maintained; however, it is much shallower, with the corresponding slopes being \(\alpha=0.46\pm 0.14\) and \(-0.05\pm 0.12\), respectively, and with a higher degree of scatter, of 0.51 and 0.45 dex. We can thus notice that there is practically no correlation between sSFR and \(\mu_{*}\), while the \(SFR-\mu_{*}\) correlation is weaker than the SFMS, with a correlation coefficient of 0.57. We thus conclude that the SFMS is the more fundamental relation, rather than the \(SFR-\mu_{*}\) one.
Another relation that we present here is the one between the dust mass, \(M_{dust}\), and the SFR, displayed in Fig. 3. One can notice from the plot the increasing trend, with more dust being found in galaxies with higher SFR. Considering the already tight relations \(SFR-M_{*}\) and \(M_{dust}-M_{*}\) (Grootes et al., 2013, De Vis et al., 2017, Pastrav, 2020, van der Giessen et al., 2022), and the similar increasing behaviour observed, this relation can be thought of as a consequence of the existence of those relations. A tight correlation between these quantities, with a similar increasing trend, was already observed by da Cunha et al. (2010) when analysing a large sample of SDSS galaxies. Hunt et al. (2019) have also explored this relation, finding a tight correlation with a scatter of 0.4-0.5 dex. We obtained for the corrected relation a slope of \(\alpha=0.49\pm 0.18\), with a scatter of \(\sigma=0.53\) dex. The Pearson correlation coefficient calculated for the corrected relation confirms that this is a considerably tight one, with \(r_{M_{dust},SFR}=0.78\). One can note here the higher degree of scatter in this relation than in the SFMS relation. This could be due mostly to the few outliers that were not excluded from the calculation of the coefficients, but also to the reduced size of our sample (better statistics from a larger sample would most likely reduce the scatter). The corresponding parameters of the observed relation are considerably higher, with a slope of \(\alpha=0.72\pm 0.28\) and a derived scatter of 0.64 dex.
Having analysed the SFMS, \(M_{dust}-SFR\) and, in Pastrav (2020), the \(M_{dust}-M_{*}\) relation, we further investigate here whether there exists a correlation between the dust-to-stellar mass ratio, \(M_{dust}/M_{*}\), and the SFR, as one might expect. In addition, we also plot this ratio as a function of sSFR. These plots are displayed in Fig. 4. One can immediately notice that in the case of the observed ratios there is a flat trend with SFR (as in the case of \(M_{dust}/M_{*}-M_{*}\), see Pastrav, 2020) and sSFR, and thus no correlation. Looking at the corrected dust-to-stellar mass ratios as a function of the corrected SFR, a slightly decreasing trend (slope \(\alpha=-0.37\pm 0.16\)) of the dust-to-stellar mass ratios can be seen. This can be explained by the fact that more massive (and thus older) galaxies with higher star-formation rates have less dust available, as part of it has been destroyed by supernovae shocks/winds or other processes in the ISM. At the same time, the gas fraction decreases, less dust is produced, and the newly formed dust can no longer compensate for the destroyed dust mass. The trend seen is expected considering the already observed decreasing behaviour in the \(M_{dust}/M_{*}-M_{*}\) relation (Cortese et al. (2012), Grossi et al. (2015), Pastrav (2020), Casasola et al. (2020)) and the SFMS. This correlation is not particularly strong or tight, as we find a low correlation coefficient, \(r_{M_{dust}/M_{*},SFR}=-0.43\), with \(\sigma=0.48\). In the second plot there is no obvious increasing or decreasing trend with sSFR, taking into consideration the associated large uncertainties; the dependence of \(M_{dust}/M_{*}\) on sSFR thus seems to be weak. This result
Figure 3: The dust mass, \(M_{dust}\), as a function of galaxy SFR. The symbols and colors are the same as in previous figures. The solid red line is a linear regression fit of the corrected values.
Figure 2: _Left panel_: Star-formation rate versus stellar mass surface density, \(SFR-\mu_{*}\), plotted in log scale. The observed SFR are shown with black triangles, while the corrected rates are represented with red stars. The red solid line is a linear regression fit of the corrected values. The error bars represent the standard deviations. _Right panel_: Similar plot for the specific star-formation rate, \(sSFR\).
has also been previously found in Hunt et al. (2019) for the KINGFISH galaxies, while Casasola et al. (2022) observed an apparently weak increasing behaviour for the resolved version of this relation. The almost flat and inconclusive trend of \(M_{dust}/M_{*}\) vs sSFR is in contrast with the results found by Remy-Ruyer et al. (2015) (see their Fig. 11), Skibba et al. (2011) and De Vis et al. (2017), which show an increase in the dust-to-stellar mass ratio with sSFR for the KINGFISH sample. The same behaviour was observed by da Cunha et al. (2010) from the analysis of a larger sample of low-redshift SDSS galaxies.
One of the most important relations, derived from the SFMS relation shown in the left panel of Fig. 1, is the one between the star-formation rate surface density, \(\Sigma_{SFR}\), and the stellar mass surface density, \(\mu_{*}\) (\(\Sigma_{*}\) in other notations), previously named the resolved star-formation main sequence relation, rSFMS. This tight correlation was observed before in studies by Sanchez et al. (2013), Cano-Diaz et al. (2016), Gonzalez-Delgado et al. (2016), Hsieh et al. (2017), Medling et al. (2018), Erroz-Ferrer et al. (2019), Lin et al. (2019), Ellison et al. (2021) and Casasola et al. (2022). In this study we have not derived \(\Sigma_{SFR}\) and \(\mu_{*}\) for each relevant spaxel of the galaxy images as in the aforementioned studies, but according to the formulas in Eqs. 4 and 12; this gives us a picture of the relation mostly at kpc scales, and only for some galaxies at sub-kpc scales. Nevertheless, a comparison of the characteristic parameters of this relation with those derived in previous works is still justified. We found the same linearly increasing trend (in log scale) as in the previously mentioned studies. Following a linear regression procedure, we found for the corrected relation a slope of \(\alpha=1.03\pm 0.18\) and a zero-point \(\beta=-10.22\pm 1.5\), with a rather large scatter of 0.44 dex derived including all the galaxies - larger than in the previously mentioned studies (e.g. 0.2-0.4 dex). The slope value is within the range of values found in these studies, e.g. 0.68 or 1.37 in Ellison et al. (2021), 0.71 or 1.00 in Hsieh et al. (2017), 0.88 in Casasola et al. (2022), 1.19 in Lin et al. (2019), depending on the linear regression method used. The larger scatter in our study could be due to our small sample, and thus inferior statistics, but the sample selection, the morphology and the variation of the global relation between galaxies can also significantly contribute to this, as found by Ellison et al. (2021). The characteristics of the relation also depend on the sample selection and the fitting method, which may influence the final best-fit parameters, as shown in Hsieh et al. (2017) and Ellison et al. (2021). The Pearson correlation coefficient that we calculated, \(r_{\Sigma_{SFR},\mu_{*}}=0.79\), underlines the strength of this correlation. This value is considerably higher than those found by Lin et al. (2019) (0.64) and Ellison et al. (2021) (0.57), for example, which may be due to the fact that our best-fit relation is not as "local" as the ones derived in these studies. Casasola et al. (2022), however, found an even higher correlation coefficient of 0.85 for their comparably small sample of nearby spiral galaxies, a part of them being present in our sample as well.
In Fig. 6 we check whether there exists a relation between the dust central face-on optical depth and the star-formation rate, the stellar mass, the specific star-formation rate, or the star-formation rate surface density, \(\Sigma_{SFR}\), given the already observed \(M_{dust}-SFR\) relation in Fig. 3 and the direct dependence of \(\tau_{H\alpha}^{f}\) on \(M_{dust}\) given in Eq. 5. One can see an increase in dust opacity for galaxies with higher star-formation activity, and thus higher dust production and higher attenuation. This result was also found in van der Giessen et al. (2022), but only for their low-redshift SDSS galaxies. Thus, due to the aforementioned dependence, there is a correlation between the dust opacities and the star-formation rates, but weaker than in the case of the \(M_{dust}-SFR\) relation, with a correlation coefficient \(r_{\tau_{H\alpha}^{f},SFR}=0.64\). The scatter and the uncertainties in the vertical direction are larger than for the \(M_{dust}-SFR\) relation, as the dust opacity is a quantity more dif
Figure 4: _Left panel_: The dust-to-stellar mass ratios, \(M_{dust}/M_{*}\), plotted against SFR. _Right panel_: The same ratios, plotted against the specific star-formation rates, sSFR. The symbols and color legend are the same as in previous figures.
Figure 5: The resolved star-formation main sequence relation, rSFMS. The star-formation rate surface densities are derived according to Eqs. 4 and 12. The red dotted line represents the best fit for the rSFMS, obtained through a linear regression procedure. The symbols and color legend are the same as in previous figures.
ficult to derive with great precision. Given all these facts, we conclude that \(M_{dust}-SFR\) is the more fundamental relation, rather than this one.
In the upper right panel of Fig. 6, we see that \(\tau_{H\alpha}^{f}\) increases for galaxies with higher stellar masses, as also recently observed by van der Giessen et al. (2022) for a much larger sample of galaxies at low and intermediate redshifts. This relation is a result of the positive \(M_{dust}-M_{*}\) correlation, observed by Grootes et al. (2013), De Vis et al. (2017) and Pastrav (2020). From the bottom left plot, we should note that no correlation between the dust opacity and the corrected specific star-formation rate was found (\(r_{\tau_{H\alpha}^{f},sSFR}=-0.09\)), in contrast with the decreasing trend found in van der Giessen et al. (2022), again only for their low-redshift SDSS and GAMA galaxies. The anticorrelation between dust optical depth and sSFR is observed for our measured relation only, which may lead us to the conclusion that dust and inclination effects or other biases have not been properly accounted for in the mentioned study. In the bottom right panel, we can also see an increase in dust opacity with the star-formation rate surface density - a tracer of the molecular gas surface density (Leroy et al., 2008, Schruba et al., 2011), which fuels the star-formation. This slightly upward trend was also found in van der Giessen et al. (2022), but only for their low-redshift SDSS and GAMA samples, not for the high-redshift ones. It is important to note that this increasing trend is only seen in the corrected relation, with a correlation coefficient \(r_{\tau_{H\alpha}^{f},\Sigma_{SFR}}=0.79\), which shows the strength of this correlation, at least for low-redshift galaxies. The existence of this last relation reveals, at least for low-redshift galaxies, the tight connection between the star-formation fuel in the young stellar disc and the dust mass distribution (related directly to \(\tau_{H\alpha}^{f}\), as in Eq. 5) in the dust disc.
In the next figure, Fig. 7, we show some of the dust scaling relations important for ISM studies, which can provide evidence about the role of dust in the star-formation cycle and constrain chemical evolution models. Thus, in the upper row of plots, the variation of the dust-to-HI (atomic hydrogen) mass ratio with \(M_{*}\) and \(\mu_{*}\) is presented, \(M_{HI}\) being the neutral hydrogen gas mass, taken from Remy-Ruyer et al. (2015) and Grossi et al. (2015). In the bottom row, the \(M_{dust}\) variation with the HI mass is shown, together with the \(M_{dust}/M_{*}\) ratio vs. the gas-to-star ratio, \(M_{HI}/M_{*}\) (bottom right). The increasing trends in \(M_{dust}/M_{HI}\) vs \(M_{*}\) and \(M_{dust}/M_{*}\) vs \(M_{HI}/M_{*}\) are recovered after applying the corrections, and we find the average dust-to-gas ratio of our sample to be \(-2.19\pm 0.12\) (or 0.63%), consistent with the value of -2.1 found by Cortese et al. (2012) for HI-normal galaxies. This numerical ratio of 0.63% is in line with the \(\simeq 1\%\) estimation of the dust mass fraction in the ISM of galaxies. The increasing trend of \(M_{dust}/M_{HI}\) towards more massive galaxies has also been noticed previously in Cortese et al. (2012) (although shallower than in this study), Grossi et al. (2015) and De Vis et al. (2017), and can be understood considering the relation between the stellar mass and the gas metallicity (Tremonti et al., 2004). The scatter of the first relation is rather high - \(\sigma=0.53\) dex for the corrected one (compared with the 0.37 dex scatter found by Cortese et al. 2012, for example) - while the corresponding correlation coefficient, \(r=0.32\), is consistent with the value of \(r=0.31\) found in Cortese et al. (2012), but slightly lower than the value derived by De Vis et al. (2017) (0.47). Practically no correlation is observed
Figure 6: The dust optical depth as a function of SFR (_upper left panel_), \(M_{*}\) (_upper right panel_), sSFR (_bottom left panel_) and star-formation rate surface density, \(\Sigma_{SFR}\) (_bottom right panel_). The symbols and color legend are the same as in previous figures. The error bars represent the standard deviations.
for the second investigated corrected relation, \(M_{dust}/M_{HI}\) vs the stellar mass surface density, \(\mu_{*}\), with \(r_{M_{dust}/M_{HI},\mu_{*}}=0.06\), which is in line with what was found by Cortese et al. (2012), who derived a value of 0.10 for this coefficient.
From the bottom left plot in Fig. 7, we can see the strong correlation between the dust and HI masses in the disc, for which we derive a correlation coefficient \(r_{M_{dust},M_{HI}}=0.94\) for the corrected relation, with a rather high degree of scatter, \(\sigma=0.54\) dex. Our derived coefficient is considerably higher than the ones found by De Vis et al. (2017) (0.74) or Casasola et al. (2020) (0.80, for the reversed relation, \(M_{HI}\) vs. \(M_{dust}\)), who analysed larger samples of low-redshift spiral galaxies than ours. The slope found for this correlation, \(\alpha=0.82\pm 0.18\), is consistent with those found in the previously mentioned studies - Casasola et al. (2020) found \(0.85\pm 0.03\), but for the reversed relation. The tight correlation observed here suggests that both the dust and the HI mass follow a similar radial distribution in the discs of galaxies. This is generally the case for the dust, which is distributed in an exponential disc, while the HI distribution has a more complex form (Casasola et al. 2017). A weaker correlation is found for the \(M_{dust}/M_{*}\) vs \(M_{HI}/M_{*}\) relation, with \(r=0.36\), the increasing trend being also observed by Cortese et al. (2012), De Vis et al. (2017) and Casasola et al. (2020), albeit more pronounced than in our plot (we derived a slope \(\alpha=0.25\pm 0.08\) for the corrected relation). For instance, De Vis et al. (2017) derived a slope of 0.47 and found a very high correlation coefficient of 0.87, while Casasola et al. (2020) also found a much stronger correlation than in this study, with \(r=0.80\), for a much larger sample of late-type spiral galaxies, and a strong increasing trend with a slope of \(1.21\pm 0.05\), but for the reversed relation, \(M_{HI}/M_{*}-M_{dust}/M_{HI}\). As the \(M_{HI}/M_{*}\) ratio is considered an indicator of the evolutionary stage of a galaxy, and keeping in mind the \(M_{dust}/M_{*}-SFR\) variation seen earlier (and the explanation given for it), the existence of this last correlation in Fig. 7 means that \(M_{dust}/M_{*}\) is also a measure of the evolutionary state of a galaxy. The scatter of this last relation, \(\sigma=0.32\) dex, is significantly more reduced than in the case of the previous two relations.
While in other studies (such as the ones just mentioned in this paragraph, and others) these dust scaling relations are studied as a function of environment (e.g. galaxies in clusters or groups vs field galaxies, HI-normal vs HI-deficient galaxies), our sample is too small, and formed of galaxies with ISM environments characteristic of the nearby universe, to make such comparisons. Therefore we do not comment on these issues here.
To study the spatial distribution of star-formation in the young stellar disc and compare it with the extent of the stellar continuum optical emission, we plot in the upper panel of Fig. 8 the ratio of the disc scalelengths in the B band and at the H\(\alpha\) line, both observed and intrinsic (corrected), as a function of \(M_{*}\). The observed and intrinsic (dust and inclination corrected) B band scalelengths were already determined in Pastrav (2020) and Pastrav (2021). For the observed scalelength ratio, one would expect the measured scalelength of the stellar continuum optical emission to be larger than the H\(\alpha\) one. This is because dust effects, which tend to artificially flatten the central parts of the disc surface brightness profiles (Pastrav et al. 2013a) and thus increase the measured disc scale
Figure 7: Dust scaling relations. The four panels represent: the dust-to-HI (atomic hydrogen) mass ratio variation with galaxy stellar mass (_upper left_) and stellar mass surface density (_upper right_); \(M_{dust}\) vs. HI mass, \(M_{HI}\) (_bottom left_); the \(M_{dust}/M_{*}\) ratio vs. the HI-to-stellar mass ratio, \(M_{HI}/M_{*}\) (_bottom right_).
lengths due to the more concentrated dust distribution towards the centre of the disc, are stronger at shorter wavelengths, as shown in Pastrav et al. (2013a). Therefore, one would measure a larger disc scalelength in the B band than for the H\(\alpha\) disc of the same galaxy. One can see that this is the case for about 2/3 of our galaxies, with a few values around 1.00 and a few outliers. On the other hand, when comparing the ratio of the intrinsic scalelengths, one needs to consider that the corrections due to dust effects are larger for the B band (as found in Pastrav et al. (2013a)). Thus, we would expect to obtain a scalelength ratio of 1.00 on average (within errors) after applying the corrections, if there were no inside-out growth of the galaxy disc through star-formation. However, with the exception of a few outliers, for most of the galaxies this ratio is sub-unitary, meaning that, on average, the distribution of star-formation in the disc is more extended than that of the optical emission. It is also apparent that more massive galaxies have a more extended H\(\alpha\) disc relative to the B band optical one compared with low-mass galaxies, although this trend is rather weak. On average, we find for the corrected inverse ratio, \(R_{s,d}(H\alpha)/R_{s,d}(B)\), a value of 1.10. This result is slightly lower than, but consistent with, the \(1.18\pm 0.08\) value found by Matharu et al. (2022) for their larger \(z\sim 0.5\) sample of galaxies, considering also the fact that our sample galaxies are at much lower redshifts. Other studies of local Universe galaxies, such as those of James et al. (2009) and Fossati et al. (2013), have found the H\(\alpha\) disc to have the same spatial extent as the optical stellar disc. On the other hand, other recent studies of higher-redshift samples of star-forming galaxies, such as those of Nelson et al. (2016) (\(z\sim 1\) sample, taken from the 3D-HST survey) and Wilman et al. (2020) (\(z\sim 1.7\) sample from KMOS\({}^{3D}\), the K-band Multi-Object Spectrograph survey), have revealed the star-formation disc to be slightly more extended than the stellar continuum one. This is in agreement with our result and the one of Matharu et al. (2022). However, a larger sample in our study would bring more clarity to this issue, at least for local Universe galaxies.
In the bottom plot of Fig. 8, the ratio of stellar mass surface densities in the B band and the H\(\alpha\) line is plotted (in log scale), again as a function of stellar mass. A slightly decreasing trend with stellar mass can be seen, more massive galaxies having a more compact stellar emission surface density than the star-formation one, while for lower mass galaxies this difference is not so significant. This downward trend towards more massive galaxies has also been observed recently by Matharu et al. (2022) for their \(z\sim 0.5\) and \(z\sim 1\) galaxies. Overall, we find an average \(\mu_{*}(H\alpha)/\mu_{*}(B)\) of 0.77 for our local sample, a value in very good agreement with the result of \(0.81\pm 0.15\) found by Matharu et al. (2022) for their \(z\sim 0.5\) sample.
## 5 Discussion
In this section, we return to a few issues observed while analysing the main results, examine the potential sources of systematic errors, and discuss the limitations of our method.
### Potential sources of systematic errors
A potential source of uncertainty is the calibration / conversion coefficient used to calculate the SFR, as in Eq. 3, which then propagates into all the other star-formation related quantities and the characteristics of the relevant scaling relations. Various versions of this coefficient exist in the literature, depending on the IMF and stellar evolution models considered, with values in the range \((4.4-7.9)\times 10^{-42}\), as in Kennicutt (1998), Calzetti et al. (2000), Calzetti et al. (2007), Kennicutt et al. (2009), Pessa et al. (2022), Gimenez-Arteaga et al. (2022), etc. Thus, a variation of up to 80% exists between the various calibration coefficients, producing a significant uncertainty already at this point. Star-formation rates can also be underestimated when derived from H\(\alpha\) luminosities for galaxies with \(L^{obs}(H\alpha)\leq 2.5\times 10^{39}\) (low SFR regions with \(SFR\leq 0.01M_{\sun}/yr\)), as pointed out by Lee et al. (2009), even if the SFRs are corrected for dust attenuation effects. However, this is not the case for any of our sample galaxies.
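To make this spread concrete (the following arithmetic is our own illustration, assuming luminosities in erg s\(^{-1}\), as is standard for these calibrations): for a galaxy with \(L(H\alpha)=10^{41}\), applying the extremes of the quoted range of coefficients to Eq. 3 gives

\[SFR=(4.4-7.9)\times 10^{-42}\times 10^{41}\simeq 0.44-0.79\;M_{\sun}/yr,\]

i.e. a factor of \(\sim\)1.8 between the lowest and highest calibrations, before dust attenuation effects are even considered.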
Another important source of systematic errors affecting SFRs determined from \(L^{obs}(H\alpha)\) is the dust attenuation prescription used. As mentioned in Sec. 3.5, the Balmer decrements or ratios between other hydrogen recombination lines (Paschen - Pa\(\alpha\)/H\(\alpha\), Pa\(\beta\)/H\(\alpha\) or Brackett lines) have been widely used in many previous studies, with assumptions of a dust attenuation curve and/or a foreground dust screen approximation. The Balmer decrement method, however, works mostly for normal galaxies, not for starbursts or dusty galaxies, and shows large variations on small scales (Kennicutt & Evans). Through the method proposed in this paper we circumvent these assumptions, but we do introduce another potential source of uncertainty through the tailored Grootes et al. (2013) \(\tau-\mu_{*}\) correlation and the relation between \(\tau_{H\alpha}\) and \(M_{dust}\) in Eq. 5, both relying on the Popescu et al. (2011) model, which assumes a fixed dust-star geometry and dust disc scale-height. Moreover, a certain degree of uncertainty can arise from the choice of the dust model, its characteristics being encapsulated in the \(K(H\alpha)\) constant, but this cannot be avoided. Nevertheless, the relations in
Figure 8: The ratio of the intrinsic disc scale-lengths seen in optical B band and in \(H\alpha\) line as a function of stellar mass (_upper plot_). The stellar mass surface density ratio vs \(M_{*}\) (_bottom plot_).
Eqs. 5 and 7 have been calibrated on a representative sample of low-redshift spiral galaxies and thus, our values for the attenuation of the emission line have been self-consistently derived with this method. Moreover, the sample analysed here is composed mostly of normal spiral and lenticular galaxies, not galaxies with a more peculiar geometry or starburst galaxies. Thus, we consider that the introduced errors are less significant than the ones introduced by the previously mentioned approaches.
A third source of uncertainties, this time only for the characteristics of the analysed scaling relations, can come from the choice of the regression routine used, which may significantly affect the slope and the zero-point of a relation and, further, its \(\sigma\) parameter. A well-chosen regression routine can produce a better representation of the fitted data. Observational uncertainties can also introduce a certain degree of intrinsic scatter in the parameters associated with most scaling relations, an effect that is difficult to assess, as pointed out by Stone et al. (2021), and not taken into account in most studies. For our small sample, we consider that the ordinary least-squares (OLS) routine used here (where necessary) outputs a best-fit that reproduces the trends seen in the data with a high degree of accuracy. However, we do recognise that more complex routines, such as orthogonal distance regression (ODR), linear bisector regression algorithms (BCES, Akritas & Bershady 1996) or others, capable of taking into account covariant uncertainties, can and should be used for larger scale studies, as needed.
In the previous section, we compared our results for the main characteristics of the scaling relations and the trends observed with results from other relevant studies. While most of the results were consistent within errors with the compared studies, there were also some noticeable differences pertaining to the lack of a correlation in some cases, or inconclusive / different trends for other relations. We also compared our results with those found in studies done on samples of low redshift local galaxies, similar to our sample, or at the closest redshift possible, to make the comparison more meaningful. As mentioned earlier, inconclusive trends (e.g. \(M_{dust}/M_{\star}-sSFR\)) can be attributed to the low statistics in this work. Results in apparent opposition with other studies (e.g. \(\tau_{H\alpha}-sSFR\), \(\tau_{H\alpha}-\Sigma_{SFR}\)) or in agreement only with some (\(M_{dust}/M_{\star}-sSFR\)) may be explained through an inadequate treatment of the biases introduced by dust and inclination effects on the measured parameters in the respective studies, or other systematic biases.
### Limitations of the method and range of applicability
The limitations of the proposed method (and its succession of steps) and its range of applicability are tightly connected to the range of applicability of the tailored Eq. 7 & Eq. 5, and also of the numerical corrections for dust and inclination (projection) effects used here, from Pastrav et al. (2013a,b). The latter are intended to be applied to normal low to intermediate redshift spiral, elliptical and lenticular galaxies, but not to galaxies with more irregular geometries, dwarf, disturbed or peculiar shape galaxies, as they were derived using simulated images produced by radiative transfer calculations, with a typical fixed star-dust geometry, considered in Popescu et al. (2011). Eqs. 7 and 5 also depend on the range of applicability of the large-scale geometry of the exponential dust discs as calibrated in the Popescu et al. (2011) model, namely the range of galaxy types and morphologies (the same as for the dust and inclination effects numerical corrections) and stellar mass surface densities, \(8.0\leq log(\mu_{*})\leq 11.0\) (therefore intermediate mass galaxies), to which the method is applied.
## 6 Summary and conclusions
In this paper we have presented a detailed analysis of dust/ISM and star-formation scaling relations of a small representative sample of nearby spiral and lenticular galaxies taken from the SINGS/KINGFISH survey. This was done with the purpose of: i) investigating the changes induced by dust and inclination effects in the characteristics of these relations (slope, zero-point, scatter and correlation coefficients) and in the SFR values; ii) understanding which relations are fundamental and which are derived, or are a consequence of others; iii) verifying which of the derived specific parameters are actually correlated and why (besides the already established relations) and which relations are tighter (reduced degree of scatter) than others.
For this purpose, the \(H\alpha\) optical emission line flux was chosen as a SFR tracer and the H\(\alpha\) line images were used and analysed in order to derive the integrated fluxes and luminosities, needed further for the determination of the star-formation rate of each galaxy. The succession of steps as in Pastrav (2020, 2021) was followed for the photometry, structural analysis and the calculation of the corrected relevant parameters involved in the analysed scaling relations. We used again the empirical relation found by Grootes et al. (2013), slightly modified for the H\(\alpha\) line wavelength, to determine the central face-on dust opacity \(\tau_{H\alpha}\), needed when applying the corrections for dust effects. The method proposed here to determine the corrected H\(\alpha\) luminosities and star-formation rates circumvents the need to assume a dust attenuation curve and to use Balmer decrements or other hydrogen recombination lines to estimate the dust attenuation, as in many other similar studies in the literature. It thus eliminates the uncertainties produced by dust attenuation in the measurements of SFR and the other relevant parameters. We thus derived the SFR using the unattenuated H\(\alpha\) luminosities, to obtain more accurate and instantaneous star-formation rate values than would be derived through other methods. For most of the corrected relations we investigated the degree of correlation between the parameters, calculated the scatter of these relations and analysed the implications of the main results for star-formation and galaxy evolution. Our main results are:
* the corrected SFMS and \(sSFR-M_{\star}\) trends and characteristic parameters obtained are consistent within errors with those found in similar studies, the \(sSFR-M_{\star}\) correlation being weaker than the SFMS one;
* no correlation between corrected SFR and \(\mu_{*}\) is observed, while the \(SFR-\mu_{*}\) slope is shallower than the SFMS one and the relation less tight;
* the \(M_{dust}-SFR\) correlation is confirmed, with a higher degree of scatter; we consider that this relation exists as a consequence of the tighter \(M_{dust}-M_{*}\) and \(SFR-M_{\star}\) relations;
* an expected apparent downward trend of dust-to-stellar mass ratios with SFR was observed, while no conclusive evolution between \(M_{dust}/M_{\star}\) and sSFR was found;
* the resolved SFMS is recovered with an almost linear slope of \(1.03\pm 0.18\) (in log scale), within the range of those obtained in more detailed studies, with a high correlation coefficient (0.79, comparable with the global SFMS one), but with a larger scatter;
* the H\(\alpha\) face-on optical depth is found to increase with SFR and \(M_{\star}\), a consequence of the \(M_{dust}-SFR\) relation and Eq. 5, but also with \(\Sigma_{SFR}\), in agreement with other works; no dependence of \(\tau_{H\alpha}\) on corrected sSFR was found;
* we confirm the increase of the dust-to-gas (HI) ratio towards more massive galaxies, but find no such trend with \(\mu_{*}\); the characteristic parameters are consistent within errors with those found by other authors;
* the average \(M_{dust}/M_{HI}\) of 0.63% for our sample is consistent with the \(\simeq 1\%\) dust mass fraction in the ISM of normal galaxies; the strong correlation between \(M_{dust}\) and \(M_{HI}\) is also confirmed even for this small sample;
* we compared the B band optical disc scale-length with the H\(\alpha\) line (star-forming) disc one and found that on average, the latter is more extended than the stellar continuum optical one (with a ratio of 1.10), this extent being larger for more massive galaxies; similarly, more massive galaxies have a more compact stellar emission surface density than the star-formation one, this behaviour being less apparent for lower mass galaxies; we find an average \(\mu_{*}(H\alpha)/\mu_{*}(B)\) of 0.77 for our local sample.
We compared the main results and the trends seen for the analysed scaling relations and found most of these to be consistent with other relevant results in the literature. While this study has been done on a small but representative sample of galaxies in the local Universe, we advocate that this method can be used in future larger scale studies of star-formation and ISM evolution for low to mid-redshift galaxies. This work underlines the importance of having accurate, unbiased scaling relations in models of ISM evolution and star-formation.
## Acknowledgements
This research made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This research was supported by Romanian Ministry of Research, Innovation and Digitalization under Romanian National Core Programs LAPLAS VI/2019 and LAPLAS VII - contract no. 30N/2023.
## Data Availability
The data underlying this article are available in the article and in its online supplementary material.
|
2307.10484 | Inductive diagrams for causal reasoning | The Lamport diagram is a pervasive and intuitive tool for informal reasoning
about "happens-before" relationships in a concurrent system. However,
traditional axiomatic formalizations of Lamport diagrams can be painful to work
with in a mechanized setting like Agda. We propose an alternative, inductive
formalization -- the causal separation diagram (CSD) -- that takes inspiration
from string diagrams and concurrent separation logic, but enjoys a graphical
syntax similar to Lamport diagrams. Critically, CSDs are based on the idea that
causal relationships between events are witnessed by the paths that information
follows between them. To that end, we model happens-before as a dependent type
of paths between events.
The inductive formulation of CSDs enables their interpretation into a variety
of semantic domains. We demonstrate the interpretability of CSDs with a case
study on properties of logical clocks, widely-used mechanisms for reifying
causal relationships as data. We carry out this study by implementing a series
of interpreters for CSDs, culminating in a generic proof of Lamport's clock
condition that is parametric in a choice of clock. We instantiate this proof on
Lamport's scalar clock, on Mattern's vector clock, and on the matrix clocks of
Raynal et al. and of Wuu and Bernstein, yielding verified implementations of
each. The CSD formalism and our case study are mechanized in the Agda proof
assistant. | Jonathan Castello, Patrick Redmond, Lindsey Kuper | 2023-07-19T22:43:21Z | http://arxiv.org/abs/2307.10484v2 | # Inductive diagrams for causal reasoning
###### Abstract.
The Lamport diagram is a pervasive and intuitive tool for informal reasoning about causality in a concurrent system. However, traditional axiomatic formalizations of Lamport diagrams can be painful to work with in a mechanized setting like Agda, whereas inductively-defined data would enjoy structural induction and automatic normalization. We propose an alternative, inductive formalization -- the _causal separation diagram_ (CSD) -- that takes inspiration from string diagrams and concurrent separation logic. CSDs enjoy a graphical syntax similar to Lamport diagrams, and can be given compositional semantics in a variety of domains. We demonstrate the utility of CSDs by applying them to _logical clocks_ -- widely-used mechanisms for reifying causal relationships as data -- yielding a generic proof of Lamport's _clock condition_ that is parametric in a choice of clock. We instantiate this proof on Lamport's scalar clock, on Mattern's vector clock, and on the matrix clocks of Raynal et al. and of Wuu and Bernstein, yielding verified implementations of each. Our results and general framework are mechanized in the Agda proof assistant.
## 1. Introduction
Concurrent systems are famously difficult to reason about. Since concurrent actions can interleave in an arbitrary order, we cannot reason about just one sequence of actions; we must contend with a combinatorial explosion of potential linearizations. Bringing (partial) order to this chaos is _causality_, the principle that an effect cannot happen before its cause. In both shared-memory and message-passing systems, causality undergirds every protocol for strengthening the communication model beyond asynchrony: we commit to performing certain actions only once we have observed others, so that observers of the effects of our action will understand that we have, indeed, observed the effects of the first.
A ubiquitous device for visualizing causal relationships over space and time is the _Lamport diagram_.1 Figure 1 shows diverse examples of Lamport diagrams spanning six decades of computing literature. In a Lamport diagram, agents (or "processes") evolve over time along straight throughlines, and messages travel laterally between them. Importantly, causal relationships are reduced to simple geometric paths: two points in space and time are causally ordered if, and only if, they are connected by a forward path along the diagram.
Footnote 1: Lamport diagrams go by many other names, including time diagrams, spacetime diagrams, sequence diagrams, and more. While Lamport (1978)’s analysis of causality in the context of distributed systems was an early use of such diagrams, it appears to not have been _the_ first in the published literature; the oldest we have found is via Le Lann (1977).
As illustrations, Lamport diagrams are by nature informal. To support _formal_ reasoning about concurrent systems, we need formal models that capture the same scenarios displayed by these diagrams. Lamport (1978)'s own model of executions consists of a set of processes, each with a sequence of local actions, together with a set of pairs of actions indicating send/receive communications between processes. From this data, Lamport's causal _happens-before_ relation can be derived, capturing all causally-related points in the execution. A similar model has arisen in the context of _message sequence charts_ (MSCs), a more expressive cousin of the Lamport diagram (Alur et al., 2000; Ladkin and Leue, 1993; Broy, 2005; ITU-T, 2011). Because Lamport's executions and MSCs are so similar (indeed, equivalent in their current formulations), we will refer to both of them simply as _formal executions_ (or just _executions_).
Formal executions provide a strong mathematical basis for reasoning about causality in concurrent systems. However, they are typically characterized _axiomatically_ rather than _inductively_. While this makes them well-suited to traditional mathematical proofs, our experience has been
that applying them to _mechanized proof_ is a considerable struggle. Proof assistants founded on constructive type theory, such as our choice of Agda, excel at problems leveraging inductive data; and some of the most powerful tools in the canon of programming language theory, including the pervasive dichotomy of _syntax_ and _semantics_, are founded on inductive definitions. Ideally, then, we want to inductively factor a concurrent system into smaller pieces for local analyses, then build them back up into a global analysis. However, the collections of sets in a formal execution do not lend themselves easily to factorization without making arbitrary choices: to split an execution into a "before" and an "after", we must make a particular choice of consistent cut through the execution. While executions can be (and certainly have been) mechanized axiomatically, we would prefer to play better to the strengths of our tools.
To that end, we return to the Lamport diagram to derive a different kind of formal execution: a **causal separation diagram** (or CSD). CSDs enjoy an inductive definition, built up from a small set of primitive features (emission and reception of messages, together with local actions) together with syntactic operators for sequential and concurrent composition. CSDs then constitute a _syntax_ for describing executions of a concurrent system; and like any syntax, we can interpret CSDs into a variety of _semantics_. We take inspiration from concurrent separation logic in modeling the concurrent composition of actions over distributed state, and from the method of string diagrams in monoidal category theory for describing formal objects using a two-dimensional, graphical syntax. However, no familiarity with either discipline is required to read this paper.
We recover a proof-relevant analogue of Lamport's _happens-before_ relation by interpreting the syntax of CSDs into a semantic domain of _causal paths_. A causal path describes a particular potential flow of information from one point in a diagram to another: where a Lamport diagram makes causal relationships visible to the eye via geometric paths, we capture those paths directly as data. The proposition that "\(e_{1}\) happens before \(e_{2}\)" then becomes a type \(e_{1}\rightsquigarrow e_{2}\), and the terms that inhabit this type are particular paths witnessing that relationship. Because paths are also defined inductively, they become much more useful than mere truth values for further proofs.
As an in-depth case study of the application of CSDs, we consider the verification of **logical clocks**, a common class of devices for reifying causal information into a system at runtime. A logical
Figure 1: An assortment of Lamport diagrams from the literature. In these examples, time flows from top to bottom [11, 12, 13], from left to right [10, 11], or, rarely, from bottom to top [10], and parallel through-lines represent processes, threads, or spatially-separated sites, while arrows represent communication between them.
clock associates some metadata (a "timestamp") with every event, with the condition (Lamport [1978]'s "clock condition") that whenever two events are causally ordered, their timestamps are ordered likewise. The contents of a timestamp will vary with the choice of clock; some clocks _reify more_ causal information than others. For instance, Lamport's original scalar clock [Lamport 1978] flattens the partial order of events into the total order of integers, while vector clocks [Mattern 1989; Fidge 1988] and matrix clocks [Wuu and Bernstein 1984; Raynal et al. 1991] yield (predictably) vector and matrix timestamps, which provide progressively higher-fidelity information.
Existing proofs of the clock condition -- including mechanized proofs [Mansky et al. 2017] -- apply only to individual clocks. Other work on mechanized verification of distributed systems that use logical clocks typically focuses on higher-level properties, such as causal consistency of distributed databases [Lesani et al. 2016; Gondelman et al. 2021], convergence of replicated data structures [Nieto et al. 2022], or causal order of message delivery [Nieto et al. 2022; Redmond et al. 2023]. Those mechanized proofs take the clock condition as an axiom (either explicitly or implicitly) on the way to proving those higher-level properties. We address this situation by giving a _generic_ mechanized proof of the clock condition for any _realizable_ clock that can be realized by a system of runtime replicas -- in other words, a clock defined in terms of standard "increment" and "merge" functions. Realizable clocks include the well-known scalar, vector, and matrix clocks, which we instantiate within our framework to yield a proof of the clock condition for each, in a handful of lines of Agda code. Notably, while the clock condition has previously been proved for the matrix clocks of Raynal et al. [1991] and of Wuu and Bernstein [1984], we give what appear to be the first mechanized proofs for these clocks.
In summary, the main contributions of this paper are as follows:
* **Causal separation diagrams (CSDs).** After presenting informal intuitions in Section 2, we describe a new formal diagrammatic language for reasoning about executions of concurrent systems (Section 3). CSDs are inspired by Lamport diagrams -- a well-established visual language for expressing the behavior of distributed systems -- but they are inductively defined, which makes them amenable to interpretation into many semantic domains.
* **Interpreting CSDs.** We present interpretations of CSDs into three semantic domains:
  * **Into types:** We provide an interpretation of CSDs into the domain of _causal paths_ (Section 4). Causal paths are a proof-relevant analogue of Lamport's _happens-before_ relation, where any given path inductively describes a particular flow of information.
  * **Into functions:** We provide an interpretation of CSDs into a domain of _clocks_; that is, functions that compute a logical timestamp at every event (Section 5). Our interpretation is parametric in the particular choice of logical clock, so long as it is realizable as a local data structure with **increment** and **merge** operations (Section 5.1).
  * **Into proofs relating types and functions:** We relate the above interpretations via a third interpretation of CSDs into proofs that clocks respect causality (Section 6). This yields a proof of Lamport's clock condition for any realizable clock whose timestamps increase with successive operations.
* **Applying CSDs: verified logical clocks.** Finally, we instantiate our interpretations on the clocks of Lamport, Mattern, Raynal et al., and Wuu and Bernstein, yielding mechanically verified implementations of each (Section 7). In particular, we give the first (to our knowledge) mechanized proofs of the clock condition for both matrix clocks.
All of our contributions are mechanized in the Agda proof assistant; moreover, we have published an open-source library for working with CSDs, available at github.com/lsd-ucsc/csds.
## 2. Deriving a new formal model of executions
In this section we recall the construction and properties of the existing model of formal executions, then manufacture a "just-so" story for their derivation from Lamport diagrams. By changing one essential step in this story, we are led to a derivation of our proposed model, the causal separation diagram (CSD).
Definition 2.1 (Formal executions (Lamport, 1978; Alur et al., 2000)).: A _formal execution_ is:
* A set \(P\) of _processes_, each of which is a sequence of atoms called _actions_2; together with
* A set \(M\) of _messages_, each of which is an ordered pair of actions across two processes (the message's associated "send" and "receive" actions).

Footnote 2: We avoid the traditional term "event", for now, because the causal relation we define in Section 4 does not (directly) relate actions. A causal order ought to relate "events"; so we reserve that term and speak of "actions" here instead.
Definition 2.2 (Happens-before (Lamport, 1978)).: Given a formal execution, the _happens-before_ relation on actions, written \(a_{1}\leq a_{2}\), is the reflexive-transitive closure3 of the execution's set of messages together with the total orders given by each process.
Footnote 3: Lamport’s own characterization of happens-before is irreflexive, unlike ours. Since reflexive and irreflexive partial orders are in one-to-one correspondence, the choice comes down to a matter of preference.
By tradition, we exclude from consideration executions for which _happens-before_ fails to be antisymmetric, as these indicate a failure of causality.
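As a preview of the inductive style advocated later in this paper, note that a reflexive-transitive closure can itself be presented as an inductive, proof-relevant relation. The following minimal Agda sketch is our own illustration, not part of the paper's formalization (the Agda standard library provides a similar `Star` type):

```agda
-- Reflexive-transitive closure, inductively: an inhabitant of Star R a c is a
-- concrete chain of R-steps from a to c, amenable to structural recursion.
data Star {A : Set} (R : A → A → Set) : A → A → Set where
  ε   : ∀ {a} → Star R a a                           -- the empty chain (reflexivity)
  _◅_ : ∀ {a b c} → R a b → Star R b c → Star R a c  -- prepend one step (transitivity)
```

Instantiating `R` with the union of the process orders and the message pairs yields happens-before, with each inhabitant recording one particular causal chain.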
The data of a formal execution can be visualized in a _Lamport diagram_, an _informal_ graphical representation in which process histories become parallel lines; the actions on each history become dots along those lines; and the messages between them become arrows crossing laterally between parallel process lines. Importantly, the _happens-before_ relation can be inferred directly from a Lamport diagram: we have \(a_{i}\leq a_{j}\) if and only if there exists a forward path along the diagram from \(a_{i}\) to \(a_{j}\).
For example, the Lamport diagram in Figure 2 depicts an execution involving three processes, \(p_{1}\), \(p_{2}\), and \(p_{3}\), each having performed a few actions. Some of the actions in this execution are causally ordered. For instance, we see that \(a_{1}\leq a_{4}\) since \(a_{1}\) and \(a_{4}\) are the send and receive actions of message \(m_{1}\), and \(a_{4}\leq a_{5}\) because they occur in sequence on \(p_{2}\). Therefore, by transitivity, \(a_{1}\leq a_{5}\). We also have that \(a_{3}\leq a_{4}\) and \(a_{3}\leq a_{7}\), among other relationships. However, \(a_{1}\) and \(a_{3}\) are not related by _happens-before_, nor are \(a_{4}\) and \(a_{7}\). Such pairs of actions are said to be _concurrent_ or _causally independent_.
We can also take an informal diagram and formalize the scenario it displays as a formal execution. Therefore, we can consider the diagram to come first, with the derivation of an execution from an informal diagram serving as an origin story for the formal model itself. We can rederive the traditional execution by first splitting a diagram along spatial boundaries -- separating the
Figure 2. An example Lamport diagram.
process lines from one another -- and then separating the sequential actions along each process line by temporal boundaries. Doing so for the diagram in Figure 2 yields the decomposition in Figure 3(a). However, we could also have begun by laying down a sequence of _temporal_ boundaries -- demarcating _global steps_ over the entire system -- and only then separating the atomic steps within each global step by spatial boundaries. This yields the decomposition in Figure 3(b).
Both decompositions yield a partition of the diagram into graphical tiles; and it is precisely the relationships between these tiles, witnessed by the dataflow lines passing between them, which must be captured formally. In the traditional decomposition in Figure 3(a), tiles may be related across both temporal and spatial boundaries. Process orders record the relationships across temporal boundaries, while messages record relationships across spatial boundaries.4 This data, comprising a traditional formal execution, is sufficient to capture all information presented in the diagram.
Footnote 4: Depending on the execution being visualized, we may need to draw message-lines passing through tiles which neither send nor receive them; an effective visualization would be decidedly non-planar. Nonetheless, we consider that the relationship remains one of passing through the spatial medium.
The state of affairs for our alternative decomposition in Figure 3(b) is notably different. First, information flows between tiles only at temporal boundaries; spatial boundaries only separate causally-independent actions which cannot influence each other. Intuitively, it takes time to move through space -- spatial boundaries separate actions which may as well occur simultaneously, so the propagation of information from one place to another can only occur across temporal boundaries. However, this also means that a different quantity of state can leave a global step than entered it: a process may consume a message to decrease the quantity of data floating around, or emit a message to increase the quantity of data. Without the suggestive global geometry of fixed parallel lines for each process to brace against, we cannot even distinguish process state from message state: a global step simply transforms one configuration of separated state into another. Because of this indistinguishability, instead of referring to "processes" and "messages" we will refer only to **sites**: a site is a _place where state exists_, encompassing both processes and messages.
Second, we could have drawn different temporal boundaries -- different _consistent cuts_ -- and found a different decomposition. Consistent cuts [14, 15] are of fundamental importance to the analysis of concurrent systems, as they model the realizable _global states_ of a system. Thus, the formal representation for a diagram will embed a choice of consistent
Figure 3: Two ways to decompose the Lamport diagram of Figure 2 into “tiles”. On the left (a), we split first along spatial boundaries (dashed red lines), yielding individual processes, and then along temporal boundaries (solid blue lines). On the right (b), we split first along temporal boundaries, yielding consistent cuts, and then along spatial boundaries.
cuts; and as we will find in Sections 5 and 6, working with global information from the start enables simpler proof methods for reasoning about concurrent systems.5
Footnote 5: We expect there to be a means of algebraically transforming a CSD to manipulate which consistent cuts it embeds; this would then yield a completely syntactic account of consistent cuts. However, we defer this to future work.
Process lines can be recovered as chosen paths spanning the diagram -- that is, a chosen total order of actions, just as in the traditional execution. These paths essentially name pieces of state as they evolve over time; any state not on some path is, morally, a message. We can even interpret this in a shared-memory setting: the configuration of sites along a consistent cut describes a shared heap, with each individual site modeling an exclusive region of memory. A global step then updates the heap, claiming regions by merging them and releasing regions by splitting them apart.
Figure 4 illustrates this notion of sites in more detail for our example. The shaded global step on the left has three incoming sites and five outgoing sites, so we might compactly say it has type \(3\xrightarrow{}5\) ("three to five"). The next two global steps have types \(5\xrightarrow{}3\) and \(3\xrightarrow{}3\), respectively. Adjacent global steps must "match up" the sites on their incident site configurations; but during a global step, sites may be joined with or forked from others.
In Section 3, we will describe a novel formal model for concurrent executions based on these observations. However, we can already see the shape this formalization must take:
* Since we have essentially _transposed_ the sequential and concurrent boundaries compared to the traditional formalization, our formal data will consist of a sequence of global steps acting over separated state.
* Each global step will decompose into a collection of concurrent, atomic steps, no two of which act over the same site -- data flowing into and out of a global step must flow through precisely one of its constituent atomic steps. These steps include individual local actions \(a_{i}\), but also include fork actions (which split one site into two) and join actions (which fuse two sites into one).
* A causal relationship between actions \(a_{1}\leq a_{2}\) will be witnessed by a sequence (or _path_) of atomic steps, running forward from \(a_{1}\) to \(a_{2}\), such that adjacent steps share a site.
Our unification of messages and processes into sites makes our formalization "natively" suited for reasoning about shared-memory concurrent systems as well as distributed systems. While Lamport diagrams can effectively visualize shared-memory systems as well as distributed ones, Lamport's formal executions are not well suited for the shared-memory domain, since processes and messages are often not the right abstractions. With CSDs, we have a diagrammatic syntax _and_ a formal model that fit both domains.
Figure 4: Global steps in our example diagram.
## 3. Syntax and Semantics of Causal Separation Diagrams
In Section 2 we discussed the intuitions behind causal separation diagrams (CSDs), and how they arise from Lamport diagrams. In this section we give a formal treatment of CSDs as terms of an inductive data type, and develop a concept of semantic interpretations of CSDs that we will make heavy use of in later sections.
### Site configurations
Recall from Section 2 that Lamport diagrams can be decomposed into a sequence of _global steps_, where each adjacent pair of steps meets at a collection of sites called a _site configuration_ (or just _configuration_). The configuration at the start of a global step describes the state of the sites before that step takes place, while the configuration at the end describes the state of the sites after the step. The diagram as a whole also starts and ends on a pair of configurations -- namely, the starting configuration of its first step, and the ending configuration of its last step. A formally-defined CSD will have type \(\Gamma_{1}\rightrightarrows\Gamma_{2}\), where \(\Gamma_{1}\) and \(\Gamma_{2}\) are _bounding configurations_ -- the configurations the diagram begins and ends on, respectively. Site configurations are themselves terms, so \(\Gamma_{1}\rightrightarrows\Gamma_{2}\) will be a _dependent type_. (In fact, nearly _every_ type we define will be dependent.)
Definition 3.1 (Site configurations).: Let \(\tau\) be a universe of types with products. Then a _site configuration_\(\Gamma\) is a binary tree with leaves drawn from \(\tau\), i.e., a term of the following grammar:
\[\Gamma \coloneqq\Gamma\otimes\Gamma\ \ \mid\ \left[\tau\right]\] \[\tau \coloneqq\tau\times\tau\ \ \mid\ \ldots\]
The leaf constructor \(\left[-\right]\) gives the type of some state that is isolated at one site, while the spatial product \(\otimes\) models a kind of separating conjunction6, giving the type of state that is spatially distributed over multiple sites. For instance, if the type universe \(\tau\) includes naturals \(\mathbb{N}\) and booleans \(\mathbb{B}\), then \(\left[\mathbb{N}\times\mathbb{B}\right]\otimes\left[\mathbb{B}\right]\) is a configuration with two sites, one carrying a pair of a natural and a boolean, and the other carrying a single boolean.
Footnote 6: _Separating conjunction_ is a logical connective found in separation logic, where two properties of heaps can be conjoined if a heap can be split into two factors, one of which satisfies one property and one of which satisfies the other. A site configuration can thus be thought of as a particular factorization of a distributed heap.
The spatial product \(\otimes\) is like a "lifted" version of the local product \(\times\); and like the local product, we will wish to treat \(\otimes\) as associative and commutative. Since reordered/rebalanced binary trees are syntactically distinct terms, however, we introduce a type of permutations \(\sigma:\Gamma_{1}\simeq\Gamma_{2}\) to mediate between equivalent configurations.
Definition 3.2 (Sites).: The type \(\operatorname{Site}(\Gamma)\), defined recursively over the structure of configuration \(\Gamma\), is the type of paths from the root of \(\Gamma\) to each of its leaves:
\[\operatorname{Site}(\left[\tau\right]) =\top\] \[\operatorname{Site}(\Gamma_{1}\otimes\Gamma_{2}) =\operatorname{Site}(\Gamma_{1})+\operatorname{Site}(\Gamma_{2})\]
Definition 3.3 (Permutations of sites (\(\simeq\))).: The type of _permutations_\(\Gamma_{1}\simeq\Gamma_{2}\) is an equivalence relation on site configurations, defined so that its elements \(\sigma\) correspond to type-preserving bijections \(\operatorname{Site}(\Gamma_{1})\to\operatorname{Site}(\Gamma_{2})\). By abuse of notation, we denote by \(\sigma\) (and \(\sigma^{-1}\)) the bijection witnessed by \(\sigma\).
In Definition 3.2, \(\top\) is the unit type (with single value \(\bullet\)), and \(+\) gives sum types (with injections \(\mathbf{inj}_{\ell}\) and \(\mathbf{inj}_{r}\)). For example, the type of sites for \((\left[\mathbb{N}\right]\otimes\left[\mathbb{B}\right])\otimes\left[\mathbb{B}\right]\) is \((\top+\top)+\top\). To address the site of type \(\mathbb{N}\), we write the term \(\mathbf{inj}_{\ell}(\mathbf{inj}_{\ell}(\bullet))\), which tells us we can isolate this site by focusing along the left-hand subtrees of this configuration.
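To make the inductive flavor concrete, the following is a minimal Agda sketch of configurations and sites, using standard-library types. The names here are ours, chosen for the sketch; they need not match those of the authors' published library:

```agda
open import Data.Unit using (⊤; tt)
open import Data.Sum  using (_⊎_; inj₁; inj₂)
open import Data.Nat  using (ℕ)
open import Data.Bool using (Bool)

-- Site configurations: binary trees of leaf types (Definition 3.1).
data Config : Set₁ where
  ⟦_⟧ : Set → Config              -- a single site, holding state of the given type
  _⊗_ : Config → Config → Config  -- spatial product of two configurations

-- Sites: paths from the root of a configuration to its leaves (Definition 3.2).
Site : Config → Set
Site ⟦ τ ⟧     = ⊤
Site (Γ₁ ⊗ Γ₂) = Site Γ₁ ⊎ Site Γ₂

-- The ℕ-typed site of ((⟦ ℕ ⟧ ⊗ ⟦ Bool ⟧) ⊗ ⟦ Bool ⟧), as in the text above.
exampleSite : Site ((⟦ ℕ ⟧ ⊗ ⟦ Bool ⟧) ⊗ ⟦ Bool ⟧)
exampleSite = inj₁ (inj₁ tt)
```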
### Causal separation diagrams
From Section 2, we know that CSDs have two forms of composition: sequential composition and concurrent composition.7 Just as conjunctive normal form makes Boolean formulae easier to work with, we will restrict concurrent composition to appear only under sequential composition. Every CSD, then, has two layers: an outer list modeling sequencing, and an inner tree modeling concurrency. To separate these layers, we give them distinct symbols: a diagram \(x:\Gamma_{1}\rightrightarrows\Gamma_{2}\) is a diagram proper, and can be composed sequentially, while a diagram \(x:\Gamma_{1}\multimap\Gamma_{2}\) is a global step, and can be composed concurrently. These are morally both diagrams -- a global step is just a diagram in the process of being built -- and we will generally not distinguish between them.
Definition 3.4 (Causal separation diagrams (\(\rightrightarrows\))): _A causal separation diagram is a sequence of global steps (see Definition 3.5, next), constructed according to the following rules:_
Footnote 7: Some readers will recognize the syntax of CSDs as a (free) symmetric monoidal category. We will have more to say about categorical connections in Section 9; for now, we acknowledge the connections but proceed concretely.
\[\frac{}{\mathbf{id}:\Gamma\rightrightarrows\Gamma}\qquad\frac{x:\Gamma_{1}\rightrightarrows\Gamma_{2}\qquad y:\Gamma_{2}\multimap\Gamma_{3}}{x\,;y:\Gamma_{1}\rightrightarrows\Gamma_{3}}\]
The \(\mathbf{id}\) and sequencing (;) constructors play the same roles, respectively, as "nil" and "cons" do for inductive lists. We take our sequences to grow to the right (a "snoc" list) from an initial \(\mathbf{id}\) seed, and moreover require that adjacent global steps be compatible: if a step ends on one configuration, the following step must begin on the same configuration.
Definition 3.5 (Global steps (\(\multimap\))): _A global step is a binary tree of atomic steps, constructed according to the rules below:_
\[\frac{}{\mathbf{tick}:[\tau_{1}]\multimap[\tau_{2}]}\qquad\frac{}{\mathbf{fork}:[\tau\times\tau^{\prime}]\multimap[\tau]\otimes[\tau^{\prime}]}\qquad\frac{}{\mathbf{join}:[\tau]\otimes[\tau^{\prime}]\multimap[\tau\times\tau^{\prime}]}\]

\[\frac{\sigma:\Gamma_{1}\simeq\Gamma_{2}}{\mathbf{perm}\ \sigma:\Gamma_{1}\multimap\Gamma_{2}}\qquad\frac{x:\Gamma_{1}\multimap\Gamma_{2}\qquad y:\Gamma_{3}\multimap\Gamma_{4}}{x\parallel y:\Gamma_{1}\otimes\Gamma_{3}\multimap\Gamma_{2}\otimes\Gamma_{4}}\]
The atomic steps \(\mathbf{tick}\), \(\mathbf{fork}\), \(\mathbf{join}\), and \(\mathbf{perm}\) describe the elementary ways in which sites can be transformed over time. The concurrence (\(\|\)) operator fuses two global steps into one. Since the two steps must operate over distinct configurations, no atomic step can share a site with any concurrent step. Thus, just as \(\otimes\) acts like a separating conjunction, \(\|\) acts like the concurrency rule of concurrent separation logic. (We discuss future work following this analogy in Section 9.)
The \(\mathbf{perm}\) constructor transforms a configuration into any equivalent configuration according to the type of permutations \(\simeq\) of Definition 3.3. It will be convenient to have shorthand for three special cases of \(\mathbf{perm}\):
* \(\mathbf{noop}:\Gamma\multimap\Gamma\) is a step over the identity permutation;
* \(\mathbf{swap}:[\tau]\otimes[\tau^{\prime}]\multimap[\tau^{\prime}]\otimes[\tau]\) is a step commuting two sites; and
* \(\mathbf{assoc}:\Gamma_{1}\otimes(\Gamma_{2}\otimes\Gamma_{3})\multimap(\Gamma_{1}\otimes\Gamma_{2})\otimes\Gamma_{3}\) is a step reassociating a configuration.
The \(\mathbf{tick}\) constructor models any arbitrary local transformation of state. For instance, a \(\mathbf{tick}\) of type \([\mathbb{N}]\multimap[\mathbb{N}\times\mathbb{B}]\) might describe an action which prepares a (boolean) message depending on the current (numeric) state. We deliberately leave the local transformations unconstrained to avoid parameterizing CSDs over yet another type. Concrete information about each individual \(\mathbf{tick}\) can instead be associated to a CSD by way of _labeling_, which we will discuss in Section 3.3.
The \(\mathbf{fork}\) and \(\mathbf{join}\) constructors reify the connection between spatial and local products alluded to in Section 3.1. If we have a local pair of state at one site -- for instance, a pair \([\mathbb{N}\times\mathbb{B}]\) of numeric
state and prepared message -- we can spatially separate its components onto two sites with **fork**. Conversely, state distributed over two sites can be fused into a local product on one site with **join**. Therefore, these steps are our analogues of the send/receive actions found in Lamport executions.
Although a traditional Lamport diagram treats send and receive actions as state-modifying actions, we factor them into two separate steps: a Lamport-style send is realized as a **tick** followed by a **fork**, and a Lamport-style receive is realized as a **join** followed by a **tick**.8 This factorization allows us to treat _all_ modifications of local state uniformly via **tick**, which helps us greatly when associating concrete operations to each **tick** (Section 3.3).
Footnote 8: To obtain a legitimate CSD from Figure 3(b), we would need to extract the implicit **tick** from each send and receive action.
Figure 5 depicts the **tick**, **fork**, **join**, **noop**, **swap**, and **assoc** atomic steps graphically. These tiles can be freely composed along like boundaries (that is, solid blue lines compose with solid blue lines, and dashed red lines compose with dashed red lines) to construct whole diagrams, so long as any sequenced pair of diagrams agree on the arrangement of sites crossing between them. For instance, consider the CSD given by the term **id** ; (**tick**\(\parallel\)**fork**) ; **assoc** ; (**join**\(\parallel\)**noop**). As a (snoc)-list, this CSD begins from an empty diagram (**id**) to which successive global steps are appended (with ;). Each constituent global step is built up as a concurrent composition of atomic steps (with \(\parallel\)). We can better display the structure of this CSD diagrammatically:
We begin on some site configuration \([\tau_{1}]\otimes[\tau_{2}\times\tau_{3}]\), and perform a **tick** on the first site and a **fork** on the second site to reach configuration \([\tau^{\prime}_{1}]\otimes([\tau_{2}]\otimes[\tau_{3}])\), where \(\tau^{\prime}_{1}\) is the result type of the **tick**. With **assoc**, we then rebalance the configuration into \(([\tau^{\prime}_{1}]\otimes[\tau_{2}])\otimes[\tau_{3}]\), so that the following step can **join** the first two sites (while leaving the third alone with **noop**). This CSD thus ends on configuration \([\tau^{\prime}_{1}\times\tau_{2}]\otimes[\tau_{3}]\). Since the type \(\tau_{2}\) ends up migrating from one site to another, this CSD might describe a message sent from one process to another.
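Continuing the Agda sketch from Section 3.1, the two layers can be rendered as a pair of indexed datatypes. We omit \(\mathbf{perm}\) (and the permutation type \(\simeq\)) to keep the sketch small, and write `_▷_` for the paper's sequencing (;), since `;` is reserved in Agda; again, the names are ours, not necessarily the library's:

```agda
open import Data.Product using (_×_)

-- Global steps: trees of atomic steps, composed with _∥_ (Definition 3.5).
data _⊸_ : Config → Config → Set₁ where
  tick : {τ₁ τ₂ : Set} → ⟦ τ₁ ⟧ ⊸ ⟦ τ₂ ⟧
  fork : {τ τ′ : Set} → ⟦ τ × τ′ ⟧ ⊸ (⟦ τ ⟧ ⊗ ⟦ τ′ ⟧)
  join : {τ τ′ : Set} → (⟦ τ ⟧ ⊗ ⟦ τ′ ⟧) ⊸ ⟦ τ × τ′ ⟧
  _∥_  : {Γ₁ Γ₂ Γ₃ Γ₄ : Config} → Γ₁ ⊸ Γ₂ → Γ₃ ⊸ Γ₄ → (Γ₁ ⊗ Γ₃) ⊸ (Γ₂ ⊗ Γ₄)

-- Diagrams: snoc-lists of global steps (Definition 3.4).
infixl 5 _▷_
data _⇉_ : Config → Config → Set₁ where
  id  : {Γ : Config} → Γ ⇉ Γ
  _▷_ : {Γ₁ Γ₂ Γ₃ : Config} → Γ₁ ⇉ Γ₂ → Γ₂ ⊸ Γ₃ → Γ₁ ⇉ Γ₃

-- A Lamport-style send: a tick preparing a message, then a fork emitting it.
send : {τ μ : Set} → ⟦ τ ⟧ ⇉ (⟦ τ ⟧ ⊗ ⟦ μ ⟧)
send = id ▷ tick ▷ fork
```

Note how the index discipline enforces, by construction, that adjacent global steps agree on the intermediate configuration.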
Figure 5. Atomic steps of a CSD, depicted as graphical tiles. The **noop**, **swap**, and **assoc** tiles characterize the more general **perm** atomic step.
_Abuses of notation._ Since CSDs are lists of global steps, we can define a version of concurrent composition that acts over entire CSDs by zipping them together (with **noop** padding if their lengths are mismatched) and composing each pair. Likewise, we can sequentially extend a CSD by another CSD using the equivalent of a _concat_ operator. Rather than allocate new symbols to these binary operators, we will abuse notation, letting \(\|\) and ; stand in for them.
In our Agda mechanization, the indexed types \(\rightrightarrows\) and \(\multimap\) are unified in a type with an auxiliary index over \(\{\text{Seq},\text{Par}\}\). Throughout the rest of this paper, we take advantage of this technical contrivance to define single functions that can pattern-match through both sequential and concurrent layers of a CSD, instead of defining a separate function for each layer.
### Labeled CSDs
Recall that a **tick** step is meant to model a local transformation of state. However, up to this point, there is no way to specify _what_ that local transformation actually is for each **tick**. If we only have one transformation in a given setting, we can interpret each tick as that specific transformation. But this is clearly too much of a limitation -- most systems can do more than one thing!
While we could parameterize CSDs over a type of actions (and construct each **tick** with a choice of action), this would complicate the type signature of CSDs, and introduce data for which the CSD itself is simply a carrier. Instead, we follow the pattern of _container types_(Altenkirch and Morris, 2009), in which the places where data can be held are characterized separately from the assignment of data to those places. For example, the generic type of lists \(\text{List}(T)\) can be factored into two parts: a Peano natural \(n:\mathbb{N}\) and an assignment \(\text{Fin}(n)\to T\) of values to indices. The Peano natural \(n\) describes a particular _shape_ of list (with zero playing the role of the empty list, and the successor constructor playing the role of list consing), while \(\text{Fin}(n)\) characterizes the positions within a list of that shape. The assignment \(\text{Fin}(n)\to T\) then fills those positions with concrete values.
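As a tiny Agda sketch of this container-style factoring (our own illustration, reusing the imports above plus `Data.Fin`):

```agda
open import Data.Fin using (Fin)

-- A list, factored into a shape (its length) and a filling of its positions.
record ListC (T : Set) : Set where
  field
    shape : ℕ             -- zero plays the role of nil; suc plays the role of cons
    elems : Fin shape → T -- assignment of values to the positions of that shape
```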
Definition 3.6 (The type of ticks).: For a CSD \(X\), the type \(\text{Tick}(X)\) has precisely one value for every **tick** in \(X\), and is defined recursively over the structure of \(X\):
\[\text{Tick}(\textbf{tick}) =\top\qquad\text{Tick}(\textbf{fork}) =\bot\qquad\text{Tick}(\textbf{join}) =\bot\qquad\text{Tick}(\textbf{perm }\sigma) =\bot\] \[\text{Tick}(\textbf{id}) =\bot\qquad\text{Tick}(x\parallel y) =\text{Tick}(x)+\text{Tick}(y)\qquad\text{Tick}(x\,;y) =\text{Tick}(x)+\text{Tick}(y)\]
Here, \(\bot\) is the empty type, \(\top\) is the unit type (with only value \(\bullet\)), and \(+\) gives sum types (with injections \(\textbf{inj}_{\ell}\), \(\textbf{inj}_{r}\)).
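Over our sketch datatypes, Definition 3.6 becomes a pair of straightforward recursive functions, one per layer (the mechanization's unified index lets it use a single function instead); as before, this is our own illustrative rendering:

```agda
open import Data.Empty using (⊥)

-- One value of Tick per tick occurring in the diagram (Definition 3.6).
Tick⊸ : {Γ₁ Γ₂ : Config} → Γ₁ ⊸ Γ₂ → Set
Tick⊸ tick    = ⊤
Tick⊸ fork    = ⊥
Tick⊸ join    = ⊥
Tick⊸ (x ∥ y) = Tick⊸ x ⊎ Tick⊸ y

Tick⇉ : {Γ₁ Γ₂ : Config} → Γ₁ ⇉ Γ₂ → Set
Tick⇉ id      = ⊥
Tick⇉ (x ▷ y) = Tick⇉ x ⊎ Tick⊸ y
```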
Definition 3.7 (Labeled CSDs).: A _\(T\)-labeling_\(f:\text{Tick}(X)\to T\) assigns a value of type \(T\) to every **tick** in \(X\). A _\(T\)-labeled CSD_, written \(\langle X,f\rangle:\Gamma_{1}\rightrightarrows^{T}\Gamma_{2}\), is a diagram together with a \(T\)-labeling.
Given a labeled CSD, we can restrict its labeling to a subdiagram by pre-composing with the left or right injection for sums. For instance, the prefix of the labeled CSD \(\langle(x\mathbin{;}y),f\rangle\) can be obtained as \(\langle x,f\circ\textbf{inj}_{\ell}\rangle\). In the base case, we end up with \(\langle\textbf{tick},\bullet\mapsto v\rangle\) -- precisely a **tick** annotated with a value. This makes labeled CSDs an excellent solution for specifying the behavior of each **tick**.
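In this style, a labeled CSD is simply a dependent pair (again a sketch over the datatypes above, with illustrative names):

```agda
open import Data.Product using (Σ)

-- A T-labeled diagram: a CSD together with a label for each of its ticks (Def. 3.7).
Labeled : Set → Config → Config → Set₁
Labeled T Γ₁ Γ₂ = Σ (Γ₁ ⇉ Γ₂) (λ x → Tick⇉ x → T)
```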
In a traditional execution (Definition 2.1), every local action comes with some information built in -- not what the action is, but _who_ performed it. This is because every action occurs on a particular process's total order. Although CSDs do not treat process lines specially, we can include this same information by positing a type Pid of process identifiers, and working in terms of Pid-labeled CSDs.
### Semantic interpretations of CSDs
The construction of the Tick type in Definition 3.6 is our first example of an _interpretation_ of CSDs: we assigned some type to each atomic step, and described how sequential and concurrent composition act over those types to yield a type for larger diagrams. This pattern is emblematic of denotational semantics: "the meaning of the composition is the composition of the meanings."9 By itself, the CSD representation is not much use; its utility comes from its interpretability.
Footnote 9: This compositionality principle appears to be folklore in denotational semantics; we cannot find a canonical source. It dates at least to Frege, in the context of natural languages.
Definition 3.8 (Semantic interpretations).: _A semantic interpretation_ (or _semantics_, or _interpretation_) of CSDs is a function \((\Gamma_{1}\rightrightarrows\Gamma_{2})\to F(\Gamma_{1},\Gamma_{2})\) mapping each CSD to a semantic domain \(F\) indexed by site configurations.10
Footnote 10: The domain \(F\) ought to be a symmetric monoidal category, with an interpretation being a functor from \(\rightrightarrows\) to \(F\). However, we neither prove nor require that \(\rightrightarrows\) be such a category — although we are eager to make such connections in the future.
In the case of Tick, we take \(F(-,-)\) to be \(\mathbf{Type}\), so its semantic domain does not vary with the particular bounding configurations. Much of the rest of this paper will be devoted to the construction and analysis of additional interpretations, following the landmarks given in the introduction:
* In Section 4, we give a semantics in \(F(\Gamma_{1},\Gamma_{2})=\operatorname{Site}(\Gamma_{1})\to \operatorname{Site}(\Gamma_{2})\to\mathbf{Type}\), a domain of types \(\rightsquigarrow\) whose elements \(p_{12}:s_{1}\rightsquigarrow s_{2}\) are _causal paths_ between sites at the boundaries of the diagram. This yields a proof-relevant analogue of Lamport's _happens-before_ relation, where a path gives concrete evidence for why its endpoints are causally related.
* In Section 5, we give a semantics in \(F(\Gamma_{1},\Gamma_{2})=\operatorname{Valuation}(\Gamma_{1})\to \operatorname{Valuation}(\Gamma_{2})\), a domain of functions \(\mathcal{C}\), parametric in a choice of logical clock. A valuation \(v:\operatorname{Valuation}(\Gamma_{1})\) is an assignment \(\operatorname{Site}(\Gamma_{1})\to\operatorname{Time}\) of timestamps to each site; so functions \(\mathcal{C}\) compute timestamps \(\mathcal{C}_{v}\) on \(\Gamma_{2}\) from timestamps \(v\) on \(\Gamma_{1}\).
* In Section 6, we give a semantics in \(F(\Gamma_{1},\Gamma_{2})=\forall s_{1}\;s_{2}.\)\((s_{1}\rightsquigarrow s_{2})\to(\forall v.\ v(s_{1})\leq\mathcal{C}_{v}(s_{2}))\), a domain of _proofs_ relating the first two interpretations via Lamport's clock condition.11 The resulting proof is constructed modularly, by composing proofs over atomic steps into proofs over whole diagrams, and is parametric in a choice of logical clock.

Footnote 11: Although it looks like \(\Gamma_{1}\) and \(\Gamma_{2}\) are not used in this domain, we are using the \(\rightsquigarrow\) and \(\mathcal{C}\) obtained from the other two interpretations, which very much do depend on the given configurations.
Our target domains (_happens-before_, logical clocks, and the clock condition) are all pre-existing concepts in the literature. However, the interpretations sketched above only directly relate points on the beginning and ending _boundaries_ of a diagram, while these concepts traditionally speak of points _interior_ to a diagram. To bridge this gap, we provide a general, two-phase recipe for building interpretations.
* First we define a "spanning" interpretation, restricting the target domain to relationships between the initial and final sites of a CSD. These interpretations are typically easy to implement recursively over the structure of a CSD. For the causal paths of Section 4, this will yield a domain of "spanning paths" giving causal relationships only between the sites on the boundary of a diagram.
* Next we define an "interior" interpretation, extending the first interpretation to include relationships between points on the interior of a diagram \(X\). For causal paths, an "interior path" will be a spanning path across any subdiagram of \(X\), so our interpretation will relate sites in any of the site configurations visited by \(X\).
The interpretations presented in Sections 4 to 6 all follow this same recipe.
## 4. The inductive type of causal paths
In this section we develop a notion of causal order within CSDs that captures the potential flows of information through a concurrent system. These flows are traditionally visualized in Lamport diagrams as geometric paths, reducing causality to a kind of connectivity between two points in space and time. We take these paths seriously as _bona fide_ data: the type of _causal paths_ is defined by a semantic interpretation of CSDs, following the pattern established in Section 3.4. This results in a causal relation that is _proof-relevant_: rather than the mere fact that "_\(e_{1}\) happens-before \(e_{2}\)_" observed in traditional executions, we have concrete (and potentially multiple) paths \(p:e_{1}\rightsquigarrow e_{2}\). Such witnesses become extremely useful in proofs by induction, including those we present in Section 6 for logical clocks.
### Spanning paths
We first restrict our attention to causal relationships between sites in the bounding configurations of a diagram, which we will hereafter call _bounding sites_. In Section 4.2, we will extend these relationships to sites on any configuration visited by a diagram.
Definition 4.1 (Spanning relations).: A _spanning relation_ between configurations \(\Gamma_{1},\Gamma_{2}\) is a type family \(\operatorname{Site}(\Gamma_{1})\to\operatorname{Site}(\Gamma_{2})\to \operatorname{\mathbf{Type}}\) taking a pair of sites to a type of relationships between them.
If \(\rightsquigarrow\) is a spanning relation, an element of type \(s_{1}\rightsquigarrow s_{2}\) describes a potential flow of information between sites \(s_{1}\) and \(s_{2}\). Because information might take one of many branching and converging paths _en route_ between any pair of sites, \(s_{1}\rightsquigarrow s_{2}\) may have multiple distinct values. This makes spanning relations _proof-relevant_: knowing that \(s_{1}\rightsquigarrow s_{2}\) means knowing _why_ that fact is true.
Given two spanning relations \(\narrow_{1}\) and \(\narrow_{2}\), we can compose them sequentially or concurrently. Sequential composition is standard relational composition (\(\circ\)): we have a path across the sequence of two spanning relations if we have paths across each individually that meet at some common site. Concurrent composition is a disjoint sum (+): we have a path across the concurrence of two spanning relations if we have a path across either individually.
Every CSD induces a spanning relation modeling the concrete ways information can flow from one side of the diagram to the other. These are precisely the paths that the Lamport diagram makes evident graphically.
Definition 4.2 (Spanning paths).: The type family \(\operatorname{Span}(X)\) of _spanning paths_ through a CSD \(X:\Gamma_{1}\rightrightarrows\Gamma_{2}\) is a spanning relation, and is defined inductively over the structure of \(X\):
\[\operatorname{Span}(\mathbf{tick})=\lambda s_{1}\;s_{2}.\;\top\qquad\operatorname{Span}(\mathbf{fork})=\lambda s_{1}\;s_{2}.\;\top\qquad\operatorname{Span}(\mathbf{join})=\lambda s_{1}\;s_{2}.\;\top\] \[\operatorname{Span}(\mathbf{perm}\;\sigma)=\lambda s_{1}\;s_{2}.\;(s_{2}\equiv\sigma(s_{1}))\qquad\operatorname{Span}(\mathbf{id})=\lambda s_{1}\;s_{2}.\;(s_{2}\equiv s_{1})\] \[\operatorname{Span}(x\parallel y)=\operatorname{Span}(x)+\operatorname{Span}(y)\qquad\operatorname{Span}(x\;;y)=\operatorname{Span}(y)\circ\operatorname{Span}(x)\]
When \(X\) is understood, we write \(s_{1}\rightsquigarrow s_{2}\) to mean \(\operatorname{Span}(X)(s_{1},s_{2})\).
The \(\mathbf{tick}\), \(\mathbf{fork}\), and \(\mathbf{join}\) steps are interpreted trivially into the unit type \(\top\), because those steps have precisely one path for every opposing pair of bounding sites: \(\mathbf{join}\), for instance, relates two input sites to one output site, and information on both inputs will flow into the single output. Meanwhile, \(\mathbf{id}\) relates a configuration to itself (so only matching indices are connected by paths); and \(\mathbf{perm}\;\sigma\) relates inputs to outputs according to the permutation of sites performed by \(\sigma\).
For example, the CSD depicted in Figure 6 goes from configuration \(s_{1}\otimes s_{2}\) to configuration \(s_{3}\otimes s_{4}\). Because \(s_{2}\) is causally related to \(s_{4}\) by two distinct paths, the type \(s_{2}\rightsquigarrow s_{4}\) has two inhabitants.
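To make this proof-relevance concrete, here is a small executable sketch (ours, in Python rather than the Agda of our development) that models a spanning relation as a function returning the _list_ of path witnesses between two sites; multiple inhabitants of \(s_{1}\rightsquigarrow s_{2}\) show up as multiple list entries. The constructor names mirror Definition 4.2, but the encoding of sites as plain values is an assumption of the sketch.

```python
from itertools import product

# Span(X)(s1, s2) is modeled as the list of path witnesses from s1 to s2.
def tick():   # 1 -> 1: exactly one path between the opposing sites
    return lambda s1, s2: [()]

def fork():   # 1 -> 2: one path from the input to each output
    return lambda s1, s2: [()]

def join():   # 2 -> 1: one path from each input to the output
    return lambda s1, s2: [()]

def perm(sigma):  # paths only between sigma-matched sites
    return lambda s1, s2: [()] if s2 == sigma[s1] else []

def ident():      # paths only between equal indices
    return lambda s1, s2: [()] if s2 == s1 else []

def seq(x, y, mid_sites):  # relational composition: paths meet at a common site
    return lambda s1, s2: [(p, m, q) for m in mid_sites
                           for p, q in product(x(s1, m), y(m, s2))]

# A diamond: fork then join. The two branches give two distinct witnesses,
# just as the two bolded paths of Figure 6 witness a relationship twice.
diamond = seq(fork(), join(), mid_sites=[0, 1])
assert len(diamond(0, 0)) == 2
```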
### Interior paths
Next, we will extend our spanning relation between bounding sites to a relation on _all_ points of interest within a diagram. To do this, we need to refer not only to sites in the bounding configurations of \(X\), but on _any_ site configuration visited by \(X\). A CSD with a sequence of \(N\) global steps visits \(N+1\) site configurations: one at the start of the diagram, and one at the end of each global step. Hence, an _event_ will be a choice of site configuration in a diagram, together with a choice of site within that configuration.
Definition 4.3 (Cuts).: The type \(\operatorname{Cut}(X)\) of _cuts_ within a diagram \(X:\Gamma_{1}\rightrightarrows\Gamma_{2}\) has one inhabitant for every site configuration visited by \(X\), and is defined recursively over the structure of \(X\). The associated function \(cut(-)\) picks out the site configuration for each index of \(\operatorname{Cut}(X)\).
\[\operatorname{Cut}(\mathbf{id})=\top\qquad\qquad cut(\bullet)=\Gamma_{2}\] \[\operatorname{Cut}(x\;;y)=\operatorname{Cut}(x)+\top\qquad cut(\mathbf{inj}_{l}(t))=cut(t)\qquad cut(\mathbf{inj}_{r}(\bullet))=\Gamma_{2}\]
Definition 4.4 (Events).: The type \(\operatorname{Event}(X)\) of _events_ in a diagram \(X\) is the type of points in spacetime consisting of a temporal coordinate (a cut) together with a spatial coordinate (a site):
\[\operatorname{Event}(X)=(t:\operatorname{Cut}(X),\ s:\operatorname{Site}(cut( t)))\]
This order of coordinates inverts the convention for events in a traditional execution, where we first select a process (a spatial coordinate) and then select an action occurring on that process (a temporal coordinate). In our figures (such as Figure 6), events exist wherever a line modeling the flow of data (in black) intersects a consistent cut (in blue).
Care should be taken not to confuse _events_ with _actions_. In the traditional model of executions, an "event" is modeled by a local action -- the equivalent of our **tick**. However, since an action is effectively a discontinuous, instantaneous change to state, this leads to questions about what the state of a system is "at" a local action: Has the action actually happened yet or not? Is the action included in its own causal history? These ties are usually broken by interpreting events to occur either slightly before or slightly after an action -- and sometimes both, depending on context. We prefer not to conflate these concepts in the first place: for us, an event is no more than a point in space at a point in time, with no presumption that it is special in any particular way.
Next, we need a way to describe paths between any two events. For any two cuts in a CSD, we can consider the global steps between them as a subdiagram. Then a path between two events is no more than a path spanning the subdiagram between their cuts. Order matters, however: if a CSD passes through distinct cuts \(t_{1},t_{2}\) (in that order), the subdiagram "from \(t_{2}\) to \(t_{1}\)" does not really exist -- at least not in the expected sense. To preclude such inversions, we will define subdiagrams only over legal intervals.
Figure 6. In this diagram, the bolded paths identify distinct witnesses to the causal relationship between initial site \(s_{2}\) and final site \(s_{4}\).
**Definition 4.5** (Intervals).: The _interval_\(t_{1}\cdots t_{2}\) between cuts \(t_{1},t_{2}\) in a diagram \(X\) is the type with a (unique) inhabitant \(t_{12}\) if and only if \(X\) visits \(t_{1}\) no later than \(t_{2}\).
**Definition 4.6** (The subdiagram over an interval).: The _subdiagram over an interval_\(t_{12}:t_{1}\cdots t_{2}\), denoted \(\mathit{during}(t_{12})\), is the CSD consisting of the global steps appearing strictly between cuts \(t_{1},t_{2}\) in a diagram \(X\).
Since CSDs are effectively (snoc-)lists at the top level, using \(\mathit{during}(-)\) is akin to using the common list functions \(\mathsf{drop}\) and \(\mathsf{take}\): we drop everything after both cuts, then take everything that remains after the first cut.
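Concretely, if one flattens a CSD into its top-level list of global steps, a minimal sketch of \(\mathit{during}(-)\) (ours; the list representation is an assumption, not the Agda encoding) is just list slicing:

```python
# Cuts are positions 0..N around a list of N global steps; the subdiagram
# over a legal interval t1..t2 keeps exactly the steps between those cuts.
def during(steps, t1, t2):
    assert 0 <= t1 <= t2 <= len(steps)  # only legal intervals exist
    return steps[t1:t2]                 # drop before t1, take until t2
```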
Finally, we can obtain a causal relation between events:
**Definition 4.7** (Causal relations).: For a diagram \(X\), a _causal relation_ is a type family \(\mathrm{Event}(X)\to\mathrm{Event}(X)\to\mathrm{Type}\) taking every pair of events to a type of relationships between them.
**Definition 4.8** (Causal paths).: The type family \(\rightsquigarrow\) of _causal paths_ (sometimes _interior paths_) through a diagram \(X\) is a causal relation. The inhabitants of \(e_{1}\rightsquigarrow e_{2}\) are (dependent) pairs consisting of an interval between the events together with a spanning path across the subdiagram over that interval:
\[(t_{1},s_{1})\rightsquigarrow(t_{2},s_{2})=(t_{12}:t_{1}\cdots t_{2},\ p_{12}: \mathrm{Span}(\mathit{during}(t_{12}))(s_{1},s_{2}))\]
We consistently pun \(\rightsquigarrow\) to mean either spanning paths or causal paths depending on whether its arguments are sites or events. Similar liberties will be taken (and acknowledged) with the interpretations of Sections 5 and 6.
The causal relation \(\rightsquigarrow\) enjoys reflexivity, antisymmetry, and transitivity, making it a partial order. As a proof-relevant type, reflexivity arises from the existence of unit paths, and transitivity arises from the composition of paths -- which is, moreover, strictly associative. Unlike traditional executions (Definition 2.1), antisymmetry is guaranteed by construction for every CSD: it is impossible to introduce a causal loop because state flows only forward in time. Proofs of these properties can be found in our Agda development; we elide them here for brevity.
_An order on actions._ Here and in Section 2, we were careful to distinguish the actions related by _happens-before_ from the spacetime coordinates we call events. Nonetheless, the two notions are closely related: every local action \(a\) has a pair of associated events \(e_{a}^{-},e_{a}^{+}\) immediately before and after it. We can choose these events to act as proxies for the actions in our system to recover an irreflexive order on actions: \(a_{i}<a_{j}\) if and only if \(e_{a_{i}}^{+}\rightsquigarrow e_{a_{j}}^{-}\). For example, in Figure 6, we have \(a_{1}<a_{2}\), since \(e_{a_{1}}^{+}\rightsquigarrow e_{a_{2}}^{-}\). Because of this correspondence, we speak only of events in what follows -- we can always choose a suitable event to stand in for any action of interest.
## 5. Interpreting CSDs into logical clocks
In this section (and Sections 6 and 7) we apply CSDs to the analysis of _logical clocks_, a common class of devices for reifying causal information into a concurrent system at runtime. As Lamport (1978) observed, we often cannot rely on physical timekeeping to coordinate agents in a concurrent system: one agent's clock may drift relative to the others, and messages may take variable (or unbounded) amounts of time to propagate from sender to recipient. Logical clocks solve this problem by measuring time against the occurrence of intentional _actions_ of the agents in the system.
In the setting of Lamport (1978), a **logical clock** (or just _clock_) is a global assignment of partially-ordered values (called _timestamps_) to actions in a concurrent execution. Figure 7 gives examples of these assignments for two widely used logical clocks: the scalar clock (Lamport, 1978; Fidge, 1988) and the vector clock (Mattern, 1989; Fidge, 1988), which respectively use scalar and vector timestamps. We will discuss the specifics of these clocks in more detail in Section 7, along with matrix clocks (Wuu and Bernstein, 1984; Raynal et al., 1991).
In our setting, a clock will assign a timestamp to every _event_ in a CSD. Just as in Section 4.2, we can assign timestamps to _actions_ by choosing an adjacent event to represent that action.
We will use a common formulation of clocks as implementations of an abstract data type with local _increment_ and _merge_ operations (Raynal and Singhal, 1996), and we bridge this local characterization of clocks into a global assignment of timestamps via interpretation. We begin by justifying this choice of formulation; then, just as in the case of causal paths (Section 4), we construct an interpretation of CSDs \(X:\Gamma_{1}\rightrightarrows\Gamma_{2}\) into a _spanning_ domain, in which an assignment of timestamps (or "valuation") on the sites of \(\Gamma_{1}\) is updated into a valuation on \(\Gamma_{2}\). We conclude by extending this interpretation to an _interior_ domain, which will assign timestamps to all events within a diagram.
### Realizable clocks
In practical implementations, a logical clock is realized as a data structure, instantiated by each agent in a concurrent system, that tracks the passage of (logical) time from the perspective of that agent. The timestamp associated to any action is that displayed by the clock of the agent when it performed the action. The archetypal logical clock is the scalar clock of (Lamport, 1978), in which every agent's clock maintains a single monotonically-increasing integer. To ensure that every action occurs at a later "time" than those that occur causally prior, the scalar clock increments with each action, and updates to the maximum of its timestamp and that of any message received at that agent. This property -- that causally-related actions have like-ordered timestamps -- is so important that it is called the _clock condition_, and is required of _any_ prospective logical clock.12
Footnote 12: Lamport (1978) uses an irreflexive relation, while our formulation is reflexive. While we can easily recover an irreflexive relation on actions from our reflexive relation on events, our version of the clock condition does not guarantee forward progress: a broken clock is yet a clock. In practice, the _inverse_ clock condition satisfied by other clocks covers the difference.
While we can always build a global assignment of timestamps from a system of clock replicas, we cannot always go in the reverse direction: a clock in the global sense may not be realizable as a data structure. For instance, given an execution with \(n\) actions, if \(\mathcal{C}[-]\) is a monotone assignment of integer timestamps to this execution, then so is \(\mathcal{C}[-]+n\). But an agent early in the execution has no knowledge of how many actions _will occur_ in total: any prediction it makes may be invalidated depending on what transpires in the future. So even if \(\mathcal{C}[-]\) can be realized as a system of local clock instances, \(\mathcal{C}[-]+n\) certainly cannot be.
We restrict our attention to such _realizable clocks_, as these make up the majority of clocks in the literature.13 Following Raynal and Singhal (1996), we treat logical clocks as an abstract data type (ADT) with two operators, _increment_ and _merge_. In addition, we assume a type Act of actions performable by any agent in the system.
Figure 7. An example execution with an assignment of timestamps by the (a) Lamport clock and (b) vector clock.
**Definition 5.1** (Clocks as an ADT).: A _logical clock_ is a type Time together with
* a family of operations \(\mathbf{increment}_{a}\) of type Time \(\rightarrow\) Time for every \(a:\) Act,
* an operation \(\sqcup\) (pronounced \(\mathbf{merge}\)) of type Time \(\times\) Time \(\rightarrow\) Time.
Moreover, Time must be preordered by a relation \(\leq\), such that for all timestamps \(t_{1},t_{2}:\) Time, the above operations are inflationary:
* \(t_{1}\leq\mathbf{increment}_{a}(t_{1})\),
* \(t_{1}\leq(t_{1}\sqcup t_{2})\), and
* \(t_{2}\leq(t_{1}\sqcup t_{2})\).
The \(\mathbf{increment}_{a}\) operation advances the clock's time depending on what the action \(a\) is. For instance, a vector clock maintains an index for every agent, and it increments a _different_ index depending on which agent performed the action. Since a CSD doesn't carry information about the provenance of an action, we take the elements of Act to include that information themselves.14
Footnote 14: Alternatively, we can take Act to be the type of process identifiers, so that any agent may increment any index of the clock \(-\) even one not intended to track that agent. Section 7.1 develops this perspective in more depth.
The \(\mathbf{merge}\) operation advances the clock's time to any time after the two given timestamps. This operation is used when an agent receives a message decorated with the sender's timestamp: by merging the sender's timestamp with the recipient's timestamp, any action occurring from that point on is guaranteed to have a timestamp no less than anything in its causal history.
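As a concrete instance of Definition 5.1, the following sketch (ours, in Python rather than the Agda of our development) implements Lamport's scalar clock as the ADT: Time is an integer, \(\mathbf{increment}_{a}\) adds one regardless of the action, and \(\sqcup\) is the maximum.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScalarClock:
    time: int = 0

    def increment(self, action=None):   # increment_a ignores which action a is
        return ScalarClock(self.time + 1)

    def merge(self, other):             # t1 ⊔ t2 = max(t1, t2)
        return ScalarClock(max(self.time, other.time))

    def __le__(self, other):            # the preorder on timestamps
        return self.time <= other.time

# Both operations are inflationary, as Definition 5.1 requires:
t1, t2 = ScalarClock(3), ScalarClock(5)
assert t1 <= t1.increment()
assert t1 <= t1.merge(t2) and t2 <= t1.merge(t2)
```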
### Update functions
Given a logical clock, our goal is to derive a global assignment of timestamps to events for any CSD. Following the pattern in Section 3.4, we first restrict our attention to an assignment of timestamps to the _bounding sites_ of an Act-labeled diagram \(X:\Gamma_{1}\rightrightarrows^{\text{Act}}\Gamma_{2}\).
Intuitively, we will want to interpret every \(\langle\mathbf{tick},a\rangle\) as an \(\mathbf{increment}_{a}\) operation, and every \(\langle\mathbf{join},\bullet\rangle\) as a \(\mathbf{merge}\) over the input timestamps. An Act-labeled CSD is then an expression arranging any number of clock operations on timestamps into a one-shot, compound operation over an entire configuration of clocks. In other words, every Act-labeled CSD yields a function mapping an assignment of timestamps on its input sites to an assignment of timestamps on its output sites.
**Definition 5.2** (Valuations).: The type of _valuations on \(\Gamma\)_, written \(\operatorname{Valuation}(\Gamma)\), is the type of functions \(v:\operatorname{Site}(\Gamma)\rightarrow\) Time assigning a timestamp to each site in \(\Gamma\).
**Definition 5.3** (Update functions).: For every logical clock, the interpretation \(\llbracket\!\!-\rrbracket\) of Act-labeled CSDs \(X:\Gamma_{1}\rightrightarrows^{\text{Act}}\Gamma_{2}\) into _update functions_ of type \(\operatorname{Valuation}(\Gamma_{1})\rightarrow\operatorname{Valuation}( \Gamma_{2})\) is defined as:
\[\llbracket\mathbf{tick},\bullet\mapsto a\rrbracket=\lambda v.\ \lambda-.\ \mathbf{increment}_{a}(v(\bullet))\] \[\llbracket\mathbf{fork},\_\rrbracket=\lambda v.\ \lambda-.\ v(\bullet)\] \[\llbracket\mathbf{join},\_\rrbracket=\lambda v.\ \lambda-.\ v(\mathbf{inj}_{l}(\bullet))\sqcup v(\mathbf{inj}_{r}(\bullet))\] \[\llbracket\mathbf{perm}\ \sigma,\_\rrbracket=\lambda v.\ v\circ\sigma^{-1}\qquad\llbracket\mathbf{id},\_\rrbracket=\lambda v.\ v\] \[\llbracket x\parallel y,f_{x}+f_{y}\rrbracket=\llbracket x,f_{x}\rrbracket+\llbracket y,f_{y}\rrbracket\qquad\llbracket x\ ;y,f_{x}+f_{y}\rrbracket=\llbracket y,f_{y}\rrbracket\circ\llbracket x,f_{x}\rrbracket\]
When the diagram \(X\) is understood, we will write \(\mathcal{C}_{v}[s]\) to mean \(\llbracket\!\!\!\llbracket X\rrbracket(v)(s)\).
Because a \(\mathbf{tick}\) transforms a valuation on one site into a valuation on one site, it serves as a very thin wrapper around \(\mathbf{increment}_{a}\). The new valuation can ignore its argument, because there is only one input to a \(\mathbf{tick}\). Likewise, \(\mathbf{fork}\) ignores its argument because both outputs receive their timestamp from the same input site, and \(\mathbf{join}\) merges both input sites onto the single output site.
In contrast, the **perm** constructor doesn't manipulate any timestamps directly. Instead, any given site is translated by the permutation \(\sigma\) into an index on the input valuation: the requested timestamp is just one of those in the input. The **id** constructor behaves similarly.
Finally, sequential and concurrent composition each combine the update functions from the two subdiagrams. Sequential composition is given by the usual composition of functions (\(\circ\)); and concurrent composition is given by the usual pairing of two functions over a sum type (\(+\)). We abuse pattern-matching notation somewhat by writing \(f_{x}+f_{y}\) on the left-hand side, where we would otherwise write simply \(f\) and compose its uses with the appropriate injection.
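A minimal sketch (ours) of this interpretation, reusing the ScalarClock above and encoding a valuation as a dictionary from site indices to timestamps. Concurrent composition is omitted here, since it only re-indexes the two disjoint halves; each constructor below acts on exactly the sites of its own configuration.

```python
def upd_tick(a):         # one input site (index 0), one output site
    return lambda v: {0: v[0].increment(a)}

def upd_fork():          # both outputs copy the single input timestamp
    return lambda v: {0: v[0], 1: v[0]}

def upd_join():          # the single output merges both input timestamps
    return lambda v: {0: v[0].merge(v[1])}

def upd_perm(sigma):     # sigma maps input index -> output index
    return lambda v: {sigma[i]: v[i] for i in v}

def upd_seq(fx, fy):     # sequential composition is function composition
    return lambda v: fy(fx(v))

v0 = {0: ScalarClock(0)}
assert upd_seq(upd_tick('a'), upd_tick('b'))(v0)[0].time == 2
assert upd_seq(upd_fork(), upd_join())(v0)[0].time == 0   # max(0, 0)
```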
### Clock functions
The interpretation of Definition 5.3 only tells us what timestamps a system terminates on, not the timestamps along the way. To obtain the latter, we must extend our function \(\mathcal{C}_{v}\) to accept any event (Definition 4.4), not just output sites. That is, we want a function \(\mathcal{C}:\operatorname{\text{Valuation}}(\Gamma_{1})\to(\operatorname{ \text{Event}}(X)\to\operatorname{\text{Time}})\), computing an assignment of timestamps to all events given an initial assignment of timestamps.
Following Section 4.2, we will select a subdiagram with the event of interest on its boundary. The timestamp at an event is then one of the timestamps on which that subdiagram terminates.
Definition 5.4 (The subdiagram before a cut).: The _subdiagram before a cut_ \(t\), denoted \(\mathit{before}(t)\), is the CSD consisting of the global steps appearing strictly before the cut \(t\) in a diagram \(X\).
Definition 5.5 (Clock function).: For every choice of logical clock and Act-labeled diagram \(X\), the _clock function_ \(\mathcal{C}\) of type \(\operatorname{Valuation}(\Gamma_{1})\to(\operatorname{Event}(X)\to\operatorname{Time})\) is given by
\[\mathcal{C}_{v}[(t,s)]=\llbracket\mathit{before}(t)\rrbracket(v)(s).\]
We consistently pun \(\mathcal{C}_{v}\) to mean either the update function (Definition 5.3) or the clock function depending on whether its argument is a site or an event.
Figure 8 depicts the execution from Figure 7 as a CSD, with timestamps assigned to events according to the Lamport clock, given a starting valuation of zeroes and using the interpretation in Definition 5.3. As discussed in Section 4.2, we can associate timestamps to _actions_ rather than events just by selecting one of the neighboring events for each action to represent it. In this case, convention suggests adopting the timestamp of the event immediately following each action.
## 6. Relating causal paths to clocks
In Section 4, we introduced an interpretation into paths \(e_{1}\rightsquigarrow e_{2}\), giving a proof-relevant causal order on events; and in Section 5, we introduced a family of interpretations into clock functions \(\mathcal{C}_{v}[-]\), giving an assignment of timestamps to events. In this section, we will relate these two
Figure 8. The execution from Figure 7 as a CSD, with Lamport timestamps assigned to events.
interpretations via a third, ultimately yielding a proof of the clock condition: if \(e_{1}\rightsquigarrow e_{2}\), then \(\mathcal{C}_{v}[e_{1}]\leq\mathcal{C}_{v}[e_{2}]\). Following the recipe in Section 3.4, we will again begin with a _spanning_ proof relating paths and timestamps on the bounding sites, then extend to an _interior_ proof relating paths and timestamps on all events.
### Inflationarity of update functions
The clock condition relates any two events in a diagram: if \(e_{1}\rightsquigarrow e_{2}\), then \(\mathcal{C}_{v}[e_{1}]\leq\mathcal{C}_{v}[e_{2}]\). If we restrict our attention to sites \(s_{1},s_{2}\) at the start and end of the diagram, respectively, then \(\mathcal{C}_{v}[e_{1}]\) reduces to simply \(v(s_{1})\), because the diagram before an initial site is the empty diagram \(\mathbf{id}\). This leads us to the following statement:
Theorem 6.1 (The update function is inflationary).: _Fix a choice of logical clock, and let \(X\) be an \(\operatorname{Act}\)-labeled CSD \(\Gamma_{1}\rightrightarrows^{\operatorname{Act}}\Gamma_{2}\) with an initial valuation \(v:\operatorname{Valuation}(\Gamma_{1})\). Then the clock's update function \(\mathcal{C}\) is inflationary on causally related sites:_
\[\forall(s_{1}:\operatorname{Site}(\Gamma_{1}))\,(s_{2}:\operatorname{Site}(\Gamma_{2})).\ (s_{1}\rightsquigarrow s_{2})\to(v(s_{1})\leq\mathcal{C}_{v}[s_{2}]).\]
This property is an analogue of the inflationary property satisfied by the clock operations of Definition 5.1: if an output _can be_ influenced by an input, then the output _must be_ bounded below by the input. In some ways, it would be surprising if Theorem 6.1 didn't hold of \(\mathcal{C}\), as it is built entirely from inflationary clock operations. Our proof will be built in kind, composing proofs over atomic steps to yield proofs for entire diagrams. We sketch the proof at a high level here; the details are available in our Agda development.
* The proof for a \(\mathbf{tick}\) step uses the fact that the clock's **increment** operator is inflationary: \(t\leq\mathbf{increment}_{a}(t)\) for every action \(a\) and timestamp \(t\). This is true by construction for any clock implementing Definition 5.1.
* The proof for a \(\mathbf{join}\) step uses the fact that the clock's \(\sqcup\) operator is inflationary on both arguments: both \(t_{1}\leq(t_{1}\sqcup t_{2})\) and \(t_{2}\leq(t_{1}\sqcup t_{2})\) for every pair of timestamps \(t_{1}\), \(t_{2}\). Again, this is definitionally true.
* The proof for a \(\mathbf{fork}\) step uses the fact that the clock's ordering relation is reflexive: we simply copy the input timestamp onto both outputs, so the actual values are unchanged. Indeed, this is true of \(\mathbf{perm}\) and \(\mathbf{id}\), too: all outputs are precisely the same as the (unique) inputs they are causally related to.
* The proof for a sequential composition (\(;\)) uses the fact that the clock's ordering relation is transitive. If we have a path through an intermediate site, where the timestamp at the intermediate site is bounded below by the input and bounded above by the output, we must use transitivity to obtain a direct relationship between the input and output.
* The proof for a concurrent composition requires no information about the clock; however, the _proof-relevance_ of our causal relation plays an essential role. We know that \(s_{1}\) and \(s_{2}\) are causally ordered because we were given a _specific_ path witnessing the fact; and any given path through a concurrent composition is a path wholly through one concurrent half of the diagram or the other. Thus, we can simply dispatch to whichever sub-proof applies to the path at hand.
Somewhat surprisingly, nowhere do we require antisymmetry: even though partial orders are traditionally used in logical clocks, _preorders_ are enough. This proof also holds for _every_ CSD, even those not reflecting a well-behaved system. All we require is that updates are inflationary -- the clock condition is not actually sensitive to _what_ those updates are, or _who_ performs them. This reveals a clean separation between clocks as ADTs and the protocols they are employed in; the clock condition is solely concerned with the ADT itself.
### Monotonicity of clock functions
Just as in Sections 4.2 and 5.3, we need to be a little creative to leverage Theorem 6.1 into a proof of the full clock condition. The key insight is that, if we have a path of type \(e_{1}\rightsquigarrow e_{2}\) and an initial valuation \(v\), we can run the clock's update function on the subdiagram _before_ \(e_{1}\). The resulting valuation is an initial valuation for the subdiagram _between_ \(e_{1}\) and \(e_{2}\), on which we can apply Theorem 6.1. Once more, we leave the finer details to our Agda implementation.
Theorem 6.2 (The clock function is monotonic).: _Fix a choice of logical clock, and let \(X\) be an \(\operatorname{Act}\)-labeled CSD \(\Gamma_{1}\rightrightarrows^{\operatorname{Act}}\Gamma_{2}\) with an initial valuation \(v:\operatorname{Valuation}(\Gamma_{1})\). Then the clock function \(\mathcal{C}\) is monotonic on causally related events:_
\[\forall(e_{1}\;e_{2}:\operatorname{Event}(X)).\ (e_{1}\rightsquigarrow e_{2})\to(\mathcal{C}_{v}[e_{1}]\leq\mathcal{C}_{v}[e_{2}]).\]
Theorem 6.2 tells us that every logical clock implementing the clock ADT of Definition 5.1 must necessarily satisfy the clock condition. In Section 7, we will actually instantiate these results on several clocks from the literature.
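As a small sanity check (ours), the Python sketches from Section 5 already exhibit an instance of Theorem 6.2 on a concrete diagram: along the path through tick ; fork ; join, the initial timestamp bounds the final one.

```python
# Reusing upd_tick / upd_fork / upd_join / upd_seq and ScalarClock from above.
X = upd_seq(upd_tick('a'), upd_seq(upd_fork(), upd_join()))
v = {0: ScalarClock(5)}
assert v[0] <= X(v)[0]   # 5 <= 6: the clock condition on this causal path
```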
## 7. Verified Logical Clocks
In Sections 4 to 6, we developed a framework for reasoning about causal relationships and logical clocks, culminating in a generic proof of the clock condition for implementations of the standard clock abstract data type. In this section we apply our results to several well-known clocks: Lamport's scalar clock (Lamport, 1978), Mattern's vector clock (Mattern, 1989), Raynal et al.'s matrix clock (Raynal et al., 1991), and Wuu and Bernstein's matrix clock (Wuu and Bernstein, 1984). Implementations of these clocks are included in our Agda development, each with an instantiation of our generic proof of the clock condition.
Although there is only one "scalar" clock and "vector" clock in common use, there are two distinct "matrix" clocks with two-dimensional timestamps. The clock of Raynal et al., like the others we discuss, merges timestamps strictly pointwise; in contrast, the clock of Wuu and Bernstein (1984) additionally merges a row at one index into a row at another, yielding a _noncommutative_ merge operator. To avoid confusion, we will refer to the former as _the RST clock_, and the latter as _the Wuu-Bernstein clock_. We will have more to say about the characteristics of the Wuu-Bernstein clock in Section 7.2; for now, we restrict our attention to the scalar, vector, and RST clocks.
### Classifier clocks
The scalar, vector, and RST clocks all follow a similar template: we _classify_ actions by some application-specific criterion, then maintain a count of observed actions for every class.
* The scalar clock classifies all actions into one single, universal class. Its timestamp consists of a single natural number, assessing a lower bound on the total number of actions that have occurred prior.
* The vector clock classifies actions based on who performed them, i.e. by _actor_. Its timestamp consists of a vector of natural numbers -- or, equivalently, a function assigning a natural to every actor.
* The RST clock classifies actions based on _subject_ and _object_: that is, every action is performed by some subject against some object. For Raynal et al. (1991), these actions are the submission of messages, where every message has both a sender (the subject) and a recipient (the object). The RST clock's timestamp is thus a table counting messages sent between any two actors -- or, equivalently, a function assigning a natural to every pair of actors.
Surprisingly, these clocks turn out to be structurally identical, differing only in their indexing classes \(I\). In all cases, timestamps are maps \(I\rightarrow\mathbb{N}\) ordered pointwise; the **increment** operation increments the value for a chosen class \(i\in I\) by one; and the merge of two timestamps is their pointwise
maximum. From elementary properties of natural numbers, this pointwise order is a preorder, and both operations are inflationary. Thus, we model all three clocks with one implementation, which we call a **classifier clock**, parametric in a classification function giving each action its class.
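The following sketch (ours, in Python rather than our Agda implementation) renders the classifier clock; the classification function is the parameter, and the dictionary-backed timestamps and the action fields `actor`, `subject`, `obj` are assumptions of the sketch.

```python
from collections import Counter

class ClassifierClock:
    def __init__(self, classify, counts=None):
        self.classify = classify
        self.counts = Counter(counts or {})   # absent classes count as zero

    def increment(self, action):
        c = Counter(self.counts)
        c[self.classify(action)] += 1         # bump the action's class
        return ClassifierClock(self.classify, c)

    def merge(self, other):                   # pointwise maximum
        keys = set(self.counts) | set(other.counts)
        return ClassifierClock(self.classify,
            {k: max(self.counts[k], other.counts[k]) for k in keys})

    def __le__(self, other):                  # the pointwise order
        return all(v <= other.counts[k] for k, v in self.counts.items())

# Scalar clock: one universal class.  Vector clock: classify by actor.
# RST clock: classify by the (subject, object) pair of the action.
scalar = ClassifierClock(lambda a: ())
vector = ClassifierClock(lambda a: a.actor)
rst    = ClassifierClock(lambda a: (a.subject, a.obj))
```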
By instantiating Definition 5.5 and Theorem 6.2 on the classifier clock, we obtain a global assignment of timestamps for every CSD, together with a proof that this assignment is monotone (i.e., the clock condition). When specialized to sender-recipient classes (that is, indices \(\mathrm{Pid}\times\mathrm{Pid}\)), this yields the first mechanized proof (to our knowledge) of the clock condition for the RST clock.
### Tensor clocks
The Wuu and Bernstein clock (Wuu and Bernstein, 1984) differs from the others in that it merges a row at the sender's index into a row at the recipient's, in addition to the usual pointwise merge. This merge operation is noncommutative, since it depends on which timestamp is considered the sender's, and which is considered the recipient's.
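For concreteness, here is a sketch (ours) of that merge on plain \(n\times n\) integer matrices, where `snd` and `rcv` are the sender's and recipient's process indices; the representation is an assumption of the sketch.

```python
def wb_merge(recipient, sender, snd, rcv):
    n = len(recipient)
    # the usual pointwise merge ...
    out = [[max(recipient[i][j], sender[i][j]) for j in range(n)]
           for i in range(n)]
    # ... plus folding the sender's own row into the recipient's row,
    # which is what makes the operation noncommutative
    out[rcv] = [max(out[rcv][j], sender[snd][j]) for j in range(n)]
    return out
```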
Kshemkalyani (2004) constructs a whole _tensor clock hierarchy_ of clocks with noncommutative merge, where a general index \((c,o_{1},o_{2},\dots)\) models information of the form "\(o_{1}\) knows that \(o_{2}\) knows that \(\dots c\) occurred at least \(N\) many times." Clocks in this hierarchy model a kind of transitive knowledge: if one agent observes some population of actions, and they send a message to another agent, then the recipient transitively observes that same population of actions. The Wuu and Bernstein clock falls out as a special case of Kshemkalyani's hierarchy.15
Footnote 15: The vector clock also appears as a member of the tensor clock hierarchy, though it exists as something of a base case – unlike higher tensor clocks, its merge is commutative.
We have implemented and verified the clock condition for the Wuu-Bernstein clock in our framework. However, the noncommutative merge operation poses some theoretical problems for the model of interpretation we developed in Section 5, which interprets the **join** atomic step into the clock's merge operator. We want to treat **join** as commutative (up to isomorphism), as with the products of sets or types. Therefore, an interpretation via Definition 5.3 of **join** into a noncommutative merge operator would take equivalent CSDs to non-equivalent update functions. That said, since all such update functions are increasing, our proof of the clock condition in Theorem 6.2 still holds -- there is no pair of equivalent CSDs for which the clock condition holds on one but not the other. Nonetheless, we hope to construct a more adequate interpretation that accounts for the full tensor clock hierarchy in the future.
## 8. Related Work
_MSCs and their semantics_. Message sequence charts (MSCs) are a diagrammatic language for representations of message-passing computations, widely used by practitioners and researchers (e.g., Lohrey and Muscholl (2004); Alur et al. (2000); Bollig et al. (2021); Di Giusto et al. (2023), as a small sampling). There have been various efforts to formalize MSCs or MSC-like diagrammatic languages, including the MSC standard itself (ITU-T, 2011) and others (Schatz et al., 1996), and investigations of the semantics of MSCs (Ladkin and Leue, 1993; Broy, 2005; Alur et al., 1996; Mauw and Reniers, 1994; Gehrke et al., 1998). However, we are not aware of any formalizations of MSCs that define them inductively, as we have done for CSDs. Rather, existing MSC formalizations are in terms of a given set of messages and a given set of processes.
Alur et al. (2006) note that MSCs admit "a variety of semantic interpretations", seemingly similar in spirit to our interpretations of CSDs. However, Alur et al.'s interpretations yield refinements of causal order - for example, they note that the meaning of a given MSC may depend on the choice of network model and fault model (e.g., whether message loss or reordering are possible). While
we give an interpretation of CSDs into a causal order, our range of possible semantic domains is greater: we also give interpretations into computable functions and into proofs.
_Mechanized reasoning about clocks and causality in concurrent systems_. In distributed systems, the notion of causal ordering arises in a myriad of settings, including causally consistent data stores (Ahamad et al., 1995; Lloyd et al., 2011), distributed snapshot protocols (Mattern, 1989; Acharya and Badrinath, 1992; Alagar and Venkatesan, 1994), causal message delivery protocols (Birman and Joseph, 1987; Schiper et al., 1989; Birman and Joseph, 1987; Birman et al., 1991), and conflict-free replicated data types (CRDTs) (Shapiro et al., 2011). In shared-memory systems, the need to reason about causality arises in the setting of data race detection for multithreaded programs (Pozniansky and Schuster, 2003; Flanagan and Freund, 2009). It is typical for such applications to use logical clocks of one kind or another to reify causal information.
There are several mechanically verified implementations of distributed algorithms that use logical clocks (Lesani et al., 2016; Gondelman et al., 2021; Nieto et al., 2022; Redmond et al., 2023). These proof developments focus on verifying properties of those higher-level algorithms (such as causal consistency of replicated databases (Lesani et al., 2016; Gondelman et al., 2021), convergence of CRDTs (Nieto et al., 2022), or safety of causal message broadcast (Nieto et al., 2022; Redmond et al., 2023)), and they (implicitly or explicitly) take the clock condition as an axiom.
The only other work that we are aware of on mechanized verification of the clock condition itself is by Mansky et al. (2017), whose work focuses on the verification of dynamic race detection algorithms. As part of their larger proof development, Mansky et al. proved in Coq that vector clocks precisely characterize the causal order. That is, they proved not only the clock condition for vector clocks, as we do here, but also the _inverse_ clock condition: if \(e_{i}\)'s timestamp is less than \(e_{j}\)'s timestamp, then \(e_{i}\) causally precedes \(e_{j}\). Unlike the (forward) clock condition, the inverse clock condition depends on the particular protocol governing use of the clock: a process must not increment an index owned by another process. While our proof development works for any clock that can be expressed as an ADT, we cannot yet prove protocol-dependent properties like the inverse clock condition. We hope to approach such properties in future work.
_Separation logics_. Separation logics (Reynolds, 2002) are program logics for reasoning about the correct use of resources -- concrete resources such as memory, but, excitingly, also _logical resources_ such as permissions and execution history. _Concurrent_ separation logics (O'Hearn, 2007) enable such reasoning about concurrent programs. The literature on separation logics and concurrent separation logics is too vast to summarize here, although O'Hearn (2019) offers an accessible introduction and Brookes and O'Hearn (2016) give an overview of important developments. CSDs are heavily inspired by concurrent separation logic, but we have not yet pursued a program logic based on CSDs. Wickerson et al. (2013)'s _ribbon proofs_, a diagrammatic proof system based on separation logic, could be an inspiration for future work in this direction.
Separation logic has been used in the service of reasoning about causality. Gondelman et al. (2021) and Nieto et al. (2022) both use the Aneris concurrent separation logic framework (Krogh-Jespersen et al., 2020), itself built on the Iris (Jung et al., 2018) framework, to verify the correctness of distributed systems in which causality is a central concern. However, the Aneris framework does not offer any particular support for reasoning about causality. In fact, we are not aware of program logics or verification frameworks that are specifically intended for reasoning about causality, which is perhaps surprising, considering the importance of causality in concurrent systems. Rather than reasoning about causal relationships as logical resources, as one would do when using Iris or Aneris, causality in a CSD-based proof system would manifest in the structure of the proof itself. |
2308.00012 | A topologically charged four-dimensional wormhole and the energy
conditions | In this research work, our primary focus revolves around the examination of a
specific category of traversable wormholes known as topologically charged
generalized Schwarzschild-Simpson-Visser-type wormhole,
$ds^2=-\Big(1-\frac{2\,M}{\sqrt{x^2+b^2}}\Big)\,dt^2+\Big(1-\frac{2\,M}{\sqrt{x^2+b^2}}\Big)^{-1}\,\Big(\frac{dx^2}{\alpha^2}\Big)+(x^2+a^2)\,(d\theta^2+\sin^2
\theta\,d\phi^2)$. This wormhole is uniquely defined by a pair of key
parameters (length scales $a$ and $b$), together with the global monopole
charge $\alpha$. A noteworthy outcome of our investigation is the observation
that the energy-momentum tensor associated with this wormhole complies with
both the weak energy condition (WEC) and the null energy condition (NEC).
Furthermore, incorporation of global monopole charge introduces a substantial
influence on the curvature properties of wormhole space-time and various
associated physical quantities derived from this geometry. | Faizuddin Ahmed | 2023-07-30T04:23:06Z | http://arxiv.org/abs/2308.00012v2 | # A topologically charged four-dimensional wormhole and the energy conditions
###### Abstract
In this research, we investigate a particular type of traversable wormhole metric named topologically charged generalized Schwarzschild-Simpson-Visser-type wormhole, which is characterized by two parameters. Notably, we find that the energy-momentum tensor adheres to the weak energy condition (WEC) and partially satisfies the null energy condition (NEC). Interestingly, we demonstrate that the introduction of topological charged has a significant impact on the space-time curvature and related properties derived from this.
pacs: 04.50.Kd pacs: 04.20.Jb Modified theories of gravity Exact solutions pacs: 14.80.Hv Magnetic Monopoles
## I Introduction
The concept of a traversable wormhole was first introduced by Morris and Thorne in 1988 [1]. Unlike previously considered wormholes, such as the Einstein-Rosen bridge [2] or Wheeler's microscopic charge-carrying wormholes [3], traversable wormholes are defined in a way that allows the two-way travel of objects. Before Morris and Thorne's work, Ellis [4] and Bronnikov [5] had independently provided a static and spherically symmetric wormhole space-time with a free phantom scalar field. Although the practical feasibility of creating or finding such wormholes remains doubtful, their study has nevertheless paved the way for highly productive research in modern times.
In the scientific literature, numerous wormhole space-time solutions, both with and without a cosmological constant, have been constructed and thoroughly investigated, significantly contributing to our understanding of these speculative structures (Ref. [6]). However, a primary concern with wormhole space-times is that they are exact solutions of the field equations that often violate one or more energy conditions. As a result, the stress-energy tensor associated with these solutions may not correspond to realistic physical matter, although it is not inherently impossible. Despite the challenges posed by the violation of the weak energy condition (WEC) and the null energy condition (NEC), the exploration of wormholes has stimulated considerable scientific curiosity and has sparked ongoing research in various related fields. The theoretical investigation of these exotic space-time geometries continues to expand our knowledge of general relativity and other fundamental aspects of physics.
Remarkable progress has been achieved in the field of non-vacuum wormhole space-times, addressing the long-standing challenge of exotic matter. It has been acknowledged that vacuum solutions of the field equations naturally satisfy the energy conditions. Building upon this research, a recent breakthrough was reported in Ref. [7], where a non-vacuum (or
vacuum) traversable wormhole space-time was introduced that adheres to both the weak and null energy conditions.
Furthermore, there have been significant developments in understanding wormhole structures. In Ref. [8], a vacuum defect wormhole was presented, representing a fascinating advancement in this area. This wormhole solution provides valuable insights into the nature of such structures. Additionally, in Ref. [9], a novel type of traversable wormhole solution was proposed, eliminating the need for exotic matter, which is a significant step forward in making wormholes more feasible and accessible. Extending the research to higher dimensions, Ref. [10] introduced a higher-dimensional extension of a vacuum-defect wormhole. This advancement opens up new possibilities for exploring wormholes in broader contexts.
Moreover, the studies have not been limited to theoretical considerations only. In Ref. [11], a Schwarzschild-Klinkhamer traversable wormhole was introduced, incorporating a cosmic string and global monopole while maintaining compliance with the energy conditions. This development highlights the potential interplay between different physical phenomena in the context of wormholes. Additionally, Ref. [12] introduced a Schwarzschild-type defect wormhole, further enriching our understanding of the diverse types of traversable wormholes.
Apart from these, other researchers have also made significant contributions to the study of wormholes. For instance, rotating cylindrical wormholes without exotic matter but not asymptotically flat have been investigated [13, 14]. Wormholes without exotic matter have been explored in the context of Einstein-Cartan theory [15], and stationary cylindrically symmetric wormhole models have been developed [16]. These diverse studies represent crucial advances in our understanding of traversable wormholes and their possible existence within the framework of general relativity and other theories of gravity. While the practical realization of such wormholes remains uncertain, the theoretical exploration of these solutions continues to provide valuable insights into the fundamental nature of space-time.
In various publications, several intriguing space-time geometries have been explored, each offering unique extensions and variations of well-known black hole and wormhole solutions. In Ref. [17], a novel space-time was introduced, representing a one-parameter extension of the Schwarzschild black hole. The outcome of this study revealed a family of metrics that smoothly interpolate between a Schwarzschild black hole, a black-bounce geometry, and a traversable wormhole. Another interesting investigation in Ref. [18] focused on a regular Fisher space-time. This space-time was coupled with a massless canonical scalar field, leading to the emergence of a traversable wormhole. In Ref. [19], an extension of the Reissner-Nordström space-time was reported, known as the black-bounce solution. This extension showcased intriguing properties and characteristics. Furthermore, in Ref. [20], a wide-ranging collection of globally regular black-bounce space-times was introduced. These space-times offer a generalization of the original SV model and have become a subject of significant interest in the field. These studies represent only a fraction of the diverse range of space-time models that have been explored in the literature. The exciting discoveries and innovative solutions found in these works continue to deepen our understanding of the complex nature of black holes, wormholes, and other fascinating aspects of theoretical physics.
In Ref. [21], gravitational lensing in a topologically charged wormhole space-time was discussed. The metric describing a topologically charged wormhole is given by
\[ds^{2}=-dt^{2}+\frac{dx^{2}}{\alpha^{2}}+(x^{2}+a^{2})\,(d\theta^{2}+\sin^{2} \theta\,d\phi^{2}), \tag{1}\]
where \(\alpha\) is the global monopole parameter. For \(\alpha\to 1\), this metric becomes an Ellis-Bronnikov-Morris-Thorne (EBMT)-type wormhole space-time.
Our objective is to investigate how the topological charge affects the curvature of space-time in a generalized two-parameter Schwarzschild-Simpson-Visser (SSV)-type wormhole. To achieve this, we analyze this space-time by solving Einstein's field equations. The
resulting energy-density and pressures are derived, and intriguingly, we find that the stress-energy tensor satisfies both the weak and null energy conditions. One remarkable finding is that the various physical quantities associated with the space-time curvature remain finite at \(x=0\) and tend to vanish as \(x\) approaches \(\pm\,\infty\). This observation suggests interesting and potentially significant implications for the behavior of the space-time around this wormhole configuration.
In this paper, we study a topologically charged generalized Schwarzschild-Simpson-Visser-type wormhole. Our investigation involves three main sections. In Section II, we introduce an ansatz for the metric describing the topologically charged generalized SSV-type wormhole and derive the corresponding field equations that govern its behavior. Moving on to Section III, we delve into a special case of this generalized metric and discuss various properties associated with the wormhole. Finally, in Section IV, we present our discussions and conclusions based on the results obtained in the previous sections, summarizing the main findings of our study and drawing connections to related research in the field.
Throughout this paper, we adopt a system of units where the speed of light \(c\) and the reduced Planck constant \(\hbar\) are set to unity (\(c=1=\hbar\)). Additionally, we utilize the convention where \(8\,\pi\,G=1\), with \(G\) representing the gravitational constant. These unit choices simplify the mathematical expressions and allow us to focus on the physical aspects of the wormhole and its curvature without unnecessary constants.
## II A topologically charged generalized SSV-wormhole space-time
In this section, we consider a generalized version of the topologically charged wormhole (1) by incorporating a factor \(A(x)\) analogous to the Schwarzschild solution. Therefore, we consider the following ansatz for a topologically charged wormhole space-time described by the line-element
\[ds^{2}=-\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}\,dt^{2}+ \Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1}\,\frac{dx^{2}}{\alpha^{2 }}+(x^{2}+a^{2})\,(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}), \tag{2}\]
where \(M\) represents the mass of the object, and the parameters \((b,a)\) are non-zero positive constants with
\[0<a\leq b,\quad b>2\,M. \tag{3}\]
The different coordinates are in the ranges
\[-\infty<t<+\infty,\quad-\infty<x<+\infty,\quad 0\leq\theta<\pi,\quad 0\leq \phi<2\,\pi. \tag{4}\]
For \(M\to 0\), this space-time ansatz (2) reduces to the topologically charged wormhole metric (1). For \(b=a\) and \(\alpha\to 1\), the space-time (2) reduces to the well-known Schwarzschild-Simpson-Visser wormhole [17], which is a one-parameter modification of the Schwarzschild solution (see also Refs. [18, 19]). In this analysis, we are mainly interested in \(b\neq a\) and \(\alpha\neq 1\). Thus, the wormhole metric (2) is a two-parameter \((a,b)\) modification of the Schwarzschild solution, which we call a topologically charged generalized Schwarzschild-Simpson-Visser-type wormhole. Note that for \(b=0=a\), one recovers from metric (2) the Schwarzschild-like solution with a global monopole.
The nonzero components of the Ricci tensor \(R_{\mu\nu}\) for the space-time (2) are given by
\[R_{tt}=\frac{M\,\alpha^{2}\,(3\,b^{2}\,x^{2}+a^{2}\,b^{2}-2\,a^ {2}\,x^{2})}{(x^{2}+b^{2})^{5/2}\,(x^{2}+a^{2})}\,\Big{(}1-\frac{2\,M}{\sqrt{x ^{2}+b^{2}}}\Big{)},\] \[R_{xx}=-\frac{M\,(b^{2}-2\,x^{2})}{(x^{2}+b^{2})^{5/2}}\,\Big{(} 1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1}-\frac{2}{(x^{2}+a^{2})^{2}}\, \Bigg{[}a^{2}+\frac{M\,x^{2}\,(x^{2}+a^{2})}{(x^{2}+b^{2})^{3/2}}\times\] \[\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1}\Bigg{]},\]
\[R_{\theta\theta}=1-\alpha^{2}+\frac{2\,M\,\alpha^{2}\,b^{2}}{(x^{2}+b^{2})^{3/2}}, \quad R_{\phi\phi}=R_{\theta\theta}\,\sin^{2}\theta. \tag{5}\]
The different scalar curvatures, such as the Ricci scalar \(R\), the quadratic Ricci invariant \(R_{\mu\nu}\,R^{\mu\nu}\), and the Kretschmann scalar \({\cal K}=R_{\mu\nu\rho\sigma}\,R^{\mu\nu\rho\sigma}\) for the space-time (2) are given by
\[R=\frac{2}{(x^{2}+a^{2})^{2}\,(x^{2}+b^{2})^{5/2}}\Big{[}-a^{4}\,M\,(b^{2}-2\,x^{2})\,\alpha^{2}-x^{6}\,\sqrt{b^{2}+x^{2}}\,(-1+\alpha^{2})+b^{4}\,x^{2}\,\Big{(}2\,M\,\alpha^{2}-\sqrt{b^{2}+x^{2}}\,(-1+\alpha^{2})\Big{)}-b^{2}\,x^{4}\,\Big{(}M\,\alpha^{2}+2\,\sqrt{b^{2}+x^{2}}\,(-1+\alpha^{2})\Big{)}+a^{2}\,\Big{\{}2\,b^{2}\,x^{2}\,\Big{(}M\,\alpha^{2}+\sqrt{b^{2}+x^{2}}\,(1-2\,\alpha^{2})\Big{)}+b^{4}\,\Big{(}4\,M\,\alpha^{2}+\sqrt{b^{2}+x^{2}}\,(1-2\,\alpha^{2})\Big{)}+x^{4}\,\Big{(}4\,M\,\alpha^{2}+\sqrt{b^{2}+x^{2}}\,(1-2\,\alpha^{2})\Big{)}\Big{\}}\Big{]},\] \[R_{\mu\nu}\,R^{\mu\nu}=\frac{1}{(x^{2}+a^{2})^{4}\,(x^{2}+b^{2})^{5}}\,\Bigg{[}M^{2}\,(a^{2}+x^{2})^{2}\,\Big{(}3\,b^{2}\,x^{2}+a^{2}\,(b^{2}-2\,x^{2})\Big{)}^{2}\,\alpha^{4}+2\,(a^{2}+x^{2})^{2}\,(b^{2}+x^{2})^{5}\,\Big{\{}1+\Big{(}-1+\frac{2\,M\,b^{2}}{(x^{2}+b^{2})^{3/2}}\Big{)}\,\alpha^{2}\Big{\}}^{2}+\Big{\{}3\,b^{2}\,M\,x^{4}+a^{4}\,M\,(b^{2}-2\,x^{2})+2\,a^{2}\,\Big{(}x^{4}\,(-3\,M+\sqrt{b^{2}+x^{2}})+b^{4}\,(-2\,M+\sqrt{b^{2}+x^{2}})+2\,b^{2}\,x^{2}\,(-M+\sqrt{b^{2}+x^{2}})\Big{)}\Big{\}}^{2}\,\alpha^{4}\Bigg{]},\] \[\mathcal{K}=\frac{1}{(x^{2}+b^{2})^{6}\,(x^{2}+a^{2})^{4}}\,\Big{[}8\,M^{2}\,x^{4}\,(a^{2}+x^{2})^{2}\,(b^{2}+x^{2})^{3}\,\alpha^{4}+4\,M^{2}\,(b^{2}-2\,x^{2})^{2}\,(a^{2}+x^{2})^{4}\,(b^{2}+x^{2})\,\alpha^{4}+4\,(b^{2}+x^{2})^{3}\,\Big{\{}M\,x^{2}\,(a^{2}+x^{2})+a^{2}\,(b^{2}+x^{2})\,\Big{(}-2\,M+\sqrt{b^{2}+x^{2}}\Big{)}\Big{\}}^{2}\,\alpha^{4}+4\,(b^{2}+x^{2})^{3}\,\Big{\{}M\,x^{4}+a^{2}\,\Big{(}b^{2}\,\big{(}-2\,M+\sqrt{b^{2}+x^{2}}\big{)}+x^{2}\,\big{(}-M+\sqrt{b^{2}+x^{2}}\big{)}\Big{)}\Big{\}}^{2}\,\alpha^{4}+2\,(b^{2}+x^{2})^{5}\,\Big{\{}(a^{2}+x^{2})\,\sqrt{b^{2}+x^{2}}-x^{2}\,\Big{(}-2\,M+\sqrt{b^{2}+x^{2}}\Big{)}\,\alpha^{2}\Big{\}}^{2}+2\,(b^{2}+x^{2})^{5}\,\Big{\{}a^{2}\,\sqrt{b^{2}+x^{2}}+x^{2}\,\Big{(}2\,M\,\alpha^{2}-\sqrt{b^{2}+x^{2}}\,(-1+\alpha^{2})\Big{)}\Big{\}}^{2}\Big{]}. \tag{6}\]
We have calculated the nonzero components of the Riemann tensor \(R^{\lambda}_{\mu\nu\sigma}\) for the space-time (2) and these are given by
\[R^{t}_{xxt}=\frac{M\,(b^{2}-2\,x^{2})}{(x^{2}+b^{2})^{5/2}}\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1},\quad R^{t}_{\theta\theta t}=\frac{M\,\alpha^{2}\,x^{2}}{(x^{2}+b^{2})^{3/2}},\quad R^{t}_{\phi\phi t}=R^{t}_{\theta\theta t}\,\sin^{2}\theta,\] \[R^{x}_{txt}=\alpha^{2}\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{2}\,R^{t}_{xxt},\] \[R^{x}_{\theta\theta x}=\frac{\alpha^{2}\Big{[}M\,x^{2}\,(x^{2}+a^{2})+a^{2}\,(x^{2}+b^{2})^{3/2}\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}\Big{]}}{(x^{2}+a^{2})\,(x^{2}+b^{2})^{3/2}},\] \[R^{x}_{\phi\phi x}=R^{x}_{\theta\theta x}\,\sin^{2}\theta\quad,\quad R^{\theta}_{t\theta t}=R^{\phi}_{t\phi t}=\frac{1}{x^{2}+a^{2}}\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}\,R^{t}_{\theta\theta t},\] \[R^{\theta}_{x\theta x}=R^{\phi}_{x\phi x}=-\frac{1}{(x^{2}+a^{2})^{2}}\,\Big{[}a^{2}+\frac{M\,x^{2}\,(x^{2}+a^{2})}{(x^{2}+b^{2})^{3/2}}\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1}\Big{]},\] \[R^{\theta}_{\phi\phi\theta}=-R^{\phi}_{\theta\phi\theta}\,\sin^{2}\theta\quad,\quad R^{\phi}_{\theta\phi\theta}=1-\frac{x^{2}\,\alpha^{2}}{(x^{2}+a^{2})}\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}. \tag{7}\]
From the preceding analysis, it is evident that the presence of a global monopole has a notable impact on the curvature of the space-time under investigation. Various physical
entities, such as the Ricci tensor, the Ricci scalar, the Kretschmann scalar, and the quadratic Ricci invariant, as well as the Riemann tensor, are influenced by the global monopole.
In Figure 1, we illustrate the behavior of the Ricci scalar with respect to the coordinate \(x\) for various parameter values. On the left plot, we set \(\alpha=1/2\) and \(M=1=a\). On the right plot, we fix \(b=2.5\) and \(M=1=a\). As shown in the plots, increasing the values of the parameter \(b\) (where \(b>2\,M\)) and the global monopole parameter \(\alpha\) leads to a notable decrease in the Ricci scalar.
Figure 2 presents the behavior of the Kretschmann scalar as a function of the coordinate \(x\) for various parameter values. On the left plot, we set \(\alpha=1/2\) and \(M=1=a\), while on the right plot, we use \(b=2.5\) and \(M=1=a\). As depicted in the plots, increasing the value of the parameter \(b\) (where \(b>2\,M\)) results in a notable decrease in the Kretschmann scalar. This trend indicates that the space-time curvature is influenced by variations in parameter \(b\). On the other hand, the Kretschmann scalar increases with increasing values of the parameter \(\alpha\).
Figure 3 illustrates the behavior of the quadratic Ricci invariant as a function of the coordinate \(x\) for different parameter values. On the left plot, we set \(\alpha=1/2\) and \(M=1=a\), while on the right plot, we use \(b=2.5\) and \(M=1=a\). As depicted in the plots, increasing the value of the parameter \(b\) (where \(b>2\,M\)) and the global monopole parameter \(\alpha\) leads to a significant decrease in the quadratic Ricci invariant.
Overall, these graphical representations further emphasize the important role played by the global monopole parameter \(\alpha\) and parameters \((a,b)\) in shaping the curvature characteristics of the space-time. By understanding how these parameters influence the Ricci scalar, the Kretschmann scalar, and the quadratic Ricci invariant, we gain valuable insights into the intricate properties of the generalized Schwarzschild-Simpson-Visser-type (SSV) wormhole metric and its physical implications.
One can easily find that at \(x=0\), the scalar curvatures from Eq. (6) become
\[R|_{x=0} = \frac{2}{a^{2}}\left[1-\frac{\left(2\,b^{3}+a^{2}\,M-4\,b^{2}\,M\right)\alpha^{2}}{b^{3}}\right]\!,\] \[R_{\mu\nu}\,R^{\mu\nu}|_{x=0} = \frac{1}{a^{4}\,b^{6}}\,\Big{[}a^{4}\,M^{2}\,\alpha^{4}+\left(2\,b^{3}+a^{2}\,M-4\,b^{2}\,M\right)^{2}\alpha^{4}+2\,b^{4}\,\Big{\{}b+\left(-b+2\,M\right)\alpha^{2}\Big{\}}^{2}\Big{]},\] \[\mathcal{K}|_{x=0} = \frac{4}{a^{4}\,b^{6}}\,\Big{[}M^{2}\,\alpha^{4}\,a^{4}+8\,b^{4}\,M\,(-b+M)\,\alpha^{4}+b^{6}\,(1+2\,\alpha^{4})\Big{]} \tag{8}\]
which are finite and vanish for \(x\to\pm\,\infty\). Also, the nonzero components of the Riemann tensor from Eq. (7) at \(x=0\) become
\[R^{t}_{xxt}=\frac{M}{b^{3}}\,\Big{(}1-\frac{2\,M}{b}\Big{)}^{-1},\quad R^{x}_{txt}=\frac{M\,\alpha^{2}}{b^{3}}\,\Big{(}1-\frac{2\,M}{b}\Big{)},\quad R^{x}_{\theta\theta x}=\alpha^{2}\,\Big{(}1-\frac{2\,M}{b}\Big{)},\] \[R^{x}_{\phi\phi x}=R^{x}_{\theta\theta x}\,\sin^{2}\theta,\quad R^{\theta}_{x\theta x}=R^{\phi}_{x\phi x}=-\frac{1}{a^{2}},\quad R^{\phi}_{\theta\phi\theta}=1,\quad R^{\theta}_{\phi\phi\theta}=-\sin^{2}\theta \tag{9}\]
which are finite provided \(b\neq 2\,M\).
We choose the following null vector \(k^{\mu}\), time-like unit vector \(U^{\mu}\), and space-like unit vector \(\eta^{\mu}\) along the \(x\)-direction, defined by
\[k^{\mu}=\Bigg{[}-\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)} ^{-1/2},\alpha\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{1/2},0,0\Bigg{]},\] \[U^{\mu}=\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1/2}\, \delta^{\mu}_{t},\quad\eta^{\mu}=\alpha\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{ 2}}}\Big{)}^{1/2}\,\delta^{\mu}_{x},\] \[\text{where}\qquad-U^{\mu}\,U_{\mu}=1=\eta^{\mu}\,\eta_{\mu},\quad U _{\mu}\,\eta^{\mu}=0=k^{\mu}\,k_{\mu},\quad k_{\mu}\,\eta^{\mu}=1=U_{\mu}\,k^{ \mu}. \tag{10}\]
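These normalization and orthogonality relations are straightforward to verify mechanically; the following SymPy snippet (our own check, not part of the original derivation) contracts the vectors of Eq. (10) with the metric (2):

```python
# A verification sketch (ours): check the normalization and orthogonality
# relations of Eq. (10) against metric (2).
import sympy as sp

x, th, M, a, b, al = sp.symbols('x theta M a b alpha', positive=True)
f = 1 - 2*M/sp.sqrt(x**2 + b**2)
# metric (2): diag(-f, f^{-1}/alpha^2, x^2 + a^2, (x^2 + a^2) sin^2(theta))
g = sp.diag(-f, 1/(al**2*f), x**2 + a**2, (x**2 + a**2)*sp.sin(th)**2)

k   = sp.Matrix([-1/sp.sqrt(f), al*sp.sqrt(f), 0, 0])   # null vector k^mu
U   = sp.Matrix([1/sp.sqrt(f), 0, 0, 0])                # time-like unit vector U^mu
eta = sp.Matrix([0, al*sp.sqrt(f), 0, 0])               # space-like unit vector eta^mu

dot = lambda u, v: sp.simplify((u.T * g * v)[0])
# expect: k.k = 0, U.U = -1, eta.eta = 1, U.eta = 0, k.eta = 1, U.k = 1
print(dot(k, k), dot(U, U), dot(eta, eta), dot(U, eta), dot(k, eta), dot(U, k))
```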
Let us examine the field equations \(G_{\mu\nu}=\Big{(}R_{\mu\nu}-\frac{1}{2}\,g_{\mu\nu}\,R\Big{)}=T_{\mu\nu}\) by considering the following
energy-momentum tensor form
\[T_{\mu\nu}=\left(\rho+p_{t}\right)U_{\mu}\,U_{\nu}+p_{t}\,g_{\mu\nu}+\left(p_{x}-p _{t}\right)\eta_{\mu}\,\eta_{\nu}, \tag{11}\]
where \(\rho\), \(p_{x}\), and \(p_{t}\) represent the energy density, the pressure component along the \(x\)-direction, and the tangential pressure, respectively. For the space-time (2), these physical quantities are given by
\[\rho=-T_{t}^{t}=-G_{t}^{t}=\frac{M\,\alpha^{2}\left(3\,b^{2}\,x^{2}+a^{2}\,b^{2}-2\,a^{2}\,x^{2}\right)}{(x^{2}+b^{2})^{5/2}\,(x^{2}+a^{2})}+R/2>0,\] \[p_{x}=T_{x}^{x}=G_{x}^{x}=-\frac{M\,\alpha^{2}\left(b^{2}-2\,x^{2}\right)}{(x^{2}+b^{2})^{5/2}}-\frac{2\,\alpha^{2}}{(x^{2}+a^{2})^{2}}\left[a^{2}\left(1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\right)+\frac{M\,x^{2}\,(x^{2}+a^{2})}{(x^{2}+b^{2})^{3/2}}\right]-R/2,\] \[p_{t}=T_{\theta}^{\theta}=G_{\theta}^{\theta}=\frac{1}{x^{2}+a^{2}}\left[1-\alpha^{2}+\frac{2\,M\,\alpha^{2}\,b^{2}}{(x^{2}+b^{2})^{3/2}}\right]-R/2=T_{\phi}^{\phi}=G_{\phi}^{\phi}, \tag{12}\]

where \(R\) is given in Eq. (6).

Figure 1: The Ricci scalar

Figure 2: The Kretschmann scalar

Figure 3: The quadratic Ricci invariant
Now, we check the validity of the different energy conditions. With the help of (10), we have
\[T_{\mu\nu}\,U^{\mu}\,U^{\nu}=\rho>0 \tag{13}\]
a positive energy density. To satisfy the null energy condition, the relations \(\rho+p_{x}>0\) and \(\rho+p_{t}>0\) must hold. In our case, we find
\[\rho+p_{x}=\frac{M\,\alpha^{2}\,(3\,b^{2}\,x^{2}+a^{2}\,b^{2}-2 \,a^{2}\,x^{2})}{(x^{2}+b^{2})^{5/2}\,(x^{2}+a^{2})}-\frac{M\,\alpha^{2}\,(b^ {2}-2\,x^{2})}{(x^{2}+b^{2})^{5/2}}\] \[-\frac{2\,\alpha^{2}}{(x^{2}+a^{2})^{2}}\,\Big{[}a^{2}\,\Big{(} 1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}+\frac{M\,x^{2}\,(x^{2}+a^{2})}{(x^{2} +b^{2})^{3/2}}\Big{]},\] \[\rho+p_{t}=\frac{M\,\alpha^{2}\,(3\,b^{2}\,x^{2}+a^{2}\,b^{2}-2\, a^{2}\,x^{2})}{(x^{2}+b^{2})^{5/2}\,(x^{2}+a^{2})}+\frac{1}{x^{2}+a^{2}}\, \Big{[}1-\alpha^{2}+\frac{2\,M\,\alpha^{2}\,b^{2}}{(x^{2}+b^{2})^{3/2}}\Big{]}. \tag{14}\]
It is interesting to note that at \(x=0\), these physical quantities (12) are finite and given by
\[\rho|_{x=0}=\frac{1}{a^{2}}\,\Big{[}1-2\,\alpha^{2}\,\Big{(}1- \frac{2\,M}{b}\Big{)}\Big{]}>0,\] \[p_{x}|_{x=0}=-\frac{1}{a^{2}},\quad p_{t}|_{x=0}=\frac{\alpha^{2 }}{a^{2}}\,\Big{(}1-\frac{2\,M}{b}\Big{)}+\frac{M\,\alpha^{2}}{b^{3}} \tag{15}\]
where we have restricted \(\alpha^{2}\,\Big(1-\frac{2\,M}{b}\Big)<1/2\) with \(b>2\,M\); in gravitation and cosmology, the global monopole parameter \(\alpha\) lies in the interval \(0<\alpha<1\). The physical quantities associated with the stress-energy tensor vanish for \(x\to\pm\,\infty\). From the above, we observe that \((\rho+p_{x})<0\) everywhere, even at \(x=0\), while \((\rho+p_{t})>0\) for \(x\geq 0\). This indicates that even though the energy density \(\rho\) is positive everywhere, the null energy condition is only partially satisfied.
However, if one chooses the global monopole parameter \(\alpha\) to be very small, then the terms associated with \(\alpha^{2}\) and its higher powers can be neglected. In that situation, the various physical quantities obtained above are approximated as follows:
\[\rho\approx\frac{1}{x^{2}+a^{2}}>0,\quad p_{x}\approx-\frac{1}{x^{2}+a^{2}},\quad p_{t}\approx 0,\] \[R\approx\frac{2}{x^{2}+a^{2}},\quad{\cal K}\approx\frac{4}{(x^{2}+a^{2})^{2}},\quad R_{\mu\nu}\,R^{\mu\nu}\approx\frac{2}{(x^{2}+a^{2})^{2}} \tag{16}\]
which are finite at \(x=0\) and vanish for \(x\to\pm\,\infty\).
Thus, from the above approximation, we observe that the stress-energy tensor satisfies the weak and null energy conditions. Throughout the analysis, we notice that the presence of global monopoles significantly influences all the physical properties associated with the space-time curvature and, as a result, leads to notable changes in the results.
**III. A topologically charged Schwarzschild-Simpson-Visser-type wormhole. -** In this section, we consider the case where \(b^{2}=a^{2}\). In this case, the space-time (2) takes the following form
\[ds^{2}=-\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}\,dt^{2}+ \Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1}\,\Big{(}\frac{dx^{2}}{ \alpha^{2}}\Big{)}+(x^{2}+b^{2})\,(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}). \tag{17}\]
For \(\alpha\to 1\), the space-time (17) reduces to the well-known Schwarzschild-Simpson-Visser wormhole [17].
The non-zero components of the Einstein tensor \(G_{\mu\nu}\) for the metric (17) are
\[G_{tt}=\Bigg{[}\frac{4\,M\,\alpha^{2}\,b^{2}}{(x^{2}+b^{2})^{5/2} }+\frac{x^{2}+b^{2}-\alpha^{2}\,(2\,b^{2}+x^{2})}{(x^{2}+b^{2})^{2}}\Bigg{]} \,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)},\] \[G_{xx}=\frac{\Big{(}-b^{2}+x^{2}\,(-1+\alpha^{2})\Big{)}}{\alpha ^{2}\,(x^{2}+b^{2})^{2}}\,\Big{(}1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big{)}^{-1},\] \[G_{\theta\theta}=\frac{\alpha^{2}\,b^{2}\,(-M+\sqrt{x^{2}+b^{2} })}{(x^{2}+b^{2})^{3/2}},\quad G_{\phi\phi}=G_{\theta\theta}\,\sin^{2}\theta. \tag{18}\]
The Ricci scalar \(R\), the quadratic Ricci invariant \(R_{\mu\nu}\,R^{\mu\nu}\), and the Kretschmann scalar for the metric (17) are given by
\[R=\frac{2}{(b^{2}+x^{2})}+\frac{6\,b^{2}\,M\,\alpha^{2}}{(b^{2}+ x^{2})^{5/2}}-\frac{2\,(2\,b^{2}+x^{2})\,\alpha^{2}}{(b^{2}+x^{2})^{2}},\] \[R_{\mu\nu}\,R^{\mu\nu}=\frac{1}{(b^{2}+x^{2})^{6}\,\Big{(}-2\,M+ \sqrt{b^{2}+x^{2}}\Big{)}^{2}}\,\Bigg{[}b^{4}\,M^{2}\,\Big{(}b^{2}+x^{2}-2\,M \,\sqrt{b^{2}+x^{2}}\Big{)}^{2}\,\alpha^{4}\] \[+b^{4}\,\Big{\{}-7\,M\,x^{2}+6\,M^{2}\,\sqrt{b^{2}+x^{2}}+2\,x^{2 }\,\sqrt{b^{2}+x^{2}}+b^{2}\,\Big{(}-7\,M+2\,\sqrt{b^{2}+x^{2}}\Big{)}\Big{\}} ^{2}\,\alpha^{4}\] \[+2\,(b^{2}+x^{2})\,\Big{(}-2\,M+\sqrt{b^{2}+x^{2}}\Big{)}^{2}\, \Big{\{}(b^{2}+x^{2})^{3/2}+\Big{(}2\,b^{2}\,M-(b^{2}+x^{2})^{3/2}\Big{)}\, \alpha^{2}\Big{\}}^{2}\Bigg{]},\] \[\mathcal{K}=\frac{2}{(x^{2}+b^{2})^{5}}\,\Big{[}4\,M^{2}\,x^{4}\, \alpha^{4}+2\,M^{2}\,(b^{2}-2\,x^{2})^{2}\,\alpha^{4}\] \[+4\,\Big{\{}M\,x^{2}+b^{2}\,\Big{(}-2\,M+\sqrt{b^{2}+x^{2}}\Big{)} \Big{\}}^{2}\,\alpha^{4}\] \[+2\,\Big{\{}(b^{2}+x^{2})^{3/2}+2\,M\,x^{2}\,\alpha^{2}-x^{2}\, \sqrt{b^{2}+x^{2}}\,\alpha^{2}\Big{\}}^{2}\Big{]}. \tag{19}\]
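As a cross-check of these lengthy expressions, the following SymPy sketch (ours; the symbol names are our own) recomputes the Ricci scalar directly from the metric (17) and evaluates it at \(x=0\), where it should reproduce the value quoted in Eq. (21) below:

```python
# A sketch (ours): recompute the Ricci scalar of metric (17) with SymPy and
# evaluate it at x = 0; compare with R|_{x=0} in Eq. (21).
import sympy as sp

t, x, th, ph = sp.symbols('t x theta phi')
M, b, al = sp.symbols('M b alpha', positive=True)
coords = [t, x, th, ph]

f = 1 - 2*M/sp.sqrt(x**2 + b**2)
g = sp.diag(-f, 1/(al**2*f), x**2 + b**2, (x**2 + b**2)*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                                     + sp.diff(g[d, c], coords[b])
                                     - sp.diff(g[b, c], coords[d]))
                         for d in range(4))/2)
         for c in range(4)] for b_ in range(4) for b in [b_]] for a in range(4)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                      + Gamma^a_{ae} Gamma^e_{bc} - Gamma^a_{ce} Gamma^e_{ba}
def ricci(bb, cc):
    r = sum(sp.diff(Gam[a][bb][cc], coords[a]) - sp.diff(Gam[a][bb][a], coords[cc])
            for a in range(4))
    r += sum(Gam[a][a][e]*Gam[e][bb][cc] - Gam[a][cc][e]*Gam[e][bb][a]
             for a in range(4) for e in range(4))
    return r

R = sp.simplify(sum(ginv[i, j]*ricci(i, j) for i in range(4) for j in range(4)))
print(sp.simplify(R.subs(x, 0)))  # expect 6*M*alpha**2/b**3 + (2 - 4*alpha**2)/b**2
```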
In Figure 4, we plot the Ricci scalar as a function of \(x\) and show that it is free from divergence at \(x=0\). In the left panel, we choose \(\alpha=1/2\) and \(M=1\); in the right panel, we choose \(b=2.3\) and \(M=1\). We see that increasing the value of the parameter \(b\) (where \(b>2\,M\)) or the global monopole parameter \(\alpha\) gradually decreases the Ricci scalar, which vanishes for \(x\to\pm\,\infty\).

In Figure 5, we plot the Kretschmann scalar as a function of \(x\) and show that it is also free from divergence at \(x=0\). In the left panel, we choose \(\alpha=1/2\) and \(M=1\); in the right panel, we choose \(b=2.3\) and \(M=1\). The Kretschmann scalar decreases with increasing values of the parameter \(b\) (where \(b>2\,M\)) and increases with increasing values of \(\alpha\).

In Figure 6, we plot the quadratic Ricci invariant as a function of \(x\) for different parameter values. In the left panel, we choose \(\alpha=1/2\) and \(M=1\); in the right panel, we choose \(b=2.3\) and \(M=1\). Increasing the value of the parameter \(b\) (where \(b>2\,M\)) or the global monopole parameter \(\alpha\) decreases the quadratic Ricci invariant.
The non-zero components of the Riemann tensor \(R^{\lambda}_{\mu\nu\sigma}\) are
\[R^{t}_{xxt}=\frac{M\,(b^{2}-2\,x^{2})}{(x^{2}+b^{2})^{5/2}}\,\Big(1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big)^{-1},\quad R^{t}_{\theta\theta t}=\frac{M\,\alpha^{2}\,x^{2}}{(x^{2}+b^{2})^{3/2}},\quad R^{t}_{\phi\phi t}=R^{t}_{\theta\theta t}\,\sin^{2}\theta,\] \[R^{x}_{txt}=\alpha^{2}\,\Big(1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big)^{2}\,R^{t}_{xxt},\quad R^{x}_{\theta\theta x}=\frac{\alpha^{2}}{(x^{2}+b^{2})^{3/2}}\,\Big[M\,x^{2}+b^{2}\,\big(-2\,M+\sqrt{x^{2}+b^{2}}\big)\Big],\] \[R^{x}_{\phi\phi x}=R^{x}_{\theta\theta x}\,\sin^{2}\theta,\quad R^{\theta}_{t\theta t}=R^{\phi}_{t\phi t}=\frac{1}{x^{2}+b^{2}}\,\Big(1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big)\,R^{t}_{\theta\theta t},\] \[R^{\theta}_{x\theta x}=R^{\phi}_{x\phi x}=-\frac{1}{x^{2}+b^{2}}\,\Big[b^{2}+\frac{M\,x^{2}}{\sqrt{x^{2}+b^{2}}}\,\Big(1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big)^{-1}\Big],\] \[R^{\theta}_{\phi\phi\theta}=-R^{\phi}_{\theta\phi\theta}\,\sin^{2}\theta,\quad R^{\phi}_{\theta\phi\theta}=1-\frac{x^{2}\,\alpha^{2}}{x^{2}+b^{2}}\,\Big(1-\frac{2\,M}{\sqrt{x^{2}+b^{2}}}\Big). \tag{20}\]
At \(x=0\), the Riemann tensor components (20) reduce to those obtained in Eq. (9). The scalar curvatures given in (19) are finite at \(x=0\) and given by
\[R|_{x=0} = \frac{6\,M\,\alpha^{2}}{b^{3}}+\frac{2-4\,\alpha^{2}}{b^{2}},\] \[R_{\mu\nu}\,R^{\mu\nu}|_{x=0} = \frac{1}{b^{6}}\,\Big[8\,b\,M\,\alpha^{2}+2\,M\,(-10\,b+9\,M)\,\alpha^{4}+b^{2}\,\Big(2-4\,\alpha^{2}+6\,\alpha^{4}\Big)\Big],\] \[{\cal K}|_{x=0} = \frac{4}{b^{6}}\,\Big[M\,(-8\,b+9\,M)\,\alpha^{4}+b^{2}\,(1+2\,\alpha^{4})\Big]. \tag{21}\]

Figure 4: The Ricci scalar

Figure 5: The Kretschmann scalar

Figure 6: The quadratic Ricci invariant
As done earlier, the energy density and the pressure components are given by
\[\rho=\frac{4\,M\,\alpha^{2}\,b^{2}}{(x^{2}+b^{2})^{5/2}}+\frac{ \Big{(}x^{2}+b^{2}-\alpha^{2}\,(2\,b^{2}+x^{2})\Big{)}}{(x^{2}+b^{2})^{2}}>0,\] \[p_{x}=\frac{-b^{2}+x^{2}\,(-1+\alpha^{2})}{(x^{2}+b^{2})^{2}}, \quad p_{t}=\frac{\alpha^{2}\,b^{2}\,(-M+\sqrt{x^{2}+b^{2}})}{(x^{2}+b^{2})^{5/ 2}}. \tag{22}\]
Now, the null energy condition requires that both relations \(\rho+p_{x}>0\) and \(\rho+p_{t}>0\) hold for \(b>2\,M\). We have
\[\rho+p_{x}=-\frac{2\,\alpha^{2}\,b^{2}}{(x^{2}+b^{2})^{2}}\,\Big{(}1-\frac{2 \,M}{\sqrt{x^{2}+b^{2}}}\Big{)}<0,\] \[\rho+p_{t}=\frac{3\,M\,\alpha^{2}\,b^{2}}{(x^{2}+b^{2})^{5/2}}+ \frac{1-\alpha^{2}}{x^{2}+b^{2}}>0. \tag{23}\]
At \(x=0\), these physical quantities (22) are finite and given by
\[\rho|_{x=0}=\frac{1}{b^{2}}-\frac{2\,\alpha^{2}}{b^{2}}\,\Big{(}1-\frac{2\,M} {b}\Big{)},\quad p_{x}|_{x=0}=-\frac{1}{b^{2}},\quad p_{t}|_{x=0}=\frac{\alpha ^{2}}{b^{2}}\,\Big{(}1-\frac{M}{b}\Big{)}. \tag{24}\]
From the above analysis, we observe that \((\rho+p_{x})<0\) and \((\rho+p_{t})>0\) everywhere for \(x\geq 0\). This indicates that even though the energy density \(\rho\) is positive everywhere for \(x\geq 0\), the null energy condition is only partially satisfied.
However, for very small values of \(\alpha\), the various physical quantities obtained earlier are approximated as follows:
\[\rho\approx\frac{1}{x^{2}+b^{2}}>0,\quad p_{x}\approx-\frac{1}{x^{2}+b^{2}},\quad p_{t}\approx 0,\] \[R\approx\frac{2}{x^{2}+b^{2}},\quad R_{\mu\nu}\,R^{\mu\nu}\approx\frac{2}{(x^{2}+b^{2})^{2}},\quad{\cal K}\approx\frac{4}{(x^{2}+b^{2})^{2}}. \tag{25}\]
Thus, from the above approximation, we observe that the stress-energy tensor satisfies the weak and null energy conditions. The presence of global monopoles significantly influences all the physical properties associated with the space-time geometry and, as a result, leads to notable changes in the results.
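As an independent check of this limit (our own snippet, not the authors'), one can expand the Ricci scalar of Eq. (19) around \(\alpha=0\) with SymPy and recover the leading term quoted above:

```python
# A sanity-check sketch (not from the paper): expand the Ricci scalar of
# Eq. (19) around alpha = 0 and recover the leading term 2/(x^2 + b^2).
import sympy as sp

x, b, M, al = sp.symbols('x b M alpha', positive=True)
R = (2/(b**2 + x**2)
     + 6*b**2*M*al**2/(b**2 + x**2)**sp.Rational(5, 2)
     - 2*(2*b**2 + x**2)*al**2/(b**2 + x**2)**2)
print(sp.series(R, al, 0, 2))  # 2/(b**2 + x**2) + O(alpha**2)
```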
## 4 IV. Conclusions
In this comprehensive analysis, we have presented a general class of topologically charged wormhole space-times, described by the Schwarzschild-Simpson-Visser-type metric. Utilizing this metric, we solved Einstein's field equations, which allowed us to determine the energy density and pressure components of the stress-energy tensor. Importantly, we demonstrated that the stress-energy tensor satisfies the weak energy condition, even at \(x=0\), and partially satisfies the null energy condition. However, for a very small topological charge parameter \(\alpha\), we have shown that the wormhole space-time satisfies the energy conditions. Throughout the analysis, we have chosen the parameters \(a\leq b\), \(b>2\,M\), and \(\alpha\neq 1\).
Furthermore, we derived several essential physical quantities associated with the space-time, including the Ricci tensor, the scalar curvatures, and the Riemann curvature tensor. Our calculations revealed that all these quantities remain finite at \(x=0\), which assures the regularity and validity of the space-time at this particular point. Moreover, the physical quantities associated with the space-time are influenced by the topological defect of a point-like global monopole.
In our future work, we plan to investigate the geodesic motion of test particles and the bending of photon rays around this wormhole space-time. This study will shed light on the gravitational lensing effect caused by the topologically charged wormhole. |
2310.12884 | Connected Components and Disjunctive Existential Rules | In this paper, we explore conjunctive query rewriting, focusing on queries
containing universally quantified negation within the framework of disjunctive
existential rules. We address the undecidability of the existence of a finite
and complete UCQ-rewriting and the identification of finite unification sets
(fus) of rules. We introduce new rule classes, connected linear rules and
connected domain restricted rules, that exhibit the fus property for
existential rules. Additionally, we propose disconnected disjunction for
disjunctive existential rules to achieve the fus property when we extend the
introduced rule fragments to disjunctive existential rules. We present
ECOMPLETO, a system for efficient query rewriting with disjunctive existential
rules, capable of handling UCQs with universally quantified negation. Our
experiments demonstrate ECOMPLETO's consistent ability to produce finite
UCQ-rewritings and describe the performance on different ontologies and
queries. | Enrique Matos Alfonso, Giorgos Stamou | 2023-10-19T16:37:03Z | [http://arxiv.org/abs/2310.12884v1](http://arxiv.org/abs/2310.12884v1) | # Connected Components and Disjunctive Existential Rules
###### Abstract
In this paper, we explore conjunctive query rewriting, focusing on queries containing universally quantified negation within the framework of disjunctive existential rules. We address the undecidability of the existence of a finite and complete \(\mathbf{UCQ}\)-rewriting and the identification of finite unification sets (_fus_) of rules. We introduce new rule classes, _connected linear rules_ and _connected domain restricted rules_, that exhibit the _fus_ property for existential rules. Additionally, we propose _disconnected disjunction_ for disjunctive existential rules to achieve the _fus_ property when we extend the introduced rule fragments to disjunctive existential rules. We present ECompleto, a system for efficient query rewriting with disjunctive existential rules, capable of handling \(\mathbf{UCQ}\)s with universally quantified negation. Our experiments demonstrate ECompleto's consistent ability to produce finite \(\mathbf{UCQ}\)-rewritings and describe the performance on different ontologies and queries.
Disjunctive Rules, Queries with Negation, Backward Chaining and Query Rewriting.
## 1 Introduction
Conjunctive query rewriting [1; 2] is one of the main approaches to query answering in the presence of rules. Given a set of rules \(\mathcal{R}\) and a query \(\mathcal{Q}\), the rules are applied in a backward manner to generate a union of conjunctive queries (\(\mathbf{UCQ}\)), which is a rewriting of \(\mathcal{Q}\). The aim of the process is to reach a _complete_ rewriting of \(\mathcal{Q}\) that no longer needs the rules in order to represent all the answers of \(\mathcal{Q}\).
Conjunctive query rewriting is very convenient in scenarios where the system's data undergoes frequent modifications or expansions, and the queries and rules remain relatively stable. Conversely, in situations where the data remains static, and different queries are to be responded to, a forward chaining approach [3] is more suitable for the query answering task.
In this paper, we focus on conjunctive query answering for queries that might contain universally quantified negation [4; 5; 6] with respect to disjunctive existential rules [2; 7]. The existence of a finite and complete \(\mathsf{UCQ}\)-rewriting of an arbitrary query with respect to an arbitrary set of rules is an undecidable problem [8]. A set of rules that ensures the existence of a finite and complete \(\mathsf{UCQ}\)-rewriting with respect to any \(\mathsf{UCQ}\) is called _finite unification set_ (_fus_). We also refer to a _fus_ as a _rewritable_ (_first-order rewritable_) set of rules. Determining whether a set of rules is a _fus_ is also an undecidable problem. However, the authors in [8] provide a compilation of some properties that ensure the _fus_ property for the framework of existential rules. In the case of disjunctive existential rules, the only known way to obtain a _fus_ is expanding a _fus_ of existential rules with disjunctive existential rules that are disconnected, i.e., there are no variables shared between the hypothesis and the consequence of the rule.
In the present study, we introduce _connected linear rules_ and _connected domain restricted rules_, two classes of rules that exhibit the _fus_ property for existential rules despite being more expressive than the classes they extend (linear rules and domain restricted rules respectively). The introduced classes do not ensure the _fus_ property for disjunctive existential rules. However, we introduce the concept of _disconnected disjunction_ for disjunctive existential rules that ensures those rule fragments to be a _fus_ even in the case of disjunctive existential rules.
In terms of practical implementation, we describe the latest version of our system, ECompleto, which is designed to perform query rewriting with respect to disjunctive existential rules. This system is capable of handling \(\mathsf{UCQ}\)s with universally quantified negation and offers DLGP+, an extension of DLGP, for specifying disjunctive existential rules and negated atoms in queries.
To evaluate the performance of our system, we conducted experiments using two known ontologies, the Lehigh University Benchmark (LUBM) [9] and Travel, both enriched with additional axioms. We generated 500 \(\mathsf{UCQ}\)s for each ontology and observed that ECompleto successfully produced finite \(\mathsf{UCQ}\)-rewritings for all queries. Notably, the rewriting process for the Travel ontology is significantly faster and consumes less memory compared to the LUBM ontology.
#### Related Work
To the best of our knowledge, the only work related to UCQ-rewritings with respect to disjunctive existential rules was published by Leclere et al. [2]. The authors proposed a rewriting definition for disjunctive existential rules that differs from the one proposed in [7]. Their definition uses piece rewritings to eliminate all the disjunctive components of a disjunctive existential rule and produce a conjunctive query. The main difference between their method and the one proposed in [7] is that they avoid intermediate rules which may not lead to a CQ. Based on their findings, we could avoid expanding a disjunctive rule if one of its disjoints cannot be used to produce UCQ-rewritings. The authors did not reference any implementation of their approach.
#### Paper Structure
Section 2 provides background concepts needed to understand the rest of the paper. Subsection 2.1 introduces concepts related to first-order logic formulas and how the elements of a formula are connected. Subsection 2.2 presents the disjunctive existential rules framework. Section 3 focuses on rewritability. Subsection 3.1 introduces the definition of UCQ-rewriting for disjunctive existential rules and a backward chaining rewriting algorithm for disjunctive existential rules. Subsection 3.2 mentions the rewritable existential rule fragments defined in the literature. Subsection 3.3 proposes two new existential rule classes that have the _fus_ property. It also introduces further restrictions in order to retain the _fus_ property when those rule classes are extended to disjunctive existential rules. Section 4 describes a new implementation and provides an experimental evaluation of the proposed rewriting algorithm. Section 5 discusses the experimental evaluation. Finally, Section 6 concludes the paper and documents our most significant contributions.
## 2 Preliminaries
### Set Formulas and Connected Components
In this section, we provide essential background information which is necessary to understand the concepts and terminology used in this paper. We assume the reader is familiar with standard first-order logic (FOL) formulas. The reader is referred to [10] in case a background reading is needed.
We work exclusively with FOL formulas that lack function symbols and are built upon finite sets of predicates and constant symbols. Standard definitions for the entailment and equivalence of formulas are applied throughout this paper.
We introduce the concepts of conjunctive set formulas (CSFs) and disjunctive set formulas (DSFs) [7] as tools for representing disjunctive rules in a compact way. However, both notations are often used in FOL to represent rules and clauses.
A _conjunctive set formula_ (CSF) is a set of formulas denoted as \(F_{1},\ldots,F_{n}\) and interpreted as the conjunction of these formulas \(F_{i}\). A _disjunctive set formula_ (DSF) is represented as \([F_{1},\ldots,F_{n}]\) and interpreted as the disjunction of its formulas \(F_{i}\). An empty CSF is equivalent to \(\top\), and an empty DSF is equivalent to \(\bot\). Parentheses are used when necessary to avoid ambiguity, e.g., \([(A,B),D]\) is equivalent to \((A\wedge B)\lor D\).
In the context of the entailment operation, we represent the axioms \(A_{i}\) using CSFs and the consequences \(Q_{j}\) using DSFs, i.e.,
\[A_{1},\ldots,A_{n}\models[Q_{1},\ldots,Q_{m}].\]
A _term_ can be a constant or a variable. An _atom_ is a formula of the form \(a(t_{1},\ldots,t_{n})\), where \(a\) is a _predicate_ with _arity_ \(n\), and the arguments \(t_{i}\) are terms. A _literal_ is either an atom or its negation. A literal \(l=A\) is _positive_, while \(l=\neg A\) is _negative_. The _complement_ \(\bar{l}\) of a literal \(l\) is the literal with the opposite sign, i.e., \(\bar{A}=\neg A\) and \(\overline{\neg A}=A\). Two literals \(l_{1}\), \(l_{2}\) are _complementary_ if one is the complement of the other, i.e., \(\bar{l_{1}}=l_{2}\).
The expression \(\mathit{vars}(F)\) denotes the set of variables appearing in a formula \(F\). In a formula, variables can be _universally quantified_, _existentially quantified_, or _free_. A formula is _closed_ when it contains no free variables. A formula is _ground_ if it contains no variables at all.
A _substitution_\(\theta=\{X_{1}\gets t_{1},\ldots X_{n}\gets t_{n}\}\) is a finite mapping of variables \(X_{i}\) to terms \(t_{i}\). The result of applying a substitution \(\theta\) to a formula \(F\) is denoted as \(F\theta\) and it is obtained by replacing in \(F\) every occurrence of every variable \(X_{i}\) by the corresponding term \(t_{i}\).
A substitution \(\theta=\{X_{1}\gets Y_{1},\ldots,X_{n}\gets Y_{n}\}\) mapping variables to variables is a _renaming substitution_ for the expression \(F\) if each \(X_{i}\) occurs in \(F\), the variables \(Y_{1},\ldots,Y_{n}\) are distinct, and each \(Y_{i}\) is either equal to some \(X_{j}\) in \(\theta\) or does not occur in \(F\).
The _composition_ of two substitutions, \(\theta\) and \(\sigma\), is a new substitution \(\theta\sigma\) such that \(F\theta\sigma=(F\theta)\sigma\) for any expression \(F\). A substitution \(\theta\) is _more general than_ another substitution \(\sigma\) if there exists a substitution \(\gamma\) such that \(\sigma=\theta\gamma\).
A _unifier_ for a set of expressions \(S=\{F_{1},\ldots,F_{n}\}\) is a substitution \(\theta\) such that, \(F_{1}\theta=F_{2}\theta,\ldots,F_{n-1}\theta=F_{n}\theta\). The expressions in \(S\) are _unifiable_, if there is a unifier for \(S\). The most general unifier (_mgu_) of \(S\) is a unifier that is more general than any other unifier for \(S\) and is unique up to variable renaming.
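To make these operations concrete, the following minimal Python sketch (ours, purely illustrative; not ECompleto code) encodes atoms as (predicate, arguments) tuples with capitalized variable names and computes a most general unifier. Since the formulas we work with have no function symbols, terms are simply variables or constants:

```python
# A sketch (ours): substitutions and most general unifiers for atoms whose
# terms are variables (capitalized strings) or constants (lowercase strings).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, theta):
    """Follow chains of bindings in the substitution theta."""
    while is_var(t) and t in theta:
        t = theta[t]
    return t

def apply_subst(atom, theta):
    pred, args = atom
    return (pred, tuple(walk(t, theta) for t in args))

def mgu(atoms):
    """Most general unifier of a list of atoms, or None if not unifiable."""
    theta = {}
    first = atoms[0]
    for other in atoms[1:]:
        if first[0] != other[0] or len(first[1]) != len(other[1]):
            return None                       # different predicates or arities
        for s, t in zip(first[1], other[1]):
            s, t = walk(s, theta), walk(t, theta)
            if s == t:
                continue
            if is_var(s):
                theta[s] = t
            elif is_var(t):
                theta[t] = s
            else:
                return None                   # two distinct constants clash
    return theta

theta = mgu([("sibling", ("Y", "X")), ("sibling", ("Y1", "X1"))])
print(theta, apply_subst(("sibling", ("Y", "X")), theta))
```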
A _hypergraph_ is a pair \(\langle A,E\rangle\) of nodes \(A\) and _hyperedges_ \(E\), where the hyperedges are non-empty subsets of \(A\). We represent a CSF of atoms \(F\) by a hypergraph \(\langle A,E\rangle\), where \(A=\mathit{vars}(F)\). Each atom \(A_{i}\) in \(F\) represents a hyperedge \(\mathit{vars}(A_{i})\in E\) that connects its variables. In the following, we introduce some properties of CSFs of atoms that follow from this hypergraph representation.
Let \(F\) be a CSF of atoms. The _cardinality_ of a formula \(F\) is the number of variables in the formula, i.e., \(\mathit{card}(F)=|\mathit{vars}(F)|\). The _width_ of a formula \(F\) (denoted by \(\mathit{width}(F)\)) is the number of atoms that have at least one variable in their arguments. Two variables \(u\) and \(v\) in \(\mathit{vars}(F)\) are _connected_ iff they
both belong to the same hyperedge (\(\exists A\in F\,|\,\{v,u\}\subseteq\mathit{vars}(A)\)), or iff there is another variable \(z\) in \(F\) that is connected to both \(u\) and \(v\).
A CSF of atoms \(F\) is _connected_ iff all the atoms in it contain variables and all the variables are connected to each other. An atom that has only constants in its arguments is a connected formula by itself and it is represented by an empty hypergraph. The constants in the formula play no role in their hypergraph representation.
A CSF \(F\) can be partitioned into a set \(\{U_{1},\ldots,U_{n}\}\) of connected CSFs such that if \(v\in\mathit{vars}(U_{i})\) is connected to \(u\in\mathit{vars}(U_{j})\), then \(i=j\). The formula \(F\) is equivalent to a CSF of its connected components, i.e., \(F=U_{1},\ldots,U_{n}\).
The _connected cardinality_ and _connected width_ of \(F\) (denoted by \(\mathit{card}^{*}(F)\) and \(\mathit{width}^{*}(F)\)) is defined as the maximum cardinality and width among its connected components, respectively. In other words, \(\mathit{card}^{*}(F)=\max_{i}{(\mathit{card}(U_{i}))}\) and \(\mathit{width}^{*}(F)=\max_{i}{(\mathit{width}(U_{i}))}\).
The connected cardinality and connected width of a DSF \(F=[F_{1},\ldots,F_{m}]\) is the maximum connected cardinality and width of the formulas \(F_{i}\), respectively, i.e., \(\mathit{card}^{*}(F)=\max_{i}{(\mathit{card}^{*}(F_{i}))}\) and \(\mathit{width}^{*}(F)=\max_{i}{(\mathit{width}^{*}(F_{i}))}\).
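A connected component decomposition along these lines, together with the measures \(\mathit{card}^{*}\) and \(\mathit{width}^{*}\), can be sketched in a few lines of Python (our own illustration, reusing the atom encoding from the previous sketch):

```python
# A sketch (ours): connected components of a CSF of atoms, plus card* and
# width*; ground atoms form components of their own with no variables.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def components(csf):
    comps = []  # each component: {"vars": set of variables, "atoms": list}
    for atom in csf:
        vs = {t for t in atom[1] if is_var(t)}
        merged = {"vars": set(vs), "atoms": [atom]}
        for c in [c for c in comps if c["vars"] & vs]:
            merged["vars"] |= c["vars"]
            merged["atoms"] += c["atoms"]
            comps.remove(c)
        comps.append(merged)
    return comps

def card_star(csf):
    return max((len(c["vars"]) for c in components(csf)), default=0)

def width_star(csf):
    # width counts only atoms that contain at least one variable
    return max((len(c["atoms"]) if c["vars"] else 0
                for c in components(csf)), default=0)

F = [("knows", ("X", "Y")), ("person", ("Y",)), ("city", ("paris",))]
print(len(components(F)), card_star(F), width_star(F))  # 2 2 2
```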
**Lemma 1**: _Let \(G\) be a CSF and let \(\{U_{1},\ldots,U_{n}\}\) be the partition of a given CSF \(F\) of atoms into connected CSFs \(U_{i}\). Then,_
\[G\models F\ \ \mathit{iff}\ \ \ G\models U_{i}\ \mathit{for\ every}\ \ U_{i}.\]
_Proof_ Given that no variables are shared between the connected components \(U_{i}\), we can safely combine the assignments for the variables within each \(U_{i}\) without introducing conflicts in the values assigned to each variable. Detailed proof for this lemma is given by Tessaris [11]. \(\Box\)
For CSFs with bounded connected cardinality or bounded connected width, we can ensure the existence of a finite number of equivalence classes of such formulas.
**Lemma 2**: _Let \(k\) be a natural number. There is a finite number of equivalence classes of CSFs of atoms with connected cardinality of at most \(k\) that can be constructed using a finite set of predicates and constants._
_Proof_ Two CSFs of atoms are equivalent if and only if they can be unified through a renaming substitution. Given that we have a finite set of predicates and constant symbols, and at most \(k\) distinct variables, there are only finitely many ways, say \(M\), to construct pairwise non-equivalent connected CSFs. In a CSF consisting of more than \(M\) connected components, some of the connected components are renamings of others, and keeping only one of them yields an equivalent formula (according to Lemma 1). Hence, there are at most \(2^{M}\) different equivalence classes for CSFs whose connected components have at most \(k\) distinct variables.
**Lemma 3**: _Let \(k\) be a natural number. There are a finite number of equivalence classes of \(\mathsf{CSF}\)s of atoms with connected widths of at most \(k\) that can be constructed using a finite set of predicates and constants._
Proof: This follows directly from Lemma 2 because, for a finite set of predicates, an upper bound on the connected width of a \(\mathsf{CSF}\) also implies an upper bound on its connected cardinality.
### Disjunctive Existential Rules Framework
A _conjunctive query_ (\(\mathsf{CQ}\)) is a \(\mathsf{CSF}\)\(l_{1},\ldots,l_{n}\) of positive literals (atoms) \(l_{i}\) where all the variables \(\mathbf{X}=\mathit{vars}(l_{1},\ldots,l_{n})\) are existentially quantified, i.e., an expression of the form \(\exists\mathbf{X}\)\(l_{1},\ldots,l_{n}\). Queries that permit negation in the literals \(l_{i}\) are referred to as _conjunctive queries with negation_ (\(\mathsf{CQ}^{\neg}\)). All the variables \(\mathbf{X}\) that appear in the positive literals of a \(\mathsf{CQ}^{\neg}\) are existentially quantified. In this paper, we make use of _universally quantified negation_[4; 5; 6], i.e., all the variables \(\mathbf{Z}\) that appear only in negative literals are universally quantified: \(\exists\mathbf{X}\forall\mathbf{Z}\)\(l_{1},\ldots,l_{n}\). Universal quantification ensures that our queries are safe and domain-independent. From now on, we exclude quantifiers in queries, given that the rules for writing them are well-established. The set of variables that appear in both positive and negative literals is called the _frontier_ of the query. The queries we define are commonly referred to as _Boolean conjunctive queries_. Throughout the paper, by conjunctive query, we mean Boolean conjunctive query. It is important to mention that the algorithms introduced in this paper can also be modified to accommodate queries with answer variables, as detailed in [7].
A \(\mathsf{DSF}\) of conjunctive queries (conjunctive queries with negation) is referred to as a _union of conjunctive queries_ (\(\mathsf{UCQ}\)) (_union of conjunctive queries with negation_ (\(\mathsf{UCQ}^{\neg}\))). For a \(\mathsf{UCQ}^{\neg}\)\(\mathcal{Q}\), the set of \(\mathsf{CQ}^{\neg}\)s in \(\mathcal{Q}\) with exactly \(k\) negated atoms is denoted by \(\mathcal{Q}^{\neg k}\), and the set of \(\mathsf{CQ}^{\neg}\)s in \(\mathcal{Q}\) with two or more negated atoms is denoted by \(\mathcal{Q}^{\neg\#}\). We use the term _query_ to refer to either a \(\mathsf{CQ}\), \(\mathsf{CQ}^{\neg}\), \(\mathsf{UCQ}\) or \(\mathsf{UCQ}^{\neg}\).
A _fact_ is a \(\mathsf{CSF}\)\(a_{1},\ldots,a_{n}\) of atoms \(a_{i}\), where all variables are assumed to be existentially quantified. Note that we omit the explicit use of existential quantifiers. The concept of facts aligns with the definition of Boolean conjunctive queries. Nevertheless, it is essential to distinguish their roles: facts are employed to convey existing knowledge, whereas queries serve as formulas to be validated, having distinct roles in the reasoning process.
A _rule_ is a closed formula of the form
\[\forall\mathbf{X}\,\exists\mathbf{Y}\ B\to H,\]
where the _body_\(B\) is a \(\mathsf{CSF}\) of atoms, and the _head_\(H\) is a \(\mathsf{DSF}\) in which all \(H^{\prime}\in H\) are \(\mathsf{CSF}\)s of atoms. The set \(\mathbf{X}=\mathit{vars}(B)\) includes the variables which occur in the body and are universally quantified. On the other hand,
\(\mathbf{Y}=\mathit{vars}(H)\setminus\mathit{vars}(B)\) are the variables exclusively appearing in the head of the rule. These variables are existentially quantified and are referred to as _existential variables_. The _frontier_ of a rule refers to the variables present in both the body and head of the rule, denoted as \(\mathit{vars}(B)\cap\mathit{vars}(H)\). We simplify the notation by omitting quantifiers when expressing a rule.
A _disjunctive existential rule_ is a rule with more than one disjoint element in the head, i.e., \(|H|>1\). In contrast, an _existential rule_ is a rule featuring exactly one disjoint element in the head. For simplicity, we represent the head of the existential rule as a \(\mathsf{CSF}\) of atoms. A _negative constraint_ is a rule with an empty disjoint in the head, i.e., \(B\to\bot\). In contexts where it is evident that we are referring to a negative constraint, we may omit the "\(\to\bot\)". Occasionally, we also use the terms constraint and negative constraint interchangeably.
In the context of a rule set \(\mathcal{R}\), we denote the set of constraints within \(\mathcal{R}\) as \(\mathcal{R}^{\bot}\), the set of existential rules as \(\mathcal{R}^{\exists}\) and the set of disjunctive existential rules as \(\mathcal{R}^{\vee}\). A set of facts is often denoted as \(\mathcal{D}\).
In this paper, we study the _query entailment_ problem with respect to disjunctive existential rules, i.e.,
\[\mathcal{R},\mathcal{D}\models_{?}\mathcal{Q}. \tag{1}\]
We address the problem (1) by transforming it into the entailment of a \(\mathsf{UCQ}\ \mathcal{Q}^{\prime}\) with respect to the facts \(\mathcal{D}\), i.e.,
\[\mathcal{D}\models_{?}\mathcal{Q}^{\prime}.\]
A \(\mathsf{UCQ}\) \(\mathcal{Q}^{\prime}\) is a \(\mathsf{UCQ}\)_-rewriting_ of \(\mathcal{Q}\) with respect to \(\mathcal{R}\) if for any \(\mathcal{D}\) the following condition holds:
\[\mathcal{D}\models\mathcal{Q}^{\prime}\ \ \text{implies}\ \ \mathcal{R}, \mathcal{D}\models\mathcal{Q}. \tag{2}\]
Every \(\mathsf{CQ}\) in \(\mathcal{Q}^{\prime}\) is a \(\mathsf{CQ}\)_-rewriting_ of \(\mathcal{Q}\) with respect to \(\mathcal{R}\). If the converse of (2)
\[\mathcal{R},\mathcal{D}\models\mathcal{Q}\ \ \text{implies}\ \ \mathcal{D}\models \mathcal{Q}^{\prime}\]
is also true for any \(\mathcal{D}\), then \(\mathcal{Q}^{\prime}\) is a _complete_\(\mathsf{UCQ}\)-rewriting of \(\mathcal{Q}\) with respect to \(\mathcal{R}\), i.e.,
\[\mathcal{R},\mathcal{D}\models\mathcal{Q}\ \ \text{iff}\ \ \mathcal{D}\models \mathcal{Q}^{\prime}\]
The negative constraints in the entailment problem can be transformed into queries, i.e.,
\[\mathcal{R}^{\exists},\mathcal{R}^{\vee},\mathcal{R}^{\bot},\mathcal{D} \models\mathcal{Q}\ \ \ \text{iff}\ \ \ \mathcal{R},\mathcal{R}^{\vee},\mathcal{D}\models\neg\mathcal{R}^{\bot}, \mathcal{Q}.\]
Additionally, the conjunctive queries with negation, \(\mathcal{Q}^{\neg 1}\) and \(\mathcal{Q}^{\neg\#}\) can be transformed into existential rules and disjunctive existential rules respectively:
\[\begin{array}{l}\mathcal{R}^{\exists},\mathcal{R}^{\vee},\mathcal{R}^{\bot}, \mathcal{D}\models\mathcal{Q}\ \ \ \text{iff}\\ (\mathcal{R}^{\exists},\neg\mathcal{Q}^{\neg 1}),(\mathcal{R}^{\vee},\neg \mathcal{Q}^{\neg\#}),\mathcal{D}\models\neg\mathcal{R}^{\bot},\mathcal{Q}^{ \neg 0},\end{array} \tag{3}\]
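For illustration, this transformation of a \(\mathsf{CQ}^{\neg}\) into a rule can be sketched as follows (our own minimal Python sketch; the encoding of atoms as (predicate, arguments) tuples is an assumption for illustration):

```python
# A sketch (ours, illustrative only) of transformation (3): the negation of a
# CQ with negated atoms is a rule whose body is the positive part and whose
# head has one disjunct per negated atom.
def cq_neg_to_rule(positive_atoms, negated_atoms):
    body = list(positive_atoms)
    head = [[a] for a in negated_atoms]   # one single-atom disjunct each
    return (body, head)

# not(EX x AY y . diabetic(x) & ~sibling(y, x)) becomes the existential rule
# diabetic(X) -> sibling(Y, X), where Y is existentially quantified:
print(cq_neg_to_rule([("diabetic", ("X",))], [("sibling", ("Y", "X"))]))
```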
Consequently, for the remainder of this paper, we focus on the problem of finding a \(\mathsf{UCQ}\)-rewriting of an input \(\mathsf{UCQ}\) with respect to (disjunctive) existential rules without constraints.
## 3 Rewritability
In this section, we begin by presenting an algorithm for finding \(\mathsf{UCQ}\)-rewritings introduced in [7]. The algorithm relies on general rewriting steps utilizing disjunctive rules and conjunctive queries to generate rules with fewer disjoints and eventually other existential rules. Afterwards, we introduce the existing rewritable rule fragments defined in the literature. We also present two new existential rule fragments that have the _fus_ property. Additionally, we introduce restrictions that preserve the _fus_ property of those rule fragments in the case of disjunctive rules.
### Rewriting Steps and Algorithms for Disjunctive Existential Rules
A _piece-based_ rewriting step, as defined within the existential rules framework [1], corresponds to the rewriting step defined in [7].
**Definition 1** (Rewriting Step): Let \(r=B\to H\) be an existential rule, and \(Q\) a conjunctive query. If there is a subset \(H^{\prime}\subseteq H\) that unifies with some \(Q^{\prime}\subseteq Q\) through a mgu \(\theta\) (i.e., \(H^{\prime}\theta=Q^{\prime}\theta\)) such that
1. if \(v\in\mathit{vars}(Q\setminus Q^{\prime})\) and \(v\neq v\theta\), then \(v\theta\) is a frontier variable of \(r\) or a constant, and
2. if \(v\) is an existential variable of the rule \(r\), then \(v\theta\notin\mathit{vars}(Q\setminus Q^{\prime})\),
then the query \((B\cup(Q\setminus Q^{\prime}))\theta\) is a _rewriting_ of \(Q\) using the existential rule \(r\).
The authors in [7] define a corresponding rewriting step for a disjunctive existential rule and a \(\mathsf{CQ}\). It is a generalization of Definition 1 with the goal to support both existential rules and disjunctive existential rules.
**Definition 2** (General (Disjunctive) Rewriting Step): Let \(r=B\to H\) be a rule, and \(Q\) a conjunctive query. If there is a subset \(H^{\prime}\subseteq H\), and for each \(h_{i}\in H^{\prime}\) there is a subset \(h_{i}^{\prime}\subseteq h_{i}\) that unifies with a \(Q^{\prime}\subseteq Q\) through a mgu \(\theta\) (i.e., \(h_{1}^{\prime}\theta=\ldots=h_{n}^{\prime}\theta=Q^{\prime}\theta\)) such that
1. if \(v\in\mathit{vars}(Q\setminus Q^{\prime})\), then \(v\theta\) is a frontier variable of \(r\) or a constant, and
2. if \(v\) is an existential variable of the rule \(r\), then \(v\theta\notin\mathit{vars}(Q\setminus Q^{\prime})\),
then \((B\cup(Q\setminus Q^{\prime})\to H\setminus H^{\prime})\theta\) is a _rewriting_ of \(Q\) using the rule \(r\). A rewriting step is a _disjunctive rewriting step_ if the rule used is a disjunctive existential rule.
A disjunctive rewriting step results in either a disjunctive rule with fewer disjunctive components, an existential rule in case \(|(H\setminus H^{\prime})\theta|=1\) or a negative constraint (the negation of a conjunctive query) in case \(H=H^{\prime}\).
Example 1: Consider the following disjunctive existential rule:
\[r_{1}=\mathit{diabetesRisk}(X)\rightarrow\,[(\mathit{diabetic}(Y),\] \[\mathit{sibling}(Y,X)),\] \[(\mathit{diabetic}(Z),\] \[\mathit{parent}(Z,X))].\]
If we want to rewrite the query \(Q=\mathit{diabetic}(X_{1})\), to verify the presence of people with diabetes, we can obtain the \(\mathsf{UCQ}\)-rewriting \([\mathit{diabetic}(X_{1}),\mathit{diabetesRisk}(X)]\), using \(r_{1}\) with the unifier \(\theta=\{Y\gets X_{1},Z\gets X_{1}\}\).
Alternatively, if together with \(r_{1}\) we have a query corresponding to a negative constraint \(q_{c}=\mathit{singleChild}(X_{1}),\mathit{sibling}(Y_{1},X_{1})\) and the conjunctive query \(Q^{\prime}=\mathit{diabetic}(Y_{2}),parent(Y_{2},X_{2})\) verifying the existence of a diabetic parent, we obtain the following existential rule
\[\mathit{diabetesRisk}(X),\mathit{singleChild}(X)\rightarrow\, \mathit{diabetic}(Z),\] \[\mathit{parent}(Z,X),\]
as the result of rewriting \(q_{c}\) using the rule \(r_{1}\) and the unifier \(\theta_{2}=\{X_{1}\gets X,Y_{1}\gets Y\}\). Using the new existential rule we obtain the following \(\mathsf{UCQ}\)-rewriting:
\[[(\mathit{singleChild}(X),\mathit{sibling}(Y,X)),\] \[(\mathit{diabetic}(Y),parent(Y,X)),\] \[(\mathit{diabetesRisk}(X),\mathit{singleChild}(X))].\]
Note that the final \(\mathsf{UCQ}\)-rewriting also contains queries that correspond to constraints. They are possible reasons for which a query can be entailed, i.e., inconsistency in our facts.
Using the above-mentioned rewriting steps, we define _rewriting_ with respect to disjunctive rules.
**Definition 3** (Rewriting): Let \(\langle\mathcal{R},\mathcal{Q}\rangle\) be a tuple consisting of a set \(\mathcal{R}\) of rules and a \(\mathsf{UCQ}\) \(\mathcal{Q}\). A _one-step rewriting_ \(\langle\mathcal{R}^{\prime},\mathcal{Q}^{\prime}\rangle\) of \(\langle\mathcal{R},\mathcal{Q}\rangle\) is obtained by adding to \(\mathcal{R}\) or to \(\mathcal{Q}\), as appropriate, the result \(f^{\prime}\) of a general rewriting step that uses one of the conjunctive queries in \(\mathcal{Q}\) and a rule in \(\mathcal{R}\); i.e., \(\mathcal{Q}^{\prime}=\mathcal{Q}\cup(\neg f^{\prime})\) if \(f^{\prime}\) is a negative constraint, \(\mathcal{Q}^{\prime}=\mathcal{Q}\cup f^{\prime}\) if \(f^{\prime}\) is a conjunctive query, and otherwise \(\mathcal{R}^{\prime}=\mathcal{R}\cup(f^{\prime})\).
A _\(k\)-step rewriting_ of \(\langle\mathcal{R},\mathcal{Q}\rangle\) is obtained by applying a one-step rewriting to a \((k-1)\)-step rewriting of \(\langle\mathcal{R},\mathcal{Q}\rangle\). For any \(k\), a \(k\)-step rewriting of \(\langle\mathcal{R},\mathcal{Q}\rangle\) is a _rewriting_ of \(\langle\mathcal{R},\mathcal{Q}\rangle\).
A rewriting as defined in Definition 3 is sound and complete.
**Theorem 4** (Soundness and Completeness of Rewritings): _Let \(\mathcal{R}\) be a set of rules, \(\mathcal{Q}\) a \(\mathsf{UCQ}\), and \(\mathcal{D}\) a set of facts. Then \(\mathcal{R},\mathcal{D}\models\mathcal{Q}\) iff there is a rewriting \(\langle\mathcal{R}^{\prime},\mathcal{Q}^{\prime}\rangle\) of \(\langle\mathcal{R},\mathcal{Q}\rangle\) such that \(\mathcal{D}\models Q_{i}\) for some conjunctive query \(Q_{i}\) in \(\mathcal{Q}^{\prime}\)._
Proof.: The \(k\)-step rewriting of \(\langle\mathcal{R},\mathcal{Q}\rangle\) is based on a constraint derivation as defined in [7], i.e., resolution derivations that always use a clause with all its literals negated. Moreover, such a rewriting can be mapped to a constraint derivation. Considering that constraint derivations are sound and complete [7], this theorem also holds.
```
function rewrite_k(R, Q)
  do
    R_old := R
    Q_old := Q
    Q := rewrite_k^∃(R^∃, Q)
    R := rewrite^∨(R, Q)
  while (Q ≠ Q_old or R ≠ R_old)
  return Q
end function
```
**Algorithm 1** Function to rewrite UCQs with respect to existential rules and disjunctive existential rules.
Algorithm 1 presents the function \(\texttt{rewrite}_{k}/2\), which, for a given set of rules \(\mathcal{R}\) and a UCQ \(\mathcal{Q}\), computes the rewritings of \(\langle\mathcal{R},\mathcal{Q}\rangle\) and yields the corresponding UCQ-rewriting component. The algorithm alternates between computing the rewritings of CQs using existential rules (function \(\texttt{rewrite}_{k}^{\exists}/2\)) and computing the rewritings using disjunctive existential rules (function \(\texttt{rewrite}^{\vee}/2\)). New CQs are used to produce more rules, and new existential rules are used to produce more CQs, until a fixed point is reached, i.e., until no new rule or conjunctive query is produced.
All CQs generated by Algorithm 1 are computed according to Definition 3; this ensures the correctness of the rewritings produced, i.e., every CQ that is generated is a CQ-rewriting of the input query with respect to the input sets of rules and constraints.
A detailed description of the rewriting function for disjunctive existential rules (\(\texttt{rewrite}^{\vee}/2\)) can be found in [7]. It generates all the possible rules using an input UCQ-rewriting. Due to the fact that new rules have fewer disjunctive components in the head, the output is always finite. Therefore, the completeness of the result of Algorithm 1 relies entirely on the completeness of the function \(\texttt{rewrite}_{k}^{\exists}/2\).
The function \(\texttt{rewrite}_{k}^{\exists}/2\) is a depth-controlled version of a complete rewriting algorithm proposed in [1]. It implements a breadth-first expansion process where each iteration expands a new level of conjunctive queries. The parameter \(k\) allows us to control how many levels of CQs will be expanded and ensures termination of each individual call for \(k\neq\infty\). However, the loop in Algorithm 1 will keep on calling the function as long as new CQs are generated, without affecting the completeness of the whole rewriting process. The function yields only the most general \(\mathsf{CQ}\)-rewritings. If there is a finite and complete \(\mathsf{UCQ}\)-rewriting of the input \(\mathsf{UCQ}\), the function \(\texttt{rewrite}_{k}^{\exists}/2\) will find it within a finite number of calls.
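In outline, the control flow of Algorithm 1 can be sketched as follows (a Python skeleton of ours; `rewrite_exists` and `rewrite_disj` are illustrative placeholder names for the two one-step rewriting functions, not ECompleto's actual API, and rule objects are assumed to be hashable):

```python
# A skeleton (ours) of the fixed-point loop of Algorithm 1. The two callbacks
# are assumed to return saturated sets of CQs and rules, respectively; both
# names are illustrative placeholders, not ECompleto's API.
def rewrite_k(rules, ucq, rewrite_exists, rewrite_disj):
    while True:
        old_rules, old_ucq = frozenset(rules), frozenset(ucq)
        existential = {r for r in rules if not r.is_disjunctive}
        ucq = rewrite_exists(existential, ucq)    # expand up to k levels of CQs
        rules = rewrite_disj(rules, ucq)          # rules with fewer disjuncts
        if frozenset(ucq) == old_ucq and frozenset(rules) == old_rules:
            return ucq                            # fixed point reached
```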
### Existing Rewritable Fragments
The termination of Algorithm 1 is studied in [7]. The algorithm stops if there is a finite and complete \(\mathsf{UCQ}\)-rewriting of the input query with respect to the rules.
**Theorem 5**: _Let \(\mathcal{R}\) be a set of rules and \(\mathcal{Q}\) a \(\mathsf{UCQ}\). If a \(\mathsf{UCQ}\)\(\mathcal{Q}\) has a finite and complete \(\mathsf{UCQ}\)-rewriting with respect to \(\mathcal{R}\), then Algorithm 1 stops for any finite value of \(k\)._
For the proof, we refer the reader to [7].
The problem of knowing if there exists a finite \(\mathsf{UCQ}\)-rewriting for any \(\mathsf{UCQ}\) with respect to an arbitrary set of existential rules is undecidable [8]. A set of existential rules that ensures the existence of a finite \(\mathsf{UCQ}\)-rewriting for any \(\mathsf{UCQ}\) is called a finite unification set (_fus_) [12]. We extend the concept of _fus_ to disjunctive existential rules.
There are some classes of existential rules that have the _fus_ property:
1. _Linear_ existential rules [12]: existential rules with one atom in the body.
2. _Disconnected_ existential rules [13]: existential rules that do not share variables between the body and the head.
3. _Domain restricted_ rules [12]: existential rules that each atom in the head contains none or all of the variables in the body.
4. _Acyclic graph of rule dependencies_ (_aGRD_) [14]: existential rules that do not contain cycles in the _graph of rule dependencies_.
5. _Sticky_ rules [15]: Each marked variable occurs at most once in a rule body. The marked variable set is built from a rule set using the following marking procedure: (i) for each rule \(r_{i}\) and for each variable \(v\) occurring in the body of \(r_{i}\), if \(v\) does not occur in all atoms of the head of \(r_{i}\), mark (each occurrence of) \(v\) in the body of \(r_{i}\); (ii) apply until a fixpoint is reached: for each rule \(r_{i}\), if a marked variable \(v\) appears at position \(p[k]\) in the body of \(r_{i}\), then for each rule \(r_{j}\) (including \(i=j\)) and for each variable \(x\) appearing at position \(p[k]\) in the head of \(r_{j}\), mark each occurrence of \(x\) in the body of \(r_{j}\).
Extending the concept of linear rules to disjunctive existential rules is not sufficient to ensure the _fus_ property because it may only ensure the existence of a finite \(\mathsf{UCQ}\)-rewriting if the input query is an atomic query.
**Theorem 6**: _Let \(\mathcal{R}\) be a set of rules and \(\mathcal{Q}\) a \(\mathsf{UCQ}\). If \(\mathcal{Q}\) contains only atomic queries and \(\mathcal{R}\) only linear rules, then Algorithm 1 stops for any value of \(k\)._
If a set \(\mathcal{R}\) of existential rules is a _fus_ and a set of the new existential rules generated by the function \(\texttt{rewrite}^{\vee}/2\) is also a _fus_, combining them could yield a new set of existential rules that is not a _fus_[8]. Therefore, we need stronger conditions to ensure that we always call \(\texttt{rewrite}^{\exists}_{k}/2\) with a set of existential rules that is a _fus_. In general, two _fus_\(\{\mathcal{R}_{1},\mathcal{R}_{2}\}\) can be combined into a _fus_ if none of the rules of one set depends on the rules of the other set.
In \(\texttt{rewrite}^{\vee}/2\), even if the resulting set of existential rules \(\mathcal{R}\) is a _fus_, the process of generating new rules could potentially continue forever after new CQs are generated. Therefore, we need ways to ensure that the total number of existential rules generated is bounded, i.e., there is a point beyond which the algorithm will not produce new rules.
Rules that do not share variables between the head and the body produce rewritings where the introduced body of the rule is not connected to the remaining part of the query.
A rule \(B\to H\) with an empty frontier is a _disconnected_ rule, i.e., \(\texttt{vars}(B)\cap\texttt{vars}(H)=\emptyset\). Disconnected rules can still share constants between the body and the head of the rule and this allows us to express knowledge about specific individuals.
**Theorem 7**: _Let \(\mathcal{R}_{1}\) be a fus and \(\mathcal{R}_{2}\) a set of disconnected existential rules. The union of both sets \(\mathcal{R}_{1}\cup\mathcal{R}_{2}\) is also a fus._
Theorem 7 allows us to extend the _fus_ property to disjunctive existential rules. For a detailed proof, see [8].
**Theorem 8**: _Let \(\mathcal{R}\) be a set of rules. If \(\mathcal{R}^{\exists}\) is a fus, and \(\mathcal{R}^{\vee}\) a set of disconnected disjunctive existential rules, then \(\mathcal{R}\) is also a fus._
The theorem can be proven by ensuring that Algorithm 1 will always stop. The reader is referred to [7] for a detailed proof. \(\Box\)
### Expanding the Existing Fragments
Domain restricted (_dr_) rules [12] are existential rules where all the atoms in the head contain none or all of the variables in the body of the rule. However, if we consider rules where the bodies can have more than one connected component, then the definition of _dr_ rules can be generalized.
**Definition 4** (Connected domain restricted rule): A rule is called _connected domain restricted_ (_cdr_) rule if for every connected component \(C\) in the body of the rule and for every atom \(h\) in the head, \(h\) contains none or all the variables of \(C\).
_Example 2_ (Common ancestor and six degrees of separation rules): In biology and genealogy, the _most recent common ancestor_ (MRCA), _last common ancestor_ (LCA), or _concestor_ of a set of organisms is the most recent individual from which all the organisms of the set are descended. We could express a simpler rule stating that for every two organisms there exists a common ancestor:
\[\mathit{organism}(X),\mathit{organism}(Y)\rightarrow\mathit{organism}(Z), \mathit{ancestor}(Z,X),\mathit{ancestor}(Z,Y)\]
The rule is obviously not domain restricted but it is connected domain restricted.
Another example of a _cdr_ rule that is not a _dr_ rule is the _six degrees of separation_ rule. It describes the idea that all people are six or fewer social connections away from each other.
\[\begin{array}{c}\mathit{person}(X),\mathit{person}(Y)\rightarrow\mathit{knows }(X,X_{1}),\mathit{knows}(X_{1},X_{2}),\mathit{knows}(X_{2},X_{3}),\\ \mathit{knows}(X_{3},X_{4}),\mathit{knows}(X_{4},X_{5}),\mathit{knows}(X_{5}, Y)\end{array}\]
In the example rules we assume that the predicate _ancestor_/2 is irreflexive and antisymmetric, and \(\mathit{knows}/2\) is reflexive and symmetric.
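For illustration, the cdr condition of Definition 4 can be checked mechanically; the sketch below (our own code, using the same atom encoding as the earlier sketches) verifies it for the common ancestor rule:

```python
# A sketch (ours) of a checker for the cdr condition of Definition 4.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def var_components(body):
    """Connected components of the body, as sets of variables."""
    comps = []
    for atom in body:
        vs = {t for t in atom[1] if is_var(t)}
        for c in [c for c in comps if c & vs]:
            vs |= c
            comps.remove(c)
        if vs:
            comps.append(vs)
    return comps

def is_cdr(body, head_atoms):
    comps = var_components(body)
    for atom in head_atoms:
        avs = {t for t in atom[1] if is_var(t)}
        # the atom must contain none or all variables of each body component
        if any(c & avs and not c <= avs for c in comps):
            return False
    return True

body = [("organism", ("X",)), ("organism", ("Y",))]
head = [("organism", ("Z",)), ("ancestor", ("Z", "X")), ("ancestor", ("Z", "Y"))]
print(is_cdr(body, head))  # True: components {X} and {Y} are covered or absent
```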
Atoms in the head of a _cdr_ rule \(r\) contain all the variables of some (possibly none) connected components in the body of the rule. We can be sure that the rewritings of a CQ \(q\) with respect to \(r\) will not introduce new variables that are connected to the variables in the part of \(q\) that is not modified by the rewriting. Some new variables might be introduced but they will be in isolated connected components. Hence, the connected cardinality of the rewritings with respect to _cdr_ rules is not increasing. Consequently, the class of _cdr_ rules also has the _fus_ property.
**Theorem 9**: _A set of cdr existential rules is a fus._
Proof: A UCQ-rewriting \(q^{\prime}\) that is generated using a _cdr_ rule \(r\) and a CQ \(q\) has new connected components \(C_{i}^{\prime}\) that are either (i) not connected to the rest of the query or (ii) such that all their variables were already present in an atom of \(q\). Therefore, the only new variables (w.r.t. the variables in \(q\)) that are introduced in the rewritings are part of disconnected components that come from the body of the set of rules (case i). For case (ii), we can ensure that in \(q\) there was a connected component \(C_{j}\) that had an atom with all the variables in the newly introduced connected component \(C_{i}^{\prime}\), thus \(\mathit{card}^{*}(C_{i}^{\prime})\leq\mathit{card}^{*}(C_{j})\). We can then ensure that the generated UCQ-rewritings have a bounded connected cardinality. Therefore, _cdr_ rules can only produce a finite number of UCQ-rewritings (Lemma 2).
Definition 4 also applies to disjunctive existential rules. However, a rule generated by a disjunctive rewriting step involving a _cdr_ rule might not be a _cdr_ rule. Therefore, the _fus_ property cannot be extended to connected domain restricted disjunctive rules.
_Example 3_ Consider the rule \(a(X),b(Y)\rightarrow[r(X,Y),c(X),c(Y)]\) and the CQ \(r(X,Y),s(X,Y)\). Together they generate the new disjunctive rule \(a(X),b(Y),s(X,Y)\rightarrow[c(X),c(Y)]\), which is not a _cdr_ rule. If another CQ \(c(X),s(X,Z)\) is used instead, then a disjunctive rule that is not a _cdr_ rule is again generated, i.e., \(a(X),b(Y),s(X,Z)\rightarrow[r(X,Y),c(Y)]\).
We use a similar approach to define a new rule class based on linear rules.
**Definition 5** (Connected linear rule): A rule is called a _connected linear_ rule (_clr_) if every atom in the head either does not contain variables from the body or contains variables from only one connected component in the body, and this connected component has only one atom.
Both rules of Example 2 are also connected linear rules.
Example 4: The following rule is not a _cdr_ rule, but it is clearly a connected linear rule.
\(\textit{graduated}(X,Z),\textit{graduated}(Y,W)\rightarrow\textit{exam}(V), \textit{passed}(X,V),\textit{passed}(Y,V)\)
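As with the cdr condition, the clr condition is mechanically checkable; the following sketch (ours, same atom encoding as before) confirms that the rule above qualifies:

```python
# A sketch (ours) testing the connected linear rule (clr) condition: a head
# atom may use body variables only from a single one-atom component.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def body_components(body):
    """Connected components of the body as (variable set, atom count) pairs."""
    comps = []
    for atom in body:
        vs = {t for t in atom[1] if is_var(t)}
        if not vs:
            continue
        n = 1
        for c in [c for c in comps if c[0] & vs]:
            vs |= c[0]
            n += c[1]
            comps.remove(c)
        comps.append((vs, n))
    return comps

def is_clr(body, head_atoms):
    comps = body_components(body)
    for atom in head_atoms:
        avs = {t for t in atom[1] if is_var(t)}
        touched = [c for c in comps if c[0] & avs]
        if len(touched) > 1 or any(n > 1 for _, n in touched):
            return False
    return True

body = [("graduated", ("X", "Z")), ("graduated", ("Y", "W"))]
head = [("exam", ("V",)), ("passed", ("X", "V")), ("passed", ("Y", "V"))]
print(is_clr(body, head))  # True: each head atom touches one single-atom component
```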
**Theorem 10**: _A set of connected linear existential rules is a fus._
Proof: A \(\mathsf{UCQ}\)-rewriting \(q^{\prime}\) that is generated using a _clr_ rule \(r\) and a \(\mathsf{CQ}\)\(q\) has new atoms \(a^{\prime}_{i}\) that are either (i) not connected to the rest of the query or (ii) only connected to variables which were already present in an atom of \(q\).
A _clr_ prevents an atom in the head of the rule from containing variables from two different atoms (or connected components) in the body. Therefore, the rewritten atoms in \(q\) are never replaced by more than one corresponding atom that is connected to the rest of the query. This ensures that the newly formed connected component in \(q^{\prime}\) will not have more atoms than those existing in \(q\). The rewriting \(q^{\prime}\) can have other atoms that are not a "replacement" of atoms in \(q\), but those atoms are not connected to the atoms that existed in \(q\). They come from other connected components that were present in the body of the rules. Thus, the \(\mathsf{UCQ}\)-rewritings which are introduced using connected linear rules have a bound on the number of atoms in their connected components. Therefore, connected linear rules may only produce a finite number of \(\mathsf{UCQ}\)-rewritings (Lemma 3).
Definition 5 may also be extended to disjunctive existential rules. However, a rule generated by a disjunctive rewriting step involving a _clr_ rule might not be a _clr_ rule. Therefore, the _fus_ property cannot be extended to connected linear disjunctive rules.
Example 5: Consider a connected linear rule \(\textit{a}(X),b(Y)\rightarrow[r(X,W),c(X),c(Y)]\) and a \(\mathsf{CQ}\)\(c(X),s(X,Z)\). We can generate new disjunctive rule (i.e., \(\textit{a}(X),b(Y),s(X,Z)\rightarrow[r(X,W),c(Y)]\)) that is not a connected linear rule.
A disjunctive existential rule can be restricted to have disconnected disjoints.
**Definition 6** (disconnected disjunction): A disjunctive existential rule has _disconnected disjunction_ if the disjoint components in the head of the rule never share variables with the same connected component in the body of the rule. A disjunctive existential rule that has disconnected disjunction is called a _D-disjunctive existential rule_ or DDER.
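The disconnected-disjunction condition can be checked mechanically as well; the sketch below (ours, same atom encoding, with a head given as a list of disjuncts) shows, for instance, that the rule \(r_{1}\) of Example 1 does not have disconnected disjunction, since its single body component \(\{X\}\) touches both disjuncts:

```python
# A sketch (ours) testing disconnected disjunction (Definition 6): no body
# component may share variables with two different disjuncts of the head.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def var_components(body):
    comps = []
    for atom in body:
        vs = {t for t in atom[1] if is_var(t)}
        for c in [c for c in comps if c & vs]:
            vs |= c
            comps.remove(c)
        if vs:
            comps.append(vs)
    return comps

def has_disconnected_disjunction(body, head_disjuncts):
    disjunct_vars = [{t for atom in d for t in atom[1] if is_var(t)}
                     for d in head_disjuncts]
    for comp in var_components(body):
        if sum(1 for dv in disjunct_vars if comp & dv) > 1:
            return False
    return True

# The rule r1 of Example 1: its body component {X} touches both disjuncts.
body = [("diabetesRisk", ("X",))]
head = [[("diabetic", ("Y",)), ("sibling", ("Y", "X"))],
        [("diabetic", ("Z",)), ("parent", ("Z", "X"))]]
print(has_disconnected_disjunction(body, head))  # False
```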
**Theorem 11**: _The rewritings of DDERs are also DDERs._
Proof: Let \(C_{1},\ldots,C_{n}\rightarrow[D_{1},\ldots,D_{m}]\) be a DDER \(r_{1}\). Without loss of generality, we define a rewriting \(r_{2}\) that removes \(D_{1}\) and introduces new atoms \(B\) in the body, i.e., \(B,C_{1},\ldots,C_{n}\rightarrow[D_{2},\ldots,D_{m}]\). The atoms in \(B\) may possibly merge some connected components \(C_{i}\) of the body of the rule. In particular, those that were connected to \(D_{1}\). However, those components cannot be connected to any of the remaining disjoints \([D_{2},\ldots,D_{m}]\) due to the fact that \(r_{1}\) is a DDER. Thus, \(r_{2}\) is also a DDER.
Using similar reasoning, we can also show that _cdr_ (_clr_) rules that are also DDERs generate rewritings that are again _cdr_ (_clr_).
**Theorem 12**: _Let \(\mathcal{R}\) be a set of DDERs that are also cdr (clr). Then \(\mathcal{R}\) is also a fus._
Proof: We show that Algorithm 1 cannot generate infinitely many rewritings if the rules are DDERs and _cdr_ (_clr_).
Let \(Q\) be a \(\mathsf{UCQ}\) and \(M\) be the maximum cardinality (width) of the bodies of the rules in \(\mathcal{R}\) and of the \(\mathsf{CQ}\)s in \(Q\). Given that all the rules in \(\mathcal{R}\) are _cdr_ (_clr_), a rewriting step will only produce queries with a cardinality bounded by \(M\). Additionally, the cardinality of the bodies of rules produced as rewritings of disjunctive rules in \(\mathcal{R}^{\vee}\) is bounded by \(M\) because they are DDERs. The newly generated existential rules retain the _fus_ property of \(\mathcal{R}^{\exists}\), i.e., they remain _cdr_ (_clr_), which ensures that at every step of the algorithm \(\mathcal{R}^{\exists}\) is a _fus_.
Thus, the rewriting Algorithm 1 terminates, since it can only produce finitely many rewritings of its initial arguments.
## 4 System Description and Evaluation
Completo1 is a query rewriting system that focuses on answering \(\mathsf{UCQ}^{\neg}\)s in the framework of disjunctive existential rules. The system is implemented in Java. The first version of Completo [16] answers \(\mathsf{CQ}^{\neg}\)s using a resolution-based approach to eliminate negated atoms. The proposed algorithm is complete only for a restricted class of queries.
Footnote 1: [http://image.ntua.gr/~sgardero/completo3.0/](http://image.ntua.gr/~sgardero/completo3.0/)
In the second version of the system [17], only queries with one negated atom are answered by being transformed into rules. The approach is complete, but termination is guaranteed only when the resulting set of rules is a _fus_.
The third version of Completo [7] implements Algorithm 1 with deterministic one-step rewriting functions and answers queries with answer variables and an arbitrary number of negated atoms. Algorithm 1 can be seen as a generalization of both algorithms proposed in [16; 17]. Indeed, queries with one negated atom are transformed into rules, while the rewriting defined for disjunctive rules is similar to what was presented in [16] as constraint resolution. Furthermore, Completo v3 takes advantage of the termination results for knowledge bases consisting of a _fus_ and \(\mathsf{UCQ}^{\neg}\)s whose frontier is part of the answer variables of the query (Theorem 3.8 of [7]), as well as for knowledge bases consisting only of linear rules (Theorem 3.9 of [7]). Choosing \(k=\infty\) allows the rewriting with respect to existential rules to be performed by an external rewriter if there are no answer variables in the queries.
We present the latest version of the system, called ECompleto2, implemented in Elixir. The system answers queries with answer variables and an arbitrary number of negated atoms with respect to disjunctive existential rules. Ontologies are provided in the DLGP+ format, a proposed extension of DLGP3 v2.0 that allows the specification of disjunctive existential rules and negated atoms in queries.
Footnote 2: [https://github.com/gardero/ecompleto](https://github.com/gardero/ecompleto)
Footnote 3: [https://graphik-team.github.io/graal/papers/datalog+_v2.0_en.pdf](https://graphik-team.github.io/graal/papers/datalog+_v2.0_en.pdf)
Disjunction in DLGP+ is specified in the head of a rule by writing a list of the disjuncts enclosed in square brackets. Each disjunct can be a single atom or several atoms enclosed in parentheses, e.g.,
[disj. rule] [leaf(X), (inner_node(X), edge(X,Y))] :- node(X).
Negation in queries with negated atoms is specified with the minus symbol before an atom, e.g.,
[q neg]? :- person(X), -marriedTo(X,Y).
### Experiments
To the best of our knowledge, there is no other system that produces \(\mathsf{UCQ}\)-rewritings for \(\mathsf{UCQ}^{\neg}\)s with universally quantified negation with respect to disjunctive existential rules. Therefore, the experiments were designed to assess the performance of ECompleto in producing \(\mathsf{UCQ}\)-rewritings. We used an Intel(R) Core(TM) i5-7300HQ CPU at 2.50 GHz with 24 GB of RAM running Ubuntu 22.04.
For the experiments, we used two ontologies that contain negative constraints and have been used in previous research papers on queries with negation [16; 17]. The first is the Lehigh University Benchmark (LUBM) ontology [9], enriched with 70 additional disjoint class axioms added for the atomic _sibling_ classes, i.e., for classes asserted to share the same super-class. Secondly, we used the Travel ontology4, which contains 10 disjoint class axioms.
The OWL 2 ER [18] fragment of both ontologies was translated into existential rules. We were not able to prove the _fus_ property for the sets of existential rules obtained from either of the two ontologies we used.
We prepared 500 \(\mathsf{CQ}^{\neg}\)s for each ontology, and ECompleto produced finite \(\mathsf{UCQ}\)-rewritings for all of them. Each query contains three atoms, two of which are negated, and one variable in the frontier, which is also an answer variable. The queries were generated by performing Association Rule Mining [19] on a dataset obtained from the assertions of the ontologies. The ontologies and the queries used are publicly available5.
Footnote 5: [http://image.ntua.gr/~gardero/complete3.0/ontologies/](http://image.ntua.gr/~gardero/complete3.0/ontologies/)
Table 1 shows the mean, std, min, max, and the 25th, 50th, and 75th percentiles of the \(\mathsf{UCQ}\) rewriting runtime and the RAM memory used for both ontologies. The \(\mathsf{UCQ}\) rewriting runtime is on average 500 times faster for the Travel ontology than for the LUBM ontology. The rewriting process for the Travel ontology uses on average one third of the RAM memory required to rewrite queries over the LUBM ontology.
Figure 1 shows the runtime vs RAM memory of the rewriting process for each query of the LUBM ontology. There are 3 clusters that group the points according to their standardized coordinates. Figure 2 additionally encodes the size of the \(\mathsf{UCQ}\) rewriting using color: the darker the data point, the larger the corresponding rewriting. The cluster grouping shows some correlation with the size of the rewriting.
Table 2 shows the count, mean, std, min, max, and the 25th, 50th, and 75th percentiles of the \(\mathsf{UCQ}\) rewriting runtime and the RAM memory used for each of the clusters of LUBM queries.
Figure 3 shows the distribution of the query rewriting runtime for the LUBM ontology. The distribution is multimodal and it is split according to the cluster group.
The distribution of the RAM memory used in the rewriting process is shown in Figure 4. We can notice a bimodal shape, despite having 3 clusters that group the queries.
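The paper does not specify the clustering procedure; a plausible reconstruction, assuming k-means with three clusters over standardized runtime/memory coordinates (the data arrays below are synthetic placeholders, not our measurements), is:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder per-query measurements (500 queries), for illustration only.
rng = np.random.default_rng(0)
runtime_min = rng.normal(18.6, 1.7, 500)
memory_mb = rng.normal(370.0, 37.0, 500)

# Standardize both axes, then cluster into three groups.
X = StandardScaler().fit_transform(np.column_stack([runtime_min, memory_mb]))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} queries")
```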
| Metric | LUBM Runtime (m) | LUBM Memory (Mb) | Travel Runtime (m) | Travel Memory (Mb) |
| --- | --- | --- | --- | --- |
| mean | 18.59 | 370.46 | 0.035 | 104.54 |
| std | 1.67 | 37.00 | 0.150 | 16.70 |
| min | 15.33 | 328.00 | 0.020 | 93.00 |
| 25% | 17.62 | 350.00 | 0.022 | 101.00 |
| 50% | 18.43 | 359.00 | 0.023 | 103.00 |
| 75% | 19.10 | 371.00 | 0.024 | 105.00 |
| max | 24.76 | 526.00 | 2.454 | 337.00 |

Table 1: Distribution metrics computed on the query rewriting runtime and the memory used for both ontologies.
## 5 Discussion
The experiments conducted in this study aimed to evaluate the performance of the ECompleto system in producing \(\mathsf{UCQ}\)-rewritings for \(\mathsf{UCQ}^{\neg}\)s with respect to disjunctive existential rules. These experiments provide valuable insights into the efficiency and resource requirements of ECompleto when handling complex queries and ontologies.

Figure 1: Clustering of query rewriting runtime vs memory usage for LUBM.

Figure 2: Clustering of query rewriting runtime vs memory usage and size of the rewriting for LUBM.
Various performance metrics were computed to evaluate ECompleto's performance in terms of query rewriting runtime and memory usage. These metrics included mean, standard deviation, minimum, maximum, and percentiles (25th, 50th, and 75th) for both runtime and memory consumption.
The results demonstrated a significant difference in query rewriting runtime between the LUBM and Travel ontologies. On average, the rewriting process for the Travel ontology was approximately 500 times faster than that for the LUBM ontology.
| Metric | Runtime (m), Cluster 0 | Runtime (m), Cluster 1 | Runtime (m), Cluster 2 | Memory (Mb), Cluster 0 | Memory (Mb), Clusters 1,2 |
| --- | --- | --- | --- | --- | --- |
| count | 50 | 280 | 170 | 50 | 450 |
| mean | 21.82 | 18.95 | 17.06 | 469.64 | 359.44 |
| std | 1.72 | 0.88 | 0.68 | 24.31 | 15.49 |
| min | 17.57 | 17.92 | 15.33 | 423.00 | 328.00 |
| 25% | 21.37 | 18.40 | 16.53 | 449.25 | 349.00 |
| 50% | 22.15 | 18.68 | 17.10 | 468.00 | 357.00 |
| 75% | 22.90 | 19.20 | 17.65 | 485.75 | 367.00 |
| max | 24.76 | 22.69 | 18.36 | 526.00 | 445.00 |

Table 2: Distribution metrics computed on the query rewriting runtime and the memory used for the clusters of LUBM queries.
Figure 3: Histogram of query rewriting runtime for LUBM.
In terms of memory usage, the Travel ontology exhibited greater efficiency, utilizing only about one-third of the RAM memory required for query rewriting compared to the LUBM ontology.
Clustering analysis revealed that the queries formed distinct clusters based on their runtime and memory consumption. The size of the UCQ rewriting also demonstrated a degree of correlation with the cluster grouping.
The distribution of query rewriting runtime for the LUBM ontology was found to be multimodal, with clusters representing different query characteristics. Similarly, the distribution of RAM memory usage exhibited a bimodal pattern despite the presence of three query clusters.
## 6 Conclusion
In conclusion, this paper has investigated the domain of conjunctive query rewriting, an important technique used in the context of query answering within rule-based systems. The main objective of conjunctive query rewriting is to transform an input query into a complete rewriting that no longer needs the rules in order to represent all the possible answers. While this approach is very useful in dynamic data environments characterized by stable queries and rules, cases featuring static data combined with dynamically varying queries call for a forward-chaining approach.
Figure 4: Memory usage histogram for LUBM.

The focus of our investigation is dedicated to the domain of conjunctive query answering, specifically on queries that may have universally quantified negation, within the framework of disjunctive existential rules. We have taken on the computational challenge involved in determining the existence of finite and complete \(\mathsf{UCQ}\)-rewritings, as well as the identification of finite unification sets (_fus_) of rules. Our contributions include the introduction of two novel rule classes: _connected linear rules_ and _connected domain restricted rules_, which have the _fus_ property and are more expressive than their antecedent rule classes, namely linear rules and domain-restricted rules.
Furthermore, we have introduced the concept of _disconnected disjunction_ in the framework of disjunctive existential rules. This concept allows us to achieve the _fus_ property for _connected linear rules_ and _connected domain restricted rules_ also in the presence of disjunctive rules with disconnected disjunction.
In terms of practical implementation, we introduced our system, ECompleto, specifically designed for the task of query rewriting within the framework of disjunctive existential rules. The system handles \(\mathsf{UCQ}^{\neg}\) queries that may include universally quantified negation. In addition, we expanded the DLGP+ format to facilitate the specification of disjunctive existential rules and the inclusion of negated atoms within queries.
The empirical evaluation of our system included a series of experiments conducted on established ontologies, namely the Lehigh University Benchmark (LUBM) and Travel ontologies, both augmented with supplementary axioms. The outcome of these experiments showed the consistent performance of ECompleto in generating finite \(\mathsf{UCQ}\)-rewritings for a diverse set of queries. Furthermore, the system exhibited acceptable efficiency during the rewriting process.
Finally, the experiments provided valuable insights into ECompleto's performance when producing \(\mathsf{UCQ}\)-rewritings for \(\mathsf{UCQ}^{\neg}\)s with universally quantified negation and disjunctive existential rules. The significant differences in runtime and memory consumption between the LUBM and Travel ontologies emphasize the importance of considering ontology complexity when assessing ECompleto's performance. Clustering and distribution patterns offered additional insights into the performance of ECompleto under different input queries. Overall, these findings contributed to the understanding of our system's capabilities and provided valuable information for researchers working with complex queries and ontologies in the context of disjunctive existential rules.
|
2305.07781 | Ultra-deep Keck/MOSFIRE spectroscopic observations of $z\sim 2$
galaxies: direct oxygen abundances and nebular excitation properties | Using deep near-infrared Keck/MOSFIRE observations, we analyze the
rest-optical spectra of eight star-forming galaxies in the COSMOS and GOODS-N
fields. We reach integration times of $\sim$10 hours in the deepest bands,
pushing the limits on current ground-based observational capabilities. The
targets fall into two redshift bins -- 5 galaxies at $z \sim 1.7$ and 3 at $z
\sim 2.5$ -- and were selected as likely to yield significant auroral-line
detections. Even with long integration times, detection of the auroral lines
remains challenging. We stack the spectra together into subsets based on
redshift, improving the signal-to-noise ratio on the [O III] $\lambda 4364$
auroral emission line and, in turn, enabling a direct measurement of the oxygen
abundance for each stack. We compare these measurements to commonly-employed
strong-line ratios alongside measurements from the literature. We find that the
stacks fall within the distribution of $z>1$ literature measurements, but a
larger sample size is needed to robustly constrain the relationships between
strong-line ratios and oxygen abundance at high redshift. We additionally
report detections of [O I] $\lambda6302$ for nine individual galaxies and
composite spectra of 21 targets in the MOSFIRE pointings. We plot their line
ratios on the [O III] $\lambda 5008$/H$\beta$ vs. [O I] $\lambda
6302$/H$\alpha$ diagnostic BPT diagram, comparing our targets to local galaxies
and H II regions. We find that the [O I]/H$\alpha$ ratios in our sample of
galaxies are consistent with being produced in gas ionized by $\alpha$-enhanced
massive stars, as has been previously inferred for rapidly-forming galaxies at
early cosmic times. | Leonardo Clarke, Alice Shapley, Ryan L. Sanders, Michael W. Topping, Tucker Jones, Mariska Kriek, Naveen A. Reddy, Daniel P. Stark, Mengtao Tang | 2023-05-12T22:01:14Z | http://arxiv.org/abs/2305.07781v2 | Ultra-deep Keck/MOSFIRE spectroscopic observations of \(z\sim 2\) galaxies: direct oxygen abundances and nebular excitation properties
###### Abstract
Using deep near-infrared Keck/MOSFIRE observations, we analyze the rest-optical spectra of eight star-forming galaxies in the COSMOS and GOODS-N fields. We reach integration times of \(\sim\)10 hours in the deepest bands, pushing the limits on current ground-based observational capabilities. The targets fall into two redshift bins -- 5 galaxies at \(z\sim 1.7\) and 3 at \(z\sim 2.5\) -- and were selected as likely to yield significant auroral-line detections. Even with long integration times, detection of the auroral lines remains challenging. We stack the spectra together into subsets based on redshift, improving the signal-to-noise ratio on the [O iii]\(\lambda\)4364 auroral emission line and, in turn, enabling a direct measurement of the oxygen abundance for each stack. We compare these measurements to commonly-employed strong-line ratios alongside measurements from the literature. We find that the stacks fall within the distribution of \(z>1\) literature measurements, but a larger sample size is needed to robustly constrain the relationships between strong-line ratios and oxygen abundance at high redshift. We additionally report detections of [O i]\(\lambda\)6302 for eight individual galaxies and composite spectra of 21 targets in the MOSFIRE pointings. We plot their line ratios on the [O iii]\(\lambda\)5008/H\(\beta\) vs. [O i]\(\lambda\)6302/H\(\alpha\) diagnostic BPT diagram, comparing our targets to local galaxies and H ii regions. We find that the [O i]/H\(\alpha\) ratios in our sample of galaxies are consistent with being produced in gas ionized by \(\alpha\)-enhanced massive stars, as has been previously inferred for rapidly-forming galaxies at early cosmic times.
Leonardo Clarke, Alice Shapley, Ryan L. Sanders, Michael W. Topping, Tucker Jones, Mariska Kriek, Naveen A. Reddy, Daniel P. Stark, and Mengtao Tang
## 1 Introduction
Tracing the chemical evolution of galaxies is key to understanding how galaxy growth and evolution occur over time. The metallicity of a galaxy is influenced by numerous mechanisms such as the reprocessing of gas into heavier elements through nucleosynthesis; metal-enriched outflows driven by supernovae, AGN, and stellar winds; accretion of pristine hydrogen gas onto a galaxy; and accretion of enriched, recycled gas in the form of galactic fountains (Tumlinson et al., 2017; Dave et al., 2017). Observationally, metallicity commonly refers to the gas-phase oxygen abundance in the interstellar medium (ISM) of a galaxy since oxygen is the most abundant metal and produces strong rest-optical emission line features. The oxygen abundance in the ISM of star-forming galaxies has been observed to correlate tightly with the stellar mass, encapsulated in what is referred to as the mass-metallicity relation (MZR). Early evidence for a MZR goes back to Lequeux et al. (1979) who measured oxygen abundances in a small sample of nearby blue compact dwarf galaxies. Later studies (e.g., Tremonti et al., 2004; Kewley & Ellison, 2008; Andrews & Martini, 2013) showed that there is a MZR that generally describes galaxies in the local universe. Furthermore, many works (e.g., Erb et al., 2006; Maiolino et al., 2008; Mannucci et al., 2009; Zahid et al., 2011; Steidel et al., 2014; Zahid et al., 2014; Yabe et al., 2015; Ly et al., 2016; Guo et al., 2016; Sanders et al., 2021) have revealed an evolution in the MZR with redshift, noting a change in the turnover mass and the normalization at higher \(z\). Folding in the global star-formation rate (SFR) to the MZR yields the fundamental metallicity relation (FMR), which appears not to evolve through
cosmic time at least as far back as \(z\sim 3\)(e.g., Mannucci et al., 2010; Sanders et al., 2021; Heintz et al., 2022).
The existence of these scaling relations with galaxy parameters gives insight into the processes that govern galaxy formation. The SFR, which is governed by the gas reservoir in a galaxy, is influenced by the baryon cycle, and is therefore tied to the chemical evolution in the ISM through the processes described above. Additionally, the stellar mass represents the integrated sum of star formation and is also related to the total metal production across a galaxy's lifetime. Overall, the three parameters that comprise the FMR probe the important mechanisms that determine galaxy evolution. The invariance of the FMR through a large portion of cosmic history suggests that galaxies are driven towards an equilibrium among inflow, star formation, and outflow (e.g., Dave et al., 2012; Peng and Maiolino, 2014). These scaling relations are additionally useful for hydrodynamical simulations of galaxy formation since the comparison with observations provides further constraints on the subgrid physics determining the outputs (e.g., Ma et al., 2016; Dave et al., 2017; De Rossi et al., 2017; Torrey et al., 2019). Thus, measuring these galaxy parameters with high accuracy is instrumental in our understanding of galaxy formation and evolution.
Making robust measurements of the metallicity of a galaxy, however, can prove quite challenging. One of the more physically-motivated methods of determining the gas-phase oxygen abundance of a galaxy involves measuring the average electron temperature and density of its H ii regions. From these physical properties, one can determine the emissivities of the emission-line transitions from each ion species which, when scaled by their respective line flux measurements, yields the abundance of each ion relative to hydrogen (i.e. O\({}^{+}\)/H\({}^{+}\) and O\({}^{2+}\)/H\({}^{+}\), see Izotov et al., 2006; Luridiana et al., 2015; Peimbert et al., 2017, for more detail). This method of abundance determination is often referred to as the "direct" method. However, to obtain the electron temperature, one must be able to detect a set of faint rest-optical auroral emission lines (e.g. [O iii]\(\lambda 4364\), [O ii]\(\lambda\lambda 7322,7332\)), which can be a hundred times fainter than their nebular counterparts, requiring very long exposure times (Garnett, 1992; Perez-Montero, 2017). The task becomes increasingly challenging when observing targets at \(z>1\) due to the varying transmission of Earth's atmosphere in the near-infrared (i.e., rest-frame optical) and the apparent faintness of targets due to their increased distance. Additionally, in the case of the [O iii]\(\lambda 4364\) line, whose strength relative to [O iii]\(\lambda 5008\) is temperature-sensitive, detection becomes more difficult in higher-metallicity galaxies. This challenge is due to the effects of more efficient metal-line cooling which leads to lower electron temperatures in the constituent H ii regions in more metal-rich galaxies, rendering direct metallicity measurements much more difficult.
In light of the challenges associated with the direct method, it is common to determine metal abundances using indirect metallicity indicators which rely on the ratios of strong emission lines (e.g., Dopita et al., 2013; Pilyugin and Grebel, 2016; Bian et al., 2018; Curti et al., 2020; Nakajima et al., 2022). These relations translating strong-line ratios to metallicity are calibrated to photoionization models and/or measurements of direct metallicity in H ii regions and galaxies in the local universe. It is uncertain, however, whether these indirect indicators remain accurate at higher redshifts since conditions in the ISM of galaxies at earlier cosmic times may not resemble those in the local universe. Current observations suggest that galaxies at \(z\sim 2\) may be characterized by harder ionizing spectra and may also have N/O ratios that vary slightly from galaxies in the local universe at fixed oxygen abundance (e.g., Shapley et al., 2015; Strom et al., 2017; Shapley et al., 2019; Heintz et al., 2022).
The evolving conditions in the ISM of galaxies has typically been traced by measurements of both high- and low-ionization emission lines (e.g., [O iii]/H\(\beta\), [N ii]/H\(\alpha\), [S ii]/H\(\alpha\)). However, one low-ionization diagnostic that has been missing in analyses of galaxies at \(z>1\) is the [O i]\(\lambda 6302\)/H\(\alpha\) line ratio. In studies of low-redshift galaxies, this line ratio offers insights into the hardness of the ionizing spectrum, the contribution of diffuse ionized gas (DIG), and the presence of shocks (e.g., Zhang et al., 2017). However, due to the intrinsic faintness of the [O i]\(\lambda 6302\) line, studies of this line diagnostic at high redshift have typically proven very difficult.
Similarly, due to the difficulty of obtaining auroral-line measurements at high redshift with ground-based facilities, there only exists a small sample of \(z>1\) galaxies in the literature for which direct oxygen abundances have been measured (examples from the literature are discussed in Sanders et al., 2020). Additionally, the integration times for most ground-based near-infrared spectroscopic surveys do not reach the required depth to achieve significant auroral-line detections. The prevalence of auroral-line detections is, however, increasing as a result of observations made by the new _James Webb Space Telescope_ (JWST) (e.g., Curti et al., 2023; Nakajima et al., 2023; Sanders et al., 2023).
The deep spectral observations analyzed in this study push the limits of ground-based, 10-m-class observatories and highlight the importance of JWST and other
future state-of-the-art observatories in characterizing galaxy properties at and beyond cosmic noon. In this study, we utilize the MOSFIRE instrument (McLean et al., 2012) on the 10-m Keck I telescope to make deep observations of a sample of eight galaxies at \(z\sim 1.7-2.5\), reaching up to \(\sim\)10 hours of integration time in some bands. We additionally produce composite spectra from this sample of eight galaxies and determine their average characteristics (i.e., chemical abundances, electron temperatures, and densities). The depth of these observations additionally enables the detection of the [O i]\(\lambda 6302\) feature, beyond the reach of more typical spectroscopic samples with shallower integration times (e.g., Kriek et al., 2015). Based on these measurements, we investigate the position of our sample of \(z\sim 2\) galaxies in the [O iii]\(\lambda 5008\)/H\(\beta\) vs. [N ii]\(\lambda 6585\)/H\(\alpha\), [S ii]\(\lambda\lambda 6718,6733\)/H\(\alpha\), and [O i]\(\lambda 6302\)/H\(\alpha\) diagnostic diagrams (hereafter BPT diagrams1), the latter probing an unexplored parameter space at \(z>1\).
Footnote 1: BPT refers to Baldwin et al. (1981), though the [O iii]\(\lambda 5008\)/H\(\beta\) vs. [S ii]\(\lambda\lambda 6716,6731\)/H\(\alpha\) diagnostic diagram was later introduced by Veilleux & Osterbrock (1987).
In Section 2 of this paper, we give an overview of the observational setup and data processing methods. In Section 3, we present the results of our analysis of the spectra in our sample. In Section 4, we compare our measurements with commonly-used metallicity indicators as well as consider the nature of [O i]\(\lambda 6302\) emission at high redshift. Throughout, we adopt the following cosmological parameters: \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm m}=0.3\), and \(\Omega_{\Lambda}=0.7\). Additionally, we assume a Chabrier (2003) IMF, solar abundances of \(12+\log({\rm O/H})_{\odot}=8.69\) and \(12+\log({\rm N/H})_{\odot}=7.83\), and a solar metallicity of \(Z_{\odot}=0.014\)(Asplund et al., 2009).
## 2 Methods and Observations
### Sample Selection and Observation Configurations
The target galaxies in this analysis reside in the COSMOS and GOODS-N extragalactic legacy fields covered by the CANDELS and 3D-HST surveys (Grogin et al., 2011; Koekemoer et al., 2011; Momcheva et al., 2016). These galaxies were drawn from the MOSFIRE Deep Evolution Field (MOSDEF) survey (Kriek et al., 2015) as well as the sample of extreme emission-line galaxies (EELGs) presented by Tang et al. (2019) selected from the 3D-HST WFC3 grism emission-line catalog (Momcheva et al., 2016). Targets of interest were selected based on their probability of yielding auroral-line detections. To this effect, the target sample was composed of galaxies that had bright [O iii]\(\lambda 5008\) emission and whose ratios among strong nebular emission lines suggested a high electron temperature based on relations observed in \(z\sim 0\) H ii regions (Sanders et al., 2017). Such thermal properties would result in brighter [O iii]\(\lambda 4364\) and [O ii]\(\lambda\lambda 7322,7332\) auroral lines, necessary for making direct metallicity measurements. We also required galaxy targets to lie within the following redshift intervals: \(1.62\leq z\leq 1.70\), \(2.32\leq z\leq 2.61\), and \(2.95\leq z\leq 3.18\) to capture both auroral and strong nebular emission lines in the near-infrared atmospheric transmission windows covered by the Y, J, H, and K-bands.
In light of these selection criteria, we targeted a sample of 12 galaxies, which we refer to as "auroral" targets. Seven of the auroral targets were in the COSMOS field and five were in the GOODS-N field. Two of the galaxies from the COSMOS field are the subjects of a recent paper by Sanders et al. (2023) in which their oxygen abundances were measured via the [O ii]\(\lambda\lambda 7322,7332\) auroral line doublet, representing the first such measurements beyond the local universe.
From the remaining sample of 10 [O iii] auroral targets, two from the COSMOS field were not considered in this study. The first of these targets was excluded because the galaxy was dithered on top of an adjacent object in the field, causing the target signal to be strongly contaminated by the negative trace of its neighbor on the slit. The second of these targets was at \(z=3.12\), placing H\(\alpha\) beyond the coverage of the K-band, and the remaining higher-order Balmer emission lines fell onto atmospheric sky lines. As a result, it was not possible to apply Balmer-decrement-based dust corrections on this particular target. Because of these considerations, our final auroral-target sample consisted of eight galaxies: three at \(z\sim 2.5\) in the COSMOS field and five at \(z\sim 1.7\) in the GOODS-N field. The properties of these targets are summarized in Table 1. We utilized the multiplexing capabilities of MOSFIRE to observe an additional 29 filler targets (14 in the COSMOS pointing and 15 in the GOODS-N pointing) that we analyzed for [O i]\(\lambda 6302\) emission, and we refer to these targets as "non-auroral" targets.
Observations were collected over six nights: January 13, 2019 (COSMOS \(H\)-band); March 16, 2019 (COSMOS \(J\)- and \(H\)-bands); March 3, 2021 (GOODS-N \(J\)-band); March 4, 2021 (COSMOS \(K\)-band, GOODS-N \(J\)-band); April 19, 2021 (GOODS-N \(Y\)-, \(J\)-, and \(H\)-bands); and May 1, 2021 (GOODS-N \(H\)-band). The observations were taken with \(0\farcs 7\) slits using an ABA'B' dither pattern, and dithered frames were aligned and combined
to perform sky subtraction. Individual frames in the J and H bands had exposure times of 120 s, while the Y and K bands had exposure times of 180 s. The total integration times in the J, H, and K bands in the COSMOS pointing were 1.86 h, 8.05 h, and 3.88 h respectively, while the GOODS-N pointing integration times in the Y, J, and H bands were 1.39 h, 9.71 h, and 1.06 h respectively. For COSMOS-19439 and COSMOS-19753, there were existing observations from the MOSDEF survey with 2 hours in each band. We therefore combined the deep MOSFIRE spectra with existing MOSDEF observations for these two targets, thereby increasing the total exposure time by 2 hours. The spectral resolutions in the Y, J, H, and K bands were 3400, 3000, 3650, and 3600 respectively. The median seeing for the COSMOS observations was \(0\farcs 79\), \(0\farcs 53\), and \(0\farcs 47\) in J, H, and K respectively, while the median seeing in GOODS-N was \(0\farcs 84\), \(0\farcs 61\), and \(0\farcs 67\) in Y, J, and H respectively.
### Data Reduction and Flux Calibration
The two-dimensional (2D) reduction of the MOSFIRE data was performed using an IDL data processing pipeline described in Kriek et al. (2015). The extractions of the 2D spectra were accomplished for each target on each mask individually using the bmep2 (Freeman et al., 2019) IDL program, and we used the same slit-loss correction routine as in Reddy et al. (2015) and Kriek et al. (2015).
Footnote 2: [https://github.com/billfreeman44/bmep](https://github.com/billfreeman44/bmep)
In order to monitor seeing conditions and carefully combine individual frames, a slit was placed on a star in each mask. Each exposure was weighted according to the observed flux of the slit star. An issue arose with the flux calibrations from the targets on the COSMOS mask due to the fact that a galaxy in the field was dithered on top of the slit star for that mask, thereby contaminating the slit star spectrum that was used to apply the absolute flux scaling. Consequently, the default flux calibration for the COSMOS mask was unreliable. This effect had varying significance in each of the bands. In order to mitigate this source of systematic error and to ensure accurate band-to-band flux calibrations, we compared the spectra with available photometric observations and with corresponding spectral observations from the MOSDEF survey. The existing MOSDEF spectra were taken with different slit mask configurations from those used for the new, deep observations and thus do not suffer from the same dithering issue.
The targets in the COSMOS field that had corresponding data in the MOSDEF survey were scaled multiplicatively such that the total flux of significantly-detected lines (\(>\)5\(\sigma\)) matched the flux of the same lines in the MOSDEF spectra. For targets on the COSMOS mask with no corresponding MOSDEF observations or without \(>\)5\(\sigma\) line detections in both data sets, the spectra in each band were scaled by the average of the scaling factors on the mask in each respective filter. On average, these scaling factors adjusted the flux calibrations in each band on the order of 3-30%.
Additionally, the spectra on the GOODS-N mask were compared with existing observations in order to ensure the robustness of the flux calibrations. Since the targets on this mask did not have corresponding MOSDEF observations, the spectra in each band were scaled to agree with broad-band ground-based photometry as well as 3D-HST (Skelton et al., 2014; Momcheva et al., 2016) photometric and spectroscopic measurements.
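For illustration, the band-to-band rescaling reduces to one multiplicative factor per band, computed from the well-detected lines; a minimal sketch with placeholder values (not measurements from this work) follows:

```python
import numpy as np

def band_scale_factor(deep_fluxes, deep_errs, ref_fluxes):
    """Scale factor matching the total flux of >5-sigma lines to the reference."""
    deep = np.asarray(deep_fluxes)
    bright = deep / np.asarray(deep_errs) > 5.0
    return np.sum(np.asarray(ref_fluxes)[bright]) / np.sum(deep[bright])

scale = band_scale_factor([10.9, 1.3], [0.3, 0.23], [11.4, 1.2])
spectrum = np.ones(2048)                 # placeholder 1D flux array for one band
calibrated_spectrum = scale * spectrum   # apply the band's factor everywhere
print(round(scale, 3))
```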
| ID | R.A. (J2000) | Dec. (J2000) | \(z\) | \(\log{(M_{*}/M_{\odot})}\) | \(\log{(t_{age}/{\rm yr})}\) | SFR (\(M_{\odot}\) yr\({}^{-1}\)) | sSFR (Gyr\({}^{-1}\)) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| COSMOS-18812 | 10:00:36.896 | +02:22:13.82 | 2.46236 | \(8.74^{+0.07}_{-0.04}\) | \(8.30^{+0.14}_{-0.18}\) | \(7.47^{+4.26}_{-2.53}\) | \(13.26\pm 6.42\) |
| COSMOS-19439 | 10:00:24.360 | +02:22:36.20 | 2.46598 | \(10.26^{+0.00}_{-0.00}\) | \(9.40^{+0.00}_{-0.00}\) | \(141.46^{+48.18}_{-34.46}\) | \(7.90\pm 2.25\) |
| COSMOS-19753 | 10:00:18.182 | +02:22:50.31 | 2.46884 | \(10.55^{+0.04}_{-0.00}\) | \(9.40^{+0.00}_{-0.00}\) | \(68.98^{+4.82}_{-4.54}\) | \(1.94\pm 0.16\) |
| GOODS-N-6699 | 12:36:23.385 | +62:10:29.04 | 1.66448 | \(9.82^{+0.05}_{-0.05}\) | \(9.50^{+0.00}_{-0.40}\) | \(11.51^{+2.62}_{-2.20}\) | \(1.78\pm 0.42\) |
| GOODS-N-8013 | 12:36:52.008 | +62:10:54.80 | 1.66776 | \(9.44^{+0.04}_{-0.03}\) | \(8.80^{+0.10}_{-0.09}\) | \(6.68^{+0.70}_{-0.09}\) | \(2.43\pm 0.31\) |
| GOODS-N-8240 | 12:36:25.249 | +62:10:58.91 | 1.69090 | \(9.76^{+0.02}_{-0.13}\) | \(9.20^{+1.08}_{-0.60}\) | \(12.36^{+9.65}_{-6.22}\) | \(2.14\pm 1.43\) |
| GOODS-N-14595 | 12:36:13.373 | +62:12:49.91 | 1.67596 | \(9.02^{+0.09}_{-0.11}\) | \(8.90^{+0.20}_{-0.50}\) | \(9.60^{+3.36}_{-0.29}\) | \(9.08\pm 3.61\) |
| GOODS-N-18462 | 12:36:11.906 | +62:13:58.80 | 1.67463 | \(9.52^{+0.05}_{-0.03}\) | \(9.30^{+0.11}_{-0.10}\) | \(3.46^{+0.30}_{-0.29}\) | \(1.04\pm 0.13\) |

Note. – Some of the uncertainties on the stellar masses and ages output from FAST are quoted as being \(\pm 0.00\), which only represents an uncertainty on the fitting of the models to the data. It does not account for systematic uncertainties, which were estimated to be \(\sim\)0.1 dex in a similar analysis by Muzzin et al. (2009).

Table 1: [O iii] auroral targets and physical properties.
### SED and Emission-line Fitting
The spectral energy distributions (SEDs) of each target galaxy in the COSMOS field were fit across 43 photometric data points drawn from the 3D-HST catalog spanning from 3500 A to 8 \(\mu\)m in the observed frame. Similarly, for the GOODS-N targets, 22 photometric points were fit across the same wavelength range. We corrected the near-IR photometric data for bright rest-frame optical emission lines using the emission-line fluxes determined from the MOSFIRE spectra analyzed in this study. The SEDs were fit with flexible stellar population synthesis models (Conroy and Gunn, 2010) using the FAST fitting code (Kriek et al., 2009) in order to determine parameters such as stellar mass and age. We assumed an SMC attenuation curve with a stellar metallicity of 0.22 \(Z_{\odot}\), a delayed-\(\tau\) star-formation history of the form \(t\times\exp{(-t/\tau)}\), and a Chabrier (2003) IMF.
We used a non-linear least squares algorithm to fit a Gaussian profile to the emission line features in each of the individual spectra as well as the stacks. Uncertainties on the line flux measurements were determined using a Monte-Carlo simulation in which each spectrum was perturbed according to the error spectrum over 100 iterations. The weaker emission-line widths were tied to the velocity widths of H\(\alpha\) and [O iii]\(\lambda\)5008. Additionally, the H\(\alpha\) and [N ii] lines were fit simultaneously, with the fluxes of [N ii]\(\lambda\)6550 and [N ii]\(\lambda\)6585 being tied in a ratio of 1:3 respectively. When fitting the [O i]\(\lambda\)6302 and [O i]\(\lambda\)6365 lines, they were fixed with a flux ratio of 3:1 respectively. For targets that appeared to have a broad/offset component to their emission-line profiles (e.g., non-auroral targets COSMOS-19812, 19985, 20062), we fit the brightest lines with a double Gaussian and reported only the narrow-component flux since the additional component is likely attributed to outflows or other gas not physically associated with H ii regions (Leung et al., 2017). The SED fits determined from the photometry were used to model the continuum and the Balmer stellar absorption troughs. The initial, non-emission-line corrected SEDs were used to obtain a continuum fit and estimate line fluxes, and these line flux estimates were then used to correct the near-IR photometry. In turn, we re-fit the emission lines using the corrected SEDs to obtain our final line flux measurements. These line fluxes are reported along with derived physical quantities in Table 2.
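A simplified sketch of this scheme, a single-Gaussian fit with Monte Carlo uncertainties from perturbing the spectrum by its error array, is given below; tied line widths, multi-component fits, and the SED-based continuum are omitted for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(wave, amp, center, sigma):
    return amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def fit_line_flux(wave, flux, err, center_guess):
    popt, _ = curve_fit(gaussian, wave, flux, p0=[flux.max(), center_guess, 2.0],
                        sigma=err)
    amp, _, sig = popt
    return amp * abs(sig) * np.sqrt(2.0 * np.pi)  # integrated line flux

def mc_flux_error(wave, flux, err, center_guess, n_iter=100, seed=0):
    """Std. dev. of fluxes refit after perturbing by the error spectrum."""
    rng = np.random.default_rng(seed)
    trials = [fit_line_flux(wave, flux + rng.normal(0.0, err), err, center_guess)
              for _ in range(n_iter)]
    return np.std(trials)

# Synthetic demonstration around rest-frame Halpha
wave = np.linspace(6540.0, 6590.0, 200)
err = np.full_like(wave, 0.05)
flux = gaussian(wave, 1.0, 6565.0, 2.0) + np.random.default_rng(1).normal(0, 0.05, wave.size)
print(fit_line_flux(wave, flux, err, 6565.0), mc_flux_error(wave, flux, err, 6565.0))
```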
### Composite spectra
We found that the signal-to-noise (S/N) on [O iii]\(\lambda\)4364 was very low (\(<2\sigma\)) across most of the sample, so we created composite spectra (or "stacks") in order to boost S/N and derive average galaxy properties for each stack. We chose to stack our sample using three different configurations that are laid out in Table 3. Stack 1 (S1), consisted of the three \(z\sim 2.5\) [O iii] auroral targets in the COSMOS field, stack 2 (S2) consisted of the five \(z\sim 1.7\) GOODS-N auroral targets, and stack 3 (S3) consisted of all [O iii] auroral targets in this study. In addition to the [O iii] auroral-line stacks, we also created composite spectra that consisted of both auroral and filler targets in the MOSFIRE pointings that had coverage of the [O i]\(\lambda\)6302 line unaffected by atmospheric sky lines. We divided these "[O i]" stacks into two groups: a low-redshift (\(1\leq z<2\)) stack consisting of 13 targets, and a high-redshift (\(2\leq z\leq 3\)) stack consisting of 8 targets.
In order to create the composite spectra, we first used the H\(\alpha\)/H\(\beta\) Balmer decrement and a Cardelli et al. (1989) extinction curve to correct for internal dust extinction. We then normalized each spectrum by H\(\alpha\) luminosity before shifting into the rest frame. Prior to stacking, each spectrum was interpolated and resampled to the same wavelength grid with 0.5 A spacing. We then averaged together the SED models of each component spectrum after normalizing by H\(\alpha\) luminosity, and the resulting composite SED curve was used to model the continuum of the stacks during the line-fitting procedure. We report line luminosity measurements for each of the stacks in Table 3. Additionally, we show the stacked spectra in Figure 1 with emission lines of interest labeled.
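A condensed sketch of these steps follows. For brevity, the Cardelli et al. (1989) curve is stood in for by an interpolation between its commonly quoted H\(\beta\) and H\(\alpha\) values (\(k\approx 3.61\) and \(2.53\) for \(R_{V}=3.1\)); a real implementation would evaluate the full extinction law:

```python
import numpy as np

K_HB, K_HA = 3.61, 2.53  # Cardelli k-values at Hbeta, Halpha (R_V = 3.1)

def k_lambda(wave_rest):
    # Crude stand-in for the full Cardelli et al. (1989) curve
    return np.interp(wave_rest, [4863.0, 6565.0], [K_HB, K_HA])

def ebv_from_balmer(f_ha, f_hb):
    """E(B-V) from the Balmer decrement, assuming intrinsic Ha/Hb = 2.86."""
    return max(0.0, 2.5 / (K_HB - K_HA) * np.log10((f_ha / f_hb) / 2.86))

def stack(spectra, grid=np.arange(3600.0, 6800.0, 0.5)):
    """spectra: list of dicts with keys wave (observed, increasing), flux, z,
    f_ha, f_hb (line fluxes), and L_ha (dust-corrected Halpha luminosity)."""
    resampled = []
    for s in spectra:
        wave_rest = s["wave"] / (1.0 + s["z"])      # shift to rest frame
        ebv = ebv_from_balmer(s["f_ha"], s["f_hb"])
        dedust = s["flux"] * 10.0 ** (0.4 * ebv * k_lambda(wave_rest))
        resampled.append(np.interp(grid, wave_rest, dedust / s["L_ha"]))
    return grid, np.mean(resampled, axis=0)
```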
## 3 Results and Determination of Physical Quantities
### SFR vs. stellar mass
Figure 1: Composite spectra of the [O iii] auroral targets. The black curve shows the luminosity density, while the error spectrum is shown in red in the inset axes. Each of the prominent emission lines is labeled and marked with a blue dotted line.

Table 2: Observed emission-line fluxes \(F_{\rm obs}(\lambda)\) (in units of \(10^{-17}\) erg s\({}^{-1}\) cm\({}^{-2}\)) and derived physical properties of the [O iii] auroral targets. The first three columns of values are COSMOS targets; the remaining five are GOODS-N targets.

| Line | 18812 | 19439 | 19753 | 6699 | 8013 | 8240 | 14595 | 18462 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| [O ii] \(\lambda 3726\) | \(<2.66\) | \(2.78\pm 0.71\) | \(6.37\pm 0.49\) | \(2.18\pm 0.26\) | \(3.88\pm 0.48\) | \(1.92\pm 0.42\) | \(1.29\pm 0.24\) | \(1.74\pm 0.25\) |
| [O ii] \(\lambda 3730\) | \(0.50\pm 0.23\) | \(2.33\pm 0.35\) | \(8.21\pm 0.55\) | \(2.58\pm 0.23\) | \(4.30\pm 0.66\) | \(2.03\pm 0.29\) | \(1.38\pm 0.21\) | \(1.82\pm 0.32\) |
| [Ne iii] \(\lambda 3870\) | – | \(<1.52\) | \(<1.19\) | \(<0.60\) | \(0.55\pm 0.17\) | \(<1.11\) | \(<0.57\) | \(<0.65\) |
| H\(\delta\) \(\lambda 4103\) | – | – | – | \(0.58\pm 0.17\) | \(1.22\pm 0.30\) | \(0.57\pm 0.19\) | \(<0.93\) | \(<1.23\) |
| H\(\gamma\) \(\lambda 4342\) | \(0.58\pm 0.09\) | \(1.12\pm 0.14\) | \(2.45\pm 0.40\) | \(1.83\pm 0.21\) | \(2.10\pm 0.17\) | \(1.59\pm 0.14\) | \(1.08\pm 0.10\) | \(1.35\pm 0.20\) |
| [Fe ii] \(\lambda 4360\)* | \(<0.34\) | \(0.18\pm 0.08\) | \(<0.31\) | \(0.41\pm 0.11\) | \(<0.78\) | \(<0.38\) | \(<0.24\) | \(<0.62\) |
| [O iii] \(\lambda 4364\)* | \(0.34\pm 0.09\) | \(<0.28\) | \(<0.27\) | \(<0.51\) | \(<0.99\) | \(<0.29\) | \(0.20\pm 0.06\) | \(<0.38\) |
| [O iii] \(\lambda 4364\) | \(0.34\pm 0.11\) | \(<0.28\) | \(<0.27\) | \(<0.51\) | \(<0.99\) | \(0.16\pm 0.08\) | \(<0.25\) | \(<0.38\) |
| H\(\beta\) \(\lambda 4863\) | \(1.10\pm 0.15\) | \(2.30\pm 0.08\) | \(5.88\pm 0.10\) | \(3.21\pm 0.10\) | \(4.77\pm 0.13\) | \(3.23\pm 0.22\) | \(2.24\pm 0.10\) | \(2.57\pm 0.14\) |
| [O iii] \(\lambda 4960\) | \(2.49\pm 0.07\) | \(3.95\pm 0.10\) | \(7.47\pm 0.22\) | \(5.29\pm 0.14\) | \(5.71\pm 0.19\) | \(4.71\pm 0.08\) | \(3.79\pm 0.08\) | \(3.71\pm 0.10\) |
| [O iii] \(\lambda 5008\) | \(6.99\pm 0.11\) | \(11.52\pm 0.15\) | \(21.49\pm 0.12\) | \(15.76\pm 0.41\) | \(15.61\pm 0.10\) | \(14.03\pm 0.22\) | \(11.11\pm 0.12\) | \(10.30\pm 0.11\) |
| [O i] \(\lambda 6302\) | \(<0.40\) | \(<0.48\) | \(0.47\pm 0.09\) | \(0.26\pm 0.11\) | \(<0.40\) | \(<1.35\) | \(<0.35\) | \(<0.34\) |
| [O i] \(\lambda 6365\) | \(<0.40\) | \(<0.63\) | \(0.15\pm 0.03\) | \(0.08\pm 0.04\) | \(<0.37\) | \(<1.96\) | \(<0.32\) | \(<0.40\) |
| [N ii] \(\lambda 6550\) | \(<0.37\) | \(0.42\pm 0.12\) | \(1.01\pm 0.21\) | \(<1.85\) | \(<0.79\) | \(<0.76\) | \(<0.61\) | \(<0.80\) |
| H\(\alpha\) \(\lambda 6565\) | \(3.28\pm 0.14\) | \(10.89\pm 0.30\) | \(20.21\pm 0.21\) | \(11.58\pm 0.36\) | \(12.51\pm 0.79\) | \(12.22\pm 1.35\) | \(5.07\pm 0.17\) | \(5.41\pm 0.24\) |
| [N ii] \(\lambda 6585\) | \(<0.33\) | \(1.27\pm 0.23\) | \(3.03\pm 0.26\) | \(1.55\pm 0.29\) | \(1.24\pm 0.28\) | \(1.13\pm 0.35\) | \(<0.54\) | \(0.55\pm 0.23\) |
| [S ii] \(\lambda 6716\) | – | \(<0.61\) | \(1.85\pm 0.25\) | \(<2.56\) | \(1.90\pm 0.68\) | \(<3.12\) | \(0.56\pm 0.21\) | – |
| [S ii] \(\lambda 6731\) | – | \(0.90\pm 0.25\) | \(1.66\pm 0.26\) | \(0.76\pm 0.22\) | \(<1.15\) | \(<17.44\) | \(<1.22\) | – |
| E(B-V)\({}_{gas}\) | \(0.04^{+0.16}_{-0.13}\) | \(0.51^{+0.05}_{-0.05}\) | \(0.19^{+0.02}_{-0.02}\) | \(0.23^{+0.04}_{-0.04}\) | \(0.00\) | \(0.29^{+0.12}_{-0.14}\) | \(0.10^{+0.05}_{-0.05}\) | \(0.00\) |
| \(T_{e}({\rm O}^{2+})\) (\(10^{4}\) K) | \(2.11^{+0.26}_{-0.39}\) | \(<1.42\) | \(<1.13\) | \(<2.41\) | \(<2.29\) | \(1.30^{+0.20}_{-0.25}\) | \(<1.86\) | \(<1.66\) |
| \(T_{e}({\rm O}^{+})\) (\(10^{4}\) K) | \(1.77^{+0.26}_{-0.25}\) | \(<1.30\) | \(<1.09\) | \(<1.99\) | \(<1.90\) | \(1.20^{+0.20}_{-0.20}\) | \(<1.61\) | \(<1.46\) |
| \(n_{e}\) (\(10^{2}\) cm\({}^{-3}\)) | – | \(10.70^{+12.55}_{-6.84}\) | \(1.61^{+1.32}_{-0.08}\) | \(4.20^{+2.20}_{-7.79}\) | \(5.26^{+4.77}_{-3.34}\) | \(5.06^{+4.97}_{-3.26}\) | \(5.38^{+6.53}_{-3.12}\) | \(5.57^{+6.11}_{-3.44}\) |
| \(12+\log({\rm O}^{+}/{\rm H})\) | – | \(>7.78\) | \(>7.89\) | \(>6.89\) | \(>6.85\) | \(7.49^{+0.32}_{-\ldots}\) | … | … |

In Figure 2, we plot the SFRs vs. stellar masses for our sample of 8 auroral-line targets and compare them to typical values from the literature. We calculate SFRs from dust-corrected H\(\alpha\) luminosities using a conversion factor of \(3.236\times 10^{-42}\) M\({}_{\odot}\) yr\({}^{-1}\) erg\({}^{-1}\) s based on models of stellar populations with sub-solar metallicity consistent with what we assume for SED fitting (Reddy et al., 2018). The galaxies in this sample agree well with the star-forming main sequence defined by larger populations of galaxies at similar redshifts. Shivaei et al. (2015) performed a linear fit to a sample of 185 galaxies in the range \(1.37\leq z\leq 2.61\), and Topping et al. (2021) fit a sample of 285 galaxies in the range \(1.37\leq z\leq 1.70\). These two empirical fits are shown as dashed and dotted black lines respectively, and they agree with the five GOODS-N galaxies at \(z\sim 1.7\) with the exceptions of GOODS-N-18462 and GOODS-N-14595, which are offset in log(SFR) by roughly -0.3 dex and +0.6 dex respectively. We additionally plot two main sequence relations defined by Speagle et al. (2014) at the median redshifts of our COSMOS and GOODS-N samples shown in orange and blue, respectively. As an additional point of reference, we overplot a sample of 280 \(z\sim 2.3\) galaxies distributed among five stacks by Sanders et al. (2021), and we re-scale the stacks to use the same H\(\alpha\) to SFR conversion factor we utilize in this study. For both the \(z\sim 1.7\) and \(z\sim 2.5\) galaxies in this study, we find agreement within \(1\sigma\) relative to the respective main sequence fits with the exception of GOODS-N-18462, which falls slightly below the \(z\sim 1.7\) relation. Overall, this sample of galaxies is relatively representative in terms of SFR at fixed stellar mass based on larger samples in the same redshift range.
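The SFR values follow directly from the quoted conversion factor; as a one-line sketch:

```python
def sfr_from_halpha(L_ha_corrected):
    """SFR in Msun/yr from the dust-corrected Halpha luminosity in erg/s
    (Reddy et al. 2018 conversion quoted above)."""
    return 3.236e-42 * L_ha_corrected

print(sfr_from_halpha(1.0e42))  # ~3.2 Msun/yr
```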
### Abundance Determinations
Table 3: Catalog of observed emission-line luminosities \(L_{\rm obs}(\lambda)\) (in units of \(10^{41}\) erg s\({}^{-1}\)) and physical properties for stacked spectra.

| Line | S1 (COSMOS) | S2 (GOODS-N) | S3 (Full sample) |
| --- | --- | --- | --- |
| [O ii] \(\lambda 3726\) | \(88.13\pm 28.17\) | \(6.78\pm 1.17\) | \(19.58\pm 5.00\) |
| [O ii] \(\lambda 3730\) | \(89.12\pm 24.34\) | \(8.15\pm 1.11\) | \(21.86\pm 3.36\) |
| [Ne iii] \(\lambda 3870\) | – | \(<1.36\) | – |
| H\(\delta\) \(\lambda 4103\) | – | \(<1.60\) | – |
| H\(\gamma\) \(\lambda 4342\) | \(27.21\pm 4.52\) | \(4.46\pm 0.52\) | \(9.23\pm 0.90\) |
| [Fe ii] \(\lambda 4360\)* | \(<12.91\pm 0.50\) |  | \(1.87\pm 0.84\) |
| [O iii] \(\lambda 4364\)* | \(7.56\pm 3.21\) | \(<0.76\) | \(1.40\pm 0.46\) |
| [O iii] \(\lambda 4364\) | \(7.57\pm 2.71\) | \(<0.50\) | \(1.39\pm 0.40\) |
| H\(\beta\) \(\lambda 4863\) | \(49.78\pm 4.91\) | \(9.16\pm 0.50\) | \(17.38\pm 0.94\) |
| [O iii] \(\lambda 4960\) | \(76.66\pm 8.72\) | \(13.19\pm 0.90\) | \(26.04\pm 1.33\) |
| [O iii] \(\lambda 5008\) | \(255.96\pm 24.15\) | – | – |
| [O i] \(\lambda 6302\) | \(<5.08\) | \(<0.74\) | \(<0.81\) |
| [O i] \(\lambda 6365\) | \(<5.44\) | \(<0.25\) | \(<0.27\) |
| [N ii] \(\lambda 6550\) | \(3.21\pm 0.74\) | \(0.62\pm 0.10\) | \(1.08\pm 0.16\) |
| H\(\alpha\) \(\lambda 6565\) | \(110.87\pm 6.29\) | \(24.01\pm 1.25\) | \(43.23\pm 1.47\) |
| [N ii] \(\lambda 6585\) | \(9.63\pm 2.23\) | \(1.85\pm 0.29\) | \(3.25\pm 0.47\) |
| [S ii] \(\lambda 6716\) | – | – | – |
| [S ii] \(\lambda 6731\) | – | – | – |
| \(\log(M/M_{\odot})_{\rm avg}\) | \(9.85\pm 0.04\) | \(9.51\pm 0.06\) | \(9.64\pm 0.04\) |
| \(T_{e}({\rm O}^{2+})\) (\(10^{4}\) K) | \(1.96^{+0.33}_{-0.37}\) | \(<1.27\) | \(1.44^{+0.19}_{-0.19}\) |
| \(T_{e}({\rm O}^{+})\) (\(10^{4}\) K) | \(1.67^{+0.26}_{-0.26}\) | \(<1.19\) | \(1.30^{+0.15}_{-0.15}\) |
| \(n_{e}\) (\(10^{2}\) cm\({}^{-3}\)) | \(8.68^{+4.23}_{-0.21}\) | \(3.14^{+3.55}_{-2.44}\) | \(6.65^{+5.99}_{-0.22}\) |
| \(12+\log({\rm O}^{+}/{\rm H})\) | \(7.41^{+0.26}_{-0.22}\) | \(>7.49\) | \(7.53^{+0.27}_{-0.18}\) |
| \(12+\log({\rm O}^{2+}/{\rm H})\) | \(7.43^{+0.21}_{-0.16}\) | \(>7.85\) | \(7.72^{+0.17}_{-0.15}\) |
| \(12+\log({\rm O}/{\rm H})\) | \(7.75^{+0.20}_{-0.15}\) | \(>8.01\) | \(7.96^{+0.15}_{-0.12}\) |
| \(12+\log({\rm N}^{+}/{\rm H})\) | \(6.13^{+0.17}_{-0.17}\) | \(>6.44\) | \(6.31^{+0.12}_{-0.15}\) |
| \(12+\log({\rm N}/{\rm H})\) | \(6.46^{+0.18}_{-0.18}\) | \(>6.96\) | \(6.73^{+0.12}_{-0.11}\) |
| \(\log({\rm N}/{\rm O})\) | \(-1.30^{+0.17}_{-0.19}\) | – | \(-1.22^{+0.11}_{-0.12}\) |

Figure 2: SFR vs. stellar mass. The colored squares indicate the auroral-line targets included in this study, with blue and orange corresponding to \(z\sim 1.7\) and \(z\sim 2.5\), respectively. The dashed black line shows a linear fit from Shivaei et al. (2015), while the solid blue and orange lines show the \(z\sim 1.7\) and \(z\sim 2.5\) SFR vs. stellar mass relations respectively from Speagle et al. (2014). The 1-\(\sigma\) scatter for each main sequence line is shaded in its respective color. The red triangles represent spectral stacks of \(z\sim 2.3\) galaxies from Sanders et al. (2021).

Throughout this paper, we refer to "direct" oxygen abundances as those derived from determining the ion emissivities based on electron temperature and density measurements as opposed to "indirect" methods, which use the ratios of strong nebular emission lines empirically calibrated to local direct measurements or photoionization models. The direct method of oxygen abundance approximates a galaxy as a single H ii region and thus characterizes electron temperatures and densities based on globally-integrated spectra. In this H ii region approximation, it is conventional to further define two temperatures: the temperature associated with the high-ionization state and the low-ionization state of the ions of interest (e.g., \(T_{e}(\mathrm{O}^{2+})\) and \(T_{e}(\mathrm{O}^{+})\), respectively). Since the energy required to ionize neutral oxygen is similar to that of hydrogen, we expect that nearly all of the oxygen within H ii regions is ionized, either in the singly or doubly-ionized state, with negligible amounts in higher ionization or neutral states. Therefore, in order to determine the oxygen abundance directly, we sum the abundances of oxygen in its two most prevalent ionization states:
\[\frac{\mathrm{O}}{\mathrm{H}}\approx\frac{\mathrm{O}^{+}}{\mathrm{H}^{+}}+\frac{ \mathrm{O}^{2+}}{\mathrm{H}^{+}} \tag{1}\]
Determining the abundances of each ionization species requires knowledge of the electron temperatures and densities associated with the respective ionization zones. In the \(\mathrm{O}^{2+}\) zone, this can be achieved through measurements of the [O iii]\(\lambda\lambda 4960,5008\) and the [O iii]\(\lambda 4364\) lines, where the [O iii]\(\lambda 4364\) transition originates from a different upper energy level than the [O iii]\(\lambda\lambda 4960,5008\) transitions. In turn, measuring the ratios of these lines allows one to determine the electron temperature associated with the \(\mathrm{O}^{2+}\) zone. The same can be achieved for the \(\mathrm{O}^{+}\) ion, using the [O ii]\(\lambda\lambda 3727,3730\) and [O ii]\(\lambda\lambda 7322,7332\) lines, which arise from different respective upper energy levels. One of the major challenges in determining direct oxygen abundances, however, is that the auroral lines produced by transitions from the upper energy levels are often intrinsically very faint, and their detection becomes the main limiting factor in obtaining direct abundance estimates. Ideally, one would measure the nebular and the corresponding faint auroral emission lines originating from the \(\mathrm{O}^{+}\) and \(\mathrm{O}^{2+}\) ions to directly determine the electron temperature for both ionization zones. However, since our sample of [O iii] auroral targets does not have coverage of the auroral [O ii] lines, we employ the following theoretical relation presented by Campbell et al. (1986) to infer \(T_{e}(\mathrm{O}^{+})\):
\[T_{e}(\mathrm{O}^{+})=0.7\times T_{e}(\mathrm{O}^{2+})+3000\,\mathrm{K} \tag{2}\]
When converting a \(T_{e}(\mathrm{O}^{+})\) measurement to \(T_{e}(\mathrm{O}^{2+})\), observations suggest an intrinsic scatter of approximately 1300 K in the Campbell et al. (1986) relation (Berg et al., 2020; Rogers et al., 2021). Since we are instead converting \(T_{e}(\mathrm{O}^{2+})\) to \(T_{e}(\mathrm{O}^{+})\) using equation 2, we adopt an intrinsic scatter of \(0.7\times 1300\) K = 910 K, and add this in quadrature when determining the uncertainty on \(T_{e}(\mathrm{O}^{+})\).
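As a concrete example, a measured \(T_{e}(\mathrm{O}^{2+})=1.44\times 10^{4}\) K implies \(T_{e}(\mathrm{O}^{+})=0.7\times 14400\,\mathrm{K}+3000\,\mathrm{K}\approx 1.31\times 10^{4}\) K, consistent with the values derived for stack S3 below, with the 910 K intrinsic scatter then folded into the quoted uncertainty.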
Electron temperatures, densities, and ionic abundances were determined using the PyNeb package (Luridiana et al., 2015). In order to compute the electron density \(n_{e}\), we used the getCrossTemDen() method to simultaneously solve for \(T_{e}(\mathrm{O}^{2+})\) and \(n_{e}\), taking the [O ii]\(\lambda 3727\)/[O ii]\(\lambda 3730\) ratio to be the density-sensitive tracer. With the output values of \(T_{e}\) and \(n_{e}\) from getCrossTemDen(), we computed the ionic abundances using the getIonAbundance() method, using the ratios of [O iii]\(\lambda 4960\) and [O ii]\(\lambda\lambda 3727,3730\) relative to H\(\beta\) to compute the \(\mathrm{O}^{2+}/\mathrm{H}^{+}\) and the \(\mathrm{O}^{+}/\mathrm{H}^{+}\) abundances, respectively.
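A minimal PyNeb sketch of this sequence, with illustrative line-ratio values rather than our measurements (diagnostic labels follow PyNeb's naming conventions and should be verified against the installed version):

```python
import numpy as np
import pyneb as pn

diags = pn.Diagnostics()
diags.addDiag(['[OIII] 4363/5007', '[OII] 3726/3729'])
# Simultaneously solve for Te(O++) and ne (PyNeb labels lines by air wavelength)
Te_O3, ne = diags.getCrossTemDen('[OIII] 4363/5007', '[OII] 3726/3729',
                                 5.5e-3, 0.9)
Te_O2 = 0.7 * Te_O3 + 3000.0  # Campbell et al. (1986), equation (2)

O3, O2 = pn.Atom('O', 3), pn.Atom('O', 2)
# Ionic abundances from line intensities on a scale where Hbeta = 100
abund_O2plus = O3.getIonAbundance(int_ratio=150.0, tem=Te_O3, den=ne,
                                  wave=4959, Hbeta=100.0)
abund_Oplus = O2.getIonAbundance(int_ratio=180.0, tem=Te_O2, den=ne,
                                 to_eval='L(3726)+L(3729)', Hbeta=100.0)
print(12 + np.log10(abund_Oplus + abund_O2plus))  # direct 12+log(O/H)
```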
We additionally calculated the nitrogen abundance in our galaxy sample since we have coverage of the [N ii]\(\lambda 6585\) line in all targets. The nitrogen abundance is ideally determined based on emission lines arising from the N\({}^{+}\) and N\({}^{2+}\) ions. However, we do not have coverage and/or detection of the necessary [N iii] emission lines, so we make the following approximation:
\[\frac{\mathrm{N}}{\mathrm{H}}\approx\frac{\mathrm{N}^{+}}{\mathrm{H}^{+}}\times \mathrm{ICF(N)}\]
where \(\mathrm{ICF(N)}\) is the ionization correction factor accounting for higher ionization states, and it is defined as \(\mathrm{ICF(N)}=\mathrm{N}/\mathrm{N}^{+}\). We approximate this ratio as \(\mathrm{N}/\mathrm{N}^{+}\approx\mathrm{O}/\mathrm{O}^{+}\) since oxygen and nitrogen have similar ionization energies (Peimbert, 1967).
One can directly measure the temperature within the N\({}^{+}\) zone by measuring the auroral-to-nebular line ratio [N ii]\(\lambda 5756/[\mathrm{N} ii]\lambda 6585\), analogous to the [O iii]\(\lambda 4364/[\mathrm{O} iii]\lambda 5008\) ratio for the \(\mathrm{O}^{2+}\) zone. However, we do not have coverage of the [N ii]\(\lambda 5756\) line in our spectra, so we make the approximation that \(T_{e}(\mathrm{N}^{+})\approx T_{e}(\mathrm{O}^{+})\). We also use the same electron density as that determined during the calculation of the oxygen abundance. We then used getIonAbundance() employing the [N ii]\(\lambda 6585\)/H\(\beta\) ratio as the input to calculate the N\({}^{+}\)/H\({}^{+}\) abundance. The derived constraints on density, temperature, and ionic and total abundances are reported in Table 2 for the individual targets and Table 3 for the composites.
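Continuing the PyNeb sketch for nitrogen, again with illustrative inputs (the S3 abundances from Table 3 are used only to exemplify the ICF):

```python
import numpy as np
import pyneb as pn

N2 = pn.Atom('N', 2)
Te_O2, ne = 1.3e4, 300.0  # Te(N+) ~ Te(O+) and ne from the oxygen calculation
abund_Nplus = N2.getIonAbundance(int_ratio=12.0, tem=Te_O2, den=ne,
                                 wave=6584, Hbeta=100.0)
abund_O = 10 ** (7.96 - 12)      # e.g. 12+log(O/H) = 7.96 (stack S3)
abund_Oplus = 10 ** (7.53 - 12)  # e.g. 12+log(O+/H) = 7.53 (stack S3)
icf_N = abund_O / abund_Oplus    # ICF(N) = O/O+ (Peimbert 1967 approximation)
print(12 + np.log10(abund_Nplus * icf_N))  # 12+log(N/H)
```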
### Auroral-line Detections
The [O iii]\(\lambda 4364\) line was detected at greater than 2\(\sigma\) significance in only two targets: COSMOS-18812 and GOODS-N-8240. For COSMOS-18812, the [O ii]\(\lambda\lambda 3727,3730\) line doublet fell onto a pair of sky lines, so it was not possible to constrain the electron density or the \(\mathrm{O}^{+}\) abundance. We tentatively detect [O iii]\(\lambda 4364\) emission in GOODS-N-8240 since the Gaussian fit to the line profile places the significance of this detection at the \(>\)2\(\sigma\) level. Upon visual inspection, we report the presence of emission lying a few angstroms blueward of the [O iii]\(\lambda 4364\) line in two of the targets: GOODS-N-6699 and GOODS-N-14595. For GOODS-N-6699, there is an emission feature detected at \(>\)3\(\sigma\) significance when centering a Gaussian profile at 4360 Å. It is uncertain whether this emission is associated with the [O iii]\(\lambda 4364\) line though it appears at a similar wavelength. The emission-line feature adjacent to [O iii]\(\lambda 4364\) in GOODS-N-14595 appears to be more closely centered on the expected central wavelength for the auroral oxygen line, with a significance of just 0.38\(\sigma\) when centering on 4360 Å. The spectra of these objects in this wavelength region can be seen in Figure 3.
Curti et al. (2017) find a similar feature in a sample of spectral stacks and attribute it to a forbidden transition of singly-ionized iron. They also suggest that the strength of this 4360 Å contamination increases with increasing galaxy metallicity. However, we do not expect this contaminating feature to have a significant impact on our spectra given that strong-line metallicity indicators suggest that our targets are at half the solar metallicity or less. For completeness, we report [O iii]\(\lambda\)4364 measurements of the stacked spectra in two ways: fitting the emission around 4364 Å as a single Gaussian and fitting the feature at 4360 Å separately from [O iii]\(\lambda\)4364. As a shorthand, we refer to the feature at 4360 Å as [Fe ii]\(\lambda\)4360, and the simultaneous [O iii] and [Fe ii] line fits are denoted by an asterisk (*) in both Tables 2 and 3.
In Table 2, we see that in most cases, choosing a single vs. a double fit does little to change the [O iii]\(\lambda\)4364 flux. The exceptions to this are GOODS-N-8240 and GOODS-N-14595 where, in the former, the single fit yields a higher S/N ratio and, in the latter, the double fit yields a better S/N ratio. Since it is unclear to what extent the emission in GOODS-N-14595 can be attributed to [O iii]\(\lambda\)4364, we report the determination of physical quantities from the single-Gaussian fit to [O iii]\(\lambda\)4364.
We perform this same exercise on the stacked spectra and report the results in Table 3, with the simultaneous double Gaussian fits marked by asterisks. Since the emission at 4360 Å is only seen in GOODS-N targets, we check to see if the fitting technique has an effect on the measured line luminosities in stacks 2 and 3, finding that there is no significant effect on either stack. We do see an effect on stack 1 in that the single fit has a higher S/N ratio. Thus, we use the single Gaussian fit to [O iii]\(\lambda\)4364 in the stacks to determine physical conditions.
For the individual [O iii] auroral targets, only GOODS-N-8240 has a well-constrained oxygen abundance of \(12+\log(\mathrm{O/H})=8.02^{+0.24}_{-0.17}\), corresponding to \(\sim\)21% of the solar oxygen abundance.
Figure 3: 2D and 1D spectra of the eight [O iii] auroral targets in this study. These spectral plots span from 4335 Å to 4370 Å, covering the H\(\gamma\) and [O iii]\(\lambda\)4364 emission lines. The solid black line shows the flux density, and the shaded gray spectrum is the error on the flux density. We detect [O iii]\(\lambda\)4364 (labeled with an orange dotted line) in COSMOS-18812 and GOODS-N-8240; however, we report emission blueward of 4364 Å in the GOODS-N 6699 and 14595 spectra.
For the composite spectra, we find that the \(z\sim 2.5\) COSMOS stack has an oxygen abundance of \(\sim 11\%\) of the solar value, while the \(z\sim 1.7\) GOODS-N stack has an oxygen abundance greater than \(\sim 21\%\) of the solar value. For nitrogen, the abundances relative to the solar value are \(\sim 4\%\) and \(\gtrsim 13\%\) for stacks 1 (\(z\sim 2.5\)) and 2 (\(z\sim 1.7\)), respectively.
## 4 Discussion
We now turn to an analysis of the emerging trends in relations between strong-line ratios and direct oxygen abundance at high redshift enabled by our deep MOSFIRE observations. In addition, the new analysis of the [O i]\(\lambda 6302\)/H\(\alpha\) ratio in 21 galaxies beyond the local universe, while not representing a complete statistical sample, hints at the properties of the ionized gas and the stellar populations in high-redshift galaxies.
### Indirect Metallicity Indicators
Analyzing the accuracy of indirect metallicity indicators out to high redshifts is important for our understanding of the chemical evolution of galaxies across cosmic time. With our oxygen abundance measurements of the stacked spectra of auroral targets, we have constraints for the average metallicities of the galaxies considered in this study. We compare these stacks alongside measurements from the literature to strong-line metallicity indicators in order to understand how the accuracy of these indicators may shift with cosmic time.
In Figure 4, we show six strong-line ratios vs. oxygen abundance for our three stacks, and we plot the metallicity relations from Curti et al. (2020) determined from stacks of galaxy spectra in the local universe. The short-hand labels for the strong-line ratios are defined as follows: R3 = [O iii]\(\lambda 5008\)/H\(\beta\), R2 = [O ii]\(\lambda\lambda 3727,3730\)/H\(\beta\), R23 = ([O iii]\(\lambda\lambda 5008,4960\) + [O ii]\(\lambda\lambda 3727,3730\))/H\(\beta\), O32 = [O iii]\(\lambda 5008\)/[O ii]\(\lambda\lambda 3727,3730\), N2 = [N ii]\(\lambda 6585\)/H\(\alpha\), O3N2 = ([O iii]\(\lambda 5008\)/H\(\beta\))/([N ii]\(\lambda 6585\)/H\(\alpha\)). We note that below an oxygen abundance of \(12+\log\) (O/H) \(\approx 8.1\), there are fewer individual \(z\sim 0\) SDSS galaxies with \(>\)10\(\sigma\) [O iii]\(\lambda 4364\) detections, and the sample is biased toward higher specific SFR, representing a population more similar to our high-redshift sample than \(z\sim 0\) galaxies (refer to the discussion in the appendix of Sanders et al. (2021) for a detailed analysis of the low-metallicity Curti et al. (2017, 2020) calibration sample). Additionally, we show the relationships between strong-line ratios and metallicity determined by Bian et al. (2018) for local galaxies selected to have emission-line properties analogous to those of high-redshift galaxies. Alongside these two line-ratio relations, we show the strong-line ratio vs. oxygen abundance for a large sample of H ii regions from the literature (compiled by Sanders et al. (2017) with data from Pilyugin and Grebel (2016), Croxall et al. (2015), and Toribio San Cipriano et al. (2016)) with a running median displayed as a solid black curve, and 1\(\sigma\) intervals shown as dotted lines to visualize the spread of the distribution.
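For concreteness, a small helper implementing these ratio definitions might look as follows (the fluxes passed in are illustrative placeholders):

```python
import numpy as np

def strong_line_ratios(O3_5008, O3_4960, O2_3727_30, N2_6585, Hbeta, Halpha):
    """Logarithmic strong-line ratios as defined in the text (consistent flux units)."""
    return {
        'R3':   np.log10(O3_5008 / Hbeta),
        'R2':   np.log10(O2_3727_30 / Hbeta),
        'R23':  np.log10((O3_5008 + O3_4960 + O2_3727_30) / Hbeta),
        'O32':  np.log10(O3_5008 / O2_3727_30),
        'N2':   np.log10(N2_6585 / Halpha),
        'O3N2': np.log10((O3_5008 / Hbeta) / (N2_6585 / Halpha)),
    }

# Made-up H-beta-normalized fluxes, for illustration only:
print(strong_line_ratios(4.5, 1.5, 2.0, 0.25, 1.0, 2.86))
```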
Upon visual inspection, the oxygen abundances of the stacks lie within the distribution of galaxies from the literature (compiled by Sanders et al. (2020) with two additional galaxies from Sanders et al. (2023b)) shown as blue points. In the cases of \(\log\)(R23) and \(\log\)(N2), the Bian et al. (2018) curve serves as a better metallicity indicator for high-redshift galaxies compared to the Curti et al. (2020) curves. In the cases of the \(\log\)(R3), \(\log\)(R2), \(\log\)(O32), and \(\log\)(O3N2) curves, the spread of galaxies is large compared to the differences between the Curti et al. (2020) and Bian et al. (2018) curves, so it is difficult to determine if there is a preference for one over the other. In general, the stacks agree most consistently with the distribution of H ii regions, though the galaxies from the literature are offset from the H ii regions to higher \(\log\)(R3) and \(\log\)(R23) at fixed oxygen abundance.
With a small existing sample size, it is difficult to draw definitive conclusions about the accuracy of these strong-line ratios, especially considering that the sample of galaxies is biased towards bright, high electron temperature targets. For two of the individual auroral-line targets where there were existing MOSDEF spectra (COSMOS-19439 and COSMOS-19753), we predicted the [O iii]\(\lambda 4364\) flux based on the [O iii]\(\lambda 5008\) flux as well as \(T_{e}\) predictions. For COSMOS-19439 and COSMOS-19753, we predicted auroral [O iii] line fluxes in the ranges of \(0.8-1.9\times 10^{-18}\) and \(2.1-5.0\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) respectively. Since the 2\(\sigma\) upper limits are above our lowest line flux predictions, this suggests that the observations did not reach the required depth in the 10 combined hours of integration. This comparison demonstrates the limitations of 10-m-class ground-based observatories in this area of study and highlights the importance of JWST in building representative galaxy samples moving forward.
### Insights from [O i] emission
We present the properties of our sample of galaxies in the [O iii]\(\lambda 5008\)/H\(\beta\) vs. [N ii]\(\lambda 6585\)/H\(\alpha\), [S ii]\(\lambda 6716,6731\)/H\(\alpha\), and [O i]\(\lambda 6302\)/H\(\alpha\) BPT diagrams shown in Figure 5. In all diagrams, local Sloan Digital Sky Survey (SDSS; York et al., 2000; Abazajian et al., 2009) galaxies are shown as grayscale, two-dimensional histograms. A sample of local H ii regions from the literature (see Sanders et al., 2017) is shown
as a set of magenta points with an accompanying running median and a 1\(\sigma\) shaded region. In the [N ii] BPT diagram, we plot the [O iii] auroral targets as well as the [O i] composite spectra. We see that the [O iii] auroral sample (represented by squares) consists of high-excitation galaxies (log([O iii]\(\lambda\)5008/H\(\beta\)) \(\gtrsim\) 0.5) and skews toward higher [O iii]/H\(\beta\) at fixed [N ii]/H\(\alpha\) compared to the \(z\sim 1.5\) and \(z\sim 2.3\) samples from Shapley et al. (2019). When compared to H ii regions in the literature, the auroral targets are \(\sim\)0.1 dex higher in log([N ii]\(\lambda\)6585/H\(\alpha\)) than the median locus of H ii regions at fixed log([O iii]/H\(\beta\)). The characteristics are similar for the \(z\sim 2.5\) [O iii] auroral galaxy in the [S ii]/H\(\alpha\) diagram, where it is offset from the H ii regions and Shapley et al. (2019) samples at higher [S ii]/H\(\alpha\) and [O iii]/H\(\beta\).
We additionally present a novel analysis of galaxies on the [O i] BPT diagram at \(z>1\). Including the filler targets, a total of eight galaxies yielded significant (\(>\)2\(\sigma\)) [O i]\(\lambda\)6302 detections, with two of the auroral [O iii] targets (COSMOS-19753 and GOODS-N-6699) yielding detections. In order to understand the general characteristics of the galaxies in regard to the [O i] BPT diagram, we constructed two composite spectra separated by redshift, choosing to include galaxies with coverage of the [O i] lines with the exception of galaxies whose [O i] feature falls on sky lines. These criteria result in two stacks with 13 galaxies in the \(1\leq z<2\) stack and 8 galaxies in the \(2\leq z\leq 3\) stack. The line ratios associated with these stacks are plotted as "plus" (+) symbols in Figure 5.
We see that these stacks follow a similar trend of relatively high [O iii]/H\(\beta\) relative to the SDSS sample and high [O i]/H\(\alpha\) relative to the locus of H ii regions. There are several factors that can influence the [O i]/H\(\alpha\) ratio in a galaxy, including contributions of DIG, the presence of shocks, and the hardness of the ionizing spectrum, the latter of which appears to be relevant in \(z>1\) galaxies (Zhang et al., 2017; Shapley et al., 2019).
Figure 4: Comparison of the direct oxygen abundance measurements of our stacks with strong-line indicators from Curti et al. (2020) in orange and Bian et al. (2018) in purple. Strong-line indicators are plotted in solid lines over their quoted metallicity ranges, and extrapolations are shown by dashed lines. We also display \(z>1\) galaxies from the literature with direct oxygen abundance measurements as blue points (see Sanders et al., 2020, 2023). Finally, we show the median relation of local H ii regions (see Sanders et al., 2017) as a solid black curve, with the 1\(\sigma\) spread illustrated as black dotted curves.
Figure 5: [N ii], [S ii], and [O i] BPT diagrams showing where the auroral and [O i] galaxy samples from this work lie in relation to SDSS galaxies in the local universe (grayscale, 2D histogram). We also compare the galaxies and stacks from this work to local H ii regions from the literature (compiled by Sanders et al. (2017)) and display a running median. The [O iii] auroral targets are shown by squares, while the non-[O iii]-auroral targets are shown by diamonds. The \(1\leq z<2\) and \(2\leq z\leq 3\) targets are displayed in blue and orange respectively. For comparison, stacks of \(z\sim 1.5\) and \(z\sim 2.3\) galaxies from Shapley et al. (2019) are shown as purple and red triangles respectively. Upper limits on [S ii] and [O i] detections are shown in black. In the bottom three panels, the same BPT diagrams are shown with CLOUDY photoionization models overlaid. The curves are color-coded by stellar metallicity indicated in the colorbar.
In the bottom three panels of Figure 5, we compare the line ratios from these [O i] stacks to CLOUDY (Ferland et al., 2017) photoionization models following the prescription laid out in Jeong et al. (2020). The models are based on stellar spectra drawn from BPASS (Eldridge et al., 2017; Stanway and Eldridge, 2018) where each model curve represents a \(10^{8.5}\) year-old stellar population with a constant star-formation history. Along each curve of fixed stellar metallicity (\(Z_{star}\)), we vary the ionization parameter and the nebular metallicity according to the Topping et al. (2020) relation: \(\log(\mathrm{U})=-1.06\times[12+\log(\mathrm{O/H})]+5.78\). In both the [N ii] and the [O i] BPT diagrams, we find that both the \(2\leq z\leq 3\) and \(1\leq z<2\) [O i] stacks agree well with the very sub-solar metallicity (\(1.0\times 10^{-5}\lesssim Z_{star}\lesssim 2\times 10^{-3}\)) stellar population curves. Since not all of the galaxies in the [O i] stacks had wavelength coverage of the [S ii]\(\lambda 6716,6731\) doublet, we do not include them on the [S ii] BPT diagram. Taken together with typical nebular oxygen abundances inferred from the MOSDEF survey (Sanders et al., 2021; Topping et al., 2021), the comparison of these observations with photoionization models supports the picture of harder ionizing spectra from low-metallicity, Fe-poor massive stars driving the line ratios of galaxies at higher redshifts (e.g., Steidel et al., 2016; Strom et al., 2017; Shapley et al., 2019; Sanders et al., 2020; Topping et al., 2020, 2021; Runco et al., 2021; Cullen et al., 2021).
Though the harder ionizing spectrum is fully capable of explaining the enhancement in [O i]/H\(\alpha\) in these galaxies, it is also possible that shocks and varying contributions of DIG affect the BPT line ratios. With upcoming spectroscopic observations from JWST, an analysis of these effects may become more robust due to a larger sample of galaxies with a wider range of properties, for which we will also have detections of [O i].
### Nitrogen abundances
We additionally comment on the nitrogen abundance patterns displayed by our [O iii] auroral galaxy sample. In the context of the star-forming galaxy population at \(z\sim 2\), the nitrogen abundances from the stacks are consistent with empirical predictions based on their average stellar masses. For example, Strom et al. (2022) determined the nitrogen abundances for a sample of \(195\ z\sim 2\) star-forming galaxies. Their linear fit to this sample predicts a nitrogen abundance of \(12+\log(\mathrm{N/H})=6.93\) at a stellar mass of \(\log(\mathrm{M/M_{\odot}})=9.5\) with an intrinsic scatter of 0.33 dex in abundance. Within their respective limits and uncertainties, all three of the stacks as well as GOODS-N-8240 and COSMOS-18812 have nitrogen abundances consistent with this prediction.
As well as analyzing the nitrogen abundance, we discuss the nitrogen to oxygen (N/O) ratio. The N/O ratios of galaxies and H ii regions are often used as a probe of the nucleosynthetic origin of nitrogen where, at low metallicity (\(12+\log(\mathrm{O/H})\lesssim 8\)), \(\log(\mathrm{N/O})\) is fixed at \(\sim-1.5\). This is referred to as the "primary" nitrogen regime since, at low metallicity, the nitrogen yield is tied to those of the \(\alpha\) elements (Pilyugin et al., 2010; Perez-Montero and Contini, 2009; Izotov et al., 2006). At higher oxygen abundances, the nitrogen yield increases in proportion to the CNO abundances, and the N/O ratio increases, comprising the "secondary" nitrogen regime. Since the [O iii] auroral targets have significantly sub-solar oxygen abundances on average, they should fall within the primary nitrogen regime.
For stacks 1 and 3, where we have constraints on the N/O ratio, we find that their abundance pattern is consistent with those found in local H ii regions (e.g., Pilyugin et al., 2010). However, for GOODS-N-8240, we find that its N/O ratio of \(\log(\mathrm{N/O})=-0.99^{+0.22}_{-0.23}\) is slightly enhanced given its oxygen abundance of \(12+\log(\mathrm{O/H})=8.02^{+0.24}_{-0.17}\). Several hypotheses have been put forward to explain the abundance pattern of objects with enhanced N/O, one of which appeals to strong winds from Wolf-Rayet stars enriching the ISM (e.g., Pagel et al., 1986; Brinchmann et al., 2008; Masters et al., 2014). A more detailed analysis is required to determine the exact source of nitrogen enhancement in this target.
Another point of interest in studying the N/O ratio is to investigate its effects on trends in the [N ii] BPT diagram across cosmic time. Specifically, if there are significant differences between the N/O vs. O/H ratio at high redshifts compared to low-redshift observations, then an evolving N/O abundance pattern may play an important role in interpreting diagnostic line ratios involving nitrogen and oxygen (Curti et al., 2022; Hayden-Pawson et al., 2022). Because the stacks are consistent with the local N/O vs. O/H relation, there does not appear to be strong evidence for an evolution in the N/O ratio in this sample, though GOODS-N-8240 does represent an outlier in this regard.
## 5 Conclusions
We present an ultra deep rest-optical spectroscopic analysis of several \(z>1\), high-excitation galaxies with up to 10 hours of integration time in some bands, and we analyze their excitation properties as well as their oxygen abundances. We selected eight galaxies with strong nebular [O iii] emission and high predicted electron temperatures to maximize the chance of detecting the [O iii]\(\lambda\)4364 emission line. We detected [O iii]\(\lambda\)4364 in two targets, and we chose to stack the eight [O iii]
auroral-selected galaxies to observe their general characteristics. Additionally, eight of the galaxies that were not targeted for auroral oxygen emission lines yielded [O i] detections, enabling the first analysis of high-redshift galaxies in this parameter space. Here are the key conclusions from this work:
1. When comparing the oxygen abundances of the auroral-target stacks and galaxies in the literature on the strong-line indicator diagrams, we find that the stacks from this analysis are qualitatively consistent with the distribution found in the literature in both oxygen abundance and strong-line ratio. In general, it is difficult to say with a small sample size whether the Curti et al. (2020) or Bian et al. (2018) curves better describe galaxies at \(z>1\), though in the cases of log(R23) and log(N2) the Bian et al. (2018) curve appears to be preferred. While the current sample size is limited, these results indicate that stacking analyses are promising.
2. When stacking together the galaxies with [O i] coverage (both auroral and non-auroral targets), we find that galaxies typically lie at higher [O i]/H\(\alpha\) at fixed [O iii]/H\(\beta\) relative to the median locus of local H ii regions. This offset is consistent with photoionization models with low-metallicity (\(1.0\times 10^{-5}\lesssim Z_{star}\lesssim 2\times 10^{-3}\)) stellar populations, supporting the picture that the line ratios in \(z>1\) galaxies are driven by harder ionizing spectra at fixed nebular oxygen abundance.
3. The N/O abundances of the [O iii] auroral stacks suggest that the nitrogen enrichment in our galaxy sample at \(z\sim 2\) is of primary origin and is consistent with the N/O vs. O/H primary abundance pattern seen in local H ii regions. Though the N/O abundance of GOODS-N-8240 is enhanced given its oxygen abundance, we do not find evidence that the line ratios in our galaxy sample are driven by an evolving N/O ratio with cosmic time.
The results of this analysis demonstrate the limits of 10-m-class ground-based facilities in the realm of nebular metallicity studies of galaxies at cosmic noon. Given that 10 hours of total integration time was still not enough to reach the required depth to consistently detect the [O iii]\(\lambda 4364\) line in all of the targets, we emphasize the importance of more sensitive facilities such as JWST and future 30-m-class observatories to make advances in this area of study. To date, JWST has already yielded a high number of auroral-line detections out to \(z\sim 8\) (e.g., Curti et al., 2023; Williams et al., 2022; Nakajima et al., 2023; Tang et al., 2023; Sanders et al., 2023). It is already playing an instrumental role in building up the sample of auroral-line measurements at cosmic noon and enabling improvements in strong-line metallicity calibrations at high redshifts.
We acknowledge support from NSF AAG grants 2009313 and 2009085. Support for this work was also provided through the NASA Hubble Fellowship grant #HST-HF2-51469.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. We finally wish to extend special thanks to those of Hawaiian ancestry on whose sacred mountain we are privileged to be guests. Without their generous hospitality, the work presented herein would not have been possible.
Facilities: Keck (MOSFIRE)
Software: Astropy (Astropy Collaboration et al., 2018), IDL (Landsman, 1993)
|
2307.01589 | Anomalies in String-inspired Non-local Extensions of QED | We investigate anomalies in the class of non-local field theories that have
been proposed as an ultraviolet completion of 4-D Quantum Field Theory (QFT)
with generalizing the kinetic energy operators to an infinite series of higher
derivatives inspired by string field theory and ghost-free non-local approaches
to quantum gravity. We explicitly calculate the vector and chiral anomalies in
a string-inspired non-local extension of QED. We show that the vector anomaly
vanishes as required by gauge-invariance and the Ward identity. On the other
hand, although the chiral anomaly vanishes to the leading order with massless
fermions, it nonetheless does not vanish with the massive fermions and we
calculate it to the leading order in scale of non-locality. We also calculate
the non-local vector and axial currents explicitly, and present an illustrative
example by applying our results to the decay of \pi_0 \rightarrow \gamma\gamma. | Fayez Abu-Ajamieh, Pratik Chattopadhyay, Anish Ghoshal, Nobuchika Okada | 2023-07-04T09:28:03Z | http://arxiv.org/abs/2307.01589v1 | # Anomalies in String-inspired Non-local Extensions of QED
###### Abstract
We investigate anomalies in the class of non-local field theories that have been proposed as an ultraviolet completion of 4-D Quantum Field Theory (QFT) by generalizing the kinetic energy operators to an infinite series of higher derivatives inspired by string field theory and ghost-free non-local approaches to quantum gravity. We explicitly calculate the vector and chiral anomalies in a string-inspired non-local extension of QED. We show that the vector anomaly vanishes as required by gauge-invariance and the Ward identity. On the other hand, although the chiral anomaly vanishes to the leading order with massless fermions, it nonetheless does not vanish with the massive fermions and we calculate it to the leading order in the scale of non-locality. We also calculate the non-local vector and axial currents explicitly, and present an illustrative example by applying our results to the decay of \(\pi^{0}\to\gamma\gamma\).
## I Introduction
It is a well-known fact that strings, being non-local objects by their nature, are free from ultraviolet (UV) divergences [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. This fact inspired many physicists to try to mimic this good UV behavior by formulating non-local QFTs as extensions of local QFTs, where non-locality is introduced to eliminate any UV divergences that could exist in the local case. The general prescription for transforming local QFTs to non-local ones is to introduce non-locality to the kinetic term via an entire function with infinite derivatives. For instance, in the scalar sector one writes
\[S_{NL}=\int d^{4}x\Big{[}\frac{1}{2}\phi\mathcal{K}(\Box)(\Box+m^{2})\phi-V( \phi)\Big{]}, \tag{1}\]
and the form factor \(\mathcal{K}\) has the function of smearing the interaction vertex, such that it becomes spatially finite in size, rather than being point-like, thereby making the interaction non-local. Apart from being an entire function of the \(\Box\) operator with infinite derivatives, so that no new poles are introduced to the theory, there are no conditions on the form of \(\mathcal{K}(\Box)\); any function with the required properties is acceptable. However, in order for the UV behavior of loop amplitudes to be finite and avoid divergences, a common choice is to use a simple exponential function
\[\mathcal{K}(\Box)\equiv\exp\Big{(}\frac{\Box+m^{2}}{\Lambda^{2}}\Big{)}, \tag{2}\]
where \(m\) is the mass of the particle, and \(\Lambda\) is the scale of non-locality. With this choice of form factors, it is easy to see that at high energies, loop amplitudes behave like \(\sim e^{-\frac{s}{\Lambda^{2}}}\), which is suppressed when \(s>\Lambda^{2}\) and is thus free from UV divergences. However, the construction in eqs. (1) and (2) is an ansatz not derived from first principles, and should be treated as an Effective Field Theory (EFT) of yet another UV completion above the scale of non-locality. Nonetheless, non-locality introduced this way can still be used to calculate observables.
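As a rough numerical illustration of this suppression (not a statement about any particular process), the snippet below evaluates the schematic factor \(e^{-s/\Lambda^{2}}\) for a non-locality scale near the collider bound quoted later in the text; the energies are arbitrary.

```python
import numpy as np

Lam = 3000.0  # GeV: a non-locality scale near the quoted collider bound (assumption)
for sqrt_s in [100.0, 1000.0, 3000.0, 10000.0]:  # GeV
    s = sqrt_s**2
    print(f'sqrt(s) = {sqrt_s:7.0f} GeV -> exp(-s/Lambda^2) = {np.exp(-s / Lam**2):.3e}')
```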
The first serious step towards constructing a realistic non-local QFT was taken in [14], where the non-local version of the Abelian gauge theory was formulated and the corresponding LHC phenomenology was studied. The formulation of non-local QED makes it possible to investigate the effects of the putative non-locality in this sector, such as the possible enhancement/suppression of scattering processes in colliders, the possible effect on Electroweak Precision Observables (EWPO), and its impact on gauge anomalies.
Local gauge anomalies were first explained in [15; 16; 17], and it is now understood that the anomaly associated with the vector current vanishes as a direct result of gauge invariance and the Ward identity, whereas the chiral anomaly associated with the axial current is non-vanishing since the axial current is global and cannot be gauged, implying that it cannot be conserved. The first (and to the best of our knowledge, only) study that attempted to investigate the \(U(1)\) gauge anomalies in non-local QED was [18], where the authors utilized a novel formalism dubbed the "Shadow Field Formalism" to show that introducing non-locality does not affect the conservation of the vector current, nor does it remove the chiral anomaly. In the present paper, we attempt to extend a similar treatment to the non-local QED version formulated in [14]. In particular, we will try to show that the vector anomaly vanishes and that the Ward identity is respected, and we derive the non-local chiral anomaly and the associated non-local Noether currents. We show through explicit calculation, using the non-local QED formulation in [14], that our results agree with the results obtained in [18].
This paper is organized as follows: In Section II, we review the non-local QED theory introduced in [14]. In Section III we explicitly calculate the vector and chiral anomalies in non-local QED and we derive the associated Noether current. We relegate some technical details to the Appendix, then we compare our results with [18] and show that they agree. In Section IV we apply our findings to the decay process of \(\pi^{0}\to\gamma\gamma\) and use the result to set an experimental bound on the scale of non-locality, and finally we present our conclusions in Section V.
## II Review of non-local QED
We begin by providing a quick overview of the non-local extension of QED that was derived in [14]. The basic idea behind obtaining the non-local version of QED is to start with the local version, then introduce the non-locality factor represented by the exponential of an entire function of derivatives, such that the action remains gauge-invariant. With this prescription in mind, the non-local version of QED can be written as
\[\mathcal{L}_{\rm NL}=-\frac{1}{4}F_{\mu\nu}e^{\frac{\Box}{\Lambda_{g}^{2}}}F^{\mu\nu}+\frac{1}{2}\Big{[}i\overline{\Psi}e^{-\frac{\nabla^{2}}{\Lambda_{f}^{2}}}(\not{\nabla}+m)\Psi+h.c.\Big{]}, \tag{3}\]
where \(\nabla_{\mu}=\partial_{\mu}+ieA_{\mu}\), which implies
\[\nabla^{2}=\Box+ie(\partial\cdot A+A\cdot\partial)-e^{2}A^{2}. \tag{4}\]
Here, we have allowed for the fact that the scales of non-locality for the fermions and the photon could in principle be different. Notice that while we are using the ordinary derivative in the photon's kinetic term, the covariant derivative has to be used in the fermion sector to keep it gauge-invariant. In calculating the non-local QED anomaly, one only needs the Feynman rules for the fermion propagator and the interaction vertices. The former is easily extracted to be
\[\Pi_{f}=\frac{ie^{\frac{p^{2}}{\Lambda_{f}^{2}}}(\not{p}+m)}{p^{2}-m^{2}+i \epsilon}. \tag{5}\]
It is easy to see that in the limit \(\Lambda_{f}\rightarrow\infty\) one recovers the standard fermion propagator. On the other hand, extracting the interaction vertex is more subtle, as special care is needed to include the contribution from the covariant derivative in the exponent. To proceed, we expand the covariant derivative in the non-local factor, then only keep the terms at linear order in \(A\). The final result is given by
\[V(k_{1},k_{2})=-\frac{ie}{2}\Bigg{[}(k_{1\mu}\not{k}_{2}+k_{2\mu}\not{k}_{1}) \Bigg{(}\frac{e^{\frac{k_{1}^{2}}{\Lambda_{f}^{2}}}-e^{\frac{k_{2}^{2}}{ \Lambda_{f}^{2}}}}{k_{1}^{2}-k_{2}^{2}}\Bigg{)}+\Big{(}e^{\frac{k_{1}^{2}}{ \Lambda_{f}^{2}}}+e^{\frac{k_{2}^{2}}{\Lambda_{f}^{2}}}\Big{)}\gamma_{\mu} \Bigg{]}, \tag{6}\]
where \(k_{1,2}\) are the momenta of the fermions. In the limit \(\Lambda_{f}\rightarrow\infty\) one recovers the local QED vertex. We refer the interested reader to [14] for the detailed derivation.
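A quick symbolic check of this limit (a sketch using SymPy) confirms that the momentum-dependent structure in eq. (6) decouples as \(\Lambda_{f}\to\infty\) and the vertex reduces to the local one, \(-ie\gamma_{\mu}\):

```python
import sympy as sp

k1sq, k2sq = sp.symbols('k1sq k2sq', real=True)
Lam = sp.Symbol('Lambda', positive=True)

# The two momentum-dependent structures appearing in the vertex of eq. (6):
f1 = (sp.exp(k1sq / Lam**2) - sp.exp(k2sq / Lam**2)) / (k1sq - k2sq)
f2 = sp.exp(k1sq / Lam**2) + sp.exp(k2sq / Lam**2)

print(sp.limit(f1, Lam, sp.oo))  # 0: the (k1, k2)-dependent structure decouples
print(sp.limit(f2, Lam, sp.oo))  # 2: V -> -(ie/2)(2 gamma_mu) = -ie gamma_mu
```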
## III Anomalies in non-local QED
In this section, we will explicitly calculate the \(U(1)\) vector and axial anomalies in the non-local extension of QED formulated in [14]. In our calculation, we follow the method presented in [19] based on calculating the triangle diagrams regularized via a Pauli-Villars regulator. However, unlike the case of local QED, no regulator is needed to calculate the loop diagrams in non-local QFTs, as they are already super-renormalizable due to the non-locality form factor. Similar to the case of local QFTs, anomalies in non-local QED arise from triangle diagrams with charged fermions running in the loops, with two vector and one axial currents attached to the vertices as shown in the top row of Figure 1. In non-local QED, there is an additional contribution from the bubble diagram shown in the bottom
row of Figure 1. One can see how this type of diagram comes into play by inspecting eqs. (3) and (4). We can see that when we expand the covariant derivative in the form factor, we obtain an infinite tower of non-renormalizable effective vertices \(\sim\overline{\Psi}\Psi A^{n}\), where the bubble diagram arises from the vertex with \(n=2\). These interaction vertices are a direct consequence of the requirement of gauge invariance, which necessitated using the covariant derivative instead of the ordinary one in the non-locality form factor. We present the detailed derivation of the Feynman rule associated with the \(\overline{\Psi}\Psi A^{2}\) vertex in Appendix A.
Before we proceed with calculating the anomalies, we point out that in general, calculating loop diagrams in non-local QFTs cannot be done exactly due to the complex nature of the form factor that contains loop momenta to be integrated over. However, the calculation simplifies significantly if we assume that the scale of non-locality is much larger than the external momenta, i.e. \(\Lambda\gg p,q\). Given the lower bound on \(\Lambda\sim 2.5-3\) TeV [14], the validity of this approximation is well-justified, as was demonstrated in detail in [20]. In this limit, the form factors in the propagators and the interaction vertices are simplified and reduced to \(e^{(k\pm p)^{2}/\Lambda^{2}}\simeq e^{(k\pm q)^{2}/\Lambda^{2}}\simeq e^{k^{2}/\Lambda^{2}}\), where \(k\) is the loop momentum to be integrated over.
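A simple numerical check, using one-dimensional stand-ins for the momentum invariants, illustrates that the neglected terms in this approximation are of order \((2k\cdot p+p^{2})/\Lambda^{2}\):

```python
import numpy as np

Lam, k, p = 2500.0, 150.0, 20.0  # GeV; scalar stand-ins with k, p << Lambda

exact = np.exp((k + p)**2 / Lam**2)
approx = np.exp(k**2 / Lam**2)
print('relative error      =', (exact - approx) / approx)
print('(2 k p + p^2)/Lam^2 =', (2 * k * p + p**2) / Lam**2)
```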
Figure 1: _Triangle (top) and bubble (bottom) diagrams contributing to the anomalies in non-local QED._
### Vector and Chiral Anomalies with Massless Fermions
We first investigate the case where the fermions in the loops are massless. We begin by calculating the bubble diagram. In the limit of small external momenta, the corresponding matrix element reads
\[\mathcal{M}^{\mu\nu\rho}_{\bigcirc}\simeq ie^{2}\int\frac{d^{4}k}{(2\pi)^{4}}e^{ \frac{4k^{2}}{\Lambda^{2}}}\text{Tr}\Big{[}\frac{\gamma^{\mu}\gamma^{5}( \not{k}+\not{p})V^{\nu\rho}(k+p,k-q,p,q)(\not{k}-\not{q})}{(k+p)^{2}(k-q)^{2}} \Big{]}, \tag{7}\]
where \(V^{\nu\rho}\) is the non-local \(\overline{\Psi}\Psi\gamma\gamma\) vertex derived in Appendix A. Using the explicit expression of \(V^{\nu\rho}(k+p,k-q,p,q)\), we find that
\[\mathcal{M}^{\mu\nu\rho}_{\bigcirc}\sim\text{Tr}\Bigg{[}\gamma^{\mu}\gamma^{5}( \not{k}+\not{p})\Big{[}(\not{k}-\not{q})(k+p)^{\nu}(k+p-q)^{\rho}-(\not{k}+\not {p})(k-q)^{\nu}k^{\rho}\Big{]}(\not{k}-\not{q})\Bigg{]}=0. \tag{8}\]
Therefore, the bubble diagram does not contribute to either the vector or the chiral anomalies. On the other hand, the triangle diagrams are given by
\[\mathcal{M}^{\mu\nu\rho}_{\bigtriangleup}\simeq-ie^{2}\int\frac{d^{4}k}{(2\pi)^{4}}e^{\frac{6k^{2}}{\Lambda^{2}}}\text{Tr}\Big{[}\frac{\gamma^{5}\gamma^{\mu}(\not{k}+\not{p})\gamma^{\nu}\not{k}\gamma^{\rho}(\not{k}-\not{q})}{(k+p)^{2}k^{2}(k-q)^{2}}\Big{]}+\begin{pmatrix}p\leftrightarrow q\\ \nu\leftrightarrow\rho\end{pmatrix}, \tag{9}\]
in the limit of small external momenta. Notice that this is identical to the local case multiplied by the non-locality factor. The factor of 6 arises from 3 non-local vertices and 3 non-local propagators.
We begin by calculating the vector anomaly. Our aim is to verify that the vector anomaly indeed vanishes in the non-local QED and that the Ward identity is preserved. Prima facie, this should be the case, since the non-local QED action is gauge invariant by construction. To this avail, it is convenient to calculate \(p_{\nu}\mathcal{M}^{\mu\nu\rho}_{\bigtriangleup}\). Using \(\not{p}=\not{p}+\not{k}-\not{k}\), the trace in eq. (9) simplifies to
\[\frac{1}{k^{2}(k-q)^{2}}\text{Tr}\Big{[}\gamma^{5}\gamma^{\mu}\not{k}\gamma^{ \rho}(\not{k}-\not{q})\Big{]}-\frac{1}{(k+p)^{2}(k-q)^{2}}\text{Tr}\Big{[} \gamma^{5}\gamma^{\mu}(\not{k}+\not{p})\gamma^{\rho}(\not{k}-\not{q})\Big{]}. \tag{10}\]
It is a simple exercise to evaluate the traces. The first trace yields \(-4ik_{\nu}q_{\sigma}\epsilon^{\mu\nu\rho\sigma}\), whereas the second trace evaluates to \(-4i(k_{\nu}p_{\sigma}+k_{\nu}q_{\sigma}+p_{\nu}q_{\sigma})\epsilon^{\mu\nu\rho\sigma}\). Thus, eq. (9) becomes
\[p_{\nu}\mathcal{M}^{\mu\nu\rho}_{\bigtriangleup}\simeq-4e^{2}\epsilon^{\mu\nu\rho\sigma}\int\frac{d^{4}k}{(2\pi)^{4}}e^{\frac{6k^{2}}{\Lambda^{2}}}\Bigg{[}\frac{k_{\nu}q_{\sigma}}{k^{2}(k-q)^{2}}+\frac{k_{\nu}(p+q)_{\sigma}+p_{\nu}q_{\sigma}}{(k-q)^{2}(k+p)^{2}}\Bigg{]}+\begin{pmatrix}p\leftrightarrow q\\ \nu\leftrightarrow\rho\end{pmatrix}. \tag{11}\]
It is sufficient to evaluate the first term. Focusing on the first part of the first term, we notice that the only external momentum it contains is \(q_{\sigma}\), which means that after integrating
over \(k\), Lorentz invariance implies that the result will be proportional to \(q_{\nu}q_{\sigma}\), which vanishes upon contraction with \(\epsilon^{\mu\nu\rho\sigma}\). This leaves us with the second integral to perform. Such loop integrals are fairly simple to evaluate and are UV-finite due to the non-locality form factor. Details on how to calculate these non-local momentum integrals are provided in [20]. Upon evaluating the momentum integral in eq. (11), we find
\[p_{\nu}{\cal M}_{\triangle}^{\mu\nu\rho}\sim(p_{\nu}p_{\sigma}-q_{\nu}q_{ \sigma}-p_{\nu}q_{\sigma}-q_{\nu}p_{\sigma})\epsilon^{\mu\nu\rho\sigma}, \tag{12}\]
and we can see that \(p_{\nu}p_{\sigma}\) and \(q_{\nu}q_{\sigma}\) vanish upon contraction with \(\epsilon^{\mu\nu\rho\sigma}\). This leaves \((p_{\nu}q_{\sigma}+q_{\nu}p_{\sigma})\epsilon^{\mu\nu\rho\sigma}\), and it's easy to see that after relabeling \(\nu\leftrightarrow\sigma\) in the second term and using the anti-symmetry of \(\epsilon^{\mu\nu\rho\sigma}\), the whole term vanishes. The same argument holds for \(q_{\rho}{\cal M}_{\triangle}^{\mu\nu\rho}\) since the amplitude is symmetric under \(p\leftrightarrow q\), \(\nu\leftrightarrow\rho\). Thus, we can see that the vector anomaly vanishes in non-local QED, as it should.
Turning our attention to the chiral anomaly, we need to calculate \((p+q)_{\mu}{\cal M}_{\triangle}^{\mu\nu\rho}\). Using
\[\gamma^{5}(\not{p}+\not{q})=\gamma^{5}(\not{p}+\not{k}-\not{k}+\not{q})=\gamma ^{5}(\not{k}+\not{p})+(\not{k}-\not{q})\gamma^{5}, \tag{13}\]
the trace in (9) simplifies to
\[\frac{1}{k^{2}(k-q)^{2}}{\rm Tr}\Big{[}\gamma^{5}\gamma^{\nu}\not{k}\gamma^{ \rho}(\not{k}-\not{q})\Big{]}+\frac{1}{k^{2}(k+p)^{2}}{\rm Tr}\Big{[}\gamma^{5 }(\not{k}+\not{p})\gamma^{\nu}\not{k}\gamma^{\rho}\Big{]}. \tag{14}\]
Notice that the first term is identical to the first term in (10) with \(\mu\to\nu\), and therefore it vanishes as we saw above. On the other hand, the second trace yields \(-4i\epsilon^{\mu\nu\rho\sigma}k_{\mu}p_{\sigma}\). Therefore, the chiral anomaly reads
\[-(p+q)_{\mu}{\cal M}_{\triangle}^{\mu\nu\rho}\simeq 4e^{2}\epsilon^{\mu\nu \rho\sigma}\int\frac{d^{4}k}{(2\pi)^{4}}e^{\frac{6k^{2}}{A^{2}}}\Bigg{[}\frac {k_{\mu}p_{\sigma}}{k^{2}(k+p)^{2}}\Bigg{]}+\begin{pmatrix}p\leftrightarrow q \\ \nu\leftrightarrow\rho\end{pmatrix}, \tag{15}\]
and we see that the first term contains \(p\) only, which means that after integrating over \(k\), the result will be \(\sim p_{\mu}p_{\sigma}\), which vanishes upon contraction with \(\epsilon^{\mu\nu\rho\sigma}\), i.e. the chiral anomaly seems to vanish in non-local QED! This result is counter-intuitive, as the chiral anomaly in local QED is non-vanishing, and one would expect the same to carry on to the non-local case. The reason behind this apparent contradiction lies in our approximations. We limited our calculation to the leading order in the expansion of \(p,q/\Lambda\), and assumed massless fermions. However, this situation does not hold once we include the NLO expansion in external momenta and/or we use massive fermions, and the chiral anomaly no longer vanishes. In III.2 below, we shall redo our calculation with massive fermions and show that the chiral anomaly indeed persists. We will limit our calculation to the LO in \(p,q/\Lambda\) for simplicity.
### Vector and Chiral Anomalies with Massive Fermions
Here we show the effect of including fermion masses on both the vector and chiral anomalies. First, let us focus on the bubble diagram. Including the fermion masses, eq. (7) becomes
\[{\cal M}^{\mu\nu\rho}_{\bigcirc}\simeq ie^{2}\int\frac{d^{4}k}{(2\pi)^{4}}e^{ \frac{4k^{2}}{\Lambda^{2}}}{\rm Tr}\Big{[}\frac{\gamma^{\mu}\gamma^{5}(k\!\! \!/+p\!\!\!/+m)V^{\nu\rho}(k+p,k-q,p,q)(k\!\!\!/-q\!\!\!/+m)}{[(k+p)^{2}-m^{2}][( k-q)^{2}-m^{2}]}\Big{]}, \tag{16}\]
with \(V^{\nu\rho}(k+p,k-q,p,q)\) unchanged compared to the massless case. Here too, we find that the trace vanishes, and hence the bubble diagram does not contribute. On the other hand, the contribution of the triangle diagrams in eq. (9) becomes
\[{\cal M}^{\mu\nu\rho}_{\bigtriangleup}\simeq-ie^{2}\int\frac{d^{4}k}{(2\pi)^{4}}e^{\frac{6k^{2}}{\Lambda^{2}}}{\rm Tr}\Bigg{[}\frac{\gamma^{5}\gamma^{\mu}(k\!\!\!/+p\!\!\!/+m)\gamma^{\nu}(k\!\!\!/+m)\gamma^{\rho}(k\!\!\!/-q\!\!\!/+m)}{[(k+p)^{2}-m^{2}][k^{2}-m^{2}][(k-q)^{2}-m^{2}]}\Bigg{]}+\begin{pmatrix}p\leftrightarrow q\\ \nu\leftrightarrow\rho\end{pmatrix}. \tag{17}\]
First, we investigate the vector anomaly by calculating \(p_{\nu}{\cal M}^{\mu\nu\rho}_{\bigtriangleup}\). Simplifying the expression by writing \(p\!\!\!/=(p\!\!\!/+k\!\!\!/-m)-(k\!\!\!/-m)\) and then evaluating the traces explicitly, it's not hard to see that the result is identical to eq. (11) with the denominators being those of massive fermions. Therefore, the result in eq. (12) continues to hold, and the vanishing of the vector anomaly remains unaffected, as is expected.
Turning our attention to the chiral anomaly by considering \(-(p+q)_{\mu}{\cal M}^{\mu\nu\rho}_{\bigtriangleup}\) in the massive case, we first simplify the matrix element by using
\[\gamma^{5}(p\!\!\!/+q\!\!\!/) = \gamma^{5}(k\!\!\!/+p\!\!\!/-m)+\gamma^{5}(q\!\!\!/-k\!\!\!/-m)+2 m\gamma^{5} \tag{18}\] \[= \gamma^{5}(k\!\!\!/+p\!\!\!/-m)+(k\!\!\!/-q\!\!\!/-m)\gamma^{5}+ 2m\gamma^{5},\]
which simplifies the trace in eq. (17) to
\[= {\rm Tr}\Big{[}\frac{\gamma^{5}\gamma^{\nu}(k\!\!\!/+m)\gamma^{ \rho}(k\!\!\!/-q\!\!\!/+m)}{[k^{2}-m^{2}][(k-q)^{2}-m^{2}]}\Big{]}+{\rm Tr} \Big{[}\frac{\gamma^{5}(k\!\!\!/+p\!\!\!/+m)\gamma^{\nu}(k\!\!\!/+m)\gamma^{ \rho}}{[(k+p)^{2}-m^{2}][k^{2}-m^{2}]}\Big{]} \tag{19}\] \[+ 2m{\rm Tr}\Big{[}\frac{\gamma^{5}(k\!\!\!/+p\!\!\!/+m)\gamma^{ \nu}(k\!\!\!/+m)\gamma^{\rho}(k\!\!\!/-q\!\!\!/+m)}{[(k-q)^{2}-m^{2}][(k+p)^{ 2}-m^{2}][k^{2}-m^{2}]}\Big{]}.\]
Focusing on the first and second terms, it is a simple exercise to show that they yield identical results to eq. (14) with the mass added in the denominators, and therefore they vanish after integrating over \(k\) and contracting with \(\epsilon^{\mu\nu\rho\sigma}\). The last term, on the other hand, is proportional to the mass and does not yield a vanishing contribution. The trace yields the factor \(4imp_{\mu}q_{\sigma}\epsilon^{\mu\nu\rho\sigma}\), and thus eq. (17) becomes
\[-(p+q)_{\mu}\mathcal{M}_{\triangle}^{\mu\nu\rho}\simeq\int\frac{d^{4}k}{(2\pi) ^{4}}e^{\frac{6k^{2}}{\Lambda^{2}}}\frac{-8e^{2}m^{2}p_{\mu}q_{\sigma}\epsilon ^{\mu\nu\rho\sigma}}{[(k-q)^{2}-m^{2}][(k+p)^{2}-m^{2}][k^{2}-m^{2}]}+\begin{pmatrix} p\leftrightarrow q\\ \nu\leftrightarrow\rho\end{pmatrix}. \tag{20}\]
Evaluating the integral is fairly straightforward, and the result in terms of the Feynman parameters reads
\[-(p+q)_{\mu}\mathcal{M}_{\triangle}^{\mu\nu\rho}\simeq\frac{ie^{2}}{\pi^{2}}p_ {\mu}q_{\sigma}\epsilon^{\mu\nu\rho\sigma}\int_{0}^{1}dxdy\Big{[}\frac{1}{1- xy\frac{Q^{2}}{m^{2}}}+\frac{6m^{2}}{\Lambda^{2}}+\frac{12m^{2}}{\Lambda^{2}} \text{Ei}\Big{(}\frac{6(xyQ^{2}-m^{2})}{\Lambda^{2}}\Big{)}\Big{]}, \tag{21}\]
where \(Q^{2}\equiv(p+q)^{2}\), and the exponential integral function \(\text{Ei}(x)\) is defined as
\[\text{Ei}(x)=-\int_{-x}^{\infty}dt\frac{e^{-t}}{t}. \tag{22}\]
Linking eq. (21) to the massless case is straightforward and can be done simply by taking the limit \(m\to 0\), which leads to the vanishing of the anomaly at LO in the expansion of the external momenta, in a manner consistent with what we found in Section III.1. On the other hand, the link to the local case is more subtle. Here one expects that the local case should be obtained by taking the limit \(\Lambda\to\infty\), however, this turns out to be insufficient. The reason behind this is best understood by calculating the local anomaly following the method in [19], where it is shown that the chiral anomaly in the local case arises purely from the regulator. However, a regulator is absent in the non-local case since it's already finite. Therefore, simply taking \(\Lambda\to\infty\) will not render the regularized local result. Instead, we use the following prescription to remedy the situation: We assume that \(m^{2}\gg Q^{2}\), which _corresponds to the mass itself acting as regulator_. In the limit \(\Lambda^{2}\gg m^{2}\gg Q^{2}\), eq. (21) becomes
\[-(p+q)_{\mu}\mathcal{M}_{\triangle}^{\mu\nu\rho}\simeq\frac{ie^{2}}{2\pi^{2}} p_{\mu}q_{\sigma}\epsilon^{\mu\nu\rho\sigma}\Big{[}1+\frac{6m^{2}}{\Lambda^{2}}+ \frac{12m^{2}}{\Lambda^{2}}\text{Ei}\Big{(}\frac{-6m^{2}}{\Lambda^{2}}\Big{)} \Big{]}, \tag{23}\]
and it's easy to see that upon taking \(\Lambda\to\infty\), the local case is retrieved.
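As a numerical cross-check of both limits, the bracket in eq. (23) can be evaluated with scipy.special.expi (which implements \(\text{Ei}\)); the fermion mass below is purely illustrative.

```python
import numpy as np
from scipy.special import expi  # expi(x) = Ei(x)

def bracket(m, Lam):
    """Correction factor of eq. (23): 1 + 6m^2/L^2 + (12m^2/L^2) Ei(-6m^2/L^2)."""
    r = 6.0 * m**2 / Lam**2
    return 1.0 + r + 2.0 * r * expi(-r)

m = 0.938  # GeV; an illustrative fermion mass
for Lam in [3.0, 10.0, 100.0, 1.0e4]:  # GeV
    print(f'Lambda = {Lam:9.1f} GeV -> bracket = {bracket(m, Lam):.6f}')
# The bracket tends to 1 as Lambda -> infinity, recovering the local anomaly coefficient.
```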
### Noether Currents
Finally, here we derive the non-local Noether vector and axial currents. Notice that the action in eq. (3) is invariant under the global transformations
\[\Psi\to e^{i\alpha}\Psi,\hskip 28.452756pt\Psi\to e^{i\beta\gamma^{5}}\Psi. \tag{24}\]
To derive the corresponding Noether currents, we follow the usual prescription of demanding that the Lagrangian be invariant under the infinitesimal local transformations
\[\Psi\to(1+i\alpha(x))\Psi,\hskip 28.452756pt\Psi\to(1+i\beta(x)\gamma^{5})\Psi, \tag{25}\]
which leads to the current
\[J^{\mu}(x)=\frac{\delta\mathcal{L}}{\delta(\partial_{\mu}\Psi)}\Delta\Psi. \tag{26}\]
In order to derive the non-local QED Noether currents, we start with the Lagrangian
\[\mathcal{L}=\frac{i}{2}\overline{\Psi}\exp\left(\frac{-\Box-ie(\partial\cdot A +A\cdot\partial)-e^{2}A^{2}}{\Lambda^{2}}\right)(\not{\partial}\Psi+ie\not{A} \Psi)+\text{h.c.} \tag{27}\]
Notice that in order to evaluate the variation of the Lagrangian w.r.t. \(\partial\Psi\), we need to pay special attention to the derivatives in the exponent. To this avail, we use the following prescription: First we expand the derivative operators in the exponents, then we act with the derivatives on the associated field, keeping only terms \(\sim\partial\Psi\). Finally, we exponentiate the results and restore the operator form in the currents. Let us first focus on the second term in the parentheses in eq. (27). We assume that the photon is on-shell, such that \(\Box(\not{A}\Psi)=\not{A}\Box\Psi=-k_{1}^{2}\not{A}\Psi\). Therefore we have
\[\exp\Big{(}-\frac{\Box}{\Lambda^{2}}\Big{)}(\not{A}\Psi)=\sum_{n=0}^{\infty}\Big{[}\frac{(-1)^{n}\Box^{n}}{\Lambda^{2n}n!}\Big{]}(\not{A}\Psi)=\sum_{n=0}^{\infty}\Big{[}\frac{(k_{1}^{2})^{n}}{\Lambda^{2n}n!}\Big{]}(\not{A}\Psi)=\exp\Big{(}\frac{k_{1}^{2}}{\Lambda^{2}}\Big{)}(\not{A}\Psi). \tag{28}\]
On the other hand, the remaining derivative acting on \(\not{A}\Psi\) can be evaluated as follows:
\[\exp\Big{(}\frac{-ieA\cdot\partial}{\Lambda^{2}}\Big{)}(ie\not{A}\Psi) = ie\sum_{n=0}^{\infty}\frac{(-ieA\cdot\partial)^{n}}{\Lambda^{2n }n!}(\not{A}\Psi), \tag{29}\] \[= ie\sum_{n=0}^{\infty}\frac{(-ieA^{\mu})^{n}}{\Lambda^{2n}n!} \sum_{k=0}^{n}\binom{n}{k}(\partial_{\mu}^{n-k}\not{A})(\partial_{\mu}^{k}\Psi),\] \[= ie\not{A}A\cdot\partial\Psi\sum_{n=0}^{\infty}\frac{(-ieA^{\mu} )^{n}}{\Lambda^{2n}n!}\sum_{k=0}^{n}\binom{n}{k}(iq\cdot A)^{n-k}(-ik_{1}\cdot A )^{k-1},\] \[= -\frac{e^{2}\not{A}A\cdot\partial\Psi}{k_{1}\cdot A}\exp\Big{(} \frac{eA\cdot k_{2}}{\Lambda^{2}}\Big{)},\]
where we have used conservation of momentum to eliminate the momentum of the photon. The hermitian conjugate yields identical results with \(k_{1}\leftrightarrow k_{2}\). Thus, after restoring the operators, the second term in eq. (27) becomes
\[{\cal L}_{2}=-\frac{ie^{2}}{2}\overline{\Psi}\exp{\Bigg{(}\frac{-\Box-ieA\cdot \partial-e^{2}A^{2}}{\Lambda^{2}}\Bigg{)}}\Big{(}\frac{1}{k_{1}\cdot A}+\frac {1}{k_{2}\cdot A}\Big{)}A\!\!\!/A\cdot\partial\Psi. \tag{30}\]
Notice that when the photon is assumed to be on-shell, we have
\[\frac{1}{k_{1}\cdot A}+\frac{1}{k_{2}\cdot A}=\frac{(k_{1}+k_{2})\cdot A}{(k_ {1}\cdot A)(k_{2}\cdot A)}=\frac{q\cdot A}{(k_{1}\cdot A)(k_{2}\cdot A)}=0, \tag{31}\]
which implies that the second term in eq. (27) does not contribute to the non-local Noether currents. On the other hand, the first term will give a non-vanishing contribution. Following the same procedure, we obtain
\[{\cal L}_{1}=\frac{i}{2}\overline{\Psi}\exp{\Bigg{(}\frac{k_{1}^{2}+eA\cdot k _{2}-e^{2}A^{2}}{\Lambda^{2}}\Bigg{)}}\partial\!\!\!/\Psi+(1\leftrightarrow 2). \tag{32}\]
Using eq. (32) in eq. (26), then restoring the operators in the exponents, we obtain the Noether currents
\[J^{\mu}(x) = \overline{\Psi}\gamma^{\mu}\Psi\exp{\Big{(}\frac{-\Box-ieA\cdot \partial-e^{2}A^{2}}{\Lambda^{2}}\Big{)}}, \tag{33}\] \[J^{\mu 5}(x) = \overline{\Psi}\gamma^{\mu}\gamma^{5}\Psi\exp{\Big{(}\frac{-\Box -ieA\cdot\partial-e^{2}A^{2}}{\Lambda^{2}}\Big{)}}. \tag{34}\]
Notice that upon taking the limit \(\Lambda\to\infty\), the local limit is retrieved, i.e. \(J^{\mu}\to\overline{\Psi}\gamma^{\mu}\Psi\) and \(J^{\mu 5}\to\overline{\Psi}\gamma^{\mu}\gamma^{5}\Psi\).
Before we conclude this section, there is an important point that we need to clarify. As is well-known, local anomalies are obtained by evaluating correlation functions of the Noether currents. Thus, we should be able to obtain the non-local anomalies by evaluating
\[\int d^{4}xd^{4}yd^{4}ze^{-ip.x}e^{iq_{1}.y}e^{iq_{2}.z}\langle J^{\mu 5}(x)J^{\nu}(y)J^{\rho}(z)\rangle, \tag{35}\]
with the currents given by eqs. (33) and (34). However, given the field \(A\) in the exponents of the vector and axial currents, we see that the expansion in \(A\) actually corresponds to the sum of all insertions of the vector current in the fermion loop, i.e. the quantity in eq. (35) actually encodes all higher-order anomalies that correspond to an arbitrary number of the gauge field \(A\) inserted into a fermion loop (in addition to the insertions of vector and axial fields from the local piece). These anomalies in general, might not be vanishing, however,
we are only interested in the triangle anomalies. Triangle anomalies can be obtained by keeping the leading order in \(A\), i.e.
\[\exp\Big{(}\frac{-\Box-ieA\cdot\partial-e^{2}A^{2}}{\Lambda^{2}}\Big{)}\simeq \exp\Big{(}-\frac{\Box}{\Lambda^{2}}\Big{)}+O(A). \tag{36}\]
Thus we can see that, at this order, eq. (35) leads to the same results we obtained above.
### Summary of the Results
In this section we summarize the results that we obtained in this paper:
* Vector anomalies in non-local QED vanish exactly, whether the fermions in the loops are massless or massive, and the Ward identity is respected. It is also not hard to show that the vanishing of the vector anomaly holds to all orders in the expansion of \(p,q/\Lambda\). This is expected, since the non-local QED action in eq. (3) is gauge-invariant by construction,
* Although in non-local QED with massless fermions the chiral anomaly appears to vanish at LO in \(p,q/\Lambda\), one can show that this no longer holds once higher-order corrections are included. In addition, for non-local QED with massive fermions at LO, we find that the chiral anomaly persists and that it has the expected form. We found that while obtaining the massless limit is straightforward, the local limit is more subtle and cannot be obtained by simply taking \(\Lambda\to\infty\). Instead, one needs to assume that the mass of the fermions is much larger than the other momentum scales, so that it acts as a regulator itself in the local limit. Using this prescription, the correct local limit is obtained,
* The non-local vector and axial currents encode anomalies that correspond to all insertions of the gauge field in the fermion loop, with the triangle anomalies obtained from the LO expansion in the gauge field. This is a direct consequence of gauge invariance, which leads to rich structures in non-local QED that merit further investigation in the future.
* Our results are consistent with those found in Ref. [18] using the shadow field formalism.
## IV Application: \(\pi^{0}\to\gamma\gamma\) Decay
We present an application to anomalies in non-local QED by studying the decay process of \(\pi^{0}\to\gamma\gamma\). This decay proceeds through triangle diagrams like the ones shown in Figure 1, with the axial current replaced with a pseudo-scalar and with protons running in the loops. The interaction Lagrangian is given by
\[\mathcal{L}_{\rm int}=-i\lambda\pi\overline{\Psi}\gamma^{5}\Psi. \tag{37}\]
The matrix element can be written as \(-\lambda e^{2}\epsilon_{1\mu}^{*}\epsilon_{2\nu}^{*}\mathcal{M}^{\mu\nu}\), where at LO in \(q_{1,2}/\Lambda\) we have
\[\mathcal{M}^{\mu\nu}\simeq\int\frac{d^{4}k}{(2\pi)^{4}}e^{\frac{5k^{2}}{\Lambda^{2}}}{\rm Tr}\Bigg{[}\gamma^{\mu}\frac{i(\not{k}-\not{q_{1}}+m)}{(k-q_{1})^{2}-m^{2}}\gamma^{5}\frac{i(\not{k}+\not{q_{2}}+m)}{(k+q_{2})^{2}-m^{2}}\gamma^{\nu}\frac{i(\not{k}+m)}{k^{2}-m^{2}}\Bigg{]}+\begin{pmatrix}1\leftrightarrow 2\\ \mu\leftrightarrow\nu\end{pmatrix}, \tag{38}\]
where \(m\) is the mass of the proton. \(\mathcal{M}^{\mu\nu}\) can be evaluated following the procedure illustrated in Section III, and in the limit \(m\gg m_{\pi}\), the decay width reads
\[\Gamma_{\rm NL}(\pi^{0}\to\gamma\gamma)\simeq\Gamma_{0}\times\Bigg{[}1+\frac{ 5m^{2}}{\Lambda^{2}}+\frac{10m^{2}}{\Lambda^{2}}{\rm Ei}\Big{(}-\frac{5m^{2}} {\Lambda^{2}}\Big{)}\Bigg{]}^{2}, \tag{39}\]
where
\[\Gamma_{0}=\frac{\alpha^{2}}{64\pi^{3}}\frac{m_{\pi}^{3}}{f_{\pi}^{2}}, \tag{40}\]
is the decay width in the local case, and \(f_{\pi}\) is the pion decay constant. We can use eq. (39) to set a lower limit on the scale of non-locality. The most recent measurement of the decay width of \(\pi^{0}\to\gamma\gamma\) comes from the PrimEx-II experiment:
\[\Gamma_{\rm Exp}(\pi^{0}\to\gamma\gamma)=7.802\pm 0.052\,({\rm stat.})\pm 0.105 \,({\rm syst.})\,{\rm eV}, \tag{41}\]
which can be used to set a \(2\sigma\) limit on the scale of non-locality
\[\Lambda\gtrsim 57\,{\rm GeV}. \tag{42}\]
This bound is not very stringent and cannot compete with the collider bound of \(\Lambda\gtrsim 2.5-3\) TeV [14].
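The estimate above can be reproduced with a short numerical sketch, assuming \(f_{\pi}\approx 92\) MeV, the proton mass in the loop, and that the statistical and systematic uncertainties of eq. (41) add in quadrature; the recovered bound is of the quoted size, with the exact number depending on the chosen inputs.

```python
import numpy as np
from scipy.special import expi
from scipy.optimize import brentq

alpha, m_pi, f_pi, m_p = 1 / 137.036, 0.134977, 0.0921, 0.938  # GeV units (assumed inputs)

Gamma0 = alpha**2 * m_pi**3 / (64 * np.pi**3 * f_pi**2) * 1e9  # eV; local width, eq. (40)

def Gamma_NL(Lam):
    r = 5.0 * m_p**2 / Lam**2
    return Gamma0 * (1.0 + r + 2.0 * r * expi(-r))**2          # eq. (39)

# 2-sigma lower edge of the PrimEx-II measurement, eq. (41):
lower = 7.802 - 2.0 * np.hypot(0.052, 0.105)
Lam_bound = brentq(lambda L: Gamma_NL(L) - lower, 5.0, 500.0)  # GeV
print(f'Gamma0 = {Gamma0:.2f} eV; 2-sigma bound: Lambda > {Lam_bound:.0f} GeV')
```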
## V Conclusion and Outlook
In this paper, we investigated the vector and chiral anomalies in the non-local QED formulated in [14]. We found that the vanishing of the vector anomaly remains unaffected and that the Ward identity continues to hold in the non-local case as well. This is to be expected since non-local QED is gauge invariant by construction.
We also found that at leading order the chiral anomaly vanishes in the massless case, while it does not vanish in the massive case. Also, the anomaly continues to exist at next to leading order in the massless case. Naively, one might speculate that since non-local QED lacks a regulator, being already finite, and since the chiral anomaly in the local case arises purely from the regulator, the chiral anomaly in the non-local case would vanish. Nonetheless, this turned out not to be the case, and the chiral anomaly is non-vanishing at next to leading order for the massless case, and can be expressed in terms of the local anomaly plus corrections suppressed by the scale of non-locality. We found that obtaining the local limit from the non-local case requires special care; with the correct prescription, the local limit is obtained when \(\Lambda\to\infty\). Our results are consistent with the results found in [18] by using the shadow field formalism. We also found the corresponding vector and axial Noether currents in the non-local case and found that they encode all higher-order anomalies, with the triangle anomalies obtained from the LO expansion in the gauge field. We also showed that in the limit \(\Lambda\to\infty\), the local currents are obtained.
As a simple application of our results, we calculated the corrections to the decay width \(\pi^{0}\to\gamma\gamma\) due to non-locality and found that the constraint corresponding to the current experimental measurement is weak compared to the limit obtained from the LHC.
## Acknowledgment
FA thanks Sudhir Vempati. The work of FA is supported by the C.V. Raman fellowship from CHEP at IISc. The work of PC is supported by an EPSRC fellowship. The work of NO is supported in part by the United States Department of Energy Grant, No. DE-SC0012447.
## Appendix A Derivation of the Non-local \(\overline{\Psi}\Psi\gamma\gamma\) Vertex
Here we show how to derive the Feynman rule for the \(\overline{\Psi}\Psi\gamma\gamma\) vertex in non-local QED. The Feynman rule for the \(\overline{\Psi}\Psi\gamma\) vertex was derived in [14] and is shown in eq. (6). The full Feynman rule of the \(\overline{\Psi}\Psi\gamma\gamma\) vertex is rather complex, therefore, we simplify by assuming that the photons are _on-shell_, which is the case we are interested in for calculating the anomalies, and we only keep the leading terms in \(1/\Lambda^{2}\). We start with the fermion part of the non-local QED action in eq. (3)
\[S_{\rm NL}=\frac{1}{2}\int d^{4}x\Big{[}i\overline{\Psi}e^{-\frac{\nabla^{2}}{\Lambda^{2}}}(\not{\nabla}+m)\Psi+h.c.\Big{]}. \tag{A1}\]
We first expand the non-local form factor in powers of \(1/\Lambda^{2}\) and write the covariant derivative explicitly as shown in eq. (4):
\[S_{\rm NL}=\frac{1}{2}\int d^{4}x\Bigg{\{}i\overline{\Psi}\sum_{n=1}^{\infty}\frac{(-1)^{n}}{\Lambda^{2n}n!}\Big{[}\Box+ie(\partial\cdot A+A\cdot\partial)-e^{2}A^{2}\Big{]}^{n}\Big{[}\not{\partial}+ie\not{A}\Big{]}\Psi+h.c.\Bigg{\}}. \tag{A2}\]
In order to obtain the \(\overline{\Psi}\Psi\gamma\gamma\) vertex, we only keep terms that are proportional to \(A^{2}\), i.e. the terms \(\sim O(e^{2})\). Inspecting eq. (A2), we can see that we can obtain terms at \(O(A^{2})\) in three different ways: 1) for \(n=1\), we can have the \(A^{2}\) term in the first bracket multiplied by the \(\not{\partial}\Psi\) term in the second bracket, 2) for \(n=1\), we can have the \((\partial\cdot A+A\cdot\partial)\) term from the first bracket multiplied by the \(\not{A}\) term in the second bracket, and 3) for \(n=2\), we can have the \((\partial\cdot A+A\cdot\partial)^{2}\) term from the first bracket multiplied by the \(\not{\partial}\Psi\) term in the second bracket. Explicitly, we have
\[S_{\rm NL}\supset -\frac{ie^{2}}{2}\sum\limits_{n=0}^{\infty}\frac{(-1)^{n}}{\Lambda^{2n}n!}\int d^{4}x\Big{\{}\sum\limits_{m=0}^{n-1}(\Box^{m}\overline{\Psi})\Big{[}A^{2}\Box^{n-m-1}(\not{\partial}\Psi)+(\partial\cdot A+A\cdot\partial)\Box^{n-m-1}(\not{A}\Psi)\Big{]}\] \[+\sum\limits_{m=0}^{n-2}\sum\limits_{l=0}^{n-m-2}(\Box^{m}\overline{\Psi})(\partial\cdot A+A\cdot\partial)\Box^{l}(\partial\cdot A+A\cdot\partial)\Box^{n-m-l-2}(\not{\partial}\Psi)+h.c.\Big{\}}, \tag{A3}\]
where we have integrated \(\overline{\Psi}\Box^{m}\) by parts to obtain \(\Box^{m}\overline{\Psi}\). We treat each of the three terms separately. Starting with the first term, notice that each \(\Box\) operator will pull down a factor of \(-k_{1,2}^{2}\), with \(k_{1,2}\) being the 4-momentum of \(\overline{\Psi}\) and \(\Psi\), respectively. On the other hand, the \(\not{\partial}\Psi\) will pull down a factor of \(-i\not{k}_{2}\), whereas the hermitian conjugate will give a factor of \(-i\not{k}_{1}\), thereby symmetrizing the result between \(k_{1}\) and \(k_{2}\). Thus, the first term yields
\[S_{1}=\frac{e^{2}}{2}\sum_{n=0}^{\infty}\frac{1}{\Lambda^{2n}n!}\sum_{m=0}^{n-1}\int d^{4}x\,(\not{k}_{1}+\not{k}_{2})(k_{1}^{2m}k_{2}^{2(n-m-1)})\overline{\Psi}\Psi A^{2}, \tag{A4}\]
and the sums can be evaluated as follows
\[\sum_{n=0}^{\infty}\frac{1}{\Lambda^{2n}n!}\sum_{m=0}^{n-1}k_{1}^{2m}k_{2}^{2(n-m-1)}=\sum_{n=0}^{\infty}\frac{k_{2}^{2n-2}}{\Lambda^{2n}n!}\Bigg{[}\frac{1-(k_{1}^{2}/k_{2}^{2})^{n}}{1-(k_{1}^{2}/k_{2}^{2})}\Bigg{]}=\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}-e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{2}^{2}-k_{1}^{2}}, \tag{A5}\]
which, together with eq. (A4), implies that the contribution of the first term is given by
\[V_{1\mu\nu}(k_{1},k_{2},q_{1},q_{2})=ie^{2}(\not{k}_{1}+\not{k}_{2})\Bigg{(}\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}-e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{2}^{2}-k_{1}^{2}}\Bigg{)}g_{\mu\nu}, \tag{A6}\]
where \(q_{1,2}\) are the momenta of the photons, which will be relevant for the remaining contributions. Turning to the second term in eq. (A3), we have
\[S_{\rm NL,2}=\frac{ie^{2}}{2}\sum_{n=0}^{\infty}\frac{1}{\Lambda^{2n}n!}\sum_{m=0}^{n-1}\int d^{4}x\Big{[}(k_{1}^{2m}k_{2}^{2(n-m-1)})\overline{\Psi}(\partial\cdot A+A\cdot\partial)(\not{A}\Psi)+h.c.\Big{]}, \tag{A7}\]
where we have acted with the \(\Box\) operators on the respective fields, and assumed that the photon is on-shell, such that \(\Box\not{A}=-q_{1}^{2}\not{A}=0\). Notice that the sums are identical to eq. (A5). Therefore, writing the hermitian conjugate explicitly, eq. (A7) reads
\[S_{2}=\frac{ie^{2}}{2}\Bigg{(}\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}-e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{2}^{2}-k_{1}^{2}}\Bigg{)}\int d^{4}x\Big{[}\overline{\Psi}(\partial\cdot A+A\cdot\partial)(\not{A}\Psi)+(\partial\cdot A+A\cdot\partial)(\overline{\Psi}\not{A})\Psi\Big{]}. \tag{A8}\]
Notice that the second operator acts only on \(\overline{\Psi}\not{A}\). Acting with the partial derivative on the fermions and the photon will pull down the momentum of the respective field, and one can eliminate the momentum of the photon in favor of the momenta of the two fermions, such that eq. (A8) becomes
\[S_{2}=\frac{ie^{2}}{2}(k_{1\mu}+k_{2\mu})\Bigg{(}\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}-e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{2}^{2}-k_{1}^{2}}\Bigg{)}\int d^{4}x\,\overline{\Psi}\Psi A^{\mu}\not{A}, \tag{A9}\]
which implies that the Feynman rule corresponding to the second vertex is given by
\[V_{2\mu\nu}(k_{1},k_{2},q_{1},q_{2})=-e^{2}(k_{1\mu}+k_{2\mu})\gamma_{\nu}\Bigg{(}\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}-e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{2}^{2}-k_{1}^{2}}\Bigg{)}. \tag{A10}\]
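The resummation of eq. (A5), on which both \(V_{1}\) and \(V_{2}\) rely, can be checked numerically. A minimal sketch with arbitrarily chosen test values for \(k_{1}^{2}\), \(k_{2}^{2}\) and \(\Lambda^{2}\):

```python
from math import exp, factorial

k1sq, k2sq, Lam2 = 0.3, 0.5, 1.0   # arbitrary test values for k_i^2 and Lambda^2

# Truncated double sum on the left-hand side of eq. (A5)
lhs = sum(
    sum(k1sq**m * k2sq**(n - m - 1) for m in range(n)) / (Lam2**n * factorial(n))
    for n in range(40)
)

# Closed form on the right-hand side of eq. (A5)
rhs = (exp(k2sq / Lam2) - exp(k1sq / Lam2)) / (k2sq - k1sq)

print(lhs, rhs)   # both ~1.4943, agreeing to machine precision
```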
Finally, we turn our attention to the last term, given in the second line of eq. (A3). This part is quite complex, so we resort to some approximations to evaluate it. First, we notice that
\[(\partial\cdot A+A\cdot\partial)\Box^{l}(\partial\cdot A+A\cdot\partial)\Box^{n-m-l-2}(\not{\partial}\Psi)=ik_{1\nu}\not{k}_{2}(q_{1\mu}-k_{2\mu})(-k_{2}^{2})^{n-m-2}A^{\mu}A^{\nu}\Psi, \tag{A11}\]
where \(q_{1\mu}\) is the momentum of one of the photons, and we have assumed that the photons are on-shell and utilized conservation of momentum to eliminate the momenta of the photons in favor of the momenta of the fermions whenever possible. Therefore, the third term in eq. (A3) reads
\[S_{3}=\frac{e^{2}}{2}\sum_{n=0}^{\infty}\frac{1}{\Lambda^{2n}n!}\sum_{m=0}^{n-2}\sum_{l=0}^{n-m-2}\int d^{4}x\Big{[}(k_{1}^{2m}k_{2}^{2(n-m-2)})k_{1\nu}\not{k}_{2}(q_{1\mu}-k_{2\mu})A^{\mu}A^{\nu}\overline{\Psi}\Psi+h.c.\Big{]}. \tag{A12}\]
We need to evaluate the sums over \(l\), \(m\) and \(n\). First notice that the sum over \(l\) is trivial and just leads to a factor of \(n-m-2\). Therefore, the sum over \(m\) becomes
\[\sum_{m=0}^{n-2}(n-m-2)(k_{1}^{2m}k_{2}^{2(n-m-2)})=(n-2)\Bigg{[}\frac{(k_{2}^{2})^{n-1}-(k_{1}^{2})^{n-1}}{k_{2}^{2}-k_{1}^{2}}\Bigg{]}-\Big{(}k_{2}^{2(n-2)}\Big{)}\Big{(}\frac{k_{1}^{2}}{k_{2}^{2}}\Big{)}\Bigg{[}\frac{1-(n-1)(k_{1}^{2}/k_{2}^{2})^{n-2}+(n-2)(k_{1}^{2}/k_{2}^{2})^{n-1}}{(1-k_{1}^{2}/k_{2}^{2})^{2}}\Bigg{]}, \tag{A13}\]
and we can now plug this into eq. (A12) to evaluate the sum over \(n\). The first term in the sum over \(n\) yields
\[\sum_{n=0}^{\infty}\frac{(n-2)}{\Lambda^{2n}n!}\Big{(}\frac{k_{2}^{2(n-1)}-k_{1}^{2(n-1)}}{k_{2}^{2}-k_{1}^{2}}\Big{)}=\frac{1}{\Lambda^{2}(k_{2}^{2}-k_{1}^{2})}\Bigg{[}\Big{(}1-\frac{2\Lambda^{2}}{k_{2}^{2}}\Big{)}e^{\frac{k_{2}^{2}}{\Lambda^{2}}}-\Big{(}1-\frac{2\Lambda^{2}}{k_{1}^{2}}\Big{)}e^{\frac{k_{1}^{2}}{\Lambda^{2}}}\Bigg{]}, \tag{A14}\]
whereas the second term yields
\[\sum_{n=0}^{\infty}\frac{1}{\Lambda^{2n}n!}\Big{(}k_{2}^{2(n-2)}\Big{)}\Big{(}\frac{k_{1}^{2}}{k_{2}^{2}}\Big{)}\Bigg{[}\frac{1-(n-1)(k_{1}^{2}/k_{2}^{2})^{n-2}+(n-2)(k_{1}^{2}/k_{2}^{2})^{n-1}}{(1-k_{1}^{2}/k_{2}^{2})^{2}}\Bigg{]}=\frac{1}{(k_{2}^{2}-k_{1}^{2})^{2}}\Bigg{[}\Big{(}\frac{k_{1}^{2}}{k_{2}^{2}}\Big{)}e^{\frac{k_{2}^{2}}{\Lambda^{2}}}+\Big{(}\frac{k_{1}^{2}}{\Lambda^{2}}-\frac{k_{2}^{2}}{\Lambda^{2}}+\frac{k_{2}^{2}}{k_{1}^{2}}-2\Big{)}e^{\frac{k_{1}^{2}}{\Lambda^{2}}}\Bigg{]}. \tag{A15}\]
We simplify our results by keeping only the leading order in \(\Lambda\), so we drop terms \(\sim O(1/\Lambda^{2})\). We plug eqs. (A14) and (A15) into eq. (A12) and then evaluate the hermitian conjugate, which can simply be obtained from the first part by interchanging \(k_{1}\leftrightarrow k_{2}\). Finally, we arrive at the third contribution to the Feynman rule
\[V_{3\mu\nu}(k_{1},k_{2},q_{1},q_{2})\simeq ie^{2}k_{1\nu}\not{k}_{2}(q_{1\mu}-k_{2\mu})\Bigg{\{}\frac{2}{k_{2}^{2}-k_{1}^{2}}\Bigg{(}\frac{e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{1}^{2}}-\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}}{k_{2}^{2}}\Bigg{)}+\frac{1}{(k_{2}^{2}-k_{1}^{2})^{2}}\Bigg{[}\Big{(}\frac{k_{1}^{2}}{k_{2}^{2}}\Big{)}e^{\frac{k_{2}^{2}}{\Lambda^{2}}}+\Big{(}\frac{k_{2}^{2}}{k_{1}^{2}}-2\Big{)}e^{\frac{k_{1}^{2}}{\Lambda^{2}}}\Bigg{]}\Bigg{\}}+(k_{1}\leftrightarrow k_{2}). \tag{A16}\]
Putting all the pieces together from eqs. (A6), (A10) and (A16), we arrive at the final result
\[V_{\mu\nu}(k_{1},k_{2},q_{1},q_{2})\simeq ie^{2}\Bigg{\{}\Big{[}(\not{k}_{1}+\not{k}_{2})g_{\mu\nu}+i(k_{1\mu}+k_{2\mu})\gamma_{\nu}\Big{]}\Bigg{(}\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}-e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{2}^{2}-k_{1}^{2}}\Bigg{)}\] \[+k_{1\nu}\not{k}_{2}(q_{1\mu}-k_{2\mu})\Bigg{\{}\frac{2}{k_{2}^{2}-k_{1}^{2}}\Bigg{(}\frac{e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{1}^{2}}-\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}}{k_{2}^{2}}\Bigg{)}+\frac{1}{(k_{2}^{2}-k_{1}^{2})^{2}}\Bigg{[}\Big{(}\frac{k_{1}^{2}}{k_{2}^{2}}\Big{)}e^{\frac{k_{2}^{2}}{\Lambda^{2}}}+\Big{(}\frac{k_{2}^{2}}{k_{1}^{2}}-2\Big{)}e^{\frac{k_{1}^{2}}{\Lambda^{2}}}\Bigg{]}\Bigg{\}}\] \[+k_{2\nu}\not{k}_{1}(q_{1\mu}-k_{1\mu})\Bigg{\{}\frac{2}{k_{1}^{2}-k_{2}^{2}}\Bigg{(}\frac{e^{\frac{k_{2}^{2}}{\Lambda^{2}}}}{k_{2}^{2}}-\frac{e^{\frac{k_{1}^{2}}{\Lambda^{2}}}}{k_{1}^{2}}\Bigg{)}+\frac{1}{(k_{1}^{2}-k_{2}^{2})^{2}}\Bigg{[}\Big{(}\frac{k_{2}^{2}}{k_{1}^{2}}\Big{)}e^{\frac{k_{1}^{2}}{\Lambda^{2}}}+\Big{(}\frac{k_{1}^{2}}{k_{2}^{2}}-2\Big{)}e^{\frac{k_{2}^{2}}{\Lambda^{2}}}\Bigg{]}\Bigg{\}}\Bigg{\}}. \tag{A17}\]
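The less familiar resummation of eq. (A14) can be verified in the same way; note that the \(n=0\) term involves inverse powers \(k_{i}^{-2}\) and is essential for the agreement. Test values are again arbitrary:

```python
from math import exp, factorial

k1sq, k2sq, Lam2 = 0.3, 0.5, 1.0   # arbitrary test values

# Truncated sum on the left-hand side of eq. (A14); note the n = 0 term
lhs = sum(
    (n - 2) / (Lam2**n * factorial(n))
    * (k2sq**(n - 1) - k1sq**(n - 1)) / (k2sq - k1sq)
    for n in range(40)
)

# Closed form on the right-hand side of eq. (A14)
rhs = ((1 - 2 * Lam2 / k2sq) * exp(k2sq / Lam2)
       - (1 - 2 * Lam2 / k1sq) * exp(k1sq / Lam2)) / (Lam2 * (k2sq - k1sq))

print(lhs, rhs)   # both ~13.515
```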
|
2310.04758 | Inverse transitions and disappearance of the λ-line in the
asymmetric random field Ising and Blume-Capel models | We report on reentrance in the random field Ising and Blume-Capel models,
induced by an asymmetric bimodal random field distribution. The conventional
continuous line of transitions between the paramagnetic and ferromagnetic
phases, the {\lambda}-line, is wiped away by the asymmetry. The phase diagram,
then, consists of only first order transition lines that always end at ordered
critical points. We find that while for symmetric random field distributions
there was no reentrance, the asymmetry in the random field results in a range
of temperatures for which magnetisation shows reentrance. While this does not
give rise to an inverse transition in the Ising model, for the Blume-Capel
model, however, there is a line of first order inverse phase transitions that
ends at an inverse ordered critical point. We show that the location of the
inverse transitions can be inferred from the ground state phase diagram of the
model. | Santanu Das, Sumedha | 2023-10-07T09:39:44Z | http://arxiv.org/abs/2310.04758v1 | Inverse transitions and disappearance of the \(\lambda\)-line in the asymmetric random field Ising and Blume-Capel models
###### Abstract
We report on reentrance in the random field Ising and Blume-Capel models, induced by an asymmetric bimodal random field distribution. The conventional continuous line of transitions between the paramagnetic and ferromagnetic phases, the \(\lambda\)-line, is wiped away by the asymmetry. The phase diagram, then, consists of only first order transition lines that always end at ordered critical points. We find that while for symmetric random field distributions there was no reentrance, the asymmetry in the random field results in a range of temperatures for which magnetisation shows reentrance. While this does not give rise to an inverse transition in the Ising model, for the Blume-Capel model, however, there is a line of first order inverse phase transitions that ends at an inverse ordered critical point. We show that the location of the inverse transitions can be inferred from the ground state phase diagram of the model.
## I Introduction
Inverse transitions are an unusual class of phase transitions where the ordered phase has more entropy than the disordered phase and hence occurs at a higher temperature [1]. This entropy driven phase reentrance of the ordered phase is widely observed [2]. Examples include ferroelectric thin films [3], perpendicularly magnetized ultrathin ferromagnetic films [4; 5; 6], anisotropic dipolar magnets [7], polymer systems such as Poly(4-methyl-1-pentene) [8; 9], the solutions of cyclodextrin, water and methylpyridine [10; 11], inverse melting between lattice and disordered vortex phase in high-temperature superconductors [12] and shear thickening in glasses and granular systems [13].
Models with spin-1 variables like the Ghatak-Sherrington model have been found to exhibit inverse transition (IT) in some recent studies [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. These studies have focussed on models with a glassy phase and random bond interactions, where it is expected that frustration and disorder allows for a possibility of inverse freezing (a glass to liquid transition on cooling). Reentrance is also seen in dipolar long and short range models with asymmetric random interaction and Gaussian random fields [27]. However, in general it is expected that random fields will suppress the IT [22].
In this work, we study the random field Ising model (RFIM) and Blume-Capel model (RFBCM) with ferromagnetic interactions and an asymmetric bimodal distribution (BD) for the random field. These models do not have a glass phase. Also, the models with the symmetric BD for the quenched random fields have no ITs [28; 29]. Any asymmetry in the random field distribution is expected to make the system less random, and hence no ITs are expected. In this paper, we undertake an expansive study of the infinite range RFIM and RFBCM with asymmetric BD and report a number of interesting results. Infinite range interaction models usually belong to the same universality class as the mean-field models with fixed coordination number. Generically, we find that even an infinitesimal asymmetry changes the phase diagram non-trivially. Interestingly, there is a line of inverse first order transitions in the phase diagram of the asymmetric RFBCM. While there have been some studies of these models with symmetric distributions [28; 29; 30; 31], asymmetric distributions have hitherto been studied only for the RFIM [32; 33; 34]. Disorder distribution is typically asymmetric in real experiments [32]. We find that the asymmetric RFBCM shows first order ITs similar to those seen in experiments that display inverse melting [8; 9; 10; 11].
For the symmetric BD, the RFIM has a line of continuous transitions (\(\lambda\)-line) that meets a line of first order transitions at a tricritical point (TCP) [35]. We find that even a slight asymmetry wipes away the \(\lambda\)-line and the TCP in the RFIM. We instead find a phase diagram consisting of a line of first order transitions that ends at a critical point. The magnetization (\(m\)) is non-zero at this point, and hence we call this an ordered critical point (OCP) [36]. The location of the OCP is, to a good approximation, determined by the location of the first order transition in the ground state phase diagram of the model. Hence, even at finite temperature (\(T\)), the phase diagram is dominated by the random field disorder.
Fluid separation in porous media is considered a good realization of the RFIM [37; 38; 39]. Experiments on these systems found the value of the order-parameter exponent to be closer to that of the pure Ising model than to that of the RFIM with a symmetric random field distribution [37]. It was suggested that these experiments should be compared with the asymmetric RFIM [32]. More recent experiments show out-of-equilibrium disorder-driven behaviour similar to the athermal non-equilibrium RFIM [39; 40]. Consistent with the experiments, we find that the value of the exponent near an OCP is the same as at the pure Ising critical point.
Another interesting observation is the non-monotonic behaviour of \(m\) as a function of \(T\) for both the asymmetric RFIM and the asymmetric RFBCM. We find that for values of the parameters close to an OCP, \(m\) can become non-monotonic, though in the absence of the crystal field (\(\Delta\)) there is no IT in these models. We show that for the RFBCM, for a range of \(\Delta\), \(m\) jumps to a higher value on increasing \(T\). The system has a
first order IT which we show is entropy-driven. The magnitude of the jump decreases with increasing \(T\), and the line of first order ITs ends at an inverse OCP. We hence report a mechanism for ITs which crucially depends on the asymmetry of the disorder distribution. This is an inverse melting transition, since the system goes from a less ordered state to a more ordered state on increasing \(T\). We also find that the RFBCM has two first order transitions with increasing \(T\) for a narrow range of parameters: first from a less ordered state to a more ordered state and then again to a less ordered state, similar to the two first order transitions observed in recent experiments involving solutions of cyclodextrin, water and methylpyridine [11]. The RFBCM also shows a reentrance in the quadrupole moment (\(q\)) for some range of the parameters. We show that the ground state phase diagram crucially determines the phase diagram at finite \(T\).
## II Model
The Hamiltonian for the infinite range RFIM and RFBCM can be written as
\[\mathcal{H}=-\frac{1}{2N}\left(\sum_{i=1}^{N}s_{i}\right)^{2}+\Delta\sum_{i=1} ^{N}s_{i}^{2}-\sum_{i=1}^{N}h_{i}s_{i} \tag{1}\]
where \(s_{i}=\pm 1\) for the RFIM and \(s_{i}=0,\pm 1\) for the RFBCM. The crystal field is represented by \(\Delta\); it is \(0\) for the RFIM. The RFBCM with \(s_{i}=0,\pm 1\) and \(\Delta=0\) has a behaviour similar to that of the RFIM with \(s_{i}=\pm 1\). We hence also call it the RFIM with \(s=1\).
The magnetic field \(h_{i}\) associated with each site is an independent and identically distributed (i.i.d) random variable taken from the BD of the form
\[Q(h_{i})=r\delta(h_{i}-h_{0})+(1-r)\delta(h_{i}+h_{0}), \tag{2}\]
with bias \(r\) and strength \(h_{0}\). The above distribution is asymmetric when \(r\neq 1/2\). We take \(h_{0}>0\) and consider \(r\in[1/2,1]\).
The probability of a spin configuration \(C_{N}\) with magnetisation \(x_{1}=\sum_{i}s_{i}/N\) and quadrupole moment \(x_{2}=\sum_{i}s_{i}^{2}/N\) satisfies a large deviation principle (LDP), i.e., \(P(C_{N}:x_{1},x_{2})\sim e^{-NI(x_{1},x_{2})}\). Here \(I\) is a rate function that can be calculated using large deviation theory. The free energy of the system is then the infimum of \(I\) with respect to \(x_{1}\) and \(x_{2}\). It is hence enough to consider only the fixed points of \(I\) to write the generalized free energy functional of the model. We thus obtain an expression for the free energy functional of the model with quenched random fields (see [30] for details) as
\[\widetilde{f}(x_{1})=\frac{1}{2}\beta x_{1}^{2}-\left\langle\log\left(c+2e^{- \beta\Delta}\cosh\beta(x_{1}+h_{i})\right)\right\rangle_{\{h_{i}\}}, \tag{3}\]
where \(\beta=1/T\), \(c=0\) for the RFIM and \(c=1\) for the RFBCM. The value of \(x_{1}\) that minimises \(\widetilde{f}(x_{1})\) is the magnetisation \(m\), and the quadrupole moment is \(q=\frac{1}{\beta}\partial\widetilde{f}(x_{1})/\partial\Delta|_{x_{1}=m}\). These are given by
\[m=\left\langle\frac{2e^{-\beta\Delta}\sinh\beta(m+h_{i})}{c+2e^{-\beta\Delta }\cosh\beta(m+h_{i})}\right\rangle_{\{h_{i}\}}, \tag{4}\]
and the quadrupole moment
\[q=\left\langle\frac{2e^{-\beta\Delta}\cosh\beta(m+h_{i})}{c+2e^{-\beta\Delta }\cosh\beta(m+h_{i})}\right\rangle_{\{h_{i}\}}. \tag{5}\]
\(\langle\rangle_{\{h_{i}\}}\) represents the average over the random field distribution.
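For the bimodal distribution of eq. (2), the quenched average reduces to a two-term sum, so eqs. (3)-(5) can be evaluated directly. A minimal numerical sketch (parameter values illustrative) that locates the global minimiser of \(\widetilde{f}\) on a grid and evaluates \(m\) and \(q\) there:

```python
import numpy as np

def free_energy(x1, T, delta, h0, r, c=1):
    """Free energy functional of eq. (3), averaged over the bimodal field of eq. (2)."""
    beta = 1.0 / T
    avg = (r * np.log(c + 2 * np.exp(-beta * delta) * np.cosh(beta * (x1 + h0)))
           + (1 - r) * np.log(c + 2 * np.exp(-beta * delta) * np.cosh(beta * (x1 - h0))))
    return 0.5 * beta * x1**2 - avg

def order_parameters(T, delta, h0, r, c=1):
    """Global minimiser m of eq. (3) on a grid, and q from eq. (5) evaluated at m."""
    grid = np.linspace(-1, 1, 4001)
    m = grid[int(np.argmin(free_energy(grid, T, delta, h0, r, c)))]
    beta = 1.0 / T
    def q_term(h):
        num = 2 * np.exp(-beta * delta) * np.cosh(beta * (m + h))
        return num / (c + num)
    return m, r * q_term(h0) + (1 - r) * q_term(-h0)

# Illustrative point for the RFBCM (c = 1) with an asymmetric field
print(order_parameters(T=0.2, delta=0.23, h0=0.56, r=0.55))
```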
## III RFIM for \(s=1/2\) and \(s=1\)
The phase diagram of the RFIM for the symmetric BD has long been known [35; 41]. It has a line of continuous transitions between the ordered and disordered phases for weak disorder strength (low \(h_{0}\)) that ends at a TCP. On further increasing \(h_{0}\), there is a line of first order transitions that ends at \(h_{0}=1/2\) and \(T=0\). The qualitative phase behaviour remains unchanged for the spin-1 system in the absence of \(\Delta\) (see Fig. 1(a)).
Interestingly, we find that for an asymmetric BD (Eq. (2)), the asymmetry in the distribution wipes out the line of continuous transitions along with the TCP. The phase diagram only has a line of first order transitions that starts at \(h_{0}=r\) and \(T=0\) and ends at an OCP. As \(r\) deviates from \(1/2\) and approaches \(1\), the OCP occurs at a lower value of \(T\), approaching \(T=0\) as \(r\to 1\) (see Fig. 1(b)). Since \(m\) is finite at an OCP, to find the co-ordinates of the OCP we equate the first three derivatives of \(\widetilde{f}(x_{1})\) to \(0\). The common solution of the three equations, for a given \(r\), \(\Delta\) and \(h_{0}\), gives the co-ordinates of the OCP [42]. In Fig. 2 for \(s=1/2\) we have plotted the magnetisation and susceptibility at three different points in the phase diagram: at the OCP, at a point on the line of first order transitions between \(m\approx 1\) and \(m\approx 2r-1\), and for a point near the first order
Figure 1: Phase diagram in the \((T-h_{0})\) plane for the RFIM \((s=\pm 1)\) (blue) and its spin-1 variant \((s=0,\pm 1)\) (black) for (a) the symmetric BD (\(r=0.5\)) and (b) an asymmetric BD (\(r=0.55\)). Solid lines are the lines of continuous transitions and the dashed lines are the lines of first order transitions. The rhombus (purple) represents the TCP and the circle (red) represents an OCP. The inset of (b) plots the locus of the TCP (rhombus) and the OCP (circle) in the \((T-h_{0})\) plane for \(1/2\leq r\leq 1\). With increasing \(r\), the OCP occurs at a lower value of \(T\) and a higher value of \(h_{0}\).
line where there is no transition but magnetisation \(m\) is nonmonotonic. Similar behaviour occurs for \(s=1\) as well. We find that both for \(s=1/2\) and \(s=1\), \(m\) shows a non-monotonic dependence on \(T\) for any \(r>1/2\) and \(h_{0}>r\). The degree of non-monotonicity is maximum when \(r\) is close to \(1/2\) and \(h_{0}\) is just above \(r\).
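The OCP conditions \(\partial_{x_{1}}\widetilde{f}=\partial_{x_{1}}^{2}\widetilde{f}=\partial_{x_{1}}^{3}\widetilde{f}=0\) described above can be solved numerically for \((x_{1},T,h_{0})\) at fixed \(r\) and \(\Delta\). A sketch using finite differences; the initial guess is illustrative and, in practice, must be chosen close to the end of the first order line, and the step size may need tuning:

```python
import numpy as np
from scipy.optimize import fsolve

def f_tilde(x1, T, delta, h0, r, c=1):
    beta = 1.0 / T
    avg = (r * np.log(c + 2 * np.exp(-beta * delta) * np.cosh(beta * (x1 + h0)))
           + (1 - r) * np.log(c + 2 * np.exp(-beta * delta) * np.cosh(beta * (x1 - h0))))
    return 0.5 * beta * x1**2 - avg

def ocp_conditions(v, delta, r, eps=1e-3):
    """First three x1-derivatives of f_tilde via central finite differences."""
    x1, T, h0 = v
    f = lambda x: f_tilde(x, T, delta, h0, r)
    d1 = (f(x1 + eps) - f(x1 - eps)) / (2 * eps)
    d2 = (f(x1 + eps) - 2 * f(x1) + f(x1 - eps)) / eps**2
    d3 = (f(x1 + 2 * eps) - 2 * f(x1 + eps) + 2 * f(x1 - eps) - f(x1 - 2 * eps)) / (2 * eps**3)
    return [d1, d2, d3]

# Fixed Delta = 0 and r = 0.55; the initial guess (x1, T, h0) is illustrative only
sol = fsolve(ocp_conditions, x0=[0.3, 0.5, 0.6], args=(0.0, 0.55))
print("OCP estimate (x1, T, h0):", sol)
```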
We find that the OCP lies in the critical Ising universality class and \(m\) scales with exponent \(\beta=1/2\) near an OCP as \(T\) increases. On the other hand, \(\beta=1/4\) for a TCP. This is verified in Fig. 3, where we contrast the scaling of the magnetisation near a TCP and an OCP by taking the symmetric and asymmetric BD, respectively.
## IV RFBCM and the reentrance transition
For spin-1, on the introduction of the \(\Delta\), i.e. for RFBCM we find that there is a first order reentrance transition for the asymmetric BD for a range of \(\Delta\). The transition becomes a continuous reentrance transition at \((\Delta_{c},T_{c})\) that depends on the values of \(r\) and \(h_{0}\) (see Fig. 4(a)). We also find that depending on the value of \(r\), there is also a possibility of a second first order transition from a more ordered to a less ordered state in the model (see Fig. 4(b)).
To understand the phase behaviour at finite temperature, we first study the ground state \((T=0)\). In the ground state, the disorder averaged energy is given by \(\min_{m}\phi(m)\), where \(\phi(m)=\lim_{\beta\to\infty}\beta^{-1}\widetilde{f}(m)\). We find that the ground state (\(T=0\)) phase diagram of the RFBCM has four phases (three ferromagnetic phases \(F1\), \(F2\) and \(F3\), and one nonmagnetic phase \(NM\)). These phases are separated by lines of first order transitions (see Fig. 5). These transitions can be understood by looking at the configurational entropy of these states. For example, the phases \(F2\) and \(F3\) have the same configurational entropy, as in both phases the spins take only two values: \(\pm 1\) in \(F3\) and \(0,1\) in \(F2\). As \(\Delta\) increases, \(0\) spins become energetically more favourable, and there is first a transition from \(F3\) to \(F2\) and finally to \(NM\) (the phase with all spins \(0\)). As \(T\) increases, each point on these first order transition lines changes its position and ends at an OCP. The phase diagram of the model in the \((T-h_{0})\) plane, for different ranges of \(\Delta\) at \(r=0.55\), is shown in Fig. 6. We find that the finite \(T\) phase diagrams only have lines of first order transitions and OCPs. This is very different from the phase diagrams of the RFBCM with symmetric bimodal and trimodal distributions [28; 29; 30]. For symmetric distributions, the phase diagrams consist of lines of first and second order transitions and various multicritical points.
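In the \(\beta\to\infty\) limit, eq. (3) reduces to \(\phi(m)=m^{2}/2-\langle\max_{s}\,[s(m+h_{i})-\Delta s^{2}]\rangle_{\{h_{i}\}}\) with \(s\in\{0,\pm 1\}\) for the RFBCM. A minimal sketch of this ground state minimisation, which can be used to trace the phase boundaries of Fig. 5 (the scanned values are illustrative):

```python
import numpy as np

def ground_state_energy(m, delta, h0, r):
    """phi(m) = m^2/2 - <max_s [s(m+h) - delta*s^2]> for the bimodal field, s in {0, +1, -1}."""
    def best(h):
        return max(s * (m + h) - delta * s**2 for s in (-1, 0, 1))
    return 0.5 * m**2 - (r * best(h0) + (1 - r) * best(-h0))

def ground_state_m(delta, h0, r):
    grid = np.linspace(-1, 1, 4001)
    return grid[int(np.argmin([ground_state_energy(m, delta, h0, r) for m in grid]))]

# Vertical cut through Fig. 5 at h0 = 0.6 > r: expect m ~ 2r-1 (F3), m ~ r (F2), m ~ 0 (NM)
for delta in (0.1, 0.7, 1.0):
    print(delta, ground_state_m(delta, h0=0.6, r=0.55))
```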
Depending on the strength of the crystal field, there are six different finite temperature phase diagrams for the asymmetric BD. The phase diagram for the asymmetric BD for \(r=0.55\) in the \((T-h_{0})\) plane for \(\Delta<\Delta_{1}(=0.211)\) is similar to the \(\Delta=0\) case: a single first order line of transitions separates \(m\approx 1\) from \(m\approx 2r-1\) and ends at an OCP (see Fig. 6(a)). For \(\Delta>\Delta_{1}\), we interestingly find two lines of first order transitions, both ending at OCPs. For \(\Delta_{1}<\Delta<\Delta_{2}(=0.296)\), one of them corresponds to the usual first order transition from a more ordered to a less ordered state (shown in black) and the other is a line of first order ITs (shown in blue) between states with \(m\approx 2r-1\) and \(m\approx r\) (see Fig. 6(b) and (c)). On further increasing \(\Delta\), the reentrance transition in \(m\) changes to a reentrance transition in \(q\), as shown by the green lines in Fig. 6(d), (e) and (f). For \(0.525<\Delta<0.545\), near the second triple point in the ground state (Fig. 5(b)), an IT occurs for both \(m\) and \(q\), as shown in Fig. 6(e).
Figure 2: Magnetisation (\(m\)) and magnetic susceptibility (\(\chi_{m}\)) for the RFIM with \(s=1/2\) and \(r=0.55\), plotted at the OCP ((a) and (d)), at a point along the first order transition line ((b) and (e)), and for \(h_{0}\) near the first order transition line with reentrance in \(m\) ((c) and (f)).
Figure 3: Magnetization \((m\sim t^{\beta})\) versus the scaled temperature \(t=T_{c}-T\) for the RFIM, plotted in the vicinity of the TCP and the OCP in (a) and (b) for \(r=0.5\) and \(0.9\), respectively. The points are the numerical values of the magnetisation \(m\) and the red dashed line is the scaling fit in both cases.
We projected the OCPs onto the ground state phase diagram of the model and identified the region in the \((\Delta-h_{0})\) plane where the IT occurs. Corresponding to the first order line of transitions in the ground state phase diagram, we find a line of projections of OCPs in the \((\Delta-h_{0})\) plane (Fig. 5(a)). When this line of projections of OCPs enters either \(F3\) or the \(NM\) phase, there is a region in the \((\Delta-h_{0})\) plane where the reentrance transition takes place. For \(r=1/2\) this region shrinks to zero and there is no reentrance. In Fig. 5(b) and (c) the range of \((\Delta,h_{0})\) for which there is an IT in \(m\) is shown shaded for \(r=0.55\). The reentrance region at first increases with \(r\) and then shrinks as \(r\to 1\).
To find the region in the phase diagram where reentrance occurs, we fixed \(h_{0}\gtrsim r\) and gradually increased \(\Delta\). For example, for \(r=0.55\) and \(h_{0}=0.56\), we find a first order reentrance transition for \(0.228\leq\Delta\leq 0.235\) (Fig. 4(a)). As \(\Delta\to 0.228\) there is still a reentrance, but without a jump in \(m\). We find that this point is in fact an OCP. The inset of Fig. 4(a) shows the divergence of the magnetic susceptibility at the OCP.
We also find that if we take \(\Delta\) and \(h_{0}\) very close to the triple point of the \(T=0\) phase diagram for \(r\gtrsim 1/2\), then there are two first order transitions as \(T\) increases (see inset of Fig. 6(b)). For example, for \(r=0.51\), setting \(h_{0}=r+\epsilon\) and \(\Delta=h_{0}-(3r-1)/2-\delta\) (where \(\epsilon\) and \(\delta\) are small) in the vicinity of the triple point, we observe two first order transitions, as shown in Fig. 4(b). This double first order transition is similar to the one seen in experiments with solutions of cyclodextrin, water and 4-methylpyridine, which go from low-density-liquid to high-density-liquid to low-density-liquid on increasing \(T\) via two first order transitions [11].
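The double transition can be exhibited directly by sweeping \(T\) and minimising eq. (3) on a grid. A self-contained sketch; the values of \(\epsilon\) and \(\delta\) are illustrative, and the presence and location of jumps in the printed \(m(T)\) are sensitive to them:

```python
import numpy as np

def magnetisation(T, delta, h0, r, c=1):
    """Global minimiser of the free energy functional of eq. (3) on a grid."""
    beta = 1.0 / T
    x1 = np.linspace(-1, 1, 4001)
    avg = (r * np.log(c + 2 * np.exp(-beta * delta) * np.cosh(beta * (x1 + h0)))
           + (1 - r) * np.log(c + 2 * np.exp(-beta * delta) * np.cosh(beta * (x1 - h0))))
    return x1[int(np.argmin(0.5 * beta * x1**2 - avg))]

r, eps, dlt = 0.51, 0.01, 0.005            # illustrative epsilon and delta
h0 = r + eps
delta = h0 - (3 * r - 1) / 2 - dlt          # just below line IV of Fig. 5

for T in np.linspace(0.02, 0.4, 20):
    print(f"T = {T:.3f}   m = {magnetisation(T, delta, h0, r):+.3f}")
```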
If, instead of fixing \(h_{0}\), we fix \(\Delta\gtrsim(1+2r)/4\), then we also find a region in the phase diagram where the reentrance transition occurs. In fact, the phase diagrams in the \((T-\Delta)\) plane are similar to the phase diagrams in the \((T-h_{0})\) plane.
## V Concluding remarks
We showed that the asymmetry in the random field distribution results in a non-monotonic behaviour of the order parameter in ferromagnetic models with quenched random fields, which becomes an IT on the introduction of \(\Delta\). To understand this, let us look at the \(T=0\) phase diagram again. At
Figure 5: (a) The \(T=0\) phase diagram in \((\Delta-h_{0})\) plane for \(r=0.55\) with three ferromagnetic phases: \(F1\), \(F2\), \(F3\) and a non-magnetic phase \(NM\). Black dashed lines are the lines of first order transitions between the two neighboring phases at \(T=0\) given by I : \(\Delta=1/2-(1-2r)h_{0}\), II : \(\Delta=h_{0}+r/2\), III : \(\Delta=(1+r)/2-h_{0}\), IV : \(\Delta=h_{0}-(3r-1)/2\) and V : \(h_{0}=r\). Solid red lines are the projection of the OCPs in \((\Delta-h_{0})\) plane. In (b) and (c) we enlarge the vicinity of the two triple points. The shaded part shows the range of parameters for which the IT in \(m\) occurs.
Figure 7: Plots of entropy (\(s\)), \(m\) and \(q\) as a function of \(T\) for OCP (solid green line), near IT when there is only one first order transition (dotted blue) and for the case where there are two first order transitions (dotted purple).
\(T=0\), there is a residual \(m\) of order \(2r-1\) at low \(\Delta\) and high \(h_{0}\) for the asymmetric BD (phase \(F3\) in Fig. 5). For the symmetric BD, \(F3\) becomes a paramagnetic phase, and if \(\Delta\) and \(h_{0}\) are chosen such that the system is in this state at \(T=0\), then the system continues to stay in that state with \(m=0\) on increasing \(T\), as \(m=0\) maximises the entropy. On the other hand, for \(r\gtrsim 1/2\) and \((\Delta,h_{0})\) very close to the triple point in Fig. 5(c), \(m\) increases as \(T\) increases and then jumps to \(r\) at the IT point. The entropy (\(s\)) also jumps at that point (Fig. 7). Since an infinitesimal amount of asymmetry can give rise to an IT, it is possible that the topology of finitely connected graphs with a heterogeneous degree distribution can induce that asymmetry and give rise to topology-induced ITs, as seen in some studies [43; 44].
The values of \(\Delta\) and \(h_{0}\) at which the IT occurs are close to the triple points in the ground state phase diagram. The infinite range pure Blume-Capel model (\(h_{0}=0\)) gives the true behaviour of the model in finite dimensions. Also, numerical studies of the Ghatak-Sherrington model in three dimensions have reported a first order inverse freezing transition [15; 16]. We expect our result on the appearance of the IT near the triple point of the ground state to hold in finite dimensions for the RFBCM as well.
The absence of an IT for symmetric distributions has also been reported for continuous spin models with random fields, like the random field \(XY\) model [45; 46]. We expect that the asymmetry in the distribution should induce reentrance in the case of random field models with continuous spins as well.
For the RFIM it was conjectured that if the phase diagram has a TCP for the symmetric distribution, it will change to a critical end point for any infinitesimal asymmetry [32; 34]. The presence of the critical end point implies that the \(\lambda\)-line is still present in the phase diagram. In contrast, for the asymmetric BD defined via Eq. (2), we find that the \(\lambda\)-line and the TCP both disappear completely, and there is an OCP instead of a TCP in the phase diagrams.
We also studied the asymmetric Gaussian random field distribution. The \(\widetilde{f}\) of the asymmetric Gaussian RFIM is the same as that of the symmetric Gaussian RFIM in an external field of strength equal to the bias in the distribution. Since the symmetric Gaussian RFIM in an external field has a finite \(m\) at all \(T\) that gradually goes to \(0\) without a phase transition, the asymmetric Gaussian RFIM also has no phase transition. Another interesting distribution is the double peaked asymmetric Gaussian distribution. We expect this to have phase diagrams similar to those of the asymmetric BD as long as the variance of the distribution is not large.
|
2303.04660 | Neural Probabilistic Logic Programming in Discrete-Continuous Domains | Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic
background knowledge in the form of logic. It has been shown to aid learning in
the limited data regime and to facilitate inference on out-of-distribution
data. Probabilistic NeSy focuses on integrating neural networks with both logic
and probability theory, which additionally allows learning under uncertainty. A
major limitation of current probabilistic NeSy systems, such as DeepProbLog, is
their restriction to finite probability distributions, i.e., discrete random
variables. In contrast, deep probabilistic programming (DPP) excels in
modelling and optimising continuous probability distributions. Hence, we
introduce DeepSeaProbLog, a neural probabilistic logic programming language
that incorporates DPP techniques into NeSy. Doing so results in the support of
inference and learning of both discrete and continuous probability
distributions under logical constraints. Our main contributions are 1) the
semantics of DeepSeaProbLog and its corresponding inference algorithm, 2) a
proven asymptotically unbiased learning algorithm, and 3) a series of
experiments that illustrate the versatility of our approach. | Lennert De Smet, Pedro Zuidberg Dos Martires, Robin Manhaeve, Giuseppe Marra, Angelika Kimmig, Luc De Raedt | 2023-03-08T15:27:29Z | http://arxiv.org/abs/2303.04660v2 | # Neural Probabilistic Logic Programming in Discrete-Continuous Domains
###### Abstract
Neural-symbolic AI (NeSy) allows neural networks to exploit symbolic background knowledge in the form of logic. It has been shown to aid learning in the limited data regime and to facilitate inference on out-of-distribution data. Probabilistic NeSy focuses on integrating neural networks with both logic and probability theory, which additionally allows learning under uncertainty. A major limitation of current probabilistic NeSy systems, such as DeepProbLog, is their restriction to finite probability distributions, i.e., discrete random variables. In contrast, deep probabilistic programming (DPP) excels in modelling and optimising continuous probability distributions. Hence, we introduce DeepSeaProbLog, a neural probabilistic logic programming language that incorporates DPP techniques into NeSy. Doing so results in the support of inference and learning of both discrete and continuous probability distributions under logical constraints. Our main contributions are 1) the semantics of DeepSeaProbLog and its corresponding inference algorithm, 2) a proven asymptotically unbiased learning algorithm, and 3) a series of experiments that illustrate the versatility of our approach.
## 1 Introduction
Neural-symbolic AI (NeSy) (Garcez et al., 2002; De Raedt et al., 2021) focuses on the integration of symbolic and neural methods. The advantage of NeSy is that it combines the reasoning power of logical representations with the learning capabilities of neural networks. Additionally, it has been shown to converge faster during learning and to be more robust (Rocktaschel and Riedel, 2017; Xu et al., 2018; Evans and Grefenstette, 2018).
The challenge of NeSy lies in combining discrete symbols with continuous and differentiable neural representations. So far, such a combination has been realised for Boolean variables by interpreting the outputs of neural networks as the weights of these variables. These weights can then be given either a fuzzy semantics (Badreddine et al., 2022; Diligenti et al., 2017) or a probabilistic semantics (Manhaeve et al., 2021; Yang et al., 2020). The latter is also used in neural probabilistic logic programming (NPLP), where neural networks parametrise probabilistic logic programs.
A shortcoming of traditional probabilistic NeSy approaches is that they fail to capture models that integrate continuous random variables and neural networks - a feature already achieved with mixture density networks (Bishop, 1994) and more generally deep probabilistic programming (DPP) (Tran et al., 2017; Bingham et al., 2019). However, it is unclear whether DPP can be generalised to enable logical and relational reasoning. Hence, a gap exists between DPP and NeSy as reasoning is, after all, a fundamental component of the latter. We contribute towards closing this DPP-NeSy gap by introducing DeepSeaProbLog1, an NPLP language with support for discrete-continuous random variables that retains logical and relational reasoning capabilities. We achieve this integration by allowing arbitrary and differentiable
probability distributions expressed in a modern DPP language while combining knowledge compilation (Darwiche and Marquis, 2002) with the reparametrisation trick (Ruiz et al., 2016) and continuous relaxations (Petersen et al., 2021).
Our main contributions are (1) the well-defined probabilistic semantics of DeepSeaProbLog (Section 3) with an inference algorithm based on weighted model integration (WMI) (Belle et al., 2015) (Section 4.1), (2) a proven asymptotically unbiased gradient estimate for WMI that turns DeepSeaProbLog into a differentiable, discrete-continuous NPLP language (Section 4.2), and (3) an experimental evaluation showing the versatility of discrete-continuous reasoning and the efficacy of our approach (Section 6).
## 2 Logic programming concepts
A term t is either a constant c, a variable V or a structured term of the form f(t1,...,tK), where f is a functor and each t\({}_{i}\) is a term. Atoms are expressions of the form q(t1,...,tK). Here, q/\(K\) is a predicate of arity \(K\) and each t\({}_{i}\) is a term. A literal is an atom or the negation of an atom \(\neg\)q(t1,...,tK). A definite clause (also called a rule) is an expression of the form h:- b1,...,bK where h is an atom and each b\({}_{i}\) is a literal. Within the context of a rule, h is called the head and the conjunction of b\({}_{i}\)'s is referred to as the body of the rule. Rules with an empty body are called facts. A logic program is a finite set of definite clauses. If an expression does not contain any variables, it is called ground. Ground expressions are obtained from non-ground ones by means of substitution. A substitution \(\theta=\{\mathtt{V}_{1}=\texttt{t}_{1},\ldots,\mathtt{V}_{K}=\texttt{t}_{K}\}\) is a mapping from variables V\({}_{i}\) to terms t\({}_{i}\). Applying a substitution \(\theta\) to an expression e (denoted e\(\theta\)) replaces each occurrence of V\({}_{i}\) in e with the corresponding t\({}_{i}\).
While _pure_ Prolog (or definite clause logic) is defined using the concepts above, practical implementations of Prolog extend definite clause logic with an external arithmetic engine (Sterling and Shapiro, 1994, Section 8). Such engines enable the use of system specific routines in order to handle numeric data efficiently. Analogous to standard terms in definite clause logic, as defined above, we introduce numeric terms. A numeric term n\({}_{i}\) is either a numeric constant (a real, an integer, a float, etc.), a numeric variable N\({}_{i}\), or a numerical functional term, which is an expression of the form \(\varphi\)(n1,...,nK) where \(\varphi\) is an externally defined numerical function. The difference between a standard logical term and a numerical term is that _ground_ numerical terms are evaluated and yield a numeric constant. For instance, if add is a function, then add(3, add(5,0)) evaluates to the numerical constant 8.
Lastly, numeric constants can be compared to each other using a built-in binary comparison operator \(\bowtie\in\{<,=<,>,>=,=:=,=\backslash=\}\). Here we use Prolog syntax to write comparison operators, which correspond to \(\{<,\leq,>,\geq,=,\neq\}\) in standard mathematical notation. Comparison operators appear in the body of a rule, have two arguments, and are generally written as \(\varphi_{l}\)(n\({}_{l,1}\),...,n\({}_{l,K}\)) \(\bowtie\varphi_{r}\)(n\({}_{r,1}\),...,n\({}_{r,K}\)). They evaluate their left and right sides and subsequently compare the results, assuming everything is ground. If the stated comparison holds, the comparison is interpreted by the logic program as true, else as false.
## 3 DeepSeaProbLog
### Syntax
While facts in pure Prolog are deterministically true, in probabilistic logic programs they are annotated with the probability with which they are true. These are the so-called probabilistic facts (De Raedt et al., 2007). When working in discrete-continuous domains, we need to use the more general concept of distributional facts (Zuidberg Dos Martires, 2020), inspired by the distributional clauses of Gutmann et al. (2011).
**Definition 3.1** (Distributional fact).: _Distributional facts_ are expressions of the form x ~ distribution(n\({}_{1}\),...,n\({}_{K}\)), where x denotes a term, the n\({}_{i}\)'s are numerical terms and distribution expresses the probability distribution according to which x is distributed.
**Example 3.1** (Distributional fact).: To declare a Poisson distributed variable x with rate parameter \(\lambda\), one would write x ~ poisson(\(\lambda\)).
The meaning of a distributional fact is that all ground instances x\(\theta\) serve as random variables that are distributed according to distribution(n\({}_{1}\theta\),...,n\({}_{K}\theta\)). To obtain a neural-symbolic interface, we will allow neural networks to parametrise these distributions.
**Definition 3.2** (Neural distributional fact).: A _neural distributional fact_ (NDF) is a distributional fact x ~ distribution(n\({}_{1}\),...,n\({}_{K}\)) in which a subset of the set of numerical terms \(\{n_{i}\}_{i=1}^{K}\) is implemented by neural networks that depend on a set of parameters \(\boldsymbol{\Lambda}\).
Random variables defined by NDFs can then be used in the logic in the form of comparisons, e.g., \(x>y\), to reason about desired ranges of the variables.
**Definition 3.3** (Probabilistic comparison formula).: A _probabilistic comparison formula_ (PCF) is an expression of the form \((g(\mathbf{x})\bowtie 0)\), where \(g\) is a function applied to the set of random variables \(\mathbf{x}\) and \(\bowtie\in\{<,=<,>,>=,=:=,=\setminus=\}\) is a binary comparison operator. A PCF is called _valid_ if \(\{\mathbf{x}\mid g(\mathbf{x})\bowtie 0\}\) is a _measurable_ set.
**Example 3.2** (Probabilistic comparison formula).: If \(\mathrm{x}\) is Poisson distributed and represents the number of chocolate pieces put in a chocolate biscuit, then we can use a simple PCF to define when such a biscuit passes a quality test through the rule passes_test :- (x > 11).
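The probability that this PCF holds is a Poisson tail probability and is directly computable; a minimal sketch, assuming an illustrative rate \(\lambda=8\):

```python
from scipy.stats import poisson

lam = 8                       # illustrative rate parameter
# P(passes_test) = P(x > 11) for x ~ Poisson(lam)
print(poisson.sf(11, lam))    # sf(k) = P(X > k) for discrete distributions
```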
Note that the general form of a PCF in Definition 3.3 has a 0 on the right-hand side, which can always be obtained by subtracting the right-hand side from both sides of the relation. With the definitions of NDFs and PCFs, a DeepSeaProbLog program can now be formally defined.
**Definition 3.4** (DeepSeaProbLog program).: A _DeepSeaProbLog program_ consists of a finite set of NDFs \(\mathcal{F}_{D}\) (defining random variables), a finite set \(\mathcal{C}_{M}\) of valid PCFs and a set of logical rules \(\mathcal{R}_{L}\) that can use any of those valid PCFs in their bodies.
**Example 3.3** (DeepSeaProbLog program).: humid denotes a Bernoulli random variable that takes the value \(1\) with a probability \(p\) given by the output of a neural network humid_detector. temp denotes a normally distributed variable whose parameters are predicted by a network temperature_predictor. The program further contains two rules that deduce whether we have good weather or not. The first one expresses the case of snowy weather, while the second holds for a rather temperate and dry situation. The atom query(good_weather()) declares that we want to compute the probability of good_weather when evaluated on the data. It illustrates the neural-symbolic nature of DeepSeaProbLog, as its ground argument is a sub-symbolic representation of the world.
humid(Data) ~ bernoulli(humid_detector(Data)). temp(Data) ~ normal(temperature_predictor(Data)).
good_weather(Data):- humid(Data) =:= 1, temp(Data) < 0.
good_weather(Data):- humid(Data) =:= 0, temp(Data) > 15.
query(good_weather()). Notice how the random variables humid and temp appear in the body of a logical rule with comparison operators. In our probabilistic setting, the truth value of a comparison depends on the value of its random variables and is thus random itself.
DeepSeaProbLog generalises a range of existing PLP languages. For instance, if we were to remove the distributional fact on temp and all the PCFs using them, we would obtain a DeepProbLog program [11]. If we additionally replace the neural network in humid with a fixed probability \(\mathtt{p}\), we end up with a probabilistic logic program [10]. Replacing that constant probability \(\mathtt{p}\) by a constant \(\mathtt{1}\) yields a non-probabilistic Prolog program. Alternatively, considering all rules and facts in Example 3.3 but replacing the neural parameters of the normal distribution with numeric constants results in a Distributional Clause program [10]. We further discuss these connections in Appendix A, where we also formally prove that DeepSeaProbLog strictly generalises DeepProbLog.
### Semantics
DeepSeaProbLog programs are used to compute the probability that a ground atom \(\mathtt{q}\) is entailed. That probability follows from the semantics of a DeepSeaProbLog program. As is custom in (probabilistic) logic programming, we will define the semantics of DeepSeaProbLog with respect to ground programs. We will assume that each ground distributional fact \(f\in\mathcal{F}_{D}\) defines a different random variable, as each random variable can only have one unique distribution. Also notice that any ground neural distributional facts will contain the inputs to their neural functions. In a sense, a DeepSeaProbLog program is conditioned on these neural network inputs.
To define the semantics of ground DeepSeaProbLog programs, we first introduce the possible worlds over the PCFs. Every subset \(C_{M}\) of a set of PCFs \(\mathcal{C}_{M}\) defines a possible world \(\omega_{C_{M}}=C_{M}\cup\{h\theta\mid\mathcal{R}_{L}\cup C_{M}\models h\theta\text{ and }h\theta\text{ is ground}\}\). Intuitively speaking, the comparisons in such a subset are considered to be true and all others false. A rule with a comparison in its body that is not in this subset can hence not be used to determine the truth value of atoms. The
deterministic rules \(\mathcal{R}_{L}\) and the subset \(C_{M}\) together define a set of all ground atoms \(h\theta\) that are derivable, i.e., entailed by the program, and thus considered true. Such a set is called a _possible world_. We refer the reader to the paper of De Raedt and Kimmig (2015) for a detailed account of possible worlds in a PLP context. Following the distribution semantics of Sato (1995) and by taking inspiration from Gutmann et al. (2011), we define the probability of a possible world.
**Definition 3.5** (Probability of a possible world).: Let \(\mathbb{P}\) be a ground DeepSeaProbLog program and \(C_{M}=\{c_{1},\ldots,c_{H}\}\subseteq\mathcal{C}_{M}\) a set of PCFs that depend on the random variables declared in the set of distributional facts \(\mathcal{F}_{D}\). The probability \(P(\omega_{C_{M}})\) of a world \(\omega_{C_{M}}\) is then defined as
\[\int\left[\Big{(}\prod_{c_{i}\in C_{M}}\mathbb{1}(c_{i})\Big{)}\Big{(}\prod_{c _{i}\in\mathcal{C}_{M}\backslash C_{M}}\mathbb{1}(\bar{c}_{i})\Big{)}\right] \;\mathrm{d}P_{\mathcal{F}_{D}}. \tag{1}\]
Here the symbol \(\mathbb{1}\) denotes the indicator function, \(\bar{c}_{i}\) is the complement of the comparison \(c_{i}\) and \(\mathrm{d}P_{\mathcal{F}_{D}}\) represents the joint probability measure of the random variables defined in the set of distributional facts \(\mathcal{F}_{D}\).
**Example 3.4** (Probability of a possible world).: Given \(\mathbb{P}\) as in Example 3.3, where humid_detector(data1) predicts \(p\)(data1) and temperature_predictor(data1) predicts the tuple \((\mu(\texttt{data1}),\sigma(\texttt{data1}))\), the probability of the possible world \(\omega_{\{\texttt{temp}(\texttt{data1})>15,\texttt{humid}(\texttt{data1})=: =1\}}\) is given by
\[p(\texttt{data1})\cdot\int\mathbb{1}(x{>}15)\frac{\exp\left(-\frac{(x-\mu( \texttt{data1}))^{2}}{2\sigma^{2}(\texttt{data1})}\right)}{\sqrt{2\pi}\sigma (\texttt{data1})}\;\mathrm{d}x. \tag{2}\]
Indeed, the measure \(\mathrm{d}P_{\mathcal{F}_{D}}\) decomposes into a counting measure and the product of a Gaussian density function with a differential. The counting measure leads to the factor \(p\)(data1), since that is the probability that humid(data1)=:=1. Hence, the products in Equation 1 reduce to a single indicator of the PCF \((x>15)\).
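Such possible-world probabilities are directly computable. A sketch of eq. (2), assuming illustrative network outputs \(p=0.3\), \(\mu=12\) and \(\sigma=4\):

```python
from scipy.stats import norm

p, mu, sigma = 0.3, 12.0, 4.0                        # illustrative network outputs
prob_world = p * norm.sf(15, loc=mu, scale=sigma)    # p * P(temp > 15)
print(prob_world)                                    # ~0.068 for these values
```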
**Definition 3.6** (Probability of query atom).: The probability of a ground atom \(q\) is given by
\[P(q)=\sum_{C_{M}\subseteq\mathcal{C}_{M}:q\in\omega_{C_{M}}}P(\omega_{C_{M}}). \tag{3}\]
**Proposition 3.1** (Measurability of query atom).: Let \(\mathbb{P}\) be a DeepSeaProbLog program, then \(\mathbb{P}\) defines, for an arbitrary query atom \(q\), the probability that \(q\) is true.
Proof.: See Appendix B.
## 4 Inference and learning
### Inference via weighted logic
A popular technique to perform inference in probabilistic logic programming uses a reduction to so-called _weighted model counting_ (WMC); instead of computing the probability of a query, one computes the weight of a propositional logical formula (Chavira and Darwiche, 2008; Fierens et al., 2015). For DeepSeaProbLog, the equivalent approach is to map a ground program onto a _satisfiability modulo theory_ (SMT) formula (Barrett and Tinelli, 2018). The analogous concept to WMC for these formulas is _weighted model integration_ (WMI) (Belle et al., 2015), which can handle infinite sample spaces. In all that follows, for ease of exposition, we assume that all joint probability distributions are continuous.
**Proposition 4.1** (Inference as WMI).: Assume that the measure \(\mathrm{d}P_{\mathcal{F}_{D}}\) decomposes into a joint probability density function \(w(\mathbf{x})\) and a differential \(\mathrm{d}\mathbf{x}\), then the probability \(P(q)\) of a query atom \(q\) can be expressed as the weighted model integration problem
\[\int\left[\sum_{C_{M}\subseteq\mathcal{C}_{M}:q\in\omega_{C_{M}}}\prod_{c_{i} \in C_{M}\cup\overline{C}_{M}}\mathbb{1}\left(c_{i}(\mathbf{x})\right)\right]w(\bm {x})\;\mathrm{d}\mathbf{x}, \tag{4}\]
where \(\overline{C}_{M}\coloneqq\{\bar{c}_{i}\mid c_{i}\in\mathcal{C}_{M}\backslash C _{M}\}\).
Proof.: See Appendix C.
Being able to express the probability of a queried atom in DeepSeaProbLog as a weighted model integral allows us to adapt and deploy inference techniques developed in the weighted model integration literature for DeepSeaProbLog. We opt for the approximate inference algorithm 'Sampo' presented in Zuidberg Dos Martires et al. (2019) because of its more scalable nature. Sampo uses knowledge compilation (Darwiche and Marquis, 2002), a state-of-the-art technique for probabilistic logic inference (Chavira and Darwiche, 2008; Fierens et al., 2015). Intuitively, knowledge compilation is a two-step procedure applied to a logical formula with PCFs, i.e., an SMT formula. First, it infers the exact probability of all PCFs containing discrete variables through symbolic inference. Then, it converts the remainder of the SMT formula into a polynomial in terms of those exact probabilities and the PCFs containing continuous random variables (Figure 1). This polynomial is the integrand of Equation 4. All that remains is to approximate the integration of this polynomial by sampling from the joint probability distribution \(w(\mathbf{x})\) of the continuous random variables. In other words, Sampo computes the expression
\[P(q)=\int\text{SP}(\mathbf{x})\cdot w(\mathbf{x})\,\mathrm{d}\mathbf{x}\approx\frac{1}{| \mathcal{X}|}\sum_{\mathbf{x}\in\mathcal{X}}\text{SP}(\mathbf{x}), \tag{5}\]
where \(\mathcal{X}\) denotes a set of samples drawn from \(w(\mathbf{x})\) and \(\text{SP}(\mathbf{x})\) is the result of knowledge compilation, i.e., the sum of products of indicator functions in Equation 4.
We stress that the Sampo algorithm only samples random variables whose expected value with respect to the function \(\text{SP}(\mathbf{x})\) can not be computed exactly. Hence, in the absence of continuous random variables, our implementation of DeepSeaProbLog using Sampo coincides with DeepProbLog on both a semantics level (Proposition A.1) and inference level.
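To make eq. (5) concrete for Example 3.3: knowledge compilation yields \(\text{SP}(x)=p\cdot\mathbb{1}(x<0)+(1-p)\cdot\mathbb{1}(x>15)\) (cf. Figure 1), after which the query probability is a plain Monte Carlo average over samples of temp. A minimal sketch, with illustrative values standing in for the network outputs:

```python
import numpy as np
from scipy.stats import norm

p, mu, sigma = 0.3, 12.0, 4.0              # illustrative network outputs
rng = np.random.default_rng(0)

# SP(x) from knowledge compilation of the good_weather query (cf. Figure 1)
x = rng.normal(mu, sigma, size=100_000)    # samples from w(x)
sp = p * (x < 0) + (1 - p) * (x > 15)
print("Sampo estimate:", sp.mean())

# Exact value for comparison
print("exact:", p * norm.cdf(0, mu, sigma) + (1 - p) * norm.sf(15, mu, sigma))
```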
### Learning via differentiation
A DeepSeaProbLog program depends on a set of (neural) parameters \(\mathbf{\Lambda}\) (Definition 3.2). In order to optimise these parameters, we need the gradients, with respect to these parameters, of a loss function that compares the probability \(P(q)\) to a training signal. More precisely, we need to compute the derivative
\[\partial_{\lambda}\mathcal{L}(P_{\mathbf{\Lambda}}(q))=\partial_{P_{\mathbf{\Lambda}}( q)}\mathcal{L}(P_{\mathbf{\Lambda}}(q))\cdot\partial_{\lambda}P_{\mathbf{\Lambda}}(q), \tag{6}\]
where we explicitly indicate the dependency of the probability on \(\mathbf{\Lambda}\) and \(\lambda\in\mathbf{\Lambda}\). Differentiating \(P_{\mathbf{\Lambda}}(q)\) with respect to \(\lambda\) presents two obstacles. First, the question of differentiating through the sampling process of Equation 5 and second, the non-differentiability of the indicator functions in \(\text{SP}(\mathbf{x})\).
The non-differentiability of sampling is tackled using the reparametrisation trick (Ruiz et al., 2016). Reparametrisation offers better estimates than other approaches, such as REINFORCE (Williams, 1992) and is readily utilised in modern probabilistic programming languages such as Tensorflow Probability (Tran et al., 2017) and Pyro (Bingham et al., 2019). Conversely, the non-differentiability of the indicator functions prevents swapping the order of differentiation and integration (Flanders, 1973), which we resolve by applying continuous relaxations following the work of Petersen et al.
Figure 1: Diagrammatic representation of the result of knowledge compilation for the query in Example 3.3. The blue boxes originate from PCFs over discrete variables, while the orange ones are PCFs over continuous variables. Note how the discrete variable PCFs are reduced to their exact probabilities while the continuous PCFs still need to be inferred.
[2021]. Together, we obtain the gradient estimate
\[\partial_{\lambda}P_{\mathbf{\Lambda}}(q) =\partial_{\lambda}\int\text{SP}(\mathbf{x})\cdot w_{\mathbf{\Lambda}}(\bm {x})\,\mathrm{d}\mathbf{x} \tag{7}\] \[\approx\,\int\left[\partial_{\lambda}\text{SP}_{s}(r(\mathbf{u},\mathbf{ \Lambda}))\right]\cdot p(\mathbf{u})\,\mathrm{d}\mathbf{u}, \tag{8}\]
where the subscript \(s\) in \(\text{SP}_{s}(\mathbf{x})\) denotes the continuously relaxed or 'softened' version of \(\text{SP}(\mathbf{x})\) and \(r(\mathbf{u},\mathbf{\Lambda})\) is the reparametrisation function.
Our gradient estimate using relaxations is asymptotically unbiased. As an example of these relaxations, consider the indicator of a PCF \((g(\mathbf{x})>0)\), which is relaxed into the sigmoid \(\sigma(\beta\cdot g(\mathbf{x}))\). Appendix D provides more details on relaxations of general PCFs. The _coolness_ parameter \(\beta\in\mathbb{R}^{0}_{+}\) determines the strictness of the relaxation. Hence, we recover the hard indicator function when \(\beta{\rightarrow}+\infty\). Note that relaxing indicator functions introduces bias. Petersen et al. [2021] already stated in their work that, in the infinite coolness limit, a relaxed function coincides with the non-relaxed one. Proposition 4.2 extends this result to the derivatives of relaxed and non-relaxed functions, proving that our gradient estimate is asymptotically unbiased.
**Proposition 4.2** (Unbiased in the infinite coolness limit).: Let \(\mathbb{P}\) be a DeepSeaProbLog program with PCFs \((g_{i}(\mathbf{x})\bowtie 0)\) and corresponding coolness parameters \(\beta_{i}\).
If all \(\partial_{\lambda}(g_{i}\circ r)\) are locally integrable over \(\mathbb{R}^{k}\) and every \(\beta_{i}\rightarrow+\infty\), then we have, for any query atom \(q\), that
\[\partial_{\lambda}P(q)=\int\partial_{\lambda}\text{SP}_{s}(r(\mathbf{u},\mathbf{ \Lambda}))\cdot p(\mathbf{u})\,\mathrm{d}\mathbf{u}. \tag{9}\]
Proof.: The proof makes use of the mathematical theory of distributions [Schwartz, 1957], which generalise the concept of functions, and is given in Appendix E.
Finally, we obtain a practical and unbiased estimate of \(\partial_{\lambda}P_{\mathbf{\Lambda}}(q)\) using a set of samples \(\mathcal{U}\) drawn from \(p(\mathbf{u})\).
\[\partial_{\lambda}P(q) \approx\,\int\left[\partial_{\lambda}\text{SP}_{s}(r(\mathbf{u},\mathbf{ \Lambda}))\right]\cdot p(\mathbf{u})\,\mathrm{d}\mathbf{u} \tag{10}\] \[\approx\,\frac{1}{|\mathcal{U}|}\sum_{\mathbf{u}\in\mathcal{U}} \partial_{\lambda}\text{SP}_{s}(r(\mathbf{u},\mathbf{\Lambda})). \tag{11}\]
Computing this gradient estimate does not require drawing new samples. Implementing the relaxations of PCFs in a 'straight-through' manner allows us to directly apply automatic differentiation on the inferred probability.
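To make Equations 7-11 concrete, the following is a minimal sketch of the estimator for a toy program with a single normal NDF with learnable parameters \(\mathbf{\Lambda}=(\mu,\sigma)\) and a single PCF \((x>0)\); the variable names and the use of PyTorch are our own illustration, not the paper's implementation:

```python
import torch

# Learnable parameters Lambda = (mu, log_sigma) of a single normal NDF
mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)

beta = 10.0        # coolness: sigmoid(beta * g(x)) -> indicator(g(x) > 0) as beta grows
n_samples = 10_000

# Reparametrisation: x = r(u, Lambda) = mu + sigma * u, with u ~ N(0, 1)
u = torch.randn(n_samples)
x = mu + torch.exp(log_sigma) * u

# Relaxed success probability SP_s for the single PCF (x > 0)
sp_soft = torch.sigmoid(beta * x)

# Monte Carlo estimate of P(q) and its gradient (Eqs. 10-11) via autodiff
p_q = sp_soft.mean()
p_q.backward()

print(p_q.item(), mu.grad.item(), log_sigma.grad.item())
```

As \(\beta\) grows, the estimate approaches the exact probability \(P(x>0)=\Phi(\mu/\sigma)\), and, by Proposition 4.2, so does its gradient.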
### Probabilistic programming connections
Since knowledge compilation symbolically infers discrete random variables, we only have to sample from a continuous joint probability distribution. To sample such distributions, we can fully exploit the advanced inference and learning techniques [Hoffman et al., 2014] of modern probabilistic programming languages [Tran et al., 2017, Bingham et al., 2019]. Our implementation of DeepSeaProbLog utilises Tensorflow Probability for this task, effectively using knowledge compilation as a differentiable bridge between logical and probabilistic reasoning. While this bridge is limited to sampling techniques for now, it presents an interesting direction for future work to completely unify NeSy with DPP.
### Limitations
While the use of relaxations is well-known and used in recent gradient estimators [Tucker et al., 2017, Grathwohl et al., 2018], the bias they introduce is often hard to deal with in practice. In our case, this bias only reduces to zero in the infinite coolness limit (Proposition 4.2), meaning that coolness annealing can be necessary. Finding a good annealing scheme for any problem is non-trivial and effectively introduces another component in need of optimisation. However, as relaxations allow the use of the reparametrisation trick, the resulting lower-variance estimates together with our theoretical guarantees support our choice. A more detailed discussion of the current limitations of DeepSeaProbLog can be found in Appendix H.
## 5 Related work
From a NeSy perspective, the formalism most closely related to DeepSeaProbLog is that of _Logic Tensor Networks_ (LTNs) (Badreddine et al., 2022). The main difference between LTNs and DeepSeaProbLog is the fuzzy logic semantics of the former and the probabilistic semantics of the latter. Interestingly, LTNs and other NeSy approaches based on fuzzy logic also require relaxations to incorporate continuous values. However, fuzzy-based approaches require these relaxations at the semantics level, in contrast to DeepSeaProbLog. Moreover, they can only compare continuous point values instead of more general continuous random variables. LTNs' fuzzy semantics also exhibit drawbacks on a more practical level. Unlike DeepSeaProbLog with its probabilistic semantics, LTNs are not inherently capable of neural-symbolic generative modelling (Section 6.3). For a broader overview of the field of neural-symbolic AI, we refer the reader to a series of survey papers that have been published in recent years (Garcez et al., 2019; Marra et al., 2021; Garcez et al., 2022).
From a probabilistic programming perspective, DeepSeaProbLog is related to languages that handle discrete and continuous random variables such as _BLOG_ (Milch, 2006), _Distributional Clauses_ (Gutmann et al., 2011) and _Anglican_ (Tolpin et al., 2016), which have all been given declarative semantics, i.e., the meaning of the program does not depend on the underlying inference algorithm. However, these languages have the drawback of non-differentiability. This drawback stands in stark contrast to end-to-end (deep) probabilistic programming languages such as Pyro (Bingham et al., 2019) or Tensorflow Probability (Dillon et al., 2017), but these have only been equipped with operational semantics and do not support logical constraints. DeepSeaProbLog not only introduces the ability to express such logical constraints in the form of PCFs to construct challenging posterior distributions, but does so in an end-to-end differentiable fashion.
Finally, our gradient estimate can be related to relaxation-based methods like REBAR (Tucker et al., 2017) or RELAX (Grathwohl et al., 2018), but without the REINFORCE-inspired (Williams, 1992) component. Instead, we utilise the differentiability of knowledge compilation to obtain exact gradients of discrete variables. Since our inference scheme innately requires knowledge compilation, the use of other discrete gradient estimators like that of Niepert et al. (2021) does not directly apply to DeepSeaProbLog. Moreover, we exploit the structure of our problem by directly relaxing comparison formulae in a sound manner (Petersen et al., 2021), in contrast to introducing an artificial relaxation of the whole problem (Grathwohl et al., 2018).
## 6 Experimental Evaluation
We illustrate the versatility of DeepSeaProbLog by tackling three different problems. Section 6.1 discusses the detection of handwritten dates without location supervision. Section 6.2 optimises a hybrid Bayesian network whose conditional probabilities depend on the satisfaction of certain logical constraints. Finally, Section 6.3 introduces neural-symbolic variational auto-encoders, inspired by Misino et al. (2022).
The details of our experimental setup, including the precise DeepSeaProbLog programs, coolness annealing schemes and hyperparameters used for the neural networks are given in Appendix F.
### Neural-symbolic attention
A problem that cannot yet be solved to a satisfactory degree by purely neural or other neural-symbolic systems is detecting handwritten years. Given a single image with a handwritten year, the task is to predict the correct year as a sequence of 4 digits together with the location of these digits (Figure 2, left). This year can be anywhere in the image and the only supervision is in the digits of the year, _not_ where these digits are in the image. In other words, the problem is equivalent to object detection _without_ bounding box supervision.
Solving such a problem seems to be out of scope for current methods. On the one hand, existing neural approaches are often complex pipelines of neural components that break end-to-end differentiability (Seker and Ahn, 2022). On the other hand, current neural-symbolic methods lack sufficient spatial reasoning capabilities in order to perform the necessary image segmentation.
We exploit probabilistic programming by modelling the location of a digit as a deep generalised normal distribution (Nadarajah, 2005). That is, we use a convolutional neural network to regress the parameters of four generalised normal distributions, one for each digit of a year. Then, we take inspiration from the spatial transformer literature (Carion et al., 2020) and convert the distribution of each location to an attention map (Figure 2, right).
In our experimental validation we compare DeepSeaProbLog to a neural baseline and logic tensor networks. The neural baseline applies the four probabilistic attention maps, one for the location of each of the four digits, to the input image. The resulting four attenuated images are then passed on to a classification network without additional reasoning. The network is simply required to predict the digits in the right order. With DeepSeaProbLog, we encode that a year is a sequence of digits, i.e., the order matters, by enforcing an explicit order on the digit locations. Doing so requires spatial reasoning, i.e., reasoning about which digit is at which location. For LTNs, we encode the same information. However, as LTNs lack a proper distribution semantics, they can only reason on the level of the expected values of the generalised normal distributions.
In our experiment, the sets of years appearing in the training, validation and test data are all disjoint. Moreover, the sets of handwritten digits used to generate those years are also disjoint. Partitioning the data in such a way leads to a challenging learning problem; the difficulty lies in out-of-distribution inference, as the years and handwritten digits in the validation and test set have never been seen during training.
We evaluate all methods in terms of accuracy and Intersection-over-Union (IoU). For the accuracy, we compare the sequence of predicted digits to the correct sequence of digits constituting a year. A prediction is correct if _all_ digits are correctly predicted in the right order. For the IoU, we map each predicted generalised normal distribution to a bounding box by using the mean as the centre and the scale parameter as the width of the box. The IoU is then given by the overlap between this box and the true location of the handwritten digit.
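As a sketch of this evaluation protocol, the mapping from a predicted location distribution to a box and the IoU computation could look as follows (whether the scale parameter is the full or the half width of the box is our assumption, and the helper names are hypothetical):

```python
def box_from_distribution(mean_xy, scale_xy):
    """Map a generalised normal location distribution to an axis-aligned box:
    centre = mean, width/height = scale (per axis), as described above."""
    (mx, my), (sx, sy) = mean_xy, scale_xy
    return (mx - sx / 2, my - sy / 2, mx + sx / 2, my + sy / 2)  # (x0, y0, x1, y1)

def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

pred = box_from_distribution(mean_xy=(14.0, 9.0), scale_xy=(8.0, 8.0))
true = (10.0, 5.0, 18.0, 13.0)   # ground-truth digit location
print(iou(pred, true))           # 1.0 for a perfectly matching prediction
```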
We present our results in Table 1. The most striking observation is the poor performance and large variance of the neural baseline. It fails to predict the location of the digits in the right order, as can be seen from the lower IoU values. Since classification depends on the predicted locations, these lower values also explain the low accuracy. We can conclude that the neural baseline struggles to generalise to out-of-distribution data. While LTNs fare better, the high standard error on the accuracy indicates that their continuous reasoning capabilities are insufficient to guarantee consistent solutions. DeepSeaProbLog distinguishes itself by a higher and more consistent accuracy. The reason is also clear: DeepSeaProbLog exploits the entire domain of the distribution of each location. This then leads to a higher IoU value that in turn results in a higher accuracy.
### Neural hybrid Bayesian networks
Hybrid Bayesian networks (Lerner, 2003) are probabilistic graphical models that combine discrete and continuous random variables. DeepSeaProbLog allows for the introduction of optimisable neural components and logical constraints to such models, as shown in Example 3.3. We further extend this example (Figure 3) and specify the datasets that form the input to the various neural networks. The temperature is predicted from a real meteorological dataset (Cho et al., 2020) and we use CIFAR-10 images as proxies for observing clouds and humidity. Moreover, dependencies on a number of constraints are added, which goes beyond the capabilities of traditional probabilistic programming.
\begin{table}
\begin{tabular}{c c c} \hline \hline
Method & \multicolumn{2}{c}{Results} \\ \hline
 & acc. & IoU \\ \cline{2-3}
DeepSeaProbLog & \(93.77\pm 0.57\) & \(17.69\pm 0.23\) \\
LTN & \(76.50\pm 12.10\) & \(10.73\pm 1.69\) \\
Neural Baseline & \(54.71\pm 14.33\) & \(6.26\pm 1.77\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Mean accuracy and IoU with standard error for classifying the correct year, taken over 10 runs.
Figure 2: On the left, an example of a handwritten year. On the right, the attention map for the digit ‘8’ as a generalised normal distribution. Intuitively, we can view generalised normal distributions as differentiable bounding boxes. This allows gradients to flow from a downstream classification network to the regression component.
Our neural Bayesian model was optimised by only giving probabilistic supervision on whether **E** was true or false, i.e., the weather was enjoyed or not. Given our model, such distant supervision only translates into a learning signal on different _ranges_ of temperature values that satisfy different PCFs. We will see that DeepSeaProbLog's reasoning over the full domain of the temperature distribution allows it to perform meaningful density estimation from such a signal.
The optimised Bayesian model can be evaluated in two ways. First, we report the accuracy on CIFAR-10 of the networks utilised in **C**loudy and **H**umid, which was \(95.24\pm 3.32\) and \(98.96\pm 0.11\), respectively. Second, we measure the quality of the density estimation on **T**emperature by looking at the MSE between the true and predicted mean values, which was \(0.1799\pm 0.0139\). Importantly, DeepSeaProbLog was able to approximate the standard deviation of **T**emperature from just the distant supervision, deviating by only \(0.60\pm 0.22\).
### Neural-symbolic variational auto-encoder
Probabilistic programming is well-suited to generative tasks, but it cannot perform generation conditioned on logical constraints. Inspired by the work of Misino et al. (2022), we showcase how DeepSeaProbLog extends the generative power of probabilistic programming to such constraints. To this end, we consider the task of learning to generate 2 images of digits given the value of their subtraction.
A diagrammatic overview of our DeepSeaProbLog program is given in Figure 4. It uses a conditional variational auto-encoder (CVAE) (Sohn et al., 2015) to generate images conditioned on a digit value. DeepSeaProbLog finds those digit values from a given subtraction result by logical reasoning. It can also condition generation on other variables in the CVAE latent space as this space is an integral part of DeepSeaProbLog's deep, relational model. We will exploit this property later on when we extend the task to generating digits in the same writing style as a given image without _any_ additional optimisation.
Figure 4: Given example pairs of images and the value of their subtraction, e.g., a pair of digit images and \(3\), the CVAE encoder vae_latent first encodes each image into a multivariate normal NDF (latent) and a latent vector. The latter is the input of a categorical NDF digit, completing the CVAE latent space. Supervision is dual; generated images are compared to the original ones in a probabilistic reconstruction loss, while both digits need to subtract to the given value.
Figure 3: Graphical model of **E**njoying the weather (**E**). **E** holds when **D**epressed (**D**) is not true and there is **G**ood weather (**G**). A person has a higher probability of being depressed when it is **C**loudy (**C**), while the degree of good weather is beta distributed depending on various logical constraints on **T**emperature (**T**) and **R**ain (**R**). Finally, rain is probable when it is both **C**loudy and **H**umid (**H**).
Both the CVAE and digit classifier are successfully trained jointly. Example generations of image pairs satisfying the subtraction result \(5\) show that, in general, DeepSeaProbLog finds all possible digits that subtract to a given value and generates images for each correct combination (two such combinations are omitted for clarity of exposition).
While our program is inspired by the VAEL architecture of Misino et al. (2022), conceptual differences exist. Most notably, for VAEL, the image generation resides outside the probabilistic logic program. Conversely, the CVAE, including its latent space, is explicitly declared and accessible in DeepSeaProbLog. This difference allows DeepSeaProbLog to generalise to conditional generative queries that differ significantly from the original optimisation task. For example, we can _zero-shot_ query the program to fill in the missing left digit image of a subtraction, given only the right digit image and the difference value. Even more, we can enforce that the generated digit is in the same writing style as the given digit by conditioning the generation on the latent space of the given image (Figure 5).
## 7 Conclusion
We presented DeepSeaProbLog, a novel neural-symbolic probabilistic logic programming language that integrates hybrid probabilistic logic and neural networks. Inference is dealt with efficiently through approximate weighted model integration while learning is facilitated by reparametrisation and continuous relaxations of non-differentiable logic components. Our experiments illustrate how DeepSeaProbLog is capable of intricate probabilistic modelling allowing for meaningful weak supervision while maintaining strong out-of-distribution performance. Moreover, they show how hybrid probabilistic logic can be used as a flexible structuring formalism for the neural paradigm that can effectively optimise and reuse neural components in different tasks.
Figure 5: Four random images of right digits (top row) and their generated left digits for 3 given random difference values (bottom row). Note the preservation of the style of the given minuends. |
2308.06305 | Discovering Local Binary Pattern Equation for Foreground Object Removal
in Videos | Designing a novel Local Binary Pattern (LBP) process usually relies heavily
on human experts' knowledge and experience in the area. Even experts are often
left with tedious episodes of trial and error until they identify an optimal
LBP for a particular dataset. To address this problem, we present a novel
symbolic regression able to automatically discover LBP formulas to remove the
moving parts of a scene by segmenting it into a background and a foreground.
Experimental results conducted on real videos of outdoor urban scenes under
various conditions show that the LBPs discovered by the proposed approach
significantly outperform the previous state-of-the-art LBP descriptors both
qualitatively and quantitatively. Our source code and data will be available
online. | Caroline Pacheco do Espirito Silva, Andrews Cordolino Sobral, Antoine Vacavant, Thierry Bouwmans, Felippe De Souza | 2023-08-11T15:04:06Z | http://arxiv.org/abs/2308.06305v1 | # Discovering Local Binary Pattern Equation for Foreground Object Removal in Videos
###### Abstract
Designing a novel Local Binary Pattern (LBP) process usually relies heavily on human experts' knowledge and experience in the area. Even experts are often left with tedious episodes of trial and error until they identify an optimal LBP for a particular dataset. To address this problem, we present a novel symbolic regression able to automatically discover LBP formulas to remove the moving parts of a scene by segmenting it into a background and a foreground. Experimental results conducted on real videos of outdoor urban scenes under various conditions show that the LBPs discovered by the proposed approach significantly outperform the previous state-of-the-art LBP descriptors both qualitatively and quantitatively. Our source code and data will be available online.
## 1 Introduction
Background subtraction (BS) is an attractive research field in computer vision and video processing. It has received increasing attention over the last few decades and has remained a very active research direction, thanks to the numerous potential applications and the availability of surveillance cameras installed in security-sensitive areas such as banks, train stations, highways, and borders [17, 26]. BS aims to obtain an effective and efficient background model to remove the moving parts of a scene by segmenting it into background and foreground [8]. Generally, it is challenging to design a promising BS algorithm in real environments due to sudden illumination changes, dynamic backgrounds, bad weather, noise, and strong shadows. Several visual feature representations have been proposed to deal with these situations. Color intensities are the classic features used in BS, but they only reflect the visual perception properties of scene pixels, and usually discard the spatial information between adjacent pixels, resulting in sensitivity to noise and sudden illumination changes.
A variety of local texture descriptors have attracted great attention for BS, especially the Local Binary Pattern (LBP) [22], because it is simple and quick to calculate. It is a powerful gray-scale-invariant texture descriptor. The computation of the LBP for a neighborhood of size \(P\) = 8 is illustrated in Figure 1. It combines the characteristics of statistical and structural texture analysis, describing the texture with micro-primitives and their statistical placement rules. The LBP presents many properties which allow its use in BS, especially its great invariance to the monotonic lighting changes common in natural images. It does not require many parameters to be set, and has a high discriminative power. Nevertheless, the original LBP formula has been reformulated in recent years to make it capable of dealing with several challenges found in different types of scenes, such as dynamic backgrounds, bad weather, noise, shadows, and others [2].
Discovering effective hand-crafted formulas based on LBP is not an easy task, as it requires a deep knowledge of the scene and a trial-and-error process by experts until they identify an LBP formula that achieves meaningful results for a particular dataset. Moreover, human design cannot fully explore the space of all possible mathematical LBP formulas, often resulting in sub-optimal LBPs. However, we believe that it is possible to automatically discover accurate and efficient LBP formulas. Although symbolic regression approaches are less used in BS, they can improve the foreground segmentation in complex scenes thanks to their capability to discover a function \(f:\mathbb{R}^{p}\rightarrow\mathbb{R}\) that represents a given dataset \((\mathcal{X},\mathcal{Y})\), where each input point \(\mathcal{X}_{i}\in\mathbb{R}^{p}\), each output point \(\mathcal{Y}_{i}\in\mathbb{R}\), and \(f\) is a symbolic mathematical equation [23, 27]. In recent years, symbolic regression has seen a remarkable increase in its popularity [1, 3, 19, 20, 21, 24, 25, 27, 13]. One crucial reason for this progress is decades of machine learning research into several aspects of the field, ranging from search strategies to novel neural architectures [6, 7]. Symbolic regression can be reformulated as an optimization problem with an optimality criterion within a given search space. Unlike previous approaches that apply optimization algorithms to constrained search spaces heavily reliant on human design, without considering their innovation potential, we propose a novel symbolic regression for foreground object removal that discovers LBP formulas that we have not yet imagined. The proposed method is designed to automatically find the best LBP formula to distinguish the moving objects in a set of videos without requiring huge manual efforts from human experts. In our method, the search for the optimal solution is performed by a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [9, 15], called (1+1)-CMA-ES [16], which searches for the optimal LBP in a search space designed by a Variational Autoencoder (VAE) [18]. In summary, our contributions are as follows:
* a set of optimal features based on LBP for dealing with specific challenges (e.g., changes in lighting, dynamic backgrounds, noise, strong shadows and others) in a real-world scene.
* an efficient symbolic regression capable of automatically finding new LBPs that outperform the previous state-of-the-art human-designed LBPs.
* detailed results showing the potential of our approach to distinguish moving objects from the background in video sequences.
```
1:Require: training set images \(X\), small input LBP equation set given by user \(D\), VAE instance \(\nu\), (1+1)-CMA-ES instance, Texture-BGS instance, arithmetic operators \(A=\{+,-,*,/\}\), user parameter \(K\)
2:// generate a new unseen set of LBP equations
3://\(\varepsilon=\{\epsilon_{1},\epsilon_{2},\dots,\epsilon_{K}\}\)
4:\(\{\varepsilon\}\leftarrow\)VAE (D)
5:\(k\gets 1\)
6:repeat
7:for\(k=1:K\)do
8:// mutate each equation \(\epsilon\subseteq\varepsilon\) by \(A\)
9:\(\{\epsilon^{{}^{\prime}}_{k}\}\leftarrow\) (1+1)-CMA-ES (\(\epsilon_{k}\), A)
10:\(\mathcal{P}^{{}^{\prime}}_{k}\leftarrow\)Texture-BGS\((\epsilon^{{}^{\prime}}_{k},X)\)
11:\(\textit{lbp-scores}\leftarrow\{\epsilon^{{}^{\prime}}_{k},\mathcal{P}^{{}^{\prime}}_{k}\}\)
12:endfor
13:// Select the best equation by its accuracy
14:\(\{\mathcal{P},\varepsilon^{{}^{\prime}}\}\) = \(\arg\max(\textit{lbp-score})\)
15:Output: Best equation \(\epsilon^{{}^{\prime}}\)
```
**Algorithm 1** Equation Discovery
## 2 Proposed Method
For the background subtraction task, a group of weighted LBP histograms is initially learned for each pixel contained in \(N\) images, say video sequences \(x=\{x_{1},x_{2},...,x_{N}\}\), where each \(x_{j}\) \((j=1,...,N)\) is a certain pixel over time. Consider a pixel at a certain location as the center pixel \(c=(x_{c},y_{c})\) of a local neighborhood composed of \(P\) equally spaced pixels on a circle of radius \(R\). The LBP descriptor applied to \(c\) can be expressed as:
\[LBP_{P,R}\left(x_{c},y_{c}\right)=\sum_{p=0}^{P-1}s\left(g_{p}-g_{c}\right)2^{p} \tag{1}\]
Figure 1: The LBP descriptor. From the original image to the histogram of its LBP image.
where \(g_{c}\) is the gray value of the center pixel \(c\) and \(g_{p}\) is the gray value of each neighboring pixel, and \(s\) is a thresholding function defined as:
\[s(x)=\begin{cases}1&\text{if }x\geq 0\\ 0&\text{otherwise}\end{cases} \tag{2}\]
From (1), it is easy to show that the maximum value of the sum is \(\sum_{i=0}^{P-1}2^{i}=2^{P}-1\), so that the length of the resulting histogram (including the bin 0 location) is \(2^{P}\). According to the authors [10], a limitation of the original LBP equation is that it does not work very robustly on flat images, where the gray values of the neighboring pixels are very close to the value of the center pixel. In order to make the LBP equations more robust, as in [10], our VAE (see Section 2.1 for further details) is able to generate equations with an \(a\) term that modifies the thresholding scheme of the descriptor, replacing \(s\left(g_{p}-g_{c}\right)\) in (1) by \(s\left(g_{p}-g_{c}+a\right)\).
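For concreteness, a minimal NumPy sketch of Equations 1-2 with the optional \(a\) term looks as follows (our illustration; we round the \(R=1\) circle to the 8 surrounding grid pixels instead of interpolating on the exact circle, which is a simplifying assumption):

```python
import numpy as np

def lbp_p8_r1(img, a=0.0):
    """LBP with P = 8, R = 1 over a grayscale image (Eqs. 1-2), using the
    modified threshold s(g_p - g_c + a) when a != 0. Border pixels are skipped."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # 8 equally spaced neighbours on the R = 1 circle, rounded to the pixel grid
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g_c = img[y, x]
            code = 0
            for p, (dy, dx) in enumerate(offsets):
                g_p = img[y + dy, x + dx]
                if g_p - g_c + a >= 0:    # thresholding function s(.)
                    code += 1 << p        # weight 2^p
            out[y, x] = code
    return out

img = np.random.randint(0, 256, size=(8, 8))
# Histogram with 2^P = 256 bins, as discussed above
hist = np.bincount(lbp_p8_r1(img, a=3.0)[1:-1, 1:-1].ravel(), minlength=256)
```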
### Generate Multiple Equations Based on LBP
Consider a search space containing a variety of possible equations based on LBP, given by \(\varepsilon=\{\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{K}\}\), where \(K\) is the user-defined number of equations that can be generated. Instead of a hand-designed search space, we opted to design it through a Variational Autoencoder (VAE). It is a powerful neural generative architecture capable of learning an encoder and a decoder that lead to data reconstruction and the ability to generate new samples from the input data. Given a small set of input equations \(l\), the VAE maps \(l\) onto a latent space with a probabilistic encoder \(q_{\phi}(z\mid l)\) and reconstructs samples with a probabilistic decoder \(p_{\theta}(l\mid z)\). A Gaussian encoder was used with the following reparameterization trick:
\[q_{\phi}(z\mid l)=\mathcal{N}(z\mid\mu_{\phi}(l),\sigma_{\phi}(l)),\qquad z=\mu_{\phi}(l)+\sigma_{\phi}(l)\cdot\epsilon,\quad\epsilon\sim\mathcal{N}(0,I) \tag{3}\]
The Gaussian distribution \(\mathcal{N}(0,I)\) is the most popular choice for a prior distribution \(p(z)\) in the latent space. The Kullback-Leibler divergence, or simply the \(\mathcal{KL}\) divergence, is a measure commonly used to quantify the difference between two probability distributions. To ensure that \(q_{\phi}(z\mid l)\) is similar to the true posterior \(p_{\theta}(z\mid l)\), we need to minimize the \(\mathcal{KL}\) divergence between the two probability distributions.
\[\min\mathcal{KL}(q_{\phi}(z\mid l)\|p_{\theta}(z\mid l)) \tag{4}\]
Figure 2: Brief overview of the proposed framework. A set of equations based on LBP is given by \(\varepsilon=\{\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{K}\}\), where each \(\epsilon\subseteq\varepsilon\) is an equation and \(K\) is a user parameter that determines the number of elements. Initially, the (1+1)-CMA-ES seeks the best equation by mutating the arithmetic operators of each equation \(\epsilon\subseteq\varepsilon\), resulting in a new set of mutated equations \(\varepsilon^{\prime}=\{\epsilon_{1}^{\prime},\epsilon_{2}^{\prime},\ldots,\epsilon_{K}^{\prime}\}\). The performance of each equation is estimated by a background subtraction algorithm that distinguishes the moving objects from the background of a set of videos. Finally, the \(\epsilon^{\prime}\) that presents the maximum accuracy is selected as the best equation.
We can minimize the above expression by maximizing the following:
\[\mathcal{L}(\phi,\theta;l)=\mathbb{E}_{z\sim q_{\phi}(z\mid l)}\left[\log p_{\theta}(l\mid z)\right]-\mathcal{KL}(q_{\phi}(z\mid l)\,\|\,p(z)) \tag{5}\]
As we see in Eq. (5), the loss function for the VAE consists of two terms: the _reconstruction term_, which penalizes the error between the input and its reconstruction from the latent vector, and the _divergence term_, which encourages the learned distribution \(q_{\phi}(z\mid l)\) to be similar to the true prior distribution \(p(z)\), which we assume to follow a unit Gaussian distribution for each dimension \(j\) of the latent space.
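A minimal PyTorch sketch of this loss and the reparameterization of Equation 3, assuming a token-level cross-entropy reconstruction term and the standard closed-form \(\mathcal{KL}\) against a unit Gaussian (both common choices, not spelled out in the text):

```python
import torch
import torch.nn.functional as F

def vae_loss(logits, targets, mu, log_var):
    """Negative ELBO of Eq. 5: reconstruction term plus KL(q_phi(z|l) || N(0, I)).
    `logits` are per-token decoder outputs of shape (batch, seq, vocab),
    `targets` are the input token ids of shape (batch, seq)."""
    recon = F.cross_entropy(logits.transpose(1, 2), targets, reduction="sum")
    # Closed-form KL between N(mu, diag(exp(log_var))) and the unit Gaussian prior
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl

def reparameterise(mu, log_var):
    """Eq. 3: z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
```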
### Search Strategy by (1+1)-CMA-ES
Given a set of equations \(\varepsilon=\{\epsilon_{1},\epsilon_{2},\ldots,\epsilon_{K}\}\), a variant of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) [9, 15], called (1+1)-CMA-ES [16], looks for the best equation by mutating the arithmetic operators \((+,-,*,/)\) of each equation \(\epsilon\subseteq\varepsilon\), resulting in a new set of mutated equations given by \(\varepsilon^{\prime}=\{\epsilon_{1}^{\prime},\epsilon_{2}^{\prime},\ldots,\epsilon_{K}^{\prime}\}\). (1+1)-CMA-ES is an elitist algorithm based on an evolution strategy that operates on the Cholesky factors of the covariance matrix, reducing the computational complexity to \(O(n^{2})\) instead of \(O(n^{3})\). Consider a fitness function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R},\ \Phi\mapsto f(\Phi)\), to be minimized. In the (1+1)-CMA-ES strategy, at each update iteration a new offspring \(\Phi_{offspring}\in\mathbb{R}^{n}\) (candidate solution) is generated from its parent \(\Phi_{parent}\in\mathbb{R}^{n}\) (current solution), and the worse solution is replaced by the better one for the next iteration. The success of the last mutation is determined as follows.
\[\gamma_{succ}=\begin{cases}1&\text{if }f(\Phi_{offspring})\leq f(\Phi_{parent})\\ 0&\text{otherwise}\end{cases} \tag{6}\]
After sampling the new candidate solution, the step size is updated based on the success \(\gamma_{succ}\), using a learning rate and a target success rate. The step size procedure and the update of the Cholesky factors are described in detail in [16].
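To illustrate the search loop, the following is a simplified (1+1) evolution strategy over the operator tokens of a single equation. It keeps the elitist acceptance rule of Equation 6 and a success-based step-size adaptation in the same spirit, but omits the Cholesky covariance update of [16]; the `evaluate` function (returning the loss \(1-\) F-score from Texture BGS) is a hypothetical stand-in:

```python
import math
import random

OPS = ["+", "-", "*", "/"]

def mutate(tokens, rate):
    """Flip each arithmetic operator of the equation with probability `rate`."""
    return [random.choice(OPS) if t in OPS and random.random() < rate else t
            for t in tokens]

def one_plus_one_es(tokens, evaluate, iters=200, rate=0.2, target=0.2, damping=3.0):
    """Elitist (1+1) search over operator mutations of one equation.
    `evaluate` returns the loss (1 - F-score) on the training videos."""
    parent, f_parent = tokens, evaluate(tokens)
    for _ in range(iters):
        child = mutate(parent, rate)
        f_child = evaluate(child)
        succ = 1.0 if f_child <= f_parent else 0.0          # Eq. 6
        # Success-based adaptation of the mutation rate (step-size analogue)
        rate = min(max(rate * math.exp((succ - target) / damping), 1e-3), 1.0)
        if succ:
            parent, f_parent = child, f_child
    return parent, f_parent

# Example: tokens of ((g_p / g_c) - g_p) + a, scored by a random stand-in
best, loss = one_plus_one_es(
    ["(", "(", "g_p", "/", "g_c", ")", "-", "g_p", ")", "+", "a"],
    evaluate=lambda eq: random.random(),   # replace with Texture BGS scoring
)
```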
### Background Detection
We estimate the performance of the set of mutated equations \(\varepsilon^{\prime}=\{\epsilon_{1}^{\prime},\epsilon_{2}^{\prime},\ldots,\epsilon_{K}^{\prime}\}\) with an efficient and fast non-parametric background subtraction algorithm called Texture BGS [10, 11]. The \(\epsilon^{\prime}\subseteq\varepsilon^{\prime}\) that presents the maximum accuracy, \(\{\arg\max_{j\in{1,\ldots,K}}\mathcal{P}_{j}(x)\}\), is chosen to deal with a particular challenge commonly encountered in complex scenes, where \(\mathcal{P}\) denotes the set of accuracies of the equations \(\epsilon^{\prime}\subseteq\varepsilon^{\prime}\). In Texture BGS, a new LBP histogram computed from a new video sequence is initially compared with the current model histograms using the histogram intersection as the proximity measure, represented by \(\jmath\). If the proximity measure is below the threshold for all model histograms, the model histogram with the lowest weight is replaced with the new histogram, and it receives a low initial weight. Otherwise, if a model histogram close enough to the new histogram is found, the bins of this histogram and the weights are updated as described in [10]. Next, the histogram's persistence is used to determine whether the model histograms are likely to belong to the background or not. The persistence is directly associated with the histogram's weight. As a last phase of the updating process, the model's histograms are ordered in descending order according to their weights, and the first histograms are selected as background histograms. The histogram computed from a new video sequence is then compared to the current background histograms using the same proximity measure as in the background model update step, classifying \(x\) as follows:
\[H(x)=\begin{cases}1&\text{if }j<T_{P}\\ 0&\text{otherwise}\end{cases} \tag{7}\]
A pixel is classified as a background pixel if \(H(x)=0\) for at least one background histogram. The proposed approach is summarized in Algorithm 1. Note that we explain the LBP equation discovery procedure for a particular scene, but the procedure is identical for different scenes presenting distinct challenges.
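A minimal sketch of this classification step (Eq. 7), assuming normalised LBP histograms and using the histogram intersection as the proximity measure \(\jmath\):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Proximity measure between two normalised LBP histograms."""
    return np.minimum(h1, h2).sum()

def classify_pixel(new_hist, background_hists, t_p):
    """Eq. 7: a pixel is background (H(x) = 0) if the new histogram is close
    enough (proximity >= t_p) to at least one background model histogram."""
    for bg in background_hists:
        if histogram_intersection(new_hist, bg) >= t_p:
            return 0   # background
    return 1           # foreground
```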
## 3 Experimental Results
In this section, we evaluate each component of our proposed method by conducting different experiments. First, we started by finding the set of optimal hyperparameters to train a VAE instance that generates a set of valid equations that are different from those within the training set.
Figure 3: Hyperparameter importance. _ENC_DROPOUT_, _ENC_LAYERS_ and _DEC_DROPOUT_ were the hyperparameters that had the greatest impact on the training step.
Our training set contains 80 equations based on LBP created manually by us. We also defined an evaluation measure called the number of _Unseen & Valid Equations_ (UVE), and our main objective in this first experiment was to find a configuration for the VAE instance that maximizes the UVE measure. A total of 300 VAE instances with different hyperparameter values were initially trained to generate a set of equations. The validity of each equation was verified by a regular expression. The VAE models were based on the Gated Recurrent Unit (GRU) [5], an enhanced version of the standard recurrent neural network that aims to solve the vanishing gradient problem. We chose the GRU due to its simplicity and computation time. To perform the experiments optimally, we used an open-source platform called _ProActive AI Orchestration (PAIO)_1. It allows researchers to easily automate machine learning (AutoML) [6, 14], providing functions to automatically search for hyperparameters with parallel and distributed execution. We implemented the proposed method using PyTorch 1.0 and Python 3.7. The experiments were conducted on an NVIDIA GeForce RTX 2070 graphics card and a 3.5 GHz Intel i7-3770K CPU with 4 cores, 8 threads, and 16GB of RAM. Initially, we ran 150 iterations of two parallel instances of the VAE using the PAIO platform. We trained the models for 150 epochs using an early stopping mechanism. In Table 1, we tabulate the hyperparameters used in the VAE models in the training step. The first column contains the definitions of each hyperparameter, the second column gives the range of values that each of these hyperparameters can assume, and the last column shows the combination of the best values for each hyperparameter. The set of the best hyperparameter values was used in the configuration of our VAE, which is responsible for generating our search space containing a set of valid and unique equations based on LBP. Note that _choice_ represents a list of possible values for each hyperparameter.
\begin{table}
\begin{tabular}{|l|l|r|} \hline
**Hyperparameters** & \multicolumn{1}{c|}{**Range Values**} & **Best Values** \\ \hline \hline
\multicolumn{3}{|c|}{**VAE instance hyperparameters**} \\ \hline
ENC\_HIDDEN - Number of features in the hidden state of the encoder & _choice_([125, 256, 512]) & 125 \\
DEC\_HIDDEN - Number of features in the hidden state of the decoder & _choice_([512, 800]) & 512 \\
ENC\_LAYERS - Number of recurrent layers of the encoder & _choice_([1, 2, 4, 6]) & 6 \\
DEC\_LAYERS - Number of recurrent layers of the decoder & _choice_([1, 2, 4, 6]) & 1 \\
ENC\_DROPOUT - Dropout rate of the encoder & _choice_([0.01, 0.02, 0.01, 0.1, 0.2]) & 0.1 \\
DEC\_DROPOUT - Dropout rate of the decoder & _choice_([0.01, 0.02, 0.01, 0.1, 0.2]) & 0.01 \\ \hline
\multicolumn{3}{|c|}{**Training hyperparameters**} \\ \hline
N\_BATCH - Samples per batch to load & _choice_([32, 64, 512]) & 32 \\
LEARNING\_RATE - Learning rate & _choice_([0.001, 0.005]) & 0.005 \\
OPTIMIZER - Optimization algorithms & _choice_([Adam, Adadelta, RMSprop]) & RMSprop \\ \hline
\multicolumn{3}{|c|}{If ENC\_LAYERS (or DEC\_LAYERS) \(>\) 1, we use a bidirectional GRU (otherwise a unidirectional GRU).} \\ \hline
\end{tabular}
\end{table}
Table 1: List of hyperparameters used in the VAE network in the training step. The second column gives the range of values that each hyperparameter can assume, and the last column shows the combination of the best values for each hyperparameter. The set of the best hyperparameter values is used as the configuration of our VAE, which is responsible for generating our search space containing a set of equations based on LBP.
\begin{table}
\begin{tabular}{|c|l|l|} \hline
**Scenes** & **Challenges** & **Best Discovered LBP Equation** \\ \hline \hline
_people in shade_ & Consists of pedestrians walking outdoors. The main challenges are: hard shadows cast on the ground by the walking persons and illumination changes. & \(((g_{p}/g_{c}+a*g_{c})-(g_{p}+g_{c})-(g_{p}+g_{c})+(g_{p}+g_{c}))+a\) \\ \hline
_snow fall_ & Contains a traffic scene in a blizzard. The main challenges are: low-visibility winter storm conditions, snow accumulation, and the dark tire tracks left in the snow. & \(((g_{p}-(g_{p}-g_{c})*(g_{p}-g_{c}))+a)\) \\ \hline
_canoe_ & Shows people in a boat with strong background motion. The main challenges are: outdoor scenes with strong background motion. & \(((g_{p}/g_{c})-g_{p})+a\) \\ \hline
_bus station_ & Presents people waiting in a bus station. The main challenges are: hard shadows cast on the ground by the walking persons. & \(((g_{p}/g_{c})-g_{p})+a\) \\ \hline
_skating_ & Shows people skating in the snow. The main challenges are: low-visibility winter storm conditions and snow accumulation. & \(((g_{p}+g_{c})*a-(g_{p}*g_{c}))\) \\ \hline
_fall_ & Shows cars passing next to a fountain. The main challenges are: outdoor scenes with strong background motion. & \(((g_{p}+g_{c})/(g_{p}-g_{c})+a)\) \\ \hline
\end{tabular}
\end{table}
Table 2: List of the best equations for dealing with different challenges encountered in real-world scenarios. From left to right: (a) the different types of scenes, (b) their main challenge descriptions, and (c) the best discovered LBP equation.
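The UVE measure introduced above counts generated strings that are both unseen and syntactically valid. The paper's actual regular expression is not given; the following is one plausible sketch of such a validity check over the token set \(\{g_{p},g_{c},a,+,-,*,/\}\) (our illustration; unary minus is not handled):

```python
import re

TOKEN = re.compile(r"g_p|g_c|a|\d+(?:\.\d+)?|[-+*/()]")

def is_valid_equation(expr):
    """Accept strings that are well-formed arithmetic over {g_p, g_c, a}:
    known tokens only, balanced parentheses, operators between operands."""
    expr = expr.replace(" ", "")
    tokens = TOKEN.findall(expr)
    if "".join(tokens) != expr:
        return False                 # stray characters survived tokenisation
    depth, operand = 0, False
    for t in tokens:
        if t == "(":
            if operand:
                return False         # e.g. "a(" (implicit product not allowed)
            depth += 1
        elif t == ")":
            if depth == 0 or not operand:
                return False
            depth -= 1
        elif t in "+-*/":
            if not operand:
                return False
            operand = False
        else:                        # g_p, g_c, a or a numeric constant
            if operand:
                return False
            operand = True
    return depth == 0 and operand

assert is_valid_equation("((g_p/g_c)-g_p)+a")
assert not is_valid_equation("(g_p+-g_c)")
```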
In Figure 3, we show the importance of each hyperparameter when trying to maximize the UVE measure. As we can see, _ENC_DROPOUT_, _ENC_LAYERS_ and _DEC_DROPOUT_ were the hyperparameters that most impacted VAE training. This can be explained by the fact that dropout layers are very important in VAE architectures, because they prevent all neurons in a layer from synchronously optimizing their weights, preventing them all from converging to the same goal and thus decorrelating the weights. Another hyperparameter that impacted the performance of our VAE was the number of recurrent layers in the encoder, which compresses the original high-dimensional input into the latent low-dimensional code. A curious fact is that the number of encoder layers is larger than that of the decoder. This was explained in [4]: the authors showed that if the decoder has the same number of layers as the encoder, there is a risk that the VAE might completely ignore learning, keeping the \(\mathcal{KL}\) loss at 0. Our VAE generated a total of 305 new equations. Some of the equations generated by our best VAE may present the \(a\) term (see Section 2). The (1+1)-CMA-ES modified each equation using \(4^{\eta}\) permutations, where 4 represents the four basic arithmetic operations \([+,-,*,/]\) and \(\eta\) is the number of arithmetic operators present in each equation. However, to prevent the generation of exponentially many mutations, we limited our experiment to \(4^{5}=1024\) mutations. In the (1+1)-CMA-ES step, our main goal was to seek an equation for each scene that presents a better loss (1 \(-\) F-score) than the hand-crafted equations compared in this paper, instead of conducting an exhaustive search. Note that this step of our approach can be fast if it finds the best equation at the beginning of the search. The search for the equations by (1+1)-CMA-ES was performed in a parallel and distributed manner. In order to verify whether the mutated equations are useful for dealing with different challenges found in real-world scenarios, we considered as our performance estimation strategy the Texture BGS approach (see Subsection 2.3 for more details) and a set of real-world scene sequences from the CDnet 2014 dataset [28]. We compared the new LBP descriptor equations discovered by our approach with three other texture descriptors among the reviewed ones, namely:
* Original LBP [22]
* Modified LBP [10]
* CS-LBP [12]
We chose this last descriptor because many recent
\begin{table}
\begin{tabular}{|c|l|c|c|c|} \hline
**Scenes** & **Descriptors** & **Precision** & **Recall** & **F-score** \\ \hline \hline \multirow{4}{*}{_people in shade_} & Original LBP & 0.6352 & 0.7321 & 0.6802 \\ & Modified LBP & 0.7305 & 0.7725 & 0.7509 \\ & CS-LBP & 0.4244 & **0.8125** & 0.5576 \\ & Proposed LBP & **0.8163** & 0.8098 & **0.8130** \\ \hline \multirow{4}{*}{_snow fall_} & Original LBP & **0.9331** & 0.1425 & 0.2473 \\ & Modified LBP & 0.8098 & 0.9119 & 0.8578 \\ & CS-LBP & 0.5503 & 0.3150 & 0.4007 \\ & Proposed LBP & 0.8592 & **0.8825** & **0.8707** \\ \hline \multirow{4}{*}{_canoe_} & Original LBP & 0.2295 & 0.2816 & 0.2529 \\ & Modified LBP & 0.3410 & 0.4287 & 0.3798 \\ & CS-LBP & 0.1866 & 0.3474 & 0.2428 \\ & Proposed LBP & **0.8813** & **0.5428** & **0.6719** \\ \hline \multirow{4}{*}{_bus station_} & Original LBP & 0.3192 & 0.6127 & 0.4197 \\ & Modified LBP & 0.6939 & 0.4022 & 0.5093 \\ & CS-LBP & 0.0445 & **0.8083** & 0.0844 \\ & Proposed LBP & **0.7120** & 0.5124 & **0.5959** \\ \hline \multirow{4}{*}{_skating_} & Original LBP & 0.6900 & 0.2704 & 0.3886 \\ & Modified LBP & 0.6839 & **0.8902** & 0.7735 \\ & CS-LBP & 0.1687 & 0.3527 & 0.2283 \\ & Proposed LBP & **0.9178** & 0.7616 & **0.8324** \\ \hline \multirow{4}{*}{_fall_} & Original LBP & 0.6329 & 0.8778 & 0.7355 \\ & Modified LBP & **0.8777** & 0.6328 & 0.7354 \\ \cline{1-1} & CS-LBP & 0.3758 & **0.8783** & 0.5264 \\ \cline{1-1} & Proposed LBP & 0.8651 & 0.6701 & **0.7552** \\ \hline \end{tabular}
\end{table}
Table 3: Performance using the CDnet 2014 dataset.
hand-crafted LBP variants have been proposed based on its mathematical formulation [2]. The CS-LBP generates compact binary patterns by working only with the center-symmetric pairs of pixels. For all descriptors, the neighborhood size is empirically selected so that \(P=8\) and \(R=1\). In addition, the thresholding value \(T\) of the CS-LBP was set to \(T=0.01\), as in [12].
Figure 4: Background subtraction results using the CDnet 2014 dataset. From top to bottom: Original frame, Ground truth, Foreground masks by Original LBP, Texture by Original LBP, Foreground masks by Modified LBP, Texture by Modified LBP, Foreground masks by CS-LBP, Foreground masks by LBP proposed and Texture by LBP proposed. The true positives (TP) pixels are in white, the true negatives (TN) pixels are in black, the false positives (FP) pixels are in red, and the false negatives (FN) pixels are in green.
We evaluated the performance of each descriptor on the _'people in shade'_, _'snow fall'_, _'canoe'_, _'bus station'_, _'skating'_, and _'fall'_ sequences. The description of each scene and its main challenges are presented in Table 2. We limited the number of sequences for each scene to around \(150\) frames, choosing the sequences with a high degree of variability in terms of background changes, environmental conditions, hard shadows, objects that suddenly start moving, etc. In addition, we downscaled the sequences to half the original resolution due to the limitation of computational resources, since our approach can have a high computation time. We present the visual results on individual frames from six different scenes: _'people in shade'_ (frame #316), _'snow fall'_ (frame #2758), _'canoe'_ (frame #904), _'bus station'_ (frame #350), _'skating'_ (frame #1884) and _'fall'_ (frame #3987) of the CDnet 2014 dataset. The best \(a\) values were _4.46, 8.03, 11.05, 8.28, 13.87, 0.67_ for the Modified LBP and _1.67, 57.97, 83.51, 56.10, 31.55, 3.32_ for the proposed approach on the scenes _'people in shade'_, _'snow fall'_, _'canoe'_, _'bus station'_, _'skating'_ and _'fall'_, respectively. The \(a\) term was varied as \(a=[10^{-2},...,10^{2}]\). Note that the values of \(a\) ranged from small to high for both the Modified LBP and the proposed LBPs, even though the authors in [10] suggested using a small value of \(a\) for the Modified LBP in order to retain the discriminative power of the LBP descriptor. Figure 4 shows the foreground detection results using the Texture BGS method on the CDnet 2014 dataset. They are shown without any post-processing technique, except for the CS-LBP descriptor (see Figure 4, line 8), for which we adjusted the brightness and contrast of the texture images to increase the reader's understanding; this is because the resulting histogram of CS-LBP is more compact (\(2^{4}\) binary patterns) than the \(2^{8}\) patterns of the Original LBP, Modified LBP and proposed LBPs. The results obtained by the best equations discovered by the proposed method (see Table 2) clearly appear to be less sensitive to background subtraction challenges and are able to detect moving objects with fewer false detections, especially in the _'people in shade'_ video, which presents challenges such as hard shadows and intermittent shades, and in _'skating'_ and _'snow fall'_, which are complex scenes with bad weather. Next, given the ground truth data, the accuracy of the foreground segmentation is measured using three classical measures: recall, precision, and F-score. Table 3 shows the proposed approach evaluated on the six scenes, with the best scores in bold. The proposed approach presented the best F-score for all scenes. However, our approach encountered challenges in finding the best equation for scenes with dynamic background motion, such as _'canoe'_ and _'fall'_. Another challenging scene was _'bus station'_, which presents hard shadows. Nevertheless, the proposed LBP for the _'bus station'_ scene (see F-score in Table 3) achieved an improvement of up to 3 times, overcoming the hand-crafted equations compared in this work. All the best equations discovered by our method presented the \(a\) term, showing the importance of this term in image areas where the gray values of the neighboring pixels are very close to that of the center pixel, e.g. sky, grass, etc. [10].
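For reference, the three measures reported in Table 3 can be computed from binary foreground masks as in the following sketch (standard definitions; not the authors' evaluation code):

```python
import numpy as np

def segmentation_scores(pred, gt):
    """Precision, recall and F-score of a binary foreground mask `pred`
    against the ground-truth mask `gt` (both boolean arrays)."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```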
We can see in Table 3 that the Original LBP and CS-LBP presented the worst performance for most scenes, such as _'snow fall'_, _'canoe'_, _'bus station'_ and _'skating'_. This can be explained by the fact that these two descriptors do not have the \(a\) term in their equations. On the other hand, the Modified LBP achieved a good F-score for most scenes, being behind only the descriptors discovered by the proposed method. The Original LBP, Modified LBP, and proposed LBP had a very close F-score for the _'fall'_ scene, while CS-LBP performed worse in this scene. This can also be explained by the strategy used in the CS-LBP, in which the texture is obtained by comparing only center-symmetric pairs of pixels, producing more compact binary patterns. The experiments also showed that scenes presenting similar challenges did not necessarily have the same best LBP equation. This can be noticed in the pairs of scenes [_'people in shade'_ and _'bus station'_], [_'canoe'_ and _'fall'_] and [_'snow fall'_ and _'skating'_] (see Table 2). In contrast, scenes like _'canoe'_ [dynamic background motion] and _'bus station'_ [hard and soft shadows and intermittent shades] had the same best equation. This shows us that each scene is unique and that its dynamics also contribute to the choice of the best equation. Discovering a specific equation for a given scene manually is a laborious task that requires in-depth knowledge of the scene and a trial-and-error process by experts. Therefore, the approach presented in this paper can be very useful for discovering equations by machine computing while saving a lot of manual time. The results presented in this work are a non-exhaustive search for the best equations due to our limited computational resources. The best equation search time depended on the scene type and the LBP equation generated by our VAE. Finally, we noticed that the most expensive step of our approach is computing the LBP equations, taking up to \(3\) seconds per frame. A future improvement could be porting the LBP computation to CUDA to accelerate and optimize our method.
## 4 Conclusion
In this paper, we propose a novel approach capable of discovering suitable LBP equations for foreground object removal in videos. The main objective of this method is to reduce human effort by automatically discovering LBP formulas, in the hope that this will eventually lead to the discovery of equations that we might never have thought of ourselves. Experimental results on video sequences show the potential of the proposed approach and its effectiveness in dealing with the main challenges encountered in real-world scenarios.
2301.05307 | Partial entropy decomposition reveals higher-order structures in human
brain activity | The standard approach to modeling the human brain as a complex system is with
a network, where the basic unit of interaction is a pairwise link between two
brain regions. While powerful, this approach is limited by the inability to
assess higher-order interactions involving three or more elements directly. In
this work, we present a method for capturing higher-order dependencies in
discrete data based on partial entropy decomposition (PED). Our approach
decomposes the joint entropy of the whole system into a set of strictly
non-negative partial entropy atoms that describe the redundant, unique, and
synergistic interactions that compose the system's structure. We begin by
showing how the PED can provide insights into the mathematical structure of
both the FC network itself, as well as established measures of higher-order
dependency such as the O-information. When applied to resting state fMRI data,
we find robust evidence of higher-order synergies that are largely invisible to
standard functional connectivity analyses. This synergistic structure is distinct
from structural features based on redundancy that have previously dominated FC
analyses. Our approach can also be localized in time, allowing a frame-by-frame
analysis of how the distributions of redundancies and synergies change over the
course of a recording. We find that different ensembles of regions can
transiently change from being redundancy-dominated to synergy-dominated, and
that the temporal pattern is structured in time. These results provide strong
evidence that there exists a large space of unexplored structures in human
brain data that have been largely missed by a focus on bivariate network
connectivity models. These synergistic "shadow structures" are dynamic in time
and will likely illuminate new and interesting links between brain and
behavior. | Thomas F Varley, Maria Pope, Maria Grazia Puxeddu, Joshua Faskowitz, Olaf Sporns | 2023-01-12T21:37:56Z | http://arxiv.org/abs/2301.05307v1 | # Partial entropy decomposition reveals higher-order structures in human brain activity
###### Abstract
The standard approach to modeling the human brain as a complex system is with a network, where the basic unit of interaction is a pairwise link between two brain regions. While powerful, this approach is limited by the inability to assess higher-order interactions involving three or more elements directly. In this work, we present a method for capturing higher-order dependencies in discrete data based on partial entropy decomposition (PED). Our approach decomposes the joint entropy of the whole system into a set of strictly non-negative partial entropy atoms that describe the redundant, unique, and synergistic interactions that compose the system's structure. We begin by showing how the PED can provide insights into the mathematical structure of both the FC network itself, as well as established measures of higher-order dependency such as the O-information. When applied to resting state fMRI data, we find robust evidence of higher-order synergies that are largely invisible to standard functional connectivity analyses. This synergistic structure is symmetrical across hemispheres, largely conserved across individual subjects, and is distinct from structural features based on redundancy that have previously dominated FC analyses. Our approach can also be localized in time, allowing a frame-by-frame analysis of how the distributions of redundancies and synergies change over the course of a recording. We find that different ensembles of regions can transiently change from being redundancy-dominated to synergy-dominated, and that the temporal pattern is structured in time. These results provide strong evidence that there exists a large space of unexplored structures in human brain data that have been largely missed by a focus on bivariate network connectivity models. This synergistic "shadow structures" is dynamic in time and, likely will illuminate new and interesting links between brain and behavior. Beyond brain-specific application, the PED provides a very general approach for understanding higher-order structures in a variety of complex systems.
**Keywords:** Higher-Order Interactions, Entropy, Information Theory, Functional Connectivity, fMRI, Neuroimaging
Since the notion of the "connectome" was first formalized in neuroscience [1], network models of the nervous system have become ubiquitous in the field [2; 3]. In a network model, elements of a complex system (typically neurons or brain regions) are modelled as a graph composed of vertices (or nodes) connected by edges, which denote some kind of connectivity or statistical dependency between them. Arguably the most ubiquitous application of network models to the brain is the "functional connectivity" (FC) framework [3; 4; 5]. In whole-brain neuroimaging, FC networks generally define connections as correlations between the associated regional time series (e.g. fMRI BOLD signals, EEG waves, etc). The correlation matrix is then cast as the adjacency matrix of a weighted network, on which a wide number of network measures can be computed [6].
Despite the widespread adoption of functional connectivity analyses, there remains a little-discussed, but profound limitation inherent to the entire methodology: the only statistical dependencies directly visible to pairwise correlation are bivariate, and in the most commonly performed network analyses, every edge between pairs \(X_{i}\) and \(X_{j}\) is treated as independent of any other edge. There are no _direct_ ways to infer statistical dependencies between three or more variables. "Higher order" interactions are constructed by aggregating bivariate couplings in analyses such as motifs [7] or community detection [8]. One of the largest issues holding back the direct study of higher-order interactions has been the lack of effective, accessible mathematical tools with which such interactions can be recognized [9]. Recently, however, work in the field of multivariate information theory has enabled the development of a plethora of different measures and frameworks for capturing statistical dependencies beyond the pairwise correlation [10].
The few applications of these techniques to brain data have suggested that higher-order dependencies can encode meaningful bio-markers (such as discriminating between health and pathological states induced by anesthesia or brain injury [11]) and reflect changes associated with age [12]. Since the space of possible higher-order structures is so much vaster than the space of pairwise dependencies, the development of tools that make these structures accessible opens the doors to a large number of possible studies linking brain activity to cognition and behavior.
Of the tools that have been applied, one of the most well developed is the _partial information decomposition_[13; 14] (PID), which reveals that multiple interacting variables can participate in a variety of distinct information-sharing relationships, including redundant,
unique, and synergistic modes. Redundant and synergistic information sharing represent two distinct, but related, "types" of higher-order interaction. _Redundancy_ refers to information that is "duplicated" over many elements, so that the same information could be learned by observing \(X_{1}\vee X_{2}\vee\ldots\vee X_{N}\). In contrast, _synergy_ refers to information that is _only_ accessible when considering the joint states of multiple elements and no simpler combinations of sources. Synergistic information can only be learned by observing \(X_{1}\wedge\ldots\wedge X_{N}\).
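The canonical example of pure synergy is the XOR relation. The following brief NumPy check (our illustration) shows that for \(X_{3}=X_{1}\oplus X_{2}\) with fair-coin inputs, every pairwise mutual information is zero while the joint mutual information is one bit, a dependency that no pairwise measure can detect:

```python
import numpy as np
from itertools import product

def entropy(p):
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Joint distribution: X1, X2 fair coins, X3 = X1 XOR X2
joint = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def H(idx):
    """Entropy of the marginal over the variables at positions `idx`."""
    marg = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return entropy(np.array(list(marg.values())))

print(H((0,)) + H((2,)) - H((0, 2)))       # I(X1; X3) = 0.0 bit
print(H((1,)) + H((2,)) - H((1, 2)))       # I(X2; X3) = 0.0 bit
print(H((0, 1)) + H((2,)) - H((0, 1, 2)))  # I(X1,X2; X3) = 1.0 bit: pure synergy
```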
Redundant and synergistic information-sharing modes can be combined to create more complex relationships. For example, given three variables \(X_{1}\), \(X_{2}\), and \(X_{3}\), information can be redundantly common to all three, which could be learned by observing \(X_{1}\lor X_{2}\lor X_{3}\). We can also consider the information redundantly shared by joint states: for example, the information that could be learned by observing \(X_{1}\vee(X_{2}\wedge X_{3})\) (i.e. observing \(X_{1}\) or the joint state of \(X_{2}\) and \(X_{3}\)). For a finite set of interacting variables, it is possible to enumerate all possible information-sharing modes, and given a formal definition of "redundancy", they can be calculated (for details see below).
The identification of redundancy and synergy as possible families of statistical dependence raises questions about how such relationships might be reflected (or missed) by the standard, pairwise correlation-based approach for inferring networks. We propose two criteria by which we might assess the performance of bivariate functional connectivity. The first we call _specificity_: the degree to which a pairwise correlation between some \(X_{i}\) and \(X_{j}\) reports dependencies that are unique to \(X_{i}\) and \(X_{j}\) alone, and not shared with any other edges. In a sense, it reflects how well-founded the ubiquitous assumption of edge independence is. The second criterion we call _completeness_: whether all of the statistical dependencies present in a data set are accounted for and incorporated into the model, or if predictive structure is "lost" when restrictive analyses are used.
We hypothesized that classical functional connectivity would prove to be both non-specific (due to the presence of multivariate redundancies that get repeatedly "seen" by many pairwise correlations) and incomplete (due to the presence of synergies). To test this hypothesis, we used a framework derived from the PID: the _partial entropy decomposition_ [15] (PED, explained in detail below) to fully retrieve all components of statistical dependencies in sets of three and four brain regions. As part of this analysis, we propose a measure of redundant entropy applicable to arbitrarily-sized collections, which allows us to fully explore the space of higher-order interactions.
We chose the PED over the PID because the PID requires partitioning the system into predictors and "targets" (the elements whose behavior we are predicting). This distinction is often artificial, and makes it difficult to analyze the system itself as a structured whole. The PED does not require making a source/target distinction, and serves to generalize the PID to the analysis of whole systems.
By computing the full PED for _all_ triads of 200 brain regions, and a subset of approximately two million tetrads, we can provide a rich and detailed picture of beyond-pairwise dependencies in the brain. Furthermore, by separately considering redundancy and synergy instead of assessing just which one is dominant (as is commonly done [12; 16]), we can reveal previously unseen structures in resting state brain activity.
## I Theory
### A Note on Notation
In this paper, we will be making reference to multiple different "kinds" of random variables. In general, we will use uppercase italics to refer to single variables (e.g. \(X\)). Sets of multiple variables will be denoted in boldface (e.g. \(\mathbf{X}=\{X_{1},\ldots,X_{N}\}\), with subscript indexing). Specific instances of a variable will be denoted with lower case font: \(X=x\). Functions (such as the probability, entropy, and mutual information) will be denoted using calligraphic font. We will also distinguish expected values of information-theoretic quantities, written with upper case function notation, from their local counterparts, written in lower case (e.g. the Shannon entropy of \(X\) is \(\mathcal{H}(X)\), while the local entropy/surprisal is \(h(x)\)). For a brief review of local information theory, see the Supplementary Material Section S2. Finally, when referring to the partial entropy function \(\mathcal{H}_{\partial}\) (described below), we will use superscript index notation to indicate the full set of variables that contextualizes the individual atom. For example, \(\mathcal{H}_{\partial}^{123}(\{1\}\{2\})\) refers to the information redundantly shared by \(X_{1}\) and \(X_{2}\) when both are considered as part of the triad \(\mathbf{X}=\{X_{1},X_{2},X_{3}\}\), while \(\mathcal{H}_{\partial}^{12}(\{1\}\{2\})\) refers to the information redundantly shared by \(X_{1}\) and \(X_{2}\) qua themselves.
### Partial Entropy Decomposition
The _partial entropy decomposition_ (PED) provides a framework with which we can extract _all_ of the meaningful "structure" in a system of interacting random variables [15]. By "structure", we are referring to the (possibly higher-order) patterns of information-sharing between elements. Consider a system \(\mathbf{X}=\{X_{1},X_{2},\ldots,X_{N}\}\), comprised of \(N\) interacting, discrete random variables: the set of all informative relationships between elements (and ensembles of elements) in \(\mathbf{X}\) forms its "structure." We begin by defining the total entropy of \(\mathbf{X}\) using the Shannon entropy:
\[\mathcal{H}(\mathbf{X}):=-\sum_{\mathbf{x}\in\mathcal{X}}\mathcal{P}(\mathbf{x})\log_{2}\mathcal{P}(\mathbf{x}) \tag{1}\]
where \(\mathbf{x}\) indicates a particular configuration of \(\mathbf{X}\) and \(\mathcal{X}\) is the support set of \(\mathbf{X}\). This joint entropy quantifies, on average, how much it is possible to "know" about \(\mathbf{X}\) (i.e. how many bits of information would be required, on average, to reduce our uncertainty to zero). The entropy is a summary statistic describing an entire distribution \(\mathcal{P}(\mathbf{X})\):
\[\mathcal{H}(\mathbf{X})=\mathbb{E}[-\log_{2}\mathcal{P}(\mathbf{x})] \tag{2}\]
where \(-\log_{2}\mathcal{P}(\mathbf{x})\) is the _local entropy_ \(h(\mathbf{x})\). We can intuitively understand the local entropy with the logic of local probability mass exclusions [17; 18]. Suppose that we observe \(\mathbf{X}=\mathbf{x}\). Upon observing \(\mathbf{x}\), we can immediately _rule out_ the possibility that \(\mathbf{X}\) is in any state \(\mathbf{x}^{\prime}\neq\mathbf{x}\), and by ruling out those possibilities, we exclude all the probability mass associated with \(\mathcal{P}(\mathbf{X}\neq\mathbf{x})\). If \(\mathcal{P}(\mathbf{x})\) is very low, then upon learning \(\mathbf{X}=\mathbf{x}\), we exclude a large amount of probability mass (\(1-\mathcal{P}(\mathbf{x})\)), and consequently, \(h(\mathbf{x})\) is high. Conversely, if \(\mathcal{P}(\mathbf{x})\) is large, then only a small amount of probability mass is excluded, and so \(h(\mathbf{x})\) is low.
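To make the logic of exclusions concrete, the following minimal Python sketch (our own illustration; the toy distribution and function names are hypothetical, not part of the analysis pipeline) computes the local entropy and the excluded probability mass for each state of a two-variable system:

```python
import numpy as np

# A hypothetical toy joint distribution over (X1, X2): state -> P(x).
p = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}

def local_entropy(x, p):
    """Local entropy (surprisal): h(x) = -log2 P(x)."""
    return -np.log2(p[x])

def excluded_mass(x, p):
    """Probability mass ruled out upon observing X = x: 1 - P(x)."""
    return 1.0 - p[x]

for x in p:
    print(x, f"h(x) = {local_entropy(x, p):.3f} bits, "
             f"excluded mass = {excluded_mass(x, p):.3f}")
```

Note that the least probable state, (1, 1), excludes the most probability mass and therefore carries the highest local entropy.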
#### ii.1.1 Quantifying Shared Entropy
The measure \(h(\mathbf{x})\) is a very crude one: it gives us a single summary statistic that describes the behaviour of the "whole" without making reference to the structure of the relationships between \(\mathbf{x}\)'s constituent elements. If \(\mathbf{X}\) has some non-trivial structure that integrates multiple elements (or ensembles of elements), then we propose that those elements must "share" entropy. This notion of shared entropy forms the cornerstone of the PED. The way all of the parts of \(\mathbf{X}\) share entropy forms the "structure" of the system. In the original proposal of the PED by Ince [15], shared entropy (\(\mathcal{H}_{cs}\)) was defined using the local co-information, which treats the entropy of variables as sets and defines the shared entropy using inclusion-exclusion criteria. Unfortunately, as discussed by Finn and Lizier, the set-theoretic interpretation of multivariate mutual information is complex, as both the expected and local co-information can be negative [19], and the PED computed using Ince's proposed method can result in negative values that are difficult to interpret.
Here, we propose an alternative way to operationalize the notion of "redundant entropy" by saying that two variables \(X_{1},X_{2}\in\mathbf{X}\) share entropy if they induce the same exclusions: i.e. if learning \(X_{1}\) or \(X_{2}\) rules out the same configurations of the whole [17]. Our goal, then, becomes to determine how the entropy of the whole is parcellated out over (potentially multivariate) sharing modes between parts.
In our toy system given by Table 1, suppose we learn that \(X_{1}=0\) OR \(X_{2}=0\). Only one global state is excluded: \(\mathbf{X}=(1,1)\) is incompatible with both possibilities, regardless of which is true. Consequently we are only excluding \(P_{11}\) from the overall distribution. We can quantify this "shared entropy" using the _local entropy of shared exclusions_\(h_{sx}\):
\[h_{sx}^{\mathbf{x}}(\{1\}\{2\})=-\log_{2}\mathcal{P}(x_{1}\cup x_{2}) \tag{3}\]
Here, we are adapting the partial entropy notation first introduced by Ince in [20]. The function \(h_{sx}^{\mathbf{x}}(\{1\}\{2\})\) quantifies the total probability mass of \(\mathcal{P}(\mathbf{X})\) excluded by learning either \(X_{1}=x_{1}\) or \(X_{2}=x_{2}\). Said differently, it is the amount of information that could be learned from either variable alone. Importantly, while it _is_ a measure of dependency, it is distinct from the classic mutual information.
We term this function \(h_{sx}\) to indicate that it is the shared entropy based on common exclusions ("entropy of shared exclusions") from some set of sources. We also note that the form of \(h_{sx}\) is equivalent to the informative part of the local redundancy function derived by Makkeh et al., [21], which they term \(i_{sx}\). For a discussion of how \(h_{sx}\) is related to \(i_{sx}\) and the deeper connections between partial _entropy_ decomposition and partial _information_ decomposition, see Appendix 1.
So far, we have restricted our examples to the simple case of two variables, \(x_{1}\) and \(x_{2}\), however, we are interested in the general case of information common to arbitrarily large, potentially overlapping subsets of a system that has adopted a particular state \(\mathbf{x}\). This requires first enumerating the set of subsets, \(\mathbf{s}\), which we will call the set of _sources_. It is equivalent to the power set of \(\mathbf{x}\), excluding the empty set. For example, if \(\mathbf{x}=\{x_{1},x_{2},x_{3}\}\), then the source set \(\mathbf{s}\) is equal to:
\[\mathbf{s}=\left\{\begin{aligned} &\{x_{1}\},\{x_{2}\},\{x_{3}\},\\ &\{x_{1},x_{2}\},\{x_{1},x_{3}\},\{x_{2},x_{3}\},\\ &\{x_{1},x_{2},x_{3}\}\end{aligned}\right\} \tag{4}\]
We are interested in how collections of sources \(\mathbf{a}\in\mathbf{s}\) might share entropy (i.e. to what extent they exclude the same possible global configurations of \(\mathbf{x}\)), which allows us to write our redundant entropy function in full generality. For a collection of sources \(\{\mathbf{a}_{1},\ldots,\mathbf{a}_{k}\}\):
\[h_{sx}(\mathbf{a}_{1},\ldots,\mathbf{a}_{k}):=\log_{2}\frac{1}{\mathcal{P}( \mathbf{a}_{1}\cup\ldots\cup\mathbf{a}_{k})} \tag{5}\]
| \(P\) | \(X_{1}\) | \(X_{2}\) |
| --- | --- | --- |
| \(P_{00}\) | 0 | 0 |
| \(P_{01}\) | 0 | 1 |
| \(P_{10}\) | 1 | 0 |
| \(P_{11}\) | 1 | 1 |

Table 1: **Joint distribution of two discrete random variables that together make up the macro-variable \(\mathbf{X}\).**
\(h_{sx}\) can be interpreted in terms of logical conjunctions and disjunctions of variables [14]. Consider the example: \(h_{sx}(\{x_{1}\}\{x_{2},x_{3}\})\), which quantifies the amount of probability mass about the state of the "whole" that would be excluded by observing just the part \(x_{1}\) **or** the joint state of \(x_{2}\) **and** \(x_{3}\). This relationship between probability mass exclusions on one hand, and formal logic on the other, places \(h_{sx}\) on a sound conceptual footing. While initially defined locally, it is possible to compute an expected value \(\mathcal{H}_{sx}\) for a joint distribution:
\[\mathcal{H}_{sx}(\mathbf{A}_{1},\ldots,\mathbf{A}_{k}):=\mathbb{E}[h_{sx}( \mathbf{a}_{1},\ldots,\mathbf{a}_{k})] \tag{6}\]
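As a sketch of how \(h_{sx}\) can be computed in practice (our own minimal implementation; the toy pmf and the convention of encoding a source as a tuple of variable indices are hypothetical), the snippet below evaluates \(-\log_{2}\) of the probability mass compatible with at least one observed source, along with the expected value of Eq. 6:

```python
import numpy as np
from itertools import product

# A hypothetical toy pmf over (X1, X2, X3): state tuple -> probability.
states = list(product([0, 1], repeat=3))
rng = np.random.default_rng(0)
p = dict(zip(states, rng.dirichlet(np.ones(len(states)))))

def h_sx(sources, x, p):
    """Local shared entropy: -log2 P(a_1 cup ... cup a_k).
    `sources` is a collection of index-tuples; `x` is the observed state."""
    mass = sum(ps for s, ps in p.items()
               if any(all(s[i] == x[i] for i in a) for a in sources))
    return -np.log2(mass)

def H_sx(sources, p):
    """Expected shared entropy (Eq. 6)."""
    return sum(ps * h_sx(sources, x, p) for x, ps in p.items())

x = (0, 1, 1)
print(h_sx([(0,), (1,)], x, p))    # sources {1}{2}
print(h_sx([(0,), (1, 2)], x, p))  # sources {1}{2,3}
print(H_sx([(0,), (1,)], p))
```

Adding a source can only enlarge the union of compatible states, which is why \(h_{sx}\) is monotonically non-increasing in the number of sources.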
#### ii.1.2 The Partial Entropy Lattice
Our function \(h_{sx}\) has a number of appealing mathematical properties, which collectively satisfy the set of Axioms initially introduced by Williams & Beer for the problem of information decomposition [13] as applied to local information [18; 21]:
**Symmetry**: \(h_{sx}\) is invariant under permutation of its arguments: \(h_{sx}(\mathbf{a}_{1},\ldots,\mathbf{a}_{k})=h_{sx}(\sigma(\mathbf{a}_{1}),\ldots,\sigma(\mathbf{a}_{k}))\)
**Monotonicity**: \(h_{sx}\) decreases (weakly) as more sources are added: \(h_{sx}(\mathbf{a}_{1},\ldots,\mathbf{a}_{k})\geq h_{sx}(\mathbf{a}_{1},\ldots,\mathbf{a}_{k},\mathbf{a}_{k+1})\)
**Self-redundancy**: In the special case of a single source, \(h_{sx}\) is equivalent to the classic local Shannon entropy: \(h_{sx}(\mathbf{a})=h(\mathbf{a})\).
For proofs of these properties, see [21], Appendix A. Based on these properties, it is possible to specify the domain of \(h_{sx}\) (all non-degenerate combinations of sources) in terms of a partially-ordered lattice structure \(\mathfrak{A}\) [13; 18]. Not every combination of sources \(\mathbf{a}_{1}\ldots\mathbf{a}_{k}\) is a valid partial entropy atom, only those where no source is a subset of any other:
\[\mathfrak{A}=\{\boldsymbol{\alpha}\in\mathbb{P}_{1}(\mathbf{s}):\forall \mathbf{a}_{i},\mathbf{a}_{j}\in\boldsymbol{\alpha},\mathbf{a}_{i}\not\subset \mathbf{a}_{j}\} \tag{7}\]
where \(\mathbb{P}_{1}(\mathbf{s})\) indicates the power set of \(\mathbf{s}\), excluding the empty set. For an in-depth derivation of the lattice, see [13; 18; 14]; for a visualization of the lattice, see Fig. 1. The value of any element \(h_{\partial}(\boldsymbol{\alpha})\) on the lattice can be computed via Möbius inversion:
\[h_{\partial}^{\mathbf{x}}(\boldsymbol{\alpha})=h_{sx}(\boldsymbol{\alpha})-\sum_{\boldsymbol{\beta}\prec\boldsymbol{\alpha}}h_{\partial}^{\mathbf{x}}(\boldsymbol{\beta}) \tag{8}\]
The result is the entropy specific to a particular \(\boldsymbol{\alpha}\)_and no simpler combination of sources._ Furthermore, the structure of the lattice and the properties of \(h_{sx}\) ensure that \(h_{\partial}^{\mathbf{x}}(\boldsymbol{\alpha})\) will always be non-negative. We can re-compute the total joint entropy of \(\mathbf{x}\) as:
\[h(\mathbf{x})=\sum_{i=1}^{|\mathfrak{A}|}h_{\partial}^{\mathbf{x}}( \boldsymbol{\alpha}_{i}) \tag{9}\]
Like \(h_{sx}\), it is also possible to compute an expected value of \(h_{\partial}\) (which will also be strictly non-negative):
\[\mathcal{H}_{\partial}^{\mathbf{X}}(\boldsymbol{\alpha})=\mathbb{E}[h_{ \partial}^{\mathbf{x}}(\boldsymbol{\alpha})] \tag{10}\]
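To illustrate the Möbius inversion, here is a self-contained sketch for the bivariate lattice, whose four atoms are \(\{1\}\{2\}\), \(\{1\}\), \(\{2\}\), and \(\{1,2\}\) (the encoding of atoms as tuples of index-tuples and the toy pmf are our own conventions). As Eq. 9 requires, the atoms sum back to \(h(\mathbf{x})\):

```python
import numpy as np

# A hypothetical toy pmf over (X1, X2).
p = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}

def h_sx(sources, x, p):
    """-log2 of the probability mass compatible with at least one source."""
    mass = sum(ps for s, ps in p.items()
               if any(all(s[i] == x[i] for i in a) for a in sources))
    return -np.log2(mass)

# The four atoms of the bivariate lattice: {1}{2}, {1}, {2}, {1,2}.
A = [((0,), (1,)), ((0,),), ((1,),), ((0, 1),)]

def precedes(beta, alpha):
    """Williams-Beer ordering: beta <= alpha iff every source in alpha
    contains some source in beta."""
    return all(any(set(b) <= set(a) for b in beta) for a in alpha)

def h_partial(alpha, x, p):
    """Moebius inversion (Eq. 8): h_sx minus all strictly lower atoms."""
    lower = sum(h_partial(beta, x, p) for beta in A
                if beta != alpha and precedes(beta, alpha))
    return h_sx(alpha, x, p) - lower

x = (0, 0)
atoms = {a: h_partial(a, x, p) for a in A}
print(atoms)
print(sum(atoms.values()), -np.log2(p[x]))  # Eq. 9: atoms sum to h(x)
```

For larger systems the lattice must be enumerated programmatically rather than hard-coded, but the inversion step is identical.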
#### ii.1.3 Decomposing Marginal and Joint Entropies
Having defined \(h_{sx}\) and the Möbius inversion on the partial entropy lattice, we can now do a complete decomposition of the joint entropy, and its marginal components. For example, consider the bivariate system \(\mathbf{X}=\{X_{1},X_{2}\}\). We can decompose the joint entropy:
\[\mathcal{H}(\mathbf{X})=\mathcal{H}_{\partial}^{12}(\{1\}\{2\})+\mathcal{H}_{\partial}^{12}(\{1\})+\mathcal{H}_{\partial}^{12}(\{2\})+\mathcal{H}_{\partial}^{12}(\{1,2\}) \tag{11}\]
Furthermore, we can decompose the associated marginal entropies in a manner consistent with the partial information decomposition [13]:
\[\begin{aligned}\mathcal{H}(X_{1})&=\mathcal{H}_{\partial}^{12}(\{1\}\{2\})+\mathcal{H}_{\partial}^{12}(\{1\})\\ \mathcal{H}(X_{2})&=\mathcal{H}_{\partial}^{12}(\{1\}\{2\})+\mathcal{H}_{\partial}^{12}(\{2\})\end{aligned} \tag{12}\]
These decompositions can be done for larger ensembles, or more statistical dependencies (see below) and can reveal how higher-order interactions can complicate (and in some cases, compromise) the standard bivariate approaches to functional connectivity.
#### ii.1.4 Mathematical Analysis of the PED
The partial entropy decomposition reveals a rich and complex structure of statistical dependencies even in small systems. Before considering the empirical results, it is worth discussing how the PED relates to classic measures from information theory and what it reveals about the limitations of bivariate FC measures.
The first key finding is that the PED provides interesting insights into the nature of bivariate mutual information. Typically, mutual information is conflated with redundancy at the outset (for example, in Venn diagrams), however, when considering the PED of two variables \(X_{1}\) and \(X_{2}\), it becomes clear that:
\[\mathcal{I}(X_{1};X_{2})=\mathcal{H}_{\partial}^{12}(\{1\}\{2\})-\mathcal{H}_{ \partial}^{12}(\{1,2\}) \tag{13}\]
This relationship was originally noted by Ince [15] and later re-derived by Finn and Lizier [19]. In a sense, the higher-order information present in the joint-state of (\(X_{1}\) and \(X_{2}\)) "obscures" the lower-order structure. This issue is also inherited by parametric correlation measures based on the Pearson correlation coefficient, since the mutual information is a deterministic function of Pearson's \(\rho\) for Gaussian variables [22].
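As a quick sanity check of Eq. 13 (a worked example of our own), consider two independent fair bits. For every joint state, \(\mathcal{P}(x_{1}\cup x_{2})=3/4\), so:

\[\mathcal{H}_{\partial}^{12}(\{1\}\{2\})=-\log_{2}\tfrac{3}{4}\approx 0.415,\qquad\mathcal{H}_{\partial}^{12}(\{1\})=\mathcal{H}_{\partial}^{12}(\{2\})=1-0.415=0.585,\]
\[\mathcal{H}_{\partial}^{12}(\{1,2\})=2-0.415-2\times 0.585=0.415,\]

so the redundant and synergistic atoms are equal and cancel exactly, giving \(\mathcal{I}(X_{1};X_{2})=0\), as it must for independent variables.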
When considering the decomposition of local mutual information into informative and misinformative components proposed by Finn and Lizier, it is clear that redundancy corresponds to the informative component of local mutual information, while synergy corresponds to the misinformative component.
We can do a similar analysis extracting the bivariate mutual information from the trivariate PED, which reveals that the bivariate correlation is not _specific_:
\[\begin{aligned}\mathcal{I}(X_{1};X_{2})&=\mathcal{H}_{\partial}^{123}(\{1\}\{2\}\{3\})+\mathcal{H}_{\partial}^{123}(\{1\}\{2\})\\ &-\mathcal{H}_{\partial}^{123}(\{3\}\{1,2\})-\mathcal{H}_{\partial}^{123}(\{1,2\}\{1,3\}\{2,3\})\\ &-\mathcal{H}_{\partial}^{123}(\{1,2\}\{1,3\})-\mathcal{H}_{\partial}^{123}(\{1,2\}\{2,3\})\\ &-\mathcal{H}_{\partial}^{123}(\{1,2\})\end{aligned} \tag{14}\]
It is clear from Eq. 14 that the _bivariate_ mutual information incorporates information that is _triple_-redundant across three variables (\(\mathcal{H}_{\partial}^{123}(\{1\}\{2\}\{3\})\)), and if one were to take the standard FC approach to a triad (pairwise correlation between all three pairs of elements), the triple redundancy would be triple-counted and erroneously ascribed to three separate interactions.
Figure 1: **The partial entropy lattice.** The lattice of partial entropy atoms induced by the \(\mathcal{H}_{sx}\) function. Each vertex of the lattice corresponds to a single PE atom, and the Venn diagram describes the associated structure of probability mass exclusions. The blue area indicates the probability mass from \(P(\mathbf{x})\) that is excluded by some combination of observations. For example, in the legend, we can see the probability mass excluded by observing \(X_{1}\lor X_{2}\). The blue area is all of the probability mass one would exclude after learning the state of _either_ component alone. The lowest atom is the entropy redundant to all three elements (\(\mathcal{H}_{sx}(\{1\}\{2\}\{3\})\)), and the dependencies get increasingly synergistic higher on the lattice.
Furthermore, not only does bivariate mutual information double-count redundancy, but it also penalizes higher-order synergies. Any higher-order atom that includes the joint state of \(X_{1}\wedge X_{2}\) counts _against_ \(\mathcal{I}(X_{1};X_{2})\).
Having established that the presence of higher-order redundancies explicitly precludes bivariate correlation from being specific, we now ask: can we improve the specificity using common statistical methods? One approach aimed at "controlling" for the context of additional variables in a bivariate correlation analysis is using conditioning or partial correlation. Typically, these analyses are assumed to improve the _specificity_ of a pairwise dependency by removing the influence of confounders, however, by decomposing the conditional mutual information between three variables, we can see that conditioning does _not_ ensure specificity:
\[\begin{aligned}\mathcal{I}(X_{1};X_{2}|X_{3})&=\mathcal{H}_{\partial}^{123}(\{1\}\{2\})\\ &+\mathcal{H}_{\partial}^{123}(\{1\}\{2,3\})+\mathcal{H}_{\partial}^{123}(\{2\}\{1,3\})\\ &+\mathcal{H}_{\partial}^{123}(\{1,2\}\{1,3\}\{2,3\})\\ &+\mathcal{H}_{\partial}^{123}(\{1,3\}\{2,3\})\\ &-\mathcal{H}_{\partial}^{123}(\{1,2\})-\mathcal{H}_{\partial}^{123}(\{1,2,3\})\end{aligned} \tag{15}\]
The decomposition of \(\mathcal{I}(X_{1};X_{2}|X_{3})\) conflates the true pairwise redundancy (\(\mathcal{H}_{\partial}^{123}(\{1\}\{2\})\)) with a higher-order redundancy involving the joint state of \(X_{1}\wedge X_{3}\) and \(X_{2}\wedge X_{3}\): \(\mathcal{H}_{\partial}^{123}(\{1,3\}\{2,3\})\). Furthermore, the conditional mutual information penalizes synergistic entropy shared in the joint state of all three variables (\(\mathcal{H}_{\partial}^{123}(\{1,2,3\})\)). Consequently, we can conclude that the specificity of bivariate functional connectivity _cannot_ be salvaged using conditioning or partial correlation. Not only does controlling fail to provide specificity, it also actively compromises completeness, since it brings in higher-order interactions. Given that conditional mutual information and partial correlation are equivalent for Gaussian variables [23], this issue also affects standard, parametric approaches to conditional connectivity, just as with bivariate mutual information/Pearson correlation.
It is important to understand that these analytic results are **not** a consequence of the particular form of \(h_{sx}\): any shared entropy function that allows for the formation of a partial entropy lattice will produce these same results (many were first derived by Ince, who used a different measure based on the local co-information [15]).
#### ii.2.5 Higher-Order Dependency Measures
In addition to revealing the structure of commonly-used correlations (bivariate and partial correlations), the PED can also be used to develop intuitions about multivariate generalizations of the mutual information. Many of these generalizations exist, and here we will focus on four: the total correlation [24], the dual total correlation [25], the O-information [16; 26] (also called the "enigmatic" information [27]) and the S-information [26] (also called the "exogenous" information [27]). While useful, these measures are often difficult to intuitively understand, and can display surprising behavior. Since they can all be written in terms of sums and differences of joint and marginal entropies, we can use the PED framework to more completely understand them.
The oldest measure is the total correlation, defined as:
\[\mathcal{T}(\mathbf{X}):=\sum_{i=1}^{|\mathbf{X}|}\mathcal{H}(X_{i})- \mathcal{H}(\mathbf{X}) \tag{16}\]
which is equivalent to the Kullback-Leibler divergence between the true joint distribution \(\mathcal{P}(\mathbf{X})\) and the product of the marginals:
\[\mathcal{T}(\mathbf{X})=D_{KL}\left(\mathcal{P}(\mathbf{X})\,\Big{\|}\,\prod_{i=1}^{|\mathbf{X}|}\mathcal{P}(X_{i})\right) \tag{17}\]
Based on equation 17, we can understand the total correlation as the divergence of the true distribution from the product of its marginals (the maximum-entropy distribution consistent with the single-variable marginals), implying that it might be something like a measure of the "total" structure of the system (as its name would suggest). We can decompose the 3-variable case to get a full picture of the structure of the TC:
\[\begin{aligned}\mathcal{T}(X_{1},X_{2},X_{3})&=2\times\{1\}\{2\}\{3\}\\ &+\{1\}\{2\}+\{1\}\{3\}+\{2\}\{3\}\\ &-\{1,2\}\{1,3\}\{2,3\}\\ &-\{1,2\}\{1,3\}-\{1,2\}\{2,3\}-\{1,3\}\{2,3\}\\ &-\{1,2\}-\{1,3\}-\{2,3\}\\ &-\{1,2,3\}\end{aligned} \tag{18}\]
We can see that the total correlation is largely a measure of redundancy, sensitive to information shared between single elements, but penalizing higher-order information present in joint states. This can be understood by considering the lattice in Figure 1: each of the \(\mathcal{H}(X_{i})\) terms will only incorporate atoms preceding (or equal to) the unique entropy term \(\mathcal{H}_{\partial}^{123}(\{i\})\) - anything that can only be seen by considering the joint-state of \(\mathbf{X}\) will be negative.
The second generalization of mutual information is the dual total correlation [25]. Defined in terms of entropies by:
\[\mathcal{D}(\mathbf{X}):=\mathcal{H}(\mathbf{X})-\sum_{i=1}^{|\mathbf{X}|} \mathcal{H}(X_{i}|\mathbf{X}^{-i}) \tag{19}\]
where \(\mathbf{X}^{-i}\) refers to the set of every element of \(\mathbf{X}\) _excluding_ the \(i^{th}\). The dual total correlation can be understood as the difference between the total entropy of \(\mathbf{X}\) and all of the entropy in each element of \(\mathbf{X}\) that is "intrinsic" to it and not shared with any other part. When we decompose the three-variable case, we find:
\[\begin{aligned}\mathcal{D}(X_{1},X_{2},X_{3})&=\{1\}\{2\}\{3\}\\ &+\{1\}\{2\}+\{1\}\{3\}+\{2\}\{3\}\\ &+\{1\}\{2,3\}+\{2\}\{1,3\}+\{3\}\{1,2\}\\ &+\{1,2\}\{1,3\}\{2,3\}\\ &-\{1,2\}-\{1,3\}-\{2,3\}-2\times\{1,2,3\}\end{aligned} \tag{20}\]
This shows that dual total correlation is a much more "complete" picture of the structure of a system than total correlation. It is sensitive to both shared redundancies and synergies, penalizing only the un-shared, higher-order synergy terms such as \(\mathcal{H}_{\partial}^{123}(\{1,2\})\).
The sum of the total correlation and the dual total correlation is the exogenous information [27], also called the S-information.
\[\mathcal{E}(\mathbf{X}):=\mathcal{T}(\mathbf{X})+\mathcal{D}(\mathbf{X}) \tag{21}\]
Prior work has shown the exogenous information to be very tightly correlated with the Tononi-Sporns-Edelman complexity [28; 16; 26], a measure of global integration/segregation balance. James also showed that the S-information quantifies the total information that every element shares with every other element [27]. We can see that:
\[\begin{aligned}\mathcal{E}(X_{1},X_{2},X_{3})&=3\times\{1\}\{2\}\{3\}\\ &+2\times(\{1\}\{2\}+\{1\}\{3\}+\{2\}\{3\})\\ &+\{1\}\{2,3\}+\{2\}\{1,3\}+\{3\}\{1,2\}\\ &-\{1,2\}\{1,3\}-\{1,2\}\{2,3\}-\{1,3\}\{2,3\}\\ &-2\times(\{1,2\}+\{1,3\}+\{2,3\})\\ &-3\times\{1,2,3\}\end{aligned}\]
This reveals the S-information to be an unusual measure, in that it counts each redundancy term multiple times (i.e. in the case of three variables, the triple redundancy term appears three times, each double-redundancy term appears twice, etc), and likewise multiply penalizes the unshared synergy terms.
The final, and arguably most interesting, measure is the difference between the total correlation and the dual total correlation, often referred to as the O-information [26], which has been hypothesized to give a heuristic measure of the extent to which a given system is dominated by redundant or synergistic interactions:
\[\mathcal{O}(\mathbf{X}):=\mathcal{T}(\mathbf{X})-\mathcal{D}(\mathbf{X}) \tag{22}\]
where \(\mathcal{O}(\mathbf{X})>0\) implies a redundancy-dominated structure and \(\mathcal{O}(\mathbf{X})<0\) implies a synergy-dominated one. PED analysis reveals:
\[\begin{aligned}\mathcal{O}(X_{1},X_{2},X_{3})&=\{1\}\{2\}\{3\}\\ &-\{1\}\{2,3\}-\{2\}\{1,3\}-\{3\}\{1,2\}\\ &-2\times\{1,2\}\{1,3\}\{2,3\}\\ &-\{1,2\}\{1,3\}-\{2,3\}\{1,3\}-\{1,2\}\{2,3\}\\ &+\{1,2,3\}\end{aligned} \tag{23}\]
This shows that the O-information generally satisfies the intuitions proposed by Rosas et al., as it is positively sensitive to the non-pairwise redundancy (in this case just \(\mathcal{H}_{\partial}^{123}(\{1\}\{2\}\{3\})\)) and negatively sensitive to any higher-order shared information. Curiously, \(\mathcal{O}(X_{1},X_{2},X_{3})\) positively counts the highest, un-shared synergy atom (\(\mathcal{H}_{\partial}^{123}(\{1,2,3\})\)). Conceivably, it may be possible for a set of three variables with _no redundancy_ to return a positive O-information, although whether this can actually occur is an area of future research.
For three-element systems, the O-information is also equivalent to the co-information [26], which forms the basis of the original redundant entropy function \(\mathcal{H}_{cs}\) proposed by Ince [15]. From this we can see that, at least for three variables, co-information is not a pure measure of redundancy, conflating the true redundancy and the highest synergy term, as well as penalizing other higher-order modes of information-sharing. A similar argument was made by Williams and Beer using the mutual information-based interpretation of co-information [13]. While the O-information and co-information diverge for \(N>3\), we anticipate that the behavior of the co-information will remain similarly complex at higher \(N\). These results reveal how the PED framework can provide clarity to the often-murky world of multivariate information theory.
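These entropy-based definitions are straightforward to compute directly. The sketch below (our own minimal implementation; the function names and the `copy`/`xor` toy systems are ours) evaluates \(\mathcal{T}\), \(\mathcal{D}\), \(\mathcal{O}\), and \(\mathcal{E}\) for two canonical cases: three copies of one fair bit (redundancy-dominated, \(\mathcal{O}=+1\)) and the XOR gate (synergy-dominated, \(\mathcal{O}=-1\)):

```python
import numpy as np
from itertools import product

def entropy(p):
    """Shannon entropy (bits) of a pmf given as a dict state -> probability."""
    return -sum(px * np.log2(px) for px in p.values() if px > 0)

def marginal(p, idx):
    """Marginal pmf over the variables indexed by `idx`."""
    out = {}
    for s, px in p.items():
        key = tuple(s[i] for i in idx)
        out[key] = out.get(key, 0.0) + px
    return out

def info_measures(p, n):
    """Total correlation, dual total correlation, O- and S-information."""
    H = entropy(p)
    T = sum(entropy(marginal(p, (i,))) for i in range(n)) - H
    # Residual (intrinsic) entropy: H(X_i | X^{-i}) = H(X) - H(X^{-i}).
    resid = sum(H - entropy(marginal(p, tuple(j for j in range(n) if j != i)))
                for i in range(n))
    D = H - resid
    return T, D, T - D, T + D

copy = {(b, b, b): 0.5 for b in (0, 1)}                            # redundant
xor = {(a, b, a ^ b): 0.25 for a, b in product((0, 1), repeat=2)}  # synergistic

for name, dist in [("copy", copy), ("xor", xor)]:
    T, D, O, S = info_measures(dist, 3)
    print(f"{name}: TC={T:.2f} DTC={D:.2f} O={O:.2f} S={S:.2f}")
```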
#### ii.2.6 Novel Higher-Order Measures
From these PED atoms, we can construct novel measures of higher-order dependence that extend beyond TC, DTC, O-information, and S-information.
When considering higher-order redundancy, we are interested in all of those atoms that duplicate information over three or more individual elements. We define this as the _redundant structure_. For a three element system:
\[\mathcal{S}_{R}(X_{1},X_{2},X_{3})=\{1\}\{2\}\{3\} \tag{24}\]
For a four-element system:
\[\begin{aligned}\mathcal{S}_{R}(X_{1},X_{2},X_{3},X_{4})&=\{1\}\{2\}\{3\}\{4\}\\ &+\{1\}\{2\}\{3\}+\{1\}\{2\}\{4\}\\ &+\{1\}\{3\}\{4\}+\{2\}\{3\}\{4\}\end{aligned} \tag{25}\]
And so on for larger systems.
We can also define an analogous measure of synergistic structure: all those atoms representing information shared over the joint state of two or more elements. For example, for a three element system:
\[\begin{aligned}\mathcal{S}_{S}(X_{1},X_{2},X_{3})&=\{1\}\{2,3\}+\{2\}\{1,3\}+\{3\}\{1,2\}\\ &+\{1,2\}\{1,3\}\{2,3\}\\ &+\{1,2\}\{1,3\}+\{2,3\}\{1,3\}+\{1,2\}\{2,3\}\end{aligned} \tag{26}\]
For three-element systems, the difference \(\mathcal{S}_{R}-\mathcal{S}_{S}\) is analogous to a "corrected" O-information: the atom \(\{1,2\}\{1,3\}\{2,3\}\) is only counted once and the confounding triple synergy \(\{1,2,3\}\) is not included. Finally, we can define a measure of total (integrated) structure (i.e. all shared information) as the sum of all atoms composed of multiple sources:
\[\mathcal{S}=\sum_{\boldsymbol{\alpha}\in\mathfrak{A}\,:\,|\boldsymbol{\alpha}|>1}\mathcal{H}_{\partial}(\boldsymbol{\alpha}) \tag{27}\]
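Given a full table of partial entropy atoms, these structure measures reduce to simple filters over the source sets. A minimal sketch (assuming atoms are encoded, as in our earlier snippets, as tuples of index-tuples mapped to their \(\mathcal{H}_{\partial}\) values):

```python
def redundant_structure(atoms):
    """Sum atoms that duplicate information over >= 3 singleton sources."""
    return sum(v for a, v in atoms.items()
               if len(a) >= 3 and all(len(src) == 1 for src in a))

def synergistic_structure(atoms):
    """Sum shared atoms involving the joint state of >= 2 elements."""
    return sum(v for a, v in atoms.items()
               if len(a) >= 2 and any(len(src) >= 2 for src in a))

def total_structure(atoms):
    """Eq. 27: everything shared between multiple sources."""
    return sum(v for a, v in atoms.items() if len(a) > 1)
```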
### Applications to the Brain
The mathematical structure of the PED is domain agnostic: any complex system composed of discrete random variables is amenable to this kind of information-theoretic analysis. In this paper, we focus on data collected from the human brain with functional magnetic resonance imaging (fMRI). For detailed methods, see the Materials & Methods section (Sec. V), but in brief, data from ninety-five human subjects resting quietly was recorded as part of the Human Connectome Project [29]. All of the scans were concatenated and each channel binarized about the mean [30] to create multidimensional, binary time series. We then computed the full PED for all triads, and approximately two million tetrads, to compare to the standard, bivariate functional connectivity network (computed with mutual information).
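For concreteness, the preprocessing step can be sketched as follows (our own minimal stand-in using random data in place of HCP BOLD; the binarization about the channel mean follows [30], while the pmf estimation is the standard empirical frequency count):

```python
import numpy as np

def binarize(ts):
    """Binarize each channel of a (channels x time) array about its mean."""
    return (ts > ts.mean(axis=1, keepdims=True)).astype(int)

def joint_pmf(binary_ts, idx):
    """Empirical joint pmf of the channels in `idx`."""
    cols = binary_ts[list(idx)].T  # time x len(idx)
    states, counts = np.unique(cols, axis=0, return_counts=True)
    return {tuple(s): c / len(cols) for s, c in zip(states, counts)}

bold = np.random.randn(200, 1100)   # stand-in for a parcellated BOLD scan
binary = binarize(bold)
p_triad = joint_pmf(binary, (0, 1, 2))
```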
By looking at the redundant and synergistic structures, and relating them to the standard FC, we can explore how higher-order dependencies are represented in bivariate networks, as well as what brain regions participate in more redundancy- or synergy-dominated ensembles.
## II Results
### PED Reveals the Limitations of Bivariate Networks
We now discuss how the PED relates to multivariate measures of bivariate network structure commonly used in the functional connectivity literature. These measures describe statistical dependencies between ensembles of regions, but mediated by the topology of bivariate connections. We hypothesized that this emergence from bivariate dependencies would render them largely insensitive to synergies, which in turn would mean that such measures do not solve the issue of incompleteness in functional connectivity.
Following [31], we compared the redundant and synergistic structure of triads and tetrads to a measure of subgraph strength: the arithmetic mean of all edges in the subgraph. We found that the arithmetic mean FC density was positively correlated with redundancy for triads (\(\rho=0.999\), \(p<10^{-20}\)) and tetrads (\(\rho=0.995\), \(p<10^{-20}\)), indicating that information duplicated over many brain regions contributes to multiple edges, leading to double-counting. In contrast, for triads, arithmetic mean FC density was largely independent of synergistic structure (\(\rho=-0.05\), \(p<10^{-20}\)), but for tetrads they were strongly anticorrelated (\(\rho=-0.988\), \(p<10^{-20}\)).
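The subgraph strength used here is simply the mean edge weight within each triad or tetrad; a minimal sketch (with a random stand-in for the empirical FC matrix):

```python
import numpy as np
from itertools import combinations

def subgraph_strength(fc, nodes):
    """Arithmetic mean of all pairwise edges within a set of nodes."""
    return np.mean([fc[i, j] for i, j in combinations(nodes, 2)])

fc = np.corrcoef(np.random.randn(200, 1100))  # stand-in FC matrix
triad_strengths = {t: subgraph_strength(fc, t)
                   for t in combinations(range(10), 3)}  # small demo subset
```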
In addition to subgraph structure, another common method of assessing polyadic interactions in networks is via community detection [8]. Using the multi-resolution consensus clustering algorithm [32], we clustered the bivariate functional connectivity matrix into non-overlapping communities. We then looked at the distributions of higher-order redundant and synergistic structure for triads and tetrads that spanned different numbers of consensus communities. We found that triads where all nodes were members of one community had significantly less synergy than triads that spanned two or three communities (Kolmogorov-Smirnov two sample test, \(D=0.44\), \(p<10^{-20}\)). The pattern was more pronounced when considering tetrads: tetrads that all belonged to one community had lower synergy than those that spanned two communities (\(D=0.45\), \(p<10^{-20}\)), which in turn had lower synergy than those that spanned three communities (\(D=0.37\), \(p<10^{-20}\)). In Figure 2 (top row), we show cumulative probability density plots for the distribution of synergies for triads and tetrads that spanned one, two, three, and four FC communities, where it is clear that participation in increasingly diverse communities is associated with greater synergistic structure. In contrast, redundant structure was higher in triads that were all members of a small number of communities. Triads that spanned three communities had lower redundancy than triads that spanned two communities (\(D=0.48\), \(p<10^{-20}\)), which in turn had lower redundancy than those that were all members of one community (\(D=0.47\), \(p<10^{-20}\)) (see Fig. 2, bottom row). These results, coupled with the mathematical analysis of the PED discussed in Section I, provide strong theoretical and empirical evidence that bivariate, correlation-based FC measures are largely sensitive to redundant information duplicated over many individual brain regions, but largely insensitive to (or even anti-correlated with) higher-order synergies involving the joint state of multiple regions. These results imply the possibility that there is a vast space of neural dynamics and structures that have not previously been captured in FC analyses.
#### iv.1.1 PED with \(\mathcal{H}_{sx}\) is consistent with O-information
To test whether the PED using the \(\mathcal{H}_{sx}\) redundancy function was consistent with other, information-theoretic measures of redundancy and synergy, we compared the average redundant and synergistic structures (as revealed by PED), to the O-information. We hypothesized that redundant structure would be positively correlated with O-information (as \(\mathcal{O}>0\) implies redundancy dominance) and that synergistic structure would be negatively correlated, for the same reason.
For both triads and tetrads, our hypothesis was borne out. The Pearson correlation between O-information and redundant structure was significantly positive for both triads (\(\rho=0.72\), \(p<10^{-20}\)) and tetrads (\(\rho=0.82\), \(p<10^{-20}\)). Conversely, the Pearson correlation between the O-information and the synergistic structure was significantly negative (triads: \(\rho=-0.7\), \(p<10^{-20}\), tetrads: \(\rho=-0.72\), \(p<10^{-20}\)). These results show that the structures revealed by the PED are consistent with other, non-decomposition-based inference methods and serve to validate the overall framework.
Interestingly, when comparing the triadic O-information to the corrected difference \(\mathcal{S}_{R}-\mathcal{S}_{S}\) (which does not double-count \(\mathcal{H}_{\partial}^{123}(\{1,2\}\{1,3\}\{2,3\})\) and does not add back in the atom \(\mathcal{H}_{\partial}^{123}(\{1,2,3\})\)), we can see that the addition of \(\mathcal{H}_{\partial}^{123}(\{1,2,3\})\) can lead to erroneous conclusions. Of all those triads that had a negative corrected O-information (i.e. had a greater synergistic structure than redundant structure), 61.7% had a positive O-information, which could only be attributable to the presence of the triple-synergy being (mis)interpreted as redundancy and overwhelming the true difference. This suggests that, for small systems, the O-information may not provide an unbiased estimator of redundancy/synergy balance.
### Characterizing Higher-Order Brain Structures
Having established the presence of beyond-pairwise redundancies and synergies in brain data, and shown that standard, network-based approaches provide an incomplete picture of the overall architecture, we now describe the distribution of redundancies and synergies across the human brain.
We began by applying a higher-order generalization of the standard community detection approach using a hypergraph modularity maximization algorithm [33]. This algorithm partitions collections of (potentially overlapping) sets of nodes called _hyperedges_ into communities that have a high degree of internal integration and a lower
Figure 2: **The limits of bivariate functional connectivity.** **A.** In triads, bivariate functional connectivity is largely independent of synergistic structure, and **B,** is very positively correlated with redundant structure. **C.** In tetrads, bivariate functional connectivity is strongly negatively correlated with synergistic structure and **D,** is strongly correlated with redundant structure. **E-F.** Triads that have all elements within one FC community have significantly less synergistic structure than those that have elements within two communities, while for redundant structure, there was a clear pattern that the more FC communities a triad straddled, the lower its overall redundant structure. **G-H.** The same pattern was even more pronounced in tetrads: as the number of FC communities a tetrad straddled increased, the expected synergistic structure climbed, while expected redundant structure fell.
degree of between-community integration. We selected all those triads that had a greater synergistic structure than any of the one million maximum entropy null triads (see Materials and Methods), which yielded a set of 3,746 unique triads. From these, we constructed an unweighted hypergraph with 200 nodes and 3,746 hyperedges (casting each triad as a hyperedge incident on three nodes). We then performed 1,000 trials of the hypergraph clustering algorithm proposed by Kumar et al., [33], from which we built a consensus matrix that tracked how frequently two brain regions \(X_{i}\) and \(X_{j}\) were assigned to the same hyper-community. We repeated the process for the 3,746 maximally redundant triads to create two partitions: a synergistic structure and a redundant structure.
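The hypergraph clustering itself follows Kumar et al. [33]; the consensus step, however, is generic and can be sketched as below (our own minimal implementation, assuming each clustering run returns a length-200 community label vector):

```python
import numpy as np

def coclassification(partitions):
    """Fraction of runs assigning each pair of nodes to the same community.
    `partitions` is a list of equal-length community label vectors."""
    C = np.zeros((len(partitions[0]),) * 2)
    for labels in partitions:
        labels = np.asarray(labels)
        C += labels[:, None] == labels[None, :]
    return C / len(partitions)
```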
In Figure 3 we show surface plots of the resulting communities computed from the concatenated time series comprising all ninety-five subjects and all four runs. The redundant structure (left) is very similar to the canonical seven Yeo systems [34]: we can see a well-developed DMN (orange), a distinct visual system (sky blue), a somato-motor strip (violet), and a fronto-parietal network (dark blue). In contrast, when considering the
Figure 3: **Redundant and synergistic hypergraph community structure.****A-B.** Surface plots of the two communities structures: on the left is the redundant structure and on the right is the synergistic structure. We can see that both patterns are largely symmetrical for both information-sharing modes, although the synergistic structure has two large, lateralized communities. **C-D.** The co-classification matrices for redundant structure (left) and the synergistic structure (right). The higher the value of a pair, the more frequently the hypergraph modularity maximization [33] assigns those two regions to the same hyper-community. The yellow squares indicate the seven canonical Yeo functional networks [34], and we can see that the higher-order redundant structure matches the bivariate Yeo systems well (despite consisting of information shared redundantly across three nodes). In contrast, the synergistic structure largely fails to match the canonical network structure at all. **E.** For each of the 95 subjects and for each of the 1000 permutation nulls used to significance test the NMI between subject-level community structure and the master level structure, we computed the log-ratio of the empirical NMI to the null NMI. For redundancy, there was not a single null, over any subject, that was greater than the associated empirical NMI. For the case of the synergy, only 0.6% of nulls were greater than their associated empirical NMI.
synergistic structure (right), a strikingly different pattern is apparent. Synergistic connectivity appears more lateralized over left and right hemispheres (orange and violet communities respectively), although there is a high degree of symmetry along the cortical midline comprised of apparently novel communities. These include a synergistic coupling between visual and limbic regions (sky blue), as well as an occipital subset of the DMN (green) and a curious, symmetrical set of regions combining somato-motor and DMN regions (red).
These results show two things: the first is further confirmation that the canonical structures studied in an FC framework can be interpreted as reflecting primarily patterns of redundant information. The second is that higher-order synergies are structured in non-random ways, combining multiple brain regions into integrated systems that are usually thought to be independent when considering just correlation-based analyses. If the synergistic structure were reflecting mere noise, then we would not expect the high-degree of symmetry and structure we observe.
To test whether the patterns we observed were consistent across individuals, we re-ran the entire pipeline (PED of all triads, hypergraph clustering of redundant and synergistic triads, etc) for each of the 95 subjects separately. Then, for each subject, we computed the normalized mutual information (NMI) [6] between the subject-level partition and the relevant master partition (redundancy or synergy) created from the concatenated time series of all four scans from each of the ninety-five subjects. We significance tested each comparison with a permutation null model. For each null, we permuted the subject-level community assignment vector of nodes, re-computing the NMI between the master partition and a shuffled subject-level partition (1,000 permutations). In the case of the redundant partition, we found that no subject ever had a shuffled null that was greater than the empirical NMI: all had significant NMI (\(0.52\pm 0.07\)). In the case of the synergistic partition, 91 of the 95 subjects showed significant NMI (\(0.1\pm 0.03\), \(p<0.05\), Benjamini-Hochberg FDR corrected). These results suggest that both structures (redundant and synergistic) are broadly conserved across individuals, however, it appears that the synergistic partitions are generally more variable between subjects than the redundant partition (which hews closer to the master partition constructed by combining the data from all subjects). When we computed the normalized mutual information of all the subject-level redundancy partitions to the canonical Yeo systems, we found a high degree of concordance (NMI = 0.6196\(\pm\)0.0117, \(p<10^{-20}\)). The same analysis with the subject-level synergy partitions found a much lower degree of concordance (NMI = 0.2290\(\pm\)0.0117, \(p<10^{-20}\)).
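A minimal sketch of the NMI permutation test (our own illustration, using scikit-learn's NMI implementation as a stand-in for the measure in [6]):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score as nmi

def nmi_permutation_test(master, subject, n_perm=1000, seed=0):
    """Observed NMI and p-value against label-shuffled nulls."""
    rng = np.random.default_rng(seed)
    observed = nmi(master, subject)
    nulls = [nmi(master, rng.permutation(subject)) for _ in range(n_perm)]
    p = (1 + sum(n >= observed for n in nulls)) / (1 + n_perm)
    return observed, p
```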
#### iv.2.1 Redundancy-synergy gradient & time-resolved analysis
Thus far, we have analyzed higher-order redundancy and synergy separately. To understand how they interact, we began by replicating the analysis of Luppi et al., [35]. We counted how many times each brain region appeared in the set of 3,746 most synergistic and 3,746 most redundant triads. We then ranked each node to create two vectors which rank how frequently each region participates in high-redundancy and high-synergy configurations. By subtracting those two rank vectors, we get a measure of _relative_ redundancy/synergy dominance. A value greater than zero indicates that a region's relative redundancy (compared to all other regions) is greater than its relative synergy (compared to all other regions), and vice versa.
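This rank-difference gradient can be sketched compactly (our own helper; `red_triads` and `syn_triads` stand for the 3,746 most redundant and most synergistic triads):

```python
import numpy as np
from scipy.stats import rankdata

def rank_gradient(red_triads, syn_triads, n_nodes):
    """Rank-redundancy minus rank-synergy per node (positive means a node
    is relatively redundancy-dominated)."""
    red_count = np.zeros(n_nodes)
    syn_count = np.zeros(n_nodes)
    for t in red_triads:
        red_count[list(t)] += 1
    for t in syn_triads:
        syn_count[list(t)] += 1
    return rankdata(red_count) - rankdata(syn_count)
```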
By projecting the rank-differences onto the cortical surface (Fig. 4A), we recover the same gradient-like pattern first reported by Luppi et al., with relatively redundant regions located in primary sensory and motor cortex, and relatively synergistic regions located in multimodal and executive cortex. This replication is noteworthy, as Luppi et al., used an entirely different method of computing synergy (based on the information flow from past to future in pairs of brain regions), while we are looking at generalizations of static FC for which dynamic order does not matter. The fact that the same gradient appears when using both analytical methods strongly suggests it is a robust feature of brain activity.
A limitation of the analysis by Luppi et al. is the restriction that only _average_ values of synergy and redundancy are accessible: the results describe expected values over all TRs and obscure any local variability. The PED analysis using \(h_{sx}\) can be localized (see Sec. I) to individual frames. This allows us to see how the redundant and synergistic structure fluctuate over the course of a resting state scan, and how the distributions of relative synergies and redundancies vary over the cortex. Figure 4B shows how the redundant and synergistic structure fluctuate over the course of 1100 TRs taken from a single subject (four scans concatenated). This allows us to probe the information structure of previously identified patterns in frame-wise dynamics. Analysis of instantaneous pairwise co-fluctuations (also called "edge time series") reveals a highly structured pattern, with periods of relative disintegration interspersed with high co-fluctuation "events" [36; 37]. The distribution of these co-fluctuations reflects various factors of cognition [38], generative structure [39], functional network organization [30], and individual differences [40]. By correlating the instantaneous average whole-brain redundant and synergistic structures with instantaneous whole-brain co-fluctuation amplitude (RSS), we can get an understanding of the "informational structure" of high-RSS "events." We found that redundancy is positively correlated with co-fluctuation RSS (\(\rho=0.6\), \(p<10^{-50}\)) and synergy is negatively correlated with co-fluctuation amplitude (\(\rho=-0.43\), \(p<10^{-50}\)). Given that high-amplitude co-fluctuation events are known to drive bivariate functional
connectivity [36], this is again consistent with the hypothesis that FC patterns largely reflect redundancy and are insensitive to higher-order synergies.
With full PED analysis completed for every frame, it is possible to compute the instantaneous distribution of relative redundancies and synergies across the cortex for every TR. The resulting multidimensional time-series can be seen in Fig. 4F. When sorted by Yeo systems [34], we can see that different systems show distinct relative redundancy/synergy profiles. The nodes in the somato-motor system had the highest median value (\(22.0\pm 73\)), followed by the visual system (\(14.0\pm 80\)), indicating that they were, on-average relatively more redundant than synergistic. In contrast, the ventral attentional system
Figure 4: **Time-resolved analysis.****A.** Surface plots for the distributions of relative synergies and relative redundancies across the human brain. These results match prior work by Luppi et al., [35], with primary sensory and motor cortex being relatively redundant, while multi-modal association areas being relatively synergistic. **B.** Over the course of one subject’s scan (1100 TRs), the total redundant and synergistic structure varies over time, although never so much that the curves cross (i.e. there is never more redundant structure than synergistic structure present). **C.** Instantaneous redundant and synergistic structure are anti-correlated (\(\rho=-0.83\), \(p<10^{-50}\)). **D.** Redundancy is positively correlated with the amplitude of bivariate co-fluctuations (\(\rho=0.6\), \(p<10^{-50}\)) and **E.** synergy is negatively correlated with co-fluctuation amplitude (\(\rho=-0.43\), \(p<10^{-50}\)). **F.** For each TR, we show the difference in the rank-redundancy and rank-synergy for each node (red indicates a higher rank-redundancy than rank-synergy and vice versa for blue). When nodes are stratified by Yeo system [34] (grey, horizontal lines), it is clear that different systems alternate between high-redundancy and high-synergy configurations in different ways. **G.** For every pair of columns in Panel F. we compute the Pearson correlation between them to construct a time \(\times\) time similarity matrix, which we then clustered using the MRCC algorithm [32]. Note that rows and columns are not in time order, but rather, re-ordered to reveal the state-structure of the time series. **H.** Five example states (centroids of each community show in Panel G.) projected onto the cortical surface. It is clear that the instantaneous pattern of relative synergies and redundancies varies from the average structure presented in Panel A. For example, in States 3 and 4, the visual system is highly redundant (as in the average), however in state 5, the visual system is synergistic.
had the lowest median value (\(-11.0\pm 66\)), indicating a relatively synergistic dynamic. Other systems seemed largely balanced: with median values near zero but a wide spread between them, such as the dorsal attention network (\(1.0\pm 70\)), fronto-parietal control system (\(-5.0\pm 56\)), and the DMN (\(-2.0\pm 67\)). These are systems that transiently shift from largely redundancy-dominated to synergy-dominated regimes in equal measure. Finally, the limbic system had small values and relatively little spread (\(-5.0\pm 18\)), indicating a system that never achieved either extreme.
We then correlated every TR against every other frame to construct a weighted, signed recurrence network [41], which we could then cluster using the MRCC algorithm [32] (Fig. 4G). This allowed us to assign every TR to one of nine discrete "states", each of which can be represented by its centroid (for five examples see Fig 4H). We can see that these states are generally symmetrical, but show markedly different patterns of relative redundancy and synergy across the cortex, and some systems can change valence entirely. For example, in states three and four the visual system is highly redundant (consistent with the average behavior), while in state five the same regions are more synergy-dominated. In the same vein, the somato-motor strip is highly redundant in state 4, but slightly synergy-biased in state 3. This shows that the dynamics of information processing are variable in time, with different areas of cortex transiently becoming more redundant or more synergistic in concert.
The sequence of states occupied at each TR is a discrete time series which we can analyze as a finite-state machine (for visualization, see Figure 5). An analysis of the Shannon temporal mutual information found that the present state was significantly predictive of the future state (1.59 bit, \(p<10^{-50}\)), and that the transitions between states were generally more deterministic [42; 43] (2.29 bit, \(p<10^{-50}\)) than would be expected by chance. While the sample size is small (1099 transitions), these results suggest that
Figure 5: **State-to-state transitions.** For each of the nine distinct states, we can see how many times each state transitions to another (self-loops are not shown for visual clarity). We can see that the various states have meaningful differences between each other (e.g. the visual system or the somato-motor systems both transition from redundancy- to synergy-dominated configurations over time), however, within a state, the patterns are largely symmetrical across hemispheres.
the transition between states is structured in non-random ways.
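The transition analysis can be sketched as follows (our own minimal implementation of the empirical transition matrix and temporal mutual information; the determinism measure of [42; 43] is a related but distinct quantity not reproduced here):

```python
import numpy as np

def transition_matrix(states, k):
    """Row-stochastic empirical transition matrix over k states
    (assumes every state is visited at least once)."""
    T = np.zeros((k, k))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def temporal_mutual_information(states, k):
    """I(present; future) in bits from the empirical joint of (s_t, s_{t+1})."""
    joint = np.zeros((k, k))
    for a, b in zip(states[:-1], states[1:]):
        joint[a, b] += 1
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask]
                                              / np.outer(pa, pb)[mask])))

seq = np.random.default_rng(1).integers(0, 9, size=1100)  # stand-in sequence
print(temporal_mutual_information(seq, 9))
```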
## III Discussion
In this paper, we have explored a novel framework for extracting higher-order dependencies from data and applied it to fMRI recordings. We found that the human brain is rich in beyond-pairwise, synergistic structures, as well as redundant information copied over many brain regions. Based on a partial entropy decomposition framework [15; 19], our method returns strictly non-negative values, does not require grouping elements into "sources" and "targets", and is localizable, permitting a time-resolved analysis of the system's dynamics.
Prior work on the partial entropy decomposition has analytically shown that the bivariate mutual information between two elements incorporates non-local information that is redundantly present over more than two elements [15; 19]. This means that classic approaches to functional connectivity are _non-specific_: the link between two elements does not reflect information uniquely shared by those two, but double- (or triple-) counts higher-order redundancies distributed over the system. We verified this empirically by comparing the distribution of higher-order (beyond pairwise) redundancies to a bivariate correlation network and found that the redundancies closely matched the classic network structure.
These non-local redundancies shed new light on a well-documented feature of bivariate functional connectivity networks: the transitivity of correlation [44]. In functional connectivity networks, if \(X_{i}\) and \(X_{j}\) are correlated, as well as \(X_{j}\) and \(X_{k}\), then there is a much higher-than-expected chance that \(X_{i}\) and \(X_{k}\) are correlated (even though this is not theoretically necessary [45]). Since the Pearson correlation is directly related to the mutual information under Gaussian assumptions [22], we claim that the observed transitivity of functional connectivity is a consequence of previously-unrecognized, non-local redundancies copied over ensembles of nodes. This hypothesis is consistent with our findings that redundancies correlate with key features of functional network topology, including subgraph density and community structure.
In addition to higher-order redundancies, we also found strong evidence of higher-order synergies: information present in the joint states of multiple brain regions and only accessible when considering "wholes" rather than just "parts." These synergies appear to be structured in part by the physical brain (for example, being largely symmetric across hemispheres), but also do not readily correspond to the standard functional connectivity networks previously explored in the literature. Since synergistic structures appear to be largely anti-correlated with the standard bivariate network structures, it is plausible that these synergistic systems represent a novel organization of human brain activity.
These higher-order interactions represent a vast space of largely unexplored, but potentially significant aspects of brain activity. One possible avenue of study is how higher-order synergies reflect individual differences [40; 46] and subject identifiability [47]. The finding that the synergistic community structure was more variable across subjects than the redundant structure suggests that synergistic dependencies may reflect more unique, individualized differences, while the redundant structure (reflected in the functional connectivity) represents a more conserved architecture. This is consistent with recent theoretical work linking synergy to individuality [48], as well as empirical findings that the evolution of humans is associated with an enrichment of synergistic cortical structures [35]. The ability to expand beyond pairwise network models of the brain into the much richer space of beyond-pairwise structures offers the opportunity to explore previously inaccessible relationships between brain activity, cognition, and behavior.
Since normal cognitive functioning requires the coordination of many different brain regions [49; 50; 51], and pathological states are associated with dis-integrated dynamics [52; 53; 54], it is reasonable to assume that alterations to higher-order, synergistic coordination may also reflect clinically significant changes in cognition and health. Recent work has already indicated that changes in bivariate synergy track loss of consciousness under anesthesia and following traumatic and anoxic brain injury [11], suggesting that higher-order dependencies can encode clinically significant biomarkers. We hypothesize that beyond-pairwise synergies in particular may be worth exploring in the context of recognizing early signs of Alzheimer's and other neurodegenerative diseases, as synergy requires the coordination of many regions simultaneously and may begin to show signs of fragmentation earlier than standard, functional connectivity-based patterns (which are dominated by non-local redundancies that may obscure early fragmentation of the system).
Finally, the localizable nature of the \(\mathcal{H}_{sx}\) partial entropy function allows us a high degree of temporal precision when analyzing brain dynamics. The standard approach to time-varying connectivity is a sliding-windows analysis; however, this approach blurs temporal features and obscures higher-frequency events [55]. By being able to localize the redundancies and synergies in time, we can see that there is a complex interplay between both "types" of integration. When considering expected values, we find a distribution of redundancies and synergies that replicates the findings of Luppi et al. [35]; however, when we localize the analysis in time, we find a high degree of variability between frames. It appears that there are not consistently "redundant" or "synergistic" brain regions (or ensembles); rather, various brain regions can transiently participate in highly synergistic or highly redundant behaviors at different times. The structure of these dynamics appears to be non-random (based on the structure of the state-transition matrix), but the significance of the various combinations of redundancy and synergy remains a topic for much future work. The fact that some systems (such as the visual system) can be either redundancy- or synergy-dominated at different times complicates the notion of a "synergistic core". Instead, there may be a "synergistic landscape" of configurations that the system traverses, with different configurations of brain regions transiently serving as the core and providing a flexible architecture for neural computation in response to different demands.
This analysis does have some limitations, however. The most significant is that the size of the partial entropy lattice grows explosively as the size of the system increases: a system with only eight elements will have a lattice with 5.6\(\times\)10\({}^{22}\) unique partial entropy atoms. While our aggregated measures of redundant and synergistic structure can summarize the dependencies in a principled way, simply computing that many atoms is computationally prohibitive. In this paper, we took a large system of 200 nodes and calculated every triad and a large number of tetrads; however, this also quickly runs into combinatorial difficulties, as the number of possible groups of size \(k\) one can make from \(N\) elements grows with the binomial coefficient. Heuristic measures such as the O-information can help, although as we have seen, this measure can conflate redundancy and synergy in sometimes surprising ways. One possible avenue of future work could be to leverage optimization algorithms to find small, tractable subsets of systems that show interesting redundant or synergistic structure, as was done in [56; 57; 16]. Alternately, coarse-graining approaches that can reduce the dimensionality of the system while preserving the informational or causal structure may allow the analysis of a compressed version of the system small enough to be tractable [58; 42].
In the context of this study, the use of fMRI BOLD data presents some inherent limitations, such as a small number of samples (TRs) from which to infer probability distributions, and the necessity of binarizing a slow, continuous signal. Generalizing the logic of shared probability mass exclusions remains an area of ongoing work [59], although for the time being, the \(h_{sx}\) function requires discrete random variables. BOLD itself is also fundamentally a proxy measure of brain activity based on oxygenated blood flow and not a direct measure of neural activity. Applying this work to electrophysiological data (M/EEG, which can be discretized in principled ways to enable information-theoretic analysis [60]), and naturally discrete spiking neural data [61], will help deepen our understanding of how higher-order interactions contribute to cognition and behavior. The applicability of the PED to multiple scales of analysis highlights one of the foundational _strengths_ of the approach (and information-theoretic frameworks more broadly): being based on the fundamental logic of inferences under conditions of uncertainty, the PED can be applied to a large number of complex systems (beyond just the brain), or to multiple scales within a single system, to provide a detailed and holistic picture of the system's structure.
## IV Conclusions
In this work, we have shown how the joint entropy of a complex system can be decomposed into atomic components of redundancy and synergy, which reveal higher-order, beyond-pairwise dependencies in the structure of the system. When applied to human brain data, this partial entropy decomposition framework reveals previously unrecognized, higher-order structures in the human brain. We find that the well-known patterns of functional connectivity networks largely reflect redundant information copied over many brain regions. In contrast, the synergies form a kind of "shadow structure" that is largely independent from, or anticorrelated with, the bivariate network and has consequently remained less well explored. The patterns of redundancy and synergy over the cortex are dynamic across time, with different ensembles of brain regions transiently forming redundancy- or synergy-dominated structures. This space of beyond-pairwise dynamics is likely rich in previously unidentified links between brain activity and cognition. The PED can also be applied to problems beyond neuroscience and may provide a general tool with which higher-order structure can be studied in any complex system.
## V Materials & Methods
### Human Connectome Project fMRI Data
The data used in this study was taken from a set of 100 unrelated subjects included in the Human Connectome Project (HCP) [29]. Refs [29; 62] provide a detailed description of the acquisition and preprocessing of this data, which have been used in many previous studies [30; 39]. Briefly, all subjects gave informed consent to protocols approved by the Washington University Institutional Review Board. Data was collected with a Siemens 3T Connectom Skyra using a head coil with 32 channels. Functional data analysed here was acquired during resting state with a gradient-echo echo-planar imaging (EPI) sequence. Collection occurred over four scans on two separate days (scan duration: 14:33 min; eyes open). The main acquisition parameters included TR = 720 ms, TE = 33.1 ms, flip angle of 52\({}^{\circ}\), 2 mm isotropic voxel resolution, and a multiband factor of 8. Resting state data was mapped to a 200-node parcellation scheme [63] covering the entire cerebral cortex.
Considerations for subject inclusion were established before the study and are as follows. The mean and mean absolute deviation of the relative root mean square (RMS) motion throughout any of the four resting scans were calculated. Subjects that exceeded 1.5 times the interquartile range in the adverse direction for two or more measures were excluded. This resulted in the exclusion of four subjects, with an additional subject excluded due to a software error during diffusion MRI processing. The included subjects had demographic characteristics of: 56% female, mean age = 29.29 \(\pm\) 3.66, age range = 22-36 years.
#### iv.1.1 Preprocessing
The minimal preprocessing of HCP rs-fMRI data is described in detail in ref. [62]. Five main steps were followed: 1) susceptibility, distortion, and motion correction; 2) registration to subject-specific T1-weighted data; 3) bias and intensity normalization; 4) projection onto the 32k_fs_LR mesh; and 5) alignment to common space with a multimodal surface registration (81). This pipeline produced an ICA+FIX time series in the CIFTI grayordinate coordinate system. We included two additional preprocessing steps: 6) global signal regression and 7) detrending and band pass filtering (0.008 to 0.08 Hz) [64]. We discarded the first and last 50 frames of each time series after confound regression and filtering to produce final scans with length 13.2 min (1,100 frames). All four scans from 95 subjects were then z-scored and concatenated to give a final time-series of 200 brain regions and 418,000 time points.
#### iv.1.2 Discretizing BOLD Signals
Unfortunately, the \(\mathcal{H}_{sx}\) measure is only well-defined for discrete random variables. Consequently, we discretized our data by binarizing the z-scored time series: setting any value greater than zero to one and any value less than zero to zero. Prior work has established that transforming BOLD signals into binary point processes preserves the majority of the total correlation structure [65, 30], so we are confident that our analysis is robust, especially considering the large number of samples.
We chose to binarize about the mean of the z-scored signal (as opposed to alternative point-processing techniques such as local maxima), as this ensures that each individual channel is generally maximally entropic (i.e. \(\mathcal{P}(X_{i}=1)\approx\mathcal{P}(X_{i}=0)\approx 1/2\)). Every individual channel therefore has approximately the same entropy, and so deviations from maximum entropy at the level of the entire triad or tetrad can _only_ emerge from correlations between two or more channels, rather than being influenced by biases at the channel level. The choice to binarize about the mean also links this work to previous work on decomposing functional connectivity into discrete partitions [30].
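As an illustration, a minimal sketch of this binarization step in Python (the `bold` array and its dimensions here are stand-ins of our own, not the actual preprocessed data):

```python
import numpy as np

rng = np.random.default_rng(0)
bold = rng.standard_normal((200, 10_000))  # stand-in for z-scored BOLD data

binary = (bold > 0).astype(np.uint8)       # 1 above the mean, 0 below

# z-scoring leaves each channel near-maximally entropic:
# P(X_i = 1) ~ P(X_i = 0) ~ 1/2
p_one = binary.mean(axis=1)
print(p_one.min(), p_one.max())            # both should be close to 0.5
```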
### Statistical Analyses
#### iv.2.1 Triads & tetrads
In standard FC analysis, it is typical to compute the pairwise correlation between all pairs of brain regions, resulting in \(\binom{N}{2}\) unique pairs. For this analysis, we computed all triads of brain regions, resulting in \(\binom{200}{3}=1,313,400\) unique triples. For each triad, we computed the joint entropy and performed the full partial entropy decomposition to compute each of the eighteen partial entropy atoms. Finally, each of the atoms was normalized by the total joint entropy to give a measure of how much each atom contributes to the whole entropy. This allows us to directly compare triads that have different joint entropies.
It was not feasible to brute-force all possible tetrads, a set of approximately sixty-four million. Instead, we randomly sub-sampled sets of four regions, collecting 1,954,000 tetrads (\(\approx 3\%\) of the total space) and analyzing them.
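For concreteness, the enumeration and sub-sampling can be sketched as follows (the PED itself is computed with the SxPID package listed under Software; all names here are our own):

```python
import math
import random
from itertools import combinations

n_nodes = 200
assert math.comb(n_nodes, 3) == 1_313_400    # every triad is tractable
triads = combinations(range(n_nodes), 3)     # lazy iterator over all triads

# math.comb(200, 4) ~ 64.7 million tetrads is too many to brute-force,
# so draw unique four-element sets at random instead (~3% of the space)
rng = random.Random(0)
tetrads = set()
while len(tetrads) < 1_954_000:
    tetrads.add(tuple(sorted(rng.sample(range(n_nodes), 4))))
```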
#### iv.2.2 Bivariate functional connectivity networks
To directly compare the PED framework to the standard, correlation-based FC network framework, we constructed a single, representative FC network by computing the pairwise mutual information between every pair of regions in the fMRI scan (as was done in [39]):
\[\mathcal{I}(X;Y)=\mathcal{H}(X)+\mathcal{H}(Y)-\mathcal{H}(X,Y) \tag{28}\]
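For binarized data, Eq. (28) can be computed directly from empirical state counts; a self-contained sketch (these helpers are ours, not the SxPID implementation):

```python
import numpy as np

def entropy(*channels):
    """Joint Shannon entropy (bits) of one or more discrete 1-D series."""
    states = np.stack(channels, axis=1)   # shape (T, n_channels)
    _, counts = np.unique(states, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    # Eq. (28): I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(x) + entropy(y) - entropy(x, y)
```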
#### iv.2.3 Subgraph Analysis
Since we are interested in how the bivariate FC framework reflects (or fails to reflect) higher-order redundancies and synergies, we also compute a battery of structure metrics on matching subgraphs taken from the FC network. Following the formalism presented by Onnela et al. [31], we consider the arithmetic mean of the subgraph connectivity:
\[\mathcal{G}_{\mathfrak{X}}(\mathbf{X})=\frac{\sum_{i\neq j}\mathcal{I}(X_{i} ;X_{j})}{|\mathbf{X}|^{2}-|\mathbf{X}|} \tag{29}\]
For a given triad or tetrad \(\mathbf{X}\), we compared the mean FC density to the various redundant and synergistic information-sharing structures of \(\mathbf{X}\).
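A sketch of Eq. (29), reusing the `mutual_information` helper above (function and argument names are our own):

```python
from itertools import permutations

def subgraph_density(data, nodes):
    """Mean pairwise MI over a triad/tetrad; data is a (regions, T) array."""
    pairs = list(permutations(nodes, 2))   # all ordered pairs with i != j
    total = sum(mutual_information(data[i], data[j]) for i, j in pairs)
    return total / len(pairs)              # len(pairs) == |X|**2 - |X|
```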
#### iv.2.4 Community Detection on Bivariate Matrices
Multi-resolution consensus clustering [32] was used to detect network communities in the functional connectivity matrix across multiple scales. The algorithm proceeds in three main stages. In the first stage, modularity maximization using the Louvain method was performed for 1,000 different values of the resolution parameter, \(\gamma\). This produced a range of \(\gamma\) values that resulted in partitions having between 2 and \(N\) communities. The second stage consisted of a more fine-grained sweep (10,000 steps) over the \(\gamma\) values defined in the first stage of the process. We aggregated the partitions produced by this sweep into a node-by-node co-classification matrix storing how frequently nodes are partitioned into the same community. A null model with expected values of co-classification based on the size and number of communities was subtracted from the co-classification matrix [32]. Finally, in the third stage, the null-adjusted co-classification matrix was clustered again using consensus clustering with 100 repetitions and a consensus threshold \(\tau\) of 0 [66]. The resulting partition was used for analyses.
We assessed the similarity between single-subject partitions and consensus partitions using Normalized Mutual Information (NMI). Each partition can be formalized as a vector of integers of dimension \(N\) whose entries denote the nodes' allegiance to communities. NMI estimates the similarity between two partitions by counting co-occurrences in the two vectors.
We computed NMI between each one of the 95 single-subject partitions and the consensus partition, for both the redundancy and synergy hypergraphs. We assessed the significance of NMI values by comparing them with a null case obtained by randomly shuffling the community labels in the single-subject partitions 1,000 times. The \(p\)-values of the statistical test, calculated as the fraction of null-case NMI values greater than the actual NMI, were corrected with the Benjamini-Hochberg procedure.
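This test can be sketched with scikit-learn's NMI implementation (the permutation loop and names are our own illustration):

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score as nmi

def nmi_p_value(subject_labels, consensus_labels, n_null=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = nmi(subject_labels, consensus_labels)
    null = [nmi(rng.permutation(subject_labels), consensus_labels)
            for _ in range(n_null)]
    # p-value: fraction of null NMI values greater than the observed NMI
    return float(np.mean(np.asarray(null) > observed))
```

The resulting vector of per-subject \(p\)-values can then be corrected for multiple comparisons, e.g. with `statsmodels.stats.multitest.multipletests(pvals, method="fdr_bh")`.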
#### iv.2.5 Null Model
To ensure that the statistical dependencies we were observing reflect non-trivial interactions, we significance-tested triads and tetrads against a null distribution composed of one million maximum-entropy null models. We constructed sets of totally independent, maximum-entropy binary time series and computed the PED on each set of three or four null channels. From this, we can construct distributions of the expected redundant and synergistic structure under the null, against which to compare the empirical triads and tetrads.
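Such a null is simple to generate; a minimal sketch (naming is ours):

```python
import numpy as np

def max_entropy_null(n_channels, n_samples, rng):
    """Fully independent, unbiased binary channels."""
    return rng.integers(0, 2, size=(n_channels, n_samples), dtype=np.uint8)

rng = np.random.default_rng(0)
null_triad = max_entropy_null(3, 1_100, rng)  # run through the same PED code
```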
#### iv.2.6 Hypergraph Community Detection
Each of the triads can be thought of as a hyper-edge on a 3-uniform hypergraph of 200 nodes. For the synergistic structure, we selected only those hyperedges that had a _greater_ synergistic structure than any of the one million maximum-entropy nulls that formed our null distribution. This resulted in a hypergraph with 200 nodes and 3,746 hyper-edges. We used the same criteria to build a redundant structure hypergraph using the top 3,746 most redundant hyperedges.
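A sketch of this thresholding step with stand-in inputs (the `synergy` values and `null_max` cutoff below are placeholders, not our measured values):

```python
import random
from itertools import combinations

random.seed(0)
# stand-ins: synergistic structure per triad, and the largest null value
synergy = {t: random.random() for t in combinations(range(200), 3)}
null_max = 0.9975

hyperedges = {i: list(t) for i, (t, v) in enumerate(synergy.items())
              if v > null_max}
# hyperedges maps edge ids to 3-node lists, suitable for constructing a
# hypernetx.Hypergraph(hyperedges) for the clustering described below.
```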
Both hypergraphs were clustered using the HyperNetX package (available on Github: [https://github.com/pnnl/HyperNetX](https://github.com/pnnl/HyperNetX)) implementation of the hypermodularity optimization by Kumar and Vaidyanathan et al., [33].
Briefly, the algorithm by Kumar and Vaidyanathan et al. takes a modularity maximization approach to partitioning the vertices of a hypergraph into non-overlapping communities. In dyadic networks, the modularity function compares the distribution of within- and between-community edges to the expected distribution based on a degree-preserving, configuration null model [67]. In the case of hypergraphs, a hyper-configuration model can be used instead. A generalized modularity metric can then be used as an objective function in a Louvain-based, modularity maximization search.
#### iv.2.7 Temporal Structure
To explore the temporal structure of the state-transition series, we used the active information storage [68; 69] (a measure of how predictable the future is given the past) and the determinism [42; 43] (a measure of how constrained the future is given the past). For a one-dimensional, discrete random variable \(X\) that evolves through time, we can compute the information that the past \(X_{t-1}\) discloses about the future \(X_{t}\) with the mutual information:
\[AIS(X)=I(X_{t-1};X_{t}) \tag{30}\]
This measure quantifies the degree to which knowing the past reduces our uncertainty about the future. This term can be further decomposed into two components: the determinism and the degeneracy [42]:
\[I(X_{t-1};X_{t})=Det(X)-Deg(X) \tag{31}\]
Where determinism is:
\[Det(X)=\log_{2}(N)-H(X_{t}|X_{t-1}) \tag{32}\]
And degeneracy is:
\[Deg(X)=\log_{2}(N)-H(X_{t}) \tag{33}\]
The determinism quantifies how reliably a given past state \(x_{t-1}\) leads to a single future state \(x_{t}\). If \(P(x_{t}|x_{t-1})\approx 1\), then we say that \(x_{t-1}\)_deterministically_ leads to \(x_{t}\).
We significance tested both the active information storage and the determinism by comparing the empirical values to an ensemble of ten thousand randomly permuted nulls generated by shuffling the time series. Since the degeneracy is unchanged by permutation of the temporal structure (since the marginal entropy \(H(X_{t})\) is the same), any changes in active information storage produced by shuffling must be driven by changes in the determinism.
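A compact sketch of Eqs. (30)-(33) and the permutation null for a discrete state sequence (helper names are ours):

```python
import numpy as np

def _H(states):
    _, counts = np.unique(states, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def ais_det_deg(x, n_states):
    past, future = np.asarray(x[:-1]), np.asarray(x[1:])
    h_cond = _H(np.stack([past, future], axis=1)) - _H(past)  # H(X_t|X_{t-1})
    det = np.log2(n_states) - h_cond                          # Eq. (32)
    deg = np.log2(n_states) - _H(future)                      # Eq. (33)
    return det - deg, det, deg                                # AIS = Det - Deg

# Permutation null: shuffling x leaves H(X_t), and hence the degeneracy,
# unchanged, so null variation in AIS is driven purely by the determinism.
rng = np.random.default_rng(0)
x = rng.integers(0, 4, size=10_000)
null_ais = [ais_det_deg(rng.permutation(x), 4)[0] for _ in range(100)]
```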
### Software
All partial information/entropy decompositions were done using the SxPID package released with [21], which can be accessed on GitHub: [https://github.com/Abzinger/SxPID](https://github.com/Abzinger/SxPID). All scripts required to reproduce this analysis will be attached as supplementary material to the final published work.
|
2310.13224 | Adaptive Experimental Design for Intrusion Data Collection | Intrusion research frequently collects data on attack techniques currently
employed and their potential symptoms. This includes deploying honeypots,
logging events from existing devices, employing a red team for a sample attack
campaign, or simulating system activity. However, these observational studies
do not clearly discern the cause-and-effect relationships between the design of
the environment and the data recorded. Neglecting such relationships increases
the chance of drawing biased conclusions due to unconsidered factors, such as
spurious correlations between features and errors in measurement or
classification. In this paper, we present the theory and empirical data on
methods that aim to discover such causal relationships efficiently. Our
adaptive design (AD) is inspired by the clinical trial community: a variant of
a randomized control trial (RCT) to measure how a particular ``treatment''
affects a population. To contrast our method with observational studies and
RCT, we run the first controlled and adaptive honeypot deployment study,
identifying the causal relationship between an ssh vulnerability and the rate
of server exploitation. We demonstrate that our AD method decreases the total
time needed to run the deployment by at least 33%, while still confidently
stating the impact of our change in the environment. Compared to an analogous
honeypot study with a control group, our AD requests 17% fewer honeypots while
collecting 19% more attack recordings. | Kate Highnam, Zach Hanif, Ellie Van Vogt, Sonali Parbhoo, Sergio Maffeis, Nicholas R. Jennings | 2023-10-20T02:02:51Z | http://arxiv.org/abs/2310.13224v1 | # Adaptive Experimental Design for Intrusion Data Collection
###### Abstract
Intrusion research frequently collects data on attack techniques currently employed and their potential symptoms. This includes deploying honeypots, logging events from existing devices, employing a red team for a sample attack campaign, or simulating system activity. However, these observational studies do not clearly discern the cause-and-effect relationships between the design of the environment and the data recorded. Neglecting such relationships increases the chance of drawing biased conclusions due to unconsidered factors, such as spurious correlations between features and errors in measurement or classification. In this paper, we present the theory and empirical data on methods that aim to discover such causal relationships efficiently. Our **adaptive design (AD)** is inspired by the clinical trial community: a variant of a randomized control trial (RCT) to measure how a particular "treatment" affects a population. To contrast our method with observational studies and RCT, we run the first controlled and adaptive honeypot deployment study, identifying the causal relationship between an ssh vulnerability and the rate of server exploitation. We demonstrate that our AD method decreases the total time needed to run the deployment by at least 33%, while still confidently stating the impact of our change in the environment. Compared to an analogous honeypot study with a control group, our AD requests 17% fewer honeypots while collecting 19% more attack recordings.
## 1 Introduction
Automated cyber intrusion attacks continuously scan and probe internet-connected systems [1, 2]. The state of the art in cyber intrusion defenses employs observational techniques augmented with automated statistical techniques, including temporal point processes and machine learning. This approach has been effective, but is susceptible to a variety of biases that might mislead such solutions or prevent them from generalizing or learning more quickly [3, 4].
We aim to limit the impact of potential bias by improving the datasets that train intrusion detection methods.
Intrusion datasets can be acquired from third party vendors or compiled by recording logs from existing, simulated, or newly deployed research infrastructure [5, 6, 7, 8, 9, 10]. Conventionally, these methods provide **observational data**, containing information on current attacks implemented against the given systems and how to observe the attacks in the given environment. However, even in large volumes, observational data has a high potential for bias due to uncontrolled characteristics, including possible spurious correlations between variables and outcomes or measurement error [11, 4, 12, 9, 13].
To limit potential erroneous conclusions by both statistical models and researchers, we explore intrusion data collection with a **control group**: a collection of systems studied that are not altered to compare with identical systems that have been altered. Our usage of control groups in an experimental study draws inspiration from clinical research, one of the oldest fields conducting control-based studies [14, 15]. In healthcare, a typical control group study randomly recruits a subset of a population to remain untreated (as the control) and treated (as the altered version). This is known as a **randomized-control trial** (RCT), the gold standard for clinical trial methods; its random assignment to groups minimizes the impact of researcher biases while evaluating causal relationships [16]. Our method is based on **adaptive design** (AD): a variant of RCT that adds pre-planned opportunities to modify aspects of an ongoing trial in response to data accumulated during the study, without invalidating its integrity [17, 18, 19]. RCT and AD both account for known conditions and unforeseen events (e.g., a pandemic or war) which might require the trial to end early by separating the trial into multiple stages to run interim analysis.
Unlike clinical trials with human patients, intrusion research aims to increase the occurrence of events of interest (i.e., intrusions or exploits). See Table 1 for some of the terminology from healthcare mapped to security as it is used to define our work. To demonstrate our intrusion-focused interventional methods, we use a **honeypot**, a common tool for recording intrusion data. A honeypot is an intentionally vulnerable system with covert monitoring that is used to both entice and observe attackers without their knowing [20].
Traditional honeypot deployments - or "vanilla" deployments as we will call them in this
\begin{table}
\begin{tabular}{c|c}
**Healthcare** & **Security** \\ \hline \hline
“trial” & “a study comparing honeypots with and without a vulnerability” \\ \hline
“study population” & “our Ubuntu honeypots with our host-based sensors” \\ \hline
“patient” or “participant” & “a honeypot” \\ \hline
“recruiting more subjects” & “starting more honeypots with specific characteristics” \\ \hline
“disease” & “attacker technique for exploit” \\ \hline
“intervention” or “treatment” & “corruption” or “the presence or insertion of a vulnerability” \\ \hline
“treated” & “corrupted”
\end{table}
Table 1: Mapping of healthcare terminology to security terminology for this paper.
paper - expose a large number of identical vulnerable systems for a particular (extended) length of time to collect intrusion data [21, 22, 23, 20, 24, 25]. While sufficiently large and long-lived vanilla deployments all but guarantee observations and can summarize the general state of automated threats, they carry several risks and costs that could be unacceptable. If a meaningful quantity of identical honeypots were left online, it would provide an opportunity for adversaries to identify the presence of the employed monitoring tools. This can hinder observations (i.e., bias the data) and render the tools useless (i.e., when adversaries stop acting after detecting active monitoring or debugging tools). Additionally, large scale deployments cost time and money, which absorbs budget, and can hinder or preclude timely observations.
In this paper, we present the first control-based deployment method for honeypots to optimize resource allocation and limit honeypot exposure. Our method is used in an exemplary study to determine the impact of an ssh vulnerability on cloud servers across the United States. When compared to the vanilla deployment method with the same setup, we find that AD can determine the impact of the vulnerability in 33% of the total trial duration, while limiting the likelihood of error and requesting 17% fewer honeypots overall. With a control group, our AD collects 19% more attack recordings than the RCT trial.
Our contributions in this work are as follows:
* The first adaptive method for a control study in security, optimizing resource allocation and duration of the study based on the events seen in prior stages and error tolerance.
* The first interventional study using honeypots, demonstrating the effectiveness of our adaptive method and how it helps attribution of environment changes during data collection.
Although we showcase our method with honeypot deployments, it can be used for other control studies in security applications. For example, one could study the impact of a new spam email training on the rate of spam emails being opened, or the impact of removing local file inclusion access on the exploitation rate of a web application hosting other vulnerabilities. Our AD strategy uses a new interpretation of clinical trial methodologies, encouraging infections rather than preventing them mid-trial. Additionally, our study ran using automated scripts, presenting the first fully-automated experimental study. This automation and cheaper application setting offer future inventors of new clinical trial methodologies a venue to showcase their improvements, rather than running an expensive trial with patients.
This paper is structured as follows. Section 2 reviews control studies in security and provides a brief background on the healthcare-based methods that inspired this work. We then introduce our new AD method in Section 3 while contrasting it with the vanilla and RCT methods. In Section 4, we implement our method against the vanilla and RCT methods in an exemplary honeypot deployment. We conclude and consider how the method might behave differently in other settings in Section 5.
## 2 Background
An experimental study starts from a hypothesis on how a change or treatment will alter an aspect of a given environment [26]. The hypothesis is then tested, in the simplest form, by observing a control group that is unchanged and comparing it with another (ideally identical) group that is then changed. An RCT provides some of the strongest evidence of an intervention's impact due to its random allocation of participants to treatment arms (there might be multiple treatments available) or a control arm (standard care or a placebo) [27, 28]. This process removes potential bias from unaccounted factors in the environment. Participants are then observed and their outcomes recorded.
Part of the rigor of RCTs is that all aspects of the trial conduct and (interim) analysis must be documented prior to the execution of the study. This prospective approach avoids the introduction of bias from investigators and statisticians mid-trial. An important part of this planning process is the sample size calculation. Using estimates and prior knowledge of interventions, trialists (those conducting the trial) can estimate the required number of participants that must be recruited in order to detect a significant difference in outcomes between the groups [29]. After approximating the needs and impacts of the study, the execution of the study should be justified.
For medicine, developing the justification to go from drug discovery to licensing can take an average of over 10 years [30]; recent advances in trial methodology have been able to improve the efficiency of trials in order to reduce this time [31]. The conduct and methodology of a clinical trial are highly regulated because of the direct involvement of patients [28]. However, this is not often a barrier to research in cyber security.
Security has several advantages in running experimental studies. Digital infrastructure is cheap with the advancement of cloud technologies [32]. Our honeypot study could have cost up to $2,000USD with 600 participants, compared to the millions of USD required in drug testing [33]. Digital resources can also be exactly copied as many times as needed, whereas biological studies must make strong assumptions about the similarity between patients. Security experimental studies can be quicker to complete if they consider the attacks that occur at a higher frequency and pace of development than a biological infection or disease.
In this section, we review previous experimental studies in security settings that consider or run control groups. We finish this section by briefly highlighting the other techniques developed to improve the classic experimental design that have spawned from the constrained medical setting.
### Control Trials in Security
Our work is not the first to apply clinical methodology within security; prior works focus on the interaction of security and users of digital systems. For example, Simoiu et al. [34] survey user awareness of ransomware in the general U.S. population. Lin et al. [35] analyze spearphishing cyber attacks and its correlation with various human demographics. A common human-oriented security study involves antivirus software and how it is used by the lay
person [36, 37, 38]. These works implement an experimental study to find strong indications of how successful antivirus software can be based on human performance. Yen et al. [39] further extends this research area by incorporating the users' job titles and responsibilities to contextualize the impact of malware within a company. However, we circumvent the recruitment (and cost) of human involvement by focusing on how these methods can be applied to digital systems with automated, autonomous threats.
Few experimental studies have been published without humans in cyber security. Bosnjak et al. [40] prepare an experimental study to systematically evaluate defenses for shoulder surfing attacks after an extensive literature review. Gil et al. [41] approach this by using a case-control study to identify complex relationships of threats to a single host within a large network. Although it is called a "study," a case-control study filters and randomly selects data from purely observational studies for its patient population. There is no interaction with the data collection process. Causal relations can be learned from such data, but there is no control for error or bias. We discuss how our method controls for error in Section 3.
These studies indicate a major challenge in security experimental studies: the need for human-interpretable interventions. Recording data in medical settings is relatively straightforward, e.g., heartbeats per minute or body temperature indicating there is a fever. Understanding how these indicate a particular disease is also fairly intuitive. But translating host-based logs to indications of unwanted activity in a system is immensely difficult, let alone stating the type of unwanted activity. Thus, mapping "symptoms" from sensor logs can be difficult unless we control our human-level interventions.
### Advances in Control Trial Methodology
Randomization limits the impact of unknown external factors in influencing a participant's chance of receiving a treatment; we therefore expect the baseline characteristics of participants to be similar between studied groups. If there is a concern about some baseline characteristics that may be prognostic, then we can stratify randomization based on these variables without loss of statistical strength [42].
Traditional RCTs are known for their rigor and complete pre-specification of procedure. A common adaptation to an RCT is the inclusion of stopping rules for efficacy, safety, or futility [43]. If the trial has gathered enough evidence that an intervention is effective, or conversely that the intervention is harmful, then the study can cease, saving resources for a future study or a re-run of the same study with corrections. Similarly, interim checks would also catch if there is not enough evidence of an intervention's effect to reach a significant conclusion [44].
Contemporary approaches to running RCTs aim to make the process of evaluating an intervention faster and more efficient [31]. As mentioned, we implement one such method, adaptive design (AD). The principle of AD is that it permits certain aspects of a study to be modified intermittently based on available evidence.
Interim data used to inform stopping decisions can also be used to inform an updated sample
size calculation, or even to update randomization allocation proportions [19]. A well-known example of this is the REMAP-CAP study, which has many treatments available across multiple domains for treating community acquired pneumonia, including severe COVID-19, in intensive care unit settings [45]. Platform trials have also been used to successfully evaluate many different types of interventions simultaneously [46]. Monthly interim analyses are conducted and a Bayesian model is used to update randomization probabilities for new participants entering the trial, so that patients are randomized to treatments that are more likely to benefit them [47]. In the present study, we draw on the concept of response adaptive randomization to optimize the allocation of honeypots.
## 3 Adaptive Design for Security Applications
In this section, we define our methods for interventional data collection in security applications. This setting typically studies adversarial effects, i.e., encouraging intrusions as data are collected. We call the change or treatment (e.g., a drug or surgical procedure) made to a population a **corruption1**. We shall now enumerate the key terms to document prior to executing a study as we define our AD method.
Footnote 1: We chose corruption to remove the benevolent intentions frequently affiliated with “treatment” from healthcare settings. A reminder that the translations for other clinical trial terms can be found in Table 1.
The **population** considered in the experiment is assumed to be a device or contained system that is or is not corrupted. Deploying a copy of these devices or systems is the same as recruiting a patient into a study. The goal of the study is to achieve a set of objectives, evident from observing a particular **event of interest**. One can identify an event of interest through the recorded logs on the population; these events should be clear evidence that the corruption caused some change in system behavior. For example, if the corruption is a new login website and we are interested in its effect on attempted SQL injections, then events of interest should be a record of when an SQL injection occurs.
The **objectives** of our methods are always two fold:
1. Confirm evidence of corruption's impact within the population.
2. Maximize the recording of events of interest.
Returning to the SQL injection example, if we wanted to collect a diverse range of attacks rather than automated repeated uses of the same attack, the events of interest could be only recorded if not seen prior.
Before recording events and running the study, it is crucial to accurately define **endpoints** to anticipate possible errors or miscalculations. Similar to clinical trials, we recommend setting an endpoint bounding the trial resources by the given budget. We also recommend stopping the trial early if the adaptive design tries to deploy a group that is too small. If this occurs in the early stages of the trial, it can indicate the rates of allocation have converged to nothing conclusive. This should be followed with a manual review by human experts. While it might seem inconsequential, it is good practice to list obvious endpoints. This might
include recording an unexpected exploit technique or an overwhelming number of exploits that break data collection infrastructure. All of these details must be defined prior to the study execution to maintain the robustness of the trial.
### Trial Methodologies
In this section, we review each method for comparison to our AD before using them in a honeypot study (Section 4). The pseudo code for the traditional observational study (vanilla), RCT, and our AD is presented in Methods 1, 2, and 3, respectively. See Appendix A for the definitions of the functions. The highlighting indicates the similar lines between the algorithms. Notably, the RCT and AD trials are split into \(s\) stages with early stopping - shown as the loops on line 2 in both Methods 2 and 3, highlighted in pink. Each stage deploys some proportion of control and corrupted systems, waits for the stage duration, saves the logs, cleans up the deployment, and reviews the logs from the stage to see if an endpoint condition has been reached. The difference between the standard RCT and our AD is what occurs during the interim update.
The vanilla deployment (Method 1) takes a given budget \(b\) and the maximum trial duration\({}^{2}\)\(t\) to determine the maximum number of devices \(N\) that can be observed within this study - noted as GetNumToDeploy(\(b\), \(t\)) on line 1. Then \(N\) altered devices are deployed for observation during the "trial"; no control devices are present.
Footnote 2: We say in the methods that this is given in hours, but any time duration works here.
In contrast, the RCT (Method 2) and AD (Method 3) account for the risk of error when selecting how many devices to study using **power analysis**\({}^{3}\). We pass four parameters into the power analysis equation from HECT [48]: the probability of committing a Type I error (\(\alpha\)), the probability of committing a Type II error (\(\beta\)), and the rates of incidence for the control and corrupted groups. A Type I error means claiming an effect is present due to the corruption when it is not true. A Type II error means the study did not collect evidence of an effect when it is correct. The **power** of a study is the inverse of the likelihood of committing a Type II error (\(1-\beta\)). The rates of incidence for the control and corrupted groups are initially determined by a pilot study or an educated guess based on related reports. They are an approximation of the rate at which an event of interest should be observed within the stage. The power analysis equation returns the total number \(N_{\text{total}}\) of devices needed to deploy in each stage. We equally split this value for RCT and the initial stage of AD. After the first stage of AD, we use the updated rates from the prior stage to weight the split of \(N_{\text{total}}\), adapting the allocation of resources mid-trial.
Footnote 3: This calculation can be found in more detail in Appendix A.
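We do not reproduce the HECT equation itself; as an illustration of the role it plays, the following sketch uses the standard normal-approximation sample size for comparing two proportions (all names and the commented re-weighting rule are our own assumptions):

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, beta=0.10):
    """Per-arm sample size for detecting a difference between two rates."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n_total = 2 * n_per_group(0.01, 0.40)  # split evenly for RCT / AD stage 1
# One plausible AD re-weighting for later stages, using updated rates:
# n_corrupt = round(n_total * p2 / (p1 + p2)); n_control = n_total - n_corrupt
```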
Based on the responses seen in the previous stage within an AD trial, trialists can use interim analysis to make pre-defined changes that will not invalidate or weaken the power of a study. Our AD updates the next population counts for control and corrupted. To not risk weakening the power of our study, we apply this update indirectly between stages through the assumed rates of incidence (\(p_{1}\) and \(p_{2}\)).
From the logs we have a complete view into the events of interest for the study. Our AD method assumes that each participant will have at most one event of interest before terminating the system, removing it from the trial. In the case of honeypots, this would be to protect the honeypot from becoming a launchpad or providing free resources to attackers. To calculate the likelihood of an event of interest occurring, i.e., the rate of incidence within a group, we use a **Kaplan-Meier (KM) Function**, a popular approach for survival analysis within healthcare applications[49, 50].
During a stage, the KM function updates the likelihood \(S\) upon every event of interest recorded. At \(t=0\), all participants are at risk and \(S(t)=1\). The remaining time steps update following this:
\[S_{t+1}=S_{t}\times\frac{N_{t}-D_{t+1}}{N_{t}} \tag{1}\]
where \(N_{t}\) is the current number of participants at risk and \(D_{t+1}\) is the number of participants that have seen events of interest since \(t\). The decrement \(D_{t+1}\) is not always one. If we know exactly when all participants see an event of interest, the KM function is calculated whenever an infection is recorded. In trials without this ability, a time interval must be set to check with the participants (e.g., every hour or half-hour) to collect data and see if an event has been recorded.
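A sketch of this update as it would run at fixed check-in intervals (the function and the example numbers are ours):

```python
def kaplan_meier(n_start, events_per_check):
    """Survival curve S; events_per_check[i] is D_{t+1} at the i-th check."""
    s, n_at_risk, curve = 1.0, n_start, [1.0]
    for d in events_per_check:
        s *= (n_at_risk - d) / n_at_risk   # Eq. (1)
        n_at_risk -= d                     # exploited honeypots are terminated
        curve.append(s)
    return curve

# e.g. 24 honeypots checked hourly; 1 - S at stage end estimates the
# group's rate of incidence, which feeds back into the power analysis.
print(kaplan_meier(24, [0, 3, 5, 2]))
```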
## 4 Adaptive Design for Honeypot Deployments
We demonstrate the capabilities of our AD for intrusion data collection in a sample study using honeypots. Our method can be applied in other intrusion data collections, but we chose one to illustrate its specific capabilities. In this study we analyze the risk of an ssh vulnerability within misconfigured cloud servers. This scenario is based on a dataset used as a pilot study for our trials\({}^{4}\). The dataset contains a variety of attacks via the ssh vulnerability, but we can only empirically infer how the presence of this vulnerability affects its likelihood of exploitation. Although the presence of an ssh vulnerability is well known to affect the rate of exploitation in a server, this study emphasizes how our method provides evidence on a corruption's impact and the benefits of its adaptation.
Footnote 4: This citation is removed for anonymity.
This study includes Methods 1, 2, and 3 in separate trials, each attempting to collect evidence of the corruption's impact. Our budget restricts each trial to a maximum of 200 honeypots (approximately $650USD) over 12 hours. As recommended in Section 3, this threshold is noted as the first of our early stopping criteria. Based on the pilot study, we assume an initial rate of incidence in the control to be 0.01 and in the corrupted to be 0.4. We ensure the study only considers strong evidence by limiting the chance of error\({}^{5}\), setting \(\alpha=0.05\) and \(\beta=0.10\). The remaining details for our honeypot studies are summarized in our Study Synopsis (Section 4.1). We then review the results of the study in Section 4.2.
Footnote 5: It is generally accepted in healthcare settings to set \(\alpha=0.05\) and \(\mathrm{power}=80\%\) (meaning \(\beta=0.2\)). We assume a power of 90% (\(\beta=10\%\)) because we know there is a large difference between the control and the corrupted.
### Study Synopsis
Following the guidance issued for clinical trials [51], we summarize the characteristics of our study below:
**Study Duration**: The maximum total duration per trial is 12 hours. This is applied across the three trial methodologies compared in this study.
**Objectives**: (1) Determine if the corruption causes a significant increase in exploitation rate. (2) Maximize the exploitation rate for honeypots in the U.S. by region in the time specified by the trial or stage.
**Endpoints**: (a) The maximum number of honeypots that can be recruited into the study is 200. (b) The total number of honeypots allocated in the corruption group is below 10 (indicating the event of interest is not recorded frequently enough to study in this duration). (c) The number of honeypots allocated is identical to the last stage of the AD trial, indicating strong evidence has been collected regarding the current rates of incidence.
**Study Population**: Cloud-based honeypots monitored with a kernel-level sensor recording all create, clone, and kill system calls. Each honeypot runs with 1 vCPU, 32GB memory, and Ubuntu 20.04. There are no additional programs or fake activity and no active connection between honeypots. They are all hosted by the same large-scale cloud provider within the U.S. that instantiates identical servers with unique, randomly assigned IP addresses upon request. The IP address ranges are based on the requested region; our study only considers four regions within the U.S.: east-1, east-2, west-1, and west-2.
**Study Corruption and Control**: The corruption is an ssh vulnerability that accepts any password for four fake user accounts mimicking IT support accounts on industrial infrastructure: user, administrator, serv, and support. This corruption is an exaggerated version of a common misconfiguration seen in cloud servers [2, 52, 53]. Control honeypots host the same user accounts but only accept "password" as the password. We chose the word "password" as the password based on evidence of attackers scanning for it in cloud provider networks [25].
**Event of Interest**: We record an event when a user login is seen in one of the four user accounts. Because we never login or generate fake calls to login, any user login seen is considered malicious.
**Measuring Corruption Effect**: This study assumes a binary state model to describe each honeypot as whether an intrusion has or has not occurred. The state of the honeypot is determined by real-time monitoring of the logs to deal with ethical issues that may arise from purposefully exposing compute to adversaries. An exploit is assumed to not have occurred until this event is seen.
To prevent providing free resources to the attacker or opportunities to launch further attacks, we terminate the instances upon recording an event of interest. Although this limits the data we acquire from the study, it satisfies our objectives in recording events of interest. This can be re-evaluated in alternative studies based on new objectives.
Each honeypot functions independently with no communication between the honeypots in the same trial. Their logs are aggregated on a central queuing system within their region (for their trial) and downloaded before termination. Because we use kernel-level sensors, our implementation can be easily extended for other objectives and vulnerabilities.
### Results
The total number of honeypots deployed and attacks recorded is shown in Table 2. As expected, the AD trial deployed the fewest honeypots overall while recording more attacks than RCT. The AD trial recorded around 36% of the total attacks seen in the vanilla trial, which saw the highest number of intrusions. This is because the vanilla trial did not deploy any control honeypots, which had a small rate of incidence. However, by not including a control group, it does not account for potential bias in the corruption implementation,
preventing it from confidently identifying the causal relationship of the corruption's effect. Even so, the data collected could still be used with the rigorous documentation stating the assumptions made in the study. This enables other researchers to review it independently and determine if the study's findings are relevant for their environments.
From the trials with a control group, it is clear that the corruption causes an increase in exploitation rate. As can be seen in the trial summary in Table 3, more control honeypots were exploited than expected in the AD trial, causing a disparity between our initial assumption (\(p_{1}=1\%\)) and the results from the first stage (marginalized \(p_{1}=15\%\)). This caused our AD method to reallocate resources, requesting more honeypots to accommodate for error in the initial assumptions without triggering an endpoint or requiring the study to be reevaluated. After the second stage of the AD trial concluded with no exploits in the control group, the control arm was dropped, confirming that the corruption led to more infections.
We could have ended the AD trial after the first stage (saving 66% of the trial's budget) if Endpoint (b) included the control group in the group size minimum. This was not done because of the second objective to collect intrusion data. Even with the larger request in stage two, the AD trial requested fewer honeypots than both the vanilla trial and the RCT. The stages within the trial also limited the time a honeypot was online, denying adversaries extended time to develop a signature for our trap.
Although there was an exponential rate of exploit across the trials, we noticed signs of instability in the four-hour stages, where some regions were observed to have different exploitation rates. This was especially apparent when comparing their respective survival curves by region and stage within the RCT trial, shown in Figure 1(a) and Figure 1(b). Around 60-120 minutes into the stage, the rate of infection in the regions diverges as though us-west-1 was
\begin{table}
\begin{tabular}{c||c|c||c|c}
**Method for Trial** & **Control** & **Corrupted** & **Total Deployed** & **Total Attacks Seen** \\ \hline \hline
Vanilla & 0 & 140 & 140 & **137** \\ \hline
RCT & 72 & 72 & 144 & 42 \\ \hline
AD & 32 & 87 & **119** & 50
\end{table}
Table 2: Comparison of honeypot deployment methods used in each 12-hour trial.
\begin{table}
\begin{tabular}{c c||c c c c|c c c c|c} & & \multicolumn{4}{c}{**Control**} & \multicolumn{4}{c}{**Corrupted**} & \multicolumn{1}{c}{**Total**} \\
 & & e-1 & e-2 & w-1 & w-2 & e-1 & e-2 & w-1 & w-2 & \\ \hline \hline
**Stage 1** & RCT & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 48 \\
 & AD & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 48 \\ \hline
**Stage 2** & RCT & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & **48** \\
 & AD & 4 & 4 & 0 & 0 & 8 & 12 & 8 & 16 & 52 \\ \hline
**Stage 3** & RCT & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 48 \\
 & AD & 0 & 0 & 0 & 0 & 2 & 5 & 8 & 4 & **19** \\
\end{table}
Table 3: The deployed honeypots by region and stage for the RCT and the AD trial. Region names are abbreviated to e for “east” and w for “west”.
hit first, then us-east-2, us-west-2, and us-east-1, respectively. From the IP addresses of the hosts, there is no obvious indication of sequential IP scanning. This instability is another result of this study, so future work can note the impact of smaller time windows. Because this was unanticipated at the start of this study and not a previously listed early stopping criterion, future work will include it to provide an opportunity for the trialists to discuss whether the trial should continue.
## 5 Conclusions
Our work is the first to apply an adaptive experimental study in intrusion data collection and discuss the benefits of collecting counterfactual information with a control group. We provide general details on running an experimental study with necessary factors to document prior to conducting the study. Our AD method extends this by optimizing resource allocation based on events seen at every stage, ensuring the statistical confidence through power analysis based on updated exploitation likelihoods with the assumption that an event only occurs once per participant. Because the interventional data collected contains true relations between features known through experimentation, future statistical models trained with this data are given higher confidence in learning general trends. This method is especially applicable for security studies seeking to identify causal relations between a corruption and automated attacks in the wild.
We then implemented our method in a honeypot study, confirming that the corruption (an ssh vulnerability) increased the infection rate of misconfigured cloud servers. This study also found that, while observational studies (i.e., the vanilla trial) record more intrusions, the presence of the control group (as in RCT and AD) is what enables us to identify the corruption effect. Our AD is capable of confirming the corruption effect more efficiently than RCT, requiring only 33% of the total trial duration to conclude corruption effect and using 17% fewer honeypots to see 19% more attacks. Prior to conducting the study, we knew the corruption would increase infection rate because attackers were provided more options for password entry, including the control's "password" for the same user accounts. Had the difference due to the corruption been less apparent (e.g., in altering multiple points of entry or limiting sequences
Figure 1: Comparison of the control and corrupted honeypots by 4-hour stage and region across the RCT trial.
of vulnerability exploits), our study would have taken more time and resources to collect evidence.
Future work should consider implementing multiple vulnerabilities to study the interaction of corruptions. For example, one could add vulnerable applications within the honeypots to either study the scanning and exploit of multiple existing programs or tracing the sequence of exploits from the ssh vulnerability to a vulnerable application. This would require introducing a new methodology that can simultaneously consider multiple treatment arms, such as REMAP-CAP [45]. By isolating causal relationships, we hope these data can assist in generalizing solutions, remove some bias in the data, and enable other improvements in the intrusion detection community.
|
2309.01743 | ALMA observations of the Extended Green Object G19.01$-$0.03: II. A
massive protostar with typical chemical abundances surrounded by four
low-mass prestellar core candidates | We present a study of the physical and chemical properties of the Extended
Green Object (EGO) G19.01$-$0.03 using sub-arcsecond angular resolution Atacama
Large Millimeter/submillimeter Array (ALMA) 1.05mm and Karl G. Jansky Very
Large Array (VLA) 1.21cm data. G19.01$-$0.03 MM1, the millimetre source
associated with the central massive young stellar object (MYSO), appeared
isolated and potentially chemically young in previous Submillimeter Array
observations. In our $\sim0.4''$-resolution ALMA data, MM1 has four low-mass
millimetre companions within 0.12pc, all lacking maser or outflow emission,
indicating they may be prestellar cores. With a rich ALMA spectrum full of
complex organic molecules, MM1 does not appear chemically young, but has
molecular abundances typical of high-mass hot cores in the literature. At the
1.05mm continuum peak of MM1,
$\mathrm{N}(\mathrm{CH}_{3}\mathrm{OH})=(2.22\pm0.01)\times10^{18}$cm$^{-2}$
and $T_{\mathrm{ex}} = 162.7\substack{+0.3 \\ -0.5}$K based on pixel-by-pixel
Bayesian analysis of LTE synthetic methanol spectra across MM1. Intriguingly,
the peak CH$_{3}$OH $T_{\mathrm{ex}}=165.5\pm0.6$ K is offset from MM1's
millimetre continuum peak by $0.22''\sim880$au, and a region of elevated
CH$_{3}$OH $T_{\mathrm{ex}}$ coincides with free-free VLA 5.01cm continuum,
adding to the tentative evidence for a possible unresolved high-mass binary in
MM1. In our VLA 1.21cm data, we report the first NH$_{3}$(3,3) maser detections
towards G19.01$-$0.03, along with candidate 25GHz CH$_{3}$OH $5(2,3)-5(1,4)$
maser emission; both are spatially and kinematically coincident with 44GHz
Class I CH$_{3}$OH masers in the MM1 outflow. We also report the ALMA detection
of candidate 278.3GHz Class I CH$_{3}$OH maser emission towards this outflow,
strengthening the connection of these three maser types to MYSO outflows. | Gwenllian M. Williams, Claudia J. Cyganowski, Crystal L. Brogan, Todd R. Hunter, Pooneh Nazari, Rowan J. Smith | 2023-09-04T18:00:04Z | http://arxiv.org/abs/2309.01743v1 | ALMA observations of the Extended Green Object G19.01-0.03: II. A massive protostar with typical chemical abundances surrounded by four low-mass prestellar core candidates
###### Abstract
We present a study of the physical and chemical properties of the Extended Green Object (EGO) G19.01\(-\)0.03 using sub-arcsecond angular resolution Atacama Large Millimeter/submillimeter Array (ALMA) 1.05 mm and Karl G. Jansky Very Large Array (VLA) 1.21 cm data. G19.01\(-\)0.03 MM1, the millimetre source associated with the central massive young stellar object (MYSO), appeared isolated and potentially chemically young in previous Submillimeter Array observations. In our \(\sim 0.4\arcsec\)-resolution ALMA data, MM1 has four low-mass millimetre companions within 0.12 pc, all lacking maser or outflow emission, indicating they may be prestellar cores. With a rich ALMA spectrum full of complex organic molecules, MM1 does not appear chemically young, but has molecular abundances typical of high-mass hot cores in the literature. At the 1.05 mm continuum peak of MM1, N(CH\({}_{3}\)OH) \(=(2.22\pm 0.01)\times 10^{18}\) cm\({}^{-2}\) and \(T_{\rm ex}=162.7^{+0.3}_{-0.5}\) K based on pixel-by-pixel Bayesian analysis of LTE synthetic methanol spectra across MM1. Intriguingly, the peak CH\({}_{3}\)OH \(T_{\rm ex}=165.5\pm 0.6\) K is offset from MM1's millimetre continuum peak by \(0.22\arcsec\sim 880\) au, and a region of elevated CH\({}_{3}\)OH \(T_{\rm ex}\) coincides with free-free VLA 5.01 cm continuum, adding to the tentative evidence for a possible unresolved high-mass binary in MM1. In our VLA 1.21 cm data, we report the first NH\({}_{3}\)(3,3) maser detections towards G19.01\(-\)0.03, along with candidate 25 GHz CH\({}_{3}\)OH \(5(2,3)-5(1,4)\) maser emission; both are spatially and kinematically coincident with 44 GHz Class I CH\({}_{3}\)OH masers in the MM1 outflow. We also report the ALMA detection of candidate 278.3 GHz Class I CH\({}_{3}\)OH maser emission towards this outflow, strengthening the connection of these three maser types to MYSO outflows.
keywords: stars: individual: G19.01-0.03 - stars: formation - stars: massive - stars: protostars - masers - techniques: interferometric
## 1 Introduction
High-mass stars (M\({}_{*}>8\) M\({}_{\odot}\)) are influential in the dynamical and chemical evolution of the interstellar medium (ISM), through their strong feedback, outflows and jets, and through enrichment of the ISM with heavy elements (e.g. Peters et al., 2017; Rosen and Krumholz, 2020; Mignon-Risse et al., 2021; Grudic et al., 2022). Constraining exactly how high-mass stars form however is hampered by their natal molecular clouds being significantly more distant (\(d>1\) kpc) and more clustered (\(n_{*}>100\) pc\({}^{-3}\)) than those of their low-mass counterparts (M\({}_{*}<8\) M\({}_{\odot}\)). Furthermore, the short pre-main sequence lifetimes (\(<1\) Myrs; Mottram et al., 2011) of high-mass stars ensures that the entirety of their formation is obscured in regions of high extinction (e.g. Chevance et al., 2020; Kim et al., 2021). Now with the advent of facilities capable of high-angular resolution observations in the (sub)millimetre such as the Atacama Large Millimeter/(sub)millimetre Array (ALMA), we have the ability to resolve and disentangle the thermal emission of the early stages of high-mass star formation.
The core-fed theory of massive star formation (McKee and Tan, 2003; Tan et al., 2014) describes the monolithic collapse of virialised high-mass prestellar cores that have ceased accreting material from their surroundings, and are supported against fragmentation into low-mass cores by magnetic and turbulent pressures. In this picture, high-mass prestellar cores (e.g. Motte et al., 2007) - starless and self-gravitating structures thought to form in infrared dark clouds (IRDCs; Rathborne et al., 2006; Peretto and Fuller, 2009) - are the earliest stage of massive star formation. Observationally, however, very few candidates of truly quiescent high-mass prestellar cores exist (e.g. Cyganowski et al., 2014, 2022; Duarte-Cabral et al., 2014; Wang et al., 2014; Kong et al., 2017; Nony et al., 2018; Barnes et al., 2023), suggesting they may either be very short lived (e.g. Motte et al., 2007; Kauffmann et al., 2013; Sanhueza et al., 2019), or not exist at all (e.g. Motte et al., 2018).
High-mass protostars (or massive young stellar objects, MYSOs, e.g. Hoare et al., 2005; Urquhart et al., 2008) are instead widely associated with active star formation signatures such as 6.7 GHz CH\({}_{3}\)OH, 22 GHz H\({}_{2}\)O, and NH\({}_{3}\)(3,3) masers (e.g. Pillai et al., 2006; Urquhart et al., 2011; Brogan et al., 2019; Jones et al., 2020; Towner et al., 2021). Signatures of ongoing accretion are also prevalent towards MYSOs, such as high-velocity bi-polar outflows (e.g. Beuther et al., 2002; Duarte-Cabral et al., 2013; Yang et al., 2022) and less commonly circumstellar accretion discs (e.g. Beltran et al., 2014; Johnston et al., 2015; Ilee et al., 2016; Cesaroni et al., 2017; Maud et al., 2018; Dewangan et al., 2022). MYSOs are typically observed to have strong (sub)millimetre continuum, and a sub-population of MYSOs - classed as Extended Green Objects (EGOs; Cyganowski et al., 2008, 2009) - exhibit extended 4.5\(\mu\)m emission attributed to shocks in outflows driven by the central MYSO(s). Many of these EGOs exhibit only weak centimetre continuum emission (Cyganowski et al., 2011; Towner et al., 2021).
The clump-fed theory of massive star formation (Bonnell et al., 2001; Smith et al., 2009) describes the competitive accretion of clusters of initially low-mass star progenitors. High-mass star progenitors develop through the continued accretion of material by protostellar sources that find themselves at the centre of the gravitational potential of the cluster. This requirement for a continually accreting protocluster in the clump-fed theory is a key factor that distinguishes it from the monolithic core-fed theory, and is a crucial observable for constraining models of early massive star formation (e.g. Cyganowski et al., 2017; Issac et al., 2020; Law et al., 2022).
Complex organic molecules (COMs) are molecular species that contain at least one carbon atom and a total of at least 6 atoms (Herbst and van Dishoeck, 2009). They are thought to form on dust grain surfaces during both the cold collapse and "warm-up" phases of massive star formation (e.g. Garrod and Herbst, 2006; Garrod, 2013; Oberg, 2016; Garrod et al., 2022), with COMs forming earlier and at lower temperatures in models including non-diffusive chemistry (Garrod et al., 2022). In general, in gas-grain astrochemical models radiative heating from MYSOs ultimately heats the grains enough to sublimate their ice mantles, releasing both COMs and simpler molecules into the gas phase where they are observable via (sub)millimetre wavelength spectral line emission (e.g. Oberg, 2016; Garrod et al., 2022, and references therein). While COMs can also be produced by gas-phase mechanisms, production on grains dominates in recent models (e.g. Garrod et al., 2022). MYSOs that are characterised by a forest of molecular lines in the (sub)millimetre are classed as hot cores (e.g. Sanchez-Monge et al., 2017; Sewilo et al., 2018; Liu et al., 2021), and typically have temperatures that exceed 100 K (e.g. Oberg, 2016). Later in the evolution of the system (though not necessarily independent of the hot core stage), hypercompact (HC) Hii regions form, signposted by centimetre continuum due to the ionization of the surrounding material by the high-mass protostar(s) (e.g. Kurtz, 2005; Yang et al., 2019, 2021). With the exquisite sensitivities achievable by ALMA in the (sub)millimetre and the Karl G. Jansky Very Large Array (VLA) in the centimetre, combined studies of both the chemical and physical properties of massive star progenitors and their environments are now possible.
### Target: EGO G19.01-0.03
The EGO G19.01-0.03 (hereafter G19.01) is an intriguing example of early massive star formation. As in Williams et al. (2022), we adopt D=\(4.0\pm 0.3\) kpc, the near kinematic distance estimated using the Galactic rotation curve parameters of Reid et al. (2014) and the NH\({}_{3}\) LSRK velocity from Cyganowski et al. (2013). As shown in Figure 1, emission from the surrounding clump is detected in the Hi-GAL and ATLASGAL surveys; the clump is also detected at 1.1 mm by the Bolocam Galactic Plane Survey (BGPS; Rosolowsky et al., 2010). The clump mass is \(\sim\)1000 M\({}_{\odot}\): the Hi-GAL catalogue of Elia et al. (2017) reports a clump mass, radius and temperature of 974 M\({}_{\odot}\), 0.16 pc and 17.9 K respectively for D=3.658 kpc (\(\sim\)1165 M\({}_{\odot}\) and 0.18 pc scaled to D=4.0 kpc), while Schuller et al. (2009) report a clump mass of 1070 M\({}_{\odot}\) for D=4.3 kpc based on the 870\(\mu\)m ATLASGAL data (\(\sim\)926 M\({}_{\odot}\) scaled to D=4.0 kpc). Towards the clump, a single millimetre continuum core (hereafter called MM1) is seen in isolation with the Submillimeter Array (SMA) at 1.3 mm and with the Combined Array for Research in Millimeter-wave Astronomy (CARMA) at 3.4 mm (at 2.4 and 5.4\({}^{\prime\prime}\) angular resolution respectively; Cyganowski et al., 2011). MM1 coincides with 6.7 GHz Class II CH\({}_{3}\)OH maser emission (Cyganowski et al., 2009), marking it as a massive source (e.g. Urquhart et al., 2013; Billington et al., 2020; Jones et al., 2020), and is seen to drive a highly collimated, high-velocity bi-polar outflow observed with the SMA in \({}^{12}\)CO(2-1) and with CARMA in HCO\({}^{+}\)(1-0) and SiO(2-1) emission (Cyganowski et al., 2011). 44 GHz Class I CH\({}_{3}\)OH maser emission is seen to trace the edges of the outflow lobes (Cyganowski et al., 2009). At the sensitivity of the SMA and CARMA data, MM1 exhibited some hot core emission but lacked chemical richness, conspicuously lacking in emission from Oxygen-bearing COMs: only two COMs were detected (CH\({}_{3}\)OH and CH\({}_{3}\)CN), with only two lines having excitation temperatures \(>\)100 K (Cyganowski et al., 2011). MM1 was not detected in deep, arcsecond-resolution VLA observations at 3.6 and 1.3 cm to 4\(\sigma\) limits of 0.12 and 1.04 mJy beam\({}^{-1}\) respectively (Cyganowski et al., 2011), suggesting a very low ionising luminosity capable of producing only a very small Hii region. Put together, MM1
Figure 1: _Spitzer_ GLIMPSE three-colour image (RGB: 8.0, 4.5, 3.6\(\mu\)m). (a) The dotted orange contour marks the 30 per cent response level of the ALMA mosaic, designed to encompass the full extent of the ATLASGAL clump. ATLASGAL 870\(\mu\)m emission contours are shown in grey (at 12, 16, 20 and 24\(\sigma\), where \(\sigma=0.08\) Jy beam\({}^{-1}\); Schuller et al. 2009, 18\({}^{\prime\prime}\) resolution). The zoomed inset shown in (b) is marked by the dashed black box. (b) High-velocity blue- and red-shifted SMA \({}^{12}\)CO(2-1) emission contours are shown in blue (7.2, 12.0, 15.6, 19.2, 22.8 Jy beam\({}^{-1}\) km s\({}^{-1}\)) and red (4.8, 7.2, 9.6 Jy beam\({}^{-1}\) km s\({}^{-1}\)) respectively (Cyganowski et al., 2011), and ALMA 1.05 mm continuum contours are shown in black (0.00125, 0.004, 0.016, 0.200 Jy beam\({}^{-1}\); Williams et al. 2022). Magenta +'s mark the VLA positions of 44 GHz Class I CH\({}_{3}\)OH masers (Cyganowski et al., 2009), and the cyan \(\times\) marks the intensity-weighted VLA 6.7 GHz Class II CH\({}_{3}\)OH maser position from Williams et al. (2022). Herschel PACS 70\(\mu\)m emission (Poglitsch et al., 2010) of the Hi-GAL clump (Elia et al., 2017) is plotted with white contours (2.5, 4.5, 6.5, 8.5, 10.5 Jy pixel\({}^{-1}\)). The PACS 70\(\mu\)m beam is 6\({}^{\prime\prime}\times 12^{\prime\prime}\) (FWHM \(\sim 8^{\prime\prime}\); Herschel Explanatory Supplement Volume III, 2017). The ALMA beam is plotted in the bottom left.
appeared as an isolated, high-mass millimetre continuum source, in a state of ongoing accretion, without strong centimetre continuum or rich hot-core line emission. As such, MM1 was until recently considered an excellent candidate for a very early stage of evolution, with potential to shed light on the core-fed theory of high-mass star formation.
The first paper of our ALMA Cycle 2 follow-up study (Williams et al., 2022, hereafter Paper I) presented the highest angular resolution observations of G19.01 to date, at \(\sim 0.4\arcsec\) angular resolution in Band 7 at 1.05 mm. With ALMA, MM1 was observed to exhibit a rich millimetre spectrum with a variety of COMs, in contrast to the earlier lower resolution and sensitivity SMA observations. Kinematic analysis of the strongest, most isolated ALMA-detected molecules revealed the first direct evidence of a rotationally supported accretion disc around MM1 traced by a velocity gradient perpendicular to the bi-polar outflow direction, with an enclosed mass of \(40-70\) M\({}_{\odot}\) within a 2000 AU radius. In conjunction with new VLA observations at 5.01 and 1.21 cm, the centimetre-millimetre spectral energy distribution (SED) was best described by a two-component model, with millimetre emission dominated by thermal dust, and the \(\sim 5\) cm continuum dominated by free-free emission interpreted as a hypercompact Hii region, placing MM1 in a later stage of evolution than that concluded with previous observations. Furthermore, the ALMA 1.05 mm continuum revealed for the first time the detection of four neighbouring millimetre sources in the vicinity of MM1, hinting at the possibility of the early stages of protocluster formation.
In this paper (Paper II), we use our ALMA Cycle 2 1.05 mm and VLA 1.21 cm observations to study the chemistry and protocluster environment of G19.01-0.03 MM1 and the properties of the newly-detected millimetre sources. In Section 2 we describe the observations, in Section 3 we present the ALMA continuum and molecular line emission, as well as VLA ammonia and methanol emission. In Section 4 we present our modelling of the COM emission, rotation diagram analysis, continuum properties of the millimetre neighbours, and discuss MM1's chemistry in the context of sources from the literature. We summarise our main conclusions in Section 5.
## 2 Observations
### Atacama Large Millimetre/submillimetre Array (ALMA)
Our ALMA Cycle 2 observations were designed to search for low-mass cores within the clump-scale gas reservoir associated with G19.01-0.03 MM1: the extent of the ALMA mosaic (\(\sim\)40\(\arcsec\)\(\approx\)0.78 pc at D=4 kpc) is shown in Figure 1 and observing parameters are summarised in Table 1 and below. These observations are also described in detail in Paper I.
For our observations, the ALMA correlator was configured to cover seven spectral windows (spws), including five narrow spws targeting particular spectral lines and two wide spws. Details of the narrow spws are given in Table 2. The wide spws, with central frequencies of \(\sim\)278.2 GHz and \(\sim\)292.0 GHz, each have a bandwidth of 1.875 GHz, a Hanning-smoothed spectral resolution of 1.13 MHz (1.156\(\times\) the channel spacing of 0.977 MHz because of online channel averaging in the ALMA correlator).
As detailed in Paper I, the data were calibrated using the casa 4.2.2 version of the ALMA calibration pipeline and line-free channels were identified using the approach of Brogan et al. (2016); Cyganowski et al. (2017). These line-free channels were used to construct a pseudo-continuum dataset and to perform continuum subtraction in the \(u\),\(v\)-plane. For the narrow spw targeting C\({}^{33}\)S, this process was problematic due to wide lines and possible absorption (see also Paper I and Cyganowski et al., 2017) and we excluded this spw - which overlaps one of the wide spws in our tuning - from the pseudo-continuum dataset and our line analysis. The aggregate continuum bandwidth of the final pseudo-continuum dataset is \(\sim\)1.6 GHz. The continuum data were iteratively self-calibrated and were imaged using Briggs weighting with a robust parameter of 0 and multi-frequency synthesis; the key parameters of the resulting image are listed in Table 1. The synthesised beamsize of the continuum image (0\(\aas@@fstack{\prime\prime}\)52\(\times\)0\(\aas@@fstack{\prime\prime}\)35) corresponds to a physical scale of 2080\(\times\)1400 AU at 4 kpc. As our observations included only the ALMA 12 m array, the maximum recoverable scale is 4.2\(\arcsec\)\(\sim\)16,800 AU (at 4 kpc).
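As a rough cross-check of the quoted maximum recoverable scale, the ALMA Technical Handbook approximation cited in footnote (f) of Table 1, \(\theta_{\rm MRS}\approx 0.983\,\lambda/L_{5}\) with \(L_{5}\) the fifth-percentile projected baseline length, can be sketched as below; the baseline distribution used is a made-up stand-in spanning the 20-533 m range of Table 1, not the real antenna configuration.

```python
import numpy as np

def estimate_mrs_arcsec(baselines_m, wavelength_m):
    """Maximum recoverable scale from the 5th-percentile baseline length,
    following theta_MRS ~ 0.983 * lambda / L_5 (ALMA Technical Handbook)."""
    L5 = np.percentile(baselines_m, 5)             # 5th-percentile baseline (m)
    return np.degrees(0.983 * wavelength_m / L5) * 3600.0

# Toy baseline distribution only (the real configuration is not listed here):
rng = np.random.default_rng(0)
toy_baselines = rng.uniform(20.0, 533.0, size=600)
print(f"MRS ~ {estimate_mrs_arcsec(toy_baselines, 1.05e-3):.1f} arcsec")  # ~4-5", cf. 4.2" in Table 1
```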
After applying the solutions from the continuum self-calibration, the continuum-subtracted line data were imaged with Briggs weighting with a robust parameter of 0.5. For the narrow spws, which were imaged with 0.5 km s\({}^{-1}\) channels for better sensitivity to faint emission, the synthesised beamsizes and rms noise levels of the image cubes are listed in Table 2. For the wide spws, which were imaged with the native channel spacing, the synthesised beamsizes are 0\(\aas@@fstack{\prime\prime}\)60\(\times\)0\(\aas@@fstack{\prime\prime}\)43 [P.A. 79.8\(\arcdeg\)] and 0\(\aas@@fstack{\prime\prime}\)57\(\times\)0\(\aas@@fstack{\prime\prime}\)41 [P.A. 78.9\(\arcdeg\)] for the spws centred at \(\sim\)278.2 GHz and \(\sim\)292.0 GHz, respectively, and the rms noise is \(\sim\)3 mJy beam\({}^{-1}\) in emission-free channels (in channels with complex emission, the rms is up to \(\sim\)1.5 times higher; see also Paper I). All measurements were made from images corrected for the response of the primary beam.
\begin{table}
\begin{tabular}{l c c} \hline \hline Parameter & ALMA 1.05 mm & VLA 1.21 cm \\ \hline Observing date & 14 May 2015 & 11-12 Nov 2013 \\ Project code (PI) & 2013.1.00812.S & 13B-359 \\ & (C. Cyganowski) & (T. Hunter) \\ Gain calibrator & J1733–1304 & J1832–1035 \\ Bandpass calibrator & J1733–1304 & J1924\(-\)2914 \\ Flux calibrator & Titan\({}^{a}\) & J1331+3030 \\ On-source integration time & 44 min & 139 min \\ Number of antennas & 37 & 25 \\ Antenna configuration & C43-3/(4) & B \\ Phase Centre (J2000): & & \\ R.A. (\({}^{\rm h\,m\,s}\)) & 18:25:44.61\({}^{b}\) & 18:25:44.80 \\ Dec. (\({}^{\circ}\)\(\arcmin\)\(\arcsec\)) & -12:22:44.00\({}^{b}\) & -12:22:46.00 \\ Projected baseline lengths & 20-533 m & 0.14-9.98 km \\ & 19–508 k\(\lambda\) & 12–825 k\(\lambda\) \\ Mean frequency\({}^{c}\) & 285.12 GHz & 24.81 GHz \\ Mean wavelength\({}^{c}\) & 1.05 mm & 1.21 cm \\ Number of pointings & 7 & 1 \\ Field of view\({}^{d}\) & \(\sim\)40\(\arcsec\) & 1.8\(\arcmin\) \\ Synthesised beam\({}^{c}\) & 0\(\aas@@fstack{\prime\prime}\)52 \(\times\) 0\(\aas@@fstack{\prime\prime}\)35 & 0\(\aas@@fstack{\prime\prime}\)33 \(\times\) 0\(\aas@@fstack{\prime\prime}\)22 \\ Beam position angle\({}^{c,e}\) & 88.4\(\arcdeg\) & 0.5\(\arcdeg\) \\ Maximum Recoverable Scale\({}^{f}\) & 4\(\aas@@fstack{\prime\prime}\)2 & 4\(\aas@@fstack{\prime\prime}\)5 \\ Continuum rms noise\({}^{g}\) & 0.25 mJy beam\({}^{-1}\) & 6.0 \(\mu\)Jy beam\({}^{-1}\) \\ \hline \end{tabular} \({}^{a}\) Using Butler-JPL-Horizons 2012 models.
\({}^{b}\) For the central pointing of the mosaic.
\({}^{c}\) For the continuum image.
\({}^{d}\) ALMA: to 30% level of mosaic response. VLA: primary beam FWHP at mean frequency.
\({}^{e}\) Measured East of North i.e. positive in the anti-clockwise direction.
\({}^{f}\) Calculated from the fifth percentile shortest baseline (as stated in the ALMA Technical Handbook) and mean frequency, using au.estimateMRS from the analysisUtils Python package.
\({}^{g}\) Estimated from emission-free regions within the 30% response level of the ALMA mosaic.
\end{table}
Table 1: Observing parameters for ALMA 1.05 mm and VLA 1.21 cm data.
### Karl G. Jansky Very Large Array (VLA)
In this paper, we present our K-band VLA spectral line observations of G19.01-0.03. Our VLA tuning included 10 narrow spws targeting NH\({}_{3}\) and CH\({}_{3}\)OH lines observed in other EGOs, including lines that exhibit maser activity in some EGOs (e.g. Brogan et al., 2011; Towner et al., 2017, see also §3.2.3 and §3.2.4). The VLA observing parameters are summarised in Table 1 and details of the narrow spws are given in Table 2. The VLA K-band tuning also included 16\(\times\)0.128 GHz spws for continuum, as detailed in Paper I which presented the continuum results; for completeness, key parameters of the VLA 1.21 cm continuum image are included in Table 1.
As explained in Paper I, the VLA data were calibrated using the casa 4.7.1 version of the VLA calibration pipeline. The data were Hanning smoothed, and phase-only self-calibration was performed using the channel with the strongest NH\({}_{3}\)(3,3) emission (after continuum subtraction in the \(u,v\)-plane). These solutions were then applied to all of the line data (as well as to the continuum; see Paper I). For each narrowband spw, continuum subtraction was performed in the \(u,v\)-plane and the continuum-subtracted line data were imaged with 0.4 km s\({}^{-1}\) channels, Briggs weighting with a robust parameter of 0.5, and a \(uv\) taper of 200 k\(\lambda\) to improve the brightness temperature sensitivity. The synthesised beamsizes and rms noise levels of the resulting image cubes are presented in Table 2. Measurements were made from images corrected for the primary beam response.
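For concreteness, the style of spectral-line imaging call described above can be sketched with CASA's modular tasks as follows; the measurement-set name, spw index, image geometry and cleaning depth are placeholders rather than the values actually used for this work.

```python
from casatasks import tclean  # CASA 6 modular tasks

# Minimal sketch of the narrowband line imaging described above; filenames
# and several parameters (spw, imsize, cell, niter, threshold) are placeholders.
tclean(
    vis='g19.01_kband_contsub.ms',    # hypothetical continuum-subtracted MS
    imagename='g19.01_nh3_33_cube',
    spw='2',                          # placeholder narrowband spw index
    specmode='cube',
    width='0.4km/s',                  # 0.4 km/s channels, as in this section
    restfreq='23.870130GHz',          # NH3(3,3) rest frequency (Table 2)
    weighting='briggs',
    robust=0.5,
    uvtaper=['200klambda'],           # taper for brightness-temperature sensitivity
    imsize=[1280, 1280],
    cell='0.05arcsec',
    niter=10000,
    threshold='3.0mJy',
)
```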
## 3 Results
### ALMA 1.05 mm continuum emission
In Paper I, we presented the ALMA 1.05 mm continuum towards MM1, and noted the detection of a further four continuum sources in the field for the first time (see Figure 2), named MM2...MM5 in order of decreasing peak intensity. Their observed properties (as well as the observed properties of MM1 as presented in Paper I) are listed in Table 3, with their FWHM extents represented by pink ellipses in Figure 2b. As detailed in Paper I, sources were extracted using the astrodendro algorithm (Rosolowsky et al., 2008), with a minimum isocontour value (\(I_{\rm min}\)) of 5\(\sigma_{\rm rms}\) (where \(\sigma_{\rm rms}=0.25\) mJy beam\({}^{-1}\)), minimum isocontour spacing (\(\Delta I_{\rm min}\)) of 1\(\sigma_{\rm rms}\), and minimum size of a structure (\(n_{\rm pix}\)) of \(\approx n_{\rm pix,beam}/2\) (i.e. half the beam size, where \(n_{\rm pix,beam}\approx 50\)). Two sources to the south-west of MM4 are detected with peak emission \(>5\sigma_{\rm rms}\) (Figure 2) but are only extracted by the dendrogram algorithm if the parameters are dropped to \(\Delta I_{\rm min}=0.9\sigma_{\rm rms}\) and \(n_{\rm pix}=15\) pixels, meaning that they are only equivalent to a third of a beam in size. We therefore do not consider these firm detections. MM2...MM5 have angular separations from MM1 of 1.6, 5.5, 6.1 and 2.6'' respectively, ranging between \(0.03-0.12\) pc at the 4 kpc distance, marking the first detection of other millimetre sources within the parent clump of MM1. MM4 appears non-Gaussian in its emission morphology, whilst MM2 lies within a common contour to MM1 with lower surface-brightness emission connecting the two sources. This suggests that MM2 may be fragmenting out of material that also feeds MM1.
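A minimal sketch of this dendrogram extraction, using the astrodendro package with the parameters quoted above (the FITS filename is a placeholder), might look like:

```python
from astropy.io import fits
from astrodendro import Dendrogram

sigma_rms = 0.25e-3  # Jy/beam rms of the 1.05 mm continuum (Section 3.1)
data = fits.getdata('g19.01_1.05mm_cont.fits')  # hypothetical image name

dend = Dendrogram.compute(
    data,
    min_value=5 * sigma_rms,  # minimum isocontour, I_min = 5 sigma_rms
    min_delta=1 * sigma_rms,  # minimum isocontour spacing, Delta I_min
    min_npix=25,              # ~ n_pix,beam / 2, i.e. half a beam
)
print(f"{len(dend.leaves)} leaf structures (candidate sources) extracted")
```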
### Line emission
#### 3.2.1 COMs towards MM1 with ALMA
A forest of molecular lines is observed towards MM1 with ALMA, as seen in the wideband spectra shown here in Figure 3. We follow the criteria presented by Herbst & van Dishoeck (2009) in identifying molecular lines (also see Maret et al., 2011). As outlined in Paper I, we attribute peaks in emission to cataloged rest frequencies
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Telescope & Targeted line & Line Rest Frequency\({}^{a}\) & E\({}_{u}/k_{B}\)\({}^{a}\) & Bandwidth & \(\Delta\nu^{b}\) & \(\Delta v^{c}\) & Synthesised beam & \multicolumn{2}{c}{rms noise\({}^{d}\)} \\ & & (GHz) & (K) & (MHz) & (kHz) & (km s\({}^{-1}\)) & \({}^{\prime\prime}\times{}^{\prime\prime}\) [\({}^{\circ}\)] & (mJy beam\({}^{-1}\)) & (K) \\ \hline ALMA & N\({}_{2}\)H\({}^{+}\)(3–2) & 279.5117491 & 26.8 & 468.75 & 122.07 & 0.5 & 0.57 \(\times\) 0.41 [83.2] & 6.1 & 0.41 \\ ALMA & DCN (4–3) & 289.648417 & 34.8 & 117.1875 & 122.07 & 0.5 & 0.55 \(\times\) 0.40 [82.0] & 6.3 & 0.42 \\ ALMA & \({}^{34}\)SO (6\({}_{7}\)–5\({}_{6}\)) & 290.562238 & 63.8 & 117.1875 & 122.07 & 0.5 & 0.55 \(\times\) 0.40 [82.2] & 6.0 & 0.39 \\ ALMA & H\({}_{2}\)CO (4\({}_{0,4}\)–3\({}_{0,3}\)) & 290.623405 & 34.9 & 117.1875 & 122.07 & 0.5 & 0.55 \(\times\) 0.40 [82.3] & 6.0 & 0.39 \\ ALMA & C\({}^{33}\)S (6–5) & 291.485935 & 49.0 & 117.1875 & 122.07 & 0.5 & 0.55 \(\times\) 0.40 [82.4] & — &... \\ VLA & NH\({}_{3}\) (1,1) & 23.694496 & 24.4 & 8.0 & 15.625 & 0.4 & 0.56 \(\times\) 0.54 [28.0] & 1.24 & 8.93 \\ VLA & NH\({}_{3}\) (2,2) & 23.722631 & 65.6 & 8.0 & 15.625 & 0.4 & 0.56 \(\times\) 0.54 [28.5] & 1.18 & 8.47 \\ VLA & NH\({}_{3}\) (3,3) & 23.870130 & 124.7 & 8.0 & 15.625 & 0.4 & 0.56 \(\times\) 0.54 [30.5] & 1.13 & 8.01 \\ VLA & NH\({}_{3}\) (5,5) & 24.532985 & 296.5 & 8.0 & 15.625 & 0.4 & 0.55 \(\times\) 0.54 [38.4] & 1.10 & 7.52 \\ VLA & NH\({}_{3}\) (6,6) & 25.056025 & 409.2 & 4.0 & 15.625 & 0.4 & 0.55 \(\times\) 0.54 [41.1] & 0.95 & 6.23 \\ VLA & NH\({}_{3}\) (7,7) & 25.715182 & 539.7 & 4.0 & 15.625 & 0.4 & 0.54 \(\times\) 0.53 [47.8] & 1.06 & 6.84 \\ VLA\({}^{e}\) & CH\({}_{3}\)OH 3(2,1)-3(1,2) & 24.925707 & 36.2 & 2.0 & 15.625 & 0.4 & 0.56 \(\times\) 0.55 [31.5] & 1.04 & 6.76 \\ VLA\({}^{e}\) & CH\({}_{3}\)OH 5(2,3)-5(1,4) & 24.9590789 & 57.1 & 2.0 & 15.625 & 0.4 & 0.55 \(\times\) 0.54 [49.7] & 1.01 & 6.67 \\ VLA\({}^{e}\) & CH\({}_{3}\)OH 8(2,6)-8(1,7) & 25.2944165 & 105.8 & 2.0 & 15.625 & 0.4 & 0.57 \(\times\) 0.55 [52.8] & 0.85 & 5.18 \\ VLA\({}^{e}\) & CH\({}_{3}\)OH 10(2,8)-10(1,9) & 25.8782661 & 150.0 & 2.0 & 15.625 & 0.4 & 0.56 \(\times\) 0.54 [65.0] & 1.19 & 7.18 \\ \hline \end{tabular} \({}^{a}\) From CDMS (Müller et al., 2001, 2005) for lines observed with ALMA and from the TopModel line list for NH\({}_{3}\) lines, both accessed via the NRAO spectral line catalogue (Splatalogue; [https://splatalogue.online/](https://splatalogue.online/)). CH\({}_{3}\)OH line frequencies are from Müller et al. (2004).
\({}^{b}\) Channel spacing. For the ALMA data, the Hanning-smoothed spectral resolution is 0.244 MHz for all of the narrow spws.
\({}^{c}\) Velocity channel width of image cubes; see §2.1 and §2.2.
\({}^{d}\) Median rms noise estimated from emission-free channels; the rms noise is up to \(\sim\)1.5 times higher in channels with bright and/or complex emission. The conversion to brightness temperature assumes the Rayleigh-Jeans approximation: \(T=1.222\times 10^{3}\,\frac{I}{\nu^{2}\,\theta_{\rm maj}\,\theta_{\rm min}}\), where \(I\) is the rms in mJy beam\({}^{-1}\), \(\nu\) is the frequency in GHz, and \(\theta_{\rm maj}\times\theta_{\rm min}\) is the synthesised beam size in arcseconds.
\({}^{e}\) As in Towner et al. (2017), these 25 GHz CH\({}_{3}\)OH lines will be referred to in the main text by the following shorthand notation, given by the first two values of the upper state quantum numbers of each transition respectively: \(3_{2}\), \(5_{2}\), \(8_{2}\), \(10_{2}\).
\end{table}
Table 2: Details of the targeted spectral lines observed in the ALMA and VLA narrow spectral windows, and of the resulting image cubes.
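As a quick check of the brightness-temperature conversion in footnote (d), applied to the NH\({}_{3}\)(3,3) row of the table:

```python
def tb_rayleigh_jeans(I_mJy_beam, nu_GHz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature (K), as in footnote (d) of Table 2."""
    return 1.222e3 * I_mJy_beam / (nu_GHz**2 * bmaj_arcsec * bmin_arcsec)

# NH3(3,3): 1.13 mJy/beam rms in a 0.56" x 0.54" beam at 23.87 GHz
print(f"{tb_rayleigh_jeans(1.13, 23.870130, 0.56, 0.54):.2f} K")  # ~8.01 K, matching the table
```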
from the JPL (Pickett et al., 1998) and CDMS (Muller et al., 2001) databases at the source systemic velocity (\(59.9\pm 1.1\) km s\({}^{-1}\); Cyganowski et al., 2011a). We further produced LTE synthetic spectra of the molecular emission (using the Weeds extension of class; Maret et al., 2011) - a molecular transition was positively identified when all lines for that species predicted in the synthetic spectrum were present in the observed spectrum, for typical model parameters expected of a hot core (e.g. \(T>100\) K). The use of LTE synthetic spectra also allowed the identification of some emission peaks that were consistent with multiple line rest frequencies within our spectral resolution (\(\sim\)1 km s\({}^{-1}\) in the wide spws), for example blended lines and lines with shoulder features. Using this approach, we identify 43 line transitions from 11 different species in the wide ALMA spws at \(\sim\)278 GHz and \(\sim\)292 GHz (see Figure 3 and Table 4), including isolated lines, blended lines, and lines with shoulder features. We note that many lines remain unidentified despite their strong detection above the noise, generally due to rest frequencies and/or synthetic line profiles that cannot be confidently distinguished at our spectral resolution. Of particular note, we report compact emission towards MM1 (see Figure 3 of Paper I) from a range of complex organic molecules (COMs), including Oxygen- and Nitrogen-bearing COMs such as CH\({}_{3}\)OCH\({}_{3}\), CH\({}_{3}\)CHO, CH\({}_{3}\)OH, NH\({}_{2}\)CHO and CH\({}_{3}\)OCHO. We also include in our analysis of MM1's line emission (§4.1 and §4.3) additional lines from three of these species that are serendipitously included in the narrow, targeted ALMA bands: CH\({}_{3}\)OCHO (\(27_{1,27}-26_{1,26}\)) with \(\nu_{\rm rest}=289.62659\) GHz and \(E_{u}/k_{B}=385.4\) K (blended with a CH\({}_{3}\)OH line with \(\nu_{\rm rest}=289.62430\) GHz and \(E_{u}/k_{B}=731.4\) K), CH\({}_{3}\)OH (\(11_{2,10}-10_{3,7}\)) with \(\nu_{\rm rest}=279.35193\) GHz and \(E_{u}/k_{B}=190.9\) K, and OCS (\(23-22\)) with \(\nu_{\rm rest}=279.6853\) GHz and \(E_{u}/k_{B}=161.1\) K. The majority of the lines targeted with narrow spws (§2.1 and Table 2) exhibit extended emission and are discussed in Section 3.2.2. The exception is \({}^{34}\)SO 6\({}_{7}\)-5\({}_{6}\), which exhibits compact emission but which we do not include in our analyses due to line blending in the narrowband cube.
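While the Weeds/class modelling itself is not reproduced here, the single-line LTE radiative transfer underlying such synthetic spectra can be sketched as follows; the opacity, linewidth and excitation temperature below are illustrative values only, not fits from this work.

```python
import numpy as np
from astropy import constants as const

def j_nu(T, nu_Hz):
    """Planck radiation temperature J_nu(T) = (h nu / k) / (exp(h nu / k T) - 1)."""
    hnu_k = const.h.value * nu_Hz / const.k_B.value
    return hnu_k / np.expm1(hnu_k / T)

def lte_line_tb(v_kms, tau0, v0_kms, fwhm_kms, Tex, nu_Hz, Tbg=2.73):
    """T_B(v) = [J(Tex) - J(Tbg)] * (1 - exp(-tau(v))), Gaussian opacity profile."""
    tau = tau0 * np.exp(-4.0 * np.log(2.0) * (v_kms - v0_kms) ** 2 / fwhm_kms ** 2)
    return (j_nu(Tex, nu_Hz) - j_nu(Tbg, nu_Hz)) * (1.0 - np.exp(-tau))

v = np.arange(50.0, 70.0, 0.5)  # km/s, matching the narrow-spw channelisation
model = lte_line_tb(v, tau0=1.0, v0_kms=59.9, fwhm_kms=5.0, Tex=160.0, nu_Hz=292.0e9)
```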
In contrast, in the SMA data presented by Cyganowski et al. (2011a), MM1 was relatively line-poor (see §1): the only detected COM emission in their 2 GHz-wide bands, centred at \(\sim\) 220 and \(\sim\) 230 GHz, was from CH\({}_{3}\)OH and CH\({}_{3}\)CN. With ALMA, we also detect molecular lines with up to four times higher \(E_{u}/k_{B}\) than detected with the SMA (e.g. CH\({}_{3}\)OH(\(23_{4,19}-22_{5,18}\)) at E\({}_{u}/k_{B}=736\) K), and identify 19 lines with E\({}_{u}/k_{B}>200\) K. The relative dearth of molecular lines in the SMA spectra is likely attributable to a combination of sensitivity and beam dilution; our \(\sim 0.4^{\prime\prime}\)-resolution ALMA observations improve on the \(2.4^{\prime\prime}\)-resolution SMA observations by a factor of \(\sim\)30 in beam area, and a factor of \(\sim\)1.8 in brightness temperature sensitivity. Comparing our ALMA detections with those from the sensitive, \(29^{\prime\prime}\)-resolution 1 mm single-dish survey of He et al. (2012) confirms the importance of beam dilution: their observations
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Source & J2000.0 Coordinates\({}^{a}\) & & Peak & Integ. & Source size\({}^{c}\) & Source size\({}^{c}\) \\ & \(\alpha\) & \(\delta\) & intensity\({}^{b}\) & flux\({}^{b}\) & Maj. \(\times\) Min. [P.A.] & \\ & (\({}^{h\,m\,s}\)) & (\({}^{\circ}\) \({}^{\prime}\) \({}^{\prime\prime}\)) & (mJy beam\({}^{-1}\)) & (mJy) & (arcsec \(\times\) arcsec [\({}^{\circ}\)]) & (au \(\times\) au) \\ \hline MM1 & 18:25:44.782 & -12:22:45.92 & 266.3 & 303.1 & \(1.15\times 0.84\) [78.7] & \(4600\times 3360\) \\ MM2 & 18:25:44.888 & -12:22:45.68 & 4.7 & 8.1 & \(0.83\times 0.74\) [\(-82.1\)] & \(3310\times 2970\) \\ MM3 & 18:25:44.446 & -12:22:43.52 & 3.0 & 7.1 & \(1.14\times 0.88\) [\(-53.0\)] & \(4560\times 3530\) \\ MM4 & 18:25:44.880 & -12:22:51.86 & 2.3 & 6.0 & \(1.56\times 0.65\) [\(-47.7\)] & \(6200\times 2610\) \\ MM5 & 18:25:44.622 & -12:22:47.06 & 1.6 & – & – & – \\ \hline \end{tabular} \({}^{a}\) Peak position. The number of significant figures reflects a one pixel uncertainty.
\({}^{b}\) Evaluated within the intensity-weighted second moment size (not the total dendrogram structure). For sources smaller than a beam, integrated fluxes are marked “\(-\)”.
\({}^{c}\) Deconvolved major and minor axes sizes; position angle is measured East of North i.e. positive in the anti-clockwise direction. Sizes are the intensity-weighted second moment, converted to FWHM (see Paper I). Sources smaller than a beam are marked “\(-\)”.
\end{table}
Table 3: Observed properties of extracted 1.05 mm continuum sources.
Figure 2: (a) ALMA 1.05 mm continuum image, corrected for the primary beam response, as shown in Paper I. The field shown is a sub-region of that mosaiced, which contains all emission detected \(\geq 5\sigma\). Black contours are plotted at [5, 8, 16, 32, 64, 200, 400 and 800]\(\times\sigma\), where \(\sigma=0.25\) mJy beam\({}^{-1}\). The synthesised beam and scale bar are plotted in the bottom-left and bottom-right respectively. (b) Same as (a), but with the outlines of the extracted dendrogram structures in dashed black, and ellipses representing the source sizes (Table 3) in pink. The peak positions of the sources (Table 3) are marked by the black \(\times\)s.
of G19.01\(-\)0.03 have a 1\(\sigma\) rms of \(0.015-0.022\) K (compared to 0.25 K and 0.44 K for the ALMA and SMA observations, respectively), but the only COM detected is CH\({}_{3}\)OH. In sum, our ALMA results affirm the hot core classification of G19.01-0.03 MM1 and illustrate the importance of sensitive, high-resolution observations for studying the chemistry of MYSOs.
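The quoted beam-area factor follows directly from the beams involved, taking the 2.4\({}^{\prime\prime}\) SMA beam as circular:

```python
# Ratio of beam solid angles; the pi/(4 ln 2) Gaussian-beam factor cancels.
sma_area = 2.4 * 2.4      # arcsec^2, SMA beam assumed circular
alma_area = 0.52 * 0.35   # arcsec^2, ALMA continuum beam (Table 1)
print(f"beam-area ratio ~ {sma_area / alma_area:.0f}")  # ~32, i.e. the quoted ~30
```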
#### 3.2.2 Extended emission with ALMA
Figure 4 presents peak intensity maps of a selection of lines that are representative of the extended emission in our ALMA tuning. These include three of the lines targeted in our narrow spectral windows (N\({}_{2}\)H\({}^{+}\)(3-2), DCN v=0 (4 - 3) and H\({}_{2}\)CO(4\({}_{0,4}-3_{0,3}\)), with E\({}_{u}/k_{B}=26.8\) K, 34.8 K and 34.9 K respectively), and three lines from our wide spectral windows (H\({}_{2}\)CO(4\({}_{2,3}-3_{2,1}\)), CH\({}_{3}\)OH(6\({}_{1,5}-5_{1,4}\)) and CH\({}_{3}\)OH(9\({}_{-1,9}-8_{0,8}\)), with E\({}_{u}/k_{B}=82.1\) K, 63.7 K and 110.0 K respectively). The H\({}_{2}\)CO(4\({}_{2,3}-3_{2,1}\)) and CH\({}_{3}\)OH(6\({}_{1,5}-5_{1,4}\)) lines in Figure 4(c) and 4(d) were identified as having extended emission around MM1 in Paper I.
The four H\({}_{2}\)CO and CH\({}_{3}\)OH lines shown in Figure 4 appear to spatially trace the same bi-polar outflow structure identified by Cyganowski et al. (2011a) in \({}^{12}\)CO(2-1) with the SMA (see Figure 1) and in HCO\({}^{+}\)(1-0) and SiO(2-1) with CARMA. Kinematically, however, the ALMA H\({}_{2}\)CO and CH\({}_{3}\)OH emission traces lower velocity gas than the SMA \({}^{12}\)CO or the CARMA HCO\({}^{+}\) emission, with a median full velocity extent of 25 km s\({}^{-1}\) compared to \(\sim\)135 km s\({}^{-1}\) for \({}^{12}\)CO(2-1) and \(\sim\)76 km s\({}^{-1}\) for HCO\({}^{+}\)(1-0) (Cyganowski et al., 2011a). The H\({}_{2}\)CO and CH\({}_{3}\)OH kinematics seen with ALMA do however exhibit a similar asymmetry as observed with the SMA and CARMA, with the blue-shifted lobe extending to higher velocities (up to \(\sim\)20 km s\({}^{-1}\) from the systemic velocity, V\({}_{\rm sys}\)) than the red-shifted lobe (up to \(\sim\)9 km s\({}^{-1}\) from V\({}_{\rm sys}\)). Overplotted in Figure 4(e) are the positions of 44 GHz Class I CH\({}_{3}\)OH masers from Cyganowski et al. (2009), which appear to trace the outer edge of the CH\({}_{3}\)OH and H\({}_{2}\)CO outflow lobes. Taken together, our results are consistent with the ALMA CH\({}_{3}\)OH and H\({}_{2}\)CO emission tracing lower velocity, outflow-cloud interaction regions or outflow cavity walls, as also seen in the EGO G11.92-0.61 by Cyganowski et al. (2017).
The CH\({}_{3}\)OH(9\({}_{-1,9}-8_{0,8}\)) line in Figure 4(e) (with \(\nu_{\rm rest}=278.30451\) GHz) is known to exhibit Class I maser emission towards other sources in the literature (e.g. Voronkov et al., 2012; Yanagida et al., 2014; Cyganowski et al., 2017). Comparing the emission of this line with that of CH\({}_{3}\)OH(6\({}_{1,5}-5_{1,4}\)) (\(\nu_{\rm rest}=292.67291\) GHz), shown in Figure 4(d), the two lines have similar emission morphologies. As shown in Figure 5, these lines also have similar fluxes towards the ALMA 1.05 mm continuum peak of MM1 (equivalent to brightness temperatures of T\({}_{b}\geq\) 39 and 33 K respectively). However, towards the region of brightest CH\({}_{3}\)OH(9\({}_{-1,9}-8_{0,8}\)) emission in the outflow (T\({}_{b}\)\(=\) 49 K at 18\({}^{\rm h}\)25\({}^{\rm m}\)44\(\aas@@fstack{\rm s}\)517 \(-\)12\({}^{\circ}\)22\({}^{\prime}\)32\(\aas@@fstack{\prime\prime}\)652 (J2000)), the CH\({}_{3}\)OH(6\({}_{1,5}-5_{1,4}\)) line is an order of magnitude weaker (Figure 5) with T\({}_{b}\)\(=\) 4 K. The CH\({}_{3}\)OH(9\({}_{-1,9}-8_{0,8}\)) line is also notably narrower at the outflow position than towards the MM1 continuum peak (\(\Delta\)V\({}_{\rm FWHM}=2.01\pm 0.01\) km s\({}^{-1}\) and \(6.95\pm 0.08\) km s\({}^{-1}\), respectively), and is spatially and kinematically coincident with 44 GHz Class I CH\({}_{3}\)OH masers (Cyganowski et al., 2009), candidate 229.759 GHz CH\({}_{3}\)OH maser emission (Cyganowski et al., 2011a, see their Fig. 12), and NH\({}_{3}\)(3,3) masers (see §3.2.3). The coincidence with probable 229.759 GHz maser emission is notable because the 229.759 and 278.305 GHz transitions are in the same Class I maser series (the 36 GHz maser series; Voronkov et al., 2012), and Cyganowski et al. (2018) found that all 229 GHz masers in the EGO G11.92-0.61 have probable 278 GHz maser counterparts. Though the angular resolution of our ALMA observations is insufficient to rely on line brightness temperatures to distinguish between thermal and maser behaviour (as was also the case for G11.92-0.61 in Cyganowski et al., 2017), the line properties shown in Figure 5, and the coincidence of the brightest CH\({}_{3}\)OH(9\({}_{-1,9}-8_{0,8}\)) emission in the outflow with other shock-
Figure 3: ALMA spectra towards the 1.05 mm continuum peak of MM1 in the broad spectral windows centred at 278.209 GHz and 292.021 GHz. Unlabelled lines could not be confidently identified, mostly due to blending at the coarse spectral resolution of our observations. Lines labelled in black were used for the kinematic analysis presented in Paper I, while others are labelled in grey. All marked lines are listed in Table 4.
excited masers, strongly suggest thermalised emission towards the MM1 hot core, and 278 GHz Class I CH\({}_{3}\)OH maser emission tracing outflow-cloud interaction regions (e.g. Voronkov et al., 2012; Yanagida et al., 2014; Cyganowski et al., 2017). Higher angular resolution observations of the CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) line would be required to confirm the presence of maser emission based on brightness temperature.
As a dense gas tracer, N\({}_{2}\)H\({}^{+}\) is known to trace infrared-dark clumps at both quiescent and active evolutionary stages: Sanhueza et al. (2012), for example, detect N\({}_{2}\)H\({}^{+}\)(1-0) towards the majority of their 92 infrared-dark clumps with the Mopra telescope at 38'' angular resolution (typically \(\sim\)0.8 pc across their sample). Notably, while we observe extended N\({}_{2}\)H\({}^{+}\)(3-2) emission in G19.01-0.03 (Figure 4a), its morphology does not resemble that of the 1.05 mm dust emission or that of the outflow structure. Cyganowski et al. (2017) found similar results in their ALMA N\({}_{2}\)H\({}^{+}\)(3-2) observations of the EGO G11.92-0.61 (their Fig. 3c), suggesting that in high-mass clumps the N\({}_{2}\)H\({}^{+}\)(3-2) and millimetre continuum emission do not trace the
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline Species\({}^{a}\) & Transition & Frequency & \(E_{u}/k_{B}\) & \(S_{ij}\mu^{2}\) & \(g_{u}\) & Catalogue\({}^{b}\) & Paper I\({}^{c}\) \\ & & (GHz) & (K) & (D\({}^{2}\)) & & & \\ \hline
**CH\({}_{3}\)OH (v\({}_{t}\) = 0)** & 23\({}_{4,19}\)\(\rightarrow\)22\({}_{5,18}\) & 278.96513 & 736.0 & 7.05321 & 47 & JPL & Y \\
**CH\({}_{3}\)OH (v\({}_{t}\) = 0)** & 21\({}_{-2,20}\)\(\rightarrow\)20\({}_{-3,18}\) & 278.47989 & 563.2 & 64.5393 & 43 & JPL & Y \\ HC\({}_{3}\)N (v\({}_{7}\)=1) & J = 32–31, l = 1f & 292.19837 & 551.4 & 443.01616 & 65 & CDMS & N \\ HC\({}_{3}\)N (v\({}_{7}\)=1) & J = 32–31, l = 1e & 291.78201 & 551.1 & 442.99035 & 65 & CDMS & N \\ CH\({}_{3}\)OH (v\({}_{t}\) = 0) & 17\({}_{6,12}\)\(\rightarrow\)18\({}_{4,13}\)\({}^{-}\) & 291.90814 & 548.6 & 3.98258 & 35 & JPL & Y \\
**CH\({}_{3}\)OH (v\({}_{t}\) = 0)** & 18\({}_{5,13}\)\(\rightarrow\)19\({}_{4,16}\)\({}^{-}\) & 278.72314 & 534.6 & 5.26251 & 37 & JPL & Y \\
**CH\({}_{3}\)OH (v\({}_{t}\) = 0)** & 18\({}_{5,14}\)\(\rightarrow\)19\({}_{4,15}\)\({}^{+}\) & 278.67303 & 534.6 & 5.26201 & 37 & JPL & N \\ CH\({}_{3}\)OH (v\({}_{t}\) = 0) & 10\({}_{10}\)\(\rightarrow\)9\({}_{9,0}\) & 292.51744 & 418.8 & 5.01720 & 21 & JPL & Y \\ CH\({}_{3}\)OCHO (v\({}_{t}\) = 1) & 23\({}_{4,19}\)\(\rightarrow\)22\({}_{4,18}\)A & 292.31711 & 365.4 & 59.03484 & 94 & JPL & N \\ t-CH\({}_{3}\)CH\({}_{2}\)OH & 27\({}_{6,22}\)\(\rightarrow\)27\({}_{5,23}\) & 278.84892 & 363.7 & 28.22445 & 55 & JPL & N \\
**CH\({}_{3}\)OH (v\({}_{t}\) = 0)** & 14\({}_{4,10}\)\(\rightarrow\)15\({}_{3,12}\) & 278.59008 & 339.6 & 4.14979 & 29 & JPL & Y \\ CH\({}_{3}\)OH (v\({}_{t}\) = 0) & 15\({}_{1,0}\)\(\rightarrow\)14\({}_{2,0}\) & 291.24057 & 295.3 & 5.70014 & 31 & JPL & N \\ t-CH\({}_{3}\)CH\({}_{2}\)OH & 23\({}_{6,17}\)\(\rightarrow\)23\({}_{5,18}\) & 278.68234 & 277.5 & 23.74792 & 47 & JPL & N \\ CH\({}_{3}\)CH\({}_{2}\)CN (v = 0) & 31\({}_{7,24}\)\(\rightarrow\)30\({}_{7,23}\) & 278.00758 & 267.8 & 435.64965 & 63 & JPL & Y \\ CH\({}_{3}\)CH\({}_{2}\)CN (v = 0) & 31\({}_{6,25}\)\(\rightarrow\)30\({}_{6,24}\) & 278.26670 & 253.4 & 441.91785 & 63 & JPL & Y \\ CH\({}_{3}\)CH\({}_{2}\)CN (v = 0) & 31\({}_{6,26}\)\(\rightarrow\)30\({}_{6,25}\) & 278.25123 & 253.4 & 441.86236 & 63 & JPL & N \\ CH\({}_{3}\)CH\({}_{2}\)CN (v = 0) & 31\({}_{5,26}\)\(\rightarrow\)30\({}_{5,25}\) & 278.86581 & 241.4 & 447.13304 & 63 & JPL & N \\ g-CH\({}_{3}\)CH\({}_{2}\)OH & 16\({}_{6,10}\)\(\rightarrow\)15\({}_{6,9}\) (v\({}_{t}\)=0-0) & 277.41431 & 213.8 & 21.96640 & 33 & JPL & Y \\ g-CH\({}_{3}\)CH\({}_{2}\)OH & 16\({}_{4,12}\)\(\rightarrow\)15\({}_{4,11}\) (v\({}_{t}\)=0-0) & 278.64299 & 189.7 & 23.96604 & 33 & JPL & Y \\ CH\({}_{3}\)OCHO (v\({}_{t}\) = 0) & 24\({}_{2,22}\)\(\rightarrow\)23\({}_{2,21}\)E & 279.05752 & 178.5 & 62.00988 & 98 & JPL & N \\ CH\({}_{3}\)OCHO (v\({}_{t}\) = 0) & 24\({}_{2,22}\)\(\rightarrow\)23\({}_{2,21}\)A & 279.06596 & 178.5 & 62.01870 & 98 & JPL & N \\ OCS (v = 0) & J = 24–23 & 291.83965 & 175.1 & 12.27714 & 49 & CDMS & N \\ CH\({}_{3}\)OCHO (v = 0) & 22\({}_{6,16}\)\(\rightarrow\)21\({}_{6,15}\)E & 279.05067 & 175.0 & 54.25478 & 90 & JPL & N \\ CH\({}_{3}\)OCHO (v\({}_{t}\) = 0) & 22\({}_{6,16}\)\(\rightarrow\)21\({}_{6,15}\)A & 279.07471 & 175.0 & 54.26998 & 90 & JPL & N \\ CH\({}_{3}\)OCHO (v = 0) & 23\({}_{2,20}\)\(\rightarrow\)24\({}_{1,9}\)A & 277.74543 & 173.4 & 58.74803 & 94 & JPL & N \\ H\({}_{2}\)CO & 4\({}_{3,2}\)\(\rightarrow\)3\({}_{3,1}\) & 291.38049 & 140.9 & 28.54373 & 27 & CDMS & N \\ \hline \end{tabular}
\end{table}
Table 4: Molecular lines identified towards the ALMA 1.05 mm continuum peak of MM1 (see §3.2.1).
same structures on small size scales.1 We also note that larger-scale N\({}_{2}\)H\({}^{+}\)(3-2) emission will be affected by spatial filtering due to missing short-spacing information (Table 1; SS2.1). The morphology of the DCN(4-3) emission, on the other hand (Figure 4(f)), more closely resembles the ALMA 1.05 mm continuum. DCN(4-3) is detected towards MM1 to 38\(\sigma\) (where \(\sigma=6.3\) mJy beam\({}^{-1}\)): the position of the DCN emission peak is offset from the ALMA 1.05 mm continuum peak by 0.1\({}^{\prime\prime}\sim\) 400 au (within a DCN beam; Table 2), likely due to line opacity and self-absorption towards the continuum peak. The DCN emission also traces some of the nearby millimetre companions to MM1, which is discussed in Section 3.2.5.
Footnote 1: N\({}_{2}\)H\({}^{+}\)(3–2) is detected in absorption against the millimetre continuum of MM1, see Appendix A.
#### 3.2.3 Ammonia emission with the VLA
Of the ammonia lines in our VLA tuning, NH\({}_{3}\)(3,3) and NH\({}_{3}\)(6,6) are known to exhibit maser behaviour thought to arise due to outflow-induced shocks (e.g. Mangum & Wootten 1994; Kraemer & Jackson 1995; Brogan et al. 2011). Towards G19.01\(-\)0.03, spatially compact regions of NH\({}_{3}\)(3,3) emission are detected at the \(\geq\)5\(\sigma\) level (shown as white contours in Figure 4(d)) coincident both spatially and kinematically with the outflow-tracing ALMA \(H_{2}\)CO and CH\({}_{3}\)OH emission shown in Figure 4(b)-(e). To characterise the NH\({}_{3}\)(3,3) emission, we identify as firm detections, and potential maser candidates, locations where \(\geq\)5\(\sigma\) emission in \(\geq\)2 consecutive velocity channels is
Figure 4: ALMA peak intensity maps of (a) N\({}_{2}\)H\({}^{+}\)(3 – 2), (b) H\({}_{2}\)CO(4\({}_{0,4}-3_{0,3}\)), (c) H\({}_{2}\)CO(4\({}_{2,3}-3_{2,1}\)), (d) CH\({}_{3}\)OH v\({}_{t}=0\) (\(6_{1,5}-5_{1,4}\)), (e) CH\({}_{3}\)OH v\({}_{t}=0\) (\(9_{-1,9}-8_{0,8}\)), and (f) DCN v=0 (\(4-3\)), in units of Jy beam\({}^{-1}\) shown on a logarithmic scale (images shown are not primary beam corrected). Millimetre sources are labelled in (f), where two zoomed insets are also shown centred on MM1/MM2 and MM4. A zoomed inset of MM1/MM2 is also shown in (a). Overplotted magenta contours in all panels show the 1.05 mm continuum emission at 5\(\sigma\). These contours are also shown in the zoomed insets, but at 5, 8, 16, 64 and 600\(\sigma\) (where \(\sigma=0.25\) mJy beam\({}^{-1}\)). In panel (d), white contours show the VLA NH\({}_{3}\)(3,3) peak intensity map (in 10\(\sigma\) steps from \(5-145\sigma\), where \(\sigma=1.1\) mJy beam\({}^{-1}\), non-primary beam corrected), black contours show the VLA NH\({}_{3}\)(6,6) peak intensity map (at the 5\(\sigma\) level, where \(\sigma=0.9\) mJy beam\({}^{-1}\), non-primary beam corrected), and the orange cross indicates the peak position of the candidate VLA 25 GHz CH\({}_{3}\)OH 5(2,3)-5(1,4) maser emission (see §3.2.4). The NH\({}_{3}\)(3,3) maser groups from §3.2.3 are numbered in panel (d) as in Table 5. Overplotted white crosses in (e) mark the positions of 44 GHz Class I CH\({}_{3}\)OH masers from Cyganowski et al. (2009). Each panel shows the ALMA synthesised beam in the bottom left (as well as the VLA synthesised beam in panel (d)), a 1\({}^{\prime\prime}\) scale bar in the bottom right, and the molecule name, frequency and upper energy in the top right.
Figure 5: Spectra of the CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) line at \(E_{u}/k_{B}=110.0\) K (in blue) and the CH\({}_{3}\)OH(\(6_{1,5}-5_{1,4}\)) line at \(E_{u}/k_{B}=63.7\) K (in orange) towards (a) the ALMA 1.05 mm continuum peak of MM1 (Table 3), and (b) the position of peak CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) emission in the northern outflow lobe (§3.2.2). The systemic velocity of MM1 (\(59.9\pm 1.1\) km s\({}^{-1}\); Cyganowski et al. 2011a) is marked by the vertical dashed line.
spatially contiguous within a VLA beam. Following the terminology of Towner et al. (2021), we refer to emission in a single velocity channel (each 0.4 km s\({}^{-1}\) wide; Table 2) as a "spot", and emission from multiple spatially contiguous spots as a "group": a group thus contains at least two spots.
To extract the position, peak intensity and velocity of each \(\geq 5\sigma\) emission spot, we fit the observed emission with a 2D Gaussian using the casa imfit task (as in Towner et al., 2021). The rms noise is measured in each channel to ascertain the signal-to-noise of the detection as the noise is higher in channels with bright emission due to dynamic range limitations. As we expect the emission to be unresolved, we fit each emission spot as a point source by fixing the size to that of the synthesised beam (e.g. Hunter et al., 2018; Towner et al., 2021). In total, we identify 50 emission spots at the \(\geq 5\sigma\) level (where the mean \(\sigma\) across all channels with emission is \(\sigma=1.14\) mJy beam\({}^{-1}\)) that reside in 8 emission groups. The position of each emission spot is plotted in Figure 6, each maser group is labelled in Figure 4(d), and Table 5 lists the properties of each group. Unlabelled contours in Figure 4(d) correspond to emission that does not meet our criteria of \(\geq\)5\(\sigma\) emission in \(\geq\)2 consecutive velocity channels within a VLA beam (generally because the emission centroids shift by more than a beam in consecutive channels) and so is not included in our analysis. The strongest emission group (ID 4 in Table 5) has a fitted peak intensity (\(216.0\pm 1.3\) mJy beam\({}^{-1}\)) corresponding to a brightness temperature \(\sim\)1550 K, strongly suggestive of masing behaviour. Four additional emission groups (i.e. IDs 1, 2, 5 and 6) also exhibit brightness temperatures greater than the \(E_{u}/k_{B}\) of the line. While the brightest group (ID 4) spans both blue- and red-shifted velocities (Table 5), blue-shifted and red-shifted NH\({}_{3}\)(3,3) emission groups generally reside towards the blue- and red-shifted outflow lobes respectively. The same pattern was observed in 44 GHz Class I CH\({}_{3}\)OH masers by Cyganowski et al. (2009) (their Figure 5f), though the maser velocities are modest compared to the velocity extent of the thermal molecular outflow lobes (Cyganowski et al., 2009, 2011). All NH\({}_{3}\)(3,3) emission spots are coincident both spatially and kinematically (within a beam and a channel, respectively) with 44 GHz Class I CH\({}_{3}\)OH maser spots from Cyganowski et al. (2009) (see Figure 6), similar to the results seen in the EGO G35.03+0.35 by Brogan et al. (2011). We note that in G19.01\(-\)0.03, not all 44 GHz methanol masers are coincident with NH\({}_{3}\)(3,3) masers.
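A minimal sketch of this per-channel point-source fitting with the casa imfit task is given below; the image name, pixel box, channel and estimate values are placeholders, and the 'abp' flag in the estimates file fixes the fitted major axis, minor axis and position angle to the synthesised beam.

```python
from casatasks import imfit

# Hypothetical estimates file: "peak, x_pix, y_pix, bmaj, bmin, bpa, fixed";
# fixing a, b and p forces a point-source (beam-shaped) fit.
with open('spot.estimates', 'w') as f:
    f.write('0.02, 512, 512, 0.56arcsec, 0.54arcsec, 30.5deg, abp\n')

fit = imfit(
    imagename='g19.01_nh3_33_cube.image',  # placeholder cube name
    box='480,480,544,544',                 # placeholder region around the spot
    chans='40',                            # a single 0.4 km/s channel
    estimates='spot.estimates',
)
comp = fit['results']['component0']
print(comp['flux']['value'], comp['flux']['error'])  # fitted intensity and error
```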
In G19.01\(-\)0.03, all identified NH\({}_{3}\)(3,3) emission spots are located in the outer portions of the bipolar outflow lobes, whilst the 44 GHz Class I methanol maser spots are also found closer to the driving source. This distinction is shown in Figure 6, which plots the normalised, cumulative distribution of the angular separation (\(\theta_{\rm sep}\)) of each NH\({}_{3}\)(3,3) and 44 GHz Class I CH\({}_{3}\)OH maser spot from the ALMA 1.05 mm dust continuum peak of MM1. The two-sample Kolmogorov-Smirnov (K-S) and \(k\)-sample Anderson-Darling (A-D) tests are both nonparametric null hypothesis tests often used to compare two such distributions. The null hypothesis in this case is that the two samples are drawn from the same underlying distribution; it cannot be rejected if the tests return \(p\)-values larger than 0.05. Running both the K-S and A-D tests on the full range of angular separations (\(\theta_{\rm sep}\)) shown in Figure 6 returns vanishingly small \(p\)-values, meaning the null hypothesis may be rejected and a conclusion drawn that the NH\({}_{3}\)(3,3) and 44 GHz methanol maser distributions are significantly different. However, running both tests for \(\theta_{\rm sep}\geq 12.5\arcsec\) returns \(p\gg 0.05\), meaning that the two distributions at large angular separations are effectively indistinguishable and may be drawn from the same underlying distribution.
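Both tests are available in scipy, so the comparison can be sketched as below; the separation arrays are dummy placeholders standing in for the measured NH\({}_{3}\)(3,3) and 44 GHz maser offsets, not values from this work.

```python
import numpy as np
from scipy.stats import ks_2samp, anderson_ksamp

# Dummy stand-ins for the measured angular separations (arcsec) from MM1:
nh3_sep = np.array([12.9, 13.2, 13.5, 14.1, 14.6, 15.0, 15.3])
ch3oh_sep = np.array([1.2, 2.5, 4.8, 9.7, 12.8, 13.4, 14.9, 15.2])

ks_stat, ks_p = ks_2samp(nh3_sep, ch3oh_sep)
ad_result = anderson_ksamp([nh3_sep, ch3oh_sep])
print(ks_p, ad_result.significance_level)  # reject the null hypothesis if p < 0.05

# Restricting to large separations, as in the theta_sep >= 12.5" comparison:
print(ks_2samp(nh3_sep[nh3_sep >= 12.5], ch3oh_sep[ch3oh_sep >= 12.5]).pvalue)
```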
We note that thermal NH\({}_{3}\)(1,1) and (2,2) emission are detected towards G19.01-0.03 at 73\(\arcsec\)-resolution by Cyganowski et al. (2013) but are undetected in our VLA observations (to 5\(\sigma\) limits of 6.2 and 5.9 mJy beam\({}^{-1}\) respectively). Our non-detection of these low-J transitions is primarily attributable to the limited brightness temperature sensitivity of our high-resolution observations (Table 2), with spatial filtering potentially contributing to the non-detection of the NH\({}_{3}\)(2,2) line (the maximum recoverable scale of our observations is only \(\sim\)4.5\(\arcsec\), i.e. 0.08 pc at D=4 kpc, see Table 1). Cyganowski et al. (2013) also detect NH\({}_{3}\)(3,3) towards G19.01-0.03 that is likely dominated by thermal emission, presumably arising near MM1. Extended NH\({}_{3}\)(3,3) emission is also seen towards the mm continuum sources in the EGO G35.03+0.35 by Brogan et al. (2011) in 3.7\(\arcsec\times\)3.0\(\arcsec\) resolution VLA observations (\(\sim 8,580\times 6,960\) AU at the parallax distance of \(2.32^{+0.24}_{-0.20}\) kpc from Wu et al., 2014). In our data, NH\({}_{3}\)(3,3) emission at \(\geq\)5\(\times\) the median rms in Table 2 is present in a single channel within the area of MM1's mm continuum emission. This channel, however, has bright maser emission and so an elevated rms, and the NH\({}_{3}\)(3,3) peak is \(<\)5\(\times\) the channel rms. Since the NH\({}_{3}\)(3,3) peak is also offset from the mm continuum peak and our tentative NH\({}_{3}\)(6,6) and (7,7) detections (see below), we conclude that NH\({}_{3}\)(3,3) is undetected towards MM1 in our data. NH\({}_{3}\)(5,5) is also undetected in our VLA observations, to a 5\(\sigma\) limit of 5.5 mJy beam\({}^{-1}\).
NH\({}_{3}\)(6,6) emission (shown in black contours in Figure 4(d)) is detected at the 5.7\(\sigma\) level (I\({}_{\rm peak}\)\(\sim\)5.4 mJy beam\({}^{-1}\)) towards a position in the outskirts of the northern, blue-shifted outflow lobe (18\({}^{h}\)25\({}^{m}\)44\(\aas@@fstack{s}\)50 \(-\)12\({}^{\circ}\)22\(\arcmin\)32\(\aas@@fstack{\prime\prime}\)52 (J2000)), spatially and kinematically coincident with an NH\({}_{3}\)(3,3) maser group (ID 4 in Table 5), 44 GHz Class I CH\({}_{3}\)OH masers (Cyganowski et al., 2009), and candidate ALMA 278.3 GHz Class I CH\({}_{3}\)OH maser emission (Section 3.2.2). Towards MM1, NH\({}_{3}\)(6,6) is detected at the 5.6\(\sigma\) level (I\({}_{\rm peak}\)\(=\)5.3 mJy beam\({}^{-1}\)), spatially and kinematically coincident with a tentative 4.4\(\sigma\) (4.6 mJy beam\({}^{-1}\)) detection of NH\({}_{3}\)(7,7). These tentative high-J detections towards MM1 could represent evidence of emission from hot gas originating in the inner regions of the circumstellar accretion disc around the central MYSO(s) (Paper I).
Figure 6: _Left panel:_ Fitted positions of each NH\({}_{3}\)(3,3) spot across all groups (blue \(\times\)), and 44 GHz Class I CH\({}_{3}\)OH maser spot positions from Cyganowski et al. (2009) (orange filled circles). ALMA 1.05 mm continuum contours are plotted in black (at [5, 8, 16, 64, 600]\(\times\sigma\), where \(\sigma=0.25\) mJy beam\({}^{-1}\)). High-velocity, blue- and red-shifted \({}^{12}\)CO(2–1) emission observed with the SMA (Cyganowski et al., 2011) is plotted in blue and red contours, as in Figure 1. The VLA beam and a scale bar are plotted in the bottom left and top left of the panel respectively. _Right panel:_ Normalised, cumulative histogram of the angular separation (in arcseconds) of each NH\({}_{3}\)(3,3) spot (blue), and each 44 GHz Class I CH\({}_{3}\)OH maser spot (orange), from the ALMA 1.05 mm continuum peak of MM1 (Table 3). Angular separations were calculated using the au.angularSeparation task of the analysisUtils python package.
#### 3.2.4 25 GHz methanol emission with the VLA
The four CH\({}_{3}\)OH lines in our VLA tuning (Table 2) can exhibit thermal and/or Class I maser emission and are commonly detected towards EGOs: in their study of 20 EGOs, Towner et al. (2017) detected thermal and/or maser emission towards 16 (80%) of their sources. As Towner et al. (2017) adopt a 4\(\sigma\) detection criterion, we consider \(\geq\)4\(\sigma\) 25 GHz CH\({}_{3}\)OH emission so that we can compare to their results; we also adopt their shorthand notation of 3\({}_{2}\), 5\({}_{2}\), 8\({}_{2}\) and 10\({}_{2}\) for these lines. Towards MM1, we tentatively detect all four lines at the 4.8, 5.5, 4.5 and 4.5\(\sigma\) levels, respectively (see Figure 7, and Table 2 for \(\sigma\) values), equivalent to brightness temperatures (\(T_{B}\)) of 32.6, 36.8, 23.2 and 32.4 K. Towner et al. (2017) detect all four 25 GHz CH\({}_{3}\)OH lines in half of their sample (i.e. \(\sim\)63% of those with a detection of at least one line). The strongest emission towards MM1 is in the 5\({}_{2}\) line, a result also reported by Towner et al. (2017) towards the majority of the sources in their sample. We also detect 7.8\(\sigma\) 5\({}_{2}\) emission (\(T_{B}=52.3\) K) offset by 6\({}^{\prime\prime}\sim 24,000\) au to the north-west of MM1 (see Figures 4(d) and 7). This 5\({}_{2}\) emission is positionally and kinematically coincident with outflow-tracing H\({}_{2}\)CO and CH\({}_{3}\)OH emission observed with ALMA (§3.2.2 and Figure 4) and with 44 GHz Class I CH\({}_{3}\)OH maser emission (Cyganowski et al., 2009).
To distinguish between thermal and maser 25 GHz CH\({}_{3}\)OH emission, we consider the two approaches detailed by Towner et al. (2017): (i) following criteria on the spatial and spectral extent of the line emission, and (ii) comparing observed line intensities to predictions for optically thin LTE emission. For the first approach, spatially and spectrally broad emission (i.e. resolved \(\geq 4\sigma\) emission in \(\geq 5\) channels \(\sim 2\) km s\({}^{-1}\)) is considered thermal, whilst unresolved/point-like spectrally narrow emission (i.e. \(\geq 4\sigma\) emission in \(\leq 4\) channels \(\sim 1.6\) km s\({}^{-1}\)) is considered a candidate maser. The strongest 5\({}_{2}\) emission towards the MM1 outflow appears both unresolved (i.e. with spatial extent less than a beam) and spectrally narrow with a fitted full-width half-maximum velocity width (\(\Delta V_{\rm FWHM}\)) of \(0.6\pm 0.1\) km s\({}^{-1}\). We therefore consider this a candidate maser. Towards MM1, the emission from all four 25 GHz CH\({}_{3}\)OH lines appears spatially unresolved. Both broad and narrow Gaussian components are needed to describe the 5\({}_{2}\) emission towards MM1, with velocity widths of \(3.5\pm 0.1\) and \(0.7\pm 0.1\) km s\({}^{-1}\) respectively, while the spectral profiles of the 3\({}_{2}\), 8\({}_{2}\) and 10\({}_{2}\) lines are best fit by single Gaussians with velocity widths of \(3.4\pm 0.1\), \(6.0\pm 0.2\) and \(6.7\pm 0.2\) km s\({}^{-1}\) respectively. All of the fitted Gaussian components have centroid velocities consistent with the systemic velocity of MM1 (59.9\(\pm\)1.1 km s\({}^{-1}\); Cyganowski et al., 2011). The 8\({}_{2}\) and 10\({}_{2}\) lines also have velocity widths of a similar order to the thermal CH\({}_{3}\)OH emission seen with ALMA towards MM1 (see left panel of Figure 5). Line models under LTE and optically thin conditions (produced using the Weeds package of the class software) reasonably describe the broad emission components of all four lines towards MM1. As a whole, our data suggest that both thermal (broad components) and weak maser (narrow 5\({}_{2}\) component) emission may be present towards MM1, though we emphasise that this is a tentative result due to the low S/N of the 25 GHz CH\({}_{3}\)OH emission towards MM1.
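The broad-plus-narrow decomposition described above can be sketched with a two-component Gaussian fit; the spectrum below is synthetic, with parameters chosen only to resemble the fitted 5\({}_{2}\) widths quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, amp, v0, fwhm):
    return amp * np.exp(-4.0 * np.log(2.0) * (v - v0) ** 2 / fwhm ** 2)

def two_gauss(v, a1, v1, w1, a2, v2, w2):
    return gauss(v, a1, v1, w1) + gauss(v, a2, v2, w2)

rng = np.random.default_rng(1)
v = np.arange(50.0, 70.0, 0.4)                       # 0.4 km/s channels
spec = two_gauss(v, 3.0, 59.9, 3.5, 4.0, 59.9, 0.7)  # toy broad + narrow profile
spec += rng.normal(0.0, 0.3, v.size)                 # add noise

popt, pcov = curve_fit(two_gauss, v, spec, p0=[2, 60, 3, 3, 60, 0.8])
perr = np.sqrt(np.diag(pcov))  # 1-sigma errors on amplitudes, centres, FWHMs
print(popt[2], popt[5])        # recovered broad and narrow FWHM (km/s)
```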
#### 3.2.5 MM2...MM5 with ALMA and the VLA
In Figure 8, we show spectra for the broad ALMA spectral windows towards the 1.05 mm continuum peaks of MM2, MM3, MM4 and MM5. The lines shown in Figure 4(c)-(e) lie within these broad bands. MM3 and MM5 appear devoid of molecular line emission in these bands, whilst MM2 and MM4 both have \(>5\sigma\) detections of a handful of lines (labelled in Figure 8). All of the line emission detected towards MM2 and MM4 in these broad bands appears spatially and kinematically coincident with emission attributed to the outflow driven by MM1 (including the H\({}_{2}\)CO and CH\({}_{3}\)OH lines shown in Figures 4(c), (d) and (e); see §3.2.2). As such, it is likely that this line emission is associated with the MM1 outflow, rather than being physically associated with MM2 and MM4. N\({}_{2}\)H\({}^{+}\)(3-2) emission (see Figure 4(a)) is coincident with all four new millimetre sources; however, given that the emission does not morphologically resemble the ALMA 1.05 mm continuum, it is unlikely to arise from the same volumes as MM2...MM5. As shown in Figure 4(f), DCN(\(4-3\)) does exhibit an emission morphology similar to the ALMA 1.05 mm continuum. MM2 and MM4 are detected in emission at the 10\(\sigma\) and 9\(\sigma\) levels respectively (where \(\sigma=6.3\) mJy beam\({}^{-1}\)), and do not exhibit signs of self-absorption (unlike MM1, see §3.2.2), whilst MM3 and MM5 are undetected to a 5\(\sigma\) limit of 31.5 mJy beam\({}^{-1}\). The positions of peak DCN emission towards MM2 and MM4 are offset
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Group\({}^{a}\) & \multicolumn{4}{c}{J2000.0 centroid position\({}^{b}\)} & \(I_{\rm peak}\)\({}^{c}\) & Angular spread\({}^{d}\) & V\({}_{\rm min}\), V\({}_{\rm max}\)\({}^{e}\) & V\({}_{\rm peak}\)\({}^{f}\) \\ ID & \(\alpha\) (h m s) & \(dx\) (\({}^{\prime\prime}\)) & \(\delta\) (\({}^{\circ}\) \({}^{\prime}\) \({}^{\prime\prime}\)) & \(dy\) (\({}^{\prime\prime}\)) & (mJy beam\({}^{-1}\)) & \(\alpha\), \(\delta\) (\({}^{\prime\prime}\), \({}^{\prime\prime}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) \\ \hline
1 & 18:25:44.89 & 0.02 & -12:23:01.55 & 0.02 & 20.1 (1.2) & 0.012, 0.044 & 60.2, 61.4 & 60.6 \\
2 & 18:25:44.79 & 0.02 & -12:23:01.02 & 0.02 & 18.9 (1.3) & 0.073, 0.091 & 59.8, 61.4 & 60.2 \\
3 & 18:25:48.85 & 0.03 & -12:22:37.44 & 0.03 & 9.5 (1.2) & 0.016, 0.022 & 57.4, 57.8 & 57.4 \\
4 & 18:25:44.49 & 0.01 & -12:22:32.60 & 0.01 & 216.0 (1.3) & 0.262, 0.059 & 51.4, 61.8 & 59.8 \\
5 & 18:25:44.53 & 0.01 & -12:22:30.54 & 0.01 & 59.0 (1.3) & 0.092, 0.030 & 57.8, 59.4 & 59.0 \\
6 & 18:25:45.11 & 0.02 & -12:22:29.89 & 0.02 & 26.2 (1.3) & 0.048, 0.019 & 59.0, 59.4 & 59.4 \\
7 & 18:25:45.69 & 0.03 & -12:22:29.73 & 0.03 & 12.4 (1.3) & 0.039, 0.067 & 58.6, 59.4 & 59.4 \\
8 & 18:25:44.60 & 0.03 & -12:22:27.85 & 0.03 & 11.9 (1.3) & 0.041, 0.037 & 59.4, 59.8 & 59.8 \\ \hline \end{tabular} \({}^{a}\) Labelled from south to north, see Figure 4(d).
\({}^{b}\) Intensity-weighted centroid position, and intensity-weighted mean statistical error from the Gaussian fitting, of each emission group.
\({}^{c}\) Fitted peak intensity of the brightest emission spot in each group (i.e. from a single channel). The statistical error from the Gaussian fitting is listed in parentheses. \(T_{B}\)(K) \(\approx 7186\times I\)(Jy beam\({}^{-1}\)).
\({}^{d}\) The angular spread of the emission group, calculated as the standard deviation of the difference between the position of each individual emission spot (i.e. from a single channel) and the intensity-weighted centroid position of the group.
\({}^{e}\) Velocity spread over which \(\geq 5\sigma\) emission is present, see §3.2.3.
\({}^{f}\) Velocity of the channel at which the brightest emission in each group appears.
\end{table}
Table 5: Properties of NH\({}_{3}\)(3,3) emission groups.
from their corresponding ALMA 1.05 mm continuum peaks by 0.09 and 0.40'' respectively (equivalent to 360 and 1600 au), within a DCN beam (\(\sim\)0.47''; see Table 2). Systemic velocities for MM2 and MM4 are found from Gaussian fitting of the DCN peak emission to be \(59.88\pm 0.06\) km s\({}^{-1}\) and \(59.72\pm 0.05\) km s\({}^{-1}\) respectively, consistent with the \(59.9\pm 1.1\) km s\({}^{-1}\) systemic velocity of MM1 (Cyganowski et al., 2011), kinematically linking MM2 and MM4 to MM1 and the G19.01\(-\)0.03 clump.
A handful of Class I 44 GHz CH\({}_{3}\)OH maser spots lie \(0.60\arcsec\sim 2400\) au to the east of the ALMA 1.05 mm continuum peak of MM4 (see Figure 4(e)), coincident spatially and kinematically with H\({}_{2}\)CO and CH\({}_{3}\)OH emission attributed to the MM1 outflow (see §3.2.2). No NH\({}_{3}\)(\(J=K=1,2,3,5,6,7\)) emission is detected towards, or in the immediate vicinity of, MM2...MM5 in our VLA data, to 5\(\sigma\) limits of 6.2, 5.9, 5.7, 5.5, 4.8, 5.3 mJy beam\({}^{-1}\) respectively. Similarly, MM2...MM5 are undetected in the 25 GHz CH\({}_{3}\)OH lines in our VLA observations (5\(\sigma\) limits 5.2, 5.1, 4.3 and 6.0 mJy beam\({}^{-1}\) for the 3\({}_{2}\), 5\({}_{2}\), 8\({}_{2}\) and 10\({}_{2}\) lines). The CH\({}_{3}\)OH 5\({}_{2}\) emission spot detected towards the outflow driven by MM1 (see §3.2.4) is \(3.6\arcsec\sim 14{,}200\) au away from the ALMA 1.05 mm continuum peak of MM3. In all, with no clear evidence for protostellar activity in MM2...MM5 in our ALMA and VLA data, the current evidence suggests that these sources may be starless condensations, prestellar cores, millimetre knots in MM1's outflow lobes, or a combination of the three (see also Section 4.2).
## 4 Discussion
### Physical properties of MM1 from CH\({}_{3}\)OH emission
In the following section, we estimate the column density and temperature of MM1 from the observed CH\({}_{3}\)OH emission using two complementary methods: (i) a rotation diagram analysis (§4.1.1), and (ii) a set of synthetic spectra that represent the observed emission (§4.1.2).
#### 4.1.1 Rotation diagram analysis of CH\({}_{3}\)OH
Assuming local thermodynamic equilibrium and optically thin emission, a single temperature may describe the emission of all observed
Figure 8: ALMA spectra towards the 1.05 mm continuum peaks of MM2, MM3, MM4 and MM5 (Table 3) for the two broad spectral windows. The break in the frequency axis between the two broad spectral windows is marked by the bold black lines. The 5\(\sigma\) level (where \(\sigma=3\) mJy beam\({}^{-1}\), §2.1) is marked by the horizontal dashed black line. Lines with emission \(>5\sigma\) are labelled.
Figure 7: VLA peak intensity maps of the four \(\sim\)25 GHz CH\({}_{3}\)OH lines in our VLA tuning (Table 2), here referred to by the shorthand notation 3\({}_{2}\), 5\({}_{2}\), 8\({}_{2}\) and 10\({}_{2}\). Each panel shows the peak intensity map of the labelled transition in colourscale and black contours, with contour levels of 4\(\sigma\) for the 3\({}_{2}\), 8\({}_{2}\) and 10\({}_{2}\) lines (where \(\sigma=1.04,0.85\) and 1.19 mJy beam\({}^{-1}\) respectively), and [4, 5, 6, 7]\(\times\sigma\) for the 5\({}_{2}\) line (where \(\sigma=1.01\) mJy beam\({}^{-1}\)). ALMA 1.05 mm continuum contours are overplotted in white at 5, 8, 16, 64 and 600\(\sigma\) (where \(\sigma=0.25\) mJy beam\({}^{-1}\)). Magenta crosses mark the positions of 44 GHz Class I CH\({}_{3}\)OH masers from Cyganowski et al. (2009). Each synthesised beam, and a 1′′ scale bar, are plotted in the bottom left and bottom right of each panel respectively.
lines. This temperature is defined by the Boltzmann distribution, written as:
\[\log\left(\frac{N_{u}}{g_{u}}\right)=\log\left(\frac{N}{Q(T_{\rm rot})}\right)- \frac{E_{u}}{k_{B}T_{\rm rot}}\,, \tag{1}\]
where \(N_{u}\) is the upper energy level column density, \(g_{u}\) is the upper energy state degeneracy, \(N\) is the total column density, \(Q(T_{\rm rot})\) is the partition function, \(E_{u}\) is the upper level energy, \(k_{B}\) is the Boltzmann constant, and \(T_{\rm rot}\) is the rotational temperature (e.g. Goldsmith & Langer, 1999). The left-hand side of equation 1 may be re-written as:
\[\log\left(\frac{N_{u}}{g_{u}}\right)=\log\left(\frac{3k_{B}\int T_{\rm mb}{ \rm d}V}{8\pi^{3}\nu S_{ij}\mu^{2}}\right)\,, \tag{2}\]
where \(\int T_{\rm mb}{\rm d}V\) is the integrated line strength over velocity with units of \({\rm K}\times{\rm km\,s^{-1}}\), \(\nu\) is the rest frequency of the transition, \(\mu\) is the permanent dipole moment with units of Debye, and \(S_{ij}\) is the unit-less intrinsic line strength (e.g. Cummins et al., 1986; Herbst & van Dishoeck, 2009). When multiple lines of a species are detected, assuming that their emission arises from the same physical structure, and assuming that all lines are optically thin, one may construct a rotation diagram by plotting \(\log(N_{u}/g_{u})\) against \(E_{u}/k_{B}\). Fitting a straight line will yield \(T_{\rm rot}\) from the slope, and \(N/Q(T_{\rm rot})\) from the y-intercept. If a straight line reasonably fits the rotation diagram, the assumption is made that all transitions are thermalised, and thus \(T_{\rm rot}\) is expected to equal the excitation temperature, \(T_{\rm ex}\)(e.g. Goldsmith & Langer, 1999). Taking this \(T_{\rm rot}\), the total column density (\(N\)) may be found with the partition function, which following Townes & Schawlow (1955) and Purcell et al. (2009), for example, may be written for CH\({}_{3}\)OH as:
\[Q(T_{\rm rot})=1.2327\,T_{\rm rot}^{1.5}\,\,. \tag{3}\]
The most detected species towards MM1 in our spectral tuning at \(>\)5\(\sigma\) is CH\({}_{3}\)OH, with thirteen transitions identified across both the wide and narrow bands (see §3.2.1) with \(E_{u}/k_{B}\) ranging from 32 to 736 K. We produce a rotation diagram towards the ALMA 1.05 mm continuum peak of MM1 (Table 3) using eleven CH\({}_{3}\)OH transitions, excluding the CH\({}_{3}\)OH(\(15_{1,0}-14_{2,0}\)) line from the wide bands, and CH\({}_{3}\)OH(\(6_{1,2}-5_{1,2}\)) from the narrow bands, as they are significantly blended by the H\({}_{2}\)CO(\(4_{2,3}\)-\(3_{2,2}\)) and CH\({}_{3}\)OCHO(\(27_{1,27}-26_{0,26}\)) lines respectively. We include both A and E transitions in this analysis since we do not detect enough transitions to fit them separately. The integrated intensity of each line is evaluated from Gaussian profiles fitted to the CH\({}_{3}\)OH spectra at the ALMA 1.05 mm continuum peak position (see Table 3). Errors on the Gaussian fits are propagated through to the rotation diagram, and the best fit is found following a weighted least-squares fit. This rotation diagram is shown in Figure 9(a), where it can be seen that two data points (marked white, belonging to the CH\({}_{3}\)OH(\(6_{1,5}-5_{1,4}\)) and CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) lines with \(E_{u}/k_{B}=63.7\) and 110.0 K respectively) deviate from the general trend of the other lines. These lines are seen in Figure 4 to be outflow tracing, and their generally low \(\log\) (\(N_{u}/g_{u}\)) is an indication that they may also be optically thick. The derived rotational temperature is \(186\pm 8\) K.
As previously stated, a rotation diagram is constructed under the assumption that all lines are optically thin. Optically thick lines can inflate derived rotational temperatures due to an artificially shallow slope (e.g. Goldsmith & Langer, 1999). We counter this by applying an opacity correction, \(C_{\tau_{i}}=\tau_{i}/(1-e^{-\tau_{i}})\), where the subscript \(i\) refers to the \(i^{\rm th}\) CH\({}_{3}\)OH transition. We follow Brogan et al. (2007, 2009) by iteratively solving for the \(\tau_{i}\) and \(T_{\rm rot}\) values that produce the best fit, minimising the \(\chi^{2}\) value. Figure 9(b) shows the best fitting opacity-corrected rotation diagram. The \(\tau_{i}\) values for this best fit range between \(\sim 0.4-11.7\), with the CH\({}_{3}\)OH(\(6_{1,5}-5_{1,4}\)) and CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) lines indeed being the most optically thick (with \(\tau=11.7\) and \(11.3\) respectively). The highest-\(J\), CH\({}_{3}\)OH(\(23_{4,19}-22_{5,18}\)) line with \(E_{u}/k_{B}=736\) K is unsurprisingly the most optically thin. Of the lines with compact emission morphologies (i.e. that trace MM1 and not the outflow) in the 278 GHz wideband spectral window, the CH\({}_{3}\)OH(\(14_{4,10}-15_{3,12}\)) line is the most optically thick, with \(\tau=1.9\), implying an emitting region of \(\sim 0.25\arcsec\sim 1000\) AU (following Section 4.4 of Brogan et al., 2009). The derived opacity-corrected rotational temperature is \(166\pm 9\) K, and the column density is \((2.0\pm 0.4)\times 10^{18}\) cm\({}^{-2}\).
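To make the fitting step explicit, the sketch below (Python; the three transitions and their values are illustrative placeholders, not our measured CH\({}_{3}\)OH parameters) converts integrated intensities to \(\log_{10}(N_{u}/g_{u})\) via equation 2 and recovers \(T_{\rm rot}\) and \(N\) from a weighted straight-line fit to equation 1; the opacity correction described above would then iterate this fit with each point scaled by \(C_{\tau_{i}}\):

```python
import numpy as np

k_B = 1.380649e-16  # Boltzmann constant (erg/K)

def log_nu_gu(nu_hz, smu2_debye2, int_TdV_K_kms):
    """log10(N_u/g_u) from equation (2)."""
    w = int_TdV_K_kms * 1e5         # integrated intensity: K km/s -> K cm/s
    smu2 = smu2_debye2 * 1e-36      # S*mu^2: Debye^2 -> esu^2 cm^2
    return np.log10(3.0 * k_B * w / (8.0 * np.pi**3 * nu_hz * smu2))

def fit_rotation_diagram(Eu_K, y, yerr):
    """Weighted straight-line fit of equation (1) in log10 space."""
    slope, intercept = np.polyfit(Eu_K, y, 1, w=1.0 / yerr)
    T_rot = -np.log10(np.e) / slope    # slope = -log10(e) / T_rot
    Q = 1.2327 * T_rot**1.5            # CH3OH partition function, equation (3)
    return T_rot, 10.0**intercept * Q  # (T_rot in K, N in cm^-2)

# Illustrative (not measured) values for three hypothetical transitions:
Eu = np.array([40.0, 190.0, 736.0])                    # E_u/k_B (K)
y = log_nu_gu(np.array([278.3e9, 287.7e9, 292.5e9]),   # rest frequencies (Hz)
              np.array([5.0, 4.0, 3.0]),               # S*mu^2 (Debye^2)
              np.array([60.0, 25.0, 2.0]))             # int T_mb dV (K km/s)
print(fit_rotation_diagram(Eu, y, yerr=np.array([0.05, 0.05, 0.1])))
```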
#### 4.1.2 Synthetic CH\({}_{3}\)OH spectra
We make a second estimate of the column density and temperature of MM1 by producing synthetic spectra that are representative of the observed data using the weeds extension (Maret et al., 2011) of the class software. weeds solves the radiative transfer equation assuming a state of local thermodynamic equilibrium (LTE), and takes into account the background continuum emission. Using values for the frequency, Einstein A coefficients, partition function, and upper level degeneracy and energy from the JPL and CDMS catalogues (Pickett et al., 1998; Muller et al., 2001), Weeds calculates the opacity of each line, meaning that corrections such as line broadening are applied to optically thick lines. The free parameters of the synthetic spectra are the column density, excitation temperature, systemic velocity,
Figure 9: Rotation diagram of CH\({}_{3}\)OH at the ALMA 1.05 mm dust continuum peak of MM1, (a) before opacity correction, and (b) following opacity correction. The best fit is shown by the dashed black line, with the statistical uncertainty on the slope and y-intercept represented by the shaded blue region. The CH\({}_{3}\)OH(\(6_{1,5}-5_{1,4}\)) and CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) lines (with \(E_{u}/k_{B}=63.7\) and 110.0 K respectively) are coloured in white as they are outflow tracers (see Figure 4) and optically thick.
FWHM velocity width, and source size. The projected diameter of the telescope is also required to calculate the beam dilution/filling factor. We refer the reader to Maret et al. (2011) for a more detailed description of the full procedure.
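To make the beam dilution step concrete: for a Gaussian source and beam, the filling factor is commonly taken as \(\eta=\theta_{s}^{2}/(\theta_{s}^{2}+\theta_{b}^{2})\), with the beam size estimated from the telescope diameter at each line frequency. The sketch below implements this standard single-dish convention (our assumption for illustration, not necessarily the exact internal weeds calculation):

```python
import numpy as np

def beam_filling_factor(theta_source_arcsec, freq_ghz, diameter_m=558.0):
    """Beam dilution factor eta = theta_s^2 / (theta_s^2 + theta_b^2),
    with the beam FWHM approximated as theta_b ~ 1.22 * lambda / D."""
    lam = 2.99792458e8 / (freq_ghz * 1e9)                   # wavelength (m)
    theta_b = np.degrees(1.22 * lam / diameter_m) * 3600.0  # beam FWHM (arcsec)
    return theta_source_arcsec**2 / (theta_source_arcsec**2 + theta_b**2)

# e.g. a 0.5'' source at 278 GHz with the 558 m projected ALMA diameter (see Section 4.1.3)
print(beam_filling_factor(0.5, 278.0))  # ~0.5: source comparable to the beam
```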
We explore the posterior distribution of the free parameters of the LTE line models using the Python package emcee (Foreman-Mackey et al., 2013), an affine-invariant Markov Chain Monte Carlo (MCMC) sampler (Goodman & Weare, 2010). A posterior distribution of model parameters consistent with the data is evaluated by multiple "walkers", allowed to walk for a number of iterations. A "burn-in" period of \(n_{\rm burn}\) iterations allows the walkers to converge on the posterior region. These "burn-in" solution chains are then discarded before beginning a "production" run with walkers initialised around the posterior region identified during the burn-in. Walkers are allowed to walk through the posterior for \(n_{\rm prod}>n_{\rm burn}\) iterations, to allow probing of the covariance of the model parameters. The model parameters that are most representative of the data are found following the maximisation of the log-likelihood function:
\[\mathcal{L}(T_{\rm mb}|\theta)=-0.5\sum_{i}\left(\frac{T_{\rm mb,i}^{\rm mod}- T_{\rm mb,i}^{\rm obs}}{\sigma_{\rm rms}^{\rm obs}}\right)^{2}, \tag{4}\]
where \(\theta\) are the model parameters, \(T_{\rm mb,i}^{\rm mod}\) and \(T_{\rm mb,i}^{\rm obs}\) are the model and observed main-beam brightness temperature of the \(i^{\rm th}\) channel, respectively, and \(\sigma_{\rm rms}^{\rm obs}\) is the observed rms noise of the spectrum (see §2.1). The statistical uncertainty on each model parameter is evaluated from the 0.16 and 0.84 quantiles of the posterior (i.e. the 1\(\sigma\) level). Our scripts are publicly available2, and are packaged together in a reusable form we call weedsp_mcmc, where they are generalised to take in user-defined parameters such as the priors, number of walkers, initial walker positions within the parameter space, and the number of burn-in and production iterations, from a text file. A class-readable text file is created containing the \(\theta\) model parameters currently walked to and the name of the molecule to be modelled, and then a class script is called that runs the LTE model for those \(\theta\). Plots are generated of the highest likelihood synthetic spectrum, the corner plot of the 1D and 2D posterior distributions of the free parameters (as in Figure 10(a)), and the trace plot of the emcee walkers through the parameter space. To efficiently sample the probability distributions for the free parameters of the synthetic spectra generated with weeds, spectral windows are treated separately in our scripts.
Footnote 2: [https://github.com/gwen-williams/Weedsp_MCMC](https://github.com/gwen-williams/Weedsp_MCMC)
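To illustrate the sampling step, the sketch below sets up equation 4 as an emcee log-probability with uniform priors matching those adopted in §4.1.3. The Weeds/class call is replaced by a stand-in single-Gaussian model (a placeholder: in weedsp_mcmc this step instead writes \(\theta\) to a class-readable text file and calls a class script that runs the LTE model), and the spectrum arrays are placeholders rather than our data:

```python
import numpy as np
import emcee

# Placeholder observed spectrum; in practice this is read from the CLASS spectrum
velocity = np.linspace(50.0, 70.0, 128)  # km/s
T_mb_obs = np.zeros_like(velocity)       # K
sigma_rms = 0.3                          # K, observed rms noise

# Uniform priors as in Section 4.1.3: log10 N (cm^-2), T_ex (K), v_cen and dv (km/s)
PRIORS = {"logN": (16.7, 18.5), "Tex": (80.0, 300.0),
          "vcen": (56.0, 64.0), "dv": (2.0, 8.0)}

def model_spectrum(theta):
    """Stand-in for the Weeds LTE model: a single Gaussian line with a toy
    amplitude scaling (weedsp_mcmc calls CLASS/Weeds here instead)."""
    logN, Tex, vcen, dv = theta
    amp = 10.0**(logN - 17.0) * (100.0 / Tex)
    return amp * np.exp(-4.0 * np.log(2.0) * (velocity - vcen)**2 / dv**2)

def log_prob(theta):
    for value, (lo, hi) in zip(theta, PRIORS.values()):
        if not lo < value < hi:
            return -np.inf  # outside the uniform prior
    resid = (model_spectrum(theta) - T_mb_obs) / sigma_rms
    return -0.5 * np.sum(resid**2)  # equation (4)

ndim, nwalkers = 4, 60
p0 = np.array([17.5, 160.0, 59.9, 6.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
state = sampler.run_mcmc(p0, 200)  # burn-in (n_burn)
sampler.reset()
sampler.run_mcmc(state, 500)       # production (n_prod)
```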
#### 4.1.3 Pixel-by-pixel analysis of CH\({}_{3}\)OH synthetic spectra
Applying this procedure to our ALMA data, we find the highest likelihood model on a pixel-by-pixel basis across MM1, allowing investigation of any spatial variation in the column density, temperature, centroid velocity and velocity width. As in §4.1.1, this analysis focuses on CH\({}_{3}\)OH emission, as it is the species with the highest number of identified transitions in our tuning. Of the thirteen total CH\({}_{3}\)OH lines in the tuning, seven are located in the \(\sim\)278 GHz band, four are in the \(\sim\)292 GHz band (see Figure 3 and Table 4), one lies in the narrow spectral window that targets N\({}_{2}\)H\({}^{+}(3-2)\), and one lies in the narrow spectral window that targets DCN\((4-3)\). As the CH\({}_{3}\)OH\((6_{1,5}-5_{1,4})\) and CH\({}_{3}\)OH\((9_{-1,9}-8_{0,8})\) lines (with \(v_{\rm rest}=292.67291\) and 278.30451 GHz respectively) are shown in Figure 4 to be outflow tracing, we exclude them from this analysis since an assumption made here is that the emitting region is the same for all modelled transitions. We also exclude the CH\({}_{3}\)OH\((15_{1,0}-14_{2,0})\) line (\(v_{\rm rest}=291.24057\) GHz) as it is heavily blended by a close-by H\({}_{2}\)CO line (see also §4.1.1), the CH\({}_{3}\)OH\((6_{1,2}-5_{1,2})\) line (\(v_{\rm rest}=289.62430\) GHz) as it is blended with a CH\({}_{3}\)OCHO line (see §3.2.1 and §4.1.1), and the CH\({}_{3}\)OH\((10_{1,10}-9_{0,9})\) line (\(v_{\rm rest}=292.51744\) GHz) as it is the only identified line in the first torsionally excited state (i.e. \(v_{t}=1\)). This leaves 6 lines in the \(\sim\)278 GHz band, 1 line in the \(\sim\)292 GHz band, and 1 line in a narrow band. Since modelling one line is not sufficient to constrain four free parameters, we conduct our MCMC analysis on the six remaining CH\({}_{3}\)OH lines in the \(\sim\)278 GHz band (indicated in bold in Table 4). We then use the best-fit parameters to generate synthetic spectra for the other spectral windows and check the resulting spectra for goodness of fit (see also Section 4.3.1).
For our pixel-by-pixel modelling, we fix the source size to the geometric mean of the \(\sim\)278 GHz synthesised beam (\(\sim\)0.5'').3 We also limit the pixel area being modelled to within the continuum source size of MM1 from Table 3 (see white contour in Figures 11(a) and (b)), shown in Paper I to match well the region of compact molecular line emission (see their Fig.3). The projected diameter of the ALMA array was calculated to be 558 metres4, and we estimate the background continuum on a pixel-by-pixel basis (McGuire et al., 2018), with the ALMA 1.05 mm continuum converted to brightness temperature5. We initialise 60 walkers, and allow them to run for \(n_{\rm burn}=200\) and \(n_{\rm prod}=500\) iterations (see Appendix B for a discussion of these choices). The remaining free parameters are the column density, excitation temperature, systemic velocity and FWHM velocity width. We assume uniform, uninformative priors, and initialise the walkers in a four-dimensional sphere about an initial value that lies within those priors. Our priors are estimated from initial "dummy" runs of the LTE synthetic spectra outside of weedsp_mcmc, and are set to be \((0.05-3.0)\times 10^{18}\) cm\({}^{-2}\) for the column density, \(80-300\) K for the temperature, \(56.0-64.0\) km s\({}^{-1}\) for the centroid velocity, and \(2.0-8.0\) km s\({}^{-1}\) for the velocity width.
Footnote 3: The image cube is converted to brightness temperature using the Rayleigh-Jeans approximation and beamsize with the tt.brightnessImage function of toddTools.
Footnote 4: Calculated using the au.getBaselineLengths function from the analysisUtils Python package
Footnote 5: Conversion done under the Rayleigh-Jeans approximation using the tt.brightnessImage function of toddTools.
Figure 10(a) shows the 1D and 2D posterior distributions of column density, temperature, centroid velocity and velocity width, evaluated for the spectrum at the ALMA 1.05 mm continuum peak of MM1. The narrow, normally distributed 1D posteriors for column density, temperature and centroid velocity show that these parameters are reasonably constrained. The slope in the 2D column density-temperature posterior distribution indicates an unsurprising, but only slight, degeneracy between these two parameters, meaning the statistical errors are small despite this. There is no obvious covariance of the centroid velocity with either column density or temperature. The velocity width is less well constrained, with two slight peaks around the mode of the posterior distribution (within \(<0.03\) km s\({}^{-1}\)). Though significantly smaller than the spectral resolution of our data (\(\sim\)1 km s\({}^{-1}\)), this uncertainty is attributed to line blending rather than under-sampling as the double peak persists in tests where \(n_{\rm prod}\) is increased to \(>\)3000. The double peak also persists when walkers are initialised with a different random number seed, so is independent of initial walker position. The highest likelihood model spectrum at the continuum
peak position of MM1 is shown in Figure 10(b), with parameters N(CH\({}_{3}\)OH) = \((2.22\pm 0.01)\times 10^{18}\) cm\({}^{-2}\), T\({}_{\rm ex}\) = \(162.7^{+0.3}_{-0.5}\) K, V\({}_{\rm cen}\) = \(59.73\pm 0.01\) km s\({}^{-1}\) and \(\Delta V\) = \(6.38^{+0.01}_{-0.03}\) km s\({}^{-1}\). These remain unchanged (within the errors) when walkers are initialised with a different random number seed, and when \(n_{\rm prod}\) is increased to \(>3000\). These column density and temperature values are remarkably consistent with those derived from the opacity-corrected rotation diagram of \((2.0\pm 0.4)\times 10^{18}\) cm\({}^{-2}\) and \(166\pm 9\) K respectively (see §4.1.1), and the centroid velocity is also consistent with the known systemic velocity of \(59.9\pm 1.1\) km s\({}^{-1}\) (Cyganowski et al., 2011a). The line optical depths returned by class are lower than those estimated in the rotation diagram analysis, with an average \(\tau=0.3\) compared to \(\tau=1.0\) (excluding CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) with \(E_{u}/k_{B}=110.0\) K). We attribute the higher line opacities estimated from the rotation diagram to the source size not being fixed in the rotation diagram approach (see §4.1.1). Figure 10(b) shows the synthetic spectrum well reproduces the observed CH\({}_{3}\)OH emission, other than significantly over-estimating the line strength of the optically thick, outflow tracing CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) line (see Figure 4) that was excluded from the maximisation of the log-likelihood. This vindicates its exclusion from the analysis, further suggesting that the emission from the CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) line may not occur in the same volume as the other modelled, optically thin lines.
#### 4.1.4 Images of CH\({}_{3}\)OH physical parameters
Maps of the pixel-by-pixel, highest likelihood CH\({}_{3}\)OH model parameters towards MM1 are shown in Figure 11. The highest likelihood centroid velocity map (Figure 11a) shows a clear gradient in velocity across MM1, in agreement with the gradient reported in Paper I from a simpler moment analysis of individual lines. The CH\({}_{3}\)OH column density (Figure 11b) peaks coincident with, and follows the same morphology as, the ALMA 1.05 mm continuum emission. The position of peak CH\({}_{3}\)OH temperature (18:25:44.763 \(-\)12:22:46.00 (J2000); see Figure 11c) is however offset from the ALMA 1.05 mm dust continuum and CH\({}_{3}\)OH column density peaks by \(0.22\arcsec\sim 880\) AU. Whilst the column density and temperature are somewhat degenerate (see Figure 10a), the difference between the offset peak temperature (\(165.5\pm 0.6\) K) and the temperature at the column density peak (\(162.7\pm 0.4\) K) is significantly greater than its propagated statistical error (i.e. \(2.8\pm 0.7\) K). In Paper I, a centimetre counterpart to MM1 (called CM1) was detected for the first time with the VLA at 5.01 and 1.21 cm. The VLA 1.21 cm emission peak was coincident with the ALMA 1.05 mm continuum peak of MM1, whilst the VLA 5.01 cm emission peak of CM1 (18:25:44.773 \(-\)12:22:46.00 (J2000); Paper I) was found to be offset from MM1 by \(0.16\arcsec\sim 640\) AU (Paper I). Indeed, the entire morphology of the region of elevated CH\({}_{3}\)OH temperature matches that of the VLA 5.01 cm emission, with both being elongated along a south-easterly direction relative to MM1 (as shown in Figure 11c). The velocity
Figure 10: (a) One-dimensional and two-dimensional histograms of the posterior distributions for the free parameters in the LTE synthetic spectral modelling (N(CH\({}_{3}\)OH), T\({}_{\rm ex}\), V\({}_{\rm cen}\) and \(\Delta\)V) towards the ALMA 1.05 mm continuum peak. Black contours on the 2D histograms are placed at 2, 1.5 and 1 \(\sigma\), whilst the dashed black lines on the 1D histograms are placed at the 0.16, 0.5 and 0.84 quantiles. (b) Observed spectrum towards the ALMA 1.05 mm continuum peak (black) overplotted with the corresponding highest likelihood model (red) i.e. model parameters defined by the mode of the posterior, printed in panel (a). The six CH\({}_{3}\)OH lines that are modelled and included in the maximisation of the log-likelihood function are marked by vertical grey lines. The excluded CH\({}_{3}\)OH(\(9_{-1,9}-8_{0,8}\)) line with \(E_{u}/k_{B}=110.0\) K is labelled.
width map in Figure 11(d) also shows a similar elongation to the VLA 5.01 cm continuum; however, it does not peak coincident with the CH\({}_{3}\)OH column density peak, the CH\({}_{3}\)OH temperature peak, or the VLA 5.01 cm continuum position. Analysis of the combined centimetre-millimetre spectral energy distribution of CM1/MM1 in Paper I revealed that a free-free emission component was required to describe the VLA 5.01 cm emission. This component was interpreted as a small, 66 au diameter gravitationally trapped hypercompact (HC) Hii region, which directly implies the presence of a source causing ionisation and heating of its surroundings. Given that the morphology of the elevated CH\({}_{3}\)OH temperature aligns with that of the VLA 5.01 cm continuum, and that the position of peak CH\({}_{3}\)OH temperature is only offset from the VLA 5.01 cm position of CM1 by \(0.077\arcsec\sim 280\) au (equivalent to the absolute positional uncertainty of the VLA 5.01 cm data of \(0.07\arcsec\); Paper I), it is reasonable to surmise that both are attributable to the same underlying source/mechanism. We further speculated in Paper I that the VLA 5.01 cm emission being misaligned with respect to the direction of the bipolar outflow emanating from MM1 was suggestive of the ionisation being driven by a second object, perhaps indicative of an unresolved high-mass binary system. In that case, a second possible origin for the offset elevated CH\({}_{3}\)OH temperature could be the presence of an ionised jet driven by the unresolved high-mass binary companion. Outflows and jet-like outflows are noted in the literature to cause heating of molecular gas (e.g. Zhang et al., 2007; Wang et al., 2012).
### Physical properties of MM1...MM5 from dust emission
As mentioned in §1 and §4.1.4, it was shown in Paper I from an analysis of the centimetre-millimetre spectral energy distribution of MM1 that its ALMA 1.05 mm flux was 99.99 per cent dominated by thermal dust emission. MM2...MM5 are also likely dominated by thermal dust emission, given their non-detection in the centimetre at 1.21 and 5.01 cm with the VLA to 5\(\sigma\) limits of 30 and 25 \(\mu\)Jy beam\({}^{-1}\) respectively (Paper I). As such, the masses (M\({}_{\rm gas}\)) of MM1...MM5 may be calculated from their integrated flux densities at 1.05 mm (\(S_{1.05\,\rm mm}\)) assuming isothermal dust emission following Hildebrand (1983):
\[\mathrm{M_{gas}}=\frac{d^{2}\,R\,S_{1.05\,\mathrm{mm}}\,C_{T_{\mathrm{dust}}}}{\kappa_{1.05\,\mathrm{mm}}\,B_{\nu}(T_{\mathrm{dust}})}\,, \tag{5}\]
where \(d\) is the distance, \(R\) is the gas-to-dust mass ratio (here assumed to be 100), \(\kappa_{1.05\,\mathrm{mm}}\) is the dust opacity at 1.05 mm, \(B_{\nu}(T_{\mathrm{dust}})\) is the Planck function, and \(T_{\mathrm{dust}}\) is the dust temperature. We include a correction for dust optical depth, \(C_{T_{\mathrm{dust}}}\), as:
\[C_{T_{\mathrm{dust}}}=\tau_{\mathrm{dust}}/(1-e^{-\tau_{\mathrm{dust}}})\,, \tag{6}\]
where \(\tau_{\mathrm{dust}}\) may be estimated as:
\[\tau_{\mathrm{dust}}=-\ln\left(1-\frac{T_{B}}{T_{\mathrm{dust}}}\right)\,. \tag{7}\]
We estimate \(T_{B}\) as the mean brightness temperature across the size of each source (as listed in Table 3), though this may underestimate \(\tau_{\mathrm{dust}}\) for small-scale structures. From Ossenkopf & Henning (1994), we set \(\kappa_{1.05\,\mathrm{mm}}=1.45\,\mathrm{cm}^{2}\mathrm{g}^{-1}\) for dust grains with thick ice mantles in high density gas (as in Paper I; Cyganowski et al., 2017). We also assume that the dust is well-coupled to the gas such that \(T_{\mathrm{dust}}=T_{\mathrm{gas}}\). The temperatures we assume for each of the sources are discussed below in §4.2.1 and §4.2.2. Assuming spherical geometry, a mean molecular weight per hydrogen molecule (\(\mu_{\mathrm{H_{2}}}\)) of 2.8 (e.g. Kauffmann et al., 2008), and a radius equal to half the geometric mean of the source sizes presented in Table 3, we further calculate the average H\({}_{2}\) column density and H\({}_{2}\) volume density. All derived source properties are listed in Table 6.
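For reference, equations 5-7 translate directly into a short calculation. In the sketch below the 0.3 Jy integrated flux density is an illustrative value only (the measured \(S_{1.05\,\rm mm}\) values are those underlying Table 3), chosen so that the MM1 inputs (\(T_{B}=10.9\) K, \(T_{\rm dust}=163\) K) reproduce \(\tau_{\rm dust}\approx 0.07\) and a mass of order the tabulated 4.3 M\({}_{\odot}\):

```python
import numpy as np

# Constants (cgs)
h, c, k_B = 6.62607015e-27, 2.99792458e10, 1.380649e-16
M_sun, pc = 1.98892e33, 3.0857e18

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def dust_mass(S_jy, T_dust, T_B, d_pc=4000.0, R=100.0, kappa=1.45, nu=2.85e11):
    """Opacity-corrected gas mass (equations 5-7), in solar masses."""
    tau = -np.log(1.0 - T_B / T_dust)    # equation (7)
    C = tau / (1.0 - np.exp(-tau))       # equation (6)
    S_cgs = S_jy * 1e-23                 # Jy -> erg s^-1 cm^-2 Hz^-1
    d = d_pc * pc
    return d**2 * R * S_cgs * C / (kappa * planck(nu, T_dust)) / M_sun

# MM1: tau ~ 0.07 as in Table 6; ~0.3 Jy (illustrative) gives M_gas ~ 4 M_sun
print(dust_mass(S_jy=0.3, T_dust=163.0, T_B=10.9))
```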
#### 4.2.1 Temperature, mass and stability of MM1
In Paper I, we assumed a temperature range for MM1 of \(100-130\) K based on the CH\({}_{3}\)CN(J=12-11) line fitting of Cyganowski et al. (2011a) from \(2.3^{\prime\prime}\) angular resolution SMA data. Our new ALMA data (with \(0.4^{\prime\prime}\) angular resolution) improve on this by a factor of \(\sim\)30 in beam area. We revise our temperature assumption to 163 K, as calculated at the ALMA 1.05 mm continuum peak of MM1 in §4.1.3 from the CH\({}_{3}\)OH synthetic spectra. This is an increase of 33-66 K on the SMA-derived temperature. Observations of the EGO G11.92-0.61 (hereafter G11.92) by both Cyganowski et al. (2011a) and Ilee
Figure 11: Pixel-by-pixel highest likelihood model parameters from the Weeds CH\({}_{3}\)OH synthetic spectra towards MM1: (a) centroid velocity in \(\mathrm{km\,s^{-1}}\), (b) CH\({}_{3}\)OH column density in \(\mathrm{cm^{-2}}\), (c) excitation temperature in Kelvin, and (d) FWHM velocity width in \(\mathrm{km\,s^{-1}}\). ALMA 1.05 mm continuum contours are plotted in panels (a) and (b) at 8, 16, 64 and 200\(\sigma\) (where \(\sigma=0.25\,\mathrm{mJy\,beam^{-1}}\)). The white contour (at 64\(\sigma\)) represents the level above which pixels were fed into weedsp_mcmc, and matches the source size of MM1 (see Table 3). Contours of VLA 5.01 cm emission (from Paper I) are plotted in panels (c) and (d) at the 4 and 5\(\sigma\) levels (where \(\sigma=5.0\,\mu\)Jy beam\({}^{-1}\)). A scale bar and the ALMA beam are plotted in panel (a), and the VLA beam is plotted in panel (c).
et al. (2016) provide us with a point of comparison on temperature. With 2.4\(\arcsec\) angular resolution SMA observations, Cyganowski et al. (2011a) derived 77 and 166 K temperatures for G11.92 from a two-component fit to CH\({}_{3}\)CN emission. Taking the same approach but with 0.5\(\arcsec\) angular resolution SMA observations (a factor of \(\sim\)27 improvement in beam area), Ilee et al. (2016) find the two G11.92 temperature components to be 150 and 230 K. This is an increase of 60-70 K in temperature, attributed to the difference in probed spatial scales between the two data sets. The same resolution-dependent behaviour is observed here for the temperature of G19.01 MM1.
The mass of MM1 is here revised, from M\({}_{\rm gas}=5.4-7.2\) M\({}_{\odot}\) at 130-100 K derived in Paper I, to M\({}_{\rm gas}=4.3\) M\({}_{\odot}\) at 163 K (Table 6). Attributing this gas mass to the now known circumstellar disc around MM1, the stability of the disc can be assessed by estimation of the disc-to-star mass ratio (as done in Paper I), with unstable discs exhibiting typical values of \(M_{\rm gas}/M_{*}>0.1\) (e.g. Kratter & Lodato 2016). Following Paper I, the stellar mass (\(M_{*}\)) can be calculated as \(M_{*}=M_{\rm enc}-M_{\rm gas}\), where \(M_{\rm enc}\) is the enclosed mass within the disc outer radius (estimated as \(\sim 40-70\) M\({}_{\odot}\) from kinematic modelling in Paper I). The disc-to-star mass ratio was found in Paper I to be \(\sim 0.08-0.22\), hinting that the disc could be unstable and be undergoing fragmentation into as-of-yet undetected low-mass stellar companions. It was noted however that should the temperature of MM1 increase on the size-scales probed by the ALMA data (as it did for the SMA observations of G11.92; Cyganowski et al. 2011a; Ilee et al. 2016), then the resulting lower \(M_{\rm gas}\) could push the disc towards stability. Our revised \(M_{\rm gas}/M_{*}\) is 0.07-0.12, indeed indicative of the disc being potentially stable against fragmentation. This approach does however risk underestimating the disc mass, as it does not account for variations in the temperature, dust opacity and dust optical depth in the disc (Johnston et al. 2015; Forgan et al. 2016).
#### 4.2.2 Low-mass protocluster members
As there are insufficient molecular lines detected with ALMA towards MM2...MM5 for the estimation of temperature, we use clump-scale observations of G19.01 to inform our temperature assumptions. Elia et al. (2017) derive \(T_{\rm dust}=17.9\) K from SED fitting of Hi-GAL observations, whilst Schuller et al. (2009) and Wienen et al. (2012) derive \(T_{\rm gas}=19.5\) K from NH\({}_{3}\) hyperfine structure fitting of \(\sim\)40\(\arcsec\) angular resolution Effelsberg 100 m data. Cyganowski et al. (2013) also estimated temperature for the G19.01 clump using NH\({}_{3}\) observations, taken with the Nobeyama 45 m telescope. Their single-temperature fit to the NH\({}_{3}\) spectra yielded a kinetic temperature of \(23.8\pm 0.4\) K, but they found that a two-component fit with T\({}_{\rm kin,cool}=14.7\pm 0.5\) K and T\({}_{\rm kin,warm}=50.2\pm 4.4\) K better represented the observed data (Cyganowski et al. 2013). As the warm \(\sim\)50 K component is likely attributable to the central MYSO (MM1), we exclude it and use the remaining temperature estimates to inform an assumed temperature range for MM2...MM5 of \(15-24\) K. Table 6 lists derived properties for \(T_{\rm dust}=15\) and 24 K. We note that H\({}_{2}\) column and volume densities are not included for MM5 as its size is smaller than a beam.
The gas masses of MM2...MM5 range between \(0.1-1.3\) M\({}_{\odot}\) and \(0.2-2.7\) M\({}_{\odot}\) at 24 and 15 K respectively (see Table 6), and their median radius is \(\sim\)2000 AU (excluding the unresolved MM5; see Table 3). As such, the physical properties of these millimetre sources appear consistent with typical low-mass pre/protostellar cores observed in infrared dark clouds (with masses and radii of a few solar masses and a few thousand au respectively, e.g. Sanhueza et al. 2019; Mori et al. 2021; Redaelli et al. 2022) rather than discs around low-mass YSOs (with masses and radii \(<\)0.1 M\({}_{\odot}\) and a few hundred au respectively, e.g. Huang et al. 2018; Dullemond et al. 2018). Furthermore, their lack of COM emission or signs of outflow activity indicates that they may be candidate low-mass pre-stellar rather than protostellar cores. Interestingly, this result contrasts with the situation in the EGO G11.92\(-\)0.61, where several of the low-mass cores identified by Cyganowski et al. (2017) drive molecular outflows, clearly identifying them as protostellar cores.
The number and spatial distribution of the low-mass cores in G19.01\(-\)0.03 also contrast with G11.92\(-\)0.61, where the 1.05 mm ALMA observations of the ATLASGAL clump by Cyganowski et al. (2017) detected 16 new compact mm continuum sources (for a total of 19, including 3 previously-known massive sources). Compared to G11.92\(-\)0.61, the low-mass sources detected in G19.01\(-\)0.03 are located closer to the most massive member of the protocluster (median projected separation \(\sim\)0.08 pc compared to \(\sim\)0.17 pc) and comprise a lower fraction of the clump mass (\(<\)1% compared to 3-5%; Cyganowski et al. 2017). We consider our results in the context of expectations for thermal Jeans fragmentation. For the simple assumption of a uniform density sphere, the minimum mass necessary for a fragment to be bound, and so collapse, is
\[M_{J}=\left(\frac{5\,c_{s}^{2}}{2\,G}\right)^{3/2}\left(\frac{3}{4\pi\rho} \right)^{1/2}\,, \tag{8}\]
where \(\rho\) is the density, and \(c_{s}\) is the sound speed defined as:
\[c_{s}=\left(\frac{k_{B}\,T}{\mu_{\mathrm{H_{2}}}\,m_{H}}\right)^{1/2}\,, \tag{9}\]
where \(\mu_{H_{2}}\) is the mean molecular weight per hydrogen molecule (equal to 2.8; see also SS4.2), and \(m_{H}\) is the mass of a Hydrogen atom. The length scale for a fragment to be bound, under the same assumptions, is the Jeans radius
\[R_{J}=\left(\frac{15\,c_{s}^{2}}{8\pi\,G\rho}\right)^{1/2}\,, \tag{10}\]
where \(R_{J}\) is the radius of a sphere of mass \(M_{J}\) and the Jeans length (\(\lambda_{J}\)) is \(\lambda_{J}\approx 2R_{J}\). Using the Hi-GAL clump properties (1165 M\({}_{\odot}\) and R=0.175 pc when scaled to D=4 kpc, T=17.9 K; SS1,
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Source & T\({}_{\rm B}\) & T\({}_{\rm dust}\) & \(\tau_{\rm dust}\) & M\({}_{\rm gas}\) & N(H\({}_{2}\)) & n(H\({}_{2}\)) \\ & (K) & (K) & & (M\({}_{\odot}\)) & (cm\({}^{-2}\)) & (cm\({}^{-3}\)) \\ \hline MM1 & 10.9 & 163 & 0.07 & 4.3 & 6.7 & 17.0 \\ MM2 & 3.5 & 24 & 0.16 & 1.3 & 3.2 & 10.3 \\ & & 15 & 0.27 & 2.7 & 6.6 & 21.1 \\ MM3 & 3.0 & 24 & 0.13 & 1.0 & 1.5 & 3.9 \\ & & 15 & 0.22 & 2.1 & 3.1 & 7.8 \\ MM4 & 2.9 & 24 & 0.13 & 0.9 & 1.4 & 3.4 \\ & & 15 & 0.22 & 1.8 & 2.7 & 6.8 \\ MM5 & 2.8 & 24 & 0.13 & 0.1 & – & – \\ & & 15 & 0.21 & 0.2 & – & – \\ \hline \end{tabular} Column 1: Source name. Col. 2: Planck brightness temperature averaged over the source size listed in Table 3. Col. 3: Assumed dust temperature, derived from the LTE synthetic spectra for MM1 (see §4.1.3), and a temperature range of 15 – 24 K assumed for MM2...MM5 based on large scale clump observations (Schuller et al. 2009; Cyganowski et al. 2013; Elia et al. 2017). Col. 4: Dust optical depth, calculated from the average \(T_{B}\) in column 2. Col. 5: Opacity-corrected gas mass, in solar masses. Col. 6: H\({}_{2}\) column density, and Col. 7: H\({}_{2}\) volume density, both assuming spherical symmetry.
\end{table}
Table 6: Derived properties for mm continuum sources.
Elia et al., 2017), we calculate the thermal Jeans radius and thermal Jeans mass of the G19.01\(-\)0.03 clump to be \(R_{J}\approx 0.012\) pc and \(M_{J}\approx 0.36\) M\({}_{\odot}\) respectively. Using the ATLASGAL clump properties (926 M\({}_{\odot}\) and R=0.358 pc for D=4.0 kpc, T=19.5 K; §1, Schuller et al., 2009), \(R_{J}\approx 0.041\) pc and \(M_{J}\approx 1.36\) M\({}_{\odot}\). The linear resolution of our ALMA data is \(\sim\)0.008 pc, and the 5\(\sigma\) detection limit (converted to mass) is \(0.1-0.3\) M\({}_{\odot}\) at \(24-15\) K, so we are sensitive to the relevant fragmentation scales. With median masses of \(1.0-2.0\) M\({}_{\odot}\) at \(24-15\) K, respectively, MM2...MM5 are of order the thermal Jeans mass, consistent with the results of Palau et al. (2015) from their study of mm fragments within the inner 0.1 pc of \(\sim\)20 massive dense cores. The number of millimetre sources detected in G19.01 is however notably lower than expected for thermal Jeans fragmentation. For the 13% core formation efficiency (CFE) found by Palau et al. (2015), \(N_{\rm Jeans}=(M_{\rm clump}\,{\rm CFE})/M_{J}\sim 420\) and \(\sim 90\) for the Hi-GAL and ATLASGAL-based estimates, respectively. Strong magnetic fields are expected to suppress fragmentation, and recent observational work found a tentative correlation between the number of mm fragments and the mass-to-flux ratio for a sample of 18 massive dense cores with polarization data (Palau et al., 2021, and references therein). Observations of polarized dust emission in G19.01\(-\)0.03 are needed to assess whether a dynamically important magnetic field contributes to the low level of observed fragmentation.
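These Jeans estimates follow directly from equations 8-10; a minimal sketch of the calculation, reproducing the numbers quoted above from the published clump properties, is:

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.380649e-16, 1.6726e-24  # cgs
M_sun, pc = 1.98892e33, 3.0857e18
mu_H2 = 2.8  # mean molecular weight per hydrogen molecule

def jeans(M_clump_msun, R_clump_pc, T):
    """Thermal Jeans mass (eq. 8) and radius (eq. 10) for a uniform-density sphere."""
    cs2 = k_B * T / (mu_H2 * m_H)  # sound speed squared, equation (9)
    rho = M_clump_msun * M_sun / ((4.0 / 3.0) * np.pi * (R_clump_pc * pc)**3)
    M_J = (5.0 * cs2 / (2.0 * G))**1.5 * (3.0 / (4.0 * np.pi * rho))**0.5
    R_J = (15.0 * cs2 / (8.0 * np.pi * G * rho))**0.5
    return M_J / M_sun, R_J / pc

print(jeans(1165.0, 0.175, 17.9))  # Hi-GAL clump: ~(0.36 M_sun, 0.012 pc)
print(jeans(926.0, 0.358, 19.5))   # ATLASGAL clump: ~(1.36 M_sun, 0.041 pc)
print(1165.0 * 0.13 / 0.36, 926.0 * 0.13 / 1.36)  # N_Jeans ~ 420 and ~90
```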
#### 4.2.3 Virial analysis of MM4
The virial parameter (\(\alpha_{vir}\)) is a commonly used diagnostic for determining the boundedness of structures. To qualify as a stellar progenitor, a source must be in a gravitationally bound state. This is satisfied by \(\alpha_{vir}<2\) in the absence of pressure terms such as magnetic energy density. Furthermore, a core is said to be in hydrostatic equilibrium when \(\alpha_{vir}\sim 1\), or to be gravitationally unstable when \(\alpha_{vir}<1\). The virial parameter is often expressed as follows:
\[\alpha_{vir}=\frac{M_{vir}}{M_{\rm core}}\,, \tag{11}\]
where \(M_{vir}\) is the virial mass, and \(M_{\rm core}\) is the observed core mass. The virial mass for a spherical core may be calculated following MacLaren et al. (1988):
\[M_{vir}=3\left(\frac{5-2n}{3-n}\right)\frac{\sigma_{NT}^{2}R}{\rm G}\,, \tag{12}\]
where \(n\) is the index of the density profile of the source (where \(\rho\propto r^{-n}\)), \(R\) is the radius, and G is the gravitational constant. The non-thermal velocity dispersion, \(\sigma_{NT}\), may be calculated following:
\[\sigma_{NT}^{2}=\sigma_{\rm obs}^{2}-\sigma_{\rm th}^{2}=\sigma_{\rm obs}^{2} -\frac{k_{B}\,T_{\rm dust}}{\mu\,m_{H}}\,, \tag{13}\]
where \(\sigma_{\rm obs}\) and \(\sigma_{\rm th}\) are the observed and thermal velocity dispersions respectively (Bertoldi & McKee, 1992), \(T_{\rm dust}\) is the dust temperature (assumed to equal the gas temperature), \(\mu\) is the molecular weight of the molecular tracer, and \(m_{H}\) is the mass of a Hydrogen atom.
As discussed in §3.2.2 and §3.2.5, of the cold and dense gas tracers we detect it is only the emission from DCN (with \(\mu=28\)) that appears morphologically similar to the ALMA 1.05 mm dust continuum emission, with \(\geq 5\sigma\) detections towards MM1, MM2 and MM4. As we cannot safely disentangle the DCN emission of MM2 from the surrounding envelope emission (see Figure 4), we focus our virial analysis on MM4 as it appears isolated. As we aim to test whether MM4 could be a prestellar core, we assume a flat density profile where \(n=0\) (e.g. Ward-Thompson et al., 1994). DCN exhibits nuclear quadrupole hyperfine structure, with seven lines in the \(J=4-3\) transition with velocity offsets from the main component between \(-1.6\) and 2.1 km s\({}^{-1}\) (Muller et al., 2001) and a mean velocity offset of 0.1 km s\({}^{-1}\) (within our 1 km s\({}^{-1}\) spectral resolution). The effect of these components is most important in the lower energy DCN transitions such as \(J=1-0\) or \(J=2-1\) (e.g. Parise et al., 2009), with the strongest central component dominating in transitions such as \(J=4-3\). We fit the hyperfine structure lines on a pixel-by-pixel basis across MM4 using the hfs fitting routine of the class software. We set \(\sigma_{\rm obs}\) to the mean velocity dispersion across the FWHM size of MM4 of \(0.58\pm 0.03\) km s\({}^{-1}\). Taking the radius as half the geometric mean of the deconvolved size of MM4 listed in Table 3 (equal to \(\sim\)2000 AU), the virial mass of MM4 is calculated to be \(3.8\pm 0.4\) M\({}_{\odot}\). With \(M_{\rm core}\) calculated from the 1.05 mm dust emission (1.84 M\({}_{\odot}\) at 15 K, or 0.91 M\({}_{\odot}\) at 24 K; see Table 6), this yields \(\alpha_{vir}=2.0-4.1\) (\(\pm 0.4\)). This indicates that MM4 could be an unbound structure, perhaps a knot in the red-shifted outflow lobe driven by MM1, likely to disperse back into the ISM if internal pressures continue to dominate over gravity. On the other hand, MM4 could be on the cusp of boundedness at the lower end of our temperature range, possibly indicative of the early stages of a prestellar core.
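Equations 11-13 can be evaluated directly; the short sketch below, assuming a representative gas temperature of 20 K within our adopted \(15-24\) K range, reproduces the MM4 values quoted above:

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.380649e-16, 1.6726e-24  # cgs
M_sun, au = 1.98892e33, 1.496e13

def virial_mass(sigma_obs_kms, T, R_au, mu_tracer=28.0, n=0.0):
    """Virial mass (eq. 12) with the non-thermal dispersion of equation (13)."""
    sigma_obs = sigma_obs_kms * 1e5          # km/s -> cm/s
    sigma_th2 = k_B * T / (mu_tracer * m_H)  # thermal dispersion of the tracer
    sigma_NT2 = sigma_obs**2 - sigma_th2
    return 3.0 * (5.0 - 2.0 * n) / (3.0 - n) * sigma_NT2 * (R_au * au) / G / M_sun

# MM4: sigma_obs = 0.58 km/s from the DCN hfs fits, R ~ 2000 au, flat profile (n=0)
M_vir = virial_mass(0.58, 20.0, 2000.0)   # ~3.8 M_sun
print(M_vir, M_vir / 1.84, M_vir / 0.91)  # alpha_vir ~ 2.0-4.1 (eq. 11)
```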
### Chemistry of MM1 in context
As discussed in Sections 1 and 3.2.1, the detection of a line forest towards MM1 with ALMA (see Figure 3) reveals a chemical richness that was lacking in the SMA observations of Cyganowski et al. (2011a). We here estimate the abundances of these newly detected molecular species towards MM1, to see where MM1 sits in relation to other sources in the literature.
#### 4.3.1 Estimation of likely column densities and abundances
In Sections 4.1.2 and 4.1.3, we produced LTE synthetic spectra to evaluate the highest likelihood column density, temperature, centroid velocity and velocity width of CH\({}_{3}\)OH towards MM1. However, the other detected species listed in Table 4 do not have enough identified transitions for the constraint of four free parameters, nor for the construction of rotation diagrams as in Section 4.1.1. We instead evaluate order of magnitude estimates for the column densities of these species (CH\({}_{3}\)OCH\({}_{3}\), \({}^{13}\)CH\({}_{3}\)OH, CH\({}_{3}\)CH\({}_{2}\)OH, CH\({}_{3}\)OCHO, OCS, H\({}_{2}\)CO, CH\({}_{3}\)CHO, H\({}_{2}\)CS, CH\({}_{3}\)CH\({}_{2}\)CN, NH\({}_{2}\)CHO, HC\({}_{3}\)N and \({}^{13}\)CS) at the ALMA 1.05 mm continuum peak of MM1 by keeping only the column density as a free parameter (e.g. Csengeri et al., 2019) in the emcee sampler with weedsp_mcmc. The synthetic spectra for each molecular species are evaluated individually. As in Section 4.1.3, the source size is fixed to the geometric mean of the \(\sim\)278 GHz synthesised beam (\(\sim\)0.5\(\arcsec\)). The temperature, centroid velocity and velocity width are set to the highest likelihood values evaluated at the ALMA 1.05 mm continuum peak from the CH\({}_{3}\)OH spectra (i.e. 162.7 K, 59.7 km s\({}^{-1}\) and 6.4 km s\({}^{-1}\) respectively, see Figure 10). As discussed in Paper I, for molecular lines associated with MM1 (i.e. not outflow-tracing) the extent of the line emission is generally consistent across species and with the extent of the 1.05 mm continuum emission (e.g. Fig. 3 of Williams et al., 2022). For molecular species with transitions in multiple spectral windows, the synthetic spectra were evaluated in the spectral window with the most transitions. The resulting highest likelihood column density was then used to generate synthetic spectra for the other spectral windows.
The highest likelihood column density of each species is listed in
Table 7, and Figure 12 shows the combined LTE synthetic spectrum for the two wide-band spectral windows for all identified molecular species. On the whole the combined model reproduces the observed emission well. Some molecular lines appear in the synthetic spectrum that were not listed in Table 4 (or labelled in Figure 3 or Figure 12), which is attributed to overlapping lines from the same and/or multiple species making their unequivocal identification in Section 3.2.1 difficult. Some of the identified transitions have their emission over-estimated. These include the CH\({}_{3}\)OH(\(6_{1,5}-5_{1,4}\)) and CH\({}_{3}\)OH(\(10_{1,10}-9_{0,9}\)) lines (with \(v_{\rm rest}=292.67291\) and \(292.51744\) GHz respectively), the former being of an outflow-tracing nature (see Figure 4) with an apparently low integrated intensity in the rotation diagram of Figure 9(a), and the latter being the only transition identified in the tuning to be in the first torsionally excited state. The heavily blended H\({}_{2}\)CO(\(4_{2,3}-3_{2,2}\)) and CH\({}_{3}\)OH(\(15_{1,0}-14_{2,0}\)) lines (with \(v_{\rm rest}=291.23777\) and \(291.24057\) GHz respectively) also have their intensity over-estimated, which we attribute to the line blending and the limitations of linearly combining independent models of different species. The H\({}_{2}\)CO(\(4_{2,2}-3_{2,1}\)) line (\(v_{\rm rest}=291.94807\) GHz) also has its intensity over-estimated, attributed to this line also being heavily blended. Finally, the CH\({}_{3}\)OCH\({}_{3}\)(\(16_{1,16}-15_{0,15}\)) line (\(v_{\rm rest}=292.41225\) GHz) is over-estimated despite all other CH\({}_{3}\)OCH\({}_{3}\) lines in the tuning having their intensities well reproduced, and it being shown in Paper I to exhibit compact emission consistent with the other identified lines.
Fractional abundances of each species with respect to CH\({}_{3}\)OH and H\({}_{2}\) are evaluated from the ratio of their respective column densities. We take the CH\({}_{3}\)OH column density at the ALMA 1.05 mm continuum peak from the synthetic spectra in Section 4.1.3, and the H\({}_{2}\) column density found in Section 4.2 from the ALMA 1.05 mm continuum. As listed in Table 7, we report fractional abundances of the identified species ranging from \(10^{-1}-10^{-4}\) with respect to CH\({}_{3}\)OH, and \(10^{-6}-10^{-9}\) with respect to H\({}_{2}\).
towards low-mass COM-emitting sources, and high CH\({}_{3}\)OH column density and temperature towards high-mass COM-emitting sources. With beam dilution a potential contributing factor to this, Öberg et al. (2014) also collate a handful of resolved SMA observations, noting the distribution to be more continuous rather than separated across mass when including resolved observations. With the more recent ALMA observations collated here, this behaviour is even more pronounced, with the chemistry of low- and high-mass sources appearing similar in this set of interferometric observations. A similar trend has recently been noted by Chen et al. (2023), who compare the abundance ratios of oxygen-bearing COMs towards their ALMA sample of 14 high-mass sources with interferometric observations of 5 low-mass sources from the literature. The Chen et al. (2023) sample includes G19.01-0.03 MM1, and their values for the CH\({}_{3}\)OCH\({}_{3}\), CH\({}_{3}\)OCHO and CH\({}_{3}\)CHO column densities towards MM1 - derived using LTE synthetic spectra - are within a factor of \(2-4\) of the order of magnitude estimates in Table 7. We note that, for consistency, we do not include datapoints from Chen et al. (2023) in Figure 13 because they derive CH\({}_{3}\)OH column densities from modelling of the CH\({}_{3}^{18}\)OH isotopologue and an assumed \({}^{16}\)O/\({}^{18}\)O isotope ratio, and do not derive T\({}_{\rm CH_{3}OH}\) values for most (13/14) of the sources in their sample.
It is worth emphasising that comparisons of column densities are subject to a number of varying effects including beam sizes, source distance and beam dilution. The sources collated here, despite all being observed with ALMA, appear to trace different spatial scales across mass. The low-mass data probe physical scales of \(13-140\) au (physical scale of the synthesised beam; median and standard deviation of 60 and 40 au respectively), a factor of up to two orders of magnitude smaller than the \(400-2600\) au spatial scales (physical scale of the synthesised beam; median and standard deviation of 990 and 870 au respectively) probed by the high-mass data. It is suggested by van Gelder et al. (2022) that methanol production could be less efficient towards MYSOs due to their warmer prestellar phase and/or shorter prestellar lifetimes. Nazari et al. (2023) model the methanol emission towards high-mass sources with different configurations. Comparing their results to a similar study of theirs towards low-mass sources (Nazari et al., 2022), they find that optically thick millimetre dust, as well as disc shadowing causing a reduction in environment temperature, are more effective in lowering the methanol emission towards low-mass YSOs than they are towards MYSOs with high luminosity (\(10^{4}-10^{5}\) L\({}_{\odot}\)). It is therefore further suggested by Nazari et al. (2023) that other factors, such as the presence of Hii regions, could contribute to the lowering of methanol emission towards some MYSOs. It is possible that a combination of these factors could be contributing to the methanol column densities, temperature and fractional abundances appearing so similar across mass scales in Figure 13. It may also be explained by considering the conditions under which these COMs are formed. Recent works have suggested that the chemical similarity between various sources may be explained by the COMs being formed under similar conditions (e.g. Quenard et al., 2018). Across mass scales this would correspond to the ices of the pre-stellar phase (e.g. Coletta et al., 2020; Nazari et al., 2022; Chen et al., 2023).
In addition to CH\({}_{3}\)OCH\({}_{3}\) and CH\({}_{3}\)CHO, Csengeri et al. (2019) also report the fractional abundances of CH\({}_{3}\)OCHO, CH\({}_{3}\)CH\({}_{2}\)OH and CH\({}_{3}\)CH\({}_{2}\)CN with respect to CH\({}_{3}\)OH towards three positions around the high-mass hot core G328.2551-0.5321. In every instance, the corresponding molecular abundance reported here towards MM1 is of the same order as those towards G328.2551-0.5321. Cyganowski et al. (2011a) find a CH\({}_{3}\)CN column density towards MM1 of \(1.7\times 10^{16}\) cm\({}^{-2}\) with their SMA 1.3 mm observations (with 32\(\times\) larger beam area than our ALMA data). They also find that the CH\({}_{3}\)CN spectrum is best-fit by a model with a CH\({}_{3}\)CN-emitting region of size 0.6\(\arcsec\) (equivalent to 2400 AU at D = 4 kpc), intermediate between our ALMA beam (Table 1) and the size of the MM1 1.05 mm dust continuum from Table 3. Nazari et al. (2022) present CH\({}_{3}\)CN column densities towards 37 high-luminosity, potentially high-mass, protostars; 13 have column densities of a similar order to MM1 (Cyganowski et al., 2011a), whilst Law et al. (2021) report a CH\({}_{3}\)CN column density for G10.6 HC 1 that is an order of magnitude larger than that of MM1.
Figure 13: (a) CH\({}_{3}\)OH column density, (b) abundance of CH\({}_{3}\)OCH\({}_{3}\) with respect to CH\({}_{3}\)OH, and (c) abundance of CH\({}_{3}\)CHO with respect to CH\({}_{3}\)OH, all plotted against CH\({}_{3}\)OH rotational temperature, of both low-mass (green) and high-mass (blue) sources from the literature (inspired by Figs 7 & 8 of Öberg et al., 2014). Our G19.01 data are plotted in yellow. All sources were observed with ALMA: the synthesised beams of the collated low- and high-mass literature observations range from \(13-140\) au and \(400-2600\) au respectively. High-mass sources were collated from Csengeri et al. (2019), Molet et al. (2019), Law et al. (2021) and Baek et al. (2022), and low-mass sources were collated from Lee et al. (2019), Manigand et al. (2020), Bianchi et al. (2022), Chahine et al. (2022) and Hsu et al. (2022). The G19.01 error bars are smaller than the marker size, and not all literature sources have error bars.
The comparatively weak, sparse COM emission towards MM1 in the SMA observations of Cyganowski et al. (2011a) led to the suggestion that MM1 was a relatively young source. Now, with a rich line forest detected with ALMA, and fractional abundances of COMs similar to other sources in the literature, it is clear that MM1 is not especially chemically young and in fact appears to be a typical hot core.
#### 4.3.3 \({}^{12}\)C/\({}^{13}\)C isotope ratio
With the identification of both the \({}^{12}\)C and \({}^{13}\)C isotopologues of CH\({}_{3}\)OH, we can estimate the \({}^{12}\)C/\({}^{13}\)C isotope ratio from the ratio of their column densities (see Table 7). This is valid if the molecular transitions are optically thin, described by the same rotational temperature, and exhibit similar spatial emission extents (e.g. Wirstrom et al., 2011). Towards the ALMA 1.05 mm continuum peak of MM1, we estimate that \({}^{12}\)C/\({}^{13}\)C \(\approx\) 6. This is a factor of \(\sim\)7 lower than expected from the relations of Wilson & Rood (1994) and Milam et al. (2005) for galactic molecular clouds (where \({}^{12}\)C/\({}^{13}\)C \(\simeq\) 39 - 46 at the MM1 galactocentric distance of 4.4 kpc).
Though low-J transitions of CH\({}_{3}\)OH are commonly optically thick (e.g. Ginsburg et al., 2017), 5 out of the 6 lines included in our LTE line modelling (see §4.1.3) are relatively high-J transitions (see Table 4). The estimated opacities of the 6 lines are \(\tau\sim\) 0.1 to 0.6 based on the class modelling (with an average of 0.3; see §4.1.3), and \(\tau\sim\) 0.4 to 2.1 from the rotation diagram analysis (with an average of 1.0; see §4.1.1). The opacity of the \({}^{13}\)CH\({}_{3}\)OH (\(3_{2,2}-4_{1,3}\)) line reported by class is \(\tau=\) 1.0. It is worth noting that the LTE modelling reproduces the observed emission of the highest-J and lowest-\(\tau\) \({}^{12}\)CH\({}_{3}\)OH line very well (i.e. CH\({}_{3}\)OH(\(23_{4,19}-22_{5,18}\)) with \(\tau=\) 0.1 and 0.4 from class and the rotation diagram respectively), and that the spatial extent of emission from this line is consistent with that of \({}^{13}\)CH\({}_{3}\)OH(\(3_{2,2}-4_{1,3}\)) (see Fig.3 of Paper I). To test for dependence on the assumed source size, we re-ran our weedsp_mcmc analysis at the 1.05 mm continuum peak for the case of unresolved emission (source size 0.2\(\arcsec\), §4.1.1) and re-estimated the \({}^{13}\)CH\({}_{3}\)OH column density as described in Section 4.3.1 using the resulting parameters. We emphasise that the best-fitting model with this smaller source size is a poorer representation of the observed \({}^{12}\)CH\({}_{3}\)OH emission than the model described in Sections 4.1.3-4.1.4, and we use it only to check the effect of the assumed source size on the \({}^{12}\)C/\({}^{13}\)C ratio. From the models with a 0.2\(\arcsec\) source size, we derive a \({}^{12}\)C/\({}^{13}\)C ratio of \(\sim\)7.5, indicating that the result of low \({}^{12}\)C/\({}^{13}\)C is robust to the assumption of resolved or unresolved emission.
There are suggestions in the literature that \({}^{12}\)C/\({}^{13}\)C may be generally lower on the smaller scales of high-mass hot cores (e.g. IRAS 20126+4104, G31.41\(-\)0.31, AFGL 4176, W3 IRS4 and G10.6\(-\)0.4; Palau et al., 2017; Beltrán et al., 2018; Bøgelund et al., 2019; Mottram et al., 2020; Law et al., 2021, respectively) and low-mass hot corinos (Hsu et al., 2022) than on the larger scale of molecular clouds. Proposed chemical explanations include cold temperature enhancement of \({}^{13}\)C in H\({}_{2}\)CO formation on dust grains (Mottram et al., 2020) and the destruction of HC\({}_{3}\)N in chemical reactions (Bøgelund et al., 2019). The possible contribution of optical depth effects has also been considered (Beltrán et al., 2018; Bøgelund et al., 2019; Law et al., 2021). Beltrán et al. (2018) and Bøgelund et al. (2019) conduct synthetic-spectra analyses of CH\({}_{3}\)CN and HC\({}_{3}\)N emission respectively (taking similar approaches to that detailed here towards G19.01), and comment that optically thick emission could artificially lower their observed \({}^{12}\)C/\({}^{13}\)C ratios. Law et al. (2021) conclude that optical depth effects are unlikely to drive their results as \(\tau\sim\) 0.1 - 0.4 for their CH\({}_{3}\)OH lines (derived using a rotation diagram analysis), but note they cannot definitively discount optical depth effects in unresolved structures contributing to their results.
Similarly to Law et al. (2021), our derived line opacities are not suggestive of optical depth being the primary explanation for the low \({}^{12}\)C/\({}^{13}\)C ratio observed in G19.01, but we cannot rule out optical depth effects contributing to this result. We emphasise that our observations provide only a tentative addition to the emerging picture of low \({}^{12}\)C/\({}^{13}\)C isotope ratios towards hot cores. Observations of additional and optically thin transitions of the \({}^{12}\)CH\({}_{3}\)OH and \({}^{13}\)CH\({}_{3}\)OH isotopologues, as well as the isotopologues of other molecular species, would be required for confirmation.
## 5 Conclusions
In this paper (Paper II), we have presented a study of the physical properties and chemistry of the high-mass (proto)star G19.01-0.03 MM1 and its environment, using sub-arcsec-resolution ALMA 1.05 mm and VLA 1.21 cm data. Our main findings are as follows:
(i) A (sub)millimetre forest of molecular line emission is observed towards MM1. We analyse 47 lines from 11 different species (43 of which were identified in our ALMA 1.875 GHz wide-bands alone), including COMs such as CH\({}_{3}\)OCHO, CH\({}_{3}\)CH\({}_{2}\)CN, CH\({}_{3}\)CH\({}_{2}\)OH, CH\({}_{3}\)OCH\({}_{3}\), NH\({}_{2}\)CHO and CH\({}_{3}\)OH.
(ii) A Bayesian analysis (using our publicly available WeedsPy_MCMC scripts) of CH\({}_{3}\)OH LTE synthetic spectra towards the MM1 dust peak returns a highest-likelihood column density and temperature of \((2.22\pm 0.01)\times 10^{18}\) cm\({}^{-2}\) and \(162.7^{+0.3}_{-0.5}\) K respectively. These are consistent with high-mass hot core sources from the literature, and with the values of \((2.0\pm 0.4)\times 10^{18}\) cm\({}^{-2}\) and \(166\pm 9\) K found from our opacity-corrected rotation diagram analysis.
(iii) The peak CH\({}_{3}\)OH temperature (\(165.5\pm 0.6\) K) is offset from the MM1 dust peak by \(0.22\arcsec\sim 880\) AU. The morphology of the region of elevated temperature is aligned with the VLA 5.01 cm continuum emission presented in Paper I.
(iv) We report abundances of all identified molecular species (CH\({}_{3}\)OCH\({}_{3}\), \({}^{13}\)CH\({}_{3}\)OH, CH\({}_{3}\)CH\({}_{2}\)OH, CH\({}_{3}\)OCHO, OCS, H\({}_{2}\)CO, CH\({}_{3}\)CHO, H\({}_{2}\)CS, CH\({}_{3}\)CH\({}_{2}\)CN, NH\({}_{2}\)CHO, HC\({}_{3}\)N and \({}^{13}\)CS) of \(10^{-1}-10^{-4}\) and \(10^{-6}-10^{-9}\) with respect to CH\({}_{3}\)OH and H\({}_{2}\) respectively, consistent with high-mass hot core sources from the literature.
(v) The known bipolar molecular outflow driven by MM1 is traced by thermal CH\({}_{3}\)OH and H\({}_{2}\)CO emission and by newly-identified NH\({}_{3}\)(3,3) and 278.3 GHz Class I CH\({}_{3}\)OH maser candidates, strengthening the connection of these types of masers with outflows driven by MYSOs. We identify a total of 50 \(>\)5\(\sigma\) NH\({}_{3}\)(3,3) emission spots across 8 compact emission groups, which are in the outer lobes of the outflow and spatially and kinematically coincident with 44 GHz Class I CH\({}_{3}\)OH masers. Candidate 25 GHz CH\({}_{3}\)OH 5(2,3)-5(1,4) maser emission is also detected towards the outflow, offset from MM1 by 6\(\arcsec\sim\) 24,000 AU.
(vi) Four new millimetre continuum sources (MM2...MM5) are detected between 0.03 - 0.12 pc from MM1, but all are undetected in NH\({}_{3}\)(\(J\)=\(K\)=1,2,3,5,6,7) and 25 GHz CH\({}_{3}\)OH emission. Two of these sources, MM2 and MM4, are associated with \(>\) 5\(\sigma\) DCN(4-3) emission, a tracer of cold and dense gas.
(vii) The median mass of MM2...MM5 (assuming isothermal dust emission) is 1.0-2.0 M\({}_{\odot}\) for a temperature range of \(24-15\) K, and their median radius is \(\sim\) 2000 AU. Since none of these sources is associated with any masers or shows any sign of outflows, they are candidate low-mass pre-stellar companions to MM1.
In all, our results place G19.01-0.03 MM1 as a typical high-mass hot core source, with four low-mass, potentially pre-stellar, cores within a projected linear distance of 0.12 pc. Both these findings are in contrast with the results of previous, lower-resolution SMA observations (Cyganowski et al., 2011a), which identified G19.01-0.03 MM1 as a candidate for isolated high-mass star formation, potentially at a very early stage due to its lack of chemical complexity. Our new results show that G19.01-0.03 MM1 is in fact not especially chemically young, and that its physical and chemical properties are typical of other high-mass hot core sources in the literature. Intriguingly, our analysis of the ALMA CH\({}_{3}\)OH lines reveals a region of elevated CH\({}_{3}\)OH temperature that is aligned with VLA 5.01 cm continuum emission attributed to free-free emission, which is offset from the thermal dust emission peak (as discussed in detail in Paper I). This positional coincidence provides tentative support for the possibility, suggested in Paper I, that the 5.01 cm continuum might be associated with an unresolved high-mass binary companion to MM1. Our ALMA Cycle 6 follow-up study, tuned to probe \(\sim\)0.09\(\arcsec\) (\(\sim\)360 au) scales, will be a next step in testing this possibility with higher angular resolution observations.
## Acknowledgements
We thank the anonymous referee for their constructive report and comments that helped improve the quality of this paper. GMW and CJC thank Ian Bonnell for helpful discussions. GMW acknowledges support from the UK's Science and Technology Facilities Council (STFC) under ST/W00125X/1. CJC acknowledges support from the UK's STFC under ST/M001296/1. PN acknowledges support by grant 618.000.001 from the Dutch Research Council (NWO) and support by the Danish National Research Foundation through the Center of Excellence "InterCat" (Grant agreement no.: DNRF150). This research has made use of: NASA's Astrophysics Data System Bibliographic Services, gildas ([https://www.iram.fr/IRAMFR/GILDAS](https://www.iram.fr/IRAMFR/GILDAS)) and python packages astropy (Astropy Collaboration et al., 2013), astrodendro (Rosolowsky et al., 2008), aplpy ([http://aplpy.github.com](http://aplpy.github.com)), cmocean (Thyng et al., 2016), corner (Foreman-Mackey, 2016), emcee (Foreman-Mackey et al., 2013), matplotlib (Hunter, 2007), numpy (Harris et al., 2020), pandas (McKinney, 2010), scipy (Virtanen et al., 2020), scikit-image (Van der Walt et al., 2014), and analysisUtils (Hunter et al., 2023). This paper makes use of the following ALMA data: ADS/JAO.ALMA#2013.1.00812.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. For the purposes of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
## Data Availability
The ALMA 1.05 mm dust continuum, VLA 1.21 and 5.01 cm continuum images, and the line cubes for the two broad (1.875 GHz bandwidth) ALMA spectral windows are available at doi:10.5281/zenodo.8059531. These data underlie both this article and Williams et al. (2022). The scripts used to conduct the Bayesian analysis of LTE synthetic spectra are packaged together in a reusable form we call WeedsPy_MCMC, and are made freely available at [https://github.com/gwen-williams/WeedsPy_MCMC](https://github.com/gwen-williams/WeedsPy_MCMC).
# Quantum Simulation of Boson-Related Hamiltonians: Techniques, Effective Hamiltonian Construction, and Error Analysis

Bo Peng, Yuan Su, Daniel Claudino, Karol Kowalski, Guang Hao Low, Martin Roetteler

arXiv:2307.06580v1 (2023-07-13), [http://arxiv.org/abs/2307.06580v1](http://arxiv.org/abs/2307.06580v1)
###### Abstract
Elementary quantum mechanics proposes that a closed physical system consistently evolves in a reversible manner. However, control and readout necessitate the coupling of the quantum system to the external environment, subjecting it to relaxation and decoherence. Consequently, system-environment interactions are indispensable for simulating physically significant theories. A broad spectrum of physical systems in condensed-matter and high-energy physics, vibrational spectroscopy, and circuit and cavity QED necessitates the incorporation of bosonic degrees of freedom, such as phonons, photons, and gluons, into optimized fermion algorithms for near-future quantum simulations. In particular, when a quantum system is surrounded by an external environment, its basic physics can usually be simplified to a spin or fermionic system interacting with bosonic modes. Nevertheless, troublesome factors such as the magnitude of the bosonic degrees of freedom typically complicate the direct quantum simulation of these interacting models, necessitating the consideration of a comprehensive plan. This strategy should specifically include a suitable fermion/boson-to-qubit mapping scheme to encode sufficiently large yet manageable bosonic modes, and a method for truncating and/or downfolding the Hamiltonian to the defined subspace for performing an approximate but highly accurate simulation, guided by rigorous error analysis. In this paper, we aim to provide such an exhaustive strategy, focusing on encoding and simulating certain bosonic-related model Hamiltonians, inclusive of their static properties and time evolutions. Specifically, we emphasize two aspects: (1) the discussion of recently developed quantum algorithms for these interacting models and the construction of effective Hamiltonians, and (2) a detailed analysis regarding a tightened error bound for truncating the bosonic modes for a class of fermion-boson interacting Hamiltonians.
###### Contents
* I Introduction
* II Qubit Mappings and Quantum Simulation Techniques
* II.1 Boson-to-Qubit mapping
* II.1.1 Boson-Preserving Hamiltonian
* II.1.2 Spin-Boson Hamiltonian
* II.1.3 Boson-Fermion Hamiltonian
* II.2 Prepare Initial Bosonic States
* II.3 Capture Ground States
* II.4 Trotterization
* II.5 Qubitization
* III Effective Hamiltonian Construction Through Coupled-Cluster Approach
* IV Error Analysis of Truncating Bosonic Mode
* IV.1 Technical conditions
* IV.2 State truncation
* IV.3 Hamiltonian truncation
* IV.4 Multiple bosonic modes
* IV.5 Time-dependent Hamiltonians
* V Conclusion
* VI Acknowledgments
* A Quantum Simulation of Spin-Boson Model
* B Unitary Flow for Many-Body-Localization
* C Trotterized D-UCCSD Ansatz for Three-Level Two-Boson Model System
## I Introduction
Understanding the physics of open quantum systems, which involve crucial interactions between a quantum system and its environment, is a highly non-trivial and sometimes challenging problem [26]. The past few decades have seen monumental advances in classical computing for simulating quantum systems in various fields, including quantum mechanics, molecular dynamics, quantum chemistry, condensed matter physics, and quantum field theory [10; 85; 123; 150]. However, despite this progress, the rapidly increasing computational cost makes it unrealistic to fully treat quantum many-body effects, particularly when associated with strong system-environment interactions, using classical computing [101; 114].
Specifically, common approaches such as the Lindblad master equation [67], Redfield equation [26], and quantum Monte Carlo methods [58], which are often employed to treat weak interactions in the open quantum systems, rely on certain approximations that may not be valid in all situations or for all types of systems. For strong system-environment interactions, one would usually resort to methods like non-Markovian quantum master equations [25], exact diagonalization methods [155], path integral methods [53], hierarchical equations of motion [144], tensor network approaches [137], and variational approaches [147]. Nevertheless, these methods can be computationally demanding and may require additional approximations or simplifications. Consequently, there is a growing need for novel techniques and algorithms that can efficiently simulate quantum systems with both weak and strong system-environment interactions.
This dilemma can be solved using computational devices that build upon the laws of quantum mechanics themselves. Indeed, the efficient simulation of quantum systems is one of the main motivations for Feynman and others to propose the idea of quantum computers [54; 108]. Over the years, many quantum simulation algorithms have been developed based on product formulas [21; 101] as well as more advanced techniques [20; 104] that can be deployed on a scalable quantum computer. Quantum simulations have also recently emerged as a promising playground for noisy intermediate-scale quantum (NISQ) demonstrations of quantum advantage over classical computing [127; 132; 44; 59; 80; 14; 45].
Nevertheless, quantum simulations of open quantum systems with strong system-environment interaction are not straightforward and require careful consideration of various aspects (Figure 1). For example, the size and complexity of the system, as well as the number of environmental degrees of freedom, can significantly impact these considerations [137; 153; 62; 14]. Furthermore, the simulations on NISQ devices also require robust error mitigation/correction and software infrastructure [41; 68]. In the context of these multifaceted considerations, it becomes imperative to direct concerted attention towards model Hamiltonian selection, quantum algorithms, and efficient preprocessing and post-processing steps. These areas form the crucial foundation that directly impacts the efficiency, accuracy, and practical applicability of the simulations.
Regarding the choice of model Hamiltonian, it determines the level of abstraction and the trade-off between computational efficiency and the accuracy of the representation of the physical system [27; 135]. A suitable model can capture essential physics while still being amenable to efficient simulation on quantum hardware. For example, decoherence or measurement is often said to cause a system to become entangled with its environment [154]. Such entanglement can often be studied through the simple spin-boson model [97], where the system-environment interaction is abstracted through a finite number of qubits interacting with a collection of harmonic oscillators. Here, the model Hamiltonians are able to describe the ultra-strong coupling regime that has been investigated experimentally in many scenarios, such as circuit QED [24; 56; 57; 96; 115; 160], trapped ions [105], photonic [40] and semiconductor quantum systems [146; 69]. When dealing with larger quantum systems, such as many-fermion systems coupled with the environment, as often encountered in quantum chemistry and condensed matter physics, the complexity of the systems requires more convoluted model Hamiltonians featuring explicit system-environment interactions for physically important theories [64]. For example, electron-phonon interactions [4; 65] in the context of nonrelativistic quantum field theory are usually characterized through a fermion-boson interacting term in the model Hamiltonians. This is because phonons, the most common bosonic excitations in solids, can usually interact with electrons to significantly renormalize the electrical and transport properties of materials or lead to dramatic effects, such as superconductivity or Jahn-Teller distortions [107; 52]. Similar Hamiltonians with explicit system-environment treatment can also be employed to address the interaction of electrons with other bosonic collective excitations in solids, such as spin, orbital, and charge.
In addition to model selection, the choice of quantum algorithms, together with efficient pre-/post-processing techniques, is also crucial to harness the full potential of quantum computing in simulating open quantum systems [114]. Efficient algorithms should lead to significant speed-ups over classical approaches, especially for problems involving strong system-environment interactions. For closed quantum systems, quantum algorithms such as the Quantum Phase Estimation (QPE) [84] and the Variational Quantum Eigensolver (VQE) [122], as well as their numerous variants, have often been applied to study quantum systems and estimate their properties, including energy levels and ground states. However, the development of algorithms specifically tailored to handle strong system-environment interactions in open quantum systems remains an active area of research (see Refs. [86; 100; 130; 161; 48; 71; 76] for some recent developments in NISQ algorithms and Ref. [22] for a recent review, and see Refs. [33; 37] and papers citing and cited by these works for quantum simulation algorithms for scalable quantum computers). Generally speaking, the algorithms being developed in this direction should take into account the unique characteristics of open quantum systems, such as their non-Markovian dynamics [25] and the interplay between decoherence and entanglement [135].
As can be seen, when considering explicit system-environment interactions, bosonic modes or harmonic oscillators in different forms are typically employed. However,
Figure 1: System–environment interactions in an open quantum system comprising a quantum system in a complex environment can pose a challenging problem.
encoding bosonic modes is quite different from encoding fermions [111]. Direct encoding of bosonic modes requires a large number of qubits, especially in intermediate and strong boson-fermion coupling regimes where encoding the Hamiltonian becomes more convoluted and resource-demanding than pure fermionic ones [12]. Furthermore, these challenges directly hinder a formal error analysis of quantum simulations of these model Hamiltonians [145]. Remarkably, despite recent advances in quantum hardware technology ushering in a new era of computing science [127], a comprehensive yet efficient methodology is still lacking in this field. This gap serves as the primary motivation for this paper, aiming to provide a relevant recipe in the rapidly evolving field of quantum computing.
This paper is organized as follows. In the 'Qubit Mappings and Quantum Simulation Techniques' section, we describe existing methods for mapping boson operators onto qubit systems and discuss techniques customized for the quantum simulation of various types of boson-related Hamiltonians. In the 'Effective Hamiltonian Construction' section, we present a detailed explanation of constructing the effective Hamiltonian through unitary or non-unitary flows, and discuss the design of new hybrid quantum algorithms that approach the ground state and construct the effective Hamiltonian of a bosonic quantum system. In the 'Error Analysis of Truncating Bosonic Mode' section, we focus on the error analysis of truncating bosonic modes and present mathematical derivations and results for a class of fermion-boson interacting Hamiltonians. In the 'Conclusion' section, we summarize the main findings and provide an outlook.
## II Qubit Mappings and Quantum Simulation Techniques
Consider an \(N\)-site open quantum system with each site interacting with a bosonic mode. For each bosonic mode, since there is no limit to the number of bosons, one will need to select a finite upper limit on the number of bosons, \(N_{b}\), for every site in the practical simulation, i.e., site \(i\) can be occupied by \(n_{i}\) (\(0\leq n_{i}\leq N_{b}\)) bosons. We use \(b_{i}^{\dagger}\) and \(b_{i}\) to denote the bosonic creation and annihilation operators at site \(i\), and \(\hat{n}_{i}=b_{i}^{\dagger}b_{i}\) denotes the number operator. The bosonic commutation relations (in an infinite-dimensional Hilbert space) are
\[[b_{i}^{\dagger},b_{j}^{\dagger}]=0,\ \ [b_{i},b_{j}]=0,\ \ [b_{i},b_{j}^{ \dagger}]=\delta_{ij}. \tag{1}\]
The bosonic state with \(n_{i}\) (\(0\leq n_{i}\leq N_{b}\)) bosons at the \(i\)-th site is represented as a natural tensor product structure
\[|n_{1},\cdots,n_{i},\cdots,n_{N}\rangle=|n_{1}\rangle\otimes\cdots\otimes|n_{ i}\rangle\otimes\cdots\otimes|n_{N}\rangle. \tag{2}\]
When \(b_{i}^{\dagger}\), \(b_{i}\), and \(\hat{n}_{i}=b_{i}^{\dagger}b_{i}\) are acting on the bosonic state, we have
\[b_{i}^{\dagger}|n_{1},\cdots,n_{i},\cdots,n_{N}\rangle = \sqrt{n_{i}+1}\ \ \ |n_{1},\cdots,n_{i}+1,\cdots,n_{N}\rangle,\] \[b_{i}|n_{1},\cdots,n_{i},\cdots,n_{N}\rangle = \sqrt{n_{i}}\ \ |n_{1},\cdots,n_{i}-1,\cdots,n_{N}\rangle,\] \[\hat{n}_{i}|n_{1},\cdots,n_{i},\cdots,n_{N}\rangle = n_{i}\ \ \ |n_{1},\cdots,n_{i},\cdots,n_{N}\rangle. \tag{3}\]
Similar to (2), we can write the tensor product representations of \(b_{i}^{\dagger}\), \(b_{i}\) and \(\hat{n}_{i}\),
\[b_{i}^{\dagger} = {\bf I}_{1}\otimes\cdots\otimes\tilde{b}_{i}^{\dagger}\otimes \cdots\otimes{\bf I}_{N},\] \[b_{i} = {\bf I}_{1}\otimes\cdots\otimes\tilde{b}_{i}\otimes\cdots\otimes {\bf I}_{N},\] \[\hat{n}_{i} = {\bf I}_{1}\otimes\cdots\otimes\tilde{n}_{i}\otimes\cdots\otimes {\bf I}_{N}, \tag{4}\]
where \({\bf I}_{i}\) represents an identity operation at qubit \(i\), and \(\tilde{b}_{i}^{\dagger}\), \(\tilde{b}_{i}\), and \(\tilde{n}_{i}\) satisfy
\[\tilde{b}_{i}^{\dagger}\ |n_{i}\rangle = \sqrt{n_{i}+1}\ |n_{i}+1\rangle,\] \[\tilde{b}_{i}\ |n_{i}\rangle = \sqrt{n_{i}}\ \ \ |n_{i}-1\rangle,\] \[\tilde{n}_{i}\ |n_{i}\rangle = n_{i}\ \ \ |n_{i}\rangle. \tag{5}\]
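Relations (1)-(5) are straightforward to verify numerically with truncated matrices. A minimal numpy sketch (ours, not code from this paper), which also shows how truncating at \(N_{b}\) bosons breaks \([b,b^{\dagger}]=1\) in the highest Fock state:

```python
import numpy as np

def ladder_ops(n_max):
    """Truncated bosonic operators on Fock states |0>,...,|n_max>, per Eq. (5)."""
    dim = n_max + 1
    b = np.zeros((dim, dim))
    for n in range(1, dim):
        b[n - 1, n] = np.sqrt(n)   # b|n> = sqrt(n)|n-1>
    return b, b.T                  # real entries, so dagger = transpose

b, bd = ladder_ops(3)
print(np.diag(bd @ b))             # number operator: [0. 1. 2. 3.]
print(np.diag(b @ bd - bd @ b))    # [1. 1. 1. -3.]: the truncation artifact
```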
Bosonic operators are usually employed to re-express the **Harmonic Oscillators**. Take the Hamiltonian of the linear harmonic oscillator as an example
\[\hat{H}=\frac{\hat{p}^{2}}{2m}+\frac{1}{2}m\omega^{2}\hat{q}^{2}, \tag{6}\]
where \(\hat{q}\) and \(\hat{p}\) are the coordinate and momentum Hermitian operators satisfying the canonical commutation relation
\[[\hat{q},\hat{p}]=i\hbar. \tag{7}\]
The \(\hat{q}\) and \(\hat{p}\) operators are mapped to bosonic operators through
\[b^{\dagger} = \left(\ \sqrt{\frac{m\omega}{2\hbar}}\ \ \frac{-i}{\sqrt{2m\omega \hbar}}\ \right)\left(\ \hat{q}\atop\hat{p}\right),\] \[b = \left(\ \sqrt{\frac{m\omega}{2\hbar}}\ \ \frac{i}{\sqrt{2m\omega \hbar}}\ \right)\left(\ \hat{q}\atop\hat{p}\right), \tag{8}\]
or equivalently
\[\hat{q}=\sqrt{\frac{\hbar}{2m\omega}}(b+b^{\dagger}),\ \ \hat{p}=\sqrt{\frac{m\omega\hbar}{2}}\frac{(b-b^{\dagger})}{i}, \tag{9}\]
from which the Hamiltonian takes a simple diagonal form
\[\hat{H}=\hbar\omega(b^{\dagger}b+\frac{1}{2}). \tag{10}\]
One can straightforwardly extend the above mapping to the case of many harmonic oscillators. Consider a system of \(N_{h}\) coupled linear harmonic oscillators with masses \(M_{j}\), coordinates \({\bf Q}=\{Q_{j},j=1,\cdots,N_{h}\}\), and momenta \({\bf P}=\{P_{j},j=1,\cdots,N_{h}\}\) satisfying the commutation relations
\[\left\{\begin{array}{l}[Q_{j},Q_{k}]=[P_{j},P_{k}]=0,\\ \ [Q_{j},P_{k}]=i\hbar\delta_{jk}\end{array}\right.,\ \ j,k\in[1,\cdots,N_{h}]. \tag{11}\]
The Hamiltonian then reads
\[\hat{H}=\sum_{j=1}^{N_{h}}\frac{P_{j}^{2}}{2M_{j}}+\frac{1}{2}\sum_{j,k=1}^{N _{h}}V_{jk}Q_{j}Q_{k} \tag{12}\]
with matrix \({\bf V}\) a positive semidefinite Hermitian matrix. Construct a matrix \({\bf v}\) with elements
\[v_{jk}=V_{jk}/\sqrt{M_{j}M_{k}}, \tag{13}\]
and two vectors
\[\mathbf{p}=\{p_{j},j=1,\cdots,N_{h}\},\ \ \mathbf{q}=\{q_{j},j=1,\cdots,N_{h}\} \tag{14}\]
with elements
\[q_{j}=\sqrt{M_{j}}Q_{j},\ \ p_{j}=P_{j}/\sqrt{M_{j}},\ \ \text{s.t.}\ \ [q_{j},p_{k}]=i \hbar\delta_{jk}, \tag{15}\]
the Hamiltonian can then be rewritten as
\[\hat{H}=\frac{1}{2}\mathbf{p}^{T}\mathbf{p}+\frac{1}{2}\mathbf{q}^{T}\mathbf{ v}\mathbf{q}. \tag{16}\]
Since the matrix \(\mathbf{V}\) is Hermitian and positive semidefinite, so is the matrix \(\mathbf{v}\); by the finite-dimensional spectral theorem, we can therefore always find a unitary matrix that diagonalizes \(\mathbf{v}\) with non-negative eigenvalues
\[\mathbf{v}=\mathbf{U}\Omega\mathbf{U}^{\dagger}, \tag{17}\]
where \(\Omega\) is a diagonal matrix with the non-negative diagonal elements \(\{\omega_{j}^{2}\geq 0,j=1,\cdots,N_{h}\}\). The unitary matrix when acting on \(\mathbf{q}\) and \(\mathbf{p}\) generates the **normal mode** (\(\mathbf{\tilde{q}}\) and \(\mathbf{\tilde{p}}\))
\[\mathbf{\tilde{q}}=\mathbf{U}^{\dagger}\mathbf{q},\ \ \mathbf{\tilde{p}}=\mathbf{U}^{\dagger}\mathbf{p} \tag{18}\]
that (i) preserves the commutation relations (11), and (ii) re-expresses the Hamiltonian as
\[\hat{H}=\frac{1}{2}\left(\mathbf{\tilde{p}}^{T}\mathbf{\tilde{p}}+\mathbf{ \tilde{q}}^{T}\Omega\mathbf{\tilde{q}}\right). \tag{19}\]
Now we can map from the normal modes to bosonic operators similar to (8,9)
\[b_{j}^{\dagger}=\left(\begin{array}{cc}\sqrt{\frac{\omega_{j}}{2 \hbar}}&\frac{-i}{\sqrt{2\omega_{j}\hbar}}\end{array}\right)\left(\begin{array} []{c}\tilde{q}_{j}\\ \tilde{p}_{j}\end{array}\right), \tag{20}\] \[b_{j}=\left(\begin{array}{cc}\sqrt{\frac{\omega_{j}}{2\hbar}}& \frac{i}{\sqrt{2\omega_{j}\hbar}}\end{array}\right)\left(\begin{array}{c} \tilde{q}_{j}\\ \tilde{p}_{j}\end{array}\right),\] (21) \[\tilde{q}_{j}=\sqrt{\frac{\hbar}{2\omega_{j}}}(b_{j}+b_{j}^{\dagger}),\] (22) \[\tilde{p}_{j}=\sqrt{\frac{\omega_{j}\hbar}{2}}\frac{(b_{j}-b_{j}^{\dagger })}{i}, \tag{23}\]
and obtain the Hamiltonian in normal mode
\[\hat{H}=\sum_{j=1}^{N_{h}}\hbar\omega_{j}\left(b_{j}^{\dagger}b_{j}+\frac{1}{ 2}\right). \tag{24}\]
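The chain of transformations (13)-(24) amounts, on the classical side, to a single symmetric eigendecomposition. A small illustrative sketch (the masses and couplings below are made up for the example):

```python
import numpy as np

M = np.array([1.0, 1.0, 2.0])                  # masses M_j (illustrative)
V = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])               # positive-semidefinite V_jk
v = V / np.sqrt(np.outer(M, M))                # Eq. (13): v_jk = V_jk/sqrt(M_j M_k)
omega_sq, U = np.linalg.eigh(v)                # Eq. (17): v = U diag(w^2) U^T
omega = np.sqrt(np.clip(omega_sq, 0.0, None))  # normal-mode frequencies w_j
print(omega)                                   # enter Eq. (24): H = sum_j w_j (n_j + 1/2)
```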
### Boson-to-Qubit mapping
A typical boson-to-qubit mapping is the direct **one-to-one mapping** (also known as the unary mapping) studied by previous work such as Ref. [142], which maps a boson number state \(|n_{i}\rangle\ (0\leq n_{i}\leq N_{b})\) to a tensor representation employing \(N_{b}+1\) qubits,
\[|n_{i}\rangle \leftrightarrow|0_{0}\cdots 0_{n_{i}-1}1_{n_{i}}0_{n_{i}+1}\cdots 0_{N_{b}}\rangle\] \[=|0\rangle_{0}\otimes\cdots\otimes|0\rangle_{n_{i}-1}\otimes|1 \rangle_{n_{i}}\otimes|0\rangle_{n_{i}+1}\otimes\cdots\otimes|0\rangle_{N_{b}}, \tag{25}\]
where
\[|0\rangle_{j}=\left(\begin{array}{c}1\\ 0\end{array}\right),\ \ |1\rangle_{j}=\left(\begin{array}{c}0\\ 1\end{array}\right), \tag{26}\]
are computation basis states of qubit \(j\). From (4), (5), and (25), we can then represent \(\mathbf{I}_{i}\), \(\tilde{b}_{i}^{\dagger}\), \(\tilde{b}_{i}\), and \(\tilde{n}_{i}\) using a similar tensor structure,
\[\mathbf{I}_{i} =I_{0}\otimes I_{1}\otimes\cdots\otimes I_{N_{b}},\] \[\tilde{b}_{i}^{\dagger} =\sum_{n=0}^{N_{b}-1}\sqrt{n+1}\ I_{0}\otimes\cdots\otimes(+)_{n} \otimes(-)_{n+1}\otimes\cdots\otimes I_{N_{b}},\] \[\tilde{b}_{i} =\sum_{n=1}^{N_{b}}\sqrt{n}\ I_{0}\otimes\cdots\otimes(-)_{n-1} \otimes(+)_{n}\otimes\cdots\otimes I_{N_{b}},\] \[\tilde{n}_{i} =\tilde{b}_{i}^{\dagger}\tilde{b}_{i}. \tag{27}\]
where \((\pm)_{j}\) are ladder operator (on qubit \(j\)) defined from the Pauli matrices
\[(+) =\frac{1}{2}(X+iY)=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right),\] \[(-) =\frac{1}{2}(X-iY)=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right), \tag{28}\]
with the Pauli matrices (on qubit \(j\))
\[X=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\ \ Y=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\ \ Z=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right). \tag{29}\]
The Pauli ladder operators satisfy the following relations
\[(+)|1\rangle =|0\rangle,\ \ (-)|1\rangle=0,\] \[(+)|0\rangle =0,\ \ (-)|0\rangle=|1\rangle. \tag{30}\]
It is worth mentioning that the above approach uses \(N_{b}+1\) qubits to represent the \(N_{b}+1\) boson number states at one site. Nevertheless, the computational basis of \(N_{b}\) (\(>1\)) qubits already explores a space of dimension \(2^{N_{b}}\). In other words, to encode \(N_{b}\) boson particle states and one vacuum state at each site, one would in principle only need \(\lceil\log_{2}(N_{b}+1)\rceil\) qubits. To achieve this, we can choose a **binary mapping** (studied in Ref. [151], with its bit-swapping-efficient version, e.g. Gray code, discussed in Ref. [133]) to represent bosonic states (at a given site \(i\))
\[|n_{i}\rangle \leftrightarrow|\underbrace{011\cdots 101}_{\text{binary rep. of $n_{i}$}}\rangle\] \[=|0\rangle_{1}\otimes|1\rangle_{2}\otimes|1\rangle_{3}\otimes\cdots\] \[\otimes|1\rangle_{N_{q}-2}\otimes|0\rangle_{N_{q}-1}\otimes|1 \rangle_{N_{q}}. \tag{31}\]
Using the above convention we can write
\[|0\rangle \leftrightarrow|0_{1}\rangle\otimes\cdots|0_{N_{q}-2}\rangle \otimes|0_{N_{q}-1}\rangle\otimes|0_{N_{q}}\rangle\] \[|1\rangle \leftrightarrow|0_{1}\rangle\otimes\cdots|0_{N_{q}-2}\rangle \otimes|0_{N_{q}-1}\rangle\otimes|1_{N_{q}}\rangle\] \[|2\rangle \leftrightarrow|0_{1}\rangle\otimes\cdots|0_{N_{q}-2}\rangle \otimes|1_{N_{q}-1}\rangle\otimes|0_{N_{q}}\rangle\] \[|3\rangle \leftrightarrow|0_{1}\rangle\otimes\cdots|0_{N_{q}-2}\rangle \otimes|1_{N_{q}-1}\rangle\otimes|1_{N_{q}}\rangle\] \[\vdots\] \[|2^{N_{q}}-1\rangle \leftrightarrow|1_{1}\rangle\otimes\cdots|1_{N_{q}-2}\rangle \otimes|1_{N_{q}-1}\rangle\otimes|1_{N_{q}}\rangle. \tag{32}\]
Accordingly, \(\mathbf{I}_{i}\), \(\tilde{b}_{i}^{\dagger}\), \(\tilde{b}_{i}\), and \(\tilde{n}_{i}\) can be represented in a \(2^{N_{q}}\times 2^{N_{q}}\) dimensional matrix form that can also be rewritten in a tensor product form, i.e.,
\[\mathbf{I}_{i} =\left(\begin{array}{cccc}1&0&0&\cdots&0\\ 0&1&0&\cdots&0\\ 0&0&1&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&1\end{array}\right)\] \[=I_{1}\otimes I_{2}\otimes\cdots\otimes I_{N_{q}}, \tag{33}\] \[\tilde{b}_{i}^{\dagger} =\left(\begin{array}{cccccc}0&0&0&\cdots&0&0\\ 1&0&0&\cdots&0&0\\ 0&\sqrt{2}&0&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&\sqrt{2^{N_{q}}-1}&0\end{array}\right)\] \[=\sum_{n=0}^{2^{N_{q}}-2}\sqrt{n+1}\;|n+1\rangle\langle n|,\] (34) \[\tilde{b}_{i} =\left(\begin{array}{cccccc}0&1&0&\cdots&0\\ 0&0&\sqrt{2}&\cdots&0\\ 0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&\sqrt{2^{N_{q}}-1}\\ 0&0&0&\cdots&0\end{array}\right)\] \[=\sum_{n=1}^{2^{N_{q}}-1}\sqrt{n}\;|n-1\rangle\langle n|,\] (35) \[\tilde{n}_{i} =\left(\begin{array}{cccccc}0&0&0&\cdots&0\\ 0&1&0&\cdots&0\\ 0&0&2&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&2^{N_{q}}-1\end{array}\right)\] \[=\sum_{n=0}^{2^{N_{q}}-1}n\;|n\rangle\langle n|, \tag{36}\]
where the outer products \(|\cdot\rangle\langle\cdot|\) can be expressed as a tensor product form similar to (32) with the operation on each qubit being one of the following four,
\[|0\rangle\langle 0| =\frac{1}{2}\big{(}I+Z),\;|0\rangle\langle 1|=(+),\] \[|1\rangle\langle 0| =(-),\;|1\rangle\langle 1|=\frac{1}{2}\big{(}I-Z\big{)}. \tag{37}\]
**Representing Hamiltonians** Now using the boson-to-qubit mappings introduced above, we can map some model Hamiltonians that are commonly used in quantum computation to qubit systems.
#### ii.1.1 Boson-Preserving Hamiltonian
Consider the following general Bose-Hubbard model
\[\hat{H}=\sum_{i=1}^{N} \left(-\mu\hat{n}_{i}+\frac{U}{2}\hat{n}_{i}(\hat{n}_{i}-1)-t\sum _{j>i}^{N}(b_{i}^{\dagger}b_{j}+b_{i}b_{j}^{\dagger})\right.\] \[\left.+\,V\sum_{j>i}^{N}\hat{n}_{i}\hat{n}_{j}\right), \tag{38}\]
where the scalars \(t\), \(U\), \(V\) and \(\mu\) are the hopping amplitude, on-site interaction, non-local interaction, and chemical potential. \(b_{i}^{\dagger}\), \(b_{i}\), and \(\hat{n}_{i}\) have been defined above. In the following, we will discuss the representations of the Hamiltonian of a two-site system (with at most one boson per site, i.e. \(N=2\) and \(N_{b}=1\)) employing each of the two boson-to-qubit mappings in turn.
The _one-to-one mapping_ requires \(N\times(N_{b}+1)=2\times(1+1)=4\) qubits in total. Specifically, we have
\[b_{1}^{\dagger} =\tilde{b}_{1}^{\dagger}\otimes\mathbf{I}_{2},\;\;b_{1}=\tilde{b} _{1}\otimes\mathbf{I}_{2},\] \[b_{2}^{\dagger} =\mathbf{I}_{1}\otimes\tilde{b}_{2}^{\dagger},\;\;b_{2}=\mathbf{I }_{1}\otimes\tilde{b}_{2}, \tag{39}\] \[\hat{n}_{1} =\tilde{n}_{1}\otimes\mathbf{I}_{2},\;\;\hat{n}_{2}=\mathbf{I }_{1}\otimes\tilde{n}_{2},\]
where
\[\tilde{b}_{1/2}^{\dagger} =(+)\otimes(-)=|0\rangle\langle 1|\otimes|1\rangle\langle 0|,\] \[\tilde{b}_{1/2} =(-)\otimes(+)=|1\rangle\langle 0|\otimes|0\rangle\langle 1|, \tag{40}\]
and
\[\tilde{n}_{1/2} =\tilde{b}_{1/2}^{\dagger}\tilde{b}_{1/2}=\frac{1}{4}(I+Z) \otimes(I-Z)\] \[=|0\rangle\langle 0|\otimes|1\rangle\langle 1|. \tag{41}\]
Then the Hamiltonian can be re-written as
\[\hat{H} =\alpha_{1}\;\tilde{n}_{1}\otimes\mathbf{I}_{2}+\alpha_{2}\; \mathbf{I}_{1}\otimes\tilde{n}_{2}+\beta_{12}\;(\tilde{b}_{1}^{\dagger}\otimes \tilde{b}_{2}+\tilde{b}_{1}\otimes\tilde{b}_{2}^{\dagger})\] \[\quad+\gamma_{12}\;\tilde{n}_{1}\otimes\tilde{n}_{2} \tag{42}\]
where \(\alpha_{1}=\alpha_{2}=-\mu\), \(\beta_{12}=-t\), and \(\gamma_{12}=V\) (the on-site term \(\frac{U}{2}\hat{n}_{i}(\hat{n}_{i}-1)\) vanishes identically for \(N_{b}=1\)), with
\[\tilde{n}_{1}\otimes\mathbf{I}_{2} =\frac{1}{4}\Big{(}IIII+ZIII-IZII-ZZII\Big{)}\] \[\mathbf{I}_{1}\otimes\tilde{n}_{2} =\frac{1}{4}\Big{(}IIII+IIZI-IIIZ-IIZZ\Big{)}\] \[\tilde{n}_{1}\otimes\tilde{n}_{2} =\frac{1}{16}\Big{(}IIII+ZIII-IZII-ZZII\] \[\quad+IIZI+ZIZI-IZZI-ZZZI\] \[\quad-IIIZ-ZIIZ+IZIZ+ZZIZ\] \[\quad-IIZZ-ZIZZ+IZZZ+ZZZZ\Big{)} \tag{43}\]
and
\[\tilde{b}_{1}^{\dagger}\otimes\tilde{b}_{2}+\tilde{b}_{1}\otimes \tilde{b}_{2}^{\dagger}\] \[\quad=\frac{1}{8}\Big{(}XXXX+XXYY+YYXX+YYYY\] \[\qquad\qquad+XYXY-XYYX-YXXY+YXYX\Big{)} \tag{44}\]
with the symbols denoting the tensor products of Pauli matrices, e.g., \(XXXX\) represents \(X_{0}\otimes X_{1}\otimes X_{2}\otimes X_{3}\).
The _binary mapping_ instead requires 2 qubits in total. Specifically, according to Eqs. (33)-(36), we can re-define \(\tilde{b}^{\dagger}\), \(\tilde{b}\), and \(\tilde{n}\) as
\[\tilde{b}_{1/2}^{\dagger} =(-)=|1\rangle\langle 0|,\ \ \tilde{b}_{1/2}=(+)=|0\rangle\langle 1|,\] \[\tilde{n}_{1/2} =\frac{1}{2}(I-Z)=|1\rangle\langle 1|. \tag{45}\]
Then the Hamiltonian components of (42) can be re-written as
\[\tilde{n}_{1}\otimes I_{2} =\frac{1}{2}\Big{(}II-ZI\Big{)},\ \ I_{1}\otimes\tilde{n}_{2}=\frac{1}{2}\Big{(}II-IZ \Big{)}\] \[\tilde{n}_{1}\otimes\tilde{n}_{2} =\frac{1}{4}\Big{(}II-IZ-ZI+ZZ\Big{)}\] \[\tilde{b}_{1}^{\dagger}\otimes\tilde{b}_{2}+\tilde{b}_{1}\otimes \tilde{b}_{2}^{\dagger} =\frac{1}{2}\Big{(}XX+YY\Big{)} \tag{46}\]
Using the Hamiltonian encoding, we can start to study the boson dynamics under the Hamiltonian, e.g. the quantum walks of indistinguishable bosons on 1D optical lattice that depends on the (absolute) strength of the on-site interaction \(U\) (see Figure 2).
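As a concrete check of the binary-mapped two-site Hamiltonian (42), (46), the following sketch assembles the \(4\times 4\) matrix and evolves a one-boson initial state (the parameter values are illustrative only; the on-site \(U\) term drops out for \(N_{b}=1\)):

```python
import numpy as np
from scipy.linalg import expm

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])

# Binary-mapped operators of Eq. (46) on two qubits (one per site):
n1 = 0.5 * (np.kron(I, I) - np.kron(Z, I))
n2 = 0.5 * (np.kron(I, I) - np.kron(I, Z))
hop = 0.5 * (np.kron(X, X) + np.kron(Y, Y))

mu, t, V = 0.5, 1.0, 0.25                      # illustrative parameters
H = -mu * (n1 + n2) - t * hop + V * (n1 @ n2)  # Eq. (42) with alpha=-mu, beta=-t

psi0 = np.zeros(4); psi0[2] = 1.0              # |10>: one boson on site 1
psi_t = expm(-1j * H * 1.0) @ psi0             # evolve for t = 1 (a.u.)
print(np.abs(psi_t) ** 2)                      # occupation probabilities
```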
#### ii.1.2 Spin-Boson Hamiltonian
The simplest spin-boson model is a two-level system (TLS) coupled to a bath of harmonic oscillators referred to as a boson field. The full Hamiltonian is
\[\hat{H}=\hat{H}_{0}+\hat{H}_{SB} \tag{47}\]
where \(\hat{H}_{0}\) is the zero-order Hamiltonian describing the separated subsystems
\[\hat{H}_{0}=\Delta X+\frac{1}{2}\epsilon Z+\sum_{i}\omega_{i}\hat{n}_{i} \tag{48}\]
with \(\Delta\) the bare tunneling amplitude between the two levels, \(\epsilon\) the bias, and \(\omega_{i}\) the frequencies of the oscillators.
\[\hat{H}_{SB}=\frac{1}{2}X\sum_{i}g_{i}\omega_{i}(b_{i}+b_{i}^{\dagger}) \tag{49}\]
is the coupling between the TLS and the bath of harmonic oscillators with \(g_{i}\) the coupling strength. The effect of the oscillator bath is completely determined by the spectral function \(J(\omega)\), which in the ohmic case is given by
\[J(\omega)=\frac{\pi}{2}\sum_{i}c_{i}^{2}\delta(\omega-\omega_{i}) \tag{50}\]
with \(c_{i}\sim g_{i}\omega_{i}\). For simplicity of the discussion, we consider simulating the spin dynamics of a spin-boson model with one spin coupled to a single bosonic mode truncated at three bosons (\(N_{b}=3\)), and set \(\omega=2\), \(\epsilon=2\) and \(\Delta=1\). The Hamiltonian then simplifies to
\[\hat{H}=X+Z+2\hat{n}+gX(b^{\dagger}+b). \tag{51}\]
If the binary mapping for the bosons is used on two qubits, then
\[|0\rangle=|00\rangle,|1\rangle=|01\rangle,|2\rangle=|10\rangle,|3 \rangle=|11\rangle, \tag{52}\]
and
\[b^{\dagger} =|1\rangle\langle 0|+\sqrt{2}|2\rangle\langle 1|+\sqrt{3}|3 \rangle\langle 2|,\] \[=|01\rangle\langle 00|+\sqrt{2}|10\rangle\langle 01|+\sqrt{3}|11 \rangle\langle 10|,\] \[=|0\rangle\langle 0|\otimes|1\rangle\langle 0|+\sqrt{2}|1\rangle \langle 0|\otimes|0\rangle\langle 1|+\sqrt{3}|1\rangle\langle 1|\otimes|1 \rangle\langle 0|\] \[=\frac{1}{4}(I+Z)\otimes(X-iY)+\frac{\sqrt{2}}{4}(X-iY)\otimes(X+iY)\] \[\quad+\frac{\sqrt{3}}{4}(I-Z)\otimes(X-iY). \tag{53}\]
where (37) is employed. Similarly,
\[b =|0\rangle\langle 1|+\sqrt{2}|1\rangle\langle 2|+\sqrt{3}|2 \rangle\langle 3|,\] \[=\frac{1}{4}(I+Z)\otimes(X+iY)+\frac{\sqrt{2}}{4}(X+iY)\otimes(X-iY)\] \[\quad+\frac{\sqrt{3}}{4}(I-Z)\otimes(X+iY), \tag{54}\] \[\tilde{n} =|1\rangle\langle 1|+2|2\rangle\langle 2|+3|3\rangle\langle 3|\] \[=\frac{1}{4}(I+Z)\otimes(I-Z)+\frac{2}{4}(I-Z)\otimes(I+Z)\] \[\quad+\frac{3}{4}(I-Z)\otimes(I-Z)\] \[=\frac{1}{2}\big{(}3II-IZ-2ZI\big{)}. \tag{55}\]
Figure 2: Two-particle correlation of the quantum walkers of two indistinguishable bosons, \(\Gamma_{p,q}(t)=\langle\phi_{b}(t)|b_{p}^{\dagger}b_{q}^{\dagger}b_{q}b_{p}| \phi_{b}(t)\rangle\), on a 1D optical lattice with five sites. The corresponding density distribution, \(\langle n_{p}\rangle=\langle\phi_{b}(t)|b_{p}^{\dagger}b_{p}|\phi_{b}(t)\rangle\), is shown at the bottom of each correlation plot. The Hamiltonian is given in (38) with \(N=5\), \(\mu=-0.5U\), \(t=1\), and \(V=0\). \(|\phi_{b}(t)\rangle\) evolves from the initial state \(|\phi_{b}(0)\rangle=b_{2}^{\dagger}b_{4}^{\dagger}|vac\rangle\) through the propagator \(\exp(-i\hat{H}t)\) over the time \(t=0.003\) a.u. (\(\Delta t=1.0\times 10^{-5}\) a.u.), plotted against the positions \(p\) and \(q\) of the two bosons for different on-site interactions \(U=1\) and \(U=100\).
Now the Hamiltonian (51) can be written as the linear combination of Pauli strings,
\[\hat{H}= XII+ZII+3III-IIZ-2IZI+\frac{g}{2}\bigg{(}(1+\sqrt{3})XIX\] \[+(1-\sqrt{3})XZX+\sqrt{2}XXX+\sqrt{2}XYY\bigg{)} \tag{56}\]
which can be mapped to three qubits (the first one is for the spin and the other two are for the bosons). Employing this Hamiltonian representation, we can then simulate the boson and spin dynamics of this model system. As shown in Figure 3 and Appendix A, by employing a Lindblad formula, the simulation can be done in different coupling regimes with and without external conditions.
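The Pauli-string form (56) can be validated against a direct construction of (51) with the four-level truncated boson; a short sketch with an illustrative coupling \(g\):

```python
import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
P = lambda s: reduce(np.kron, [{'I': I, 'X': X, 'Y': Y, 'Z': Z}[c] for c in s])

g, s2, s3 = 0.3, np.sqrt(2), np.sqrt(3)        # g is illustrative
H_pauli = (P('XII') + P('ZII') + 3 * P('III') - P('IIZ') - 2 * P('IZI')
           + 0.5 * g * ((1 + s3) * P('XIX') + (1 - s3) * P('XZX')
                        + s2 * P('XXX') + s2 * P('XYY')))   # Eq. (56)

b = np.diag(np.sqrt(np.arange(1.0, 4.0)), k=1)              # truncated b, Eq. (52) ordering
H_direct = (np.kron(X, np.eye(4)) + np.kron(Z, np.eye(4))
            + 2 * np.kron(I, b.T @ b) + g * np.kron(X, b + b.T))  # Eq. (51)
print(np.allclose(H_pauli, H_direct))                        # True
```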
#### ii.1.3 Boson-Fermion Hamiltonian
In this section, we focus on one of the simplest models that feature the boson-fermion interaction, the Holstein model. Nevertheless, the same procedure can also be applied to the more general Hubbard-Holstein model by including on-site interactions, or to molecules by replacing the quadratic fermionic parts with one-electron and two-electron molecular Hamiltonians. The one-dimensional version of the Holstein model reads
\[\hat{H}=-\sum_{\langle i,j\rangle}vf_{i}^{\dagger}f_{j}+\sum_{i}\omega b_{i}^ {\dagger}b_{i}+\sum_{i}g\omega f_{i}^{\dagger}f_{i}(b_{i}^{\dagger}+b_{i}), \tag{57}\]
with \(v\) the hopping coefficient between the nearest-neighbour pair \(\langle i,j\rangle\), \(\omega\) the vibration frequency, and \(g\) the coupling constant. For simplicity of demonstration, we choose a three-site Holstein model with periodic boundary conditions and \(N_{b}=v=\omega=1\), and only treat \(g\) as the variable. For the binary encoding, we need \(N+N\times\lceil\log_{2}(N_{b}+1)\rceil=6\) qubits in total (three fermionic and three bosonic). The Hamiltonian can then be re-written as
\[\hat{H}= -f_{1}^{\dagger}f_{2}-f_{2}^{\dagger}f_{1}-f_{2}^{\dagger}f_{3}- f_{3}^{\dagger}f_{2}-f_{3}^{\dagger}f_{1}-f_{1}^{\dagger}f_{3}\] \[+(b_{1}^{\dagger}b_{1}+b_{2}^{\dagger}b_{2}+b_{3}^{\dagger}b_{3}) \tag{58}\] \[+g\left[f_{1}^{\dagger}f_{1}(b_{1}^{\dagger}+b_{1})+f_{2}^{ \dagger}f_{2}(b_{2}^{\dagger}+b_{2})+f_{3}^{\dagger}f_{3}(b_{3}^{\dagger}+b_{3} )\right]\]
Since the bosons and fermions here are distinct particle species, the bosonic and fermionic operators commute,
\[[b,f]=[b,f^{\dagger}]=[b^{\dagger},f]=[b^{\dagger},f^{\dagger}]=0. \tag{59}\]
Therefore, we can use the first three qubits for fermion encoding and the remaining three qubits for boson encoding. The fermion operators can be mapped to qubit systems through the Jordan-Wigner transformation, i.e.
\[f_{1}^{\dagger}=(-)\otimes I\otimes I\otimes I\otimes I\otimes I,\] \[f_{1}=(+)\otimes I\otimes I\otimes I\otimes I\otimes I,\] \[f_{2}^{\dagger}=Z\otimes(-)\otimes I\otimes I\otimes I\otimes I,\] \[f_{2}=Z\otimes(+)\otimes I\otimes I\otimes I\otimes I,\] \[f_{3}^{\dagger}=Z\otimes Z\otimes(-)\otimes I\otimes I\otimes I,\] \[f_{3}=Z\otimes Z\otimes(+)\otimes I\otimes I\otimes I,\] \[f_{1}^{\dagger}f_{1}=\frac{I-Z}{2}\otimes I\otimes I\otimes I \otimes I\otimes I,\] \[f_{1}^{\dagger}f_{2}=(-)\otimes(+)\otimes I\otimes I\otimes I \otimes I,\] \[f_{1}^{\dagger}f_{3}=(-)\otimes Z\otimes(+)\otimes I\otimes I \otimes I,\] \[f_{2}^{\dagger}f_{2}=I\otimes\frac{I-Z}{2}\otimes I\otimes I \otimes I\otimes I,\] \[f_{2}^{\dagger}f_{1}=(+)\otimes(-)\otimes I\otimes I\otimes I \otimes I,\] \[f_{2}^{\dagger}f_{3}=I\otimes(-)\otimes(+)\otimes I\otimes I \otimes I,\] \[f_{3}^{\dagger}f_{3}=I\otimes I\otimes\frac{I-Z}{2}\otimes I \otimes I\otimes I,\] \[f_{3}^{\dagger}f_{2}=I\otimes(+)\otimes(-)\otimes I\otimes I \otimes I,\] \[f_{3}^{\dagger}f_{1}=(+)\otimes Z\otimes(-)\otimes I\otimes I \otimes I. \tag{60}\]
Here note the following simple Pauli relations
\[Z(+)=(+),\ \ (+)Z=-(+),\] \[(-)Z=(-),\ \ Z(-)=-(-), \tag{61}\]
and
\[XZ=-iY,\ \ ZX=iY,\ \ YZ=iX,\ \ ZY=-iX. \tag{62}\]
For the boson operators, using (45), we have
\[b_{1}^{\dagger}=I\otimes I\otimes I\otimes(-)\otimes I\otimes I,\] \[b_{1}=I\otimes I\otimes I\otimes(+)\otimes I\otimes I,\] \[b_{2}^{\dagger}=I\otimes I\otimes I\otimes I\otimes(-)\otimes I,\] \[b_{2}=I\otimes I\otimes I\otimes I\otimes(+)\otimes I,\] \[b_{3}^{\dagger}=I\otimes I\otimes I\otimes I\otimes(-),\] \[b_{3}=I\otimes I\otimes I\otimes I\otimes(+),\] \[b_{1}^{\dagger}b_{1}=I\otimes I\otimes I\otimes\frac{I-Z}{2} \otimes I\otimes I,\] \[b_{2}^{\dagger}b_{2}=I\otimes I\otimes I\otimes I\otimes\frac{I-Z}{ 2}\otimes I,\] \[b_{3}^{\dagger}b_{3}=I\otimes I\otimes I\otimes I\otimes\frac{I-Z}{ 2},\] \[b_{1}^{\dagger}+b_{1}=I\otimes I\otimes I\otimes X\otimes I,\] \[b_{2}^{\dagger}+b_{2}=I\otimes I\otimes I\otimes I\otimes X\otimes I,\] \[b_{3}^{\dagger}+b_{3}=I\otimes I\otimes I\otimes I\otimes I \otimes X. \tag{63}\]
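Putting (60) and (63) together, the full \(64\times 64\) qubit representation of (58) follows from Kronecker products. A sketch with an illustrative \(g\) (the helper names are ours, not from this paper); note that \(f_{i}^{\dagger}f_{i}\) and \(b_{i}^{\dagger}b_{i}\) are both \(\frac{1}{2}(I-Z)\) on a single qubit here:

```python
import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1.0, -1.0])
plus = np.array([[0.0, 1.0], [0.0, 0.0]]); minus = plus.T   # (+), (-) of Eq. (28)
op = lambda *fs: reduce(np.kron, fs)
n_q = 0.5 * (I - Z)          # single-qubit number operator (fermion or boson)

g = 0.8                      # illustrative coupling
hop = (op(minus, plus, I, I, I, I) + op(plus, minus, I, I, I, I)     # f1-f2
       + op(I, minus, plus, I, I, I) + op(I, plus, minus, I, I, I)   # f2-f3
       + op(plus, Z, minus, I, I, I) + op(minus, Z, plus, I, I, I))  # f3-f1
H = -hop
H += op(I, I, I, n_q, I, I) + op(I, I, I, I, n_q, I) + op(I, I, I, I, I, n_q)
H += g * (op(n_q, I, I, X, I, I) + op(I, n_q, I, I, X, I)
          + op(I, I, n_q, I, I, X))
print(np.allclose(H, H.T), H.shape)   # True (64, 64): Hermitian, 6 qubits
```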
### Prepare Initial Bosonic States
For an \(N\)-site quantum system with \(n_{i}\) (\(0\leq n_{i}\leq N_{b}\)) bosons at each site, its bosonic state can be written as
\[|\phi_{b}\rangle=\mathcal{N}(b_{1}^{\dagger})^{n_{1}}(b_{2}^{\dagger})^{n_{2}} \cdots(b_{N}^{\dagger})^{n_{N}}|\text{vac}\rangle, \tag{64}\]
where \(\mathcal{N}\) is a normalization constant, and
\[|\text{vac}\rangle=\underbrace{|0\rangle\otimes\cdots\otimes|0\rangle\otimes|0 \rangle}_{N\text{-tensor}} \tag{65}\]
with \(|0\rangle\) at site \(i\) being encoded according to the one-to-one mapping (25) or the binary mapping (31). Using the mapping (25), only \(N\) qubits (one per site) need to be flipped from the all-zero computational state to prepare \(|\phi_{b}\rangle\). For the mapping (31), at most \(N\times\lceil\log_{2}(N_{b}+1)\rceil\) qubits need to be flipped instead.
If the initial bosonic state assumes the superposition form
\[|\psi\rangle=\sum_{k=1}^{K}c_{k}|\phi_{b,k}\rangle, \tag{66}\]
where
\[\sum_{k=1}^{K}|c_{k}|^{2}=1,\ \ \langle\phi_{b,k}|\phi_{b,l}\rangle=\delta_{k,l},\ \ |\phi_{b,k}\rangle=U_{k}|\text{vac}\rangle, \tag{67}\]
with \(U_{k}\) representing a unitary operation, similar to (64), then we can use \(K\) ancillas to do the following state preparation procedures [142]
* Initialize the \(K\) ancillas in state \(|0\rangle\);
* Generate \(\sum_{k=1}^{K}c_{k}|k\rangle\). Here the state \(|k\rangle\) is an ancilla state with only the \(k\)-th qubit being \(|1\rangle\);
* Loop over the \(K\) ancillas, perform the controlled unitary operations, i.e., conditional on the \(k\)-th ancilla being \(|1\rangle\), apply \(U_{k}\) on the \(|\text{vac}\rangle\). This will generate \(\sum_{k=1}^{K}c_{k}|k\rangle\otimes|\phi_{b,k}\rangle\);
* Generate \(\frac{1}{\sqrt{K}}\sum_{k=1}^{K}c_{k}|0\rangle\otimes|\phi_{b,k}\rangle\).
The corresponding circuit structure is given in Figure 4. As can be seen, steps #2 and #4 in the above procedure are essentially identical, differing only by the single-qubit rotations and the selection of the subspace to project out. The single-qubit rotations can be determined from \(K\) and the \(c_{k}\)'s. To see this, take step #2 as an example; the state over \(|b_{1}\rangle\) and the \(K\) ancillas evolves as follows
\[|0\rangle\otimes|\underbrace{0\cdots 0}_{K}\rangle\ \xrightarrow{\left[|0 \rangle\langle 0|\otimes R_{k}+|1\rangle\langle 1|\otimes X\right]_{k=1 \cdots K}}\ x_{1}\cdots x_{K}|0\rangle\otimes|\underbrace{0\cdots 0}_{K}\rangle+\sum_{k=1}^{K}x_{1} \cdots x_{k-1}y_{k}|1\rangle\otimes|\underbrace{0\cdots 0}_{k-1}1_{k} \underbrace{0\cdots 0}_{K-k}\rangle\] \[\xrightarrow[\text{on }K\text{ ancillas}]{\text{project }|b_{1}\rangle=|1\rangle}\ \ \sum_{k=1}^{K}x_{1}\cdots x_{k-1}y_{k}|\underbrace{0\cdots 0}_{k-1}1_{k} \underbrace{0\cdots 0}_{K-k}\rangle, \tag{68}\]
where \(x_{1}\cdots x_{k-1}y_{k}=c_{k}\) for \(k=1\cdots K\). The probability of successfully preparing the state in step #4 is \(1/K\). We can boost the success probability to close to 1 by classically repeating this preparation \(\mathcal{O}(K)\) times, or doing \(\mathcal{O}(\sqrt{K})\) steps of amplitude amplification.
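The amplitudes \(x_{k}\), \(y_{k}\) entering (68) can be computed classically from the target coefficients via the relation \(x_{1}\cdots x_{k-1}y_{k}=c_{k}\); a sketch assuming real, positive \(c_{k}\):

```python
import numpy as np

def cascade_amplitudes(c):
    """Rotation amplitudes for the cascade of Eq. (68): y_k branches off
    ancilla k, and x_k = sqrt(1 - y_k^2) continues the chain."""
    x, y, prefix = [], [], 1.0
    for ck in c:
        yk = ck / prefix                      # enforces x_1...x_{k-1} y_k = c_k
        xk = np.sqrt(max(0.0, 1.0 - yk ** 2))
        y.append(yk); x.append(xk)
        prefix *= xk
    return np.array(x), np.array(y)

c = np.array([0.5, 0.5, 0.5, 0.5])            # normalized target coefficients
x, y = cascade_amplitudes(c)
print(np.cumprod(np.concatenate(([1.0], x[:-1]))) * y)   # recovers c
```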
We can construct a more efficient state preparation by
Figure 3: Boson (upper panels) and spin (lower panels) dynamics of a spin-boson model for different coupling regimes. The Hamiltonian is described in (51) with the corresponding qubit representation in (56). The decoherence is simulated through a Lindblad master equation as described in Appendix A where the parameters \(\Gamma\) and \(\gamma\) account for the experimental imperfections.
generating the state
\[\frac{1}{\sqrt{\sum_{k=1}^{K}|c_{k}|}}\sum_{k=1}^{K}\sqrt{|c_{k}|}\,|k\rangle \tag{69}\]
in step #2 and measure the same state in #4, while introducing phases of \(c_{k}\) in step #3. The probability of success then becomes \(1/(\sum_{k=1}^{K}|c_{k}|)^{2}\), and can be boosted to unity with \(\mathcal{O}(\sum_{k=1}^{K}|c_{k}|)\) steps of amplitude amplification. Note that this approach has a lower complexity than the previous approach due to the fact that the inequality
\[\sum_{k=1}^{K}|c_{k}|\leq\sqrt{K}\sqrt{\sum_{k=1}^{K}|c_{k}|^{2}}=\sqrt{K} \tag{70}\]
always holds but is not always saturated.
It is possible to further lower the complexity by synthesizing a permutation unitary. Specifically, we first generate \(\sum_{k=1}^{K}c_{k}|k\rangle\), and our goal is then to implement the transformation
\[|k\rangle\mapsto\left|\phi_{b,k}\right\rangle, \tag{71}\]
where both \(k\) and \(\phi_{b,k}\) are encoded in binary. This is a permutation of \(K\) basis states. It can be decomposed into cyclic permutations of total length \(\mathcal{O}(K)\), and further into \(\mathcal{O}(K)\) transpositions (transformations between two basis states with all remaining basis states fixed). A transposition can be represented as a two-level unitary, and can be synthesized using the Gray code as in [114] (Section 4.5.2).
### Capture Ground States
Once the Hamiltonian and the initial states are encoded, we can start to compute the ground states. Taking the boson-fermion Hamiltonian as an example, bounds to the lower portion of the eigenspectrum of \(\hat{H}\) can be obtained through many approaches. On NISQ devices, typical routines include hybrid quantum-classical variational quantum algorithms (VQA) [139, 138, 137, 38, 75, 81, 122, 110, 3], the quantum approximate optimization algorithm (QAOA) [51], quantum annealing [3, 22], Gaussian boson sampling [1], analog quantum simulation [62, 49], iterative quantum-assisted eigensolvers [112, 110, 95, 113], imaginary time evolution (ITE) [138, 139, 143, 157, 158, 161, 79, 99, 160, 112, 135, 162, 137, 159, 163], and many others. In particular, the ITE approach has a long history as a robust computational approach for solving the ground state of a many-body quantum system. The development and application of the ITE approach targeting the ground-state wave function and energy dates back to the 1970s, when similar random-walk imaginary-time techniques were developed for diffusion Monte Carlo methods [7, 8, 9, 4].
Combining the ITE with the variational expansion, there is another interesting yet lesser-known moment approach proposed by Peeters and Devreese [118], and further analyzed by Soldatov [141] in the 1990s, which we will refer to as the Peeters-Devreese-Soldatov (PDS) approach. We recently reviewed this method and proposed its potential use for accurate quantum computations [136, 89, 120]. Specifically, in this approach the energy functional depends on the moments of the Hamiltonian \(\langle\phi|\hat{H}^{n}|\phi\rangle=\langle\hat{H}^{n}\rangle\), for some trial state \(|\phi\rangle\). For a PDS approximation of order \(K\), that is, PDS(\(K\)), one needs to estimate \(\langle\hat{H}\rangle,\langle\hat{H}^{2}\rangle,\cdots,\langle\hat{H}^{2K-1}\rangle\). The moments serve as elements of the matrix \(\mathbf{M}\) (\(M_{ij}=\langle\hat{H}^{2K-i-j}\rangle\)) and vector \(\mathbf{Y}\) (\(Y_{i}=\langle\hat{H}^{2K-i}\rangle\)), which are related by \(\mathbf{M}\mathbf{X}=-\mathbf{Y}\). The solution of this linear system comprises the coefficients of the following polynomial
\[P_{K}(\mathcal{E})=\mathcal{E}^{K}+\sum_{i=1}^{K}X_{i}\mathcal{E}^{K-i}=0, \tag{72}\]
whose roots are the bounds to the \(K\) lowest eigenvalues in the eigenspectrum of \(\hat{H}\).
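For concreteness, the PDS(\(K\)) pipeline — the moments, the linear system \(\mathbf{M}\mathbf{X}=-\mathbf{Y}\), and the roots of (72) — can be sketched in a few lines. This is a toy illustration on a random Hermitian matrix, not the workflow used for the figures below:

```python
import numpy as np

def pds_energies(H, phi, K):
    """PDS(K) sketch: estimate <H^n> for n = 1..2K-1, solve M X = -Y,
    and return the (real) roots of the polynomial P_K of Eq. (72)."""
    moments = {0: 1.0}
    v = phi.copy()
    for n in range(1, 2 * K):
        v = H @ v
        moments[n] = np.real(np.vdot(phi, v))          # <phi|H^n|phi>
    M = np.array([[moments[2 * K - i - j] for j in range(1, K + 1)]
                  for i in range(1, K + 1)])
    Y = np.array([moments[2 * K - i] for i in range(1, K + 1)])
    Xc = np.linalg.solve(M, -Y)
    roots = np.roots(np.concatenate(([1.0], Xc)))      # P_K(E) = 0
    return np.sort(roots.real)                         # real in exact arithmetic

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8)); H = (A + A.T) / 2         # random Hermitian test
phi = np.ones(8) / np.sqrt(8.0)                        # trial state with ground-state support
print(pds_energies(H, phi, 3)[0], np.linalg.eigvalsh(H)[0])
```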
One requirement for PDS to work is that the trial state \(|\phi\rangle\) finds support in the ground state of \(\hat{H}\), that is, it has non-zero overlap with the ground state. This is a crucial requirement because variation in the magnitude of the coupling constant \(g\) in (58) leads to changes in the relative position of the energy levels corresponding to states in different fermionic number sectors. This change is due to the number of fermions increasing as \(g\) becomes larger until all fermionic sites are occupied. In the case of a three-site Holstein model, for \(g\lessapprox 1.27\) the total number of fermions is 1, and for \(g\gtrapprox 1.27\) it is 3. Consequently, an attempt to estimate the ground state energy with methods such as PDS in the entire domain of \(g\) implies a trial state that is a superposition of states with all allowed particle numbers, which in this case can be as simple as
\[|\phi\rangle=\frac{1}{\sqrt{2}}\underbrace{(|001\rangle+|111\rangle)}_{\text{ fermions}}\otimes\underbrace{|000\rangle}_{\text{bosons}}. \tag{73}\]
The results of employing the trial state in (73) to evaluate the bound to the ground state energy provided by various orders of the PDS energy functional as function of the parameter \(g\) are reported in Figure 5.
The simple trial state put forth in (73) is adequate to capture the different particle number sectors that are covered in the domain of the fermion-boson coupling \(g\). This
is evidenced by Figure 5 in demonstrating that the bound to the ground state energy furnished by the PDS energy functional rapidly converges toward the value obtained by exact diagonalization (ED) with increasing order parameter \(K\). While PDS(4) displays small, but still noticeable, disagreement with exact values, PDS(5) energy estimates are visually indistinguishable from the ED results. This lends credence to the suitability of the PDS approach as an invaluable tool in studying open quantum systems.
### Trotterization
Regarding the simulation of bosonic evolution, the idea is to approximately implement the unitary operator \(\hat{U}(t)=\exp\bigl{(}-i\hat{H}t\bigr{)}\), where \(\hat{H}\) is the Hamiltonian of the system, with its representations exemplified in Section II.1. For a given Hamiltonian that has a polynomially large number of terms (with respect to \(N\)), its unitary evolution \(\hat{U}(t)\) can be efficiently performed on a quantum computer by applying a polynomial number of single- and two-qubit gates.
One simple quantum algorithm for simulating the evolution \(\hat{U}(t)\) involves Trotter decomposition, which approximates the target evolution by a product of exponentials of terms in the Hamiltonian. For example, if \(\hat{H}=\hat{K}+\hat{V}\), then
\[\hat{U}(t)=e^{-i\hat{H}t}\begin{cases}&=e^{-i\hat{K}t}e^{-i\hat{V}t}\qquad \qquad\text{if }[\hat{K},\hat{V}]=0\\ &\approx[e^{-i\hat{K}t/n}e^{-i\hat{V}t/n}]^{n}\ \ \text{if }[\hat{K},\hat{V}]\neq 0 \end{cases}. \tag{74}\]
Specifically, the approximation error of each step is bounded by
\[\left\|\hat{U}\left(\frac{t}{n}\right)-e^{-i\hat{K}t/n}e^{-i\hat{V}t/n}\right\| \leq\frac{\left\|[\hat{K},\hat{V}]\right\|t^{2}}{2n^{2}}, \tag{75}\]
which implies an overall error of at most
\[\left\|\hat{U}\left(t\right)-[e^{-i\hat{K}t/n}e^{-i\hat{V}t/n}]^{n}\right\| \leq\frac{\left\|[\hat{K},\hat{V}]\right\|t^{2}}{2n}. \tag{76}\]
To achieve an accuracy \(\epsilon\), it thus suffices to take
\[n=\left\lceil\frac{\left\|[\hat{K},\hat{V}]\right\|t^{2}}{2\epsilon}\right\rceil. \tag{77}\]
This analysis can be extended to Hamiltonians containing multiple terms, and to other Trotter decompositions with a higher-order accuracy [30].
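The bound (76) is straightforward to check numerically; a sketch for random Hermitian \(\hat{K}\) and \(\hat{V}\):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
def herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (A + A.conj().T) / 2

K, V = herm(8), herm(8)
t, n = 1.0, 50
exact = expm(-1j * (K + V) * t)
step = expm(-1j * K * t / n) @ expm(-1j * V * t / n)
trotter = np.linalg.matrix_power(step, n)
err = np.linalg.norm(exact - trotter, 2)                      # spectral norm
bound = np.linalg.norm(K @ V - V @ K, 2) * t ** 2 / (2 * n)   # Eq. (76)
print(err <= bound, err, bound)                               # True, plus the two values
```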
As discussed in Section II.1, we can represent bosonic operators using either a unary or a binary encoding. For the unary encoding, bosonic operators are represented by linear combinations of Pauli operators which can then be split using the Trotter decomposition. To exponentiate a Pauli operator, take \(\exp\left(-i\frac{\delta}{2}XXYY\right)\) as an example. Since
\[HZH=X,\ \ SXS^{\dagger}=Y, \tag{78}\]
with Hadamard gate \(H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}1&1\\ 1&-1\end{array}\right)\) and phase gate \(S=\left(\begin{array}{cc}1&0\\ 0&i\end{array}\right)\), we then have
\[XXYY =IISS\cdot XXXX\cdot IIS^{\dagger}S^{\dagger}\] \[=IISS\cdot HHHH\cdot ZZZZ\cdot HHHH\cdot IIS^{\dagger}S^{\dagger} \tag{79}\]
and
\[\exp\left(-i\frac{\delta}{2}XXYY\right) \tag{80}\] \[=IISS\cdot HHHH\cdot\exp\left(-i\frac{\delta}{2}ZZZZ\right)\cdot HHHH\cdot IIS^{\dagger}S^{\dagger}.\]
The entire circuit representing \(\exp\left(-i\frac{\delta}{2}XXYY\right)\) is shown in Figure 6, where the middle "CNOT-staircase" circuit represents the operator \(\exp\left(-i\frac{\delta}{2}ZZZZ\right)\).
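The identity (80) is straightforward to verify numerically; the following Python check (dense matrices rather than a circuit, for illustration only) builds both sides and compares them.

```python
import numpy as np
from scipy.linalg import expm
from functools import reduce

# Check Eq. (80): conjugating exp(-i delta/2 ZZZZ) by single-qubit basis
# changes (H on every qubit, S on the two qubits carrying Y) yields
# exp(-i delta/2 XXYY).
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
S = np.diag([1, 1j])                           # phase gate

kron = lambda *ops: reduce(np.kron, ops)
delta = 0.37

lhs = expm(-1j * delta / 2 * kron(X, X, Y, Y))
basis = kron(I2, I2, S, S) @ kron(H, H, H, H)
rhs = basis @ expm(-1j * delta / 2 * kron(Z, Z, Z, Z)) @ basis.conj().T
print(np.allclose(lhs, rhs))                    # True
```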
For the binary encoding, we can implement bosonic operators by representing them as linear combinations of position and momentum operators, which are diagonalized in the position and momentum basis respectively. When two operators act on different modes, or when operators of the same type act on a single mode, they can be simultaneously diagonalized and the circuit implementation is straightforward. However, the implementation becomes more challenging for products of position and momentum operators on the same mode (such terms arise in squeezing Hamiltonians [63]). Take \(e^{-i\left(pq+qp\right)}\) as an example. Since
Figure 5: The change in the ground state energy from exact diagonalization (ED) and in various orders of the PDS energy functional of a three-site Holstein model as a function of fermion-boson coupling strength.
Figure 6: Circuit to generate the operator \(\exp\left(-i\frac{\delta}{2}XXYY\right)\). The middle “CNOT-staircase” (shaded) corresponds to the operator \(\exp\left(-i\frac{\delta}{2}ZZZZ\right)\).
neither \(|p\rangle\) nor \(|q\rangle\) are eigenvectors of \((pq+qp)\), the diagonalization of \((pq+qp)\) needs to be performed numerically on the classical side followed by encoding the eigenvectors in terms of the boson states [106], i.e.
\[(pq+qp)|\mu\rangle=\epsilon_{\mu}|\mu\rangle,\ \ |\mu\rangle=\sum_{i=0}^{N_{\mathrm{b}}-1}U_{\mu,i}|n_{i}\rangle. \tag{81}\]
where the matrix \(\mathbf{U}\) with elements \(U_{\mu,i}\) is unitary. The complexity of this approach scales polynomially with the bosonic cutoff instead of logarithmically as in the diagonalizable case (we refer the reader to Section IV E of Ref. [106] for a detailed discussion). Alternatively, we can approximate \(e^{-it(pq+qp)}\) using product formulas [102; 116]. Note that
\[pq+qp=\{p,q\} \tag{82}\]
is essentially an anticommutator; employing the strategy introduced in Ref. [32], one can then create analogs of \(pq\) and \(qp\) on an enlarged Hilbert space
\[p^{\prime}:=p\otimes Y,\ \ q^{\prime}:=q\otimes X \tag{83}\]
where \(X\) and \(Y\) are Pauli matrices. Then for any quantum state \(|\phi\rangle\) prepared in the same Hilbert space as \(p\) and \(q\), we have
\[[p^{\prime},q^{\prime}]|\phi\rangle\otimes|0\rangle=-i\{p,q\}|\phi\rangle\otimes|0\rangle\] \[\Rightarrow\ e^{-it\{p,q\}}|\phi\rangle\otimes|0\rangle=e^{t[p^{\prime},q^{\prime}]}|\phi\rangle\otimes|0\rangle. \tag{84}\]
Then a product formula approximation can be constructed for \(e^{t[p^{\prime},q^{\prime}]}\) with a number of exponentials that scales almost linearly with the evolution time (a similar formula can be used to implement \(\tilde{H}^{n}\) for the PDS(\(K\)) approach discussed in Section II.3). Commutator-type error bounds can also be derived for such product formulas. For example, following Ref. [66] (Lemma 6), we have
\[\|e^{-itp}e^{-itq}e^{itp}e^{itq}-e^{-t^{2}[p,q]}\|\] \[\leq t^{3}\left(\|[p,[p,q]]\|+\|[q,[q,p]]\|\right). \tag{85}\]
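Since the dilation identity behind (84) holds exactly at the matrix level, it can be checked with truncated position and momentum matrices; the following Python sketch (the truncation dimension is an arbitrary choice) verifies \([p^{\prime},q^{\prime}]=-i\{p,q\}\otimes Z\), from which (84) follows by acting on \(|\phi\rangle\otimes|0\rangle\) and using \(Z|0\rangle=|0\rangle\).

```python
import numpy as np

# Verify [p', q'] = -i {p, q} (x) Z for p' = p (x) Y, q' = q (x) X, Eq. (83),
# using finite (truncated) oscillator matrices; the identity is exact for any
# truncation dimension, so d = 12 is an arbitrary choice.
d = 12
b = np.diag(np.sqrt(np.arange(1, d)), k=1)    # truncated annihilation operator
q = (b + b.T) / np.sqrt(2)                    # position matrix
p = 1j * (b.T - b) / np.sqrt(2)               # momentum matrix
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

pp = np.kron(p, Y)                            # p' = p (x) Y
qq = np.kron(q, X)                            # q' = q (x) X
comm = pp @ qq - qq @ pp
anti = p @ q + q @ p
print(np.allclose(comm, -1j * np.kron(anti, Z)))   # True
```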
### Qubitization
We now discuss how bosonic Hamiltonians can be simulated using an alternative simulation algorithm called "qubitization".
The basic component of this algorithm is a probabilistic encoding of the target operator using unitary operations. For instance, suppose the target operator has the decomposition \(H=\sum_{\ell=1}^{L}\beta_{\ell}U_{\ell}\) as a linear combination of unitaries, where the coefficients \(\beta_{\ell}\) are all positive and operators \(U_{\ell}\) are unitaries. Then, we define
\[\mathrm{PREP}|0\rangle=\frac{1}{\sqrt{\sum_{\ell=1}^{L}\beta_{\ell}}}\sum_{ \ell=1}^{L}\sqrt{\beta_{\ell}}|\ell\rangle,\ \ \mathrm{SEL}=\sum_{\ell=1}^{L}|\ell\rangle\langle\ell|\otimes U_{\ell}, \tag{86}\]
so that
\[\left(\langle 0|\mathrm{PREP}^{\dagger}\otimes I\right)\mathrm{SEL}\left(\mathrm{PREP}|0\rangle\otimes I\right)=\frac{H}{\sum_{\ell=1}^{L}\beta_{\ell}}. \tag{87}\]
That is, the target Hamiltonian \(H\) is encoded by the unitaries PREP and SEL with the normalization factor \(1/\sum_{\ell}\beta_{\ell}\). With this encoding, one can use the so-called qubitization algorithm [104] to approximate the time evolution \(e^{-itH}\) to accuracy \(\epsilon\) by making
\[\mathcal{O}\left(\sum_{\ell}\beta_{\ell}t+\log\left(\frac{1}{\epsilon}\right)\right) \tag{88}\]
queries to PREP and SEL. The action of PREP is to prepare an \(L\)-dimensional state which takes \(\mathcal{O}(L)\) gates [140]. For SEL, we need to cycle through the binary representation of all \(\ell=1,2,\ldots,L\), which has cost \(\mathcal{O}(L)\)[34] (Appendix G.4).
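The encoding (86)-(87) can be checked directly on a toy example. In the following Python sketch (an illustration only; the Pauli unitaries and coefficients are arbitrary assumptions), PREP and SEL are built as explicit matrices and the projected block is compared against \(H/\sum_{\ell}\beta_{\ell}\).

```python
import numpy as np
from functools import reduce

# Toy LCU block encoding, Eqs. (86)-(87): H = sum_l beta_l U_l.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)
kron = lambda *ops: reduce(np.kron, ops)

betas = np.array([0.5, 1.2, 0.8, 1.5])
Us = [kron(X, I2), kron(I2, X), kron(Z, Z), kron(I2, I2)]
H = sum(b * U for b, U in zip(betas, Us))
norm = betas.sum()

L, d = len(betas), Us[0].shape[0]
prep_col = np.sqrt(betas / norm)                  # amplitudes of PREP|0>
SEL = np.zeros((L * d, L * d), dtype=complex)
for l, U in enumerate(Us):                        # SEL = sum_l |l><l| (x) U_l
    SEL[l*d:(l+1)*d, l*d:(l+1)*d] = U

bra = np.kron(prep_col.conj(), np.eye(d))         # (<0|PREP^dag (x) I)
ket = np.kron(prep_col.reshape(-1, 1), np.eye(d)) # (PREP|0> (x) I)
block = bra @ SEL @ ket
print(np.allclose(block, H / norm))               # True, matching Eq. (87)
```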
Besides the linear-combination-of-unitaries model, it is also possible to use the qubitization algorithm when the target operator \(H\) is sparse, meaning that each row or column of \(H\) has at most a constant number of nonzero elements. The underlying idea is to view the target Hamiltonian as the weighted adjacency matrix of some graph and then perform a quantum walk [29].
Note that the Hamiltonians introduced in Section II.1 can all be expressed as linear combinations of elementary products of spin, fermionic, and bosonic operators. Assuming encodings of the elementary operators are available, it is straightforward to perform linear combinations and multiplications to encode the target Hamiltonian, and thereby simulate the Hamiltonian via qubitization. Efficient encodings of spin and fermionic operators are known from previous work such as Ref. [12]. Here, we focus on the encoding of bosonic Hamiltonians.
Specifically, we consider the truncated bosonic operator
\[b^{\dagger} =\begin{bmatrix}0&\cdots&\cdots&\cdots&0\\ 1&0&\ddots&\ddots&\vdots\\ 0&\sqrt{2}&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&\cdots&\sqrt{\Lambda-1}&0\end{bmatrix}\] \[=\sum_{\lambda=0}^{\Lambda-1}\sqrt{(\lambda+1)\bmod\Lambda}|( \lambda+1)\bmod\Lambda\rangle\langle\lambda|. \tag{89}\]
This operator is sparse and can thus be encoded using a quantum walk as discussed above. However, the resulting circuit is complicated, and the square root function requires a large amount of arithmetic to implement. In the following we describe an alternative method based on a linear combination of unitaries, which can be easier to implement with a minimal amount of arithmetic.
Suppose we want to implement some nonnegative function \(f(\lambda)\) for \(\lambda\in[0,\Lambda]\). The core idea behind our implementation is to consider the integral representation [13] (Appendix C)
\[f(\lambda)=\int_{0}^{\|f\|_{\max}}\mathrm{d}x\ (-1)^{2x>\|f\|_{\max}+f(\lambda)}. \tag{90}\]
Here, \(\|f\|_{\max}=\max_{\lambda\in[0,\Lambda]}|f(\lambda)|\) is the maximum value of \(f\) over the interval \([0,\Lambda]\), and \(2x>\|f\|_{\max}+f(\lambda)\) is a Boolean expression that has value 1 if the condition is
true and 0 otherwise. To prove this equality, note that the integral on the right-hand side simplifies to
\[(+1)\frac{\left\|f\right\|_{\text{max}}+f(\lambda)}{2}+(-1)\left(\left\|f\right\| _{\text{max}}-\frac{\left\|f\right\|_{\text{max}}+f(\lambda)}{2}\right) \tag{91}\]
which equals \(f(\lambda)\), the left-hand side. Applying this integral representation to the bosonic operator, we have
\[b^{\dagger}=\int_{0}^{\sqrt{\Lambda-1}}\text{d}x\sum_{\lambda=0}^{\Lambda-1}(-1)^{2x>\sqrt{\Lambda-1}+\sqrt{(\lambda+1)\text{ mod }\Lambda}}\] \[\times\left|(\lambda+1)\text{ mod }\Lambda\right\rangle\!\langle\lambda|. \tag{92}\]
This suggests a circuit implementation as follows. We first prepare a uniform superposition state \(\text{PREP}\left|0\right\rangle=\frac{1}{\sqrt{\Xi}}\sum_{\xi=0}^{\Xi-1}\left|\xi\right\rangle\) for some large value of \(\Xi\) to be determined later. We then implement a cyclic shift on the register \(\left|\lambda\right\rangle\) modulo \(\Lambda\). We now test the inequality
\[2\frac{\xi}{\Xi}\sqrt{\Lambda-1}>\sqrt{\Lambda-1}+\sqrt{(\lambda+1)\text{ mod }\Lambda}, \tag{93}\]
and use the outcome to apply the sign factor \((-1)\). These two operations together define SEL, and \(\left(\langle 0|\text{PREP}^{\dagger}\otimes I\right)\text{SEL}\left(\text{PREP}|0\rangle\otimes I\right)\) encodes the Riemann sum
\[\frac{1}{\Xi}\sum_{\xi=0}^{\Xi-1}\left(\sum_{\lambda=0}^{\Lambda-1}(-1)^{2\frac{\xi}{\Xi}\sqrt{\Lambda-1}>\sqrt{\Lambda-1}+\sqrt{(\lambda+1)\text{ mod }\Lambda}}\right.\] \[\times\left.\left|(\lambda+1)\text{ mod }\Lambda\right\rangle\!\langle\lambda|\right) \tag{94}\]
which approximates the normalized bosonic operator
\[\frac{b^{\dagger}}{\sqrt{\Lambda-1}} \tag{95}\]
when \(\Xi\) is sufficiently large.
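The convergence of the Riemann sum (94) to the normalized operator (95) can be observed numerically; the following Python sketch (with illustrative values of \(\Lambda\) and \(\Xi\)) evaluates the sum entrywise and compares it with \(b^{\dagger}/\sqrt{\Lambda-1}\), with the maximum entrywise error decaying as \(\mathcal{O}(1/\Xi)\).

```python
import numpy as np

# Riemann-sum encoding (94) of the truncated b^dagger, normalized as in (95).
Lam, Xi = 8, 4096
target = np.diag(np.sqrt(np.arange(1, Lam)), k=-1) / np.sqrt(Lam - 1)

approx = np.zeros((Lam, Lam))
for lam in range(Lam):
    fval = np.sqrt((lam + 1) % Lam)
    xi = np.arange(Xi)
    cond = 2 * (xi / Xi) * np.sqrt(Lam - 1) > np.sqrt(Lam - 1) + fval
    entry = np.mean(np.where(cond, -1.0, 1.0))    # (1/Xi) sum of (-1)^{...}
    approx[(lam + 1) % Lam, lam] = entry
print(np.abs(approx - target).max())               # ~ 2/Xi, vanishes as Xi grows
```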
We now consider the gate complexity of implementing this encoding. To simplify the discussion, we assume that both \(\Lambda\) and \(\Xi\) are powers of 2. Then the preparation of the uniform superposition state takes only \(\log(\Xi)\) Hadamard gates. The cyclic shifting on \(\left|\lambda\right\rangle\) can be realized as a binary addition on \(\log(\Lambda)\) bits and thus has complexity \(\mathcal{O}(\log(\Lambda))\). For the next step, we can equivalently test the following system of inequalities
\[\begin{cases}2\xi>\Xi,\\ (2\xi-\Xi)^{2}(\Lambda-1)>\Xi^{2}\left((\lambda+1)\text{ mod }\Lambda\right), \end{cases} \tag{96}\]
which has a cost of
\[\mathcal{O}(\log^{2}(\Xi)\log(\Lambda)). \tag{97}\]
This is also the asymptotic gate complexity for the entire encoding.
We now consider choosing \(\Xi\) so that the Riemann sum well approximates the integral. For a fixed value of \(\lambda\), note that the Riemann sum
\[\frac{1}{\Xi}\sum_{\xi=0}^{\Xi-1}(-1)^{2\frac{\xi}{\Xi}\sqrt{\Lambda-1}>\sqrt{\Lambda-1}+\sqrt{(\lambda+1)\text{ mod }\Lambda}} \tag{98}\]
and the integral
\[\frac{1}{\sqrt{\Lambda-1}}\int_{0}^{\sqrt{\Lambda-1}}\text{d}x\ (-1)^{2x>\sqrt{\Lambda-1}+\sqrt{(\lambda+1)\text{ mod }\Lambda}} \tag{99}\]
differ over an interval of length at most \(1/\Xi\). Thus, their difference is bounded by \(2/\Xi\). Since \(b^{\dagger}\) is 1-sparse, the error of approximating \(b^{\dagger}\) is given by the maximum error of approximating its entries. Therefore, we choose \(\Xi=\mathcal{O}(1/\delta)\) so that
\[\left\|\left(\langle 0|\text{PREP}^{\dagger}\otimes I\right)\text{SEL}\left(\text{PREP}|0\rangle\otimes I\right)-\frac{b^{\dagger}}{\sqrt{\Lambda-1}}\right\|\leq\delta \tag{100}\]
and this encoding can thus be implemented with cost \(\mathcal{O}(\log(\Lambda)\log^{2}(1/\delta))\).
We have so far constructed a quantum circuit that encodes the bosonic operator \(b^{\dagger}\). The conjugate transpose of this circuit encodes \(b\). On the other hand, the encoding of spin and fermionic operators is well studied in the context of quantum chemistry simulation [12]. The qubitization algorithm then allows us to implement linear combinations of products of these elementary operators, which is sufficient to simulate bosonic models, such as the spin-boson Hamiltonian (47) and the boson-fermion Hamiltonian (57).
## III Effective Hamiltonian construction through coupled-cluster approach
To further optimize quantum computation by reducing the number of gates and operations, a standard approach involves constructing an effective Hamiltonian for the quantum system. This Hamiltonian accurately encapsulates particle interactions while conserving resources. The Hamiltonian can follow a unitary and/or non-unitary path to transform into an effective form.
The unitary transformation has been extensively discussed in the context of understanding many-body localization phenomena [16; 17; 28; 11; 13; 19; 129]. For completeness, we provide a concise overview of the unitary path in Appendix B. Subsequently, our focus shifts to exploring some coupled cluster (CC) formulations, which are inherently non-unitary, and designing quantum algorithms specifically for bosonic systems. It is worth noting that the interplay between unitary and non-unitary features and operations becomes apparent during the Hamiltonian diagonalization and open system-solving processes. Specifically, traditional flow equations and corresponding generators can be generalized to handle non-Hermitian matrices and open quantum systems governed by, for example, Lindbladians (see, e.g. Ref. [136]). On the other hand, recent progress in encoding general non-unitary operators on quantum computers allows for highly accurate quantum simulations of correlated fermionic systems [20; 23; 31; 10; 14; 121]. Here, we extend the CC downfolding technique to bosonic systems. As will be evident in the ensuing discussion, the unitary analog of traditionally non-unitary CC formulations for bosonic systems can be meticulously designed to ensure the exactness of the ansatz, which then guarantees that the effective Hamiltonian can be accurately and efficiently constructed via appropriate quantum algorithms.
The single-reference coupled-cluster formalism for a mixture of bosons localized at different sites has been intensively studied in the context of Bose-Einstein condensates (BECs) and trapped bosonic systems [58; 28]. Let us focus attention on a system of \(N\) identical bosons that can occupy \(M\) "one-particle" states. In such a case the dimensionality of the bosonic FCI space is
\[\mathrm{dim}_{\mathrm{FCI}}=\begin{pmatrix}M+N-1\\ N\end{pmatrix} \tag{101}\]
which grows much faster than the analogous dimension of the fermionic FCI space. The CC parametrization of the bosonic ground-state wave function \(|\Psi\rangle\) takes a form analogous to the fermionic case
\[|\Psi\rangle=e^{T}|\phi_{0}\rangle\;, \tag{102}\]
where the normalized reference function \(|\phi_{0}\rangle\) for bosons is defined as
\[|\phi_{0}\rangle=\frac{1}{\sqrt{N!}}(b_{1}^{\dagger})^{N}|\mathrm{vac}\rangle \tag{103}\]
and \(|\mathrm{vac}\rangle\) denotes the physical vacuum. The cluster operator \(T\) (in the class of standard CC approximations) is represented as a sum of its many-body components \(T_{k}\)
\[T_{k}=\sum_{a_{1},\ldots,a_{k}=2}^{M}t_{a_{1}\ldots a_{k}}b_{a_{1}}^{\dagger}\ldots b_{a_{k}}^{\dagger}(b_{1})^{k} \tag{104}\]
where \([T_{k},T_{l}]=0\). As for the non-Hermitian CC downfolding, let us partition the one-particle space into the lowest \(M_{\mathrm{act}}\) active one-particle functions and the remaining (inactive) ones. Our goal is to build an effective representation of the Hamiltonian that furnishes the same ground-state energy in the active space of dimension
\[\mathrm{dim}_{\mathrm{act}}=\begin{pmatrix}M_{\mathrm{act}}+N-1\\ N\end{pmatrix} \tag{105}\]
To this end, let us partition the \(T\) operator into internal (\(T_{\mathrm{int}}\)) and external (\(T_{\mathrm{ext}}\)) parts
\[T=T_{\mathrm{int}}+T_{\mathrm{ext}} \tag{106}\]
where \(T_{\mathrm{int}}\) and \(T_{\mathrm{ext}}\) produce excitations within and outside of the model space, respectively, when acting on the reference function \(|\Phi\rangle\). For example, this means that the cluster amplitudes defining \(T_{\mathrm{int}}\) carry active orbital indices only, whereas the external amplitudes must include at least one inactive orbital index.
To derive the bosonic variant of CC downfolding we start from the energy-independent form of the CC equations
\[(P+Q)He^{T}|\Phi\rangle=E(P+Q)e^{T}|\Phi\rangle\;, \tag{107}\]
which at the solution is equivalent to the standard connected form of the bosonic CC equations:
\[Qe^{-T}He^{T}|\Phi\rangle = 0\;, \tag{108}\] \[\langle\Phi|e^{-T}He^{T}|\Phi\rangle = E\;, \tag{109}\]
In Eqs. (107)-(109), \(Q\) is the projection operator onto excited configurations generated by the action of \(T\) on the reference function, and \(P\) designates the projection operator onto the reference function. Projecting Eq. (107) onto the active-space configurations described by the projection operator \((P+Q_{\mathrm{int}})\) (where \(Q_{\mathrm{int}}\) is the projection operator onto excited configurations in the active space), and assuming that \(e^{T_{\mathrm{int}}}|\Phi\rangle\) generates an FCI-type expansion in the active space (i.e., \(T_{\mathrm{int}}|\Phi\rangle\) generates all excited active-space configurations), in analogy to the fermionic case [17; 18; 19; 94], one can show that the CC energy can be calculated as an eigenvalue of the active-space effective Hamiltonian \(H^{\mathrm{eff}}\):
\[H^{\mathrm{eff}}e^{T_{\mathrm{int}}}|\Phi\rangle=Ee^{T_{\mathrm{int}}}|\Phi \rangle\;, \tag{110}\]
where
\[H^{\mathrm{eff}}=(P+Q_{\mathrm{int}})e^{-T_{\mathrm{ext}}}He^{T_{\mathrm{ext} }}(P+Q_{\mathrm{int}})\;. \tag{111}\]
If the external cluster amplitudes are known or can be efficiently approximated, then the effective Hamiltonian can be viewed as a reduced-dimensionality representation of the bosonic problem.
As a specific example of how the dimensionality of the problem can be compressed, let us consider the simplest case where the active space contains two orbitals (\(M_{\mathrm{act}}=2\)). In this case, the general form of the internal \(k\)-tuply excited cluster operators takes the simple form
\[T_{k}=c_{2\ldots 2}(b_{2}^{\dagger})^{k}(b_{1})^{k}\;. \tag{112}\]
For the case when \(N=10\), \(M_{\mathrm{act}}=2\), and \(M=10\), we have \(\mathrm{dim}_{\mathrm{FCI}}=92378\) and \(\mathrm{dim}_{\mathrm{act}}=11\), and the dimension compression defined as \(\mathrm{dim}_{\mathrm{FCI}}/\mathrm{dim}_{\mathrm{act}}\) amounts to \(\simeq 10^{4}\).
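This compression factor is straightforward to reproduce from Eqs. (101) and (105), e.g. with a few lines of Python:

```python
from math import comb

# Bosonic FCI and active-space dimensions, Eqs. (101) and (105).
N, M, M_act = 10, 10, 2
dim_fci = comb(M + N - 1, N)        # 92378
dim_act = comb(M_act + N - 1, N)    # 11
print(dim_fci, dim_act, dim_fci / dim_act)   # compression ~ 8.4e3
```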
Similar to systems defined by interacting fermions, the Hermitian form of downfolding, based on double unitary Coupled Cluster (CC) ansatz, can be extended to bosonic systems. Figure 7 demonstrates a quantum-classical workflow targeting the ground state and an effective Hamiltonian of a three-site, two-boson model as described by Hamiltonian (38). In this workflow, a Trotterized double unitary CC with singles and doubles (D-UCCSD) ansatz is employed to ensure exactness (see Appendix C for the proof). However, unlike conventional VQE methods using the same ansatz, the excitation operators are partitioned into two subsets. Each subset corresponds to different excitation sub-manifolds and is treated separately.
As illustrated in Figure 7(b), the part of the free parameters corresponding to the sub-manifold \(Q_{2}\) is managed by the regular VQE module (the micro update), and the remaining portion (corresponding to the excitations between site 0 and site 1) is obtained through constructing and diagonalizing an effective Hamiltonian \(H_{\mathrm{eff}}\) (the macro update). This partition significantly reduces the computational cost of directly optimizing the entire free-parameter space as required in conventional VQE routines. As depicted in Figure 7(c), the macro update in the proposed workflow for the studied model converges more effectively than the conventional VQE. It is worth noting that the diagonalization of \(H_{\mathrm{eff}}\) can be done either classically or quantumly, depending on its size. The converged \(H_{\mathrm{eff}}\) can also be preserved for later use, such as quantum dynamics simulations and Green's function calculations.
It is worth noting that the methods of moments of coupled cluster (MMCC) equations and their implementation through a compact unitary basis and the corresponding quantum algorithms on a quantum computer (see, e.g., Ref. [121]) provide another way of constructing the effective Hamiltonian. The bosonic variant of the MMCC method can be readily derived using steps similar to those used for the fermionic cases [45, 46, 47, 49, 60, 61, 70, 89, 90, 91, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156]. Briefly, assuming that the bosonic system is described by Hamiltonians \(H\), the asymmetric energy functional can be expressed as
\[E_{\text{MMCC}}[\Psi_{T}]=\frac{\langle\Psi_{T}|He^{T^{(A)}}|\Phi\rangle}{ \langle\Psi_{T}|e^{T^{(A)}}|\Phi\rangle} \tag{113}\]
where \(|\Psi_{T}\rangle\) is the so-called trial wave function, \(T^{(A)}\) is an arbitrary approximation (the parent approach) to the exact cluster operator \(T\), and \(H\) is a many-body Hamiltonian defined by one- and two-body interactions. When \(|\Psi_{T}\rangle\) is replaced by the exact bosonic ground-state wave function
Figure 7: Proposed quantum-classical workflow and its performance for searching the ground state and constructing the effective Hamiltonian for a three-site two-boson model. The model is described by Hamiltonian (38) with \(\mu=\{-1,0,1\}\), \(t=1\), and \(U=100\), giving rise to the exact ground state energy of 198.251313 a.u. (**a**) The model and its five excitations can be partitioned into three manifolds: the sub-manifold \(\mathbf{P}\) corresponding to the reference, the sub-manifold \(\mathbf{Q}_{1}\) corresponding to the excitations only between site \(0\) and site \(1\), and the sub-manifold \(\mathbf{Q}_{2}\) containing all the remaining excitations. (**b**) A Trotterized double unitary coupled cluster with singles and doubles (D-UCCSD) ansatz is employed to cover the true ground state for the quantum-classical workflow featuring a two-loop structure: a micro VQE loop updating only \(\{s_{i}\}\) (three free parameters for \(\hat{U}_{\text{\#2}}\)) and a macro loop updating \(\{r_{i}\}\) (two free parameters for \(\hat{U}_{\text{\#1}}\)). (**c**) The convergence performance of the proposed workflow and its comparison with the conventional VQE. During the optimization, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [55], a quasi-Newton strategy, is employed, and the initial values are \(s_{0}=s_{1}=s_{2}=-1.0\). The conventional VQE also employs the D-UCCSD ansatz but optimizes \(\{r_{i}\}\) and \(\{s_{i}\}\), i.e., five free parameters, at the same time.
\(|\Psi\rangle\), then the value of the B-MMCC functional Eq. (113) is equal to the exact ground-state energy \(E\):
\[E_{\text{MMCC}}[\Psi]=E. \tag{114}\]
Assuming that low-rank moments are used to calculate cluster amplitudes, i.e.,
\[Q_{A}M^{(A)}|\Phi\rangle=Q_{A}e^{-T^{(A)}}He^{T^{(A)}}|\Phi\rangle=0, \tag{115}\]
the many-body form of functional (113) can be rewritten as
\[E_{\text{MMCC}}[\Psi_{T}]=E^{(A)}+\frac{\langle\Psi_{T}|e^{T^{(A)}}Q_{R}M^{(A)}|\Phi\rangle}{\langle\Psi_{T}|e^{T^{(A)}}|\Phi\rangle}\, \tag{116}\]
where \(E^{(A)}\) is the approximate CC energy and \(Q_{R}\) is a projection operator onto excited configurations not included in the \(T^{(A)}|\Phi\rangle\) expansion.
## IV Error analysis of truncating bosonic modes
As mentioned earlier, bosonic modes have infinite degrees of freedom. Thus Hamiltonians that involve bosonic interactions need to be truncated before they can be simulated on a finite-memory quantum computer. In this section, we discuss how to rigorously bound the truncation error for a class of Hamiltonians with boson-fermion interactions.
Specifically, given a bosonic Hamiltonian \(H\), our goal is to find a truncated Hamiltonian \(\widetilde{H}\) such that the time evolution is well approximated, \(e^{-itH}\approx e^{-it\widetilde{H}}\), when acting on a given initial state. The truncated Hamiltonian can then be simulated using a common quantum simulation algorithm such as Trotterization or qubitization, as discussed in Sections II.4 and II.5 respectively. For concreteness, we take the boson-fermion Hamiltonian (57) as an example, but a similar analysis applies to the spin-boson Hamiltonian as well.
Our setting is similar to that of a recent work [148]. However, we have streamlined their analysis and improved a polylogarithmic factor over their bound, while also extending the result to time-dependent Hamiltonians. We will focus on analyzing the asymptotic scaling of the truncation cutoff, but one can keep track of all the constant factors so as to estimate the concrete resources required for a specific simulation task. Interestingly, the truncation bound we derive for the time evolution of bosonic Hamiltonians also has implications for studying their ground state properties [2].
The remainder of this section is organized as follows. In Section IV.1, we present several technical conditions which are required for our analysis to hold. In Section IV.2, we give a bound on the growth of bosonic number under the ideal Hamiltonian evolution when the initial state is restricted to a low-particle subspace. We then apply this in Section IV.3 to bound the error in the time evolution when the Hamiltonian is truncated to finite-dimensional Hilbert spaces. We discuss generalizations to multiple bosonic modes in Section IV.4, establishing the main result Theorem 7. We show how the analysis can be adapted to Hamiltonians with explicit time dependence in Section IV.5.
### Technical conditions
Following Ref. [148], we use \(\lambda\) to denote the number of bosons in a specific bosonic mode, and write the corresponding state as \(|\lambda\rangle\). Then, we define \(\Pi_{\mathcal{S}}\) as the projector onto the subspace of all states with bosonic number belonging to set \(S\) and let \(\overline{\Pi}_{\mathcal{S}}=I-\Pi_{\mathcal{S}}\) be the complementary projector. It follows from the completeness requirement that \(\Pi_{[0,\infty]}=\sum_{\lambda=0}^{\infty}\Pi_{\lambda}=I\). However, to implement simulation algorithms on a finite-memory quantum computer, we need to choose a maximum cutoff \(\Lambda\), which results in a partial projection \(\Pi_{[0,\Lambda]}\) and introduces a truncation error. We will rigorously analyze this error below.
Fixing a specific bosonic mode, our main goal is to upper bound the leakage of bosonic number under the ideal Hamiltonian evolution when the initial state has at most \(\Lambda_{0}\) particles. For technical reasons, we will impose additional conditions on the Hamiltonian. Specifically, we assume that the Hamiltonian can be decomposed as
\[H=H_{w}+H_{r}, \tag{117}\]
where
\[\Pi_{\lambda}H_{w}\Pi_{\lambda^{\prime}} =0\ \ (\text{if}\ |\lambda-\lambda^{\prime}|>1), \tag{118}\] \[\left\|H_{w}\Pi_{[0,\Lambda]}\right\| \leq\chi(\Lambda+1)^{r},\] \[\left[H_{r},\Pi_{\lambda}\right] =0\]
for some \(\chi>0\) and \(0\leq r<1\). In words, the last condition asserts that \(H_{r}\) does not change the number of bosons in that specific mode. While the bosonic number does change under \(H_{w}\), the amount and magnitude of this change are upper bounded by the first two conditions. Note that the conditions (118) can be alternatively understood as requirements on the block structure of the Hamiltonian. Indeed, a bosonic Hamiltonian \(H=H_{w}+H_{r}\) satisfies (118) if and only if
\[H_{w} =\sum_{\lambda=0}^{\infty}\left(\Pi_{\lambda+1}H\Pi_{\lambda}+ \Pi_{\lambda}H\Pi_{\lambda+1}\right),\] \[H_{r} =\sum_{\lambda=0}^{\infty}\Pi_{\lambda}H\Pi_{\lambda}. \tag{119}\]
We verify these conditions for the boson-fermion Hamiltonian (57). Without loss of generality, consider the first bosonic mode. Then, we have
\[H_{w} =g\omega f_{1}^{\dagger}f_{1}(b_{1}^{\dagger}+b_{1}),\] \[H_{r} =-\sum_{\langle i,j\rangle}vf_{i}^{\dagger}f_{j}+\sum_{l}\omega b_{l}^{\dagger}b_{l}+\sum_{i\neq 1}g\omega f_{i}^{\dagger}f_{i}(b_{i}^{\dagger}+b_{i}).\]
One can check that \(H_{r}\) does not change the bosonic number in the first mode, whereas \(H_{w}\) changes the number by at most \(\pm 1\). Furthermore,
\[\|H_{w}\Pi_{[0,\Lambda]}\|\leq g\omega\sqrt{4\Lambda+2}<g\omega 2\sqrt{ \Lambda+1}. \tag{120}\]
This is because
\[\left(b^{\dagger}+b\right)^{2}\leq\left(b^{\dagger}+b\right)^{2}+i^{2}\left(b ^{\dagger}-b\right)^{2}=4b^{\dagger}b+2I, \tag{121}\]
which implies
\[\|\left(b^{\dagger}+b\right)\Pi_{[0,\Lambda]}\| =\sqrt{\left\|\Pi_{[0,\Lambda]}\left(b^{\dagger}+b\right)^{2}\Pi_{[ 0,\Lambda]}\right\|}\] \[\leq\sqrt{\left\|\Pi_{[0,\Lambda]}\left(4b^{\dagger}b+2I\right)\Pi _{[0,\Lambda]}\right\|}\] \[=\sqrt{4\Lambda+2}. \tag{122}\]
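This bound is easy to probe numerically; in the following Python sketch (the embedding dimension is chosen large enough that the edge of the truncation does not affect the projected block), the projected norm is compared against \(\sqrt{4\Lambda+2}\).

```python
import numpy as np

# Numerical check of Eq. (122): ||(b^dag + b) Pi_[0,L]|| <= sqrt(4L + 2).
d, Lam = 64, 20                                   # d >> Lam avoids edge effects
b = np.diag(np.sqrt(np.arange(1, d)), k=1)        # truncated annihilation op
proj = np.diag((np.arange(d) <= Lam).astype(float))
norm = np.linalg.norm((b + b.T) @ proj, 2)
print(norm, np.sqrt(4 * Lam + 2))                 # norm stays below the bound
```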
### State truncation
To realize the truncation, we will introduce a sequence of time durations \(\Delta t_{1}+\cdots+\Delta t_{s}=t\) with corresponding bosonic numbers \(\Lambda_{0}<\Lambda_{1}<\cdots<\Lambda_{s}\). Assuming this is done, we then proceed to bound the approximation error
\[e^{-itH}\Pi_{[0,\Lambda_{0}]}\approx \Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\] \[\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}. \tag{123}\]
Specifically, we telescope the summation to get
\[e^{-itH}\Pi_{[0,\Lambda_{0}]}-\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\] \[=e^{-i(\Delta t_{2}+\cdots+\Delta t_{s})H}\overline{\Pi}_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}+e^{-i(\Delta t_{3}+\cdots+\Delta t_{s})H}\overline{\Pi}_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\] \[\quad+\cdots\] \[\quad+\overline{\Pi}_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}. \tag{124}\]
This means we can upper bound the long-time leakage by the sum of short-time leakages, and the error adds up at most linearly
\[\left\|e^{-itH}\Pi_{[0,\Lambda_{0}]}-\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\right\|\leq\sum_{j=0}^{s-1}\left\|\overline{\Pi}_{[0,\Lambda_{j+1}]}e^{-i\Delta t_{j+1}H}\Pi_{[0,\Lambda_{j}]}\right\|. \tag{125}\]
To bound the leakage \(\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]}\) for each short time \(\Delta t\geq 0\), we will use the interaction picture. Specifically, we have
\[e^{-i\Delta tH}=e^{-i\Delta tH_{r}}\,\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau H_{r}}H_{w}e^{-i\tau H_{r}}}\right\}, \tag{126}\]
where \(\mathcal{T}\) is the time-ordering operator for time-dependent Hamiltonian evolution. Since \(H_{r}\) does not change the bosonic number, this implies
\[\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]}\] \[=e^{-i\Delta tH_{r}}\overline{\Pi}_{[0,\Lambda^{\prime}]}\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau H_{r}}H_{w}e^{-i\tau H_{r}}}\right\}\Pi_{[0,\Lambda]}. \tag{127}\]
We now rewrite the right-hand side using the Dyson series:
\[\overline{\Pi}_{[0,\Lambda^{\prime}]}\cdot\mathcal{T}\exp\left(-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau H_{r}}H_{w}e^{-i\tau H_{r}}\right)\cdot\Pi_{[0,\Lambda]}\] \[=\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}(-i)^{k}\int_{0}^{\Delta t}\mathrm{d}\tau_{k}\cdots\int_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\int_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{i\tau_{k}H_{r}}H_{w}e^{-i\tau_{k}H_{r}}\cdots e^{i\tau_{2}H_{r}}H_{w}e^{-i\tau_{2}H_{r}}e^{i\tau_{1}H_{r}}H_{w}e^{-i\tau_{1}H_{r}}\Pi_{[0,\Lambda]} \tag{128}\] \[=\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}(-i)^{k}\int_{0}^{\Delta t}\mathrm{d}\tau_{k}\cdots\int_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\int_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{i\tau_{k}H_{r}}H_{w}\Pi_{[0,\Lambda+k-1]}e^{-i\tau_{k}H_{r}}\cdots H_{w}\Pi_{[0,\Lambda+1]}e^{-i\tau_{2}H_{r}}e^{i\tau_{1}H_{r}}H_{w}\Pi_{[0,\Lambda]}e^{-i\tau_{1}H_{r}},\]
where the last equality follows since \(H_{r}\) does not change the bosonic number and \(H_{w}\) only changes the number by \(\pm 1\). Putting these altogether,
\[\left\|\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]}\right\| =\left\|\overline{\Pi}_{[0,\Lambda^{\prime}]}\cdot\mathcal{T}\exp\left(-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau H_{r}}H_{w}e^{-i\tau H_{r}}\right)\cdot\Pi_{[0,\Lambda]}\right\|\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{\Delta t^{k}}{k!}\|H_{w}\Pi_{[0,\Lambda+k-1]}\|\cdots\|H_{w}\Pi_{[0,\Lambda+1]}\|\|H_{w}\Pi_{[0,\Lambda]}\|\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{(\chi\Delta t)^{k}}{k!}\sqrt{\Lambda+k}\cdots\sqrt{\Lambda+2}\sqrt{\Lambda+1}\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{(\chi\Delta t)^{k}}{k!}\sqrt{(\Lambda+k)^{k}}\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{(e\chi\Delta t)^{k}}{ek^{k}}2^{\frac{k}{2}-1}\left(\Lambda^{\frac{k}{2}}+k^{\frac{k}{2}}\right)=\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\left(\frac{\sqrt{2}e\chi\Delta t}{k}\right)^{k}\frac{\Lambda^{\frac{k}{2}}+k^{\frac{k}{2}}}{2e}, \tag{129}\]
where we have used the inequality between the 1- and \(k/2\)-norm
\[\Lambda+k\leq 2^{1-\frac{2}{k}}\left(\Lambda^{\frac{k}{2}}+k^{\frac{k}{2}}\right)^{\frac{2}{k}} \tag{130}\]
and Stirling's approximation
\[e\left(\frac{k}{e}\right)^{k}\leq k!. \tag{131}\]
To proceed, we assume that the short evolution time satisfies
\[0\leq\Delta t\leq\frac{1}{\chi\sqrt{\Lambda}}\leq\frac{1}{\chi}. \tag{132}\]
Then, the error bound simplifies to
\[\left\|\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]}\right\|\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\left(\frac{\sqrt{2}e\chi\Delta t}{k}\right)^{k}\frac{\Lambda^{\frac{k}{2}}+k^{\frac{k}{2}}}{2e}\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{1}{2e}\left(\frac{\sqrt{2}e}{k}\right)^{k}+\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{1}{2e}\left(\frac{\sqrt{2}e}{\sqrt{k}}\right)^{k}\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{1}{e}\left(\frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}-\Lambda}}\right)^{k}. \tag{133}\]
Assuming that
\[\sqrt{\Lambda^{\prime}-\Lambda}\geq 2\sqrt{2}e, \tag{134}\]
we have
\[\left\|\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[ 0,\Lambda]}\right\|\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{1}{e}\left( \frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}-\Lambda}}\right)^{k}\] \[=\frac{1}{e}\left(\frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}-\Lambda }}\right)^{\Lambda^{\prime}-\Lambda}\sum_{k=0}^{\infty}\left(\frac{\sqrt{2}e} {\sqrt{\Lambda^{\prime}-\Lambda}}\right)^{k}\] \[\leq\frac{1}{e}\left(\frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}- \Lambda}}\right)^{\Lambda^{\prime}-\Lambda}\sum_{k=0}^{\infty}\frac{1}{2^{k}}\] \[\leq\left(\frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}-\Lambda}} \right)^{\Lambda^{\prime}-\Lambda}. \tag{135}\]
We summarize this bound as follows.
**Lemma 1** (Short-time state truncation).: _Given bosonic Hamiltonian \(H=H_{w}+H_{r}\) satisfying (118) with parameter \(\chi>0\), we have_
\[\left\|\overline{\Pi}_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]} \right\|\leq\left(\frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}-\Lambda}}\right)^{ \Lambda^{\prime}-\Lambda} \tag{136}\]
_for any \(0\leq\Delta t\leq 1/\chi\sqrt{\Lambda}\) and integers \(0\leq\Lambda<\Lambda^{\prime}\) such that \(\Lambda^{\prime}-\Lambda\geq 8e^{2}\)._
To extend this analysis to a long time evolution, we divide the evolution into short steps with durations \(\Delta t_{1}+\cdots+\Delta t_{s}=t\) and corresponding bosonic numbers \(\Lambda_{0}<\Lambda_{1}<\cdots<\Lambda_{s}\). We let the bosonic number increase linearly, i.e.,
\[\Delta\Lambda_{j}\equiv\Delta\Lambda\ \Rightarrow\ \Lambda_{j}=\Lambda_{0}+j\Delta\Lambda \tag{137}\]
and choose the duration upper bounds
\[\Delta\tau_{j}=\frac{1}{\chi\sqrt{\Lambda_{j-1}}}\ \Rightarrow\ \tau_{s}=\sum_{j=1}^{s}\frac{1}{\chi\sqrt{\Lambda_{0}+(j-1)\Delta\Lambda}}. \tag{138}\]
The total time during which this analysis applies is then at least
\[\tau_{s} =\sum_{j=1}^{s}\frac{1}{\chi\sqrt{\Lambda_{0}+(j-1)\Delta\Lambda}}\] \[\geq\int_{0}^{s}\mathrm{d}x\ \frac{1}{\chi\sqrt{\Lambda_{0}+x\Delta \Lambda}} \tag{139}\] \[=\frac{2}{\chi\Delta\Lambda}\left(\sqrt{\Lambda_{0}+s\Delta \Lambda}-\sqrt{\Lambda_{0}}\right).\]
Note that \(\lim_{s\to\infty}\tau_{s}=\infty\), so we can choose the first integer \(s\) such that \(\tau_{s}\geq t\). Explicitly,
\[s=\left\lceil\frac{1}{\Delta\Lambda}\left(\left(\sqrt{\Lambda_{0}}+\frac{\chi t \Delta\Lambda}{2}\right)^{2}-\Lambda_{0}\right)\right\rceil. \tag{140}\]
With these time upper bounds, we simply let
\[\Delta t_{1} =\Delta\tau_{1},\] \[\Delta t_{2} =\Delta\tau_{2},\] \[\vdots\] \[\Delta t_{s} =t-(\Delta t_{1}+\cdots+\Delta t_{s-1})\leq\Delta\tau_{s}.\]
Applying (125) and the short-time bound, we have
\[\left\|e^{-itH}\Pi_{[0,\Lambda_{0}]}-\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\right\|\] \[\leq\sum_{j=0}^{s-1}\left\|\overline{\Pi}_{[0,\Lambda_{j+1}]}e^{-i\Delta t_{j+1}H}\Pi_{[0,\Lambda_{j}]}\right\|\leq s\left(\frac{\sqrt{2}e}{\sqrt{\Delta\Lambda}}\right)^{\Delta\Lambda}. \tag{141}\]
For asymptotic analysis, we have \(s=\mathcal{O}\left(\sqrt{\Lambda_{0}}\chi^{2}t^{2}\Delta\Lambda\right)\), which gives
\[\left\|e^{-itH}\Pi_{[0,\Lambda_{0}]}-\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\right\|=\mathcal{O}\left(\sqrt{\Lambda_{0}}\chi^{2}t^{2}\Delta\Lambda\left(\frac{\sqrt{2}e}{\sqrt{\Delta\Lambda}}\right)^{\Delta\Lambda}\right). \tag{142}\]
We want to choose \(\Delta\Lambda\) so that the leakage error is at most \(\varepsilon\). The scaling of \(\Delta\Lambda\) can be understood via the Lambert-W function (see Ref. [73] as well as Lemma 2 below) as
\[\Delta\Lambda=\mathcal{O}\left(1+\log\left(\frac{\Lambda_{0}\chi t}{\varepsilon }\right)\right). \tag{143}\]
This then gives
\[\sqrt{\Lambda_{s}} =\sqrt{\Lambda_{0}+s\Delta\Lambda}=\sqrt{\left(\sqrt{\Lambda_{0}} +\frac{\chi t\Delta\Lambda}{2}\right)^{2}+\mathcal{O}\left(\Delta\Lambda\right)}\] \[=\sqrt{\Lambda_{0}}+\mathcal{O}\left(\chi t\log\left(\frac{ \Lambda_{0}\chi t}{\varepsilon}\right)\right). \tag{144}\]
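To get a feel for the numbers involved, the following Python sketch (with arbitrary illustrative parameters) evaluates the step count (140) and the accumulated leakage bound (141) for several increments \(\Delta\Lambda\); recall that Lemma 1 requires \(\Delta\Lambda\geq 8e^{2}\approx 59\).

```python
import numpy as np

# Step count (140) and long-time leakage bound (141) for the linear
# schedule (137); the parameters below are arbitrary illustrations.
def leakage_bound(Lam0, chi, t, dLam):
    s = int(np.ceil(((np.sqrt(Lam0) + chi * t * dLam / 2) ** 2 - Lam0) / dLam))
    bound = s * (np.sqrt(2) * np.e / np.sqrt(dLam)) ** dLam
    return s, Lam0 + s * dLam, bound

for dLam in (60, 80, 100, 120):       # dLam >= 8e^2 ~ 59, as Lemma 1 requires
    s, Lam_s, bound = leakage_bound(Lam0=1, chi=2.0, t=10.0, dLam=dLam)
    print(f"dLam={dLam:4d}  steps={s:6d}  final cutoff={Lam_s:7d}  bound={bound:.2e}")
```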
**Lemma 2** (Bounding bosonic number by the Lambert-W function).: _For constant \(b>0\) and sufficiently large \(a>0\), the function_
\[f(y):=a\left(\frac{b}{\sqrt{y}}\right)^{y} \tag{145}\]
_is monotonically decreasing for \(y\geq b^{2}\). Furthermore, for sufficiently small \(0<\varepsilon=\mathcal{O}(1)\), \(f(y)=\varepsilon\) has a unique solution which scales like_
\[y=f^{-1}(\varepsilon)=\mathcal{O}\left(1+\frac{\log(a/\varepsilon)}{\log\log( a/\varepsilon)}\right). \tag{146}\]
Proof.: The monotonicity of \(f\) follows from the fact that \(\sqrt{y}^{y}>b^{y}\) for all \(y\geq b^{2}\). Since \(f(b^{2})=\Theta(a)\) and \(f(\infty)=0\), this implies the existence and uniqueness of the solution \(y=f^{-1}(\varepsilon)\). Then,
\[a\left(\frac{b}{\sqrt{y}}\right)^{y}=\varepsilon \Rightarrow \left(\frac{\sqrt{y}}{b}\right)^{y}=\frac{a}{\varepsilon} \tag{147}\] \[\Rightarrow y\log\left(\frac{\sqrt{y}}{b}\right)=\log\left(\frac{a}{ \varepsilon}\right).\]
Letting \(x=\log\bigl{(}\sqrt{y}/b\bigr{)}\), we have \(y=b^{2}e^{2x}\), which implies
\[b^{2}xe^{2x}=\log\left(\frac{a}{\varepsilon}\right) \Rightarrow 2xe^{2x}=\frac{2}{b^{2}}\log\left(\frac{a}{\varepsilon}\right). \tag{148}\]
The claimed scaling now follows by solving the last equation using the Lambert-W function.
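The reduction in this proof can be turned into a direct numerical solver; the following Python sketch uses `scipy.special.lambertw` to solve \(f(y)=\varepsilon\) and verifies the solution (the values of \(a\), \(b\), and \(\varepsilon\) are arbitrary examples; \(b=\sqrt{2}e\) matches the base appearing in (142)).

```python
import numpy as np
from scipy.special import lambertw

# Solve a * (b / sqrt(y))^y = eps for y via the Lambert-W reduction in the
# proof of Lemma 2.
def solve_cutoff(a, b, eps):
    c = (2.0 / b**2) * np.log(a / eps)   # the equation 2x e^{2x} = c
    x = lambertw(c).real / 2.0           # principal branch suffices for c > 0
    return b**2 * np.exp(2.0 * x)        # y = b^2 e^{2x}

a, b, eps = 1e6, np.sqrt(2.0) * np.e, 1e-10
y = solve_cutoff(a, b, eps)
print(y, a * (b / np.sqrt(y)) ** y)      # the second number reproduces eps
```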
**Corollary 3** (Long-time state truncation).: _Given bosonic Hamiltonian \(H=H_{w}+H_{r}\) satisfying (118) with parameter \(\chi>0\), for any \(t>0\), \(\varepsilon>0\) and integer \(\Lambda_{0}>0\), there exist \(s\) time durations \(\Delta t_{1}+\cdots+\Delta t_{s}=t\) and corresponding bosonic numbers \(\Lambda_{0}<\Lambda_{1}<\cdots<\Lambda_{s}\), such that_
\[\left\|e^{-itH}\Pi_{[0,\Lambda_{0}]}-\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\right\|\leq\varepsilon. \tag{149}\]
### Hamiltonian truncation
In the previous subsection, we showed that when the initial state is restricted to a low-particle subspace, sufficiently large cutoff values \(\Lambda_{0}<\Lambda_{1}<\dots<\Lambda_{s}\) can be chosen so that the following approximation holds to arbitrarily high accuracy
\[e^{-itH}\Pi_{[0,\Lambda_{0}]} \approx\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\dots\] \[\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}. \tag{151}\]
We now leverage this result to truncate the Hamiltonian at some cutoff \(\widetilde{\Lambda}\) so that
\[e^{-itH}\Pi_{[0,\Lambda_{0}]}\approx e^{-it\Pi_{[0,\widetilde{\Lambda}]}H\Pi_{[0,\widetilde{\Lambda}]}}\Pi_{[0,\Lambda_{0}]}. \tag{152}\]
After that, we can use a quantum algorithm to simulate \(\Pi_{[0,\widetilde{\Lambda}]}H\Pi_{[0,\widetilde{\Lambda}]}\) and the outcome is provably accurate as long as the initial state has particle number at most \(\Lambda_{0}\). For notational convenience, we define the truncated Hamiltonians
\[\widetilde{H} :=\Pi_{[0,\widetilde{\Lambda}]}H\Pi_{[0,\widetilde{\Lambda}]},\] \[\widetilde{H}_{w} :=\Pi_{[0,\widetilde{\Lambda}]}H_{w}\Pi_{[0,\widetilde{\Lambda}]},\] \[\widetilde{H}_{r} :=\Pi_{[0,\widetilde{\Lambda}]}H_{r}\Pi_{[0,\widetilde{\Lambda}]}. \tag{153}\]
Our analysis proceeds in a similar way as in Subsection IV.2. Specifically, we first consider the Hamiltonian truncation error for a short-time evolution, i.e.,
\[\Pi_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]}\approx\Pi_{[0, \Lambda^{\prime}]}e^{-i\Delta t\Pi_{[0,\widetilde{\Lambda}]}H\Pi_{[0, \widetilde{\Lambda}]}}\Pi_{[0,\Lambda]}. \tag{154}\]
We switch to the interaction picture for both sides of the above equation:
\[\Pi_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]}\] \[=\Pi_{[0,\Lambda^{\prime}]}e^{-i\Delta tH_{r}}\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau H_{r}}H_{w}e^{-i\tau H_{r}}}\right\}\Pi_{[0,\Lambda]}\] \[=e^{-i\Delta t\sum_{\lambda=0}^{\Lambda^{\prime}}\Pi_{\lambda}H\Pi_{\lambda}}\Pi_{[0,\Lambda^{\prime}]}\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau H_{r}}H_{w}e^{-i\tau H_{r}}}\right\}\Pi_{[0,\Lambda]},\] \[\Pi_{[0,\Lambda^{\prime}]}e^{-i\Delta t\widetilde{H}}\Pi_{[0,\Lambda]}\] \[=\Pi_{[0,\Lambda^{\prime}]}e^{-i\Delta t\widetilde{H}_{r}}\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau\widetilde{H}_{r}}\widetilde{H}_{w}e^{-i\tau\widetilde{H}_{r}}}\right\}\Pi_{[0,\Lambda]}\] \[=e^{-i\Delta t\sum_{\lambda=0}^{\Lambda^{\prime}}\Pi_{\lambda}H\Pi_{\lambda}}\Pi_{[0,\Lambda^{\prime}]}\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau\widetilde{H}_{r}}\widetilde{H}_{w}e^{-i\tau\widetilde{H}_{r}}}\right\}\Pi_{[0,\Lambda]},\]
where the last equality holds if we assume a sufficiently large cutoff
\[\widetilde{\Lambda}\geq\Lambda^{\prime}. \tag{155}\]
We then expand the time-dependent Hamiltonian evolutions using Dyson series
\[\Pi_{[0,\Lambda^{\prime}]}\mathcal{T}\exp\left(-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau H_{r}}H_{w}e^{-i\tau H_{r}}\right)\Pi_{[0,\Lambda]}\] \[\quad=\sum_{k=0}^{\infty}(-i)^{k}\int_{0}^{\Delta t}\mathrm{d}\tau_{k}\cdots\int_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\int_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\Pi_{[0,\Lambda^{\prime}]}e^{i\tau_{k}H_{r}}H_{w}e^{-i\tau_{k}H_{r}}\cdots e^{i\tau_{2}H_{r}}H_{w}e^{-i\tau_{2}H_{r}}e^{i\tau_{1}H_{r}}H_{w}e^{-i\tau_{1}H_{r}}\Pi_{[0,\Lambda]}, \tag{156}\] \[\Pi_{[0,\Lambda^{\prime}]}\mathcal{T}\exp\left(-i\int_{0}^{\Delta t}\mathrm{d}\tau\ e^{i\tau\widetilde{H}_{r}}\widetilde{H}_{w}e^{-i\tau\widetilde{H}_{r}}\right)\Pi_{[0,\Lambda]}\] \[\quad=\sum_{k=0}^{\infty}(-i)^{k}\int_{0}^{\Delta t}\mathrm{d}\tau_{k}\cdots\int_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\int_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\,\Pi_{[0,\Lambda^{\prime}]}e^{i\tau_{k}\widetilde{H}_{r}}\widetilde{H}_{w}e^{-i\tau_{k}\widetilde{H}_{r}}\cdots e^{i\tau_{2}\widetilde{H}_{r}}\widetilde{H}_{w}e^{-i\tau_{2}\widetilde{H}_{r}}e^{i\tau_{1}\widetilde{H}_{r}}\widetilde{H}_{w}e^{-i\tau_{1}\widetilde{H}_{r}}\Pi_{[0,\Lambda]}. \tag{157}\]
Comparing the two expansions, we see that the terms agree at order \(k=0,1,\dots,\Lambda^{\prime}-\Lambda-1\); hence the error only comes from higher order terms in both summations
\[\left\|\Pi_{[0,\Lambda^{\prime}]}e^{-i\Delta tH}\Pi_{[0,\Lambda]}-\Pi_{[0, \Lambda^{\prime}]}e^{-i\Delta t\Pi_{[0,\widetilde{\Lambda}]}H\Pi_{[0, \widetilde{\Lambda}]}}\Pi_{[0,\Lambda]}\right\|\leq 2\sum_{k=\Lambda^{\prime}-\Lambda}^{ \infty}\frac{\Delta t^{k}}{k!}\|H_{w}\Pi_{[0,\Lambda+k-1]}\|\cdots\|H_{w}\Pi_{[0,\Lambda+1]}\|\|H_{w}\Pi_{[0,\Lambda]}\|. \tag{158}\]
Proceeding as in Subsection IV.2, we obtain:
**Lemma 4** (Short-time Hamiltonian truncation).: _Given bosonic Hamiltonian \(H=H_{w}+H_{r}\) satisfying (118) with parameter \(\chi>0\), we have_
\[\left\|\Pi_{[0,\Lambda^{\prime}]}\left(e^{-i\Delta tH}-e^{-i\Delta t\Pi_{[0,\widetilde{\Lambda}]}H\Pi_{[0,\widetilde{\Lambda}]}}\right)\Pi_{[0,\Lambda]}\right\|\leq 2\left(\frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}-\Lambda}}\right)^{\Lambda^{\prime}-\Lambda} \tag{159}\]
_for any \(0\leq\Delta t\leq 1/\chi\sqrt{\Lambda}\) and integers \(0\leq\Lambda<\Lambda^{\prime}\leq\widetilde{\Lambda}\) such that \(\Lambda^{\prime}-\Lambda\geq 8e^{2}\)._
This bound may be generalized using the triangle inequality to truncate a long-time Hamiltonian evolution as follows:
**Corollary 5** (Long-time Hamiltonian truncation).: _Given bosonic Hamiltonian \(H=H_{w}+H_{r}\) satisfying (118) with parameter \(\chi>0\), for any \(t>0\), \(\epsilon>0\) and integer \(\Lambda_{0}>0\), there exist \(s\) time durations \(\Delta t_{1}+\cdots+\Delta t_{s}=t\) and corresponding bosonic numbers \(\Lambda_{0}<\Lambda_{1}<\cdots<\Lambda_{s}\leq\widetilde{\Lambda}\), such that_
\[\left\|\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}H}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\right.\] \[\left.\qquad\qquad\qquad-\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}\widetilde{H}}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}\widetilde{H}}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}\widetilde{H}}\Pi_{[0,\Lambda_{0}]}\right\|\leq\epsilon. \tag{160}\]
_The final cutoff \(\Lambda_{s}\) has the asymptotic scaling_
\[\sqrt{\Lambda_{s}} =\sqrt{\Lambda_{0}+s\Delta\Lambda}\] \[=\sqrt{\Lambda_{0}}+\mathcal{O}\left(\chi t\log\left(\frac{ \Lambda_{0}\chi t}{\epsilon}\right)\right). \tag{161}\]
It remains to revert the state truncation:
\[\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}\widetilde{H}}\Pi_{[0,\Lambda_{s-1}]}\cdots\] \[\Pi_{[0,\Lambda_{2}]}e^{-i\Delta t_{2}\widetilde{H}}\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}\widetilde{H}}\Pi_{[0,\Lambda_{0}]}\approx e^{-it\widetilde{H}}\Pi_{[0,\Lambda_{0}]},\]
which proceeds similarly as in Subsection IV.2. Setting the accuracy parameter to be \(\epsilon/3\) for each of the following three approximations,
\[e^{-itH}\Pi_{[0,\Lambda_{0}]} \stackrel{{\epsilon/3}}{{\approx}}\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}H}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}H}\Pi_{[0,\Lambda_{0}]}\] \[\stackrel{{\epsilon/3}}{{\approx}}\Pi_{[0,\Lambda_{s}]}e^{-i\Delta t_{s}\widetilde{H}}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{1}]}e^{-i\Delta t_{1}\widetilde{H}}\Pi_{[0,\Lambda_{0}]}\] \[\stackrel{{\epsilon/3}}{{\approx}}e^{-it\widetilde{H}}\Pi_{[0,\Lambda_{0}]}, \tag{162}\]
we finally obtain:
**Theorem 6** (Bosonic Hamiltonian truncation).: _Given bosonic Hamiltonian \(H=H_{w}+H_{r}\) satisfying (118) with parameter \(\chi>0\), for any \(t>0\), \(\epsilon>0\) and integer \(\Lambda_{0}>0\), there exists an integer \(\widetilde{\Lambda}>0\) such that_
\[\left\|e^{-itH}\Pi_{[0,\Lambda_{0}]}-e^{-it\Pi_{[0,\widetilde{\Lambda}]}H\Pi_{[0,\widetilde{\Lambda}]}}\Pi_{[0,\Lambda_{0}]}\right\|\leq\epsilon. \tag{163}\]
_The cutoff \(\widetilde{\Lambda}\) has the asymptotic scaling_
\[\sqrt{\widetilde{\Lambda}}=\sqrt{\Lambda_{0}}+\mathcal{O}\left(\chi t\log \left(\frac{\Lambda_{0}\chi t}{\epsilon}\right)\right). \tag{164}\]
### Multiple bosonic modes
In our above analysis, we have fixed a specific bosonic mode and bounded the growth of occupation number under the time evolution. We now discuss how this result can be generalized to Hamiltonians with \(N\) bosonic modes.
Specifically, we use \(\Pi_{S}^{(j)}\) to denote the projector onto states of the \(j\)th mode with occupation number from set \(S\). These projectors commute with each other for different bosonic modes. Then, the initial state subspace and the Hamiltonian truncation are respectively determined by the projectors
\[\Pi_{[0,\Lambda_{0}]}^{(all)}:=\prod_{j=1}^{N}\Pi_{[0,\Lambda_{0}]}^{(j)},\ \ \Pi_{[0,\widetilde{\Lambda}]}^{(all)}:=\prod_{j=1}^{N}\Pi_{[0,\widetilde{\Lambda}]}^ {(j)}. \tag{165}\]
Our goal is to analyze the scaling of the maximum cutoff \(\widetilde{\Lambda}\)
for all bosonic modes such that
\[e^{-itH}\Pi^{(all)}_{[0,\Lambda_{0}]}\approx e^{-it\Pi^{(all)}_{[0,\widetilde{\Lambda}]}H\Pi^{(all)}_{[0,\widetilde{\Lambda}]}}\Pi^{(all)}_{[0,\Lambda_{0}]}. \tag{166}\]
This can be achieved by setting the accuracy to be \(\epsilon/N\) for each of the following approximations
\[e^{-itH}\Pi^{(all)}_{[0,\Lambda_{0}]}\stackrel{{\epsilon/N}}{{\approx}}e^{-it\Pi^{(1)}_{[0,\widetilde{\Lambda}]}H\Pi^{(1)}_{[0,\widetilde{\Lambda}]}}\Pi^{(all)}_{[0,\Lambda_{0}]}\] \[\stackrel{{\epsilon/N}}{{\approx}}e^{-it\Pi^{(2)}_{[0,\widetilde{\Lambda}]}\Pi^{(1)}_{[0,\widetilde{\Lambda}]}H\Pi^{(1)}_{[0,\widetilde{\Lambda}]}\Pi^{(2)}_{[0,\widetilde{\Lambda}]}}\Pi^{(all)}_{[0,\Lambda_{0}]}\] \[\stackrel{{\epsilon/N}}{{\approx}}\cdots\] \[\stackrel{{\epsilon/N}}{{\approx}}e^{-it\Pi^{(all)}_{[0,\widetilde{\Lambda}]}H\Pi^{(all)}_{[0,\widetilde{\Lambda}]}}\Pi^{(all)}_{[0,\Lambda_{0}]}. \tag{167}\]
Note that any intermediate truncated Hamiltonian
\[\prod_{u=1}^{j}\Pi^{(u)}_{[0,\bar{\Lambda}]}H\prod_{u=1}^{j}\Pi^{(u)}_{[0, \bar{\Lambda}]} \tag{168}\]
is a bosonic model that satisfies conditions (118) with the same parameter \(\chi\) as in the original Hamiltonian. It thus follows from Theorem 6 that the maximum cutoff has the scaling
\[\sqrt{\widetilde{\Lambda}}=\sqrt{\Lambda_{0}}+\mathcal{O}\left(\chi t\log \left(\frac{N\Lambda_{0}\chi t}{\epsilon}\right)\right). \tag{169}\]
**Theorem 7** (\(N\)-mode bosonic Hamiltonian truncation).: _Given an \(N\)-mode bosonic Hamiltonian \(H\), suppose that \(H=H_{w}+H_{r}\) satisfies (118) with parameter \(\chi>0\) for all the bosonic modes. For any \(t>0\), \(\epsilon>0\) and integer \(\Lambda_{0}>0\), there exists an integer \(\widetilde{\Lambda}>0\) such that_
\[\left\|e^{-itH}\Pi^{(all)}_{[0,\Lambda_{0}]}-e^{-it\Pi^{(all)}_{[0,\widetilde{\Lambda}]}H\Pi^{(all)}_{[0,\widetilde{\Lambda}]}}\Pi^{(all)}_{[0,\Lambda_{0}]}\right\|\leq\epsilon. \tag{170}\]
_The cutoff \(\widetilde{\Lambda}\) has the asymptotic scaling_
\[\sqrt{\widetilde{\Lambda}}=\sqrt{\Lambda_{0}}+\mathcal{O}\left(\chi t\log \left(\frac{N\Lambda_{0}\chi t}{\epsilon}\right)\right). \tag{171}\]
In Figure 8 we plot the truncation threshold \(\widetilde{\Lambda}(t)\) required to ensure the time propagation error due to the Hamiltonian truncation is below a predefined error \(\epsilon\) for the Holstein model (57), which is a special case of the Hubbard-Holstein model without on-site interaction. We assume the initial state is a tensor product between the fermionic ground state and a quantum state of the bosonic modes that has at most \(\Lambda_{0}=1\) particle in each mode. We compare the scaling of \(\widetilde{\Lambda}(t)\) in this work with the one obtained in previous work [148], observing a lower truncation threshold when the system size becomes larger or when the precision requirement is higher.
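A rough threshold curve of this kind can be generated directly from the scaling (169); in the following Python sketch the suppressed constant in the \(\mathcal{O}\)-notation is set to 1 and \(\chi=2g\omega=2\) as in Figure 8, so the resulting numbers indicate trends rather than rigorous thresholds.

```python
import numpy as np

# Illustrative evaluation of the cutoff scaling (169); the O(.)-constant is
# set to 1 here, so these values show trends, not certified thresholds.
def cutoff(t, N, Lam0=1.0, chi=2.0, eps=1e-3):
    return (np.sqrt(Lam0) + chi * t * np.log(N * Lam0 * chi * t / eps)) ** 2

for N in (10, 100, 1000):
    print(N, [round(cutoff(t, N)) for t in (0.5, 1.0, 2.0, 4.0)])
```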
### Time-dependent Hamiltonians
We now extend our error analysis to dynamics generated by time-dependent boson-fermion Hamiltonians \(H(\tau)\). The resulting evolution
\[\frac{d}{d\tau}U(\tau)=-iH(\tau)U(\tau)\quad(0\leq\tau\leq t),\;\;U(0)=I \tag{172}\]
is described by a time-ordered exponential, which can be further expressed as a Dyson series
\[U(t) =\mathcal{T}\left\{e^{-i\int_{0}^{t}d\tau\;H(\tau)}\right\}\] \[=\sum_{k=0}^{\infty}(-i)^{k}\int_{0}^{t}d\tau_{k}\cdots\int_{0}^{ \tau_{3}}d\tau_{2}\int_{0}^{\tau_{2}}d\tau_{1}\;H(\tau_{k})\cdots H(\tau_{2})H (\tau_{1}). \tag{173}\]
We first fix a specific bosonic mode and let the initial state have at most \(\Lambda_{0}\) particles. Like in the above discussion, we assume that the Hamiltonian can be decomposed as
\[H(\tau)=H_{w}(\tau)+H_{r}(\tau), \tag{174}\]
where
\[\Pi_{\lambda}H_{w}(\tau)\Pi_{\lambda^{\prime}} =0\;\;(\text{if }|\lambda-\lambda^{\prime}|>1),\] \[\left\|H_{w}(\tau)\Pi_{[0,\Lambda]}\right\| \leq\chi(\tau)(\Lambda+1)^{r},\] \[\left[H_{r}(\tau),\Pi_{\lambda}\right] =0, \tag{175}\]
for some \(\chi(\tau)>0\) and \(0\leq r<1\). These conditions are similar to the ones for time-independent Hamiltonians (118), except we require them to hold for all instantaneous times \(\tau\).
Our analysis for the time-dependent case will be similar to that for the time-independent case, so we will only highlight the key steps. For the state truncation, the error still adds up at most linearly, except we need to modify the error bound to include the time dependence
\[\left\|\mathcal{T}\left\{e^{-i\int_{0}^{t}\mathrm{d}\tau\ H(\tau)}\right\}\Pi_{[0,\Lambda_{0}]}-\Pi_{[0,\Lambda_{s}]}\mathcal{T}\left\{e^{-i\int_{t_{s-1}}^{t_{s}}\mathrm{d}\tau\ H(\tau)}\right\}\Pi_{[0,\Lambda_{s-1}]}\cdots\Pi_{[0,\Lambda_{2}]}\mathcal{T}\left\{e^{-i\int_{t_{1}}^{t_{2}}\mathrm{d}\tau\ H(\tau)}\right\}\Pi_{[0,\Lambda_{1}]}\mathcal{T}\left\{e^{-i\int_{t_{0}}^{t_{1}}\mathrm{d}\tau\ H(\tau)}\right\}\Pi_{[0,\Lambda_{0}]}\right\|\] \[\leq\sum_{j=0}^{s-1}\left\|\overline{\Pi}_{[0,\Lambda_{j+1}]}\mathcal{T}\left\{e^{-i\int_{t_{j}}^{t_{j+1}}\mathrm{d}\tau\ H(\tau)}\right\}\Pi_{[0,\Lambda_{j}]}\right\|, \tag{176}\]
where
\[0=t_{0}\leq t_{1}\leq t_{2}\leq\ldots\leq t_{s-1}\leq t_{s}=t. \tag{177}\]
For each short time leakage, we may without loss of generality shift the time interval to \(0\leq\tau\leq\Delta t\). Then we have the interaction picture evolution[30] (Lemma A.2)
\[\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ H(\tau)}\right\}=\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ H_{r}(\tau)}\right\}\cdot\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}\mathrm{d}\tau\ \mathcal{T}\left\{e^{-i\int_{0}^{\tau}\mathrm{d}\sigma\ H_{r}(\sigma)}\right\}^{\dagger}H_{w}(\tau)\,\mathcal{T}\left\{e^{-i\int_{0}^{\tau}\mathrm{d}\sigma\ H_{r}(\sigma)}\right\}}\right\}. \tag{178}\]
Note that the evolution generated by \(H_{r}(\tau)\) necessarily preserves the bosonic number
\[\left[\mathcal{T}\exp\left(-i\int_{0}^{\Delta t}d\tau\ H_{r}( \tau)\right),\Pi_{\lambda}\right]=0, \tag{179}\]
which follows by applying the condition \([H_{r}(\tau),\Pi_{\lambda}]=0\) in the Dyson series. Thus, we have
\[\left\|\overline{\Pi}_{[0,\Lambda^{\prime}]}\mathcal{T}\exp\left(-i\int_{0}^{\Delta t}d\tau\ H(\tau)\right)\Pi_{[0,\Lambda]}\right\|\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\int_{0}^{\Delta t}\mathrm{d}\tau_{k}\cdots\int_{0}^{\tau_{3}}\mathrm{d}\tau_{2}\int_{0}^{\tau_{2}}\mathrm{d}\tau_{1}\|H_{w}(\tau_{k})\Pi_{[0,\Lambda+k-1]}\|\cdots\|H_{w}(\tau_{2})\Pi_{[0,\Lambda+1]}\|\|H_{w}(\tau_{1})\Pi_{[0,\Lambda]}\|\] \[\leq\sum_{k=\Lambda^{\prime}-\Lambda}^{\infty}\frac{\left(\int_{0}^{\Delta t}d\tau\,\chi(\tau)\right)^{k}}{k!}\sqrt{\Lambda+k}\cdots\sqrt{\Lambda+2}\sqrt{\Lambda+1}. \tag{180}\]
Figure 8: The truncation threshold \(\tilde{\Lambda}(t)\) required to keep the error below different \(\epsilon\) (left; \(N\) is fixed to 100) for the time propagator of boson-fermion Hamiltonians of different sizes (right; \(\epsilon\) is fixed to 0.001). The horizontal dotted lines show the energy-based truncation threshold, i.e., \(\tilde{\Lambda}=\mathcal{O}\left(N(N+\omega\epsilon)/\omega^{2}\epsilon^{2}\right)\) with \(E\sim\mathcal{O}(N)\) (see Appendix K.2 in Ref. [148] for a detailed derivation). The dashed curves are the truncation thresholds using the upper bound reported in Ref. [148]. The \(\tilde{\Lambda}(t)\) of this work (solid curves) are obtained by employing (169) for the Holstein model (57) with \(\Lambda_{0}=1\) and \(g\omega=1\).
We now proceed as in the time-independent case to obtain the following leakage bound
\[\left\|\overline{\Pi}_{[0,\Lambda^{\prime}]}\mathcal{T}\left\{e^{-i\int_{0}^{\Delta t}d\tau\ H(\tau)}\right\}\Pi_{[0,\Lambda]}\right\|\leq\left(\frac{\sqrt{2}e}{\sqrt{\Lambda^{\prime}-\Lambda}}\right)^{\Lambda^{\prime}-\Lambda} \tag{181}\]
for any \(0\leq\int_{0}^{\Delta t}d\tau\,\,\chi(\tau)\leq 1/\sqrt{\Lambda}\) and integers \(0\leq\Lambda<\Lambda^{\prime}\) such that \(\Lambda^{\prime}-\Lambda\geq 8e^{2}\).
To extend this analysis to a long time evolution, we divide the evolution into short steps with durations \(\Delta t_{1}+\dots+\Delta t_{s}=t\) and corresponding bosonic numbers \(\Lambda_{0}<\Lambda_{1}<\dots<\Lambda_{s}\). We let the bosonic number increase linearly, i.e.,
\[\Delta\Lambda_{j}\equiv\Delta\Lambda\ \ \Rightarrow\ \ \Lambda_{j}=\Lambda_{0}+j\Delta\Lambda \tag{182}\]
and choose the duration upper bounds
\[\int_{\tau_{j-1}}^{\tau_{j}}d\tau\,\,\chi(\tau)=\frac{1}{\sqrt{\Lambda_{j-1}}}, \tag{183}\]
which implies
\[\int_{0}^{\tau_{s}}d\tau\,\,\chi(\tau) =\sum_{j=1}^{s}\frac{1}{\sqrt{\Lambda_{0}+(j-1)\Delta\Lambda}}\] \[\geq\frac{2}{\Delta\Lambda}\left(\sqrt{\Lambda_{0}+s\Delta\Lambda}-\sqrt{\Lambda_{0}}\right). \tag{184}\]
Note that \(\lim_{s\to\infty}\int_{0}^{\tau_{s}}d\tau\,\,\chi(\tau)=\infty\), so we can choose the first integer \(s\) such that \(\int_{0}^{\tau_{s}}d\tau\,\,\chi(\tau)\geq\int_{0}^{t}d\tau\,\,\chi(\tau)\). Explicitly,
\[s=\left\lceil\frac{1}{\Delta\Lambda}\left(\left(\sqrt{\Lambda_{0}}+\frac{\Delta\Lambda}{2}\int_{0}^{t}d\tau\,\,\chi(\tau)\right)^{2}-\Lambda_{0}\right)\right\rceil. \tag{185}\]
With these time upper bounds, we simply let
\[\Delta t_{1} =\Delta\tau_{1},\] \[\Delta t_{2} =\Delta\tau_{2},\,\dots,\] \[\Delta t_{s} =t-(\Delta t_{1}+\dots+\Delta t_{s-1})\leq\Delta\tau_{s}. \tag{186}\]
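The segmentation above is easy to tabulate classically. The sketch below computes the number of segments from (185) and the per-segment integrated \(\chi\) from (183); the parameter values are illustrative assumptions.

```python
import numpy as np

def segment_schedule(chi_total, lam0, dlam):
    """chi_total = int_0^t chi(tau) dtau. Returns the number of segments s
    from Eq. (185), the linearly growing cutoffs of Eq. (182), and the
    integrated chi allotted to each segment by Eq. (183)."""
    s = int(np.ceil(((np.sqrt(lam0) + 0.5 * dlam * chi_total) ** 2 - lam0) / dlam))
    lams = lam0 + dlam * np.arange(s + 1)
    per_segment = 1.0 / np.sqrt(lams[:-1])
    return s, lams, per_segment

s, lams, seg = segment_schedule(chi_total=10.0, lam0=1, dlam=4)
print(s, lams[-1], seg.sum())  # seg.sum() >= chi_total, as required by Eq. (184)
```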
The remaining analysis proceeds in a similar way as in the time-independent case. This yields a cutoff value
\[\sqrt{\widetilde{\Lambda}}=\sqrt{\Lambda_{0}}+\mathcal{O}\left(\int_{0}^{t} \mathrm{d}\tau\,\,\chi(\tau)\log\left(\frac{N\Lambda_{0}\int_{0}^{t}d\tau\, \,\chi(\tau)}{\epsilon}\right)\right). \tag{187}\]
for \(N\geq 1\) bosonic modes.
## V Conclusion
In this paper, we have elucidated and analyzed the methodologies and techniques employed for the quantum simulation of various types of boson-related model Hamiltonians, from qubit mapping to state preparation and evolution. We have discussed the construction of the effective Hamiltonian of bosonic quantum systems within the coupled cluster context through unitary or non-unitary flows and the design of corresponding hybrid quantum algorithms potentially suitable for future larger-scale applications.
A critical aspect of practical quantum simulation involves truncating the bosonic mode. Hence, we have devoted special attention to the error analysis of such truncations. We detail the mathematical derivations, their results, and the implications these have on the techniques used in quantum simulations. Notably, our bound applies to time-dependent Hamiltonians and is more stringent than a recent bound from Ref. [148], aiding in the development and evaluation of novel algorithms and approximate approaches.
Looking ahead, the challenge and importance of simulating open quantum systems accurately and efficiently will only increase, especially for systems where fermions interact with bosonic modes. Customized quantum simulation and effective Hamiltonian techniques for boson-related quantum systems, guided by associated error analysis, can serve as valuable resources in the field. They inspire further studies to advance our understanding and capabilities in quantum simulations.
## VI Acknowledgments
B.P., D.C. and K.K. are supported by the "Embedding QC into Many-body Frameworks for Strongly Correlated Molecular and Materials Systems" project, which is funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, the Division of Chemical Sciences, Geosciences, and Biosciences. This work was supported by the Quantum Science Center, a National Quantum Information Science Research Center of the DOE. KK also acknowledges the support from the Center for Many-Body Methods, Spectroscopies, and Dynamics for Molecular POLaritonic Systems (MAPOL) under FWP 79715, which is funded as part of the Computational Chemical Sciences (CCS) program by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences at Pacific Northwest National Laboratory (PNNL). The Pacific Northwest National Laboratory is operated by Battelle for the DOE under Contract DE-AC05-76RL01830.
## Appendix A Quantum Simulation of Spin-Boson Model
We compute the time evolution of a spin-boson model according to a Lindblad master equation that accounts for experimental imperfections causing heating and dephasing of the motional mode [6]
\[\frac{\partial\hat{\rho}(t)}{\partial t}= -i[\hat{H},\hat{\rho}(t)]+\Gamma\left[2\hat{n}\hat{\rho}(t)\hat{n}-\{\hat{n}\hat{n},\hat{\rho}(t)\}\right]\] \[+\gamma\left[2b\hat{\rho}(t)b^{\dagger}-\{bb^{\dagger},\hat{\rho}(t)\}\right.\] \[\left.\qquad+2b^{\dagger}\hat{\rho}(t)b-\{\hat{n},\hat{\rho}(t)\}\right] \tag{A1}\]
where \(\Gamma\) and \(\gamma\) are the dephasing parameter and the heating rate, respectively, and \(\{\cdot,\cdot\}\) is the anticommutator, i.e.,
\[\{A,B\}=AB+BA. \tag{A2}\]
The classical simulation can be done using, for example, a fourth-order Runge-Kutta method with a relatively small time step (e.g. \(10^{-3}\)) to achieve numerical stability in the studied time regime. To perform the quantum simulation of Eq. (A1), we first transform (A1) to its vectorized form. This can be done through the Choi-Jamiolkowski isomorphism (also called channel-state duality) [35; 77; 78], i.e.,
\[|i\rangle\langle j|=|j\rangle\otimes|i\rangle, \tag{A3}\]
which from a matrix point of view is the same as transforming the matrix to a vector by stacking its columns, e.g.
\[M=\left[\begin{array}{cc}m_{11}&m_{12}\\ m_{21}&m_{22}\end{array}\right]\Rightarrow\vec{M}=\left[\begin{array}{c}m_{11}\\ m_{21}\\ m_{12}\\ m_{22}\end{array}\right]. \tag{A4}\]
There are two useful properties of this vectorization
\[\left\{\begin{array}{l}\mathrm{Tr}\big{\{}A^{\dagger}B\big{\}}=\vec{A}^{\dagger}\vec{B}\\ M=ABC\Rightarrow\vec{M}=(C^{T}\otimes A)\vec{B}\end{array}\right., \tag{A5}\]
from which one can rewrite the operator products in (A1), e.g.
\[\hat{H}\hat{\rho}=\hat{H}\hat{\rho}I \Rightarrow(I\otimes\hat{H})\vec{\rho}\] \[\hat{\rho}\hat{H}=I\hat{\rho}\hat{H} \Rightarrow(\hat{H}^{T}\otimes I)\vec{\rho}\] \[b^{\dagger}\hat{\rho}b \Rightarrow(b^{T}\otimes b^{\dagger})\vec{\rho}\] \[b\hat{\rho}b^{\dagger} \Rightarrow(b^{*}\otimes b)\vec{\rho}\] \[\vdots\]
Here, \(*\), \(T\), and \(\dagger\) denote the complex conjugate, transpose, and adjoint operators, respectively. Then (A1) can be re-expressed as
\[\frac{\partial\vec{\rho}(t)}{\partial t}=\hat{\mathcal{L}}\vec{\rho}(t)\ \ \Rightarrow\ \ \vec{\rho}(t)=e^{\hat{\mathcal{L}}t}\vec{\rho}(0) \tag{A6}\]
with the Lindbladian \(\hat{\mathcal{L}}\) defined as
\[\hat{\mathcal{L}} =-iI\otimes\hat{H}+i\hat{H}^{T}\otimes I\] \[\quad+\Gamma\bigg{[}2\hat{n}^{T}\otimes\hat{n}-I\otimes(\hat{n}\hat{n})-(\hat{n}\hat{n})^{T}\otimes I\bigg{]}\] \[\quad+\gamma\bigg{[}2b^{*}\otimes b-I\otimes(bb^{\dagger})-(b^{*}b^{T})\otimes I\] \[\quad\quad+2b^{T}\otimes b^{\dagger}-I\otimes\hat{n}-\hat{n}^{T}\otimes I\bigg{]}. \tag{A7}\]
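The correspondence between (A1) and (A7) can be checked numerically. The following NumPy sketch builds both sides for a single truncated bosonic mode; the cutoff, Hamiltonian, and rates are illustrative assumptions of the sketch.

```python
import numpy as np

d = 6                                      # bosonic Fock cutoff (assumption)
b = np.diag(np.sqrt(np.arange(1, d)), 1)   # truncated annihilation operator
n = b.T @ b                                # number operator (b is real here)
I = np.eye(d)
H = 1.0 * n + 0.3 * (b + b.T)              # illustrative Hamiltonian
Gam, gam = 0.05, 0.02

vec = lambda M: M.reshape(-1, order="F")   # column stacking, Eqs. (A3)-(A4)

L = (-1j * np.kron(I, H) + 1j * np.kron(H.T, I)
     + Gam * (2 * np.kron(n.T, n) - np.kron(I, n @ n) - np.kron((n @ n).T, I))
     + gam * (2 * np.kron(b.conj(), b) - np.kron(I, b @ b.T) - np.kron(b @ b.T, I)
              + 2 * np.kron(b.T, b.conj().T) - np.kron(I, n) - np.kron(n.T, I)))

rho = np.zeros((d, d), dtype=complex); rho[1, 1] = 1.0
lhs = (L @ vec(rho)).reshape(d, d, order="F")
rhs = (-1j * (H @ rho - rho @ H)
       + Gam * (2 * n @ rho @ n - n @ n @ rho - rho @ n @ n)
       + gam * (2 * b @ rho @ b.T - b @ b.T @ rho - rho @ b @ b.T
                + 2 * b.T @ rho @ b - n @ rho - rho @ n))
print(np.allclose(lhs, rhs))               # True: (A7) reproduces (A1)
```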
The Lindbladian (A7) is not time dependent, and the propagator it generates is generally nonunitary. A simple Lindbladian can be directly diagonalized through a unitary transform \(U\), \(\hat{\mathcal{L}}=U\hat{\mathcal{L}}_{D}U^{\dagger}\) with \(\hat{\mathcal{L}}_{D}\) diagonal, so that
\[e^{\hat{\mathcal{L}}t}=e^{U\hat{\mathcal{L}}_{D}U^{\dagger}t}=Ue^{\hat{\mathcal{L}}_{D}t}U^{\dagger}. \tag{A8}\]
For more general Hamiltonians (including the Dirac Hamiltonian for relativistic quantum simulations) and optical potentials, the nonunitary propagator \(e^{\hat{\mathcal{L}}t}\) can be expressed as a linear combination of unitary (LCU) operators, \(e^{\hat{\mathcal{L}}t}=\sum_{i}c_{i}\hat{U}_{i}\), and the linear combination of unitary propagators can be encoded on quantum computers. One way to find the unitary basis is to express \(e^{\hat{\mathcal{L}}t}\) as the sum of a Hermitian operator \(\mathcal{A}\) and an anti-Hermitian operator \(\mathcal{B}\), and approximate each of them using a first-order Taylor expansion [134]
\[e^{\hat{\mathcal{L}}t}=\mathcal{A}+\mathcal{B} \tag{A9}\]
with
\[\mathcal{A}=\tfrac{1}{2}(e^{\hat{\mathcal{L}}t}+e^{\hat{\mathcal{L}}^{\dagger}t})=(ie^{-i\epsilon\mathcal{A}}-ie^{i\epsilon\mathcal{A}})/2\epsilon+\mathcal{O}(\epsilon^{2})\] \[\mathcal{B}=\tfrac{1}{2}(e^{\hat{\mathcal{L}}t}-e^{\hat{\mathcal{L}}^{\dagger}t})=(e^{\epsilon\mathcal{B}}-e^{-\epsilon\mathcal{B}})/2\epsilon+\mathcal{O}(\epsilon^{2}) \tag{A10}\]
The four unitaries can be implemented separately or together on a dilated space. However, implementing each unitary and its associated Trotterized approximation may lead to a deep circuit structure, which warrants exploration of the optimal circuit structure through analytical and numerical means.
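A small numerical sketch of this splitting is given below; the generator `M`, the evolution time, and \(\epsilon\) are arbitrary assumptions, and `scipy.linalg.expm` stands in for the exact propagator.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
E = expm(0.1 * M)                          # stand-in for the propagator e^{Lt}

A = 0.5 * (E + E.conj().T)                 # Hermitian part, Eqs. (A9)-(A10)
B = 0.5 * (E - E.conj().T)                 # anti-Hermitian part

eps = 1e-3                                 # small expansion parameter
A_lcu = (1j * expm(-1j * eps * A) - 1j * expm(1j * eps * A)) / (2 * eps)
B_lcu = (expm(eps * B) - expm(-eps * B)) / (2 * eps)

print(np.linalg.norm(A_lcu - A), np.linalg.norm(B_lcu - B))   # O(eps^2) errors
print(np.linalg.norm(E - (A + B)))                            # exact split
```

Since \(\mathcal{A}\) is Hermitian and \(\mathcal{B}\) anti-Hermitian, all four exponentials appearing in (A10) are unitary, which is exactly what the LCU encoding requires.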
## Appendix B Unitary Flow for Many-Body-Localization
We can define a unitary path \(\hat{U}(s)\) depending on a parameter \(s\) such that for the Hamiltonian \(\hat{H}\) of a given quantum system the following unitary transformation
\[\hat{H}^{\prime}(s)=\hat{U}(s)\hat{H}\hat{U}^{\dagger}(s) \tag{B1}\]
can gradually reduce the bandwidth of \(\hat{H}\); repeating such unitary transformations \(n\) times
\[\hat{H}_{D}=\hat{U}_{n}\cdots\hat{U}_{1}\hat{H}\hat{U}_{1}^{\dagger}\cdots\hat{U}_{n}^{\dagger} \tag{B2}\]
would give the diagonal form of the Hamiltonian, \(\hat{H}_{D}\) (analogous to the Jacobi eigenvalue algorithm, where a series of Givens rotations is performed). The goal of seeking an optimal \(s\) to gradually reduce the bandwidth of \(\hat{H}\) can be viewed as an optimization problem, for which one usually needs the corresponding gradient of \(\hat{H}^{\prime}\) with respect to \(s\),
\[\frac{\partial\hat{H}^{\prime}}{\partial s}=[\mathcal{G}(s),\hat{H}^{\prime}(s)], \tag{B3}\]
with the anti-Hermitian generator \(\mathcal{G}(s)\) defined as
\[\mathcal{G}(s)=\frac{\partial\hat{U}}{\partial s}\hat{U}^{\dagger}. \tag{B4}\]
Eq. (B3) is also recognized as the most general form of a unitary flow on the Hamiltonian [15].
Among the several unitary paths or generators discussed in this section, the Wegner generator [152; 43] is one of the simplest, and is defined as
\[\mathcal{G}(s)=[\hat{H}_{D}^{\prime}(s),\hat{H}^{\prime}(s)], \tag{B5}\]
with its matrix elements
\[\mathcal{G}_{ij}=\hat{H}^{\prime}_{ij}(d_{i}-d_{j})=\left\{\begin{array}{rl}0&\text{if }i=j\\ h_{ij}(d_{i}-d_{j})&\text{if }i\neq j\end{array}\right.. \tag{B6}\]
Here \(d_{i}\)'s and \(h_{ij}\)'s are the diagonal and off-diagonal entries of \(\hat{H}^{\prime}\), respectively. Note that both \(d_{i}\)'s and \(h_{ij}\)'s are \(s\)-dependent; their \(s\)-derivatives can be derived by plugging the Wegner generator (B5) into the unitary flow (B3),
\[\left(\frac{\partial\hat{H}^{\prime}}{\partial s}\right)_{ij} =\sum_{k}\mathcal{G}_{ik}\hat{H}^{\prime}_{kj}-\hat{H}^{\prime}_{ ik}\mathcal{G}_{kj}\] \[=\sum_{k}h_{ik}\hat{H}^{\prime}_{kj}(d_{i}-d_{k})-\hat{H}^{ \prime}_{ik}h_{kj}(d_{k}-d_{j})\] \[=-h_{ij}(d_{i}-d_{j})^{2}+\sum_{k\neq i,j}h_{ik}h_{kj}(d_{i}+d_{ j}-2d_{k})\]
in which the diagonal elements reduce to
\[\left(\frac{\partial\hat{H}^{\prime}}{\partial s}\right)_{ii}=\frac{\partial d_{i}}{\partial s}=2\sum_{k}|h_{ik}|^{2}(d_{i}-d_{k}) \tag{B7}\]
To demonstrate how the amplitudes of the \(d_{i}\)'s or \(h_{ij}\)'s evolve along the Wegner unitary flow, one can look at the derivatives of the \(d_{i}^{2}\)'s or those of the \(|h_{ij}|^{2}\)'s. The reason we need only examine one of these is that their sum is conserved along the flow, due to the fact that
\[\frac{\partial}{\partial s}\text{Tr}(\hat{H}^{\prime 2})\] \[=\frac{\partial}{\partial s}\sum_{i}\hat{H}^{\prime 2}_{ii}=\frac{\partial}{\partial s}\sum_{i,k}\hat{H}^{\prime}_{ik}\hat{H}^{\prime}_{ki}=\frac{\partial}{\partial s}\left(\sum_{i}d_{i}^{2}+\sum_{i,j\neq i}|h_{ij}|^{2}\right)\] \[=\frac{\partial}{\partial s}\text{Tr}(\hat{U}_{n}\cdots\hat{U}_{1}\hat{H}^{2}\hat{U}_{1}^{\dagger}\cdots\hat{U}_{n}^{\dagger})\] \[=\frac{\partial}{\partial s}\text{Tr}(\hat{H}^{2}\hat{U}_{1}^{\dagger}\cdots\hat{U}_{n}^{\dagger}\hat{U}_{n}\cdots\hat{U}_{1})\] \[=\frac{\partial}{\partial s}\text{Tr}(\hat{H}^{2})=0, \tag{B8}\]
where we utilize \(\text{Tr}(\mathbf{A}\mathbf{B})=\text{Tr}(\mathbf{B}\mathbf{A})\) and \(\hat{U}_{i}^{\dagger}\hat{U}_{i}=\mathbf{1}\). Since
\[\frac{\partial}{\partial s}\sum_{i}d_{i}^{2} =2\sum_{i}d_{i}\frac{\partial d_{i}}{\partial s}=2\sum_{i,k}|h_{ik}|^{2}(2d_{i}^{2}-2d_{i}d_{k})\] \[=2\underbrace{\sum_{i,k}|h_{ik}|^{2}(d_{i}^{2}+d_{k}^{2}-2d_{i}d_{k})}_{\text{indices }i\text{ and }k\text{ are interchangeable here}}\] \[=2\sum_{i,k}|h_{ik}|^{2}(d_{i}-d_{k})^{2}\geq 0\] \[\Rightarrow\frac{\partial}{\partial s}\sum_{i,j\neq i}|h_{ij}|^{2} \leq 0, \tag{B9}\]
the amplitudes of the diagonal (off-diagonal) elements of \(\hat{H}\) will monotonically increase (decrease) as the flow evolves.
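This monotone behavior is easy to observe numerically. The sketch below integrates the Wegner flow with a plain Euler step for a random real symmetric matrix; the matrix size, seed, and step size are assumptions, and the recovered spectrum is accurate only up to the Euler discretization error.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
H0 = (A + A.T) / 2                        # real symmetric test Hamiltonian
H = H0.copy()

ds = 1e-3                                 # Euler step in the flow parameter s
off = []
for _ in range(20000):
    Hd = np.diag(np.diag(H))
    G = Hd @ H - H @ Hd                   # Wegner generator, Eq. (B5)
    H = H + ds * (G @ H - H @ G)          # flow equation (B3)
    off.append(np.linalg.norm(H - np.diag(np.diag(H))))

print(off[0], off[-1])                    # monotone decay, cf. Eq. (B9)
# diagonal approaches the spectrum of H0 (up to discretization error)
print(np.sort(np.diag(H)) - np.sort(np.linalg.eigvalsh(H0)))
```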
It's worth mentioning that for certain types of Hamiltonians, e.g. number-conserving quadratic Hamiltonian
\[\hat{H}=\sum_{ij}H_{ij}a_{i}^{\dagger}a_{j} \tag{B10}\]
where the matrix \(H\) must be Hermitian for the operator \(\hat{H}\) to also be Hermitian, and \(a_{i}^{\dagger}\) and \(a_{i}\) can be either fermionic or bosonic creation and annihilation operators, the Hermitian matrix \(H\) can be diagonalized by a unitary transformation \(U\) with eigenvalues \(E=\{\epsilon_{1},\cdots,\epsilon_{n}\}\). This transformation also applies to the creation and annihilation operators for the diagonalization of the operator \(\hat{H}\), i.e.
\[U^{\dagger}HU=E,\ \ a_{i}^{\dagger}=\sum_{m}\alpha_{m}^{\dagger}(U^{\dagger})_{mi},\ \ a_{i}=\sum_{m}U_{im}\alpha_{m},\] \[\Rightarrow\ \hat{H}=\sum_{ij}\sum_{mn}\alpha_{m}^{\dagger}U_{mi}^{\dagger}H_{ij}U_{jn}\alpha_{n}=\sum_{n}\epsilon_{n}\alpha_{n}^{\dagger}\alpha_{n}. \tag{B11}\]
One method to identify such a unitary transformation \(U\) is the well-known Fourier-Bogoliubov transformation method. To see how it works, we can first take the fermionic Hamiltonian of the XY model (with an external field in the \(z\) direction) as an example. The Hamiltonian reads
\[\hat{H}=-J\sum_{j}\left[\frac{1+\gamma}{2}X_{j}X_{j+1}+\frac{1-\gamma}{2}Y_{j}Y_{j+1}+\lambda Z_{j}\right] \tag{B12}\]
whose fermionic form is
\[\hat{H}=-J\sum_{j}\left[f_{j+1}^{\dagger}f_{j}+f_{j}^{\dagger}f_{j+1}+\gamma(f_{j+1}f_{j}+f_{j}^{\dagger}f_{j+1}^{\dagger})-2\lambda f_{j}^{\dagger}f_{j}+\lambda\right] \tag{B13}\]
with \(J>0\), \(\gamma\), and \(\lambda\) being scalars. Note that one can also use hard-core bosonic operators to get the same form of the Hamiltonian. Employing the **Fourier transform** of the (boson or fermion) creation/annihilation operators
\[f_{j}^{\dagger}=\frac{1}{\sqrt{N}}\sum_{k}e^{-ikja}f_{k}^{\dagger},\quad f_{j}=\frac{1}{\sqrt{N}}\sum_{k}e^{+ikja}f_{k} \tag{B14}\]
with \(a=\frac{2\pi}{N}\) and
\[k=\left\{\begin{array}{ll}-\frac{N-1}{2},\cdots,-1,0,1,\cdots,\frac{N-1}{2},&\text{if }N\text{ is odd}\\ -\frac{N}{2},\cdots,-1,0,1,\cdots,\frac{N}{2}-1,&\text{if }N\text{ is even}\end{array}\right.,\]
and utilizing the relation
\[N^{-1}\sum_{j=1}^{N}e^{i(k+k^{\prime})ja}=\delta_{k,-k^{\prime}}, \tag{B15}\]
we can equivalently rewrite the Hamiltonian in the \(k\)-space, i.e.
\[\hat{H} =-J\sum_{k}\Big{[}2(\cos(ka)-\lambda)f_{k}^{\dagger}f_{k}+\gamma(f_{k}f_{-k}+f_{k}^{\dagger}f_{-k}^{\dagger})e^{ika}\] \[\qquad\qquad+\lambda\Big{]}\] \[=-J\sum_{k}\Big{[}(\cos(ka)-\lambda)(f_{k}^{\dagger}f_{k}+f_{-k}^{\dagger}f_{-k})\] \[\qquad\qquad+i\gamma\text{sin}(ka)(f_{k}f_{-k}+f_{k}^{\dagger}f_{-k}^{\dagger})+\lambda\Big{]}. \tag{B16}\]
where the second line is due to the fact that \(k\) is distributed symmetrically around zero, e.g., the second term can be
rewritten as
\[\sum_{k}(f_{k}f_{-k}+f_{k}^{\dagger}f_{-k}^{\dagger})e^{ika}\] \[\quad=\sum_{k}(f_{-k}f_{k}+f_{-k}^{\dagger}f_{k}^{\dagger})e^{-ika}=-\sum_{k}(f_{k}f_{-k}+f_{k}^{\dagger}f_{-k}^{\dagger})e^{-ika}\] \[\quad=\frac{1}{2}\sum_{k}(f_{k}f_{-k}+f_{k}^{\dagger}f_{-k}^{\dagger})(e^{ika}-e^{-ika})\] \[\quad=i\sum_{k}\sin(ka)(f_{k}f_{-k}+f_{k}^{\dagger}f_{-k}^{\dagger}) \tag{B17}\]
Note that the above coupling between \(k\) and \(-k\) can be removed through the **Bogoliubov transformation**, another common tool often used to diagonalize fermionic and/or bosonic Hamiltonians. To see that, we can define
\[g_{k}^{\dagger} =u_{k}f_{k}^{\dagger}+iv_{k}f_{-k},\quad g_{-k}^{\dagger}=u_{k}f_{-k}^{\dagger}-iv_{k}f_{k},\] \[g_{k} =u_{k}f_{k}-iv_{k}f_{-k}^{\dagger},\quad g_{-k}=u_{k}f_{-k}+iv_{k}f_{k}^{\dagger}, \tag{B18}\]
with \(u_{k}=\cos\theta_{k}\), \(v_{k}=\sin\theta_{k}(\theta_{k}\in\mathbf{R})\), and \(u_{k}^{2}+v_{k}^{2}=1\). It's then easy to show that
\[f_{k}^{\dagger} =u_{k}g_{k}^{\dagger}-iv_{k}g_{-k},\quad f_{-k}^{\dagger}=u_{k}g_{-k}^{\dagger}+iv_{k}g_{k},\] \[f_{k} =u_{k}g_{k}+iv_{k}g_{-k}^{\dagger},\quad f_{-k}=u_{k}g_{-k}-iv_{k}g_{k}^{\dagger}. \tag{B19}\]
Plugging (B19) into (B16) and denoting
\[\Delta_{k}=\gamma\sin(ka),\ \ \epsilon_{k}=\lambda-\cos(ka),\ \ E_{k}=\sqrt{\Delta_{k}^{2}+\epsilon_{k}^{2}} \tag{B20}\]
we then have
\[\hat{H}=J\sum_{k} \left\{\left[\epsilon_{k}(u_{k}^{2}-v_{k}^{2})+2\Delta_{k}u_{k}v_{k}\right](g_{k}^{\dagger}g_{k}+g_{-k}^{\dagger}g_{-k})\right.\] \[\quad+2v_{k}^{2}\epsilon_{k}-2u_{k}v_{k}\Delta_{k}\] \[\quad+\left[2i\epsilon_{k}u_{k}v_{k}-i\Delta_{k}(u_{k}^{2}-v_{k}^{2})\right](g_{k}^{\dagger}g_{-k}^{\dagger}+g_{k}g_{-k})\] \[\quad-\lambda\left.\right\}. \tag{B21}\]
The full diagonalization then requires the \((g_{k}^{\dagger}g_{-k}^{\dagger}+g_{-k}g_{k})\) term in the above equation to vanish, which means \(u_{k}\) and \(v_{k}\) need to be chosen such that
\[2i\epsilon_{k}u_{k}v_{k}-i\Delta_{k}(u_{k}^{2}-v_{k}^{2})=0,\] \[\Rightarrow \tan 2\theta_{k}=\frac{2u_{k}v_{k}}{u_{k}^{2}-v_{k}^{2}}=\frac{ \Delta_{k}}{\epsilon_{k}},\] \[u_{k}^{2}-v_{k}^{2}=\pm\epsilon_{k}/E_{k},\ \ u_{k}v_{k}=\pm\Delta_{k}/2E_{k}.\]
By choosing the signs of \(u_{k}^{2}-v_{k}^{2}\) and \(u_{k}v_{k}\), the Hamiltonian is simplified to
\[\hat{H} =J\sum_{k}\left(2E_{k}g_{k}^{\dagger}g_{k}+2v_{k}^{2}\epsilon_{k}-2u_{k}v_{k}\Delta_{k}-\lambda\right)\] \[=J\sum_{k}\left(2E_{k}g_{k}^{\dagger}g_{k}+\epsilon_{k}-E_{k}-\lambda\right)\] \[=J\sum_{k}\left(2E_{k}g_{k}^{\dagger}g_{k}-\cos(ka)-E_{k}\right)\] \[=J\sum_{k}2E_{k}\left(g_{k}^{\dagger}g_{k}-\frac{1}{2}\right). \tag{B22}\]
where we also utilize the fact \(\sum_{k}\cos(ka)=0\).
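The cancellation condition can also be verified numerically. The following sketch evaluates the Bogoliubov angle for illustrative (assumed) values of \(\gamma\), \(\lambda\) and \(N\), checking that the anomalous coefficient vanishes and the diagonal coefficient equals \(E_{k}\).

```python
import numpy as np

gamma, lam, N = 0.5, 0.8, 12              # illustrative parameters (assumed)
a = 2 * np.pi / N
k = a * (np.arange(N) - N // 2)           # momenta for even N

Delta = gamma * np.sin(k)                 # Eq. (B20)
epsk = lam - np.cos(k)
Ek = np.sqrt(Delta ** 2 + epsk ** 2)

theta = 0.5 * np.arctan2(Delta, epsk)     # tan(2 theta_k) = Delta_k / eps_k
u, v = np.cos(theta), np.sin(theta)

anomalous = 2 * epsk * u * v - Delta * (u ** 2 - v ** 2)   # coefficient to cancel
diagonal = epsk * (u ** 2 - v ** 2) + 2 * Delta * u * v    # should equal E_k
print(np.max(np.abs(anomalous)), np.max(np.abs(diagonal - Ek)))   # both ~ 0
```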
The Bogoliubov transformation can also be used for bosonic systems, though the transformation matrix becomes different. Take the diagonalization of the following two-site Hamiltonian as an example
\[\hat{H} =\epsilon(c_{1}^{\dagger}c_{1}+c_{2}^{\dagger}c_{2})+\lambda(c_{1}^{\dagger}c_{2}^{\dagger}+c_{2}c_{1})\] \[=\frac{1}{2}\left(\begin{array}{cccc}c_{1}^{\dagger}&c_{2}&c_{2}^{\dagger}&c_{1}\\ \end{array}\right)H\left(\begin{array}{c}c_{1}\\ c_{2}^{\dagger}\\ c_{2}\\ c_{1}^{\dagger}\end{array}\right)+\epsilon \tag{B23}\]
where \(\epsilon,\lambda\in\mathbf{R}\) and \(H\) is a block-diagonal matrix
\[H=\left(\begin{array}{cccc}\epsilon&\lambda&0&0\\ \lambda&-\epsilon&0&0\\ 0&0&\epsilon&-\lambda\\ 0&0&-\lambda&-\epsilon\end{array}\right). \tag{B24}\]
If \(c^{\dagger}\) and \(c\) are fermionic creation and annihilation operators, the fermionic Bogoliubov transformation is
\[\left(\begin{array}{c}c_{1}\\ c_{2}^{\dagger}\\ c_{2}\\ c_{1}^{\dagger}\end{array}\right)=U_{f}\left(\begin{array}{c}d_{1}\\ d_{2}^{\dagger}\\ d_{2}\\ d_{1}^{\dagger}\end{array}\right),\ \ U_{f}=\left(\begin{array}{cccc}u&v&0&0\\ -v&u&0&0\\ 0&0&u&-v\\ 0&0&v&u\end{array}\right) \tag{B25}\]
with the typical choices of \(u\) and \(v\) being
\[u=\cos\theta,\ \ v=\sin\theta,\ \ \text{s.t.}\ \ u^{2}+v^{2}=1. \tag{B26}\]
When \(\tan 2\theta=-\lambda/\epsilon\), \(U_{f}\) diagonalizes \(H\) with eigenvalue \(\tilde{\epsilon}=\pm\sqrt{\epsilon^{2}+\lambda^{2}}\), and the Hamiltonian has the form
\[\hat{H}=\tilde{\epsilon}(d_{1}^{\dagger}d_{1}+d_{2}^{\dagger}d_{2})-\epsilon+\tilde{\epsilon}. \tag{B27}\]
If \(c^{\dagger}\) and \(c\) are bosonic creation and annihilation operators, then the bosonic Bogoliubov transformation is
\[\left(\begin{array}{c}c_{1}\\ c_{2}^{\dagger}\\ c_{2}\\ c_{1}^{\dagger}\end{array}\right)=U_{b}\left(\begin{array}{c}d_{1}\\ d_{2}^{\dagger}\\ d_{2}\\ d_{1}^{\dagger}\end{array}\right),\ \ U_{b}=\left(\begin{array}{cccc}u&v&0&0\\ v&u&0&0\\ 0&0&u&v\\ 0&0&v&u\end{array}\right) \tag{B28}\]
with the typical choices of \(u\) and \(v\) being
\[u=\cosh\theta,\ \ v=\sinh\theta,\ \ \text{s.t.}\ \ u^{2}-v^{2}=1. \tag{B29}\]
When \(\tanh 2\theta=-\lambda/\epsilon\), \(U_{b}\) diagonalizes \(H\) with eigenvalue \(\tilde{\epsilon}=\sqrt{\epsilon^{2}-\lambda^{2}}\) implying that \(|\epsilon|>|\lambda|\) (otherwise the Hamiltonian would be representing a system at an unstable equilibrium point).
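This prediction can be checked against exact diagonalization on a truncated Fock space. In the sketch below, the cutoff and the values of \(\epsilon\) and \(\lambda\) are assumptions; the low-lying spectrum of the two-site bosonic Hamiltonian should follow \(\tilde{\epsilon}(m_{1}+m_{2})+(\tilde{\epsilon}-\epsilon)\) up to truncation error.

```python
import numpy as np

d = 30                                     # Fock cutoff per mode (assumption)
b = np.diag(np.sqrt(np.arange(1, d)), 1)
I = np.eye(d)
c1, c2 = np.kron(b, I), np.kron(I, b)

eps, lam = 1.0, 0.4                        # requires |eps| > |lam|
H = eps * (c1.T @ c1 + c2.T @ c2) + lam * (c1.T @ c2.T + c2 @ c1)

et = np.sqrt(eps ** 2 - lam ** 2)          # Bogoliubov single-mode energy
pred = np.sort([et * (m1 + m2) + (et - eps)
                for m1 in range(3) for m2 in range(3)])
evals = np.sort(np.linalg.eigvalsh(H))
print(evals[:5])                           # low-lying exact spectrum
print(pred[:5])                            # matches up to truncation error
```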
## Appendix C Trotterized D-UCCSD Ansatz for Three-Level Two-Boson Model System
For a three-level two-boson model whose excitation manifold is partitioned into two blocks, the direct D-UCCSD ansatz can be written as
\[|\phi\rangle=e^{\sigma_{\mathsf{fl}2}}e^{\sigma_{\mathsf{fl}1}}|\phi_{0}\rangle \tag{C1}\]
with the reference \(|\phi_{0}\rangle\) being denoted by \(|200\rangle\) indicating the boson occupation at each level and
\[\sigma_{\mathsf{fl}1} =r_{1}\left[(b_{1}^{\dagger})^{2}(b_{0})^{2}-(b_{0}^{\dagger})^{2}(b_{1})^{2}\right]+r_{2}\left(b_{1}^{\dagger}b_{0}-b_{0}^{\dagger}b_{1}\right)\] \[\sigma_{\mathsf{fl}2} =s_{1}\left[(b_{2}^{\dagger})^{2}(b_{0})^{2}-(b_{0}^{\dagger})^{2}(b_{2})^{2}\right]+s_{2}\left[b_{2}^{\dagger}b_{1}^{\dagger}(b_{0})^{2}\right.\] \[\left.-(b_{0}^{\dagger})^{2}b_{1}b_{2}\right]+s_{3}\left(b_{2}^{\dagger}b_{0}-b_{0}^{\dagger}b_{2}\right). \tag{C2}\]
Nevertheless, it is challenging to directly implement the unitaries \(e^{\sigma_{\mathsf{fl}1}}\) and \(e^{\sigma_{\mathsf{fl}2}}\). In practice, to ease the implementation, we typically employ Trotterized forms of (C1) with a certain ordering to ensure the exactness. For example, consider the following ansatz
\[|\phi\rangle=\prod_{i=3}^{1}e^{\sigma_{\mathsf{fl}2}^{i}}\prod_{j=2}^{1}e^{\sigma_{\mathsf{fl}1}^{j}}|\phi_{0}\rangle \tag{C3}\]
where
\[\sigma_{\mathsf{fl}1}^{1} =r_{1}\left[(b_{1}^{\dagger})^{2}(b_{0})^{2}-(b_{0}^{\dagger})^{2}(b_{1})^{2}\right],\] \[\sigma_{\mathsf{fl}1}^{2} =r_{2}\left(b_{1}^{\dagger}b_{0}-b_{0}^{\dagger}b_{1}\right),\] \[\sigma_{\mathsf{fl}2}^{1} =s_{1}\left[(b_{2}^{\dagger})^{2}(b_{0})^{2}-(b_{0}^{\dagger})^{2}(b_{2})^{2}\right],\] \[\sigma_{\mathsf{fl}2}^{2} =s_{2}\left[b_{2}^{\dagger}b_{1}^{\dagger}(b_{0})^{2}-(b_{0}^{\dagger})^{2}b_{1}b_{2}\right],\] \[\sigma_{\mathsf{fl}2}^{3} =s_{3}\left(b_{2}^{\dagger}b_{0}-b_{0}^{\dagger}b_{2}\right). \tag{C4}\]
Now the question is whether such a representation can represent an arbitrary state \(|\Psi\rangle\) for the three-level two-boson model. To see that, we need to check (**a**) if the above ansatz can generate all the possible configurations of the model, and (**b**) if \(\{r_{1},r_{2},s_{1},s_{2},s_{3}\}\) can be determined for \(|\Psi\rangle\).
Regarding (**a**), we note that each exponential operator in (C3) is a Givens rotation, and utilize the following algebraic relations
\[e^{\sigma_{\mathsf{fl}1}^{1}}|200\rangle =c_{2r_{1}}|200\rangle+s_{2r_{1}}|020\rangle, \tag{C5}\] \[e^{\sigma_{\mathsf{fl}1}^{2}}|200\rangle =c_{r_{2}}^{2}|200\rangle+\sqrt{2}s_{r_{2}}c_{r_{2}}|110\rangle+s_{r_{2}}^{2}|020\rangle,\] (C6) \[e^{\sigma_{\mathsf{fl}1}^{2}}|020\rangle =s_{r_{2}}^{2}|200\rangle-\sqrt{2}s_{r_{2}}c_{r_{2}}|110\rangle+c_{r_{2}}^{2}|020\rangle,\] (C7) \[e^{\sigma_{\mathsf{fl}2}^{1}}|200\rangle =c_{2s_{1}}|200\rangle+s_{2s_{1}}|002\rangle,\] (C8) \[e^{\sigma_{\mathsf{fl}2}^{1}}|110\rangle =|110\rangle,\] (C9) \[e^{\sigma_{\mathsf{fl}2}^{1}}|020\rangle =|020\rangle,\] (C10) \[e^{\sigma_{\mathsf{fl}2}^{2}}|200\rangle =c_{\sqrt{2}s_{2}}|200\rangle+s_{\sqrt{2}s_{2}}|011\rangle,\] (C11) \[e^{\sigma_{\mathsf{fl}2}^{2}}|011\rangle =c_{\sqrt{2}s_{2}}|011\rangle-s_{\sqrt{2}s_{2}}|200\rangle,\] (C12) \[e^{\sigma_{\mathsf{fl}2}^{2}}|110\rangle =|110\rangle,\] (C13) \[e^{\sigma_{\mathsf{fl}2}^{2}}|020\rangle =|020\rangle,\] (C14) \[e^{\sigma_{\mathsf{fl}2}^{3}}|200\rangle =c_{s_{3}}^{2}|200\rangle+\sqrt{2}s_{s_{3}}c_{s_{3}}|101\rangle+s_{s_{3}}^{2}|002\rangle,\] (C15) \[e^{\sigma_{\mathsf{fl}2}^{3}}|002\rangle =s_{s_{3}}^{2}|200\rangle-\sqrt{2}s_{s_{3}}c_{s_{3}}|101\rangle+c_{s_{3}}^{2}|002\rangle,\] (C16) \[e^{\sigma_{\mathsf{fl}2}^{3}}|110\rangle =c_{s_{3}}|110\rangle+s_{s_{3}}|011\rangle,\] (C17) \[e^{\sigma_{\mathsf{fl}2}^{3}}|011\rangle =c_{s_{3}}|011\rangle-s_{s_{3}}|110\rangle, \tag{C18}\]
where \(c_{p}\) and \(s_{p}\) denote \(\cos(p)\) and \(\sin(p)\), respectively. It is straightforward to see that all the possible configurations of the model can be mapped out through the consecutive Givens rotations (see Figure 9).
Regarding (**b**), an arbitrary state \(|\Psi\rangle\) can be defined as a linear combination of all six configurations
\[|\Psi\rangle =d_{1}|200\rangle+d_{2}|101\rangle+d_{3}|002\rangle\] \[\quad+d_{4}|110\rangle+d_{5}|011\rangle+d_{6}|020\rangle. \tag{C19}\]
with arbitrary \(\{d_{i}\}\) (\(i=1,\cdots,6\)) satisfying \(\sum_{i=1}^{6}d_{i}^{2}=1\). To determine \(\{r_{1},r_{2},s_{1},s_{2},s_{3}\}\), we can utilize the reverse flow of Figure 9 (i.e. from \(|\Psi\rangle\) to \(|\phi\rangle\)), in particular the fact that only one configuration disappears after each reverse Givens rotation, to solve for the rotations one by one. We can take the first two steps as examples. The first step applies \(e^{-\sigma_{\mathsf{fl}2}^{3}}\) on \(|\Psi\rangle\) to remove \(|101\rangle\). Since the only relevant equations are
\[d_{1}\cdot e^{-\sigma_{\mathsf{fl}2}^{3}}|200\rangle =d_{1}\cdot(c_{s_{3}}^{2}|200\rangle-\sqrt{2}s_{s_{3}}c_{s_{3}}|101\rangle\] \[\quad+s_{s_{3}}^{2}|002\rangle),\] \[d_{2}\cdot e^{-\sigma_{\mathsf{fl}2}^{3}}|101\rangle =d_{2}\cdot(\sqrt{2}s_{s_{3}}c_{s_{3}}|200\rangle+c_{2s_{3}}|101\rangle\] \[\quad-\sqrt{2}s_{s_{3}}c_{s_{3}}|002\rangle),\] \[d_{3}\cdot e^{-\sigma_{\mathsf{fl}2}^{3}}|002\rangle =d_{3}\cdot(s_{s_{3}}^{2}|200\rangle+\sqrt{2}s_{s_{3}}c_{s_{3}}|101\rangle\] \[\quad+c_{s_{3}}^{2}|002\rangle), \tag{C20}\]
we can then write
\[c_{2s_{3}}d_{2}-\sqrt{2}s_{s_{3}}c_{s_{3}}(d_{1}-d_{3})=0, \tag{C21}\]
from which if \(d_{2}\neq 0\), we can choose one non-singular solution to be
\[s_{3}=\tan^{-1}\left(\frac{-\sqrt{2}(d_{1}-d_{3})\pm\sqrt{\Delta}}{2d_{2}}\right) \tag{C22}\]
with \(\Delta=2(d_{1}-d_{3})^{2}+4d_{2}^{2}\geq 0\). Then \(d_{1},d_{3},d_{4},d_{5}\) will be updated to \(d_{1}^{\prime},d_{3}^{\prime},d_{4}^{\prime},d_{5}^{\prime}\) (\(d_{6}\) stays the same) before proceeding to the second step. In the second step, the operation
Figure 9: The expansion of the configuration space when sequentially applying the Givens rotations in the ansatz (C3) to the reference state \(|200\rangle\).
\(e^{-\sigma_{\mathsf{fl}2}^{2}}\) removes the configuration \(|011\rangle\). Since the only relevant equations are
\[d_{1}^{\prime}\cdot e^{-\sigma_{\mathsf{fl}2}^{2}}|200\rangle =d_{1}^{\prime}\cdot(c_{\sqrt{2}s_{2}}|200\rangle-s_{\sqrt{2}s_{2}}|011\rangle),\] \[d_{5}^{\prime}\cdot e^{-\sigma_{\mathsf{fl}2}^{2}}|011\rangle =d_{5}^{\prime}\cdot(s_{\sqrt{2}s_{2}}|200\rangle+c_{\sqrt{2}s_{2}}|011\rangle),\] (C23)
from which the parameter \(s_{2}\) can be chosen to be
\[s_{2}=\frac{1}{\sqrt{2}}\tan^{-1}\left(\frac{d_{5}^{\prime}}{d_{1}^{\prime}}\right).\] (C24)
Following the same procedure we can also find the solutions for the remaining parameters. Therefore, both (**a**) and (**b**) are satisfied, meaning the ansatz (C3) is able to represent an arbitrary state of the model.
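Individual Givens relations such as (C15) can be verified directly on a small truncated Fock space, as in the sketch below; the cutoff and test angle are assumptions.

```python
import numpy as np
from scipy.linalg import expm

d = 3                                      # occupations 0, 1, 2 per mode
b = np.diag(np.sqrt(np.arange(1, d)), 1)
I = np.eye(d)
b0 = np.kron(np.kron(b, I), I)             # modes ordered |n0 n1 n2>
b2 = np.kron(np.kron(I, I), b)

s3 = 0.37                                  # arbitrary test angle (assumption)
sigma3 = s3 * (b2.T @ b0 - b0.T @ b2)      # third factor of the ansatz (C4)

def ket(n0, n1, n2):
    e = np.eye(d)
    return np.kron(np.kron(e[n0], e[n1]), e[n2])

out = expm(sigma3) @ ket(2, 0, 0)
c, s = np.cos(s3), np.sin(s3)
pred = c**2 * ket(2, 0, 0) + np.sqrt(2) * s * c * ket(1, 0, 1) + s**2 * ket(0, 0, 2)
print(np.linalg.norm(out - pred))          # ~ 0, reproducing Eq. (C15)
```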
Note that the procedure described above for addressing (**b**) is similar to the one utilized to prove the exactness of the disentangled UCC ansatze for fermionic systems (see Ref. [50]). Although the explicit Givens rotations, e.g. (C5)-(C18), differ between fermionic and bosonic systems, there are some common effects after applying a series of single and/or double Givens rotations on a fermionic or bosonic state. For example, with a certain ordering of these rotations, one can successively generate or eliminate configurations. Regarding higher-order rotations dealing with more bosons, given the fact that a high-order excitation can be rewritten as a series of nested commutators of one-body and two-body excitations [50], high-order Givens rotations can be approximated as products of one-body and two-body Givens rotations to arbitrary accuracy. Therefore, one can generalize the above procedure to generate a Trotterized D-UCCSD ansatz that represents any state of a general bosonic system to arbitrary accuracy.
|
2310.03037 | Quantum image edge detection based on eight-direction Sobel operator for
NEQR | Quantum Sobel edge detection (QSED) is a kind of algorithm for image edge
detection using quantum mechanism, which can solve the real-time problem
encountered by classical algorithms. However, the existing QSED algorithms only
consider two- or four-direction Sobel operator, which leads to a certain loss
of edge detail information in some high-definition images. In this paper, a
novel QSED algorithm based on eight-direction Sobel operator is proposed, which
not only reduces the loss of edge information, but also simultaneously
calculates eight directions' gradient values of all pixel in a quantum image.
In addition, the concrete quantum circuits, which consist of gradient
calculation, non-maximum suppression, double threshold detection and edge
tracking units, are designed in details. For a 2^n x 2^n image with q gray
scale, the complexity of our algorithm can be reduced to O(n^2 + q^2), which is
lower than other existing classical or quantum algorithms. And the simulation
experiment demonstrates that our algorithm can detect more edge information,
especially diagonal edges, than the two- and four-direction QSED algorithms. | Wenjie Liu, Lu Wang | 2023-10-01T05:38:59Z | http://arxiv.org/abs/2310.03037v1 | # Quantum image edge detection based on eight-direction Sobel operator for NEQR
###### Abstract
Quantum Sobel edge detection (QSED) is a kind of algorithm for image edge detection using quantum mechanism, which can solve the real-time problem encountered by classical algorithms. However, the existing QSED algorithms only consider two- or four-direction Sobel operator, which leads to a certain loss of edge detail information in some high-definition images. In this paper, a novel QSED algorithm based on eight-direction Sobel operator is proposed, which not only reduces the loss of edge information, but also simultaneously calculates eight directions' gradient values of all pixel in a quantum image. In addition, the concrete quantum circuits, which consist of gradient calculation, non-maximum suppression, double threshold detection and edge tracking units, are designed in details. For a \(\mathbf{2^{n}\times 2^{n}}\) image with \(\boldsymbol{q}\) gray-scale, the complexity of our algorithm can be reduced to O(\(\boldsymbol{n^{2}+q^{2}}\)), which is lower than other existing classical or quantum algorithms. And the simulation experiment demonstrates that our algorithm can detect more edge information, especially diagonal edges, than the two- and four-direction QSED algorithms.
**Keywords:** Quantum image processing, Edge detection, Eight-direction Sobel operator, Non-maximum suppression, Double threshold, Edge tracking
## 1 Introduction
In recent years, quantum image processing (QIP) has received widespread attention and deep research as an emerging sub-discipline of quantum computing and image processing [1, 2]. Due to the parallelism and entanglement properties of quantum computing, computational speed can be improved to varying degrees over classical computing for some problems. At present, the demand for high-quality images is increasing, which results in a sharp increase in computation time on classical computers. Therefore, the real-time requirement of digital image processing encounters great challenges. Quantum image processing can utilize the advantages of quantum computing to improve processing speed, which makes it necessary to develop image processing on quantum computers.
Quantum image processing is usually divided into three stages: quantum image representation, quantum image processing algorithms, and measurement of quantum image information. Quantum image representation is a model that represents digital images as quantum images. At present, there are many quantum image representation models, which can be roughly divided into two categories [3, 4]. One is to encode the gray-scale values of the quantum image into the probability amplitudes of the qubits, which can encode images using fewer qubits, such as qubit lattice representation [5], real ket representation [6], entangled images representation [7], flexible representation of quantum image (FRQI) [8], multi-channel RGB images representation of quantum images (MCQI) [9], normal arbitrary quantum superposition state (NAQSS) [10], and quantum probability image encoding representation (QPIE) [11]. When an image is retrieved, a large number of measurements is required to approximate the probability amplitudes, which makes image retrieval difficult. The other category, such as novel enhanced quantum representation (NEQR) [12], novel quantum representation of color digital images (NCQI) [13] and novel quantum image representation based on HSI color space model (QIRHSI) [14], solves this problem well by encoding the gray-scale values in a separate qubit sequence. When an image is retrieved, the gray-scale value of each pixel can be accurately recovered with a few measurements. Therefore, the NEQR model is widely used due to its simplicity of operation. As different quantum image representation models have been proposed, a large number of quantum image processing algorithms have emerged, such as geometrical transformation of quantum image [15], feature extraction of quantum image [16], quantum image watermarking [17], quantum image bilinear interpolation [18], quantum image segmentation [19, 20], quantum image steganography [21], quantum image edge detection [22, 23], etc.
Image edge detection is a fundamental problem in image processing, which can retain the basic framework in the image, remove irrelevant information and reduce the amount of data. Currently, the digital image edge detection algorithms have been widely explored, but the research on quantum counterparts is still in its infancy. Many researchers use different operators, such as Sobel [22], Prewitt [24], LoG [25], etc. for edge detection of quantum images,
among which the Sobel operator is the first choice. In 2015, Zhang et al. [22] first proposed a quantum Sobel edge detection (QSED) algorithm for FRQI images. This algorithm is a quantization of classical Sobel edge detection using quantum circuits, and achieves an exponential acceleration relative to the classical method, which improves real-time performance, but it cannot accurately measure the color information of the image. In 2019, Fan et al. [23] proposed a two-direction QSED algorithm for NEQR images. However, only the edges in the horizontal and vertical directions were considered in their algorithm, which is a significant limitation. In order to improve the accuracy of edge detection, Zhou et al. [26] proposed a four-direction QSED algorithm for generalized quantum image representation (GQIR) images, but its circuit complexity is higher than that of other algorithms. For NEQR images, Chetia et al. [27] also proposed a four-direction QSED algorithm, but its edge detection effect is relatively poor. In order to detect more edge information and reduce circuit complexity, they [28] further proposed an improved version in 2021. As far as we know, the existing QSED algorithms only consider either the two- or four-direction Sobel operator, and the detected edge information is insufficient in some scenarios. Therefore, we have done further research on QSED, and the main works are summarized as follows:
* A NEQR image edge detection algorithm based on eight-direction Sobel operator is proposed.
* Several specific quantum circuits are designed, which can simultaneously calculate eight directions' gradient values of all pixels, and classify the pixels accurately according to the obtained gradient values.
* We verify the superiority and feasibility of our proposed algorithm by analyzing the circuit complexity and performing simulation experiments, respectively.
This paper is organized as follows. Sec. 2 introduces the principle of the NEQR representation model and the classical edge detection of eight-direction Sobel operator. In Sec. 3, some basic quantum operation modules are introduced. Then, a series of quantum circuits of edge detection are designed and the relevant quantum states equations are given. Sec. 4 analyzes the computational complexity of our algorithm and experimental results, and the conclusion is drawn in Sec. 5.
## 2 Related work
### NEQR
A pixel in a digital image contains position and color information. The NEQR uses two entangled qubit sequences to store the grayscale information and position information of the image, and stores the entire image in a superposition of the two qubit sequences. For a grayscale image of size \(2^{n}\times 2^{n}\), the grayscale range is \([0,2^{q}-1]\), and a qubit sequence of length \(q\) is required to store the grayscale of a pixel. Moreover, two qubit sequences of length \(n\) are needed to store the position information of each pixel in the image. The entire representation is the tensor product of these entangled qubit sequences, so that all pixels can be stored and processed simultaneously. Then the NEQR model of a quantum image can be written in the form of the quantum superposition state shown in Eq. (1) [12].
\[|\mathrm{I}\rangle=\frac{1}{2^{n}}\sum_{Y=0}^{2^{n}-1}\sum_{X=0}^{2^{n}-1}|C_{YX}\rangle\otimes|Y\rangle|X\rangle=\frac{1}{2^{n}}\sum_{YX=0}^{2^{2n}-1}\bigotimes_{k=0}^{q-1}|C_{YX}^{k}\rangle\otimes|YX\rangle \tag{1}\]
where \(|C_{YX}\rangle=|C_{YX}^{q-1},C_{YX}^{q-2},\cdots C_{YX}^{1}C_{YX}^{0}\rangle\) represents the quantum image gray-scale values, \(C_{YX}^{k}\in\{0,1\}\), \(k=q-1,q-2,\cdots,0\). \(|YX\rangle=|Y\rangle|X\rangle=|Y_{n-1},Y_{n-2},\cdots Y_{0}\rangle|X_{n-1},X_{ n-2},\cdots X_{0}\rangle\) represents the position of the pixel in a quantum image, \(Y_{t},X_{t}\in\{0,1\}\).
Fig. 1 shows an example of a grayscale image of size 2\(\times\)2, and its corresponding NEQR expression is given as follows:
\[|I\rangle=\tfrac{1}{2}\left(|0\rangle|00\rangle+|100\rangle|01\rangle+|200\rangle|10\rangle+|255\rangle|11\rangle\right)\] \[\qquad=\tfrac{1}{2}\left(\begin{array}{c}|00000000\rangle|00\rangle+|01100100\rangle|01\rangle\\ +|11001000\rangle|10\rangle+|11111111\rangle|11\rangle\end{array}\right) \tag{2}\]
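A minimal classical sketch of this encoding, assuming the state is represented as a dense amplitude vector ordered as \(|C\rangle|Y\rangle|X\rangle\), is:

```python
import numpy as np

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)  # the image of Fig. 1
n, q = 1, 8
state = np.zeros(2 ** (q + 2 * n))         # amplitudes over basis |C>|Y>|X>

for Y in range(2 ** n):
    for X in range(2 ** n):
        idx = (int(img[Y, X]) << (2 * n)) | (Y << n) | X
        state[idx] = 1 / 2 ** n            # uniform amplitude 1/2^n

for i in np.flatnonzero(state):
    C, YX = i >> (2 * n), i & (2 ** (2 * n) - 1)
    print(f"|{C:08b}>|{YX:02b}>  amplitude {state[i]}")
```

Running this prints the four nonzero basis states of Eq. (2), each with amplitude \(1/2\).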
### Classical Sobel edge detection
An image's edges are caused by discontinuities in color intensity: they are the pixels where the color intensity of the image changes the fastest. Based on this principle, many operators for edge detection have appeared. Among them, the Sobel operator is the most widely used. The Sobel operator consists of a set of masks of size \(3\times 3\) to calculate the gradient of pixel color intensity in the image. The basic Sobel operator has two directions and can calculate the horizontal and vertical gradients of the pixels. This detects the horizontal and vertical edges in the image, but the calculated image edges are rectangular, so the detected edges can be further improved. Therefore, to obtain better edge detection effects, the basic Sobel operator can be rotated to obtain Sobel operators in the four directions \(0^{\circ}\), \(45^{\circ}\), \(90^{\circ}\) and \(135^{\circ}\). The \(3\times 3\) Sobel operator in four directions can detect the edges of the image more accurately, but the continuity of the detected edges is not sufficient. To detect the edge pixels in the image more accurately, the Sobel operator can be improved to eight directions: \(0^{\circ}\), \(22.5^{\circ}\), \(45^{\circ}\), \(67.5^{\circ}\), \(90^{\circ}\), \(112.5^{\circ}\), \(135^{\circ}\), \(157.5^{\circ}\)[29]. In addition, coupled with non-maximum suppression and edge tracking, the edge information will be
Figure 1: An example of a 2\(\times\)2 image
more precise and detailed [28]. Fig. 2 shows a \(5\times 5\) pixel neighborhood window. The pixels' gradient values in eight directions can be calculated by the following equations:
\[G^{0}=\left[\begin{array}{cccccc}0&0&0&0&0\\ -1&-2&-4&-2&-1\\ 0&0&0&0&0\\ 1&2&4&2&1\\ 0&0&0&0&0\end{array}\right]*p,G^{22.5}=\left[\begin{array}{cccccc}0&0&0&0&0 \\ 0&-2&-4&-2&0\\ -1&-4&0&4&1\\ 0&2&4&2&0\\ 0&0&0&0&0\end{array}\right]*p\]
\[G^{45}=\left[\begin{array}{cccc}0&0&0&-1&0\\ 0&-2&-4&0&1\\ 0&-4&0&4&0\\ -1&0&4&2&0\\ 0&1&0&0&0\end{array}\right]*p,G^{67.5}=\left[\begin{array}{cccc}0&0&-1&0&0\\ 0&-2&-4&2&0\\ 0&-4&0&4&0\\ 0&-2&4&2&0\\ 0&0&1&0&0\end{array}\right]*p\]
\[G^{90}=\left[\begin{array}{cccc}0&-1&0&1&0\\ 0&-2&0&2&0\\ 0&-4&0&4&0\\ 0&-2&0&2&0\\ 0&-1&0&1&0\end{array}\right]*p,G^{112.5}=\left[\begin{array}{cccc}0&0&1&0&0 \\ 0&-2&4&2&0\\ 0&-4&0&4&0\\ 0&-2&-4&2&0\\ 0&0&-1&0&0\end{array}\right]*p \tag{3}\]
\[G^{135}=\left[\begin{array}{cccc}0&1&0&0&0\\ -1&0&4&2&0\\ 0&-4&0&4&0\\ 0&-2&-4&0&1\\ 0&0&0&-1&0\end{array}\right]*p,G^{157.5}=\left[\begin{array}{cccc}0&0&0&0&0 \\ 0&2&4&2&0\\ -1&-4&0&4&1\\ 0&-2&-4&-2&0\\ 0&0&0&0&0\end{array}\right]*p\]
Among these, \(G^{0}\), \(G^{22.5}\), \(G^{45}\), \(G^{67.5}\), \(G^{90}\), \(G^{112.5}\), \(G^{135}\) and \(G^{157.5}\) represent the image gradient values detected by the eight directional edges of \(0^{\circ}\), \(22.5^{\circ}\), \(45^{\circ}\), \(67.5^{\circ}\), \(90^{\circ}\), \(112.5^{\circ}\), \(135^{\circ}\), \(157.5^{\circ}\), respectively. The \(p\) represents a pixel neighborhood window. Specific calculations are as follows:
\[\begin{array}{c}G^{0}\!=\!p(Y\!-\!2,X\!+\!1)\!+\!2p(Y\!-\!1,X\!+\!1)\!+\!4p(Y,X\!+\!1)\!+\!2p(Y\!+\!1,X\!+\!1)\!+\!p(Y\!+\!2,X\!+\!1)\\ \!-p(Y\!-\!2,X\!-\!1)\!-\!2p(Y\!-\!1,X\!-\!1)\!-\!4p(Y,X\!-\!1)\!-\!2p(Y\!+\!1,X \!-\!1)\!-\!p(Y\!+\!2,X\!-\!1)\\ \!\!G^{22.5}\!=\!p(Y\!+\!2,X)\!+\!2p(Y\!+\!1,X\!+\!1)\!+\!2p(Y\!-\!1,X\!+\!1)\! +\!4p(Y,X\!+\!1)\!+\!4p(Y+\!1,X)\\ \!-p(Y\!-\!2,X)\!-\!2p(Y\!+\!1,X\!-\!1)\!-\!2p(Y\!-\!1,X\!-\!1)\!-\!4p(Y,X\!- \!1)\!-\!4p(Y\!-\!1,X)\\ \!\!G^{45}\!=\!p(Y\!+\!2,X\!-\!1)\!+\!p(Y\!-\!1,X\!+\!2)\!+\!2p(Y\!+\!1,X\!+\!1 )\!+\!4p(Y\!+\!1,X)\!+\!4p(Y,X\!+\!1)\\ \!-p(Y\!+\!1,X\!-\!2)\!-\!p(Y\!-\!2,X\!+\!1)\!-\!2p(Y\!-\!1,X\!-\!1)\!-\!4p(Y \!-\!1,X)\!-\!4p(Y,X\!-\!1)\\ \!\!G^{67.5}\!=\!p(Y,X\!+\!2)\!+\!2p(Y\!+\!1,X\!+\!1)\!+\!2p(Y\!+\!1,X\!-\!1)\! +\!4p(Y\!+\!1,X)\!+\!4p(Y,X\!+\!1)\\ \!-p(Y,X\!-\!2)\!-\!2p(Y\!-\!1,X\!+\!1)\!-\!2p(Y\!-\!1,X\!-\!1)\!-\!4p(Y\!-\!1,X)\!-\!4p(Y,X\!-\!1)\\ \!\!G^{90}\!=\!p(Y\!+\!1,\!X\!-\!2)\!+\!p(Y\!+\!1,\!X\!+\!2)\!+\!2p(Y\!+\!1,X\! -\!1)\!+\!2p(Y\!+\!1,X\!+\!1)\!+\!4p(Y\!+\!1,X)\\ \!-p(Y\!-\!1,X\!-\!2)\!-\!p(Y\!-\!1,X\!+\!2)\!-\!2p(Y\!-\!1,X\!-\!1)\!-\!2p(Y \!-\!1,X\!+\!1)\!-\!4p(Y\!-\!1,X)\\ \!\!G^{112.5}\!=\!p(Y,X\!-\!2)\!+\!2p(Y\!+\!1,X\!-\!1)\!+\!2p(Y\!+\!1,X\!+\!1) \!+\!4p(Y\!+\!1,X)\!+\!4p(Y,X\!-\!1)\\ \!-p(Y,X\!+\!2)\!-\!2p(Y\!-\!1,X\!+\!1)\!-\!2p(Y\!-\!1,X\!-\!1)\!-\!4p(Y\!-\! 1,X)\!-\!4p(Y,X\!+\!1)\\ \!\!G^{135}\!=\!p(Y\!-\!1,\!X\!-\!2)\!+\!p(Y\!+\!1,X\!+\!1)\!+\!2p(Y\!+\!1,X\! -\!1)\!+\!4p(Y\!+\!1,X)\!+\!4p(Y,X\!-\!1)\\ \!-p(Y\!-\!2,X\!-\!1)\!-\!p(Y\!+\!1,X\!+\!2)\!-\!2p(Y\!-\!1,X\!+\!1)\!-\!4p(Y \!-\!1,X)\!-\!4p(Y,X\!+\!1)\\ \!\!G^{157.5}\!=\!p(Y\!+\!2,X)\!+\!2p(Y\!+\!1,X\!-\!1)\!+\!2p(Y\!-\!1,X\!-\!1) \!+\!4p(Y\!+\!1,X)\!+\!4p(Y,X\!-\!1)\\ \!-p(Y\!-\!2,X)\!-\!2p(Y\!+\!1,X\!+\!1)\!-\!2p(Y\!-\!1,X\!+\!1)\!-\!4p(Y\!-\!1,X )\!-\!4p(Y,X\!+\!1)\\ \end{array}\]
The gradient of each pixel is the maximum of the absolute value of the gradient value in eight directions. It can be written as follows:
\[G=\max\left\{|G^{0}|,|G^{22.5}|,|G^{45}|,|G^{67.5}|,|G^{90}|,|G^{112.5}|,|G^{135 }|,|G^{157.5}|\right\} \tag{5}\]
Comparing the gradient value with a threshold \(T\), a pixel is considered part of an edge when \(G\geq T\).
Figure 3: Eight masks of eight-direction Sobel algorithm
Figure 2: The pixel neighborhood window
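For reference, a classical sketch of the masking and maximum steps of Eqs. (3)-(5) is given below; only two of the eight masks are spelled out, and the test image and threshold are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import convolve

# Two of the eight masks from Eq. (3); the other six follow the same pattern.
M0 = np.array([[ 0,  0,  0,  0,  0],
               [-1, -2, -4, -2, -1],
               [ 0,  0,  0,  0,  0],
               [ 1,  2,  4,  2,  1],
               [ 0,  0,  0,  0,  0]], dtype=float)
M90 = np.array([[0, -1, 0, 1, 0],
                [0, -2, 0, 2, 0],
                [0, -4, 0, 4, 0],
                [0, -2, 0, 2, 0],
                [0, -1, 0, 1, 0]], dtype=float)

def gradient(img, masks):
    """Eq. (5): per-pixel maximum of |G^d| over the supplied masks."""
    return np.stack([np.abs(convolve(img, m)) for m in masks]).max(axis=0)

img = np.zeros((16, 16)); img[:, 8:] = 255.0   # a vertical step edge
G = gradient(img, [M0, M90])
T = 0.5 * G.max()                               # assumed threshold
print((G >= T).sum())                           # count of pixels with G >= T
```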
## 3 Quantum image edge detection based on the eight-direction Sobel operator
### Quantum operations
1. Quantum comparator. The quantum comparator (QC) [30] compares the magnitude relationship between two numbers. It takes two \(n\)-qubit sequences \(|A\rangle=|a_{n-1}a_{n-2}\cdots a_{1}a_{0}\rangle\) and \(|B\rangle=|b_{n-1}b_{n-2}\cdots b_{1}b_{0}\rangle\) as the input and \(C_{1}\), \(C_{0}\) as the output. If \(A>B\), then \(C_{1}=1\) and \(C_{0}=0\); if \(A<B\), then \(C_{1}=0\) and \(C_{0}=1\); if \(A=B\), then \(C_{1}=0\) and \(C_{0}=0\). A schematic diagram of the QC is shown in Fig. 4.
2. Cycle shift transformation. The cyclic shift transformation operation (CT) [15, 31, 32] moves all pixels in an image simultaneously by several units in the \(X\) or \(Y\) direction. For an \(n\)-qubit sequence \(|Y\rangle=|y_{n-1}y_{n-2}\cdots y_{1}y_{0}\rangle\), the CT operation can implement addition and subtraction by 1 modulo \(2^{n}\). Fig. 5 shows the schematic diagram of the CT operations \(CT(+1)\) and \(CT(-1)\), which realize \(|(Y+1)\bmod 2^{n}\rangle\) and \(|(Y-1)\bmod 2^{n}\rangle\).
3. Reversible parallel adder. The reversible parallel adder (PA) [33] computes the addition of an \(n\)-qubit sequence \(|A\rangle=|a_{n-1}a_{n-2}\cdots a_{1}a_{0}\rangle\) and an \(n\)-qubit sequence \(|B\rangle=|b_{n-1}b_{n-2}\cdots b_{1}b_{0}\rangle\). It takes \(|A\rangle\) and \(|B\rangle\) as the input and \(|A+B\rangle\) as the output, as shown in Fig. 6.
4. Quantum absolute value operation. The quantum absolute value operation is used to calculate the absolute value of the difference of two integers in a quantum circuit, and subtraction of a binary bit sequence can be converted to the addition of a complement. Quantum subtractor circuits can therefore be designed through a combination of
Figure 4: Quantum circuit realization of QC
Figure 5: Diagram of CT(\(+1\)) and CT(\(-1\)) operation
quantum PA operation and complement operation (CA) [34, 35, 36]. Assuming that \(x=x_{n}x_{n-1}\cdots x_{1}x_{0}\) is a signed binary integer, the highest bit is the sign bit (0 represents \(x\) as a positive number and 1 represents \(x\) as a negative number) and the other bits represent the value. The complement operation for the binary number \(x\) is [37]: \[\left[x\right]_{\rm CA}=\left\{\begin{array}{l}0,x_{n-1}x_{n-2}\cdots x_{1}x_{0},x_{n}=0\\ 1,\overline{x_{n-1}x_{n-2}}\cdots\overline{x_{1}x_{0}},x_{n}=1\end{array}\right.\] (6) Among them, \(\overline{x_{k}}=1-x_{k},k=0,1,\cdots,n-1\). The complement operation is shown in Fig. 7. Thus, the subtraction of two integers can be written as: \[A-B=A+(-B)_{\rm CA}=\left[A+(\overline{B}+1)\right]_{\rm CA}\] (7) Among them, \(\overline{B}=b_{n}\overline{b_{n-1}b_{n-2}}\cdots\overline{b_{1}b_{0}}\). Suppose the value of \(A-B\) is expressed as an \((n+1)\)-bit binary number with a sign bit: \(D=d_{n}d_{n-1}d_{n-2}\cdots d_{1}d_{0}\), where \(d_{n}\) is the sign bit. So, ignoring the sign bit, the absolute value of \(A-B\) is \(d_{n-1}d_{n-2}\cdots d_{1}d_{0}\). Therefore, the quantum circuit for calculating the absolute value \(|A-B|\) is shown in Fig. 8 (a classical sketch of this complement arithmetic follows the list below).
5. Quantum double operation. The quantum double operation (DO) [28, 38] is used to multiply a binary integer by 2. The quantum circuit of this operation, based on quantum Swap gates and auxiliary qubits, is shown in Fig. 9.
Figure 6: Quantum circuit realization of PA operation
Figure 7: Quantum circuit realization of CA
6. Quantum copy operation The quantum copy operation is completed with quantum controlled not-gates (CNOT) and auxiliary qubits [34]. The quantum circuit is shown in Fig. 10.
### Quantum circuit realization for edge detection
In this subsection, we introduce the workflow of the whole edge detection algorithm first. And then, the corresponding quantum circuits according to the workflow are designed.
Figure 11 represents the workflow of the quantum image edge detection algorithm based on eight-direction Sobel operator, mainly consisting of six steps -- quantum image preparation, quantum image set shift transformation, quantum image gradient value calculation, non-maximum suppression, double threshold detection and edge tracking. The original image is represented as a NEQR image firstly. Then, according to the Sobel operator, the quantum
Figure 8: Quantum circuit of AV operation
Figure 10: Quantum circuit realization of Copy
Figure 9: Quantum circuit realization of DO
images obtained in the first step are cycle shift-transformed. Following that, the gradient \(|G\rangle\) for each pixel is calculated by using the Sobel operator. Then, each pixel is processed with non-maximum suppression to eliminate edge false detection and stored as the maximum gradient \(|G^{s}\rangle\). In addition, the gradient values of all pixels are compared with the double threshold to obtain strong and weak edges \(|E\rangle\). Finally, edge tracking is used to obtain the true edge \(|B\rangle\).
**Step 1 NEQR images preparation.** In order to turn a digital image into a quantum image, \((2n+q)\) qubits are required to store an image of size \(2^{n}\times 2^{n}\). Furthermore, \(24q\) extra qubits are required to record the color information of the shifted pixels in the next step, which can be prepared by using the tensor product of auxiliary qubits and the quantum image \(|I\rangle\), i.e.,
\[\begin{array}{l}|0\rangle^{\otimes 24q}\otimes|I_{YX}\rangle=\frac{1}{2^{n}}\sum\limits_{Y=0}^{2^{n}-1}\sum\limits_{X=0}^{2^{n}-1}|0\rangle^{\otimes 24q}|C_{YX}\rangle|Y\rangle|X\rangle\\ =\frac{1}{2^{n}}\sum\limits_{Y=0}^{2^{n}-1}\sum\limits_{X=0}^{2^{n}-1}\underbrace{|0\rangle^{\otimes q}\cdots|0\rangle^{\otimes q}}_{24}|C_{YX}\rangle|Y\rangle|X\rangle\end{array} \tag{8}\]
**Step 2 Quantum image set shift transformation.** Following the steps in Table 1, the neighborhood pixels of the entire image \(|I_{YX}\rangle\) are acquired and stored in additional qubits. In this step, every time a shift operation is performed, we use the Copy operation to copy the gray-scale value information of the shifted pixels into the prepared qubits to get 24 quantum images, and the pixels in the 24 quantum images are simultaneously shifted within the \(5\times 5\) neighborhood pixels using CT operation. Specific quantum operations of any
Figure 11: Workflow of our proposed algorithm
\(5\times 5\) neighborhood pixels are as follows:
\[\begin{array}{l}\frac{1}{2^{n}}\sum\limits_{Y=0}^{2^{n}-1}\sum\limits_{X=0}^{2^{ n}-1}|C_{Y-2,X-2}\rangle\otimes|C_{Y-1,X-2}\rangle\otimes|C_{Y,X-2}\rangle \otimes|C_{Y+1,X-2}\rangle\\ \otimes|C_{Y+2,X-2}\rangle\otimes|C_{Y-2,X-1}\rangle\otimes|C_{Y-1,X-1}\rangle \otimes|C_{Y,X-1}\rangle\otimes|C_{Y+1,X-1}\rangle\\ \otimes|C_{Y+2,X-1}\rangle\otimes|C_{Y-2,X}\rangle\otimes|C_{Y-1,X}\rangle \otimes|C_{Y,X}\rangle\otimes|C_{Y+1,X}\rangle\otimes|C_{Y+2,X}\rangle\\ \otimes|C_{Y-2,X+1}\rangle\otimes|C_{Y-1,X+1}\rangle\otimes|C_{Y,X+1}\rangle \otimes|C_{Y+1,X+1}\rangle\otimes|C_{Y+2,X+1}\rangle\\ \otimes|C_{Y-2,X+2}\rangle\otimes|C_{Y-1,X+2}\rangle\otimes|C_{Y,X+2}\rangle \otimes|C_{Y+1,X+2}\rangle\otimes|C_{Y+2,X+2}\rangle\\ \otimes|Y\rangle|X\rangle\end{array} \tag{9}\]
**Step 3 Gradients calculation**. The gradients of the \(|I_{YX}\rangle\) are calculated using the Sobel operator in eight directions. The specific calculations operation are as follows:
\[\begin{array}{l}||G^{0}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y-2,X+1} \rangle+|2C_{Y-1,X+1}\rangle+|4C_{Y,X+1}\rangle+|2C_{Y+1,X+1}\rangle+|C_{Y+2,X+ 1}\rangle\\ -|C_{Y-2,X-1}\rangle-|2C_{Y-1,X-1}\rangle-|4C_{Y,X-1}\rangle-|2C_{Y+1,X-1} \rangle-|C_{Y+2,X-1}\rangle\\ \end{array}\right|\\ ||G^{22,5}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y+2,X}\rangle+|2C_{Y+1,X+ 1}\rangle+|2C_{Y-1,X+1}\rangle+|4C_{Y,X+1}\rangle+|4C_{Y+1,X}\rangle\\ -|C_{Y-2,X}\rangle-|2C_{Y+1,X-1}\rangle-|2C_{Y-1,X-1}\rangle-|4C_{Y,X-1} \rangle-|4C_{Y-1,X}\rangle\\ \end{array}\right|\\ ||G^{45}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y+2,X-1}\rangle+|C_{Y-1,X+ 2}\rangle+|2C_{Y+1,X+1}\rangle+|4C_{Y+1,X}\rangle+|4C_{Y,X+1}\rangle\\ -|C_{Y+1,X-2}\rangle-|C_{Y-2,X+1}\rangle-|2C_{Y-1,X-1}\rangle-|4C_{Y-1,X} \rangle-|4C_{Y,X-1}\rangle\\ \end{array}\right|\\ ||G^{67,5}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y,X+2}\rangle+|2C_{Y+1,X+ 1}\rangle+|2C_{Y+1,X-1}\rangle+|4C_{Y+1,X}\rangle+|4C_{Y,X+1}\rangle\\ -|C_{Y,X-2}\rangle-|2C_{Y-1,X+1}\rangle-|2C_{Y-1,X-1}\rangle-|4C_{Y-1,X} \rangle-|4C_{Y,X-1}\rangle\\ \end{array}\right|\\ ||G^{90}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y+1,X-2}\rangle+|C_{Y+1,X+ 2}\rangle+|2C_{Y+1,X-1}\rangle+|2C_{Y+1,X+1}\rangle+|4C_{Y+1,X}\rangle\\ -|C_{Y-1,X-2}\rangle-|C_{Y-1,X+2}\rangle-|2C_{Y-1,X-1}\rangle-|2C_{Y-1,X+1} \rangle-|4C_{Y-1,X}\rangle\\ \end{array}\right|\\ ||G^{112.5}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y,X-2}\rangle+|2C_{Y+1,X- 1}\rangle+|2C_{Y+1,X+1}\rangle+|4C_{Y+1,X}\rangle+|4C_{Y,X-1}\rangle\\ -|C_{Y,X+2}\rangle-|2C_{Y-1,X+1}\rangle-|2C_{Y-1,X-1}\rangle-|4C_{Y-1,X} \rangle-|4C_{Y,X+1}\rangle\\ \end{array}\right|\\ ||G^{135}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y-1,X-2}\rangle+|C_{Y+1,X+ 1}\rangle+|2C_{Y+1,X-1}\rangle+|4C_{Y+1,X}\rangle+|4C_{Y,X-1}\rangle\\ -|C_{Y-2,X-1}\rangle-|C_{Y+1,X+2}\rangle-|2C_{Y-1,X+1}\rangle-|4C_{Y-1,X} \rangle-|4C_{Y,X+1}\rangle\\ \end{array}\right|\\ ||G^{157.5}_{YX}\rangle|=\left|\begin{array}{l}|C_{Y+2,X}\rangle+|2C_{Y+1,X- 1}\rangle+|2C_{Y-1,X-1}\rangle+|4C_{Y+1,X}\rangle+|4C_{Y,X-1}\rangle\\ -|C_{Y-2,X}\rangle-|2C_{Y+1,X+1}\rangle-|2C_{Y-1,X+1}\rangle-|4C_{Y-1,X} \rangle-|4C_{Y,X+1}\rangle\\ \end{array}\right|\\ \end{array} \tag{10}\]
Thus, the gradient for each pixel is
\[\begin{array}{l}||G\rangle|=\max\left\{||G^{0}\rangle|,||G^{22.5}\rangle|,||G^ {45}\rangle|,||G^{67.5}\rangle|,||G^{90}\rangle|,||G^{112.5}\rangle|,||G^{135} \rangle|,||G^{157.5}\rangle|\right\}\end{array} \tag{11}\]
Through quantum operations such as the quantum absolute value operation and the quantum comparator, the gradient of each pixel can be obtained from Eq. (11). The gradient values \(|G\rangle\) can be expressed as:
\[\begin{array}{l}|G\rangle=\frac{1}{2^{n}}\sum\limits_{Y=0}^{2^{n}-1}\sum \limits_{X=0}^{2^{n}-1}|N\rangle|G^{d}_{YX}\rangle|Y\rangle|X\rangle\end{array} \tag{12}\]
where \(d=\)0\({}^{\circ}\), 22.5\({}^{\circ}\), 45\({}^{\circ}\), 67.5\({}^{\circ}\), 90\({}^{\circ}\), 112.5\({}^{\circ}\), 135\({}^{\circ}\), 157.5\({}^{\circ}\); \(|N\rangle=|1\rangle\) for gradient values and \(|N\rangle=|0\rangle\) for non-gradient values.
\begin{table}
\begin{tabular}{l} \hline
1. Input: the original NEQR image \(I_{YX}\),\(|I_{YX}\rangle=\frac{1}{2^{x}}\sum\limits_{Y=0}^{2^{x}-1}\sum\limits_{X=0}^{2^{x}-1}|C_{YX }\rangle|Y\rangle|X\rangle\) \\
[MISSING_PAGE_POST]
… two units leftwards and two units downwards to the original position, then \(|I_{YX}\rangle=CT(X-)CT(X-)CT(Y+)CT(Y+)|I_{Y+2,X-2}\rangle=\frac{1}{2^{n}}\sum\limits_{Y=0}^{2^{n}-1}\sum\limits_{X=0}^{2^{n}-1}|C_{Y+2,X-2}\rangle|Y\rangle|X\rangle\) \\ \hline \end{tabular}
\end{table}
Table 1: Computation prepared algorithm for shifting the image
Figures 12–15 show the quantum circuits for calculating the gradient values in the eight directions. The overall quantum circuit for the gradient calculation of the quantum image is shown in Fig. 16. The oblique lines in the circuits represent \(n\) qubits; the measurements and some auxiliary qubits are omitted.
**Step 4 The non-maximum suppression.** Non-maximum suppression sets the grayscale value of the current pixel to 0 when its gradient value is smaller than the gradient values of the two pixels along its gradient direction; such a pixel is a non-maximum pixel. If the gradient value of the current pixel is greater than or equal to the gradient values of the two pixels along its gradient direction, the current pixel is determined to be a maximum point and is retained. In this way the points with the maximum local gradient values are retained, which eliminates false edge detections. In this paper, we use the Sobel operator to calculate the gradient values for all eight directions (\(0^{\circ}\), \(22.5^{\circ}\), \(45^{\circ}\), \(67.5^{\circ}\),
Figure 12: Quantum circuit realization for gradient value calculation of a quantum image in the \(0^{\circ}\) and \(22.5^{\circ}\) directions
\(90^{\circ}\), \(112.5^{\circ}\), \(135^{\circ}\), \(157.5^{\circ}\)). Quantum comparators are used to find the pixels with the maximum local gradient values in the gradient image \(|G\rangle\) obtained with the Sobel operator in eight directions. Each pixel's information is obtained from the \(5\times 5\) neighborhood window using the previously prepared NEQR image set. The quantum gradient image \(|G^{S}\rangle\) after non-maximum suppression can be written as:
\[|G^{S}\rangle=\frac{1}{2^{n}}\sum_{Y=0}^{2^{n}-1}\sum_{X=0}^{2^{n}-1}|M\rangle| G_{YX}\rangle|Y\rangle|X\rangle \tag{13}\]
where \(|M\rangle=|1\rangle\) indicates that the current pixel is a maximum pixel point and \(|M\rangle=|0\rangle\) indicates that it is a non-maximum pixel point. Fig. 17 presents the quantum circuit design for non-maximum suppression. **Step 5 Double threshold detection**. After non-maximum suppression, the remaining pixels represent the edges in the image more accurately, but they are still affected by some residual noise. To address this problem, a
Figure 13: Quantum circuit realization for gradient value calculation of a quantum image in the \(45^{\circ}\) and \(67.5^{\circ}\) directions
double threshold is used for detection. A high threshold \(T_{H}\) and a low threshold \(T_{L}\) are selected to divide the edge points. Pixels with gradient values less than the low threshold are determined to be non-edge points, pixels with gradient values between the high and low thresholds are determined to be weak edge points, and pixels with gradient values greater than the high threshold are determined to be strong edge points. All pixels' gradient values in the \(5\times 5\) neighborhood are compared with the double threshold, where the relationship between the high and low thresholds is \(|T_{L}\rangle=\frac{1}{3}|T_{H}\rangle\). The quantum image obtained after double threshold detection can be expressed as:
\[|E\rangle=\frac{1}{2^{n}}\sum_{Y=0}^{2^{n}-1}\sum_{X=0}^{2^{n}-1}|E_{YX}\rangle |Y\rangle|X\rangle \tag{14}\]
Figure 14: Quantum circuit realization for gradient value calculation of a quantum image in the \(90^{\circ}\) and \(112.5^{\circ}\) directions
where \(E_{YX}=E^{0}_{YX}E^{1}_{YX}\) with \(E^{h}_{YX}\in\{0,1\}\), \(h=0,1\). The correspondence between the values of \(E_{YX}\) and the three kinds of edge points is: if \(E_{YX}=10\), the pixel is a strong edge point; if \(E_{YX}=01\), it is a weak edge point; if \(E_{YX}=00\), it is not an edge point. The quantum circuit of double threshold detection is shown in Fig. 18.
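A classical sketch of this classification (our illustration; the treatment of gradient values exactly equal to a threshold is our assumption, since the text specifies only strict inequalities):

```python
import numpy as np

def double_threshold(grad, t_high):
    """Label pixels with the two-bit code E_YX of Eq. (14):
    10 = strong edge, 01 = weak edge, 00 = non-edge,
    with T_L = T_H / 3 as in the text."""
    t_low = t_high / 3.0
    strong = grad > t_high
    weak = (grad >= t_low) & ~strong
    e = np.zeros(grad.shape, dtype=np.uint8)
    e[strong] = 0b10
    e[weak] = 0b01
    return e
```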
**Step 6 Edge tracking**. After double threshold detection, the pixels classified as strong edge points have been confirmed as edges, because these edges are real edges in the image. The weak edge points may be real edges or may be caused by factors such as noise, so they require further processing through edge tracking. When a strong edge point exists in the 24 neighborhood of a pixel centered on a weak edge point, the weak edge point is kept as a true edge point; otherwise it is determined to be a false edge point. Based on the division of strong and weak edge points in the fifth step, if \(E_{YX}=01\) then the current pixel is a weak edge point. Under the control of the auxiliary qubits, the double threshold detection information of the \(5\times 5\) neighborhood
Figure 15: Quantum circuit realization for gradient value calculation of a quantum image in the \(135^{\circ}\) and \(157.5^{\circ}\) directions
pixels is obtained using the cyclic shift operation, and the double threshold detection results of the pixels in the 24 neighborhood are compared using the quantum comparator. The double threshold detection result of each pixel in the neighborhood is then compared with the quantum sequence \(|00\rangle\) to test for the presence of strong edge points in the 24 neighborhood, with the outcome recorded in the auxiliary qubit \(|B_{YX}\rangle\). If \(B_{YX}=1\), strong edge points are present; otherwise they are not. The final quantum state of the quantum edge image after the edge tracking operation is represented as follows:
\[|B\rangle=\frac{1}{2^{n}}\sum_{Y=0}^{2^{n}-1}\sum_{X=0}^{2^{n}-1}|B_{YX}\rangle |Y\rangle|X\rangle \tag{15}\]
where \(B_{YX}=1\) indicates an edge point and \(B_{YX}=0\) a non-edge point. The quantum circuit implementation of edge tracking is shown in Fig. 19.
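Classically, this step keeps a weak edge point exactly when its 24 neighborhood contains a strong edge point. A sketch (our illustration, with cyclic wraparound standing in for the CT shifts):

```python
import numpy as np

def edge_tracking(e):
    """Compute B_YX of Eq. (15): strong edge points (code 10) are
    kept, and a weak edge point (code 01) is kept iff some pixel of
    its 24 neighborhood (5x5 window minus the centre) is strong."""
    strong = (e == 0b10)
    weak = (e == 0b01)
    near_strong = np.zeros_like(strong)
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            if (dy, dx) != (0, 0):
                near_strong |= np.roll(np.roll(strong, dy, axis=0),
                                       dx, axis=1)
    return strong | (weak & near_strong)
```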
## 4 Circuit complexity and experiment analysis
In this section, we first discuss the circuit complexity of edge detection based on the eight-direction Sobel operator and compare the complexity of our algorithm with some existing edge detection algorithms. Then, simulation experiments are presented to show the effect of edge detection on quantum images.
Figure 16: Quantum circuit realization of the gradient calculation of image
Figure 17: Quantum circuit realization of non-maximum suppression
### Circuit complexity analysis
The NOT gate and the CNOT gate are commonly used in quantum computing; this paper considers their circuit complexity to be 1. We can therefore measure the complexity of a quantum circuit by the number of basic logic gates. In Ref. [39], Nielsen et al. point out that the three-qubit Toffoli gate can be decomposed into five two-qubit gates, so the complexity of the Toffoli gate is 5. The \(C_{n-1}(x)\) gate (a NOT gate with \(n-1\) control qubits) can be decomposed into a quantum circuit with \(2(n-1)\) Toffoli gates and 1 CNOT gate [39], so the circuit complexity of the \(C_{n-1}(x)\) gate is \(10n-9\).
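Substituting the Toffoli complexity into this decomposition gives the stated count explicitly:

\[2(n-1)\times 5+1=10n-9.\]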
Taking an image of size \(2^{n}\times 2^{n}\) as an example, we discuss the complexity of the circuit in six steps: quantum image preparation, quantum image set cyclic shift, gradient calculation based on the eight-direction Sobel operator, non-maximum suppression, double threshold detection, and edge tracking.
In step 1, the digital image is prepared as an NEQR quantum image. The computational complexity of this step is O(\(qn2^{2n}\)) [12].
In step 2, the quantum image is cyclically shifted. This step requires Copy operations [23] and CT operations [15, 31, 32]. The complexity is O(\(n^{2}\)) [32].
In step 3, the gradient of each pixel is calculated. The quantum adder, quantum double operation, absolute value operation, quantum comparator and swap operation are needed. The complexity of each \(q\)-qubit quantum adder operation and of the quantum double operation is O(\(q\)) [38]. The circuit complexity of the absolute value operation is O(\(q^{2}\)) [23, 35]. The quantum comparator has a complexity of O(\(n\)) [30]. The complexity of the Swap operation is O(\(n\)) [38]. Therefore, the circuit complexity of this step is O(\(n+q^{2}\)).
In step 4, non-maximum pixels are suppressed. The 25 additional images and the \(5\times 5\) neighborhood window pixels are replicated with the Copy operation and then cyclically shifted with the CT operation. In addition, this step requires quantum comparators and Toffoli gates to find the pixel with the maximum gradient value. Therefore, the circuit complexity of this step is O(\(n^{2}\)).
Figure 18: The quantum circuit realization of the double threshold detection
Figure 19: The quantum circuit implementation of edge tracking
In step 5, the double threshold is used to classify the edge pixels. The quantum comparator, Toffoli gate and CNOT gate are used. Therefore, the circuit complexity of this step is O(\(n\)).
In step 6, edge tracking is performed on the edge pixels. This step requires the CT operation, quantum comparators and some Toffoli gates. Therefore, the circuit complexity of this step is O(\(n^{2}\)).
According to the complexity analysis of the above six steps, the computational complexity of the circuit realization of QSED for a \(2^{n}\times 2^{n}\) classical image is
\[\mathrm{O}[qn2^{2n}+n^{2}+(n+q^{2})+n^{2}+n+n^{2}]\] \[=\mathrm{O}(qn2^{2n}+n^{2}+q^{2})\]
The QIP algorithm operates on quantum images rather than classical images, but it is currently impossible to obtain quantum images directly, so we first need to convert the classical images into quantum images. For the completeness of the paper, we also analyze the complexity of the quantum image preparation process, although typically the quantum image preparation and measurement processes are not considered part of quantum image processing [23, 28]. Therefore, for \(2^{n}\times 2^{n}\) images, the complexity of our algorithm is O(\(n^{2}+q^{2}\)). On classical computers, edge detection of an image of size \(2^{n}\times 2^{n}\) must process each pixel individually, so the complexity of a classical edge detection algorithm is no less than O(\(2^{2n}\)) [23]. Thus, our scheme achieves an exponential acceleration relative to the classical edge detection algorithm, so the real-time problem in classical image edge detection can be solved well. In Table 2, the computational complexity of our algorithm is compared with some other edge detection schemes, and our algorithm achieves a clear improvement.
### Experiment analysis
Due to the limitations of current technology, there are no quantum computers suitable for our use. To test our proposed scheme, all experiments were simulated on a classical computer with MATLAB 2014. Unit vectors and unitary matrices in MATLAB stand in for quantum states and quantum gates, respectively. Therefore, although a simulation on a classical computer cannot truly realize the quantum model, it can simulate
\begin{table}
\begin{tabular}{l l l l} \hline Algorithm & Encoding model & Complexity & Directions \\ \hline Sobel [29, 39, 40] & - & O(\(2^{2n}\)) & 2/4/8 \\ Fan [23] & NEQR & O(\(n^{2}+2^{q+4}\)) & 2 \\ R.Chetia [28] & NEQR & O(\(n^{2}+q^{3}\)) & 4 \\ Our scheme & NEQR & O(\(n^{2}+q^{2}\)) & 8 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of the complexity of the Sobel edge detection algorithm
the execution steps of quantum computation, which can theoretically verify the effectiveness of the quantum algorithm.
Five common test images were selected: Lena, Cameraman, Livingroom, House and Pirate. The size of the images is 512\(\times\)512. We compare the quantum two-direction and four-direction Sobel operator edge detection algorithms with our proposed eight-direction Sobel operator edge detection algorithm.
As can be seen from Fig. 20, our algorithm detects more edge information, especially in the more detailed parts, such as Lena's hat, the cameraman's grass, the Livingroom's curtains, the House's wall and the Pirate's hair accessories. This is because we employ the \(5\times 5\) Sobel mask to detect image edges in eight directions and further process the edge information using non-maximum suppression, double threshold detection and edge tracking, from which we obtain a clearer edge profile and more edge information.
In addition, we use the mean square error (MSE) to judge the quality of the resulting image; it is one of the most commonly used measures of image quality. In this setting, the fewer false edges in the detected image, the smaller the MSE value. For two gray-scale images Q and R of size \(2^{n}\times 2^{n}\), the MSE is defined as
\[MSE=\frac{1}{2^{2n}}\sum_{Y=0}^{2^{n}-1}\sum_{X=0}^{2^{n}-1}[Q(Y,X)-R(Y,X)]^{2} \tag{16}\]
where Y and X represent the position information of the images.
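Eq. (16) translates directly into code; a minimal sketch:

```python
import numpy as np

def mse(q, r):
    """Mean square error of Eq. (16) for two grayscale images Q, R
    of equal size 2^n x 2^n."""
    d = q.astype(np.float64) - r.astype(np.float64)
    return float(np.mean(d * d))
```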
From Tab. 3, it can be seen that the MSE values of all images detected by our algorithm are less than those of the images detected by the other two algorithms, because our algorithm detects fewer false edges. To sum up, our algorithm not only detects more edge pixels but also produces fewer false edges.
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Input image} & \multicolumn{3}{c}{MSE} \\ \cline{2-4} & Two-direction QSED & Four-direction QSED & Our algorithm \\ \hline Lena & 159.16 & 153.19 & 147.27 \\ Cameraman & 186.05 & 183.06 & 181.58 \\ Livingroom & 169.32 & 167.88 & 164.80 \\ House & 217.95 & 217.26 & 216.01 \\ Pirate & 159.68 & 158.39 & 154.49 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the MSE values of the different QSED algorithms
Figure 20: a Five common and original test images. b The result images of the two-direction Sobel operator edge detection algorithm. c The result images of the four-direction Sobel operator edge detection algorithm. d The result images of our proposed algorithm.
## 5 Conclusion
In this paper, based on the eight-direction Sobel operator, a novel quantum image edge detection algorithm is proposed which simultaneously calculates the gradient values in eight directions for all pixels of a quantum image. In addition, it combines non-maximum suppression, double threshold detection and edge tracking, which allows more accurate edge information to be detected. The concrete quantum circuit realizations reported here show that our algorithm can detect edges with complexity O(\(n^{2}+q^{2}\)) for an NEQR image of size \(2^{n}\times 2^{n}\). Compared with the classical and some existing QSED algorithms, our algorithm achieves a significant improvement in terms of both edge information and circuit complexity.
At present, the number of qubits available on quantum computers is relatively small and cannot meet the requirements of quantum image processing at any realistic scale; we therefore performed an experimental simulation on a classical computer in this paper. In addition, we performed the experimental simulations in an ideal scenario and did not consider the effects of noise. How to introduce noise into our scenario and design a noise-resistant QSED algorithm is our future research work.
Acknowledgments. This work is supported by the National Natural Science Foundation of China (62071240, 61802175), the Natural Science Foundation of Jiangsu Province (BK20171458), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
All data generated or analysed during this study are included in this published article [and its supplementary information files].
|
2301.03776 | Supersolvable saturated matroids and chordal graphs | A matroid is supersolvable if it has a maximal chain of flats each of which
is modular. A matroid is saturated if every round flat is modular. In this
article we present supersolvable saturated matroids as analogues to chordal
graphs, and we show that several results for chordal graphs hold in this
matroid context. In particular, we consider matroid analogues of the reduced
clique graph and clique trees for chordal graphs. The latter is a
maximum-weight spanning tree of the former. We also show that the matroid
analogue of a clique tree is an optimal decomposition for the matroid parameter
of tree-width. | Dillon Mayhew, Andrew Probert | 2023-01-10T04:12:06Z | http://arxiv.org/abs/2301.03776v2 | # Supersolvable saturated matroids and chordal graphs
###### Abstract.
A matroid is supersolvable if it has a maximal chain of flats each of which is modular. A matroid is saturated if every round flat is modular. In this article we present supersolvable saturated matroids as analogues to chordal graphs, and we show that several results for chordal graphs hold in this matroid context. In particular, we consider matroid analogues of the reduced clique graph and clique trees for chordal graphs. The latter is a maximum-weight spanning tree of the former. We also show that the matroid analogue of a clique tree is an optimal decomposition for the matroid parameter of tree-width.
## 1. Introduction
The study of chordal graphs is well established, and dates to work by Dirac [3] and Berge [1]. Our contribution here is to consider a new analogue of chordality for matroids. A graph is chordal if every cycle with at least four vertices has a chord. This leads fairly directly to the definition of a chordal matroid used by Cordovil, Forge, and Klein [2]. If \(C\) is a circuit in a matroid, then a _chord_ of \(C\) is an element \(z\notin C\) such that there is a partition of \(C\) into parts \(A\) and \(B\) where \(A\cup z\) and \(B\cup z\) are both circuits. We will say that a matroid is _\(C\)-chordal_ if every circuit with size at least four has a chord. (Cordovil et al. call such a matroid chordal, but we will try to avoid confusion by reserving that term solely for graphs.)
In this article we concentrate on a different matroid analogue for chordality. An alternative characterisation of chordal graphs is due to Dirac [3]: a vertex is _simplicial_ if its neighbours are pairwise adjacent. Now \(G\) is chordal if and only if it has a simplicial vertex \(v\) such that \(G-v\) is chordal. This definition is well suited for matroid purposes, because the edges not incident with a simplicial vertex comprise a modular hyperplane in the corresponding graphic matroid. (A flat \(F\) is _modular_ if \(r(F)+r(F^{\prime})=r(F\cap F^{\prime})+r(F\cup F^{\prime})\) for every flat \(F^{\prime}\). A hyperplane is modular if and only if it has a non-empty intersection with every rank-two flat of the matroid.) Now we can recursively consider the class of matroids \(\mathcal{M}\) such that \(M\) is in \(\mathcal{M}\) if and only if \(M\) has a modular hyperplane \(H\) where restricting \(M\) to \(H\) produces a matroid in \(\mathcal{M}\). The class \(\mathcal{M}\) is exactly the family of _supersolvable_ matroids, introduced by Stanley [11].
Figure 1 shows a geometric representation of a rank-four matroid, \(M\). We see that the hyperplane \(F_{3}=\{1,2,3,4,5,6,7\}\) is modular, since every
rank-two flat has a non-empty intersection with \(F_{3}\). In the same way, \(F_{2}=\{1,2,3,4\}\) is a modular hyperplane of the restriction to \(F_{3}\), and \(F_{1}=\{1\}\) is a modular hyperplane of the restriction to \(F_{2}\). Finally, \(\emptyset\) is a modular hyperplane of the restriction to \(F_{1}\). It follows that \(M\) is supersolvable.
It turns out that the condition of supersolvability is not strong enough for our purposes because supersolvable matroids may fail to have properties shared by all graphic matroids. To expand on this point, we consider matroid analogues of cliques in a graph. Let \(F\) be a flat of a matroid. Then \(F\) is _round_ if there is no pair of flats \((F_{1},F_{2})\) such that \(F=F_{1}\cup F_{2}\) and \(F_{1}\) and \(F_{2}\) are properly contained in \(F\). Let \(G\) be a graph and let \(F\) be a flat of the graphic matroid \(M(G)\). Then \(F\) is round if and only if \(G[F]\) is a clique (Proposition 3.7). Therefore we think of round flats as the matroid analogues of cliques. In graphic matroids every round flat is modular but this is not true for matroids in general, nor is it true for supersolvable matroids. For example, if \(M\) is the matroid in Figure 1, then \(\{4,6,7,8,9,10\}\) is a round hyperplane, since it cannot be expressed as the union of two flats that it properly contains. However, it is not modular, since it has an empty intersection with the rank-two flat \(\{3,5\}\).
We define a matroid to be _saturated_ if every round flat is modular. Thus saturated matroids can be thought of as analogues to graphs. To this condition, we add the condition of supersolvability to obtain our matroid analogue of chordal graphs. So our fundamental objects of study are supersolvable and saturated matroids. The graphic matroid \(M(G)\) is supersolvable and saturated if and only if \(G\) is chordal (Corollary 3.8 and Proposition 3.9). Many other examples arise: for example, the matroids that are constructed using generalised parallel connections, starting with the projective geometries of a given order. Any such matroid is supersolvable and saturated.
The class of supersolvable saturated matroids is properly contained in the class of \(C\)-chordal matroids (Proposition 3.6). So our focus is on a proper subclass of \(C\)-chordal matroids. The relationships between the conditions
Figure 1. A supersolvable matroid
of supersolvability, saturation, and \(C\)-chordality are illustrated in Figure 2. We will justify this Venn diagram in Section 3.1.
Our main focus is showing that many facts about chordal graphs have analogues in the class of supersolvable saturated matroids. In particular, Section 4 introduces one of our main ideas: the _rotunda graph_ of such a matroid. A _rotunda_ is a maximal round flat. The vertices of the rotunda graph are the rotunda of the matroid. Assume that \(R_{1}\) and \(R_{2}\) are distinct rotunda with a non-empty intersection and that \((F_{1},F_{2})\) is a pair of modular flats of \(M\) such that \(E(M)=F_{1}\cup F_{2}\) and neither \(F_{1}\) nor \(F_{2}\) is equal to \(E(M)\). If \(R_{i}\subseteq F_{i}\) for \(i=1,2\) and \(F_{1}\cap F_{2}=R_{1}\cap R_{2}\), then we make \(R_{1}\) and \(R_{2}\) adjacent in the rotunda graph. The idea of a rotunda graph is analogous to the _reduced clique graph_ introduced by Galinier, Habib, and Paul in [4] (where it is called a clique graph). If \(G\) is a chordal graph, then the vertices of the reduced clique graph of \(G\) are the maximal cliques of \(G\). If \(C\) and \(C^{\prime}\) are maximal cliques then they are adjacent if \(C\cap C^{\prime}\neq\emptyset\) and any path from a vertex of \(C-C^{\prime}\) to a vertex of \(C^{\prime}-C\) uses a vertex of \(C\cap C^{\prime}\).
If \(G\) is a chordal graph then the reduced clique graph of \(G\) and the rotunda graph of \(M(G)\) need not be the same, but this is only because \(G\) may have low connectivity. In Proposition 4.4 we show that when \(G\) is \(2\)-connected the reduced clique graph of \(G\) and the rotunda graph of \(M(G)\) are identical. We can go further than this: the class of reduced clique graphs and the class of rotunda graphs are identical.
**Theorem 1.1**.: _Let \(H\) be a graph. Then \(H\) is isomorphic to the rotunda graph of a supersolvable saturated matroid if and only if \(H\) is isomorphic to the reduced clique graph of a chordal graph._
We prove this theorem in Section 4.1. It tells us that although a supersolvable saturated matroid may be far from graphic, the structure of its
Figure 2. Three matroid definitions
rotunda will be mirrored by the structure of maximal cliques in a chordal graph.
Knowing that these two classes of graphs are identical allows us to deduce facts about the structure of rotunda graphs from the facts about reduced clique graphs that we list in [9]. For example, in [9] we show that the reduced clique graph of a chordal graph may have induced cycles of length three, four, or six, but not five. Therefore the same statement applies to rotunda graphs. We conjecture that a reduced clique graph cannot have an induced cycle of length greater than six, and we therefore conjecture that the same statement holds for rotunda graphs. In [9] we show that no rotunda graph can be isomorphic to a cycle of length at least four. Thus the class of rotunda graphs is properly contained in the class of graphs with no induced cycle of length five. We also believe that every chordal graph is isomorphic to the rotunda graph of some supersolvable saturated matroid, and that there is a polynomial-time algorithm for recognising when a given graph is isomorphic to some rotunda graph.
A _clique tree_ of the graph \(G\) is a tree \(T\) whose nodes are the maximal cliques of \(G\), where the set of maximal cliques containing an arbitrary vertex \(v\in V(G)\) induces a subtree of \(T\). Clique trees were introduced by Gavril [5], who showed that \(G\) has a clique tree if and only if \(G\) is chordal. The analogue for a supersolvable saturated matroid \(M\) is a _rotunda tree_. In this case the nodes of the rotunda tree are the rotunda of \(M\), and the set of rotunda containing an arbitrary element \(x\in E(M)\) induces a subtree. A matroid may have a rotunda tree without being supersolvable and saturated. For example, the matroid in Figure 1 is not saturated, but it does have a rotunda tree (having two nodes, corresponding to \(\{1,2,3,4,5,6,7\}\) and \(\{4,6,7,8,9,10\}\)).
Galinier et al. [4] weight the edges of reduced clique graphs. The edge that joins maximal cliques \(C\) and \(C^{\prime}\) is weighted with \(|C\cap C^{\prime}|\). They then prove that a spanning tree of the reduced clique graph is a clique tree if and only if it has maximum total weight amongst all spanning trees. (Their proof contains a flaw, which we explain and correct in [9].) In our analogous result we weight the edges of rotunda graphs. The edge that joins rotunda \(R\) and \(R^{\prime}\) is weighted with the rank of \(R\cap R^{\prime}\). (Our techniques are general enough that we could also weight it with \(|R\cap R^{\prime}|\)). In Section 5 we prove the following.
**Theorem 1.2**.: _Let \(M\) be a connected supersolvable and saturated matroid. Every rotunda tree of \(M\) is a spanning tree of the rotunda graph of \(M\). Every edge of the rotunda graph is contained in a rotunda tree. Moreover, a spanning tree is a rotunda tree if and only if it has maximum weight amongst all spanning trees._
In Section 6 we concentrate on tree-decompositions of optimal width. In unpublished work, Heggernes [7] observed that a clique tree of a chordal graph is an optimal decomposition of the graph with respect to the parameter of tree-width. A matroid analogue of tree-width was developed by
Hlineny and Whittle [8], and in Theorem 6.5 we prove the matroid analogue of Heggernes's observation: any rotunda tree of a supersolvable and saturated matroid is an optimal decomposition with respect to the matroid parameter of tree-width.
We refer to [10] for the foundations of matroid theory.
## 2. Preliminaries
### Chordal graphs
Let \(G\) be a graph. If \(X\) is a set of vertices in \(G\), then \(G[X]\) is the subgraph induced by \(X\). We say that a path \(P\) is _\(X\)-avoiding_ if any vertex of \(X\) in \(P\) is a terminal vertex of \(P\). A _clique_ of \(G\) is a set of pairwise adjacent vertices. We blur the distinction between a subgraph, its vertex set, and its edge set. So for example we may refer to a clique of the graph \(G\) as being a flat in the cyclic matroid \(M(G)\).
If \(C\) is a cycle of a graph, then a _chord_ is an edge that joins two distinct vertices of the cycle without being an edge of the cycle or parallel to any such edge. A graph is _chordal_ if every cycle with at least four vertices has a chord. Thus a graph is chordal if and only if it has no induced cycle with more than three vertices. Clearly every induced subgraph of a chordal graph is chordal.
Let \(G\) be a graph, and let \(v\) be a vertex of \(G\). If deleting \(v\) from \(G\) produces a graph with more connected components than \(G\), then \(v\) is a _cut-vertex_ of \(G\). A connected graph with no cut-vertex is _\(2\)-connected_.
An ordering \(v_{1},\ldots,v_{n}\) of the vertices in a graph is a _perfect elimination order_ if the neighbours of \(v_{i}\) amongst \(v_{i+1},\ldots,v_{n}\) form a clique, for each \(i\). A proof of the following can be found in [6, Theorem 4.1].
**Proposition 2.1**.: _A graph is chordal if and only if it has a perfect elimination order._
### Modularity
Let \(M\) be a matroid. The flat \(F\) is _modular_ if \(r(F)+r(F^{\prime})=r(F\cup F^{\prime})+r(F\cap F^{\prime})\) whenever \(F^{\prime}\) is a flat. Note that the entire ground set is trivially a modular flat. We also see that the unique rank-zero flat is modular. The following is proved in [10, Proposition 6.9.2].
**Proposition 2.2**.: _Let \(F\) be a flat of the matroid \(M\). Then \(F\) is modular if and only if \(r(F)+r(F^{\prime})=r(F\cup F^{\prime})\) whenever \(F^{\prime}\) is a flat such that \(F\cap F^{\prime}=\emptyset\)._
It follows easily that if \(F\) is a hyperplane, then \(F\) is modular if and only if \(r(F\cap L)=1\) whenever \(L\) is a rank-\(2\) flat not contained in \(F\). We often use an equivalent definition.
**Proposition 2.3**.: _Let \(F\) be a flat of the matroid \(M\). Then \(F\) is modular if and only if there is no circuit \(C\subseteq F\cup F^{\prime}\) containing elements from both \(F\) and \(F^{\prime}\), whenever \(F^{\prime}\) is a flat that is disjoint from \(F\)._
Proof.: Let \(F^{\prime}\) be an arbitrary flat that is disjoint from \(F\). There is no circuit of \(M|(F\cup F^{\prime})\) that contains elements of both \(F\) and \(F^{\prime}\) if and only
if \(r(F)+r(F^{\prime})=r(F\cup F^{\prime})\)[10, Proposition 4.2.1]. Now the result follows by Proposition 2.2.
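For small matroids these modularity conditions can be checked by brute force from a rank oracle. The following Python sketch is our illustration, not part of the paper's development: the vectors define a small hypothetical linear matroid, flats are enumerated as closures of subsets, and the modularity equation is tested directly. This is only feasible for ground sets of modest size.

```python
import itertools
import numpy as np

# rank oracle for a small linear matroid over the reals
# (hypothetical example vectors, not a matroid from this paper)
VEC = {1: (1, 0, 0), 2: (0, 1, 0), 3: (1, 1, 0),
       4: (0, 0, 1), 5: (1, 0, 1)}
E = frozenset(VEC)

def rank(S):
    return np.linalg.matrix_rank(np.array([VEC[e] for e in S]).T) if S else 0

def flats(ground=E):
    """All flats of the restriction to `ground`, as closures of subsets."""
    cl = lambda S: frozenset(e for e in ground if rank(S | {e}) == rank(S))
    return {cl(frozenset(c)) for k in range(len(ground) + 1)
            for c in itertools.combinations(sorted(ground), k)}

def is_modular(F, ground=E):
    """r(F) + r(F') = r(F ∪ F') + r(F ∩ F') for every flat F'."""
    return all(rank(F) + rank(G) == rank(F | G) + rank(F & G)
               for G in flats(ground))

# e.g. the modular hyperplanes of the example matroid:
# [H for H in flats() if rank(H) == rank(E) - 1 and is_modular(H)]
```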
The next result combines Proposition 6.9.5 and Corollary 6.9.8 from [10].
**Proposition 2.4**.: _Let \(F\) and \(F^{\prime}\) be modular flats of the matroid \(M\). Then \(F\cap F^{\prime}\) is a modular flat of \(M\). If \(F\subseteq X\subseteq E(M)\) then \(F\) is a modular flat of \(M|X\)._
**Proposition 2.5**.: _Let \(F\) be a modular flat of the matroid \(M\) and let \(C\) be a circuit of \(M\) such that \(C\cap F\) is non-empty. Then \(\operatorname{cl}(C-F)\cap F\) is non-empty._
Proof.: If \(\operatorname{cl}(C-F)\cap F=\emptyset\) then Proposition 2.3 is violated, since \(\operatorname{cl}(C-F)\) is a flat that is disjoint from \(F\), but \(C\) is a circuit contained in \(F\cup\operatorname{cl}(C-F)\) that contains elements from both \(F\) and \(\operatorname{cl}(C-F)\).
Let \(H\) be a modular hyperplane of the matroid \(M\), and let \(C^{*}\) be the complementary cocircuit. Let \(x\) and \(y\) be distinct rank-one flats contained in \(C^{*}\). Then \(r(H\cap\operatorname{cl}(x\cup y))=1\), because \(H\) is modular. We say that the rank-one flat \(H\cap\operatorname{cl}(x\cup y)\) is the _projection_ of \(x\) and \(y\) onto \(H\), and we denote this flat with \(P_{H}(x,y)\). If \(x\) and \(y\) are elements of \(C^{*}\) such that \(r(\{x,y\})=2\), then we also use \(P_{H}(x,y)\) to stand for \(P_{H}(\operatorname{cl}(\{x\}),\operatorname{cl}(\{y\}))\).
**Proposition 2.6**.: _Let \(H\) be a modular hyperplane of the matroid \(M\). Let \(X\) be a subset of \(E(M)-H\) and let \(P\) be the union \(\cup P_{H}(x,y)\), where \(\{x,y\}\) ranges over all pairs of distinct rank-one flats in \(X\). Let \(U\) be a subset of \(H\) such that \(U\) contains \(P\). Then \(\operatorname{cl}(U)=\operatorname{cl}(U\cup X)\cap H\)._
Proof.: Note that \(\operatorname{cl}(U)\) is contained in \(H\). Thus it is obvious that \(\operatorname{cl}(U)\) is a subset of \(\operatorname{cl}(U\cup X)\cap H\). Let us assume that the containment is proper, and let \(z\) be an element that is in \(\operatorname{cl}(U\cup X)\cap H\) but not \(\operatorname{cl}(U)\). Thus \(z\) is not in \(U\). There is some circuit \(C\subseteq U\cup X\cup z\) that contains \(z\). Let us assume that we have chosen \(C\) so that \(C-H\) is as small as possible. If \(C-H\) is empty, then \(C\) certifies that \(z\) is in \(\operatorname{cl}(U)\), contrary to hypothesis, so \(C-H\neq\emptyset\). If \(C-H\) contains a single element \(x\), then \(C\) certifies that \(x\) is in \(\operatorname{cl}(H)=H\), which is a contradiction. Therefore we can choose \(x\) and \(y\) to be distinct elements of \(C-H\). Let \(p\) be an element in \(P_{H}(x,y)\). Thus \(p\) is in \(P\) and \(\{x,y,p\}\) is a circuit. Note that \(z\neq p\), since \(z\) is not in \(P\subseteq U\). We perform strong circuit elimination on \(C\) and \(\{x,y,p\}\) to obtain the circuit \(C^{\prime}\subseteq(C-x)\cup\{p,z\}\) such that \(z\) is in \(C^{\prime}\). Thus \(C^{\prime}\) is a subset of \(U\cup X\cup z\), but \(C^{\prime}-H\) is smaller than \(C-H\). Now our choice of \(C\) is contradicted, and this completes the proof.
**Proposition 2.7**.: _Let \(H\) be a modular hyperplane of the connected matroid \(M\). Then \(M|H\) is connected._
Proof.: Assume that \(M|H\) is not connected, and let \((U,V)\) be a separation of \(M|H\). Because \(M\) is connected, there are circuits of \(M\) that contain elements from both \(U\) and \(V\). Amongst such circuits choose \(C\) so that \(C-H\) is as small as possible. Let \(u\) be an element in \(C\cap U\) and let \(v\) be an element
from \(C\cap V\). Note that \(C-H\) is not empty since \((U,V)\) is a separation of \(M|H\). Furthermore, \(C-H\) does not contain a single element, or else that element would be in \(\operatorname{cl}(H)=H\). Therefore we choose distinct elements \(x,y\in C-H\). Let \(p\) be an element in \(P_{H}(x,y)\), so that \(\{x,y,p\}\) is a circuit of \(M\). Because \(p\) is in \(H\) we can assume without loss of generality that \(p\) is in \(U\). We perform strong circuit elimination on \(C\) and \(\{x,y,p\}\) to obtain a circuit \(C^{\prime}\subseteq(C-x)\cup\{y,p\}\) that contains \(v\). Note that \(C^{\prime}\) contains \(p\), or else it is a proper subset of \(C\). Thus \(C^{\prime}\) contains elements from both \(U\) and \(V\), but \(|C^{\prime}-H|<|C-H|\), and we have a contradiction. Therefore \(M|H\) is connected.
### Roundness
A _proper_ flat of a matroid is one that is not equal to the entire ground set.
**Definition 2.8**.: Let \(M\) be a matroid. A _vertical cover_ of \(M\) is a pair \((F,F^{\prime})\) of proper flats such that \(F\cup F^{\prime}=E(M)\). If, in addition, \(F\) and \(F^{\prime}\) are modular flats, then \((F,F^{\prime})\) is a _modular cover_. A matroid is _round_ if it has no vertical cover.
Thus a matroid is round if and only if there is no partition \((U,U^{\prime})\) of \(E(M)\) such that neither \(U\) nor \(U^{\prime}\) is spanning. Such a partition is said to be a _vertical separation_. If \(X\) is a subset of \(E(M)\), then we say that \(X\) is round if \(M|X\) is round. If \(F\) is a round flat of the matroid \(M\) and \(F\) is contained in the subset \(X\subseteq E(M)\), then clearly \(F\) is a round flat of \(M|X\). A round flat is _maximal_ if it is not properly contained in a round flat. For brevity, we refer to a maximal round flat as a _rotunda_. The set of rotunda of a matroid \(M\) is denoted by \(\mathcal{R}(M)\).
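Roundness and rotunda admit an equally direct brute-force test. Continuing the sketch above (and reusing its `rank` and `flats` helpers together with the same hypothetical example matroid), one possible rendering is:

```python
def is_round(F, ground=E):
    """A flat F is round iff it is not the union of two flats properly
    contained in it; the flats of M|F are exactly the flats inside F."""
    sub = [G for G in flats(ground) if G < F]
    return not any(G1 | G2 == F for G1 in sub for G2 in sub)

def rotunda(ground=E):
    """R(M): the maximal round flats of the restriction to `ground`."""
    rnd = [F for F in flats(ground) if is_round(F, ground)]
    return [F for F in rnd if not any(F < G for G in rnd)]
```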
**Proposition 2.9**.: _Let \(R\) and \(R^{\prime}\) be distinct rotunda. Let \((F,F^{\prime})\) be a vertical cover such that \(R\subseteq F\) and \(R^{\prime}\subseteq F^{\prime}\) and \(F\cap F^{\prime}=R\cap R^{\prime}\). Then \(R\nsubseteq F^{\prime}\) and \(R^{\prime}\nsubseteq F\)._
Proof.: It suffices to prove that \(R\) is not contained in \(F^{\prime}\). Assume this fails. Then \(R\) is contained in \(F\cap F^{\prime}=R\cap R^{\prime}\), implying that \(R\) is a subset of \(R^{\prime}\). This is impossible since \(R\) and \(R^{\prime}\) are distinct rotunda.
The next result follows from work in [12], but we include a proof for completeness.
**Proposition 2.10**.: _Let \(H\) be a modular hyperplane of the matroid \(M\). Let \(X\) be a subset of the cocircuit \(E(M)-H\). Then_
\[\{P_{H}(x,y)\colon x,y\in X,r(\{x,y\})=2\}\]
_is round._
Proof.: Let \(P\) be the union of all projections onto \(H\) of pairs of distinct, non-parallel, elements in \(X\). Thus our aim is to show that \(P\) is round. We assume for a contradiction that \((F,F^{\prime})\) is a vertical cover of \(M|P\), so that \(F\) and \(F^{\prime}\) are proper flats of \(M|P\) and \(F\cup F^{\prime}=P\). Note that if \(X\) contains
fewer than three rank-one flats, then \(P\) is either empty or consists of a single rank-one flat. In this case \(P\) is trivially round, so we must assume that \(X\) contains at least three rank-one flats.
Let \(x\), \(y\), and \(z\) be distinct rank-one flats in \(X\). Assume that \(P_{H}(x,y)\) and \(P_{H}(x,z)\) are both in \(F\). We claim that \(P_{H}(y,z)\) is also in \(F\). If \(z\) is in \(\operatorname{cl}(x\cup y)\), then \(\operatorname{cl}(x\cup y)=\operatorname{cl}(x\cup z)=\operatorname{cl}(y\cup z)\), and it follows that \(P_{H}(x,y)=P_{H}(x,z)=P_{H}(y,z)\), so the claim is true. Therefore we will assume that \(r(x\cup y\cup z)=3\). Let \(Z\) be \(\operatorname{cl}(x\cup y\cup z)\). Since \(H\) is a modular hyperplane and \(Z\) is not contained in \(H\), it follows that \(r(H\cap Z)=2\). Now \(P_{H}(x,y)\) and \(P_{H}(x,z)\) are rank-one flats contained in \(H\cap Z\). If they are not distinct, then \(y\) and \(z\) are both in the closure of \(x\cup P_{H}(x,y)\). This implies that \(z\) is in \(\operatorname{cl}(x\cup y)\), contrary to earlier hypothesis. It follows that \(P_{H}(x,y)\cup P_{H}(x,z)\) spans \(H\cap Z\), and in particular spans \(P_{H}(y,z)\). Thus \(P_{H}(y,z)\) is in \(F\), as claimed. Symmetrically, if \(P_{H}(x,y)\) and \(P_{H}(x,z)\) are both in \(F^{\prime}\), then so is \(P_{H}(y,z)\).
We think of the rank-one flats that have a non-empty intersection with \(X\) as the vertices of a complete graph. If \(x\) and \(y\) are two such flats, then we colour the edge between \(x\) and \(y\) red if \(P_{H}(x,y)\) is in \(F\), and blue if it is in \(F^{\prime}\). Notice that an edge may be both red and blue. The previous paragraph shows that if the edges \(xy\) and \(xz\) are both red (blue), then the edge \(yz\) is also red (blue).
Let \(x\) be a vertex in this complete graph and assume that every edge incident with \(x\) is red. Then every edge is red, and it follows that \(P\) is contained in \(F\). This is impossible since \(F\) is a proper flat of \(M|P\). Similarly, it is not possible for every edge incident with \(x\) to be blue.
Therefore we can assume that the edge between \(x\) and \(y\) is red but not blue, and the edge between \(x\) and \(z\) is blue but not red. However, if the edge \(yz\) is red, then \(xz\) is red, and if \(yz\) is blue then \(xy\) is blue. In either case we have a contradiction, so the proof is complete.
**Proposition 2.11**.: _Let \(H\) be a modular hyperplane of the matroid \(M\) and let \(C^{*}\) be the complementary cocircuit. Let \((F_{1},F_{2})\) be a vertical cover of \(M|H\). Let \(P\) be the union \(\cup P_{H}(x,y)\), where \(x\) and \(y\) range over all distinct rank-one flats contained in \(C^{*}\). Then \(P\) is contained in \(F_{i}\) for some \(i\), and \((F_{i}\cup C^{*},F_{3-i})\) is a vertical cover of \(M\). Moreover, if \((F_{1},F_{2})\) is a modular cover, then so is \((F_{i}\cup C^{*},F_{3-i})\)._
Proof.: Proposition 2.10 says that \(P\) is a round subset of \(H\). Thus \((F_{1}\cap P,F_{2}\cap P)\) is not a vertical cover of \(M|P\), so either \(F_{1}\cap P\) or \(F_{2}\cap P\) is equal to \(P\). We assume the former without any loss of generality, so \(P\subseteq F_{1}\). Proposition 2.6 implies that \(F_{1}\) is equal to \(\operatorname{cl}(F_{1}\cup C^{*})\cap H\). It follows that \(\operatorname{cl}(F_{1}\cup C^{*})=F_{1}\cup C^{*}\). Now \(F_{1}\cup C^{*}\) is a proper flat of \(M\) because \(F_{1}\) is a proper flat of \(M|H\). Similarly, \(F_{2}\) is a proper flat of \(M\). As \((F_{1}\cup C^{*})\cup F_{2}=E(M)\), it follows that \((F_{1}\cup C^{*},F_{2})\) is a vertical cover of \(M\).
Now we assume that \((F_{1},F_{2})\) is a modular cover of \(M|H\). Then \(F_{2}\) is a modular flat of \(M|H\) so it immediately follows from [10, Proposition 6.9.7]
that \(F_{2}\) is also a modular flat in \(M\). It remains only to prove that \(F_{1}\cup C^{*}\) is a modular flat of \(M\). To this end, assume that \(F\) is a flat of \(M\) that is disjoint from \(F_{1}\cup C^{*}\). Thus \(F\) is a flat of \(M|(F_{2}-F_{1})\). If we can show that there is no circuit of \(M|((F_{1}\cup C^{*})\cup F)\) containing elements from both \(F\) and \(F_{1}\cup C^{*}\), then the result will follow from Proposition 2.3. Assume that \(C\) is such a circuit, chosen so that \(C\cap C^{*}\) is as small as possible. Let \(f\) be an element of \(C\cap F\). If \(C\cap C^{*}=\emptyset\), then Proposition 2.3 implies that \(F_{1}\) is not a modular flat of \(M|H\), which is a contradiction. Therefore \(C\cap C^{*}\neq\emptyset\). If \(C\cap C^{*}\) contains a single element, \(x\), then \(C\) certifies that \(x\) is in \(\operatorname{cl}(H)=H\), a contradiction. Therefore we let \(x\) and \(y\) be distinct elements in \(C\cap C^{*}\). Let \(p\) be in \(P_{H}(x,y)\). Thus \(\{x,y,p\}\) is a circuit and \(p\) is in \(P\), and hence in \(F_{1}\). We perform strong circuit elimination on \(C\) and \(\{x,y,p\}\) to obtain \(C^{\prime}\subseteq(C-x)\cup\{y,p\}\), a circuit that contains \(f\). It must contain \(p\), since otherwise it is properly contained in \(C\). But now \(C^{\prime}\) is contained in \(F_{1}\cup C^{*}\cup F\), and it contains elements from both \(F\) and \(F_{1}\cup C^{*}\). Since \(C^{\prime}\cap C^{*}\) is strictly smaller than \(C\cap C^{*}\), we have contradicted our choice of \(C\), so the proof is complete.
The following result provides a partial converse to Proposition 2.11.
**Proposition 2.12**.: _Let \(H\) be a modular hyperplane of the matroid \(M\) and let \(C^{*}\) be the complementary cocircuit. Let \((F,F^{\prime})\) be a modular cover of \(M\) such that \(F^{\prime}\) is contained in \(H\). If \(F^{\prime}\neq H\), then \((F\cap H,F^{\prime}\cap H)\) is a modular cover of \(M|H\)._
Proof.: Note that \(C^{*}\) is contained in \(F\) because \(F^{\prime}\) contains no element of \(C^{*}\). Let \(P\) be the union of \(P_{H}(x,y)\) where \(x\) and \(y\) range over distinct rank-one flats in \(C^{*}\). Since \(F\) contains \(C^{*}\), and \(P\) is spanned by \(C^{*}\) it follows that \(P\) is a subset of \(F\). Now \(F\cap H\) is the intersection of two modular flats, so Proposition 2.4 implies that it is a modular flat of \(M\), and hence of \(M|H\). Because \(F^{\prime}\) is contained in \(H\) it is also true that \(F^{\prime}\cap H=F^{\prime}\) is a modular flat of \(M|H\). By hypothesis \(F^{\prime}\cap H\) is a proper flat of \(M|H\). Furthermore, \(F\cap H\) is a proper flat of \(M|H\), or else \(F\) contains \(H\cup C^{*}=E(M)\), contradicting the fact that \(F\) is a proper flat of \(M\). Therefore \((F\cap H,F^{\prime}\cap H)\) is a modular cover of \(M|H\).
**Proposition 2.13**.: _Let \(H\) be a modular hyperplane of the matroid \(M\) and let \(C^{*}\) be the complementary cocircuit. If \(F\) is a round flat not contained in \(H\), then \(F\subseteq\operatorname{cl}(C^{*})\)._
Proof.: Assume this fails. Then \(F\cap C^{*}\) does not span \(F\). It is also true that \(F\cap H\) does not span \(F\), as \(\operatorname{cl}(F\cap H)\subseteq\operatorname{cl}(H)=H\) and \(F\) is not contained in \(H\). Therefore \((F\cap H,F\cap C^{*})\) is a vertical separation of \(M|F\), and this contradicts the fact that \(M|F\) is round.
**Proposition 2.14**.: _Let \(H\) be a modular hyperplane of the matroid \(M\) and let \(C^{*}\) be the complementary cocircuit. Then \(\operatorname{cl}(C^{*})\) is a rotunda. Furthermore, every other rotunda of \(M\) is contained in \(H\)._
Proof.: Let \(R\) be \(\operatorname{cl}(C^{*})\). Assume that \(R\) is not round, and let \((F,F^{\prime})\) be a vertical cover of \(M|R\). Let \(P\) be the union \(\cup P_{H}(x,y)\), where \(x\) and \(y\) range over all distinct rank-one flats contained in \(C^{*}\). Note that \(P\) is contained in \(R\cap H\). Proposition 2.10 says that \(P\) is round. It follows that one of \(F\cap P\) or \(F^{\prime}\cap P\) is equal to \(P\). Without loss of generality we will assume the former.
If \(F^{\prime}\) contains \(C^{*}\), then it contains \(R\), which is impossible as \((F,F^{\prime})\) is a vertical cover of \(R\). Therefore we choose \(x\in C^{*}-F^{\prime}\). The same argument shows we can choose \(y\in C^{*}-F\). Note that \(x\) and \(y\) are not parallel, since \(x\) is in \(F-F^{\prime}\) and \(y\) is in \(F^{\prime}-F\). Let \(p\) be in \(P_{H}(x,y)\), so that \(p\) is in \(P\), and hence in \(F\). As \(\{x,y,p\}\) is a circuit and both \(x\) and \(p\) belong to the flat \(F\) it follows that \(y\) is in \(F\), contrary to assumption. Therefore \(R\) is round.
Let \(Z\) be any flat that properly contains \(R\). Note that \(Z\cap H\) is a flat that does not contain any element of \(C^{*}\). Therefore \((Z\cap H,R)\) is a vertical cover of \(Z\), a contradiction. This shows that \(R\) is a maximal round flat, which is to say, a rotunda.
Finally, let \(Z\) be a rotunda that is not contained in \(H\). By Proposition 2.13, we see that \(Z\) is contained in \(R\). As \(Z\) and \(R\) are both rotunda it now follows that \(Z=R\).
**Proposition 2.15**.: _Let \(H\) be a modular hyperplane of the matroid \(M\). Let \(C^{*}\) be the complementary cocircuit. Then \(\operatorname{cl}(C^{*})\cap H\) is round._
Proof.: Let \(R\) be \(\operatorname{cl}(C^{*})\). Assume for a contradiction that \((F,F^{\prime})\) is a vertical cover of \(R\cap H\). Let \(P\) be the union \(\cup P_{H}(x,y)\) where \(x\) and \(y\) range over all distinct rank-one flats contained in \(C^{*}\). Note that \(P\) is contained in \(R\cap H\). Proposition 2.10 says that \(P\) is round. Therefore \((F\cap P,F^{\prime}\cap P)\) is not a vertical cover of \(P\), so we can assume without loss of generality that \(P\) is contained in \(F\). Applying Proposition 2.6, we see that \(\operatorname{cl}(F\cup C^{*})\cap H\) is equal to \(F\). Thus \(\operatorname{cl}(C^{*})\cap H=R\cap H\) is contained in \(F\). This contradicts the fact that \((F,F^{\prime})\) is a vertical cover of \(R\cap H\).
## 3. Supersolvability and saturation
The following definition was introduced by Stanley [11].
**Definition 3.1**.: The rank-\(r\) matroid \(M\) is _supersolvable_ if it has a chain of modular flats \(F_{0}\subseteq F_{1}\subseteq\cdots\subseteq F_{r}\), where \(r(F_{i})=i\) for each \(i\).
We can give an equivalent, recursive, definition: if \(r(M)>0\) then \(M\) is supersolvable if it contains a modular hyperplane \(H\) such that \(M|H\) is supersolvable. Note that every rank-zero matroid is trivially supersolvable.
**Definition 3.2**.: A matroid is _saturated_ if every round flat is modular.
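Definitions 3.1 and 3.2 can likewise be tested directly with the helpers from the earlier sketches (again only practical for small examples); the recursion below follows the recursive form of the definition of supersolvability.

```python
def is_supersolvable(ground=E):
    """M|ground has a modular hyperplane H with M|H supersolvable;
    rank zero is the (trivially supersolvable) base case."""
    r = rank(ground)
    if r == 0:
        return True
    return any(rank(H) == r - 1 and is_modular(H, ground)
               and is_supersolvable(H)
               for H in flats(ground))

def is_saturated(ground=E):
    """Definition 3.2: every round flat is modular."""
    return all(is_modular(F, ground)
               for F in flats(ground) if is_round(F, ground))
```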
**Proposition 3.3**.: _Let \(F\) be a flat of the saturated matroid \(M\). Then \(M|F\) is saturated._
Proof.: Let \(R\) be a round flat of \(M|F\). Then \(R\) is a round flat of \(M\) so it is modular in \(M\). Now [10, Proposition 6.9.5] implies that \(R\) is a modular flat of \(M|F\)
If \(M\) is supersolvable and saturated and \(H\) is a modular hyperplane such that \(M|H\) is supersolvable, then it follows from Proposition 3.3 that \(M|H\) is supersolvable and saturated.
**Proposition 3.4**.: _Let \(M\) be a saturated matroid. Let \(H\) be a modular hyperplane of \(M\) and let \(C^{*}\) be the complementary cocircuit. If \(C^{*}\) is non-spanning, then \((H,\operatorname{cl}(C^{*}))\) is a modular cover of \(M\)._
Proof.: Certainly \(H\) is a proper flat of \(M\), and \(C^{*}\) is non-spanning by hypothesis. Therefore \((H,\operatorname{cl}(C^{*}))\) is a vertical cover. We have assumed that \(H\) is a modular flat. Proposition 2.14 says that \(\operatorname{cl}(C^{*})\) is round. Since \(M\) is saturated, it follows that \(\operatorname{cl}(C^{*})\) is modular, so the proof is complete.
**Proposition 3.5**.: _Let \(M\) be a matroid. Then \(M\) is supersolvable if and only if each of its connected components is supersolvable. Similarly \(M\) is saturated if and only if each of its connected components is saturated._
Proof.: This result will follow by an easy inductive argument if we can prove it in the case when \(M\) has exactly two connected components. Therefore we will assume that \(M=M_{1}\oplus M_{2}\), where \(M_{1}\) and \(M_{2}\) are non-empty connected matroids. For \(i=1,2\), let \(r_{i}\) be \(r(M_{i})\).
Assume that \(M_{1}\) and \(M_{2}\) are supersolvable. For \(i=1,2\), let \(F_{0}^{i}\subseteq F_{1}^{i}\subseteq\cdots\subseteq F_{r_{i}}^{i}\) be a chain of modular flats in \(M_{i}\) such that each \(F_{j}^{i}\) has rank \(j\). Using [10, Corollary 6.9.10] we see that each \(F_{j}^{1}\cup F_{k}^{2}\) is a modular flat of \(M\). Now it is easy to confirm that the chain
\[F_{0}^{1}\subseteq F_{1}^{1}\subseteq\cdots\subseteq F_{r_{1}}^{1}\subseteq F _{r_{1}}^{1}\cup F_{1}^{2}\subseteq F_{r_{1}}^{1}\cup F_{2}^{2}\cdots\subseteq F _{r_{1}}^{1}\cup F_{r_{2}}^{2}\]
certifies that \(M\) is supersolvable.
For the other direction, assume that \(M\) is supersolvable. Assume for a contradiction that either \(M_{1}\) or \(M_{2}\) is not supersolvable. We will assume that amongst such counterexamples, \(M\) is as small as possible. Now \(M\) has a modular hyperplane \(H\) such that \(M|H\) is supersolvable. The complement of \(H\) is a cocircuit, and is therefore contained in either \(M_{1}\) or \(M_{2}\). Without loss of generality we assume that \(H\) contains \(E(M_{2})\). Now \(M|H=(M_{1}|H)\oplus M_{2}\). The minimality of \(M\) means that \(M_{1}|H\) and \(M_{2}\) are both supersolvable. But [10, Corollary 6.9.10] implies that \(H\cap E(M_{1})\) is a modular flat of \(M_{1}\). It is the complement of a cocircuit of \(M_{1}\), so \(H\cap E(M_{1})\) is a modular hyperplane of \(M_{1}\), and restricting to this hyperplane produces a supersolvable matroid. This shows that \(M_{1}\) too is supersolvable, so the proof of this direction is complete.
From [10, Corollary 6.9.10] we see that \(E(M_{1})\) and \(E(M_{2})\) are modular flats of \(M\). It follows from [10, Proposition 6.9.5] that a flat of \(M_{i}\) is modular in \(M\) if and only if it is modular in \(M_{i}\). If \(F\) is a round flat of \(M\) then \(F\subseteq E(M_{1})\) or \(F\subseteq E(M_{2})\) because otherwise \((F\cap E(M_{1}),F\cap E(M_{2}))\) is a vertical cover of \(M|F\). In fact, the round flats of \(M\) are exactly the round flats of \(M_{1}\) along with the round flats of \(M_{2}\). From these considerations we can easily see that \(M\) is saturated if and only if \(M_{1}\) and \(M_{2}\) are saturated.
### Chordality for matroids
We shall start this section by justifying the Venn diagram in Figure 2. Recall that if \(C\) is a matroid circuit, then a chord of \(C\) is an element \(z\notin C\) such that \(A\cup z\) and \(B\cup z\) are both circuits for some partition of \(C\) into sets \(A\) and \(B\). A matroid is \(C\)-chordal if every circuit with at least four elements has a chord.
As we discussed in the introduction, the matroid in Figure 1 is supersolvable but not saturated. To see that it is not \(C\)-chordal, note that \(\{3,5,6,7\}\) has no chord. Because the only round flats of \(U_{3,6}\) are the empty set, the singleton sets, and the entire ground set, we can easily confirm that every round flat is modular, so \(U_{3,6}\) is saturated. It has no modular hyperplane, so it is not supersolvable, and no circuit has a chord so it is not \(C\)-chordal. Recall that \(\mathcal{W}^{3}\) is the rank-three matroid with ground set \(\{a,b,c,d,e,f\}\) and non-spanning circuits \(\{a,b,d\}\), \(\{b,c,e\}\), and \(\{a,c,f\}\). It is easy to confirm that every circuit of size four has a chord. However no line is modular, so \(\mathcal{W}^{3}\) is not supersolvable, and it also follows that it is not saturated.
We will leave as an exercise the fact that the Fano matroid \(F_{7}\) is supersolvable, saturated, and \(C\)-chordal. Cordovil et al. note that \(M^{*}(K_{3,3})\) is not supersolvable [2]. It is an easy exercise to see that it is saturated and \(C\)-chordal. Finally, let \(M\) be the rank-three matroid with ground set \(\{p,a,b,c,d,e,f,x\}\) whose non-trivial lines are \(\{p,a,b,c\}\), \(\{p,d,e,f\}\), \(\{a,d,x\}\), \(\{b,e,x\}\), and \(\{c,f,x\}\). Now \(\{p,a,b,c\}\) and \(\{p,d,e,f\}\) are both modular hyperplanes, and we can easily confirm that \(M\) is supersolvable. On the other hand, \(\{a,d,x\}\) is a round hyperplane that has empty intersection with the rank-two flat \(\{b,f\}\). Hence \(\{a,d,x\}\) is not modular and therefore \(M\) is not saturated. Nevertheless, a simple case-analysis shows that \(M\) is \(C\)-chordal. We can finish the justification of Figure 2 by proving that every supersolvable saturated matroid is \(C\)-chordal. In fact, we prove something slightly stronger.
**Proposition 3.6**.: _Let \(C\) be a circuit in the supersolvable saturated matroid \(M\) and assume that \(|C|\geq 4\). There exist distinct elements \(x,y\in C\) and an element \(z\notin C\) such that \(\{x,y,z\}\) and \((C-\{x,y\})\cup z\) are circuits of \(M\)._
Proof.: Let \(M\) be a smallest possible counterexample to the result. If \(r(M)\leq 2\) then the result holds vacuously, so \(r(M)\geq 3\). Let \(H\) be a modular hyperplane of \(M\) such that \(M|H\) is supersolvable and saturated. Let \(C^{*}\) be the complement of \(H\).
Choose \(C\) to be an arbitrary circuit of \(M\) such that \(|C|\geq 4\). If \(C\) is a circuit of \(M|H\), then the result holds by induction. Therefore \(C\cap C^{*}\) is non-empty. Because \(H\) is a flat it follows that \(C\cap C^{*}\) contains distinct elements \(x\) and \(y\). Let \(L\) be \(\operatorname{cl}(\{x,y\})\). Note that \(L\) contains an element in \(P_{H}(x,y)\), so that \(L\) is a rank-two flat containing at least three rank-one flats. Now it is easy to confirm that \(L\) is a round flat. Since \(M\) is saturated, it follows that \(L\) is modular. Note that \(C\) contains exactly two elements of \(L\), for if \(C\) contained three elements of the rank-two flat \(L\), those elements would contain a circuit properly contained in \(C\), which is impossible as \(|C|\geq 4\). Now
\[r(\operatorname{cl}(C-L)\cap L)=r(C-L)+r(L)-r(C)=(|C|-2)+2-(|C|-1)=1.\]
Therefore we choose an element \(z\) which is in \(\operatorname{cl}(C-L)\cap L\). Note that neither \(x\) nor \(y\) is in \(\operatorname{cl}(C-L)\), or else \(C\) properly contains a circuit. Therefore \(z\) is in \(L-\{x,y\}\) and \(\{x,y,z\}\) is a circuit. Let \(C^{\prime}\subseteq(C-L)\cup z\) be a circuit that contains \(z\). Now \((C^{\prime}\cup\{x,y\})-z\) contains a circuit, by circuit elimination with \(C^{\prime}\) and \(\{x,y,z\}\). But \((C^{\prime}\cup\{x,y\})-z\) is a subset of \(C\), so \((C^{\prime}\cup\{x,y\})-z=C\). It follows that \(C^{\prime}=(C-L)\cup z\). Thus \((C-L)\cup z\) and \(\{x,y,z\}\) are both circuits and \(M\) is not a counterexample after all.
In the next results we justify using supersolvable saturated matroids as analogues for chordal graphs.
**Proposition 3.7**.: _Let \(G\) be a graph, and let \(F\) be a flat of \(M(G)\). Then \(F\) is round if and only if \(G[F]\) is a clique._
Proof.: Let \(M\) be \(M(G)\). Assume that \(G[F]\) is not a clique. Let \(u\) and \(v\) be distinct vertices in \(G[F]\) that are not adjacent. Let \(U\) be the set of edges in \(F\) that are incident with \(u\), and let \(U^{\prime}\) be \(F-U\). If \(f\in F\) is an edge incident with \(u\), there is no cycle contained in \(U^{\prime}\cup f\) that contains \(f\). This shows that \(\operatorname{cl}(U^{\prime})\) is a proper flat of \(M|F\). The same argument shows that \(\operatorname{cl}(U)\) is a proper flat of \(M|F\), so \((\operatorname{cl}(U),\operatorname{cl}(U^{\prime}))\) is a vertical cover of \(M|F\). Thus \(F\) is not round.
For the other direction, assume that \(G[F]\) is a clique, but that \((U,U^{\prime})\) is a vertical cover of \(M|F\). We colour the edges of \(F\) red if they are in \(U\), and blue if they are in \(U^{\prime}\). Note that an edge may be both red and blue. Let \(v\) be an arbitrary vertex of \(G[F]\). The set of edges incident with \(v\) spans \(F\), since \(G[F]\) is a clique. If all the edges of \(F\) incident with \(v\) are red, then \(U\) contains \(F\), a contradiction. By symmetry, we can now let \(e,f\in F\) be edges incident with \(v\) so that \(e\) is red but not blue, and \(f\) is blue but not red. Let \(g\) be the edge of \(F\) so that \(\{e,f,g\}\) is the edge-set of a triangle. If \(g\) is red, then \(f\) is also red, and if \(g\) is blue, then \(e\) is blue, and in either case we have a contradiction.
**Corollary 3.8**.: _Let \(G\) be a graph. Then \(M(G)\) is a saturated matroid._
Proof.: From Proposition 3.7 we see that every round flat of \(M(G)\) is a clique of \(G\), and any such flat is modular by [10, Proposition 6.9.11]. The result follows.
The next result is a consequence of [11, Proposition 2.8].
**Proposition 3.9**.: _Let \(G\) be a graph. Then \(G\) is chordal if and only if \(M(G)\) is supersolvable._
The next result implies the known fact [6, Proposition 4.16] that in a chordal graph the number of maximal cliques does not exceed the number of vertices.
**Proposition 3.10**.: _Let \(M\) be a supersolvable matroid. Then \(M\) has at most \(r(M)\) rotunda._
Proof.: Let \(H\) be a modular hyperplane of \(M\) such that \(M|H\) is supersolvable. Any rotunda of \(M\) that is contained in \(H\) is a rotunda of \(M|H\). But \(M|H\) has at most \(r(M)-1\) rotunda by induction, and Proposition 2.14 says there is exactly one rotunda of \(M\) that is not a rotunda of \(M|H\). The result follows.
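In the graph case, Proposition 3.10 specialises to the clique bound just mentioned, and this is easy to check computationally. The following sanity check is a minimal sketch, assuming the Python library networkx; the example graph is ours.

```python
import networkx as nx

# Two triangles glued at a single vertex: a chordal graph on five vertices
# whose maximal cliques are {0, 1, 2} and {2, 3, 4}.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
assert nx.is_chordal(G)

# The number of maximal cliques does not exceed the number of vertices.
assert len(list(nx.find_cliques(G))) <= G.number_of_nodes()
```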
## 4. Reduced clique graphs and rotunda graphs
Let \(G\) be a chordal graph. The _clique graph_ of \(G\), denoted \(C(G)\), has the maximal cliques of \(G\) as its vertices. Two distinct maximal cliques are adjacent in \(C(G)\) if and only if they have at least one vertex in common. Our focus will be the _reduced clique graph_, \(C_{R}(G)\), which was introduced in [4]. The vertices of \(C_{R}(G)\) are again the maximal cliques of \(G\). Let \(C_{1}\) and \(C_{2}\) be distinct maximal cliques of \(G\). We say that \(C_{1}\) and \(C_{2}\) are a _separating pair_ if there is at least one vertex in \(C_{1}\cap C_{2}\) and any path from a vertex of \(C_{1}-C_{2}\) to a vertex of \(C_{2}-C_{1}\) uses a vertex in \(C_{1}\cap C_{2}\). Now \(C_{R}(G)\) is the subgraph of \(C(G)\) where two maximal cliques are adjacent if and only if they form a separating pair.
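Both graphs are directly computable from these definitions. The following is a minimal sketch, assuming networkx; the function name is ours, and no attempt is made at efficiency. We record \(|C_{1}\cap C_{2}|\) as an edge weight for later use in Section 5.

```python
import networkx as nx

def reduced_clique_graph(G):
    """C_R(G): vertices are the maximal cliques of G, and two cliques are
    adjacent exactly when they form a separating pair."""
    cliques = [frozenset(c) for c in nx.find_cliques(G)]
    CR = nx.Graph()
    CR.add_nodes_from(cliques)
    for i, C1 in enumerate(cliques):
        for C2 in cliques[i + 1:]:
            S = C1 & C2
            if not S:
                continue
            # Separating pair: deleting C1 ∩ C2 must leave no path from a
            # vertex of C1 - C2 to a vertex of C2 - C1.
            H = G.subgraph(set(G) - S)
            if not any(nx.has_path(H, u, v) for u in C1 - C2 for v in C2 - C1):
                CR.add_edge(C1, C2, weight=len(S))
    return CR
```

We now define a matroid analogue of this graph.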
**Definition 4.1**.: Let \(M\) be a supersolvable saturated matroid. Recall that \(\mathcal{R}(M)\) is the family of rotunda of \(M\). The _rotunda graph_\(R(M)\) is the graph with \(\mathcal{R}(M)\) as its vertex set. The rotunda \(R_{1}\) and \(R_{2}\) are adjacent in \(R(M)\) if \(R_{1}\cap R_{2}\neq\emptyset\) and there is a modular cover \((F_{1},F_{2})\) such that \(R_{i}\subseteq F_{i}\) for \(i=1,2\), and \(F_{1}\cap F_{2}=R_{1}\cap R_{2}\). In this case we say that the modular cover \((F_{1},F_{2})\)_certifies_ the adjacency of \(R_{1}\) and \(R_{2}\).
The next result allows us to prove statements about rotunda graphs inductively.
**Proposition 4.2**.: _Let \(M\) be a supersolvable saturated matroid and let \(H\) be a modular hyperplane of \(M\) such that \(M|H\) is supersolvable. Let \(C^{*}\) be the complement of \(H\) and let \(R\) be \(\operatorname{cl}(C^{*})\). Then \(R\) is a rotunda of \(M\) and either:_
* \(R\cap H\) _is a rotunda of_ \(M|H\) _and_ \[\mathcal{R}(M)=(\mathcal{R}(M|H)-\{R\cap H\})\cup\{R\},\qquad\text{or}\]
* \(R\cap H\) _is properly contained in a rotunda of_ \(M|H\) _and_ \[\mathcal{R}(M)=\mathcal{R}(M|H)\cup\{R\}.\]
_If case_ (a) _holds then \(R(M|H)\) is obtained from \(R(M)\) by relabelling \(R\) as \(R\cap H\). If case_ (b) _holds then \(R(M|H)\) is obtained from \(R(M)\) by deleting \(R\)._
Proof.: Note that \(M|H\) is saturated as well as supersolvable (Proposition 3.3). Proposition 2.14 says that \(R\) is a rotunda of \(M\), and that moreover it is the unique rotunda of \(M\) that is not contained in \(H\). Now it is an easy
exercise to prove that every other rotunda of \(M\) is a rotunda of \(M|H\). This shows \(\mathcal{R}(M)\subseteq\mathcal{R}(M|H)\cup\{R\}\).
Proposition 2.15 says that \(R\cap H\) is a round flat of \(M|H\). First assume that \(R\cap H\) is a maximal round flat of \(M|H\). Then \(R\cap H\) is a rotunda of \(M|H\) but not of \(M\), since \(R\cap H\) is properly contained in \(R\). So in this case \(\mathcal{R}(M)\) is contained in \((\mathcal{R}(M|H)-\{R\cap H\})\cup\{R\}\). Now let \(Z\) be a rotunda of \(M|H\) that is not equal to \(R\cap H\). We will prove that \(Z\) is a rotunda of \(M\). Assume otherwise. Because \(Z\) is a round flat of \(M|H\), and hence of \(M\), it is properly contained in a rotunda of \(M\). Let this rotunda be \(Z^{\prime}\). Now \(Z^{\prime}\) is not contained in \(H\), because in this case \(Z\) and \(Z^{\prime}\) would both be rotunda of \(M|H\), and then \(Z\) cannot be properly contained in \(Z^{\prime}\). So \(Z^{\prime}\) is a rotunda of \(M\) that is not contained in \(H\), and hence \(Z^{\prime}=R\). Thus \(Z\) is contained in \(R\cap H\). Because \(Z\) is not properly contained in a round flat of \(M|H\) we deduce that \(Z=R\cap H\), contrary to hypothesis. Thus \(Z\) is a rotunda of \(M\) and we have shown that when \(R\cap H\) is a rotunda of \(M|H\), the set \(\mathcal{R}(M)\) is equal to \((\mathcal{R}(M|H)-\{R\cap H\})\cup\{R\}\) and case (a) holds.
Next we assume that \(R\cap H\) is not a rotunda of \(M|H\). We have already shown that \(\mathcal{R}(M)\) is contained in \(\mathcal{R}(M|H)\cup\{R\}\). Let \(Z\) be a rotunda of \(M|H\) and assume that \(Z\) is not a rotunda of \(M\). Then \(Z\) is properly contained in \(Z^{\prime}\), a rotunda of \(M\). As in the previous paragraph, \(Z^{\prime}=R\), so \(Z\) is contained in \(R\cap H\). Again we deduce that \(Z=R\cap H\), and we have a contradiction to \(Z\) being a rotunda of \(M|H\). So in this case \(\mathcal{R}(M)\) is equal to \(\mathcal{R}(M|H)\cup\{R\}\). Furthermore, \(R\cap H\) is a round flat of \(M|H\) but not a rotunda, so it must be properly contained in a rotunda of \(M|H\). Thus case (b) holds.
Assume case (a) holds. We let \(Z_{1}\) and \(Z_{2}\) be distinct rotunda of \(M\), where \(Z_{1}\) is not equal to \(R\). Thus \(Z_{1}\) is a rotunda of \(M|H\). Either \(Z_{2}\) is equal to \(R\) or it is not. In the former case \(Z_{2}\cap H=R\cap H\) and in the latter \(Z_{2}\cap H=Z_{2}\). In either case \(Z_{2}\cap H\) is a rotunda of \(M|H\). We will prove that \(Z_{1}\) and \(Z_{2}\cap H\) are adjacent in \(R(M|H)\) if and only if \(Z_{1}\) and \(Z_{2}\) are adjacent in \(R(M)\), and this will show that \(R(M|H)\) is obtained from \(R(M)\) by relabelling \(R\) as \(R\cap H\).
Assume that \((F_{1},F_{2})\) is a modular cover of \(M\) that certifies the adjacency of \(Z_{1}\) and \(Z_{2}\) in \(R(M)\). Thus \(F_{1}\) and \(F_{2}\) are proper modular flats of \(M\) and \(F_{1}\cup F_{2}=E(M)\). Moreover \(F_{1}\cap F_{2}=Z_{1}\cap Z_{2}\). Assume that either \(F_{1}\) or \(F_{2}\) contains \(H\). Since Proposition 2.9 implies that neither \(F_{1}\) nor \(F_{2}\) contains \(Z_{1}\cup Z_{2}\), we deduce that \(Z_{2}=R\) and \(F_{1}=H\). Now
\[R\cap H\subseteq F_{1}\cap F_{2}=Z_{1}\cap Z_{2}\]
so \(Z_{1}\) contains \(R\cap H\). Since \(Z_{1}\) and \(R\cap H\) are both rotunda of \(M|H\), we see that \(Z_{1}=R\cap H\), and in this case \(Z_{1}\) is properly contained in \(Z_{2}\). This is impossible, so neither \(F_{1}\) nor \(F_{2}\) contains \(H\), and hence \(F_{1}\cap H\) and \(F_{2}\cap H\) are both proper flats of \(M|H\). Moreover, their union is equal to \(H\).
Since \(F_{1}\) and \(F_{2}\) are modular flats of \(M\) it follows that \(F_{1}\cap H\) and \(F_{2}\cap H\) are modular flats of \(M\) (Proposition 2.4), and hence modular flats of \(M|H\).
Furthermore,
\[(F_{1}\cap H)\cap(F_{2}\cap H)=(F_{1}\cap F_{2})\cap H=(Z_{1}\cap Z_{2})\cap H=Z_ {1}\cap(Z_{2}\cap H).\]
Now we see that \((F_{1}\cap H,F_{2}\cap H)\) is a modular cover of \(M|H\), and that it certifies the adjacency of \(Z_{1}\) and \(Z_{2}\cap H\) in \(R(M|H)\).
For the other direction, assume \(Z_{1}\) and \(Z_{2}\cap H\) are adjacent in \(R(M|H)\), and let \((F_{1},F_{2})\) be a modular cover of \(M|H\) that certifies their adjacency. Let \(P\) be the union \(\cup P_{H}(x,y)\), where \(x\) and \(y\) range over distinct rank-one flats in \(C^{*}\). We apply Proposition 2.11 and see that \(P\) is contained in either \(F_{1}\) or \(F_{2}\).
Assume that \(Z_{2}=R\). Then \(P\) is contained in \(Z_{2}\cap H\subseteq F_{2}\). In this case Proposition 2.11 implies that \((F_{1},F_{2}\cup C^{*})\) is a modular cover of \(M\). Moreover,
\[F_{1}\cap(F_{2}\cup C^{*})=F_{1}\cap F_{2}=Z_{1}\cap(Z_{2}\cap H)=Z_{1}\cap Z _{2}.\]
Thus \((F_{1},F_{2}\cup C^{*})\) certifies the adjacency of \(Z_{1}\) and \(Z_{2}\) in \(R(M)\). Next we assume that \(Z_{2}\neq R\), so that \(Z_{1}\) and \(Z_{2}\) are both rotunda of \(M|H\). We again apply Proposition 2.11 and see that \((F_{i}\cup C^{*},F_{3-i})\) is a modular cover of \(M\) for some \(i\in\{1,2\}\), and as before we can see that \((F_{i}\cup C^{*},F_{3-i})\) certifies the adjacency of \(Z_{1}\) and \(Z_{2}\) in \(R(M)\). Thus we are now finished with case (a).
Assume case (b) holds. Let \(Z_{1}\) and \(Z_{2}\) be two rotunda of \(M|H\). We can use exactly the same arguments as in the previous paragraphs to show that \(Z_{1}\) and \(Z_{2}\) are adjacent in \(R(M|H)\) if and only if they are adjacent in \(R(M)\). Thus \(R(M|H)\) is obtained from \(R(M)\) by deleting the rotunda \(R\) and the proof is complete.
### Rotunda graphs vs. reduced clique graphs
In this section we compare rotunda graphs and reduced clique graphs. Ultimately we will show that they are identical classes of graphs. We also consider the connection between the reduced clique graph of \(G\) and the rotunda graph of \(M(G)\) when \(G\) is a chordal graph.
**Proposition 4.3**.: _Let \(G\) be a chordal graph. Then the maximal cliques of \(G\) are the rotunda of \(M(G)\), and every edge in \(R(M(G))\) is an edge in \(C_{R}(G)\)._
Proof.: The first statement follows from Proposition 3.7. Let \(M\) stand for \(M(G)\), so that we identify the vertices of \(C_{R}(G)\) and the vertices of \(R(M)\). Let \(R_{1}\) and \(R_{2}\) be rotunda that are adjacent in \(R(M)\), and let \(C_{1}\) and \(C_{2}\) be the corresponding maximal cliques of \(G\). We will show that \(C_{1}\) and \(C_{2}\) are adjacent in \(C_{R}(G)\). Let \((F_{1},F_{2})\) be a modular cover of \(M\) certifying the adjacency of \(R_{1}\) and \(R_{2}\), so that \(R_{i}\subseteq F_{i}\) for \(i=1,2\), and \(F_{1}\cap F_{2}=R_{1}\cap R_{2}\).
Because \(R_{1}\) and \(R_{2}\) are adjacent in \(R(M)\), they have a non-empty intersection, which means that \(C_{1}\) and \(C_{2}\) share at least two vertices. Let \(S\) be the set of vertices in both \(C_{1}\) and \(C_{2}\). Thus \(|S|\geq 2\). If \(C_{1}\) and \(C_{2}\) form a separating pair, then there is nothing left for us to prove. Therefore we may assume that they do not form a separating pair, and we let \(P\) be an \(S\)-avoiding path from \(a_{1}\in C_{1}-C_{2}\) to \(a_{2}\in C_{2}-C_{1}\).
Let \(u\) be an arbitrary vertex in \(S\). Assume that every edge of \(C_{1}\) incident with \(u\) is in \(F_{2}\). Then \(R_{1}\) is contained in \(F_{2}\), which contradicts Proposition 2.9. Therefore we let \(e_{1}\) be an edge of \(C_{1}\) that is incident with \(u\) and not in \(F_{2}\). By the same reasoning, we can let \(e_{2}\) be an edge of \(C_{2}\) that is incident with \(u\) and not in \(F_{1}\). Assume that \(e_{i}\) joins \(u\) to \(b_{i}\) for \(i=1,2\). Note that \(b_{1}\) is in \(C_{1}-C_{2}\), or else \(e_{1}\) would be in \(R_{2}\subseteq F_{2}\). Similarly \(b_{2}\) is in \(C_{2}-C_{1}\).
We obtain the cycle \(D\) from \(P\) by appending the edges \(e_{1}\) and \(e_{2}\) as well as \(a_{1}b_{1}\) and \(a_{2}b_{2}\). (This assumes that \(a_{1}\neq b_{1}\); if \(a_{1}=b_{1}\) then we do not append \(a_{1}b_{1}\). The same comment applies if \(a_{2}=b_{2}\).)
Note that \(D\) is not contained in \(F_{2}\), as \(e_{1}\) is not in \(F_{2}\). Because \(F_{2}\) is a modular flat we can apply Proposition 2.5 and deduce that there is an element \(x\in F_{2}\cap\operatorname{cl}(D-F_{2})\). Thus \(D^{\prime}\cup x\) is a cycle of \(G\) for some subset \(D^{\prime}\subseteq D-F_{2}\). Because \(D^{\prime}\cup x\) is a circuit of \(M\) with \(D^{\prime}\subseteq D-F_{2}\subseteq F_{1}\) (recall that \(F_{1}\cup F_{2}=E(M)\)), it follows that \(x\in\operatorname{cl}(D^{\prime})\subseteq F_{1}\), so \(x\) is in \(F_{1}\) as well as \(F_{2}\). Therefore \(x\) is in \(R_{1}\cap R_{2}\), so \(x\) joins two vertices of \(S\). Let \(v\) be a vertex incident with \(x\) such that \(v\) is not \(u\). Thus \(v\) is in the cycle \(D\), so \(v\) is either an internal vertex of \(P\), or is equal to one of \(a_{1}\), \(b_{1}\), \(a_{2}\), or \(b_{2}\). But none of the internal vertices of \(P\) is in \(S\), and \(a_{1},b_{1}\) are in \(C_{1}-C_{2}\) while \(a_{2},b_{2}\) are in \(C_{2}-C_{1}\). Therefore we have a contradiction that completes the proof.
From the previous result we know that \(R(M(G))\) is a subgraph of \(C_{R}(G)\). To see that \(R(M(G))\) and \(C_{R}(G)\) need not be equal, we let \(G\) be the path with two edges. Thus \(G\) is a tree and is therefore chordal. There are two maximal cliques in \(G\), and \(M(G)\) has two rotunda. However \(C_{R}(G)\) consists of two vertices joined by an edge, whereas \(R(M(G))\) consists of two isolated vertices, since the two rotunda of \(M(G)\) are disjoint. The next result shows that sufficient connectivity prevents this situation from happening.
**Proposition 4.4**.: _Let \(G\) be a chordal graph that is \(2\)-connected. Then \(C_{R}(G)=R(M(G))\)._
Proof.: Let \(M\) stand for \(M(G)\). We identify the vertices of \(C_{R}(G)\) and \(R(M)\). By virtue of Proposition 4.3, it suffices to show that every edge of \(C_{R}(G)\) is also an edge of \(R(M)\). To this end let \(C_{1}\) and \(C_{2}\) be maximal cliques of \(G\) that are adjacent in \(C_{R}(G)\). Let \(R_{i}\) be the edge set of \(C_{i}\) for \(i=1,2\). Then \(R_{1}\) and \(R_{2}\) are rotunda of \(M\). We will show they are adjacent in \(R(M)\).
Set \(S\) to be the set of vertices in both \(C_{1}\) and \(C_{2}\). Since \(C_{1}\) and \(C_{2}\) are adjacent in \(C_{R}(G)\) it follows that \(S\neq\emptyset\). For each \(i=1,2\), let \(a_{i}\) be a vertex in \(C_{i}-C_{3-i}\).
**4.4.1**.: \(R_{1}\cap R_{2}\neq\emptyset\).
Proof.: This claim holds if \(|S|\geq 2\), because then any edge joining two vertices of \(S\) is in \(R_{1}\cap R_{2}\). So assume that \(|S|=1\) and let \(v\) be the unique vertex of \(S\). Now \(C_{1}\) and \(C_{2}\) form a separating pair, so \(a_{1}\) and \(a_{2}\) are in different connected components of \(G-S=G-v\), but this contradicts the fact that \(G\) is \(2\)-connected.
Now that we know that \(R_{1}\) and \(R_{2}\) are not disjoint, we can complete the proof by constructing a modular cover to certify their adjacency in \(R(M)\). Let \(U_{1}\) be the set of edges that are contained in \(S\)-avoiding paths having \(a_{1}\) as a terminal vertex. Observe that every edge incident with \(a_{1}\) is in \(U_{1}\). Let \(U_{2}\) be the set of edges of \(G\) not in \(U_{1}\). Thus \((U_{1},U_{2})\) is a partition of the edge set.
**4.4.2**.: \((U_{1},U_{2})\) _is a vertical separation of \(M\)._
Proof.: We must prove that neither \(U_{1}\) nor \(U_{2}\) is spanning in \(M\). Let \(e\) be any edge incident with \(a_{2}\). We claim that \(e\) is not in \(U_{1}\). Assume otherwise, and let \(P\) be an \(S\)-avoiding path with \(a_{1}\) as a terminal vertex, where \(P\) contains \(e\). Since \(e\) is incident with \(a_{2}\), we can let \(P^{\prime}\) be a subpath of \(P\) from \(a_{1}\) to \(a_{2}\). As \(C_{1}\) and \(C_{2}\) form a separating pair, it follows that \(P^{\prime}\) contains a vertex of \(S\). But the end vertices of \(P^{\prime}\) are \(a_{1}\) and \(a_{2}\), and neither is in \(S\), so \(P^{\prime}\) has an internal vertex in \(C_{1}\cap C_{2}\). Thus \(P\) does as well, a contradiction. Therefore \(e\) is not in \(U_{1}\).
Assume that \(U_{1}\) is spanning. Let \(e\) be an edge incident with \(a_{2}\). Then \(e\) is in \(U_{2}\) by the previous paragraph. Since it is in the closure of \(U_{1}\), we can let \(D\) be a cycle containing \(e\) such that every other edge of \(D\) is in \(U_{1}\). In particular, this means that \(a_{2}\) is incident with an edge of \(U_{1}\), contrary to the previous paragraph. So \(U_{1}\) is not spanning.
Similarly, if \(U_{2}\) is spanning, then we let \(e\) be an edge incident with \(a_{1}\). Then \(e\) is not in \(U_{2}\), so we can let \(D\) be a cycle that contains \(e\), where all the other edges of \(D\) are in \(U_{2}\). This implies that an edge incident with \(a_{1}\) is in \(U_{2}\), which contradicts an earlier conclusion. Therefore \((U_{1},U_{2})\) is a vertical separation.
For \(i=1,2\), we let \(F_{i}\) be \(\operatorname{cl}(U_{i})\). Recall that \(R_{i}\) is the edge-set of \(C_{i}\).
**4.4.3**.: \(F_{1}\cap F_{2}=R_{1}\cap R_{2}\)_._
Proof.: Let \(e\) be an edge that joins vertices \(u\) and \(v\). First assume that \(e\) is in \(R_{1}\cap R_{2}\). Then \(u\) and \(v\) are in \(S\). This means there is no \(S\)-avoiding path containing \(e\) with \(a_{1}\) as a terminal vertex. Hence \(e\) is not in \(U_{1}\) so it is in \(U_{2}\). However, \(a_{1}\) is adjacent to \(u\) and \(v\), and the edges \(a_{1}u\) and \(a_{1}v\) are in \(U_{1}\), so \(e\) is in \(\operatorname{cl}(U_{1})\). Thus \(e\) is in \(F_{1}\cap F_{2}\) and we have shown that \(R_{1}\cap R_{2}\subseteq F_{1}\cap F_{2}\).
For the other direction, assume that \(e\) is in \(F_{1}\cap F_{2}\). First assume that \(e\) is in \(U_{1}\). Let \(P\) be an \(S\)-avoiding path with \(a_{1}\) as a terminal vertex such that \(e\) is in \(P\). We can assume that either \(u\) or \(v\) is a terminal vertex of \(P\).
Since \(e\) is in \(U_{1}\cap\operatorname{cl}(U_{2})\) we can let \(D\) be a cycle such that \(e\) is in \(D\), and every other edge of \(D\) is in \(U_{2}\). Thus both \(u\) and \(v\) are incident with edges in \(U_{2}\). Let \(e^{\prime}\) be an edge incident with \(u\) that is in \(U_{2}\). Assume for a contradiction that \(u\) is not in \(S\). If \(u\) is a terminal vertex of \(P\) then we obtain a new path by adding \(e^{\prime}\) to the end of \(P\). No internal vertex of this new path is in \(S\), so it implies that \(e^{\prime}\) is in \(U_{1}\), a contradiction to \(e^{\prime}\) being in \(U_{2}\). Therefore \(u\) is not a terminal vertex of \(P\). Since \(P\) contains \(e\), it follows
that \(u\) is an internal vertex of \(P\), so \(v\) is a terminal vertex of \(P\). In this case we can obtain a new path from \(P\) by replacing the edge \(e\) with \(e^{\prime}\). Again we see that \(e^{\prime}\) is in \(U_{1}\) and we have a contradiction. Therefore \(u\) is in \(S\), and by symmetry, so is \(v\). Hence \(e\) joins two vertices of \(S\), and is thus in \(R_{1}\cap R_{2}\).
We must also consider the case that \(e\) is in \(U_{2}\cap\operatorname{cl}(U_{1})\). Let \(D\) be a cycle that contains \(e\), where every other edge of \(D\) is in \(U_{1}\). Let \(x\) be an edge of \(D-e\) that is incident with \(u\). Thus \(x\) is in \(U_{1}\). Let \(P\) be an \(S\)-avoiding path containing \(x\) and \(a_{1}\) as a terminal vertex. If \(u\) is not in \(S\), then we can either extend \(P\) by adding the edge \(e\), or replacing \(x\) in \(P\) with \(e\). In either case, the new path shows that \(e\) is in \(U_{1}\), a contradiction. Therefore \(u\), and by symmetry \(v\), is in \(S\), so we again see that \(e\) is in \(R_{1}\cap R_{2}\). Hence \(F_{1}\cap F_{2}\subseteq R_{1}\cap R_{2}\) and the claim is proved.
Recall that \(a_{1}\) is in the clique \(C_{1}\). Every edge incident with \(a_{1}\) is in \(U_{1}\). As every edge of \(C_{1}-a_{1}\) is spanned by two such edges, it follows that \(R_{1}\) is contained in \(F_{1}\). We must also show that \(R_{2}\) is spanned by \(U_{2}\). Let \(e\) be an edge in \(R_{2}\) and assume that it is not in \(F_{2}\). In particular, this means that \(e\) is not in \(U_{2}\), so it is in \(U_{1}\). Let \(P\) be an \(S\)-avoiding path containing \(e\) that has \(a_{1}\) as a terminal vertex. Let \(u\) be the first internal vertex of \(P\) that is incident with \(e\). Then \(u\) is not in \(S\), as \(P\) is \(S\)-avoiding. But \(u\) is in \(C_{2}\), since \(e\) is in \(R_{2}\). Thus \(u\) is in \(C_{2}-C_{1}\), and the subpath of \(P\) from \(a_{1}\) to \(u\) is an \(S\)-avoiding path from a vertex of \(C_{1}-C_{2}\) to a vertex of \(C_{2}-C_{1}\). This is a contradiction, as \(C_{1}\) and \(C_{2}\) form a separating pair. This shows that \(R_{2}\) is contained in \(F_{2}\), as claimed.
Now we can complete the proof that \(R_{1}\) and \(R_{2}\) are adjacent in \(R(M)\) by showing that \((F_{1},F_{2})\) is a modular cover. Assume that \(F_{1}\) is not a modular flat, so that by utilising Proposition 2.3 we can let \(F\) be a flat of \(M\) that is disjoint from \(F_{1}\) such that some circuit \(C\subseteq F\cup F_{1}\) contains elements of both \(F\) and \(F_{1}\). If each connected component of \(G[F]\) shares at most one vertex with \(G[F_{1}]\), then no such cycle can exist. Therefore we let \(u\) and \(v\) be distinct vertices from the same connected component of \(G[F]\) so that both \(u\) and \(v\) are incident with edges in \(F_{1}\). Since \(u\) is incident with an edge in \(F_{1}\), it is incident with an edge in \(U_{1}\). Let \(e\) be such an edge, and let \(P\) be a shortest-possible \(S\)-avoiding path that contains \(e\) and has \(a_{1}\) as a terminal vertex. Let \(f\) be an edge of \(F\) that is incident with \(u\). If \(u\) is not in \(S\), then extending \(P\) by adding \(f\) shows that \(f\) is in \(U_{1}\), a contradiction. Therefore \(u\), and by symmetry \(v\), is in \(S\). This means that there is an edge \(g\) of \(R_{1}\) that joins \(u\) and \(v\). Thus \(g\) is in \(R_{1}\subseteq F_{1}\). But there is a path of \(G[F]\) that joins \(u\) to \(v\), so \(g\) is in \(\operatorname{cl}(F)=F\), and we have contradicted \(F\cap F_{1}=\emptyset\). Therefore \(F_{1}\) is a modular flat. Almost exactly the same argument shows that \(F_{2}\) is a modular flat.
The previous result shows that when \(G\) is a \(2\)-connected chordal graph, \(C_{R}(G)\) is isomorphic to the rotunda graph of a supersolvable saturated matroid. In fact, this is true even when \(G\) is not \(2\)-connected, as we now show.
First we make a simple observation. Recall from Proposition 3.5 that a matroid is supersolvable and saturated if and only if all its components are.
**Proposition 4.5**.: _Let \(G\) be a chordal graph with connected components \(H_{1},\dots,H_{k}\). Then \(C_{R}(G)\) is the disjoint union of \(C_{R}(H_{1}),\dots,C_{R}(H_{k})\). Similarly, if \(M\) is a supersolvable saturated matroid with connected components \(N_{1},\dots,N_{k}\), then \(R(M)\) is the disjoint union of \(R(N_{1}),\dots,R(N_{k})\)._
Proof.: Maximal cliques in different components of \(G\) cannot be adjacent in \(C_{R}(G)\) because they have no vertices in common. Similarly, rotunda from different components of \(M\) are not adjacent in \(R(M)\) because they have empty intersection. The result follows.
**Lemma 4.6**.: _Let \(G\) be a chordal graph. There is a supersolvable saturated matroid \(M\) such that \(C_{R}(G)\) is isomorphic to \(R(M)\)._
Proof.: Let \(H_{1},\dots,H_{k}\) be the connected components of \(G\). If each \(C_{R}(H_{i})\) is isomorphic to \(R(N_{i})\) for some supersolvable saturated \(N_{i}\), then Proposition 4.5 implies that \(C_{R}(G)\) is isomorphic to \(R(N_{1}\oplus\dots\oplus N_{k})\). In other words, it suffices to prove the lemma when \(G\) is connected. In this case, we will prove that \(C_{R}(G)\) is isomorphic to \(C_{R}(G^{\prime})\), where \(G^{\prime}\) is a \(2\)-connected chordal graph. Then Proposition 4.4 shows that \(C_{R}(G^{\prime})=R(M(G^{\prime}))\), and \(M(G^{\prime})\) is supersolvable and saturated by Corollary 3.8 and Proposition 3.9, so the result will follow.
If \(G\) is \(2\)-connected, then there is nothing left to prove, so let \(v_{1},v_{2},\dots,v_{m}\) be the cut-vertices of \(G\). We produce \(G^{\prime}\) by introducing new vertices \(v^{\prime}_{1},v^{\prime}_{2},\dots,v^{\prime}_{m}\) and for each \(i\) making \(v^{\prime}_{i}\) adjacent to \(v_{i}\) and all of the neighbours of \(v_{i}\) in \(G\).
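This duplication is simple to carry out in code; the following is a minimal sketch, assuming networkx, with twin labels of our own choosing.

```python
import networkx as nx

def duplicate_cut_vertices(G):
    """Return G': for each cut-vertex v of the connected graph G, add a
    twin v' adjacent to v and to every neighbour of v in G."""
    Gp = G.copy()
    for v in list(nx.articulation_points(G)):
        twin = (v, "twin")  # assumes no node of G already has this form
        Gp.add_node(twin)
        Gp.add_edge(twin, v)
        Gp.add_edges_from((twin, u) for u in G.neighbors(v))
    return Gp
```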
**4.6.1**.: \(G^{\prime}\) _is \(2\)-connected._
Proof.: Certainly \(G^{\prime}\) is connected. Assume that \(v\) is a cut-vertex of \(G^{\prime}\). Note that for each \(i\), the graph produced from \(G^{\prime}\) by deleting \(v^{\prime}_{i}\) is obtained from \(G\) by adding \(m-1\) new vertices and making each of them adjacent to at least one vertex in \(G\). Since \(G\) is connected it follows that \(G^{\prime}-v^{\prime}_{i}\) is connected. Thus no vertex \(v^{\prime}_{i}\) is a cut-vertex of \(G^{\prime}\) so \(v\) is not equal to \(v^{\prime}_{i}\) for any \(i\). Now \(v\) is a vertex of \(G\). If \(v\notin\{v_{1},v_{2},\dots,v_{m}\}\) then \(G-v\) is connected and \(G^{\prime}-v\) is obtained from the connected graph \(G-v\) by adding \(m\) new vertices and making each of them adjacent to at least one vertex in \(G-v\). Thus \(G^{\prime}-v\) is connected, which is a contradiction. Therefore \(v=v_{i}\) for some \(i\). But in \(G^{\prime}\) the vertices \(v_{i}\) and \(v^{\prime}_{i}\) are adjacent to exactly the same vertices. Therefore \(G^{\prime}-v_{i}\) is obtained from \(G^{\prime}-v^{\prime}_{i}\) by relabelling \(v^{\prime}_{i}\) as \(v_{i}\). This means that \(G^{\prime}-v_{i}\) is connected, and we have a contradiction.
**4.6.2**.: \(G^{\prime}\) _is chordal._
Proof.: We rely on Proposition 2.1. Let \(u_{1},u_{2},\dots,u_{n}\) be a perfect elimination order of \(G\). We produce an ordering of the vertices of \(G^{\prime}\) by inserting each \(v^{\prime}_{i}\) into the order \(u_{1},u_{2},\dots,u_{n}\) immediately after \(v_{i}\). It is easy to verify that this produces a perfect elimination order for \(G^{\prime}\) and the result follows.
We can complete the proof by showing that \(C_{R}(G)\) is isomorphic to \(C_{R}(G^{\prime})\). It is clear that any maximal clique of \(G^{\prime}\) contains one of the vertices \(\{v_{i},v_{i}^{\prime}\}\) if and only if it contains both. Now we can easily verify that there is a bijective correspondence between the maximal cliques of \(G\) and the maximal cliques of \(G^{\prime}\). If \(C\) is a maximal clique of \(G\), then we obtain the corresponding maximal clique of \(G^{\prime}\) by adding each vertex \(v_{i}^{\prime}\) such that \(v_{i}\) is in \(C\).
Let \(C_{1}\) and \(C_{2}\) be distinct maximal cliques of \(G\), and let \(C_{1}^{\prime}\) and \(C_{2}^{\prime}\) be the corresponding maximal cliques of \(G^{\prime}\). We will prove that \(C_{1}\) and \(C_{2}\) are adjacent in \(C_{R}(G)\) if and only if \(C_{1}^{\prime}\) and \(C_{2}^{\prime}\) are adjacent in \(C_{R}(G^{\prime})\). First note that \(C_{1}\cap C_{2}\) is non-empty if and only if \(C_{1}^{\prime}\cap C_{2}^{\prime}\) is non-empty.
If \(C_{1}\) and \(C_{2}\) are not adjacent in \(C_{R}(G)\), then either \(C_{1}\cap C_{2}=\emptyset\), or there is a \((C_{1}\cap C_{2})\)-avoiding path \(P\) of \(G\) from a vertex of \(C_{1}-C_{2}\) to a vertex in \(C_{2}-C_{1}\). In the first case \(C_{1}^{\prime}\cap C_{2}^{\prime}=\emptyset\). In the second case, it is obvious that \(P\) is a \((C_{1}^{\prime}\cap C_{2}^{\prime})\)-avoiding path of \(G^{\prime}\). In either case \(C_{1}^{\prime}\) and \(C_{2}^{\prime}\) are not adjacent in \(C_{R}(G^{\prime})\).
Next assume that \(C_{1}^{\prime}\) and \(C_{2}^{\prime}\) are not adjacent in \(C_{R}(G^{\prime})\). If \(C_{1}^{\prime}\cap C_{2}^{\prime}=\emptyset\) then \(C_{1}\cap C_{2}=\emptyset\) so we have nothing left to prove. Therefore we will assume that \(P\) is a \((C_{1}^{\prime}\cap C_{2}^{\prime})\)-avoiding path in \(G^{\prime}\), and that \(P\) joins a vertex in \(C_{1}^{\prime}-C_{2}^{\prime}\) to a vertex in \(C_{2}^{\prime}-C_{1}^{\prime}\). If a vertex \(v_{i}^{\prime}\) appears anywhere in \(P\), then we may replace it with \(v_{i}\), since these two vertices have the same neighbourhoods. Note that the resulting path is still \((C_{1}^{\prime}\cap C_{2}^{\prime})\)-avoiding, and still joins a vertex of \(C_{1}^{\prime}-C_{2}^{\prime}\) to a vertex of \(C_{2}^{\prime}-C_{1}^{\prime}\). Thus we can assume that \(P\) is a path of \(G\), and is consequently a \((C_{1}\cap C_{2})\)-avoiding path of \(G\) from a vertex of \(C_{1}-C_{2}\) to a vertex of \(C_{2}-C_{1}\). This shows that \(C_{1}\) and \(C_{2}\) are not adjacent in \(C_{R}(G)\) so the proof is complete.
We have established that every reduced clique graph is isomorphic to a rotunda graph. Next we start moving towards proving the converse.
**Definition 4.7**.: Let \(M\) be a connected supersolvable and saturated matroid, and let \(G\) be a \(2\)-connected chordal graph. Assume that \(\theta\) is a function from \(E(M)\) to the powerset of \(V(G)\). If \(U\) is a subset of vertices in \(G\), then let \(\theta^{-1}(U)\) be \(\{x\in E(M)\colon\theta(x)\subseteq U\}\). For any subset \(R\subseteq E(M)\), let \(\theta(R)\) stand for \(\cup_{x\in R}\theta(x)\). Thus we can think of \(\theta\) as being a function from \(\mathcal{P}(E(M))\) to \(\mathcal{P}(V(G))\) such that \(R\subseteq R^{\prime}\) if and only if \(\theta(R)\subseteq\theta(R^{\prime})\). Assume that the following properties hold:
(i) \(|\theta(x)|=2\) for every \(x\in E(M)\),
(ii) for any vertex \(v\in V(G)\) there exists exactly one element \(x\in E(M)\) such that \(v\) is in \(\theta(x)\),
(iii) if \(R\) is a non-empty round flat of \(M\), then \(\theta(R)\) is a clique,
(iv) if \(F\) is a modular flat of \(M\) and \(U\) is a union of connected components of \(G-\theta(F)\), then \(F\cup\theta^{-1}(U)\) is a modular flat of \(M\), and
(v) the restriction of \(\theta\) to \(\mathcal{R}(M)\) is a bijection from \(\mathcal{R}(M)\) to the maximal cliques of \(G\), and this bijection is an isomorphism between \(R(M)\) and \(C_{R}(G)\).
If all these conditions hold, then we will say that \((G,\theta)\) is _compliant_ with \(M\).
**Lemma 4.8**.: _Let \(M\) be a connected supersolvable and saturated matroid. There exists a \(2\)-connected chordal graph \(G\) and a function \(\theta\colon E(M)\to\mathcal{P}(V(G))\) such that \((G,\theta)\) is compliant with \(M\)._
Proof.: The proof is a straightforward induction, although the technical details require some work. If \(M\) has rank at most one, then we can simply make \(G\) a clique of the appropriate size. Now we are going to choose \(C^{*}\) to be the complement of a modular hyperplane, \(H\). Then inductively \(M|H\) has a compliant graph. The intersection of \(H\) with \(\operatorname{cl}(C^{*})\) is a round flat, and therefore corresponds to a clique. We create a new maximal clique by adding new vertices and making them adjacent to each other and to the clique corresponding to \(H\cap\operatorname{cl}(C^{*})\). The rest of the proof involves nothing more than checking that this construction does indeed satisfy the conditions for compliance.
To implement this strategy, we let \(M\) be a supersolvable saturated matroid. Assume \(r(M)\leq 1\). We can easily see that the only rotunda of \(M\) is \(E(M)\) itself. We let \(G\) be isomorphic to \(K_{2|E(M)|}\), and we consider an arbitrary partition of \(V(G)\) into blocks of size two. We then set \(\theta\) to be an arbitrary bijection from \(E(M)\) to the blocks of the partition. It is not hard to verify that \((G,\theta)\) is compliant with \(M\). Therefore we assume that \(r(M)>1\).
Let \(H\) be a modular hyperplane of \(M\) such that \(M|H\) is supersolvable. Then \(M|H\) is also saturated. Proposition 2.7 says that \(M|H\) is connected. Therefore we can apply the obvious inductive hypothesis and let \(G^{\prime}\) be a \(2\)-connected chordal graph with a function \(\theta^{\prime}\colon H\to\mathcal{P}(V(G^{\prime}))\) such that \((G^{\prime},\theta^{\prime})\) is compliant with \(M|H\).
Let \(C^{*}\) be the complementary cocircuit of \(H\), and let \(R\) be the closure of \(C^{*}\). Proposition 2.14 says that \(R\) is a rotunda, and furthermore it is the only rotunda of \(M\) that is not contained in \(H\). Certainly \(C^{*}\) is non-empty, and \(r(H)=r(M)-1>0\), so \(H\) is non-empty also. But \((H,C^{*})\) is not a separation of \(M\), since \(M\) is connected. As \(H\) is modular, we deduce that
\[r(R\cap H)=r(\operatorname{cl}(C^{*})\cap H)=r(C^{*})+r(H)-r(M)>0.\]
Therefore \(R\cap H\) is non-empty and Proposition 2.15 tells us that \(R\cap H\) is round.
Let \(W\) be \(\theta^{\prime}(R\cap H)\). Since \((G^{\prime},\theta^{\prime})\) is compliant with \(M|H\), we see that \(W\) is the set of vertices of a clique in \(G^{\prime}\). Note also that \(|W|=2|R\cap H|\geq 2\). We produce \(G\) from \(G^{\prime}\) by adding \(Y\), a set of \(2|C^{*}|\) new vertices, making the vertices of \(Y\) pairwise adjacent, and making each of them adjacent to all the vertices of \(W\). Note that \(W\cup Y\) is a maximal clique of \(G\) and \(G^{\prime}=G-Y\). Because \(W\) has at least two vertices it is easy to see that \(G\) is \(2\)-connected. The neighbours of any vertex in \(Y\) form a clique in \(G\). Therefore we can construct a perfect elimination order for \(G\) by
prepending the vertices of \(Y\) to a perfect elimination order for \(G^{\prime}\). It follows that \(G\) is chordal.
Consider an arbitrary partition of \(Y\) into pairs of vertices, and let \(\phi\) be an arbitrary bijection from \(C^{*}\) to the blocks of this partition. Then we define \(\theta\) to be the union of \(\theta^{\prime}\) and \(\phi\). Note that \(|\theta(x)|=2\) for any \(x\in E(M)\), and for any vertex \(v\) of \(G\), there is exactly one element \(x\in E(M)\) such that \(v\) is in \(\theta(x)\). Therefore the remainder of the proof consists in showing that \(\theta\) satisfies conditions (iii), (iv), and (v) in Definition 4.7.
Proposition 2.13 tells us that if \(Z\) is a round flat of \(M\) then either \(Z\subseteq H\) or \(Z\subseteq R\). In the former case, \(Z\) is a round flat of \(M|H\), and \(\theta(Z)=\theta^{\prime}(Z)\) is a clique of \(G\), since \(\theta^{\prime}\) satisfies (iii). In the latter case \(\theta(Z)\) is a subset of \(W\cup Y\), and again \(\theta(Z)\) is a clique of \(G\). So condition (iii) holds for \((G,\theta)\).
**4.8.1**.: _Condition (iv) in Definition 4.7 holds for \((G,\theta)\)._
Proof.: Let \(F\) be a modular flat of \(M\), and let \(U\) be a union of connected components of \(G-\theta(F)\). Let \(D\) be \(F\cup\theta^{-1}(U)\). Thus our aim is to show that \(D\) is a modular flat of \(M\). Assume that \(U\) is the empty union. In this case \(D=F\cup\theta^{-1}(U)=F\) and since \(F\) is a modular flat there is nothing left to prove. Therefore we assume that \(U\) contains at least one connected component of \(G-\theta(F)\).
Assume that \(D\) is disjoint with \(C^{*}\). This means that \(\theta(F)\cup U\) is disjoint with \(Y\). Thus \(F\) is a modular flat of \(M|H\). If \(U\) is not a union of connected components in \(G^{\prime}-\theta^{\prime}(F)\) then there is a connected component of this graph that contains vertices \(u\in U\) and \(v\notin U\). There is a path of \(G^{\prime}-\theta^{\prime}(F)=G-(\theta(F)\cup Y)\) from \(u\) to \(v\). Hence \(u\) and \(v\) are in the same component of \(G-\theta(F)\). This contradicts the fact that \(U\) is a union of components in this graph. Hence \(U\) is a union of components of \(G^{\prime}-\theta^{\prime}(F)\), so we can apply the inductive assumption and see that \(D=F\cup(\theta^{\prime})^{-1}(U)=F\cup\theta^{-1}(U)\) is a modular flat of \(M|H\). Therefore \(D\) is a modular flat of \(M\) and we are done. Hence we assume that \(D\) contains at least one element of \(C^{*}\).
Now \(\theta(F)\cup U\) contains at least two vertices from \(Y\). Since any such vertex is adjacent to every vertex in \(W\cup Y\), and \(U\) is a non-empty union of connected components, it now follows that \(\theta(F)\cup U\) contains \(W\cup Y\). Note that \(D\) contains \(\theta^{-1}(W\cup Y)=R=\operatorname{cl}(C^{*})\).
Assume that \(U-Y\) is not a union of connected components in \(G^{\prime}-\theta(F\cap H)\). Then there is a connected component of \(G^{\prime}-\theta(F\cap H)\) that contains vertices \(u\in U-Y\) and \(v\notin U-Y\). There is a path from \(u\) to \(v\) in \(G^{\prime}-\theta(F\cap H)=G-(\theta(F)\cup Y)\). Thus \(u\) and \(v\) are in the same connected component of \(G-\theta(F)\). This means that \(u\) and \(v\) are both in \(U\), since \(U\) is a union of connected components in this graph. Since \(v\) is not in \(U-Y\) this means that \(v\) is in \(Y\). But this is impossible, since \(v\) is a vertex of \(G^{\prime}\), which is equal to \(G-Y\). This shows that \(U-Y\) is a union of connected components in \(G^{\prime}-\theta(F\cap H)\).
We note that \(F\cap H\) is a modular flat of \(M|H\) since both \(F\) and \(H\) are modular in \(M\). The inductive hypothesis now tells us that
\[(F\cap H)\cup\theta^{-1}(U-Y)\]
is a modular flat of \(M|H\). Let this flat be \(D^{\prime}\). Note that because \(C^{*}\subseteq D\) we have
\[D=F\cup\theta^{-1}(U)=(F\cap H)\cup\theta^{-1}(U-Y)\cup C^{*}=D^{\prime}\cup C ^{*}.\]
Let \(P\) be the union \(\cup P_{H}(x,y)\), where \(x\) and \(y\) range over all distinct rank-one flats contained in \(C^{*}\). Thus \(P\) is a subset of \(R\cap H\subseteq D\cap H=D^{\prime}\). Note that
\[\operatorname{cl}(D)=(\operatorname{cl}(D)\cap H)\cup C^{*}=(\operatorname{ cl}(D^{\prime}\cup C^{*})\cap H)\cup C^{*}.\]
Now we apply Proposition 2.6. Since \(P\subseteq D^{\prime}\subseteq H\) we see that
\[(\operatorname{cl}(D^{\prime}\cup C^{*})\cap H)\cup C^{*}=\operatorname{cl}( D^{\prime})\cup C^{*}=D^{\prime}\cup C^{*}=D.\]
Thus \(D\) is a flat of \(M\).
Assume that \(D\) is not a modular flat of \(M\), and let \(F^{\prime}\) be a flat of \(M\) that is disjoint with \(D\), chosen so that \(C\subseteq D\cup F^{\prime}\) is a circuit that contains elements of both \(D\) and \(F^{\prime}\). Choose \(C\) so that \(|C\cap C^{*}|\) is as small as possible. Exactly as in the proof of Proposition 2.11 we can prove that \(C\cap C^{*}\) contains distinct elements \(x\) and \(y\). We choose \(p\) to be an element in \(P_{H}(x,y)\), and we perform strong circuit elimination on \(C\) and \(\{x,y,p\}\). In this way we find a circuit contained in \(D\cup F^{\prime}\) that contains elements of both sets, and contains fewer elements of \(C^{*}\) than \(C\). This contradiction shows that \(D\) is a modular flat of \(M\) so condition (iv) holds.
**4.8.2**.: _The restriction of \(\theta\) to \(\mathcal{R}(M)\) is a bijection between \(\mathcal{R}(M)\) and the maximal cliques of \(G\)._
Proof.: The inductive hypothesis means that \(\theta^{\prime}\) induces a bijection between the rotunda of \(M|H\) and the maximal cliques of \(G^{\prime}\). First assume that \(R\cap H\) is a rotunda of \(M|H\), so that \(W=\theta^{\prime}(R\cap H)\) is a maximal clique of \(G^{\prime}\). Now Proposition 4.2 shows that the rotunda of \(M\) are the rotunda of \(M|H\), except that \(R\cap H\) has been replaced by \(R\). It is easy to see that the maximal cliques of \(G\) are the maximal cliques of \(G^{\prime}\), except that \(W\) has been replaced by \(W\cup Y\). We observe that \(\theta(R)=W\cup Y\) and now it follows that \(\theta|_{\mathcal{R}(M)}\) is a bijection between the rotunda of \(M\) and the maximal cliques of \(G\).
Next we assume that \(R\cap H\) is not a rotunda of \(M|H\). Proposition 4.2 implies that every rotunda of \(M|H\) is also a rotunda of \(M\). Furthermore \(R\) is the only rotunda of \(M\) that is not a rotunda of \(M|H\). Because \(R\cap H\) is not a rotunda of \(M|H\), we can let \(Z\) be a rotunda of \(M|H\) that properly contains \(R\cap H\). Now \(W=\theta^{\prime}(R\cap H)\) is properly contained in \(\theta^{\prime}(Z)\). Since \(Z\) is round, we see that \(\theta(Z)=\theta^{\prime}(Z)\) is a clique that properly contains \(W\). Therefore \(W\) is not a maximal clique of \(G^{\prime}\). Now it is easy to see that every maximal clique of \(G^{\prime}\) is a maximal clique of \(G\), and that \(W\cup Y\) is
the only maximal clique of \(G\) that is not a maximal clique of \(G^{\prime}\). The claim follows.
We can complete the proof of Lemma 4.8 by proving that the restriction of \(\theta\) to \(\mathcal{R}(M)\) is an isomorphism from \(R(M)\) to \(C_{R}(G)\). Let \(Z\) and \(Z^{\prime}\) be distinct rotunda of \(M\). We will show that they are adjacent in \(R(M)\) if and only if \(\theta(Z)\) and \(\theta(Z^{\prime})\) are adjacent in \(C_{R}(G)\).
**Case 1.**_Neither \(Z\) nor \(Z^{\prime}\) is equal to \(R\)_. In this case both \(Z\) and \(Z^{\prime}\) are rotunda of \(M|H\), and \(\theta(Z)\) and \(\theta(Z^{\prime})\) are maximal cliques of \(G^{\prime}\). Assume that \(\theta(Z)\) and \(\theta(Z^{\prime})\) are adjacent in \(C_{R}(G)\). Then these maximal cliques have at least one vertex in common, and there is no \((\theta(Z)\cap\theta(Z^{\prime}))\)-avoiding path in \(G\) from a vertex of \(\theta(Z)-\theta(Z^{\prime})\) to a vertex of \(\theta(Z^{\prime})-\theta(Z)\). Exactly the same statements apply to \(\theta^{\prime}(Z)\) and \(\theta^{\prime}(Z^{\prime})\) in \(G^{\prime}\), so \(\theta^{\prime}(Z)\) and \(\theta^{\prime}(Z^{\prime})\) are adjacent in \(C_{R}(G^{\prime})\). The inductive assumption implies that \(Z\) and \(Z^{\prime}\) are adjacent in \(R(M|H)\). Proposition 4.2 now implies that they are also adjacent in \(R(M)\).
For the converse, assume that \(Z\) and \(Z^{\prime}\) are adjacent in \(R(M)\), and let \((F,F^{\prime})\) be a modular cover of \(M\) that certifies the adjacency. We assume that \(Z\subseteq F\) and \(Z^{\prime}\subseteq F^{\prime}\). Let us assume that both \(F\cap C^{*}\) and \(F^{\prime}\cap C^{*}\) are non-empty. No element of \(F\cap C^{*}\) is in \(F^{\prime}\), because any such element would be in \(F\cap F^{\prime}=Z\cap Z^{\prime}\), and this is not possible since \(Z\) and \(Z^{\prime}\) are subsets of \(H=E(M)-C^{*}\). Symmetrically, no element of \(F^{\prime}\cap C^{*}\) is in \(F\). So \(F\cap R\) does not contain any element of \(F^{\prime}\cap C^{*}\) and \(F^{\prime}\cap R\) does not contain any element of \(F\cap C^{*}\). This shows that \((F\cap R,F^{\prime}\cap R)\) is a vertical cover of \(R\), which is impossible as \(R\) is a round flat. Therefore either \(F\cap C^{*}=\emptyset\) or \(F^{\prime}\cap C^{*}=\emptyset\). We assume the latter, so \(C^{*}\) is a subset of \(F\) and \(F^{\prime}\) is a subset of \(H\).
Proposition 2.9 says that \(F^{\prime}\) does not contain \(Z\). It therefore does not contain \(H\), so we can apply Proposition 2.12 and deduce that
\[(F\cap H,F^{\prime}\cap H)=(F\cap H,F^{\prime})\]
is a modular cover of \(M|H\). Note that
\[(F\cap H)\cap F^{\prime}=F\cap F^{\prime}=Z\cap Z^{\prime}\]
so \(Z\) and \(Z^{\prime}\) are adjacent in \(R(M|H)\). By the inductive hypothesis, \(\theta^{\prime}(Z)=\theta(Z)\) and \(\theta^{\prime}(Z^{\prime})=\theta(Z^{\prime})\) are adjacent in \(C_{R}(G^{\prime})\). Since \(Z\) and \(Z^{\prime}\) are rotunda of \(M\), neither is equal to \(R\cap H\), which is properly contained in \(R\). Therefore neither \(\theta(Z)\) nor \(\theta(Z^{\prime})\) is equal to \(W\). Because \(\theta(Z)\) and \(\theta(Z^{\prime})\) are adjacent in \(C_{R}(G^{\prime})\) they have a non-empty intersection.
Assume that \(\theta(Z)\) and \(\theta(Z^{\prime})\) are not adjacent in \(C_{R}(G)\). Let \(P\) be a \((\theta(Z)\cap\theta(Z^{\prime}))\)-avoiding path of \(G\) from a vertex \(a\in\theta(Z)-\theta(Z^{\prime})\) to a vertex \(b\in\theta(Z^{\prime})-\theta(Z)\). Because no such path can exist in \(G^{\prime}=G-Y\), it follows that \(P\) contains a vertex in \(Y\). Let \(y\) and \(y^{\prime}\), respectively, be the first and last vertices of \(P\) that are in \(Y\). Note that \(y\) and \(y^{\prime}\) are not equal to \(a\) or \(b\), which are vertices of \(G^{\prime}\). Let \(w\) be the neighbour of \(y\) in the subpath of \(P\) from \(y\) to \(a\). Similarly let \(w^{\prime}\) be the neighbour of \(y^{\prime}\) in the subpath from
\(y^{\prime}\) to \(b\). Because \(w\) and \(w^{\prime}\) are adjacent to vertices in \(Y\), but are not in \(Y\), they must be in \(W\). Thus \(w\) and \(w^{\prime}\) are adjacent, so there is a path of \(G^{\prime}\) from \(a\) to \(b\) that avoids any vertex in \(\theta(Z)\cap\theta(Z^{\prime})\). This is a contradiction, so we conclude that \(\theta(Z)\) and \(\theta(Z^{\prime})\) are adjacent in \(C_{R}(G)\).
We have now completed the case that neither \(Z\) nor \(Z^{\prime}\) is equal to \(R\).
**Case 2.**_One of \(Z\) and \(Z^{\prime}\) is equal to \(R\)_. We let \(Z\) be a rotunda of \(M\) that is distinct from \(R\), and we will prove that \(Z\) and \(R\) are adjacent in \(R(M)\) if and only if \(\theta(Z)\) and \(\theta(R)=W\cup Y\) are adjacent in \(C_{R}(G)\). Observe that \(Z\) is contained in \(H\) by Proposition 2.14.
First assume that \(\theta(Z)\) and \(\theta(R)\) are adjacent in \(C_{R}(G)\). Because \(\theta\) sends distinct elements of \(E(M)\) to distinct pairs of vertices, it cannot be the case that \(Z\cap R=\emptyset\), or else \(\theta(Z)\) and \(\theta(R)\) would have no vertices in common, contradicting their adjacency in \(C_{R}(G)\). Thus \(Z\) and \(R\) are non-disjoint.
Assume \(Z\) contains \(R\cap H\). If \(C^{*}\) is spanning in \(M\), then \(R\cap H=H\), so \(\theta(H)=W\) is a clique. In this case \(G=W\cup Y\) is a clique, but we have assumed that \(M\) has at least two distinct rotunda, so \(G\) has at least two distinct maximal cliques by 4.8.2. Thus \(C^{*}\) is not spanning. Proposition 3.4 says that \((H,R)\) is a modular cover of \(M\). Now \(R\cap H=R\cap Z\) so \((H,R)\) certifies that \(Z\) and \(R\) are adjacent in \(R(M)\) and we have nothing left to prove. Therefore we will assume that \(Z\) does not contain \(R\cap H\). Hence \(Z\cap R\) is a proper and non-empty subset of \(R\cap H\). It follows that \(\theta(Z)\) contains some, but not all, of the vertices of \(W\).
By Proposition 2.15 we know that \(R\cap H\) is a round flat of \(M|H\). Let \(Z_{0}\) be a rotunda of \(M|H\) that contains \(R\cap H\). Thus \(Z_{0}\) is not equal to \(Z\), but it may be equal to \(R\cap H\). Now \(\theta^{\prime}(Z_{0})=\theta(Z_{0})\) is a maximal clique of \(G^{\prime}\) that contains \(W\). Assume that \(\theta(Z)\) and \(\theta(Z_{0})\) are not adjacent in \(C_{R}(G^{\prime})\). Because these cliques have at least one vertex of \(W\) in common, we can let \(P\) be a \((\theta(Z)\cap\theta(Z_{0}))\)-avoiding path of \(G^{\prime}\) from a vertex \(a\in\theta(Z)-\theta(Z_{0})\) to a vertex \(b\in\theta(Z_{0})-\theta(Z)\). Note that \(P\) contains no vertex of \(\theta(Z)\cap\theta(R)\). But \(P\) is also a path of \(G\), and \(b\) is adjacent to any vertex of \(W-\theta(Z)\). Thus, if necessary, we can adjoin an edge to \(P\) from \(b\) to a vertex of \(W-\theta(Z)\), and certify that \(\theta(Z)\) and \(\theta(R)\) are not adjacent in \(C_{R}(G)\), contrary to hypothesis. Therefore \(\theta(Z)\) and \(\theta(Z_{0})\) are adjacent in \(C_{R}(G^{\prime})\), so by induction \(Z\) and \(Z_{0}\) are adjacent in \(R(M|H)\).
Because \(Z_{0}\) contains \(R\cap H\), the intersection of \(Z\) and \(Z_{0}\) contains \(Z\cap R\). Assume this containment is proper, and let \(e\) be an element of \(Z\cap Z_{0}\) that is not in \(Z\cap R\). Let \(v\) be a vertex in \(\theta(e)\). Thus \(v\) is in \(\theta(Z)-\theta(R)\). Choose \(w\), an arbitrary vertex in \(W-\theta(Z)\). Because \(v\) is in \(\theta(Z_{0})\), which contains \(W\), it follows that \(v\) and \(w\) are adjacent. Since \(w\) is in \(\theta(R)-\theta(Z)\), we now see that \(\theta(Z)\) and \(\theta(R)\) are not adjacent in \(C_{R}(G)\), contrary to hypothesis. We conclude that \(Z\cap Z_{0}=Z\cap R\).
Since \(Z\) and \(Z_{0}\) are adjacent in \(R(M|H)\), we can let \((F,F^{\prime})\) be a modular cover of \(M|H\) that certifies this adjacency, where \(Z\subseteq F\) and \(Z_{0}\subseteq F^{\prime}\). Because \(Z_{0}\) contains \(R\cap H\), it follows that \(F^{\prime}\) contains \(\cup P_{H}(x,y)\), where \(x\) and \(y\) range over distinct rank-one flats contained in \(C^{*}\). Proposition
2.11 says that \((F,F^{\prime}\cup C^{*})\) is a modular cover of \(M\). Certainly \(Z\subseteq F\) and \(Z_{0}\subseteq F^{\prime}\cup C^{*}\). Furthermore,
\[F\cap(F^{\prime}\cup C^{*})=F\cap F^{\prime}=Z\cap Z_{0}=Z\cap R.\]
Thus \((F,F^{\prime}\cup C^{*})\) certifies that \(Z\) and \(R\) are adjacent in \(R(M)\), exactly as desired.
For the converse, we assume that \(Z\) and \(R\) are adjacent in \(R(M)\). Thus \(Z\cap R\) is non-empty. Assume that \(Z\) contains \(R\cap H\). Then \(\theta(Z)\) contains \(\theta(R\cap H)=W\), so \(\theta(Z)\cap\theta(R)=W\). In \(G-W\) there is no path from a vertex of \(\theta(R)-\theta(Z)=Y\) to a vertex not in \(Y\), and in particular there is no path to a vertex in \(\theta(Z)-\theta(R)\). So in this case \(\theta(Z)\) and \(\theta(R)\) are adjacent in \(C_{R}(G)\) and we have nothing left to prove. Therefore we will assume that \(Z\) does not contain \(R\cap H\). Hence \(Z\cap R\) is a non-empty proper subset of \(R\cap H\). Since \(R\cap H\) is a round flat of \(M|H\) by Proposition 2.15, we can let \(Z_{0}\) be a rotunda of \(M|H\) that contains \(R\cap H\). Thus \(Z_{0}\) may be equal to \(R\cap H\), but it is not equal to \(Z\).
Let \((F,F^{\prime})\) be a modular cover of \(M\) that certifies the adjacency of \(R\) and \(Z\) in \(R(M)\), where \(R\subseteq F\) and \(Z\subseteq F^{\prime}\). Because \(F\cap F^{\prime}=R\cap Z\) and \(Z\) is contained in \(H\) it follows that \(F^{\prime}\) is contained in \(H\). If \(F^{\prime}=H\), then \(F\cap F^{\prime}\) contains \(R\cap H\), which properly contains \(Z\cap R\). This contradicts \(F\cap F^{\prime}=R\cap Z\), so \(F^{\prime}\) does not contain \(H\). By applying Proposition 2.12, we see that \((F\cap H,F^{\prime}\cap H)=(F\cap H,F^{\prime})\) is a modular cover of \(M|H\).
Because \(Z_{0}\) is round, one of \(F\cap Z_{0}\) and \(F^{\prime}\cap Z_{0}\) is not a proper flat of \(M|Z_{0}\). That is, \(Z_{0}\) is contained in either \(F\) or \(F^{\prime}\). Assume \(Z_{0}\) is contained in \(F^{\prime}\). Then \(R\cap H\subseteq Z_{0}\subseteq F^{\prime}\) and \(R\subseteq F\) so \(F\cap F^{\prime}\) contains \(R\cap H\). This is a contradiction as \(F\cap F^{\prime}=R\cap Z\), which is a non-empty proper subset of \(R\cap H\). Therefore \(Z_{0}\) is contained in \(F\). We observe that
\[(F\cap H)\cap F^{\prime}=(F\cap F^{\prime})\cap H=(R\cap Z)\cap H=R\cap Z=F \cap F^{\prime}\supseteq Z_{0}\cap Z.\]
Assume that \(F\cap F^{\prime}\) properly contains \(Z_{0}\cap Z\) and let \(e\) be an element of \((F\cap F^{\prime})-(Z_{0}\cap Z)\). Since \(F\cap F^{\prime}=R\cap Z\) it follows that \(e\) is in \(Z\). But we also have
\[e\in F\cap F^{\prime}=R\cap Z\subset R\cap H\subseteq Z_{0}.\]
Thus \(e\) is in \(Z_{0}\cap Z\) after all and we have a contradiction. Thus \((F\cap H)\cap F^{\prime}=Z_{0}\cap Z=R\cap Z\) and the modular cover \((F\cap H,F^{\prime})\) of \(M|H\) certifies that \(Z_{0}\) and \(Z\) are adjacent in \(R(M|H)\). Induction now tells us that \(\theta(Z_{0})\) and \(\theta(Z)\) are adjacent in \(C_{R}(G^{\prime})\).
Assume that \(\theta(Z)\) and \(\theta(R)=W\cup Y\) are not adjacent in \(C_{R}(G)\). These cliques certainly have common vertices, so we can let \(P\) be a path from \(a\in\theta(Z)-\theta(R)\) to \(b\in\theta(R)-\theta(Z)\) such that \(P\) contains no vertex of \(\theta(Z)\cap\theta(R)=\theta(Z)\cap\theta(Z_{0})\). If \(P\) is a path of \(G^{\prime}\) then it certifies that \(\theta(Z)\) and \(\theta(Z_{0})\) are not adjacent in \(C_{R}(G^{\prime})\), contrary to our earlier conclusion. Therefore \(P\) contains at least one vertex in \(Y\). Consider the maximal initial subpath of \(P\) that starts at \(a\) and contains no vertex of \(Y\), and let \(w\) be its final vertex. Note that \(w\) is in \(W-\theta(Z)\subseteq\theta(Z_{0})-\theta(Z)\). So this subpath certifies that \(\theta(Z)\) and \(\theta(Z_{0})\) are not adjacent in \(C_{R}(G^{\prime})\), and we have another contradiction that completes the proof.
Proof of Theorem 1.1.: Lemma 4.6 shows that every reduced clique graph is isomorphic to a rotunda graph. On the other hand, if \(M\) is a supersolvable saturated matroid with connected components \(M_{1},\ldots,M_{n}\), then \(R(M)\) is the disjoint union of \(R(M_{1}),\ldots,R(M_{n})\), as we observed in Proposition 4.5. Lemma 4.8 shows that each \(R(M_{i})\) is isomorphic to \(C_{R}(G_{i})\) for some \(2\)-connected chordal graph \(G_{i}\). If \(G\) is the disjoint union of \(G_{1},\ldots,G_{n}\), then \(C_{R}(G)\) is the disjoint union of \(C_{R}(G_{1}),\ldots,C_{R}(G_{n})\), and is thus isomorphic to \(R(M)\). So any rotunda graph is isomorphic to a reduced clique graph.
**Lemma 4.9**.: _Let \(M\) be a supersolvable saturated matroid. Then \(R(M)\) is connected if and only if \(M\) is connected._
Proof.: In Proposition 4.5 we noted that if \(N_{1},\ldots,N_{k}\) are the connected components of \(M\), then \(R(M)\) is the disjoint union of \(R(N_{1}),\ldots,R(N_{k})\). So if \(M\) is not connected then neither is \(R(M)\). For the converse, we let \(M\) be a connected supersolvable saturated matroid. Lemma 4.8 shows that \(R(M)\) is isomorphic to \(C_{R}(G)\), where \(G\) is a \(2\)-connected chordal graph. From Corollary 3.1 in [9] we see that \(C_{R}(G)\), and hence \(R(M)\), is connected.
## 5. Clique trees and rotunda trees
**Definition 5.1**.: Let \(M\) be a matroid and let \(T\) be a tree. Let \(\tau\) be a function from \(V(T)\) to \(\mathcal{P}(E(M))\). Assume that for every element \(x\in E(M)\) there is at least one vertex \(v\in V(T)\) such that \(x\in\tau(v)\). In this case we say that \((T,\tau)\) is a _tree-decomposition_ of \(M\). If for every element \(x\in E(M)\) there is _exactly one_ vertex \(v\in V(T)\) such that \(x\in\tau(v)\) then the tree-decomposition is _strict_.
In other words, the tree-decomposition is strict if \(\{\tau(t)\}_{t\in V(T)}\) is a partition of \(E(M)\).
Let \(G\) be a graph. A _clique tree_ of \(G\) is a pair \((T,\rho)\) where \(T\) is a tree and \(\rho\) is a bijection from \(V(T)\) to the set of maximal cliques of \(G\). We insist that for any \(v\in V(G)\), the set \(\{t\in V(T)\colon v\in\rho(t)\}\) induces a subtree of \(T\). Clique trees were introduced by Gavril [5], who showed that a graph has a clique tree if and only if it is chordal.
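Anticipating Theorem 5.3 below, a clique tree of a connected chordal graph can be computed as a maximum-weight spanning tree of \(C_{R}(G)\) under the legitimate weighting \(\sigma(X)=|X|\). The following is a minimal sketch, assuming networkx and reusing the `reduced_clique_graph` function from Section 4.

```python
import networkx as nx

def clique_tree(G):
    """One clique tree of a connected chordal graph G. By Theorem 5.3 the
    clique trees are exactly the maximum-weight spanning trees of C_R(G)
    under a legitimate weighting; here sigma(C1 ∩ C2) = |C1 ∩ C2|."""
    CR = reduced_clique_graph(G)  # from Section 4; edges carry weight |C1 ∩ C2|
    return nx.maximum_spanning_tree(CR)
```

Our next step is to define a matroid analogue of a clique tree.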
**Definition 5.2**.: Let \(M\) be a matroid, and let \((T,\tau)\) be a tree-decomposition of \(M\) such that \(\tau\) is a bijection from \(V(T)\) to \(\mathcal{R}(M)\). If, for every \(x\in E(M)\), the set \(\{t\in V(T)\colon x\in\tau(t)\}\) induces a subtree of \(T\), then \((T,\tau)\) is a _rotunda tree_ of \(M\).
In the following material we must apply weights to the edges of reduced clique graphs and rotunda graphs. Let \(G\) be a chordal graph. Let \(\sigma\) be a function which takes the set
\[\{\emptyset\}\cup\{C\cap C^{\prime}\colon C\text{ and }C^{\prime}\text{ are distinct maximal cliques of }G\}\]
to non-negative integers, and where the following conditions hold:
1. \(\sigma(\emptyset)=0\),
2. if \(X\) and \(X^{\prime}\) are in the domain of \(\sigma\) and \(X\) is a proper subset of \(X^{\prime}\), then \(\sigma(X)<\sigma(X^{\prime})\).
In this case \(\sigma\) is a _legitimate weighting_ of \(G\). The function \(\sigma\) applies a weight to each edge of \(C_{R}(G)\), where the weight of the edge between \(C\) and \(C^{\prime}\) is \(\sigma(C\cap C^{\prime})\). The following result is the main theorem of [9].
**Theorem 5.3**.: _Let \(G\) be a connected chordal graph and let \(\sigma\) be a legitimate weighting. Every clique tree is a spanning tree of \(C_{R}(G)\) and every edge of \(C_{R}(G)\) is contained in a clique tree. Moreover, a spanning tree of \(C_{R}(G)\) is a clique tree if and only if it has maximum weight amongst all spanning trees._
Galinier, Habib, and Paul [4] prove the special case of Theorem 5.3 where \(\sigma(C\cap C^{\prime})=|C\cap C^{\prime}|\), but their proof contains a flaw which is explained in [9]. Next we consider the matroid analogue of legitimate weightings.
**Definition 5.4**.: Let \(M\) be a supersolvable saturated matroid. Let \(\sigma\) be a function taking
\[\{\emptyset\}\cup\{R\cap R^{\prime}\colon R,R^{\prime}\in\mathcal{R}(M),R\neq R ^{\prime}\}\]
to non-negative integers, where:
1. \(\sigma(\emptyset)=0\),
2. if \(X\) and \(X^{\prime}\) are in the domain of \(\sigma\) and \(X\) is a proper subset of \(X^{\prime}\), then \(\sigma(X)<\sigma(X^{\prime})\).
Then \(\sigma\) is a _legitimate weighting_ of \(M\).
For examples of legitimate weightings, we may set \(\sigma(R\cap R^{\prime})\) to be either the rank or the size of \(R\cap R^{\prime}\), for each pair of rotunda \(R\) and \(R^{\prime}\). In the case where we use rank, the legitimacy of the weighting relies on the fact that the intersection of two rotunda is a flat.
Now we are able to prove Theorem 1.2, which we restate in a more general form here.
**Theorem 5.5**.: _Let \(M\) be a connected supersolvable and saturated matroid and let \(\sigma\) be a legitimate weighting of \(M\). Every rotunda tree of \(M\) is a spanning tree of \(R(M)\) and every edge of \(R(M)\) is contained in a rotunda tree. Moreover, a spanning tree of \(R(M)\) is a rotunda tree if and only if it has maximum weight amongst all spanning trees._
Proof.: We apply Lemma 4.8 and let \(G\) be a \(2\)-connected chordal graph and let \(\theta\colon E(M)\to\mathcal{P}(V(G))\) be a function such that \((G,\theta)\) is compliant with \(M\). Let \(H\) be a graph that is isomorphic to both \(C_{R}(G)\) and \(R(M)\). Let \(\pi_{G}\) be a bijection from \(V(H)\) to the family of maximal cliques of \(G\), and let \(\pi_{M}\) be a bijection from \(V(H)\) to \(\mathcal{R}(M)\), such that \(\pi_{G}\) and \(\pi_{M}\) are both isomorphisms.
Let \((T,\tau)\) be a rotunda tree of \(M\). Define \(\rho\) to be the composition \(\theta|_{\mathcal{R}(M)}\circ\tau\). This means that \(\rho\) is a bijection from \(V(T)\) to the set of maximal cliques of \(G\). Let \(v\) be an arbitrary vertex of \(G\), and let \(x\) be the unique element of \(E(M)\) such that \(v\) is in \(\theta(x)\). Now
\[\{t\in V(T)\colon v\in\rho(t)\}=\{t\in V(T)\colon x\in\tau(t)\}. \tag{1}\]
Because the latter set induces a connected subgraph of \(T\), so does the former. This shows that \((T,\rho)\) is a clique tree of \(G\). Therefore \(T\) is (isomorphic to) a spanning tree of \(H\) by Theorem 5.3. We have now shown that any rotunda tree of \(M\) is a spanning tree of \(R(M)\). Moreover, if \(e\) is an arbitrary edge of \(H\), then there is some spanning tree \(T\) of \(H\) such that \(T\) contains \(e\) and \((T,\rho)\) is a clique tree of \(G\) for some bijection \(\rho\). Let \(\tau\) be the composition \((\theta|_{\mathcal{R}(M)})^{-1}\circ\rho\), so that \(\tau\) is a bijection from \(V(T)\) to \(\mathcal{R}(M)\). If \(x\) is an arbitrary element of \(E(M)\) and \(v\) is a vertex in \(\theta(x)\), then Equation (1) still holds and we see that \((T,\tau)\) is a rotunda tree of \(M\) that contains the edge \(e\). Thus any edge of \(R(M)\) is contained in a rotunda tree of \(M\).
We apply weights to the edges of \(H\). If \(u\) and \(u^{\prime}\) are adjacent in \(H\), then we weight the edge between them with \(\sigma(\pi_{M}(u)\cap\pi_{M}(u^{\prime}))\). It is not difficult to see that this weighting of \(H\) is also a legitimate weighting of \(C_{R}(G)\); that is, if \(C\) and \(C^{\prime}\) are maximal cliques of \(G\) that are adjacent in \(C_{R}(G)\), and \(\sigma_{G}\) applies the weight \(\sigma(\theta^{-1}(C)\cap\theta^{-1}(C^{\prime}))\) to the edge between \(C\) and \(C^{\prime}\), then \(\sigma_{G}\) is a legitimate weighting of \(G\).
Let \(T\) be a maximum-weight spanning tree of \(H\). Then \((T,\pi_{G})\) is a clique tree of \(G\), by Theorem 5.3. Exactly as before, we see that \((T,(\theta|_{\mathcal{R}(M)})^{-1}\circ\pi_{G})\) is a rotunda tree of \(M\). On the other hand, if \(T\) is a spanning tree of \(H\) and \((T,\tau)\) is a rotunda tree of \(M\), then \((T,\theta|_{\mathcal{R}(M)}\circ\tau)\) is a clique tree of \(G\). Hence \(T\) is a maximum-weight spanning tree of \(H\). We have now proved that the rotunda trees of \(M\) are exactly the maximum-weight spanning trees of \(R(M)\), as claimed.
It follows from Theorem 1.2 that \(R(M)\) is exactly the union of all rotunda trees of \(M\).
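Theorem 5.5 also suggests a practical way to compute a rotunda tree. The following is a minimal sketch, assuming that \(R(M)\) is already available as a networkx graph whose nodes are the rotunda of \(M\), each represented as a frozenset of ground-set elements; constructing \(R(M)\) itself from the matroid is outside the scope of the sketch.

```python
# A minimal sketch of Theorem 5.5: for a connected supersolvable
# saturated matroid M, the rotunda trees are exactly the maximum-weight
# spanning trees of the rotunda graph R(M) under any legitimate weighting.
import networkx as nx

def rotunda_tree(rotunda_graph: nx.Graph) -> nx.Graph:
    """Return one rotunda tree, given R(M) as a connected networkx graph
    whose nodes are rotunda represented as frozensets of elements."""
    # Size-based legitimate weighting: sigma(R & R') = |R & R'|.
    # It sends the empty set to 0 and is strictly monotone on proper subsets.
    for u, v in rotunda_graph.edges:
        rotunda_graph[u][v]["weight"] = len(u & v)
    # By Theorem 5.5, any maximum-weight spanning tree is a rotunda tree.
    return nx.maximum_spanning_tree(rotunda_graph, weight="weight")
```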
## 6. Tree-decompositions
We recall the definition of graph tree-width. Let \(G\) be a graph. Let \(T\) be a tree and let \(\rho\) be a function from \(V(T)\) to \(\mathcal{P}(V(G))\) such that for every \(v\in V(G)\) the set \(\{t\in V(T)\colon v\in\rho(t)\}\) is non-empty and induces a subtree of \(T\). We further insist that if \(u\) and \(v\) are adjacent vertices of \(G\), then \(u,v\in\rho(t)\) for some \(t\in V(T)\). Then \((T,\rho)\) is a _tree-decomposition_ of \(G\), and the sets \(\rho(t)\) are the _bags_ of the decomposition. The _width_ of \((T,\rho)\) is the maximum size of a bag, and the _tree-width_ of \(G\) is the minimum width taken over all tree-decompositions.
Any clique tree of a chordal graph is a tree-decomposition of optimal width, where the bags of the tree-decomposition are exactly the maximal cliques [7, p. 14]. We now move towards a matroid analogue of this result.
We first introduce the notion of _matroid tree-width_, as developed by Hliněný and Whittle [8]. Recall that a tree-decomposition of a matroid \(M\) is a tree \(T\) along with a function \(\tau\colon V(T)\to\mathcal{P}(E(M))\) such that every element \(x\in E(M)\) is in at least one set \(\tau(t)\).
**Definition 6.1**.: Let \(M\) be a matroid and let \((T,\tau)\) be a tree-decomposition of \(M\). Let \(t\) be a node of \(T\) and let \(T_{1},\ldots,T_{d}\) be the connected components of \(T-t\). For each \(i\) let \(F_{i}\) be \(\cup_{s\in V(T_{i})}\tau(s)\). We define the _node-width_ of \(t\) to be
\[\left(\sum_{i=1}^{d}r\left(\tau(t)\cup\bigcup_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{d}F_{k}\right)\right)-(d-1)r(M).\]
The _width_ of \((T,\tau)\) is the maximum node-width of any node in \(T\). The _tree-width_ of \(M\) (denoted \(\operatorname{tw}(M)\)) is the smallest width of any tree-decomposition of \(M\).
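To make Definition 6.1 concrete, the sketch below computes the node-width of a single node of a tree-decomposition; the matroid enters only through a caller-supplied rank oracle \(r\) (an assumption of the sketch), and the width of \((T,\tau)\) is then the maximum of this quantity over all nodes.

```python
import networkx as nx

def node_width(T, tau, t, r, E):
    """Node-width of node t in the tree-decomposition (T, tau) of a
    matroid with ground set E, where r is a rank oracle on subsets of E."""
    rest = T.copy()
    rest.remove_node(t)
    # F_i is the union of the bags tau(s) over the i-th component of T - t.
    Fs = [frozenset().union(*(tau[s] for s in comp))
          for comp in nx.connected_components(rest)]
    d = len(Fs)
    # Sum over i of r(tau(t) ∪ union of F_k with k != i), minus (d-1) r(M).
    total = sum(
        r(frozenset(tau[t]).union(*(F for j, F in enumerate(Fs) if j != i)))
        for i in range(d)
    )
    return total - (d - 1) * r(frozenset(E))
```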
Note that this definition is not exactly that used by Hliněný and Whittle because in their definition the minimum ranges over _strict_ tree-decompositions, rather than all tree-decompositions. To see that this makes no difference to the definition, assume that the element \(x\in E(M)\) is contained in both \(\tau(u)\) and \(\tau(v)\), where \(u\) and \(v\) are distinct vertices of the tree \(T\). We redefine \(\tau\) by removing \(x\) from \(\tau(u)\). It is easy to confirm that the width of no node is increased by this change. By repeating this process we can produce a strict tree-decomposition with width no greater than the width of our original decomposition. This argument shows that there exists a strict tree-decomposition whose width is as small as possible amongst all tree-decompositions. Thus extending Hliněný and Whittle's definition to include non-strict tree-decompositions makes no difference to the parameter.
We can always let \(T\) be a tree with a single node, and let \(\tau\) take every element of \(E(M)\) to this node. It follows from the definition that the width of \((T,\tau)\) is \(r(M)\). This shows that the tree-width of any matroid \(M\) is bounded above by \(r(M)\).
**Proposition 6.2**.: _Let \(M\) be a round matroid. Then \(\operatorname{tw}(M)=r(M)\)._
Proof.: Let \(E\) be the ground set of \(M\). Let \((T,\tau)\) be any strict tree-decomposition of \(M\). We direct each edge of \(T\) in the following way. Let \(e\) be an arbitrary edge of \(T\) and assume that \(e\) joins \(u_{1}\) to \(u_{2}\). For each \(i\) let \(T_{i}\) be the connected component of \(T\backslash e\) that contains \(u_{i}\). Let \(U_{i}=\cup_{s\in V(T_{i})}\tau(s)\). Thus \((U_{1},U_{2})\) is a partition of \(E\) (since the tree-decomposition is strict), and because \(M\) is round, either \(U_{1}\) or \(U_{2}\) is spanning. If \(U_{i}\) is spanning then we direct \(e\) from \(u_{3-i}\) to \(u_{i}\). Note that it is possible for an edge to have two directions applied to it.
Let \(P\) be a maximum length directed path in \(T\), and assume that \(t\) is the final node in \(P\). Let \(T_{1},\ldots,T_{d}\) be the connected components of \(T-t\) and let \(F_{i}=\cup_{s\in V(T_{i})}\tau(s)\). Because the edges incident with \(t\) are all directed
towards \(t\), it follows that \(E-F_{i}\) is spanning for each \(i\). Since \(F_{1},\ldots,F_{d}\) are pairwise disjoint, the width of \(t\) is
\[r(M)-\sum_{i=1}^{d}(r(M)-r(E-F_{i}))=r(M)-\sum_{i=1}^{d}(r(M)-r(M))=r(M).\]
Hence the node-width of \(t\) is equal to \(r(M)\). Thus \(\operatorname{tw}(M)\geq r(M)\). We have already observed that \(\operatorname{tw}(M)\leq r(M)\) so the proof is complete.
Hliněný and Whittle show that if \(N\) is a minor of the matroid \(M\), then \(\operatorname{tw}(N)\leq\operatorname{tw}(M)\) [8, Proposition 3.1]. The next result follows from this observation and Proposition 6.2.
**Corollary 6.3**.: _Let \(M\) be a matroid and let \(R\) be a round flat of \(M\). Then \(\operatorname{tw}(M)\geq r(R)\)._
**Proposition 6.4**.: _Let \((T,\tau)\) be a rotunda tree of \(M\), a supersolvable saturated matroid. Let \(e\) be an edge of \(T\) that joins vertices \(u_{1}\) and \(u_{2}\). For \(i=1,2\), let \(T_{i}\) be the connected component of \(T\backslash e\) that contains \(u_{i}\) and let \(F_{i}\) be \(\cup_{t\in V(T_{i})}\tau(t)\). Then \((F_{1},F_{2})\) is a modular cover of \(M\) and \(F_{1}\cap F_{2}=\tau(u_{1})\cap\tau(u_{2})\)._
Proof.: Note that every element of \(E(M)\) is contained in a round flat, and hence in a rotunda. From this it follows that \(E(M)=F_{1}\cup F_{2}\).
We apply Lemma 4.8 and we let \(G\) be a \(2\)-connected chordal graph with a function \(\theta\colon E(M)\to\mathcal{P}(V(G))\) such that \((G,\theta)\) is compliant with \(M\). Let \(\rho\) be the composition \(\theta|_{\mathcal{R}(M)}\circ\tau\) so that \(\rho\) is a bijection between \(V(T)\) and the maximal cliques of \(G\). Exactly as in the proof of Theorem 1.2 we can show that \((T,\rho)\) is a clique tree of \(G\).
Define \(R_{i}\) to be the rotunda \(\tau(u_{i})\). Let \(F\) be the flat \(R_{1}\cap R_{2}\). Note that because \(R_{1}\) and \(R_{2}\) are adjacent in a rotunda tree of \(M\), they are adjacent in \(R(M)\) by Theorem 1.2. This implies that \(F\) is non-empty. Let \(C_{i}=\theta(R_{i})\) for \(i=1,2\), so that \(C_{1}\) and \(C_{2}\) are the corresponding maximal cliques of \(G\). Define \(S\) to be \(\theta(F)=C_{1}\cap C_{2}\).
Note that if \(D\) is a maximal clique of \(G\), then \(D-S\) is contained in a connected component of \(G-S\). For \(i=1,2\), let \(v_{i}\) be an arbitrary vertex of \(T_{i}\). Then the path of \(T\) from \(v_{1}\) to \(v_{2}\) contains \(u_{1}\) and \(u_{2}\). It follows from [9, Proposition 2.8] that \(\rho(v_{1})-S\) and \(\rho(v_{2})-S\) are contained in different connected components of \(G-S\). Now we let \(U\) be the union of all connected components of \(G-S\) that contain \(\rho(v)-S\) for some \(v\) in \(V(T_{1})\). From the observations in this paragraph we see that \(F\cup\theta^{-1}(U)\) is equal to \(F_{1}\). Because \((G,\theta)\) is compliant with \(M\) this means that \(F_{1}\) is a modular flat of \(M\). Symmetrically, \(F_{2}\) is a modular flat.
Let \(x\) be an arbitrary element in \(F_{1}\cap F_{2}\). Let \(v\in V(T_{1})\) and \(v^{\prime}\in V(T_{2})\) be chosen so that \(x\) is in \(\tau(v)\cap\tau(v^{\prime})\). Because \((T,\tau)\) is a rotunda tree it follows that \(x\) is in \(\tau(w)\) whenever \(w\) is in the path of \(T\) from \(v\) to \(v^{\prime}\). In particular, \(x\) is in \(\tau(u_{1})\cap\tau(u_{2})=R_{1}\cap R_{2}=F\). Thus \(F_{1}\cap F_{2}\subseteq F\). Because
\(u_{i}\) is in \(T_{i}\) for each \(i\) it follows that \(R_{i}\subseteq F_{i}\). Therefore \(F=R_{1}\cap R_{2}\) is a subset of \(F_{1}\cap F_{2}\), and now
\[\tau(u_{1})\cap\tau(u_{2})=R_{1}\cap R_{2}=F=F_{1}\cap F_{2}.\]
From this it follows that \(F_{1}\cap F_{2}\) does not contain \(R_{1}\) or \(R_{2}\), so neither \(F_{1}\) nor \(F_{2}\) is equal to \(E(M)\). Since \(F_{1}\) and \(F_{2}\) are proper modular flats of \(M\) and \(E(M)=F_{1}\cup F_{2}\) we see that \((F_{1},F_{2})\) is a modular cover and the result is proved.
Let \(M\) be a connected supersolvable and saturated matroid. We will now show that a rotunda tree of \(M\) has the properties of an optimal tree-decomposition as per Hliněný and Whittle.
**Theorem 6.5**.: _Let \(M\) be a supersolvable saturated matroid and let \((T,\tau)\) be a rotunda tree of \(M\). Then the width of \((T,\tau)\) is equal to \(\operatorname{tw}(M)\)._
Proof.: We will show that the node-width of any \(t\in V(T)\) is \(r(\tau(t))\), so that the width of \((T,\tau)\) is the maximum rank of a rotunda of \(M\). From Corollary 6.3 we see that \(\operatorname{tw}(M)\) is bounded below by this rank, so having completed this task, we will have shown that \((T,\tau)\) is a tree-decomposition of lowest possible width. It will then follow that \(\operatorname{tw}(M)\) is equal to the width of \((T,\tau)\).
So let \(t\) be an arbitrary vertex in \(T\) and let \(T_{1},\dots,T_{d}\) be the connected components of \(T-t\). For each \(i\) let \(t_{i}\) be the vertex of \(T_{i}\) that is adjacent to \(t\). Define \(F\) to be \(\tau(t)\), and let \(F_{i}\) be \(\cup_{s\in V(T_{i})}\tau(s)\) for each \(i\). We define \(\overline{F}_{i}\) to be
\[F\cup\bigcup_{\begin{subarray}{c}k=1\\ k\neq i\end{subarray}}^{d}F_{k}.\]
Therefore the node-width of \(t\) is
\[r(\overline{F}_{1})+\dots+r(\overline{F}_{d})-(d-1)r(M). \tag{2}\]
In addition, we define \(\overline{F}_{>i}\) to be
\[F\cup\bigcup_{k=i+1}^{d}F_{k}.\]
Notice that \(\overline{F}_{>1}=\overline{F}_{1}\) and that \(\overline{F}_{>d}=F\).
**6.5.1**.: _For any \(i\in\{1,\dots,d-1\}\), the intersection of \(\overline{F}_{>i}\) and \(\overline{F}_{i+1}\) is \(\overline{F}_{>i+1}\)._
Proof.: We note that
\[\overline{F}_{>i}\cap\overline{F}_{i+1}=(F\cup F_{i+1}\cup\dots \cup F_{d})\cap(F\cup F_{1}\cup\dots\cup F_{i}\cup F_{i+2}\cup\dots\cup F_{d}) \\ =(F_{i+1}\cap(F_{1}\cup\dots\cup F_{i}))\cup(F\cup F_{i+2}\cup \dots\cup F_{d}).\]
Now \(F_{i+1}\cap(F_{1}\cup\dots\cup F_{i})\) is contained in \(F_{i+1}\cap\overline{F}_{i+1}\). But Proposition 6.4 tells us that \(F_{i+1}\cap\overline{F}_{i+1}\) is equal to \(\tau(t_{i+1})\cap\tau(t)\), which is therefore contained in \(\tau(t)=F\). Hence we can remove \(F_{i+1}\cap(F_{1}\cup\dots\cup F_{i})\) from the
equation above and conclude that \(\overline{F}_{>i}\cap\overline{F}_{i+1}\) is \(F\cup F_{i+2}\cup\cdots\cup F_{d}=\overline{F}_{>i+1}\), as claimed.
Proposition 6.4 implies that \((F_{i},\overline{F}_{i})\) is a modular cover for each \(i\), so that in particular \(\overline{F}_{2}\) is a modular flat. Now Equation (2) reduces to
\[r(\overline{F}_{1}\cap\overline{F}_{2})+r(\overline{F}_{1}\cup \overline{F}_{2})+r(\overline{F}_{3})+\cdots+r(\overline{F}_{d})-(d-1)r(M)\] \[=r(\overline{F}_{>1}\cap\overline{F}_{2})+r(E(M))+r(\overline{F}_ {3})+\cdots+r(\overline{F}_{d})-(d-1)r(M)\] \[=r(\overline{F}_{>2})+r(\overline{F}_{3})+\cdots+r(\overline{F}_ {d})-(d-2)r(M)\]
where we have applied 6.5.1 in the final step. Because \(\overline{F}_{3}\) is a modular flat, we can again apply 6.5.1 and reduce to
\[r(\overline{F}_{>3})+r(\overline{F}_{4})+\cdots+r(\overline{F}_{d})-(d-3)r(M)\]
By continuing this process, we find that Equation (2) is equal to
\[r(\overline{F}_{>d})-(d-d)r(M)=r(F).\]
So the node-width of \(t\) is \(r(\tau(t))=r(F)\), exactly as we claimed, and the theorem is proved.
Theorem 6.5 is the central result of this section: because a rotunda tree is a tree-decomposition of optimal width for a supersolvable saturated matroid \(M\), we can treat it as a canonical tree-decomposition of \(M\).
**Corollary 6.6**.: _Let \(M\) be a supersolvable saturated matroid. Then \(\operatorname{tw}(M)=\max\{r(R)\colon R\in\mathcal{R}(M)\}\)._
Further observe the following. Let \(\operatorname{bw}(M)\) denote the branch-width of \(M\). By [8, Theorem 4.2] we see that
\[\operatorname{bw}(M)-1\leq\operatorname{tw}(M)\leq\max\{2\operatorname{bw}(M )-2,1\}.\]
We therefore see that, given a supersolvable saturated matroid \(M\) of branch-width \(k\), there must be a rotunda tree of \(M\) in which the rank of the largest rotunda is bounded by a function of \(k\). As a result, we can conclude that supersolvable saturated matroids have canonical tree-decompositions of optimal tree-width, in much the same way as chordal graphs have canonical tree-decompositions in which each bag is a maximal clique of the graph.
This theorem has algorithmic implications for how we can efficiently find the tree-width of a supersolvable saturated matroid. However, for this to work we would need an efficient method for constructing the rotunda graph.
## 7. Acknowledgements
We thank Geoff Whittle, who supervised the thesis of the second author (which includes much of the material in this article). We also thank a referee of an earlier draft for numerous helpful comments. |
2306.08684 | An Extended Catalogue of galaxy morphology using Deep Learning in
Southern Photometric Local Universe Survey Data Release 3 | The morphological diversity of galaxies is a relevant probe of galaxy
evolution and cosmological structure formation. However, in large sky surveys,
even the morphological classification of galaxies into two classes, like
late-type (LT) and early-type (ET), still represents a significant challenge.
In this work we present a Deep Learning (DL) based morphological catalog built
from images obtained by the Southern Photometric Local Universe Survey (S-PLUS)
Data Release 3 (DR3). Our DL method achieves a precision rate of 98.5$\%$ in
accurately distinguishing between spiral, as part of the larger category of
late type (LT) galaxies, and elliptical, belonging to early type (ET) galaxies.
Additionally, we have implemented a secondary classifier that evaluates the
quality of each galaxy stamp, which allows us to select only high-quality images
when studying properties of galaxies on the basis of their DL morphology. From
our LT/ET catalog of galaxies, we recover the expected color--magnitude diagram
in which LT galaxies display bluer colors than ET ones. Furthermore, we also
investigate the clustering of galaxies based on their morphology, along with
their relationship to the surrounding environment. As a result, we deliver a
full morphological catalog with $164314$ objects complete up to $r_{petro}<18$,
covering $\sim 1800$ deg$^2$, including a significant area of the Southern
hemisphere that was not covered by previous morphology catalogues. | C. R. Bom, A. Cortesi, U. Ribeiro, L. O. Dias, K. Kelkar, A. V. Smith Castelli, L. Santana-Silva, V. Silva, T. S. Gonçalves, L. R. Abramo, E. V. R. Lima, F. Almeida-Fernandes, L. Espinosa, L. Li, M. L. Buzzo, C. Mendes de Oliveira, L. Sodré Jr., A. Alvarez-Candal, M. Grossi, E. Telles, S. Torres-Flores, S. V. Werner, A. Kanaan, T. Ribeiro, W. Schoenell | 2023-06-14T18:05:58Z | http://arxiv.org/abs/2306.08684v1 | An Extended Catalogue of galaxy morphology using Deep Learning in Southern Photometric Local Universe Survey Data Release 3
###### Abstract
The morphological diversity of galaxies is a relevant probe of galaxy evolution and cosmological structure formation. However, in large sky surveys, even the morphological classification of galaxies into two classes, like late-type (LT) and early-type (ET), still represents a significant challenge. In this work we present a Deep Learning (DL) based morphological catalog built from images obtained by the Southern Photometric Local Universe Survey (S-PLUS) Data Release 3 (DR3). Our DL method achieves a precision rate of 98.5% in accurately distinguishing between spiral, as part of the larger category of late type (LT) galaxies, and elliptical, belonging to early type (ET) galaxies. Additionally, we have implemented a secondary classifier that evaluates the quality of each galaxy stamp, which allows us to select only high-quality images when studying properties of galaxies on the basis of their DL morphology. From our LT/ET catalog of galaxies, we recover the expected color-magnitude diagram in which LT galaxies display bluer colors than ET ones. Furthermore, we also investigate the clustering of galaxies based on their morphology, along with their relationship to the surrounding environment. As a result, we deliver a full morphological catalog with 164314 objects complete up to \(r_{petro}<18\), covering \(\sim 1800\) deg\({}^{2}\), including a significant area of the Southern hemisphere that was not covered by previous morphology catalogues.
keywords: galaxies: fundamental parameters - galaxies: structure - techniques: image processing - catalogues
## 1 Introduction
Galaxy structure was one of the first properties of galaxies ever to be directly observed and studied. Although initially thought to be 'nebulae',
these objects soon proved to show distinct structural features like spiral arms or a smooth elliptical envelope (Zwicky, 1940; Vaucouleurs, 1959; Herschel, 1864; van den Bergh, 1998). Decades of studying galaxy shapes and structures thus resulted in several classification schemes, among which the 'Hubble tuning fork' system of classifying galaxies based on their observed visual characteristics has been widely used. On the basis of these characteristics, collectively known as galaxy 'morphologies', galaxies can broadly be divided into two main categories, namely early- and late-type galaxies. Late-type galaxies comprise spiral (S) and irregular/peculiar (Irr) galaxies. The spiral branch bifurcates into barred and un-barred systems. Early-type galaxies are composed of elliptical and lenticular galaxies. Elliptical galaxies display an increasing ellipticity, from round (E0) to flat (E7) systems. Lenticular galaxies lie at the apex of the Hubble tuning fork due to their hybrid structure, presenting a bulge and a disk, as spiral galaxies do, but without spiral arms.
Such morphological diversity often reflects the presence of different and composite stellar populations (Sanchez et al., 2007) and kinematics (Edelen, 1969; Wang et al., 2020). For example, S galaxies are characterized by the presence of a star-forming disk with blue spiral arms, which indicate rotationally supported stellar kinematics. E galaxies have, in general, smoother, featureless morphologies resulting from a lack of star formation. E galaxies present a range of kinematic profiles, with E0 galaxies being pressure-supported systems or slow rotators, while intermediate elliptical galaxies (E1/E7) present an increasing contribution of rotation to the total kinematic budget (Cappellari et al., 2011; Bernardi et al., 2019).
Furthermore, galaxy morphologies are found to be tightly correlated to the color bimodality observed in galaxy populations, thereby resulting in the existence of the younger blue star-forming galaxies with late-type (S) morphologies, and the older red passively evolving galaxies with early-type (E/S0) morphologies (Baldry et al., 2004). However, we are increasingly discovering that several sub-populations of galaxies do not neatly follow this dichotomy, i.e. red spirals and blue ellipticals exist (Bamford et al., 2009), and likely arise from a variety of physical processes, some of which may be environmentally driven (e.g., Vulcani et al., 2015).
Thus, the evolution of galaxy morphology has always been in tandem with the growth of galaxies' large-scale environment and their masses over cosmic time (Desai et al., 2007; Calvi et al., 2012; Crosset et al., 2014; Sarkar and Pandey, 2020; Wu, 2020). Indeed, using a dichotomous 'bulge/disk' definition for the Hubble-type morphologies, the redshift range \(1<z<2\) is found to be abundant with bulge+disk systems (e.g. Margalef-Bentabol et al., 2016), while massive galaxies in the local universe are majorly bulge-dominated (Buitrago et al., 2013).
Furthermore, higher redshift galaxies predominantly show peculiar/disturbed/irregular morphologies deviant from the classical morphologies observed at the Local Universe (e.g. Mortlock et al., 2013), suggesting that galaxies have undergone remarkable structural transformation over cosmic time (see also review by Conselice, 2014). Undeniably, galaxy morphology is a crucial evolutionary key in tracing and understanding galaxy evolution throughout cosmic times (e.g. Shao et al., 2015).
Ample opportunities are now being presented to investigate galaxy morphologies through multi-band sky surveys, giving us hundreds of thousands of galaxies while exploring large volumes of the sky at the same time (e.g., SDSS; York et al., 2000). The diverse methods employed by such sky surveys vary from human classification by specialists (Nair and Abraham, 2010; Ann et al., 2015), to citizen science (Lintott et al., 2008, 2010; Willett et al., 2013; Simmons et al., 2017), from numerically estimating morphology from galaxy properties (Spiekermann, 1992; Storrie-Lombardi et al., 1992; Walmsley et al., 2020) to novel techniques like Principal Component Analysis (PCA; Kelly and McKay, 2004; Wijesinghe et al., 2010), most of which heavily rely on image quality, either due to the resolution and/or sensitivity of the observations (e.g., Povic et al., 2015). However, migrating to automated methods of classifying galaxies is now necessary to deal with the huge data volumes resulting from current and upcoming surveys, e.g., the Legacy Survey of Space and Time (LSST; Tyson, 2002; Axelrod, 2006) by the Vera C. Rubin Observatory and sky surveys with the Nancy Grace Roman Space Telescope (Gehrels and Spergel, 2015).
Machine Learning (ML) is a powerful automated tool for extracting useful information from complex and varied imaging data sets, and for assisting in decision-making processes such as classification trees. The use of ML is thus not limited to galaxy morphologies (Hohill et al., 2023); it has also been applied to detect gravitational lenses and interacting galaxies, to classify quasars (Freeman et al., 2013; Shamir et al., 2013; Holincheck et al., 2016; Bom et al., 2017; Ostrovski et al., 2017; Ma et al., 2019; Knabel et al., 2020; Zaborowski et al., 2022), and more recently to detect outliers in astronomical images (Margalef-Bentabol et al., 2020). These applications highlight the wide-ranging capabilities of ML in astrophysical research, enabling researchers to explore and understand diverse phenomena in the cosmos. In the last decade, a sub-field of ML known as Deep Learning (DL) has emerged as the main technique for computer vision applications (Lu et al., 2017; Abdel-Hamid et al., 2014; Vecchiotti et al., 2018), music classification (Choi et al., 2017), and medical prognostics & diagnostics (Li et al., 2018; Hannun et al., 2019).
DL concerns the development of models that process complex, minimally reduced (or even raw) data from different sources and extract relevant features that can then be effectively linked to other properties of interest. In particular, Deep Neural Networks (DNNs) are high-performance data-driven models that are capable of exceeding humans in classification tasks (Metcalf et al., 2019). In astronomy, several recent works have exploited this to show that DNNs can indeed be successfully used to identify not only the morphological features in raw images with minimal human intervention (Glazebrook et al., 2017; Lanusse et al., 2018; Jacobs et al., 2019; Madireddy et al., 2019; Cheng et al., 2019; Petrillo et al., 2017, 2019, 2019, 2020; Farias et al., 2020; Hausen and Robertson, 2020; Bom et al., 2022), but also outliers in astronomical images (Margalef-Bentabol et al., 2020).
In this paper, we present the morphological classification of galaxies into LT and ET, using the new Southern Photometric Local Universe Survey DR3 (S-PLUS; Mendes de Oliveira et al., 2019). As a follow-up to Bom et al. (2021), hereafter BOM21, our main aim is to apply a high-performance DL algorithm to the imaging data, to obtain a novel and reliable morphological catalogue in the Southern Hemisphere, with a complementary coverage to other morphological catalogues. Furthermore, we also develop the first Deep Network to evaluate the quality of the stamps and clean spurious detections. Finally, we take advantage of the high precision photometric redshifts derived using the 12 bands in S-PLUS to explore the dependence of morphology on the environment and color, used as a proxy for the galaxy stellar population properties. We compare the classification presented in this work with Vega-Ferrero et al. (2021), and we discuss the implications arising from studying differently classified objects on the current understanding of galaxy morphological categories.
This paper is organized as follows: in section 2 we describe the data from iDR3 used in this work, the sample selection, and
auxiliary data used, such as the photometric redshift. In section 3, we present the Deep Learning method used for galaxy morphology classification, and the novelties in its implementation since the S-PLUS DR1 morphology paper (BOM21, Bom et al., 2021). In section 4, we present the results of the model, including deep learning performance. We also show the relation between environmental density and morphology, and we analyse the distribution of the different morphological classes in a (g-r) colour versus \(M_{r}\) absolute magnitude diagram. In section 5, we present our summary and discuss the results.
## 2 Data
### Southern Photometric Local Universe Survey
The Southern Photometric Local Universe Survey (S-PLUS) is performed with a robotic 86-cm telescope located at the Cerro Tololo Interamerican Observatory to cover \(\sim 9300\) deg\({}^{2}\) of the sky in 12 optical bands. S-PLUS uses a wide field optical camera with a field-of-view of 2 deg\({}^{2}\) and a plate scale of 0.55\({}^{\prime\prime}\) pixel\({}^{-1}\). The optical filters (the so-called Javalambre filter system, with 5 SDSS-like bands and 7 narrow bands; Cenarro et al., 2019) are quite unique for the southern hemisphere and are optimal for source classification, given their better definition of the spectral energy distribution of the observed objects than the usual 4- or 5-band systems. The narrow bands are designed to be centered on important stellar features, for instance, the OII line, Ca H+K, H\(\delta\) and H\(\alpha\). The survey reaches a typical limiting magnitude of r\(<\)21 AB mag for the broad bands and r\(<\)20 AB mag for the narrow bands (Mendes de Oliveira et al., 2019).
The third public data release of S-PLUS (DR3) covers \(\sim 2000\) deg\({}^{2}\) over the Southern Sky. It includes the areas covered in the previous Data Releases, such as Stripe 82. However, the images were reprocessed, with a new reduction and calibration of the data being done from DR2 to DR3, as described in Almeida-Fernandes et al. (2022). In figure 1 we present the area covered by DR3 in comparison with other surveys with available morphological catalogues. The area of Stripe 82 (at the equator) overlaps with a number of surveys, in optical and other wavelengths, and it has been used as a benchmark for checking the data reduction and calibration procedures. Another important area covered by DR3 is the Hydra supercluster (the long vertical red rectangle at the far left of Figure 1).
#### 2.1.1 Sample selection
We use the full DR3 catalogue containing \(\sim 50\) million sources. During the DR1 morphological classification, we selected objects only by Petrosian magnitude in the \(r\) band (\(\rm r_{petro}\)) \(<17\) AB mag and probability of being a galaxy \(\rm prob_{gal}\geq 0.6\) (for further information see Nakazono et al., 2021). However, we had a visual inspection phase to remove undesired spurious detections (see BOM21). The current catalogue covers an area of 1800 deg\({}^{2}\), which makes visual inspection unfeasible on a reasonable time scale with limited human resources. Therefore, we define more stringent cuts and include four extra constraints compared to BOM21. Additionally, we added an automated selection phase by Neural Network, which is detailed in Section 3. Thus, we apply the following selection criteria to define our galaxy sample from the full catalogue of the S-PLUS DR3:
\[\rm r_{petro}<18\;\rm{AB}\;\rm{mag} \tag{1}\] \[\rm{prob_{gal}}\geq 0.7 \tag{2}\] \[0\leq\rm{photoflag_{r}}\leq 3 \tag{3}\] \[\rm{BrightStarFlag}=0 \tag{4}\] \[\rm{R_{Kron}}\geq 3 \tag{5}\] \[\rm{FWHM_{a}}\geq 1.5 \tag{6}\]
where \(\rm{photoflag_{r}}\) is a photometry quality flag from _SExtractor_ (Bertin & Arnouts, 1996), \(\rm{R_{Kron}}\) is the Kron radius, i.e. the first moment of the surface brightness light profile, and \(\rm{FWHM_{a}}\) is the full width at half maximum of the object divided by the median FWHM of all bright non-saturated stellar objects of the field. All those features are available and described in the S-PLUS catalogue. The probability of being a galaxy, \(\rm prob_{gal}\) (Nakazono et al., 2021), and the flag indicating the presence of a bright star nearby, BrightStarFlag, are listed in the 'star-galaxy-quasar' and 'masks' Value Added Catalogues (VAC; see SPLUS.cloud for further details 1). Specifically, \(0\leq\rm{photoflag_{r}}\leq 3\) ensures the goodness of the _SExtractor_ fit in most of the cases of interest. The \(\it{BrightStarFlag}\) parameter is very effective in removing bright stars, and allows us to remove the few stars which are erroneously assigned a probability higher than 0.7 of being galaxies by the star-galaxy-quasar classification (Nakazono et al., 2021). The conditions \(\rm{R_{Kron}}\geq 3\) and \(\rm{FWHM_{a}}\geq 1.5\) are included to select resolved objects (the average seeing FWHM is \(\approx 1.2\)).
Footnote 1: [https://splus.cloud/catalogtools](https://splus.cloud/catalogtools)
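The cuts of Equations (1)-(6) translate directly into a catalogue query. Below is a minimal sketch assuming the DR3 catalogue, already joined with the 'star-galaxy-quasar' and 'masks' VACs, is loaded into a pandas DataFrame; the file and column names are hypothetical stand-ins mirroring the notation above, and may differ from the actual VAC column names.

```python
import pandas as pd

# Hypothetical pre-joined catalogue file and column names (see Eqs. 1-6).
cat = pd.read_csv("splus_dr3_joined.csv")

sample = cat.query(
    "r_petro < 18 and prob_gal >= 0.7 "
    "and photoflag_r >= 0 and photoflag_r <= 3 "
    "and BrightStarFlag == 0 "
    "and R_Kron >= 3 and FWHM_a >= 1.5"
)
```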
Following these selection criteria, we obtained a final catalogue of 164314 objects, for which we created image stamps in the 12 bands, with a size of 200\(\times\)200 pixels2. The final catalogue is mostly composed of reliable stamps, i.e. stamps centered on a galaxy, complete up to \(r_{petro}<18\). Further improvements on the sample selection are described in Section 3.2.
Footnote 2: The image cutout tasks can be found in this GitHub repository: [https://github.com/lucatelli/splus-tools](https://github.com/lucatelli/splus-tools).
#### 2.1.2 Samples definition
The supervised Deep Learning (DL) assessment requires training on a sample of objects with known classification, i.e. a _labeled set_ (_Training/Validation and Test Set - I_), sharing, as much as possible, the same properties of the sample to which the algorithm will be applied in a second stage (_Blind Set - II_). In this section we describe the characteristics of the two samples, but we refer to Section 3 for more details on the DL algorithm and its performance.
We used the same objects and the same training, validation, and test scheme as in BOM21, which used the Galaxy Zoo 1 unbiased morphological classification into elliptical and spiral galaxies (Lintott et al., 2008; Bamford et al., 2009; Lintott et al., 2010) as true labels. Such a choice was possible since S-PLUS DR1 is included in S-PLUS DR3. It is important to note, though, that since the reduction pipeline was improved between the two data releases, new stamps were created using the new images, to ensure the homogeneity of the two data sets (I and II). Another relevant difference between the data from DR1 and DR3 is the new photometric calibration applied for S-PLUS DR3. This calibration consists of fitting synthetic stellar templates to well-known data
from other surveys, deriving precise zero-points and magnitudes that were tested on 170 Stripe 82 fields (see Almeida-Fernandes et al., 2022, for a detailed description of the method). We obtained the stamps for each object in the 12 bands from the DR3 data access, for both samples.
In total, there are 4232 objects in training sample \(I\), while set \(II\) is composed of 164314 objects. As presented in the top panel of Figure 2, the training sample, i.e. sample \(I\), is approximately complete only up to \(r_{petro}<17\)3. As described in Section 2.1.1, in this work we select objects up to \(r_{petro}<18\). The implications of this choice for the DL performance are discussed in Section 4. Both samples I and II show a similar distribution of \(r_{petro}\) for magnitudes \(<17\) (bottom panel of Figure 2).
Footnote 3: This magnitude limit is required in Galaxy Zoo 1 in order to perform the debiasing process, which requires spectroscopic redshifts, see Bamford et al. (2009) for more details
#### 2.1.3 Photometric redshifts
The S-PLUS DR3 photometric redshift catalogue uses a DL model based on a Bayesian Mixture Density Network architecture. This specific configuration allows single-point estimates while also providing probability distribution functions (PDFs) for each galaxy. This network is trained on 12-band photometry from S-PLUS, cross-matched with the unWISE (Wide-field Infrared Survey Explorer, Lang, 2014), GALEX (Galaxy Evolution Explorer, Niemack et al., 2009), and 2MASS (The Two Micron All Sky Survey, Skrutskie et al., 2006) catalogs (W1/W2, NUV/FUV, and J/H/K magnitudes, respectively). Spectroscopic redshift targets are compiled from various surveys, including SDSS DR16, 2dFGRS, 2dFLenS, 6dFGS, and others. A total of 262,521 objects are split between training/validation and an independent test set.
Due to its unique filter system with a set of broad- and narrow-band photometry, the current model is capable of providing accurate photometric redshifts, while also maintaining low bias and a negligible outlier fraction. In fact, within the magnitude range of interest of the present work, \(r_{petro}\in[14,18]\), the median normalized bias stands at \(\sim-0.0015\), the scatter is \(\sim 0.015\), and the outlier fraction is below 1%. The catalog not only includes single point estimates but also well-calibrated probability distribution functions, enabling users to evaluate the uncertainties associated with each estimate. Further information regarding the methodology and resulting findings can be found in Lima et al. (2022)4. Figure 3 shows the
Figure 1: S-PLUS DR3 footprint (used in this work) and the footprints of some recent galaxy morphology catalogues available in the literature.
Figure 2: Distribution of the r-band Petrosian magnitudes (\(\rm r_{petro}\)) for the whole sample, i.e. up to magnitude \(\rm r_{petro}\leq 18\) (_top_), and a normalised histogram of the magnitude distribution of \(I\) (Training/Validation set) and \(II\) (Blind set) up to \(\rm r_{petro}\leq 17\) (_bottom_), i.e. the limiting magnitude of \(I\), the training sample.
distribution of the photometric redshift for samples \(I\) and \(II\). Specifically, \(II\) is divided into the whole sample, with \(r_{petro}<18\), and a sub-sample with \(r_{petro}<17\), sharing the same magnitude limit as the training sample.
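The photo-z quality figures quoted above can be reproduced with a few lines of numpy, assuming the usual conventions for these statistics (median normalized bias, a normalized median absolute deviation for the scatter, and a \(|\Delta z|/(1+z_{spec})>0.15\) outlier cut); these exact conventions are our assumption, since the text does not spell them out.

```python
import numpy as np

def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Normalized bias, NMAD scatter, and outlier fraction for arrays of
    photometric and spectroscopic redshifts (assumed conventions)."""
    z_phot, z_spec = np.asarray(z_phot), np.asarray(z_spec)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    bias = np.median(dz)
    sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    outlier_fraction = np.mean(np.abs(dz) > outlier_cut)
    return bias, sigma_nmad, outlier_fraction
```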
## 3 Deep Learning Classification
### Training, Validation and Test sample
We split the cross-matched data between S-PLUS DR3 and Galaxy Zoo 1 Stripe 82, i.e., Dataset \(I\), into Training-Validation-Test sets. Dataset \(I\) contains unbiased classifications only (Lintott et al., 2008; Bamford et al., 2009; Lintott et al., 2010). The true labels are defined by an 80 percent threshold on the probability of belonging to a given morphological class, and are distributed as 29 percent early-type galaxies (ETG) and 71 percent late-type galaxies (LTG). This distribution reflects the proportion between the two classes in the local Universe (\(0<z<0.2\)), as reported by Lintott et al. (2010).
We split the DR3-Training dataset into 7 folds. These folds are subsamples of the training set used to perform a cross-validation procedure (Moreno-Torres et al., 2012). We have evaluated other choices for the number of folds. However, with more folds the validation set is smaller and the validation loss starts to be more unstable, and we found a good trade-off with 7 folds. Thus, as shown in figure 4, we define 7 different training and validation sets, each containing \(\sim 85\%\) and \(\sim 15\%\) of the data, respectively. This separation is made so that there is no overlap between the validation samples of different folds. Additionally, this method guarantees that each object will be used at least once in the test set. We use 599 objects as a test set for performance evaluation; these are not used for training. As in BOM21, the training set based on debiased GZ1 contains 71% LTGs and 29% ETGs, and is thus an imbalanced dataset. Therefore, in order to prevent our model from being biased towards the most abundant class, we adopt the same data treatment scheme presented in BOM21, applying weights to each class. For a set of \(N\) objects in the training set, if the number of objects in class \(\alpha\) is \(N_{\alpha}\), we define the weights as:
\[w_{\alpha}=\frac{N}{mN_{\alpha}}, \tag{7}\]
where \(m\) is the total number of classes. This is a standard procedure in the ML field 5. The weights defined in Equation 7 are then applied in the objective (or loss) function minimized during the training phase. This procedure enables each of the classes to have the same impact on the loss function.
Footnote 5: see, e.g., [https://www.tensorflow.org/tutorials/structured_data/imbalanced_data](https://www.tensorflow.org/tutorials/structured_data/imbalanced_data)
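Equation 7 maps directly onto the class-weight mechanism of common DL frameworks. A minimal sketch, assuming binary labels \(y\in\{0,1\}\) for ETG/LTG and a compiled Keras model named `model` (both assumptions of the sketch):

```python
import numpy as np

def class_weights(y):
    """Weights w_alpha = N / (m * N_alpha) from Equation 7."""
    classes, counts = np.unique(y, return_counts=True)
    n, m = len(y), len(classes)
    return {int(c): n / (m * k) for c, k in zip(classes, counts)}

# For the 71%/29% LTG/ETG split, this up-weights the ETG class:
# weights = class_weights(y_train)
# model.fit(x_train, y_train, class_weight=weights, epochs=20)
```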
### Non reliable Stamps
In the DR1 catalog from BOM21, all the stamps were visually inspected prior to the analysis with the DL algorithm, aiming to prevent biases in the classification process caused by spurious objects. This is no longer feasible due to the size of the current and future S-PLUS data releases. Therefore, in this study, in order to generate a more robust catalog, we implemented a new DL model to separate _reliable_ from _non-reliable_ stamps. To this end, we use stamps that were excluded as _non-reliable_ from the DR1 classification as a training sample for this new DL model. This approach is advantageous to avoid spurious classifications, such as faint galaxies in the same field as nearby saturated stars, whose spikes can affect the accuracy of the magnitude estimation of those galaxies. Given the considerable extent of the dataset used in this study, other undesirable objects might include artifacts and problematic stamps in general. Therefore, it is essential to have a robust method for distinguishing good images from low-quality images to use as input for the main ETG/LTG DL model.
### Deep Learning Model
Following a strategy similar to that in BOM21, in this work we also made use of EfficientNet algorithms (Tan & Le, 2019), which are part of the Convolutional Neural Network (CNN) family of models, well known for their high performance on visual pattern recognition problems in standard image datasets such as ImageNet (Deng et al., 2009). This kind of network is based on an initial model similar to a MobileNet (MnasNet; Tan et al., 2019) and can also be scaled, by parametrizing the number of layers, to gain performance from a more complex network while constraining the number of floating-point operations per second (FLOPS). Each parameter choice thus defines a model, and the scaling rule defines a family of models. Additionally, this kind of model can be easily adapted to classify datasets with different resolutions (Bom et al., 2022). In this contribution we made use of a similar model based on EfficientNet B2, first described in Tan & Le (2019), with the minor adaptations detailed in BOM21. For a diagram presenting all the layers in this model please refer to figure 5 (b) and (c) of Bom et al. (2022).
Nonetheless, we implemented several innovations compared to the workflow described in BOM21. Firstly, we added a second EfficientNet B2 model to evaluate whether a stamp is reliable for morphological classification. The main goal of this NN is to identify spurious detections, such as crowded stamps where the central galaxy in the stamp is visually indistinguishable, stamps saturated by close bright stars, and galaxies that are not completely contained in the stamps. We explored how this non-reliable stamp model would be best defined in terms of inputs. After initial tests following the BOM21 approach, we used all 12 bands as inputs, in contrast to the ETG/LTG model, which was found to be best defined, in terms of performance and stability of results, by using the \(g,r,i\) bands only.
Figure 3: Distribution of the photometric redshift for the training and blind samples. On top, the distribution for the blind sample up to \(r_{petro}<18\). On the bottom, the normalized distribution for both blind and training samples up to \(r_{petro}<17\).
Although this choice is based on empirical results adopting the same metrics presented in BOM21, the main difference here is likely due to the nature of the patterns we are trying to characterize in the Reliable Stamp model. By visually inspecting the stamps, we find that some of the spurious detections present large variability of shapes across different bands compared to reliable stamps, and thus are likely to be more easily distinguishable by using more bands. For a full discussion of the band choice for finding ETG/LTG please refer to BOM21. A relevant difference with respect to the main ETG/LTG model developed for S-PLUS DR1 is that the probabilities assigned to a galaxy of being spiral or elliptical are no longer complementary, meaning that the sum of such probabilities is not equal to one, opening space for interesting findings like the ones discussed in Section 4.3, as well as the possibility of flagging objects that do not fall into either category. This was implemented by changing the neural network activation function in the last layer from a _softmax_ to a _sigmoid_. In figure 5 we present a scheme of both DL models, detailing the input bands and also presenting an example of a given stamp flowing through some of the network convolutional filters.
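A minimal sketch of the last-layer change, using the stock Keras EfficientNetB2 as a stand-in for the adapted architecture of BOM21 (the actual model includes further adaptations not shown here): two independent sigmoid outputs replace a two-class softmax, so P(ETG) and P(LTG) need not sum to one.

```python
import tensorflow as tf

# 3-channel (g, r, i) stamps for the ETG/LTG model; the reliable-stamp
# model would take 12-channel inputs instead.
backbone = tf.keras.applications.EfficientNetB2(
    include_top=False, weights=None, input_shape=(200, 200, 3), pooling="avg"
)
# Two sigmoid units: independent probabilities for ETG and LTG.
head = tf.keras.layers.Dense(2, activation="sigmoid")(backbone.output)
model = tf.keras.Model(backbone.input, head)
# Binary cross-entropy treats each output independently; "adam" is used
# here as a stand-in for the RAdam optimizer described in Section 4.1.
model.compile(optimizer="adam", loss="binary_crossentropy")
```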
## 4 Results
### Training
The training process was performed with a Rectified Adam (RADAM; Liu et al., 2019) optimizer, and the loss function is a traditional cross-entropy (Goodfellow et al., 2016). In Figure 6 we show the loss and accuracy achieved in the training procedure considering all 7 folds. The darkest line in the center corresponds to the mean value of those quantities for each epoch, and the shaded area corresponds to the standard deviation between folds. In the top of Figure 6 we present the results for the ETG/LTG model using 3 broad bands as input, similar to BOM21. The training converges fast, around the third to fifth epoch, with high accuracy (\(\sim 0.9\)). The additional degree of freedom added compared to BOM21, i.e., the probabilities of being LTG or ETG set to be independent, does not seem to affect the performance significantly. Considering the error bars, we did not find significant overfitting over the entire range. However, towards the end of the training, around the 15th epoch, the figure suggests the beginning of slight overfitting. Furthermore, judging by the loss function, the reliable/non-reliable model, which uses the 12-band set as input, presents a more unstable behavior: the convergence is slower, around epoch 15. The validation presents some spikes that might be related to a regularization method present in the network. We also notice a tendency of overfitting from epoch \(\sim 19\) onwards. The validation accuracy does not reach 0.9 consistently. However, it is worth noticing that, differently from the 12-band ETG/LTG model presented in BOM21, the 12-band model for reliable/non-reliable stamps has significantly smaller error bars, suggesting that the model is robust, although its overall performance as a classifier is expected to be lower than that of the ETG/LTG model.
\begin{table}
\begin{tabular}{l c c l} \hline \hline Sample & Subsample & Number of objects & Description \\ \hline I & _DR3_-Training & 4192 & ETG and LTG galaxies split between training and validation. \\ I & _DR3_-Test & 599 & ETG and LTG galaxies for performance test. \\ II & _DR3_-Blind & 46763 & galaxies for blind classification with \(r_{petro}\leq 17\). \\ II & _DR3_-Extended & 161635 & galaxies with \(r_{petro}\leq 18\) for blind classification. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Description of the samples used in this work.
Figure 4: In this figure we show the k-fold cross-validation method applied to the training of the LTG/ETG model. Each fold is separated into training, validation, and test. The process is set up so that there is no overlap between the validation sets of different folds. Additionally, the training set for each fold is slightly different, which reduces a possible bias concerning the selection of the objects that compose it. Considering that the technique will define an architecture with certain weights for each fold, the metrics in each training stage can be used to evaluate which fold has the best training set configuration. The numbers at the bottom indicate the size of the training, validation, and test sets in each fold.
### Performance
As outlined in the previous section, the cross-validation approach establishes a unique network configuration for each fold. Therefore, we may assess our model's performance on every individual fold. For both the ETG/LTG and the Reliable Stamp models, we applied these individual folds to the test subsample.
#### 4.2.1 ETG/LTG Model
We evaluate the performance of our model through the trade-off between precision and recall. For a given threshold \(t\), an object is classified as ETG if its predicted probability is higher than \(t\); precision (or purity) measures how many correct predictions were made out of all positive predictions, and recall (or completeness) measures how many true positives were found among all the actual positives. In the top of Figure 7 we present the median precision-recall curve for \(t\) in the range \([0,1[\) for all folds and its respective standard deviation. We then define the best threshold \(t_{B}\) as the \(t\) whose point on the precision-recall curve is closest to the point \((1,1)\), which would represent a perfect classifier, i.e. with both purity and completeness equal to \(1\). This threshold \(t_{B}\) is set to \(\sim 0.60\). To understand the performance outcome with this choice we made use of the confusion matrix at the bottom of Figure 7. This performance assessment shows the number of correct and incorrect predictions, grouped by each class, and therefore presents the model performance in a classification task by revealing where it gets confused and makes mistakes. The model demonstrates correct classifications for over \(\sim 94\%\) of both ETG and LTG objects. It is worth mentioning that for this specific performance assessment, we had to assign each galaxy to one category unambiguously. Hence, for this specific analysis we did not take advantage of the fact that the model assigns independent probabilities of ETG/LTG. In the top of Figure 8 we present the probability distribution of the DR3-Blind set. We notice that the distributions for the ETG and LTG classifications are well separated, with strong peaks around \(\sim 0\) and \(\sim 1.0\), as one should expect for a two-class classification.
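The choice of \(t_{B}\) (the threshold whose precision-recall point lies closest to the ideal \((1,1)\)) is easy to reproduce; a minimal sketch, assuming arrays of true binary labels and predicted ETG probabilities:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_threshold(y_true, p_pred):
    """Threshold t_B closest to the perfect-classifier point (1, 1)."""
    precision, recall, thresholds = precision_recall_curve(y_true, p_pred)
    # precision/recall have one more entry than thresholds; drop the last.
    dist = np.hypot(1.0 - precision[:-1], 1.0 - recall[:-1])
    return thresholds[np.argmin(dist)]
```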
We present in Figure 9 a comparison of the distribution of the photometric redshifts (see Section 2.1.3) of early (orange/red) and late (cyan/blue) type galaxies for the DR3-Blind/DR3-Training sample, respectively. It is noticeable that, against expectations, the number of early type galaxies seems to be larger at higher redshifts than the number of late type galaxies, both for the training and blind data-sets. In fact, Buitrago et al. (2013) do not find any strong evolution in the fraction or density of spheroid and disk galaxies for \(M_{*}>10^{11}\,M_{\odot}\) between \(0<z<0.2\). We visually inspected galaxies classified as early type at \(z>0.15\) to verify whether the classification is affected by the lack of resolution of the spiral arms. We conclude that the classification is overall correct (see Section 5.1, where we compare with the morphological classification performed by Cheng et al. (2020) and Vega-Ferrero et al. (2021)) and that the lack of spiral galaxies at high redshift is related to the pre-selection of the stamps, since high-\(z\) spirals tend to have low surface brightness. The training sample used in this work is taken from the Galaxy Zoo project (Lintott et al., 2008; Bamford et al., 2009; Lintott et al., 2010), which provides a debiased morphological classification (Bamford et al., 2009) for galaxies in a redshift range between \(z\geq 0.03\) and \(z<0.88\), where the lower limit is dictated by the incompleteness at low redshift, while the upper limit is caused by the loss of objects fainter than \(M_{r}=-20.25\).
#### 4.2.2 Reliable Stamp Classification
We used the same analysis scheme to analyze the Reliable Stamp model. The bottom part of Figure 8 presents the Reliable stamp probability distribution. By comparing the probability distribution of both models we noticed that the reliable model presents wider peaks, which suggests the distribution is not as well separated as in
Figure 5: Workflow of the stamps taken from S-PLUS data as they pass through the model. Both architectures work in the same way, with the difference that the first one uses only the \(g\), \(r\) and \(i\) bands available in S-PLUS as the network input. The LTG/ETG model, as well as the Reliable Stamp model, consists of convolutional layers at the beginning, responsible for compressing and recognizing patterns in the stamp; at the end, all that information passes through a dense layer that compacts it into a feature vector containing 1408 entries, represented by the bar code in the figure. Both models perform binary classification, so one more dense layer is needed to calculate the probability of each class, given by a sigmoid activation function.
the ETG/LTG model. This conclusion is also indicated by the loss optimization as discussed in Section 3.3.
In figure 10 we show the confusion matrix for the best fold and the precision vs recall plot considering all folds. The overall shape of the precision-recall curve is similar to that of the ETG/LTG model; however, the total area under the curve of the Reliable Stamp model is smaller compared to the ETG/LTG model. The confusion matrix presents \(\sim 90\%\) true positives, which is also interesting since there is a vast variability in what constitutes a non-reliable stamp. Additionally, by making a visual assessment of the objects classified as non-reliable, we can find some interesting objects that we believe are worth investigating. We discuss this in more detail in Section 4.4.
### Early-Type and Late-Type Galaxies
Galaxies present a wide range of morphologies (e.g., Buta, 2011; van den Bergh, 1998), from almost spherical ellipticals to grand design spiral galaxies (Grosbol & Dottori, 2012), with the increasing importance of the disk component along the Hubble sequence. At the vertex of the Hubble tuning fork, lie the lenticular galaxies, which present bulge and disk components as spiral galaxies, but lack spiral arms and relevant star-forming regions. Moreover, the gallery of galaxy types also encompasses irregular galaxies. Elliptical and lenticular galaxies are classified as 'early-type', while spirals and irregulars are called 'late-type' galaxies (here ETG and LTG, respectively).
In a binary classification (early or late-type galaxies), though, we are forcing the galaxies into one of two classes, while the classification could be more gradual, reflecting the complexity of galaxy shapes, such as when using the Numerical Hubble types. To account for this, the network architecture in this work was slightly changed when compared to the one used in BOM21, in order to make the probabilities of ETG or LTG not complementary, i.e. not necessarily summing to one. In fact, these probabilities are generated independently in a way that a galaxy can have a high probability (higher than the DL threshold, see Figure 8) of being both ETG and LTG. Galaxies that have a high probability of being both ETG and LTG are designated here as \(Amb1\). This brings an interesting _ambiguity_ to the model that can be explored to make the classifica
Figure 6: Accuracy and loss in the training of the Late/Early-type model (_top_) and the Reliable Stamp model (_bottom_) as a function of epoch, considering all folds. In blue we present these metrics for the training set and in orange the metrics for the validation set. The line in the middle represents the mean value over all 7 folds used in the k-fold cross-validation method.
tion more gradual: a galaxy now can be classified as neither ETG nor LTG, and will be ascribed to class _Amb0_.
In our results, as shown in Figure 11, we can see that most of the galaxies that had a low probability of being an ETG or LTG (_Amb0_) were also classified as non-reliable stamps, while those with a high probability of being both ETG and LTG (_Amb1_) were also classified as reliable. We note here that the galaxies with a high probability of being non-reliable stamps and a low probability of being ETG or LTG are the highest in number (1107), while the majority of galaxies that have a high probability of being both ETG and LTG (160) are classified as reliable stamps.
Figure 12 shows examples of reliable stamps, as defined using the 12 S-PLUS images (see Section 3.2), of galaxies belonging to the four different classes (ETG, LTG, _Amb\({}_{0}\)_, _Amb\({}_{1}\)_), from the S-PLUS and Legacy surveys. The Legacy data are typically four magnitudes deeper than the S-PLUS images and reveal faint outer features, so they can be used to understand the effects of depth and resolution on the ability of the DL method to classify objects. In general, galaxies falling in the ETG class are ellipticals (left column, top and middle rows) or lenticulars (left column, bottom row). The LTG objects are either spiral or irregular galaxies (second column, first and middle rows), while the third row shows a disk-dominated lenticular galaxy. In Section 5.1 we compare the classification presented in this work with other works.
Galaxies are classified as \(Amb_{0}\) or \(Amb_{1}\) as the result of a combination of factors:
(i) faint/high-redshift spiral galaxies can be misclassified as early-type galaxies, due to the pixel resolution and survey depth, which is reflected in the difficulty of identifying the presence of spiral arms. In turn, they might present green dots of star formation, rendering them neither ETG nor LTG (see third column, middle panel of Figure 12);
(ii) clumpy star-forming galaxies could also be assigned to neither class, due to their un-smooth appearance and the absence of clear spiral patterns, see third column, top and bottom panels of Figure 12;

Figure 7: Performance on the _DR3_-Test sample for the ETG/LTG model. (Top) The Precision x Recall plot considering all folds. The purple line was made with the median value for every fold. (Bottom) The confusion matrix for the best fold.

Figure 8: Probability distributions for the classification of the blind set. On top, the distribution for the Late-type and Early-type classification. On the bottom, the distribution of the probability of being a reliable stamp. In both cases, the dashed line represents the threshold used for the classification itself.
(iii) bulge-dominated spirals (see last column, top and middle images) may have a high probability of being both ETG and LTG galaxies, due to the low surface brightness of their spiral arms, clearly visible in the Legacy data but close to the image noise in the S-PLUS data;
(iv) lenticular galaxies can also be found in the \(Amb_{1}\) class; in particular, lenticular galaxies with \(B/T\simeq 0.5\) are associated with both classes, due to their hybrid nature. These results will be further discussed in Section 5.3.
In the next section we show that some of the non-reliable stamps (NRS) are actually extraordinary objects.
### Extraordinary Non Reliable Stamps (NRS)
Figure 14 shows some examples of objects identified as NRS. Generally, they are objects near saturated stars or in crowded fields. In fact, even if we select the sample of objects to be analyzed by maximizing the probability of being galaxies (see Section 2.1.1), contaminants still appear in the sample, and the deep learning code does a great job in identifying spurious objects. The number of NRS is nearly constant with redshift, as shown in Figure 13, while the number of reliable stamps decreases with increasing redshift.
On the other hand, peculiar galaxies, especially those with clumpy star formation, or galaxies with a projected size larger than the stamp, might fall in the category of NRS, as shown in Figure 15. In this way, the deep learning method not only allows us to identify unwanted objects, but it also helps in finding peculiar objects of high interest/relevance.
### Morphology as a probe of galaxy evolution and large scale structure formation
Galaxies evolve through time via different mechanisms: major and minor mergers, secular evolution, harassment, stripping and strangulation (Gunn & Gott, 1972; Aragon-Salamanca, 2008; Quilis et al., 2000; Kronberger et al., 2008; Byrd & Valtonen, 1990; Bournaud et al., 2005). Many of these processes are environment-dependent, i.e., they can occur only in clusters of galaxies (strangulation), or they are more likely in the field or in groups (e.g., mergers). In general, it is now believed that minor mergers are more common than major mergers and that they are mainly responsible for the mass build-up in galaxies (Bournaud et al., 2007). Different evolutionary scenarios leave specific imprints on the galaxy morphology; e.g. major mergers tend to disrupt the stellar orbits, resulting in pressure-dominated systems characterized by an elliptical shape. On the other hand, secular processes, or environmentally driven mechanisms such as ram-pressure stripping, affect more the gaseous component, quenching the star formation. Moreover, a morphology-density relation has been established in the last decades (Dressler et al., 1997; Dressler, 1980; Cappellari et al., 2011; Buitrago et al., 2013), where ETGs inhabit the densest regions of the Universe, while spiral and irregular galaxies are more common in the field. Galaxy morphologies are thus a powerful probe of galaxy evolution as well as of structure formation, as we will discuss in the next subsections. Here we use only objects classified as reliable stamps with photo-\(z\) \(odds>0.4\) and \(r<17\) mag. The magnitudes have been corrected for Galactic extinction using the Cardelli, Clayton and Mathis (Cardelli et al., 1989) dust law.
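For illustration, the sample selection just described can be sketched as below. This is a minimal pandas sketch; the column names are hypothetical, and the r-band extinction coefficient \(R_{r}\) is an assumed, commonly quoted CCM-based value that may differ from the one adopted in this work:

```python
import pandas as pd

# Assumed coefficient for A_r = R_r * E(B-V); a commonly quoted CCM-based value,
# not necessarily the exact one used in the paper.
R_R = 2.751

def select_sample(cat: pd.DataFrame) -> pd.DataFrame:
    """Apply the cuts described in the text: reliable stamps, photo-z odds > 0.4,
    extinction-corrected r < 17 mag. Column names are hypothetical."""
    r_corr = cat["r_mag"] - R_R * cat["ebv"]  # Galactic extinction correction
    mask = cat["reliable_stamp"] & (cat["odds"] > 0.4) & (r_corr < 17.0)
    out = cat.loc[mask].copy()
    out["r_corr"] = r_corr[mask]
    return out
```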
#### 4.5.1 Correlation between morphology and colors
Figure 16 shows the colour-magnitude diagram ((g-r) colour vs r-band absolute magnitude), colour coded according to the galaxies' morphologies. The left panel presents the dual classification, where elliptical galaxies are shown in orange and spiral galaxies in light blue, while in the right panel the colour scale shows the probability of being a spiral galaxy. It can be seen that elliptical (quiescent) galaxies inhabit the red sequence while spiral (star-forming) galaxies are mostly found in the 'blue cloud', as expected according to their dominant stellar populations (see, for example, Wong et al., 2012; Lima-Dias et al., 2021; Khanday et al., 2022). Interestingly, in the right panel it is possible to see that the probability of being a spiral increases nearly from 0 to 1, going from the red sequence to the blue cloud in a continuous manner. The intermediate region, where the probabilities range around 0.5, is known as the Green Valley (see, for example, Zibetti et al., 2007) and has been largely studied as a region of transition, where late-type galaxies could be quenching their star formation, turning into early-type galaxies, or early-type galaxies could be 'rejuvenating', due to some interaction with other galaxies or accretion of gas (Smethurst et al., 2015). The morphology seems to reflect this transformation, since the quenching is 'removing' the spiral arms, decreasing the probability of being a spiral galaxy. On the other side, a spark of star formation in an early-type galaxy could create clumps, or star-forming regions, that would increase the probability of being a spiral galaxy.

Figure 9: r-band apparent magnitude distribution in different photometric redshift bins for the blind sample (filled lines) and for the training sample (dashed lines), for Elliptical (yellow/red) and Spiral (blue/cyan) galaxies. Note that in the first magnitude bin the training sample is not present (\(z\leq 0.02\)), since we used Galaxy Zoo data for the training, which are missing in this low-magnitude bin (Lintott et al., 2008, 2010; Bamford et al., 2009).
#### 4.5.2 Morphology-density relation
There is a connection between the environment a galaxy lives in and its morphology (Dressler, 1980), but both the galaxy stellar population and environment evolve with time. While the galaxy stellar population is related to the galaxy mass (more massive galaxies are more metal-rich at a given time; Leaman et al. 2013) and gas content, the morphology is more related to the environment (spiral galaxies tend to live in low-density environments, ellipticals in the centres of galaxy clusters). Yet, a merger, whose probability is dictated by the environment a galaxy lives in, would affect both the galaxy mass and stellar population. Note that 20% of high-mass (\(M_{*}\geq 10^{5.5}\ M_{\odot}\)) galaxies have experienced a major merger since \(z\simeq 6\) (Ventou et al., 2017), and minor mergers, accretions and fly-bys are very common in the history of the Universe.
We use a K-Nearest Neighbour method, with \(k=4,5,10\) (Baldry et al., 2010), to recover the projected density of the environment a galaxy lives in, where \(k=4,5\) refers to local environments, while \(k=10\) is related to larger scales. Specifically, the density (\(\Sigma_{k}\)) at any given \(k\) is:
\[\Sigma_{k}=\frac{k}{\pi D_{k}^{2}}\frac{1}{\psi(D)}, \tag{8}\]
where \(k\) is the number of nearest neighbours considered, \(D_{k}\) is the comoving distance to the \(k\)-th nearest neighbour and \(\psi(D)\) is a selection function to correct for the Malmquist bias (e.g. Santana-Silva et al., 2020). Figure 17 presents the number density of late-type and early-type galaxies (both with a class probability \(>0.9\)) for increasing \(k=4\) density measures. The left panel of Figure 17 presents all galaxies with magnitude \(r\leq 17\), while the right panel splits them into magnitude bins (represented by different line styles, see the Figure legend). Early-type galaxies are identified by orange/red lines, while late-type galaxies are shown as cyan/blue lines. The morphological classification provided in this work clearly reflects the morphology-density relation, with early-type galaxies occupying the densest regions and late-type galaxies being the dominant population in the field/low-density regions, see the left panel of Figure 17. When looking at the magnitude dependence of the morphology-density relation, we see that it holds for the different magnitude bins: the number density of early-type galaxies increases with increasing density, while the opposite trend is found for late-type galaxies. Finally, we observe that the crossover density is lower for more luminous objects, indicating a correlation between lower densities and higher luminosities.

Figure 10: Performance on the \(DR3\)-R-Test concerning the Reliable/Non-Reliable model. (Top) The Precision x Recall plot considering all folds. The purple line was made with the mean value for every fold. (Bottom) The confusion matrix for the best fold.

Figure 11: Normalized fraction of galaxies that belong to class \(Amb_{0}\) (left), i.e. galaxies that have a low probability of being ETG or LTG, and of galaxies belonging to the class \(Amb_{1}\) (right), i.e. galaxies with a high probability of being both ETG and LTG, classified as non-reliable stamps (blue) and reliable stamps (orange).
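For illustration, Equation (8) can be evaluated with a standard k-d tree. The sketch below assumes 2D projected comoving positions and leaves the selection function \(\psi\) as a user-supplied callable; it is a minimal sketch, not the exact implementation used here:

```python
import numpy as np
from scipy.spatial import cKDTree

def sigma_k(positions: np.ndarray, k: int = 4, psi=None) -> np.ndarray:
    """Projected k-NN density, Sigma_k = k / (pi * D_k^2) * 1/psi(D).

    positions : (N, 2) projected comoving coordinates of the galaxies.
    psi       : optional selection function correcting for the Malmquist bias.
    """
    tree = cKDTree(positions)
    # Query k+1 neighbours because the nearest neighbour is the galaxy itself.
    dist, _ = tree.query(positions, k=k + 1)
    d_k = dist[:, -1]                    # distance to the k-th true neighbour
    dens = k / (np.pi * d_k**2)
    if psi is not None:
        dens /= psi(d_k)
    return dens
```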
#### 4.5.3 Large scale structure as traced by galaxy morphology
Galaxies trace the large-scale structure of the Universe, yet they account for only \(\simeq 20\%\) of the total matter (e.g. Planck Collaboration et al., 2020), and their physics is affected by non-gravitational mechanisms such as baryonic effects, radiation pressure, feedback, etc. A simple but powerful tool that can bridge the gap between galaxies and the DM distribution is the Halo Model (Cooray & Sheth, 2002). One of the consequences of that description is that galaxy abundances and their properties (such as stellar mass, colour, morphology and star formation rate) can be traced back to the DM halos and sub-halos, as well as to their properties (such as mass, age, concentration and spin; Wechsler & Tinker, 2018). From a large-scale structure perspective, the correlation function of the DM halos is related to the correlation function of the DM particles by the halo abundance, bias and halo density profile. To a good approximation, more massive halos are less abundant and more highly biased with respect to the DM field, but other halo properties such as concentration, age and even spin (angular momentum) also play an important role (Montero-Dorta et al., 2020). Galaxies that populate halos and their sub-halos inherit those properties, including their bias - but they can also bring additional information that is not manifested in the halo properties, and which is indicative of baryonic mechanisms such as ram-pressure stripping. Galaxy morphology is one of the additional indicators that can help distinguish between different types of halos and their environments, leading to a more accurate and precise description of the correlation functions of those tracers.
Figure 18 shows the redshift distribution, up to \(z\simeq 0.08\) (Bamford et al., 2009), for galaxies colour coded according to their probability of being late type, in order to characterize how morphologies evolve over time. Early-type galaxies, plotted with larger symbols, are generally more clustered. The presence of galaxy clusters is emphasized by the Fingers of God effect, caused by the peculiar velocities of galaxies that deviate from the Hubble flow.
Figure 12: In the first two panels we show some examples of stamps that were classified as Early-Type (first panel) or Late-Type (second panel). In the last two panels we show examples of stamps that fall in the ambiguous classification. \(Amb_{0}\) are those stamps that had a low probability of being Early-Type and also of being Late-Type galaxies according to the defined threshold (\(\simeq 0.6\)). On the other hand, \(Amb_{1}\) are those objects to which the model gave a high probability of belonging to both classes. Each panel is made with the same objects taken, respectively, from the S-PLUS and Legacy surveys.
Figure 13: Number of Non-Reliable and Reliable stamps in increasing redshift bins.
## 5 Discussion and Concluding Remarks
### Comparison to other surveys
Vega-Ferrero et al. (2021) used DES galaxies with reliable morphological classifications to assess whether CNNs are able to detect features that human eyes do not. To do that, they simulated the appearance that well morphologically classified DES galaxies would display at high redshifts, making them fainter and smaller. They find that, although some of the features that distinguish ETGs from LTGs vanish after the simulation, the models are still able to correctly classify galaxies with an accuracy greater than 97%. The main conclusion of that work is that it is possible to correctly classify galaxies from faint and small-sized images using CNN models, provided the following conditions are satisfied: a final apparent magnitude of \(m_{r}(z)<22.5\), and a final image size larger than 32\(\times\)32 pixels. DES data (DES DR1, Abbott et al. (2018)) have a median co-added catalogue depth of \(m_{r}=24.08\) at signal-to-noise ratio S/N = 10, with a pixel scale of 0.2636 "/pixel.
In comparison, S-PLUS has a scale of 0.55 "/pixel and a depth in the r-band of \(m_{r}=19.6\) at signal-to-noise ratio S/N = 10 (Almeida-Fernandes et al., 2022), resulting in a lower resolution when compared with DES data, as is clear from Figure 12.
S-PLUS DR3 and DES DR1 overlap, see Figure 1, resulting in a combined catalogue from Vega-Ferrero et al. (2021) and this work of 36183 galaxies, brighter than \(m_{r}<18.0\) and with a mean redshift of \(z_{ul}=0.11847\). Comparing the classification presented in this work, considering the depth of the DES images, allows us to investigate the goodness of the classification and the advantages of combining the results of the two DL codes, i.e. studying the reliable early- and late-type classification. In Figure 19, top panel, we present a histogram of the probability of being a late-type galaxy obtained in this work, for galaxies classified as 'robust spirals' (\(FLAG_{LTG}==5\)) in Vega-Ferrero et al. (2021). The dashed line indicates the threshold used in this work; in other words, every galaxy that stands on the right side of this line is classified as a spiral in both works. The blue histogram shows the distribution of the probability of being a LTG for all 'robust spiral' galaxies and presents the largest discrepancy with Vega-Ferrero et al. (2021). The orange histogram shows the probability of being a LTG for all 'robust spiral' stamps classified as reliable according to the second DL model, see Section 3.2. The green and red histograms represent all 'robust spiral' galaxies brighter than \(r<17\) mag, and among them all the ones classified as reliable stamps, respectively. The middle panel shows the same comparison for early-type galaxies. There is a non-negligible fraction of galaxies with zero probability of being early-type galaxies in this work, but classified as elliptical in Vega-Ferrero et al. (2021). In the bottom panel, we reproduce the same plot, now including only objects with \(b/a>0.7\). This choice drastically decreases the number of discrepant classifications. Similar results are obtained when performing the same comparison with Cheng et al. (2023).
In Figure 20 we present the fraction of misclassified objects in different magnitude bins, and in the figures available in the appendix\({}^{6}\) it is possible to find examples of objects classified differently in the two papers. It is noticeable that many of the galaxies classified as early types in this work and late types in Vega-Ferrero et al. (2021) are multiple-object images, low-surface-brightness galaxies, bulge-dominated spiral galaxies, or faint/compact spiral galaxies, see Section 4.3. On the other hand, objects classified as late types in S-PLUS and early types in Vega-Ferrero et al. (2021) are often disk-dominated (edge-on) lenticular galaxies or merger/disturbed systems.
Footnote 6: The appendix is presented as an online supplementary material.
In conclusion, the classification presented in this work is in agreement with Vega-Ferrero et al. (2021) with an average confidence level of \(\approx 92\%\) for ETG up to \(r<18\), and of \(\approx 96\%\) for LTG up to \(r<17\). The mismatch for ETG increases to 20% for objects fainter than \(r\simeq 17\), as a result of the fading of the spiral arms in the S-PLUS images. On the other hand, the mismatch for LTG is mostly caused by the association of disk-dominated lenticular galaxies or edge-on red spirals (Sodre et al., 2013) to this class in this work, while there is a perfect agreement between the two classifications when considering only objects with \(q=B/A>0.7\) and \(r<14.5\), see the blue line in Figure 20 and the histogram presented in Appendix A. Implications of these results are further discussed in Section 5.3. Moreover, a visual inspection of the differently classified objects, see the panel figures in Appendix A, reveals interesting objects resulting from the different structure of the DL codes and from the image depth and resolution, highlighting the importance of a diverse, open and collaborative scientific environment.

Figure 14: Examples of non-reliable stamps, from S-PLUS data (top) and Legacy data (bottom). In the last column an artifact is visible, the third column presents a crowded field, in the second column we find a saturated star compromising the galaxy image and, finally, in the first column an irregular galaxy.
### Combining morphology and precise photometric redshifts with narrow-band surveys: where do galaxies live?
The relation between galaxies' morphology, their mass and stellar population properties, and the environment they live in has been studied in great detail and in a wealth of works (Paulino-Afonso et al., 2019; Coccato et al., 2020), as has its redshift evolution (Gonzalez Delgado et al., 2015). Recent works show that bulge growth, measured as the bulge-over-total light ratio, is directly connected with the quenching of star formation (Paulino-Afonso et al., 2019; Dimauro et al., 2022; Werner et al., 2022). Group pre-processing is also found to play an important role in galaxies' star formation quenching and morphological evolution (Gonzalez Delgado et al., 2022; Brambila et al., 2023).
The S-PLUS photometric system allows retrieving reliable photometric redshifts with a scatter of 0.023 (Lima et al., 2022), and recovering reliable density estimates (Lopes da Silva et al. in prep.). In Figure 21 we show the (g-r) colour vs the \(\Sigma_{4}\) density measure for reliable stamps with \(Prob_{\rm Spiral}>0.9\) (blue open dots), with \(Prob_{\rm Elliptical}>0.9\) (red open dots), and galaxies classified as edge-on by Vega-Ferrero et al. (2021) as filled yellow circles. The galaxy environment is more correlated with its morphology than with its colour, see Figure 17. ETGs have \((g-r)>0.7\). The bottom-left panel shows that the majority of the late-type galaxies with \((g-r)>0.7\) are classified as edge-on in Vega-Ferrero et al. (2021). As shown in Figure 12, disk-dominated lenticular galaxies can be associated with the late-type class, explaining the red colour of these disky late-type galaxies. Moreover, edge-on star-forming spiral galaxies might suffer reddening, due to the presence of dust clouds surrounding the disk (Bamford et al., 2009; Sodre et al., 2013).

Figure 15: Examples of extraordinary non-reliable stamps. Large objects, whose projected radius is larger than the image stamp; star-forming galaxies, where more than one clump can be identified as an independent object during the catalogue extraction (Almeida-Fernandes et al., 2022); irregular galaxies such as NGC 4038/NGC 4039; and dense regions of stars, possibly Galactic clusters, can be found among the non-reliable stamps. The probability of being a reliable stamp is given in the top left of each panel.
Figure 21 points out that ETGs have red colours, \((g-r)>0.7\), and are more common in denser environments. If colour is a proxy for the galaxy stellar population, these findings would suggest that both the quenching of the stellar population and the environment are connected with the early-type morphology. On the other hand, late-type galaxies span a range of colours and their number seems to be more connected to the environment they live in, see Figure 17.
### S0 galaxy formation scenarios and a physically motivated morphological classification
Lenticular galaxies are characterized by a hybrid morphology, with a bulge and a disk like spiral galaxies, but without spiral arms, like elliptical galaxies. van den Bergh (1990) suggests that the 'S0 classification type comprises a number of physically quite distinct types of objects that exhibit only superficial morphological similarities'. Recent studies based on observations (Fraser-McKelvie et al., 2018; Coccato et al., 2020) and on simulations (Deeley et al., 2021) showed that this class of objects is, indeed, composed of two or more subgroups, formed via different physical mechanisms that lead to a similar morphology. Specifically, stripped spiral galaxies could be the progenitors of disk-dominated lenticular galaxies, if the gas and arms of the spiral galaxies were removed by interactions with the cluster environment, by harassment in a group environment, or generally by perturbations in all environments (Cortesi et al., 2013; Jaffe et al., 2015; Johnston et al., 2021). Another group of lenticular galaxies could be the result of major or minor mergers and multiple accretions (Tapia et al., 2017). Others, low-mass S0s, could be pristine galaxies, formed at redshift \(z\simeq 2\) from mergers of the galaxy stellar/gaseous clumps (Saha & Cortesi, 2018), or the result of secular evolution (Mishra et al., 2018).
The discrepancy between the probability of being a LTG predicted in this work and in Vega-Ferrero et al. (2021) decreases when only objects with \(B/A>0.7\) are considered, see Figure 19. Moreover, there is an extended population of late-type galaxies with red colours and a high probability of being edge-on systems (Vega-Ferrero et al., 2021), see Figure 21.
Specifically, there are 126 objects classified as spiral galaxies in this work and as robust ellipticals in Vega-Ferrero et al. (2021), with \(B/A\leq 0.7\) and \(r_{\rm Petro}<17\) mag. Upon visual inspection they are all consistent with being disk-dominated S0 galaxies (Coccato et al., 2020), or edge-on reddened spiral galaxies (Bamford et al., 2009; Sodre et al., 2013), and their average colour is \((g-r)\simeq 0.85\).
On the other hand, as discussed in Section 4.3, a fraction of the galaxies classified as early-type galaxies in this work is composed of bulge-dominated lenticular galaxies.
The multiple origins of the S0-like isophotal profile seem to be captured by the DL algorithm used in this work. This topic will be further studied in a follow-up work, where the DL classification will be correlated with the galaxies' bulge-to-total light profiles.
### Summary
In this study, we employ a Deep Learning architecture among the top-ranking techniques for image classification to separate ETGs from LTGs, while also introducing a model to predict which stamps contain reliable information to be classified. Our method presents several innovations compared to the BOM21 model, including the possibility that objects are classified neither as ETG nor as LTG.
Furthermore, we also make use of the precise photometric redshifts derived from the 12 bands present in S-PLUS. We recover the colour diagrams for the morphological types and examine the local environment and density of ETGs/LTGs. Additionally, we assess the large-scale structure traced by morphology. As a result, we provide a novel Value Added Catalogue (VAC) of galaxy morphologies over the full footprint of S-PLUS DR3, which includes areas never covered by other galaxy morphology catalogues. The catalogue is composed of the results of two DL methods, which, for every stamp, recover the probability of having a given morphology and of being a reliable stamp, as detailed below.
Figure 16: Colour-magnitude diagram. (g-r) colour versus the absolute magnitude in the r-band, calculated using standard cosmological parameters and the luminosity distance (\(D_{L}\)) estimated from the photometric redshift. The _left_ panel shows the binary classification, while the _right_ panel is colour coded according to the probability of being a late-type galaxy.
#### 5.4.1 A novel Value Added morphology classification catalogue for the southern hemisphere
In order to mediate between the variety of galaxy morphologies and the binary classification applied in this work, we allow for an independent classification into early- and late-type galaxies, i.e. the probabilities of belonging to each class do not necessarily sum up to one, see Section 4.2.1. As a consequence of this choice, some objects can be classified as belonging to both groups (using the binary classification) or to neither. The study of these two peculiar types of objects allows us to identify bulge-dominated lenticular or spiral galaxies (\(Amb_{1}\)), as well as compact, flocculent, star-forming galaxies, see Figure 12. Finally, this catalogue of galaxy morphologies covers areas of the Southern Sky for which, to our knowledge, no morphological catalogues have been released, see Figure 1.
#### 5.4.2 A novel parameter to assign a probability of being a reliable stamp
An interesting correlation is found when comparing the number of galaxies with a low probability of being LTG or ETG (\(Amb_{0}\)) with the probability of being a reliable stamp, see Section 3.2. In fact, the majority of objects with no binary classification as either early- or late-type galaxies (see the previous Section 5.4.1) have a low probability of being reliable stamps, see Figure 11. Figure 19 reveals that selecting only reliable stamps decreases the discrepancy with the Vega-Ferrero et al. (2021) classification into ETG and LTG, especially for faint objects (\(m_{r}>17\)). Moreover, as shown in Figure 15, among the non-reliable stamps there are extraordinary objects, such as the Antennae Galaxies, which will be identified and studied in a follow-up work.
## 6 Data availability
We publicly release our Value Added Catalogue (VAC) in the S-PLUS data base (splus.cloud).
Figure 17: Morphology-density relation. Normalized fraction of late- and early-type galaxies, with a probability of belonging to a given class higher than 0.9, in increasing density bins. The k4 estimator traces the local densities. The _top_ panel shows the total distribution, while in the _middle_ panel it is divided into different magnitude bins for late-type galaxies, which are drawn in shades of blue. In the _bottom_ panel early-type galaxies are coloured in shades of orange. For easier comparison, lines of equal magnitude have the same style.
Figure 18: Redshift distribution up to z = 0.08 for galaxies colour coded according to their probability of being late-type galaxies. Galaxies with a dual classification as early-types are drawn as larger circles, for better visibility.
## Acknowledgements
The S-PLUS project, including the T80-South robotic telescope and the S-PLUS scientific survey, was founded as a partnership between the Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP), the Observatorio Nacional (ON), the Federal University of Sergipe (UFS), and the Federal University of Santa Catarina (UFSC), with important financial and practical contributions from other collaborating institutes in Brazil, Chile (Universidad de La Serena), and Spain (Centro de Estudios de Fisica del Cosmos de Aragon, CEFCA). We further acknowledge financial support from the Sao Paulo Research Foundation (FAPESP), the Brazilian National Research Council (CNPq), the Coordination for the Improvement of Higher Education Personnel (CAPES), the Carlos Chagas Filho Rio de Janeiro State Research Foundation (FAPERJ), and the Brazilian Innovation Agency (FINEP).
The authors who are members of the S-PLUS collaboration are grateful for the contributions from CTIO staff in helping in the construction, commissioning and maintenance of the T80-South telescope and camera. We are also indebted to Rene Laporte and INPE, as well as Keith Taylor, for their important contributions to the project. From CEFCA, we particularly would like to thank Antonio Marin-Franch for his invaluable contributions in the early phases of the project, David Cristobal-Hornillos and his team for their help with the installation of the data reduction package _Jype_ version 0.9.9, Cesar Iñiguez for providing 2D measurements of the filter transmissions, and all other staff members for their support with various aspects of the project.
CMdO and LSI acknowledge funding for this work from FAPESP grants 2019/26492-3, 2019/11910-4, 2019/10923-5 and 2009/54202-8. GS, CMdO and LS acknowledge support, respectively, from CNPq grants 309209/2019-6, 115795/2020-0 and 304819/201794. NM acknowledges the University of Sao Paulo PUB grant 83-1 of 2020. A. C. acknowledges the financial support provided by FAPERJ grants E-26/200.607 and 210.371/2022(270993).

Figure 19: Histograms showing the proportion of galaxies that were classified in accordance with our classification. The _top panel_ represents the galaxies classified as robust LTG by DES and the _middle_ one represents those classified as robust ETG by DES. The _bottom panel_ is like the middle one, but for robust ETG with \(b/a>0.7\), see text. The histograms were made using the probability of belonging to the corresponding classes obtained in this work, with the dashed line being the threshold used in our classification; in other words, every galaxy that stands on the right side of this line was classified equally by both works.

Figure 20: Fraction of objects with a different classification between this work and Vega-Ferrero et al. (2021) in different magnitude bins. Galaxies classified as late types in this work and as early types in VF21 are shown by the cyan line, while the blue line presents the same selection but excluding edge-on objects, i.e. by imposing that the axis ratio \(q=b/a>0.7\). In orange is shown the behaviour of objects classified as early-type galaxies in this work and as spirals in VF21. The grey line shows the global mismatch (the sum of the cyan and orange lines), which indicates that the discrepancy between the classifications in the two works increases with increasing magnitude, as expected given the lower resolution and depth of S-PLUS in comparison with DES. We note that the total number of galaxies that are classified as ‘robust’ in Vega-Ferrero et al. (2021) and as reliable stamps in this work decreases after \(r=17\), causing the improvement of the match at \(r=18\).
CRB acknowledges the financial support from CNPq (31607/2021-04) and from FAPERJ (grants 201.456/2022 and 210.330/2022) and the FINEP contract 01.22.0505.00 (ref. 1891/22). KK acknowledges full financial support from ANID through FONDECYT Postdoctorado Project 3200139, Chile.
The authors made use of multi GPU Sci-Mind machines developed and tested for Artificial Intelligence and would like to thank P. Russano and M. Portes de Albuquerque for all the support in infrastructure matters.
The authors made use of and acknowledge the TOPCAT\({}^{7}\) tool to analyse the data and astrotools (Cardoso, 2022) to visualize the objects. For complementary visual inspection and some panels the authors made use of small cut-out images from the Legacy Survey. The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID #2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID #2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID #2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation. The authors thank F. Ferrari and J. Crosset for inspiring discussions and suggestions.
Footnote 7: [http://www.starlink.ac.uk/topcat/](http://www.starlink.ac.uk/topcat/) (TOPCAT)
|
2305.00950 | Probabilistic 3D segmentation for aleatoric uncertainty quantification
in full 3D medical data | Uncertainty quantification in medical images has become an essential addition
to segmentation models for practical application in the real world. Although
there are valuable developments in accurate uncertainty quantification methods
using 2D images and slices of 3D volumes, in clinical practice, the complete 3D
volumes (such as CT and MRI scans) are used to evaluate and plan the medical
procedure. As a result, the existing 2D methods miss the rich 3D spatial
information when resolving the uncertainty. A popular approach for quantifying
the ambiguity in the data is to learn a distribution over the possible
hypotheses. In recent work, this ambiguity has been modeled to be strictly
Gaussian. Normalizing Flows (NFs) are capable of modelling more complex
distributions and thus, better fit the embedding space of the data. To this
end, we have developed a 3D probabilistic segmentation framework augmented with
NFs, to enable capturing the distributions of various complexity. To test the
proposed approach, we evaluate the model on the LIDC-IDRI dataset for lung
nodule segmentation and quantify the aleatoric uncertainty introduced by the
multi-annotator setting and inherent ambiguity in the CT data. Following this
approach, we are the first to present a 3D Squared Generalized Energy Distance
(GED) of 0.401 and a high 0.468 Hungarian-matched 3D IoU. The obtained results
reveal the value in capturing the 3D uncertainty, using a flexible posterior
distribution augmented with a Normalizing Flow. Finally, we present the
aleatoric uncertainty in a visual manner with the aim to provide clinicians
with additional insight into data ambiguity and facilitating more informed
decision-making. | Christiaan G. A. Viviers, Amaan M. M. Valiuddin, Peter H. N. de With, Fons van der Sommen | 2023-05-01T17:19:20Z | http://arxiv.org/abs/2305.00950v1 | # Probabilistic 3D Segmentation for Aleatoric Uncertainty Quantification in full 3D Medical Data
###### Abstract
Uncertainty quantification in medical images has become an essential addition to segmentation models for practical application in the real world. Although there are valuable developments in accurate uncertainty quantification methods using 2D images and slices of 3D volumes, in clinical practice, the complete 3D volumes (such as CT and MRI scans) are used to evaluate and plan the medical procedure. As a result, the existing 2D methods miss the rich 3D spatial information when resolving the uncertainty. A popular approach for quantifying the ambiguity in the data is to learn a distribution over the possible hypotheses. In recent work, this ambiguity has been modeled to be strictly Gaussian. Normalizing Flows (NFs) are capable of modelling more complex distributions and thus better fit the embedding space of the data. To this end, we have developed a 3D probabilistic segmentation framework augmented with NFs, to enable capturing the distributions of various complexity. To test the proposed approach, we evaluate the model on the LIDC-IDRI dataset for lung nodule segmentation and quantify the aleatoric uncertainty introduced by the multi-annotator setting and inherent ambiguity in the CT data. Following this approach, we are the first to present a 3D Squared Generalized Energy Distance (\(D_{\text{GED}}^{2}\)) of 0.401 and a high 0.468 Hungarian-matched 3D IoU. The obtained results reveal the value of capturing the 3D uncertainty, using a flexible posterior distribution augmented with a Normalizing Flow. Finally, we present the aleatoric uncertainty in a visual manner, with the aim to provide clinicians with additional insight into data ambiguity and to facilitate more informed decision-making. Our code is publicly available at: [https://github.com/cviviers/prob_3D_segmentation](https://github.com/cviviers/prob_3D_segmentation)
Aleatoric Uncertainty, Segmentation, Normalizing Flows, Volumetric Medical Data

Further author information: Christiaan G.A. Viviers, E-mail: [email protected], Telephone: +31 (0)6 206 60171
## 1 Introduction
With the broad acceptance of deep learning-based computer-aided diagnosis and computer-aided detection (CAD) methods, increasingly more requirements are being posed for their successful deployment. These CAD methods typically assist clinicians with decision-making about potentially critical medical procedures or the planning thereof. While most research has focused on maximizing a specific accuracy metric, in practice, a high accuracy along with a strong indication of potential uncertainty or ambiguity in the model output is extremely valuable and highly relevant. Deep learning-based semantic segmentation methods using convolutional neural networks have successfully been adopted as CAD methods for a wide range of medical imaging modalities. While research has been conducted towards quantifying the types of uncertainty occurring when using a segmentation model, most of this work is limited to the quantification of the uncertainty in two-dimensional slices or images, where the latter often originate from a 3D volume such as in CT and MRI. However, these approaches fail to exploit the rich 3D features that may help in resolving ambiguities in the volume. In this research, we use the full 3D volume as input to derive a reliable estimate of the aleatoric uncertainty in the segmentation output. This approach potentially enables a better 3D visualization of this uncertainty in clinical practice.
Two types of uncertainty are typically prevalent in deep learning-based image analysis methods: _aleatoric_ and _epistemic_ uncertainty[1]. Aleatoric uncertainty is an estimate of the intrinsic, irreducible ambiguity in data. It is usually associated with inherent noise in the data and its acquisition process. Epistemic uncertainty is the
uncertainty about the model, either as a result of the architecture or of the true parameter values of the model due to limited knowledge, e.g. a finite training set size. In this work, we employ the LIDC-IDRI lung CT dataset [2], which makes use of multiple ground-truth annotations per lung nodule. As described in earlier work of Valiuddin _et al._[3], the epistemic uncertainty - i.e. preferences, experiences and knowledge - of the annotators manifests as aleatoric uncertainty when they provide annotations as ground-truth data. The different annotations per nodule add ambiguity during training of a segmentation network. During the annotation process, the radiologist typically annotates on a single 2D plane of the 3D volume. However, whilst annotating, full access to the other two views of the CT scan is typically available on the same screen. This allows the annotator to correct the annotation if it does not align with the other two views and, as a result, the annotator creates a true 3D annotation. In the LIDC-IDRI dataset, as described in Section 3.3, annotators were allowed multiple rounds of annotation, thereby potentially increasing the quality of the annotation by using the full 3D information available.
In recent research, various methods have been proposed to quantify the uncertainty arising in segmentation models or resulting from images. One increasingly popular approach is the Probabilistic U-Net [4], proposed by Kohl _et al._, which combines a 2D U-Net with a conditional variational autoencoder (VAE) capable of learning a distribution over the possible annotations, ultimately constructing a generative segmentation model. The Probabilistic U-Net provides compelling results in resolving the ambiguity in an image. More recently, various improvements to this model have been proposed [3, 5, 6], paving the way as one of the leading uncertainty quantification methods. Selvan _et al._[5] and Valiuddin _et al._[3] propose adding a Normalizing Flow to the posterior network of the Probabilistic U-Net. This allows the model to move away from modelling the ambiguity as strictly axis-aligned Gaussian and, instead, allows for a learned posterior distribution of varying complexity.
This work provides the following contributions. First, we propose a 3D probabilistic framework, which builds upon the research from both Kohl _et al._ and Valiuddin _et al._ and exploits the full 3D spatial information to resolve the uncertainty in the original CT volumes. Second, it is shown that more diverse segmentations are obtained when the posterior distribution is enhanced by a Normalizing Flow. This finding suggests that such modeling enables capturing the uncertainty more accurately. Third, we test the proposed method's ability to capture uncertainty on the LIDC-IDRI lung nodule dataset and are the first to present results for the 3D version of the \(D_{\mathrm{GED}}^{2}\) metric, showing that a high segmentation accuracy is obtained using the Hungarian-matched 3D IoU.
In this work, we present a 3D probabilistic segmentation model that exploits the full 3D-spatial information to more accurately quantify the aleatoric uncertainty, while maintaining 3D consistency. The proposed model is equipped with a Normalizing Flow to eliminate the strictly Gaussian latent space that is currently enforced to further improve the model's ability to quantify the uncertainty. More specifically, the model consists of a 3D U-Net and a 3D conditional VAE enhanced with the Normalizing Flow, to generate a diverse set of plausible segmentations.
## 2 Related Work
The Probabilistic U-Net [4] is capable of generating a diverse set of valid segmentation hypotheses. This model was proposed by Kohl _et al._ as a method for capturing the ambiguity in an image. Shi Hu _et al._[6] showed how this ambiguity can be interpreted as uncertainty. They propose adding variational dropout and an additional inter-grader variability term to the training objective of the Probabilistic U-Net. This change allows the model to capture a combination of the epistemic and aleatoric uncertainty. Selvan _et al._[5] improve the quantification of aleatoric uncertainty by adding a Normalizing Flow to the posterior network of the VAE [7] in the Probabilistic U-Net. Valiuddin _et al._[3] indicate that an additional metric, the Hungarian-matched IoU, is necessary to effectively evaluate the performance of these uncertainty quantification methods.
The aforementioned studies have all been conducted with a focus on 2D images and slices from the original 3D volume, which is a step away from the domain where the true uncertainty resides. In addition, these methods all heavily focus on a specially crafted subset of the LIDC-IDRI dataset. The image patches used to train and test the models exactly contain the regions where the ambiguity resides, and nothing more. In practice, at test time, this is not the case. In this work we use a fixed 3D volume surrounding the lung nodules and consider all the image patches, with or without uncertainty.
## 3 Methods
### Model Architecture
In this research, we extend the Probabilistic U-Net to the 3D domain and address a key limitation by augmenting the posterior network with a Normalizing Flow (NF). The network consists of a 3D U-Net, a Prior network, Posterior network enhanced with an NF, and a Feature Combination network. By combining the 3D spatial features extracted by the U-Net with samples taken from a latent distribution encapsulating the solution space, a set of diverse, but plausible segmentations can be generated. The standard deviation across these predictions can be interpreted as the aleatoric uncertainty. The U-Net [8] and 3D U-Net [9] have shown time and again their ability to segment structures of interest at state-of-the-art performance. While we employ the 3D U-Net to obtain the relevant 3D spatial information, this approach is generic and allows for any other segmentation network to be used. A deep CNN conditioned on the input CT scan is used to model a low-dimensional axis-aligned Gaussian latent space, representing the segmentation variants (Prior distribution). Another CNN-based axis-aligned Gaussian encoder (Posterior network) that is conditioned on both the query CT scan and a ground-truth segmentation, is utilized during training to model a posterior low-dimensional latent space. Valiuddin _et al._ point out the shortcomings in modelling the posterior distribution to be strictly Gaussian. As such, we augment our posterior network with either a 2-step planar or radial flow [10], to potentially increase the complexity of the captured posterior distribution, and thereby provide more meaningful updates to the prior network during training. Figure 1 portrays a detailed diagram of the proposed network architecture. During training, the Probabilistic 3D U-Net makes use of the Posterior network, Prior network, U-Net and the feature combination layers. Samples are taken from the image-label conditional distribution captured by the Posterior network and combined with the features extracted from the U-Net through the Feature Combination network. The loss is then computed using Equation (1). The Prior network follows the Posterior network during training, as enforced by the KL-divergence, and thus learns to capture this image-label conditional distribution from the image alone. At test time, the Posterior network is discarded and samples are taken from the Prior network instead. It should be noted that there is only one forward pass through the U-Net (for image feature extraction) and the Prior network (to capture the image-conditional distribution). However, multiple passes through the feature combination network are made, in order to combine a new sample from the Prior distribution with the image features.
Figure 1: Diagram of the 3D Probabilistic U-Net with an augmented flow posterior. The bottom network depicts the 3D U-Net, the Prior and Posterior network are shown at the top and a Feature combination network at the right combines samples taken from the captured distributions. The diagram additionally depicts both the training and testing configuration.
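As a sketch of the test-time procedure described above (one pass through the U-Net and the Prior network, repeated passes through the Feature Combination network), assuming placeholder module names and a mean/log-scale parameterisation of the prior:

```python
import torch

@torch.no_grad()
def sample_segmentations(unet, prior_net, fcomb, ct_volume, n_samples=16):
    """Test-time sampling sketch: one forward pass through the U-Net and the
    prior network, then repeated passes through the feature-combination net,
    each with a fresh latent sample. Module names are placeholders."""
    feats = unet(ct_volume)                  # 3D image features, computed once
    mu, log_sigma = prior_net(ct_volume)     # image-conditional Gaussian prior
    preds = []
    for _ in range(n_samples):
        z = mu + log_sigma.exp() * torch.randn_like(mu)  # reparameterised sample
        # z is typically broadcast over the spatial dimensions inside fcomb.
        preds.append(torch.sigmoid(fcomb(feats, z)))     # plausible segmentation
    return torch.stack(preds)                # (n_samples, B, 1, D, H, W)
```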
### Loss Function & Evaluation Criteria
In line with previous work on conditional variational autoencoders, our training objective consists of minimizing the negative variational lower bound [11]. This entails minimizing a cross-entropy difference (in our case) between the ground-truth segmentation (\(\mathbf{y}\)) and a prediction (\(\mathbf{s}\)), minimizing the Kullback-Leibler (KL) divergence between the posterior distribution (\(p_{\phi}\)) and the prior distribution (\(p_{\theta}\)), and finally, adding a correction term for the density transformation through the Normalizing Flow [12]. Given a query image (\(\mathbf{x}\)) and a posterior sample (\(\mathbf{z}\)), \(p_{\psi}\) (a U-Net and Feature combination network) generates a plausible segmentation (\(\mathbf{s}\)). This loss term can formally be specified by
\[\mathcal{L}(\mathbf{y},\mathbf{x},\theta,\phi,\psi)=-\mathbb{E}_{p_{\phi}( \mathbf{z}|\mathbf{y},\mathbf{x})}[\log p_{\psi}(\mathbf{y}|\mathbf{z}, \mathbf{x})]+\beta\cdot\left(\mathrm{KL}\left(\left.p_{\phi}(\mathbf{z}_{0}| \mathbf{y},\mathbf{x})||p_{\theta}(\mathbf{z}|\mathbf{x})\right.\right)- \mathbb{E}_{p_{\phi}(\mathbf{z}_{0}|\mathbf{y},\mathbf{x})}\left[\sum_{i=1}^ {K}\log\left(\left|\det\frac{df_{i}}{d\mathbf{z}_{i-1}}\right|\right)\right] \right). \tag{1}\]
These losses are combined and weighted using the hyperparameter \(\beta\)[13, 14]. The original research provides a detailed derivation of the ELBO loss [11], how it is used in the context of the Probabilistic U-Net [4], and the NF-likelihood objective [3, 10, 12].
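For illustration, a minimal PyTorch sketch of Equation (1) is given below, mirroring its three terms (reconstruction cross-entropy, KL divergence between the base posterior and the prior, and the flow log-determinant correction); tensor shapes and argument names are illustrative, not those of our released code:

```python
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def elbo_loss(logits, target, prior: Normal, posterior: Normal,
              log_det_jacobians, beta: float):
    """Sketch of the training objective of Eq. (1).

    `prior`/`posterior` are diagonal Normal distributions over the latent space;
    `log_det_jacobians` is the per-sample sum of log|det df_i/dz_{i-1}| over the
    K flow steps, matching the last term of Eq. (1)."""
    recon = F.binary_cross_entropy_with_logits(logits, target)   # -E[log p(y|z,x)]
    kl = kl_divergence(posterior, prior).sum(dim=-1)             # per-sample KL
    return recon + beta * (kl - log_det_jacobians).mean()
```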
The metric \(D^{2}_{\mathrm{GED}}\), or Squared Generalized Energy Distance, has become the _de-facto_ metric for uncertainty quantification and for measuring the distance between distributions of segmentations. This metric is defined as
\[D^{2}_{\mathrm{GED}}(P_{\mathrm{GT}},P_{\mathrm{Out}})=2\mathbb{E}\left[d( \mathbb{S},\mathbb{Y})\right]-\mathbb{E}\left[d(\mathbb{S},\mathbb{S}^{ \prime})\right]-\mathbb{E}\left[d(\mathbb{Y},\mathbb{Y}^{\prime})\right], \tag{2}\]
where \(d\) is a distance measure, in our case \(1-\mathrm{IoU}(x,y)\). The parameters \(\mathbb{S}\) and \(\mathbb{S}^{\prime}\) are independent samples from the predicted distribution \(P_{\mathrm{Out}}\). The parameters \(\mathbb{Y}\) and \(\mathbb{Y}^{\prime}\) are samples from the ground-truth distribution \(P_{\mathrm{GT}}\) (the 4 annotations). In addition to the \(D^{2}_{\mathrm{GED}}\), we also report the Hungarian-matched IoU. This compensates for a shortcoming of the \(D^{2}_{\mathrm{GED}}\): when the predictions are relatively poor, the metric by definition rewards sample diversity. We duplicate the ground-truth set (4 annotations) to match the desired sample number when computing the Hungarian-matched IoU. This measure calculates the distance between two discrete distributions by determining an optimal coupling between the ground-truth and prediction sets, subject to the IoU metric.
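For reference, a minimal NumPy/SciPy sketch of Equation (2) and of the Hungarian-matched IoU (with the ground-truth set duplicated to the sample count, as described above) could read:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou_dist(a, b):
    """d(a, b) = 1 - IoU between two binary masks; empty-vs-empty gives d = 0."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else 1.0 - inter / union

def ged_squared(samples, gts):
    """Squared Generalized Energy Distance of Eq. (2) over binary 3D masks."""
    d_sy = np.mean([iou_dist(s, y) for s in samples for y in gts])
    d_ss = np.mean([iou_dist(s, t) for s in samples for t in samples])
    d_yy = np.mean([iou_dist(y, z) for y in gts for z in gts])
    return 2 * d_sy - d_ss - d_yy

def hungarian_iou(samples, gts):
    """Hungarian-matched IoU; ground truths are duplicated to the sample count."""
    reps = int(np.ceil(len(samples) / len(gts)))
    gts = (list(gts) * reps)[: len(samples)]
    cost = np.array([[iou_dist(s, y) for y in gts] for s in samples])
    rows, cols = linear_sum_assignment(cost)   # optimal coupling w.r.t. 1 - IoU
    return 1.0 - cost[rows, cols].mean()
```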
### Dataset & Data Preparation
To evaluate the proposed method's ability to resolve the ambiguity in the data, we use the popular LIDC-IDRI dataset. This dataset contains the lung CT scans from 1,010 patients with manual lesion annotations from up to 4 experts. In total, there are 1,018 CT scans potentially containing multiple lung nodules of different levels of malignancy. In this work, we have used the annotations from a second reading, in which the radiologists were presented an anonymized version of the annotations from the other experts and were allowed to make adjustments to their own annotations. Contrary to previous work, we use every nodule in the dataset if it has been annotated by at least one radiologist (potentially missed by three), regardless of the shape or severity of the nodule. We pre-process the CT scans by clustering all nodule annotations of a scan, based on a distance measure between the annotations: if an annotation lies within one voxel spacing (of that particular CT scan) of another annotation, the two are grouped as belonging to the same nodule. The scan is resampled to 0.5 mm along the \(x\)- and \(y\)-dimensions and 1 mm along the \(z\)-dimension. This is followed by cropping the CT scan and the resulting annotations, based on the center of mass of the first annotator's mask, to a dimension of 96\(\times\)180\(\times\)180 voxels in the \(z,x,y\)-dimensions. Finally, if the nodule does not have at least 4 annotations, the ground-truth (GT) masks are filled with empty annotations. This addition is made to be consistent with previous work on this dataset [3, 4] and to capture the difficulty in detecting a nodule. This results in a total of 2,651 3D patches, each containing a nodule and four annotations. An example of a nodule in the CT scan and the four ground-truth annotations are depicted in Figure 2.
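A simplified sketch of the grouping step is given below, using annotation centroids as a proxy for the distance measure between annotations (the actual criterion may operate on the masks themselves):

```python
import numpy as np

def group_annotations(masks, voxel_spacing):
    """Cluster per-rater nodule masks into nodules: two annotations are grouped
    if their centroids lie within one voxel spacing of each other. A simplified,
    centroid-based sketch of the grouping described in the text."""
    spacing = np.asarray(voxel_spacing, dtype=float)   # e.g. (z, y, x) in mm
    centroids = [np.array(np.nonzero(m)).mean(axis=1) * spacing for m in masks]
    groups = []
    for i, c in enumerate(centroids):
        for g in groups:
            if any(np.linalg.norm(c - centroids[j]) <= spacing.max() for j in g):
                g.append(i)
                break
        else:
            groups.append([i])   # start a new nodule group
    return groups
```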
### Experiments
To compare the proposed approach against prior work, we conduct six experiments. We train the (1) original Probabilistic 2D U-Net and the (2) Radial NF-augmented Probabilistic 2D U-Net on 2D axial slices of the 3D volume. In practice, we filter the slices based on the presence of at least one positive annotation from any of the raters and use them for training, to avoid a heavily imbalanced training set. The (3) 3D U-Net, (4) Probabilistic
3D U-Net and an (5-6) NF-augmented (Radial and Planar) Probabilistic 3D U-Net are then trained on the 3D patches. In contrast to prior work where the 3D lesion was sliced and split into 2D images, where some 2D slices potentially land in the training set and some in the validation/test set, we conduct our experiments on a per-lesion basis. This avoids any potential model bias caused by the splitting and makes the proposed approach more clinically relevant, since we can present the uncertainty for each lesion.
In our implementation, we split the nodule data into a 70/15/15 training/validation/test split. During training, we randomly sample one of the four annotations to be used as the ground-truth segmentation and crop the CT volume and label to 64\(\times\)128\(\times\)128 voxels. In line with previous work, for the 3D Probabilistic U-Net, the dimensionality of the latent space is set to \(L=6\). The proposed framework is implemented in PyTorch and extends the work conducted by Wolny _et al._[15] We train using a batch size of 32 in the 2D case and 4 in the 3D case. An Adam optimizer with an initial learning rate of \(1\times 10^{-4}\) and a weight decay of \(1\times 10^{-5}\) is used. We reduce the learning rate by a factor of 0.2 if the validation loss does not decrease after 20 epochs. The \(\beta\) parameter is controlled using a cosine cyclical annealing strategy, as described by Fu _et al._[13] In all our 3D Probabilistic U-Net experiments, we use the same hyperparameters and a hardware configuration with an RTX 3090Ti GPU (available from Nvidia Inc., Santa Clara, CA, USA). Training to completion takes about 2 days on average. For performance evaluations, we report results using a more readily available RTX 2080Ti GPU.
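A minimal sketch of a cosine cyclical \(\beta\)-annealing schedule in the spirit of Fu _et al._[13] is shown below; the number of cycles and the ramp ratio are illustrative hyperparameters, not necessarily those used in our experiments:

```python
import numpy as np

def cyclical_beta(step: int, total_steps: int, n_cycles: int = 4,
                  ratio: float = 0.5, beta_max: float = 1.0) -> float:
    """Cosine cyclical annealing of the KL weight beta: within each cycle, beta
    rises from 0 to beta_max over the first `ratio` fraction of the cycle
    following a cosine ramp, then stays at beta_max for the remainder."""
    period = total_steps / n_cycles
    t = (step % period) / period          # position within the current cycle
    if t <= ratio:
        return beta_max * 0.5 * (1.0 - np.cos(np.pi * t / ratio))
    return beta_max
```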
## 4 Results
The results of our experiments are shown in Figure 3, Figure 4, Figure 5 and Table 1. In Figure 3, example predictions from all the models used in our experiments are showcased for qualitative evaluation. Here, \(\mu\) GT refers to the mean segmentation of the four raters and \(\mu\) Pred is the mean of the predictions. This mean prediction is the segmentation recommended by the Probabilistic 3D U-Net. Additionally, the figure depicts the variation in the segmentations. More specifically, the standard deviation of the ground-truth labels (\(\sigma\) GT) and of the logits (after sigmoid activation) resulting from the model predictions (\(\sigma\) Pred) are depicted. In the figure it can be observed that this deviation across the predictions can be interpreted as the uncertainty. We scale the uncertainty heat-map visualization to the maximum standard deviation of a particular model's predictions. Additionally, the figure shows that the deterministic 3D U-Net produces a rather conservative segmentation of part of the lesion, while the other models are capable of producing a more accurate segmentation.
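For clarity, the computation of \(\mu\) Pred and \(\sigma\) Pred from a stack of sampled segmentations can be sketched as follows (a minimal sketch, assuming the sample stack produced at test time):

```python
import torch

def summarize_samples(preds: torch.Tensor):
    """Given stacked per-sample sigmoid outputs of shape (n_samples, D, H, W),
    return the mean prediction and the voxel-wise standard deviation, which is
    interpreted as the aleatoric uncertainty map in the text."""
    mu_pred = preds.mean(dim=0)
    sigma_pred = preds.std(dim=0)
    # Scale the heat map to the maximum std of this model's predictions.
    sigma_vis = sigma_pred / sigma_pred.max().clamp_min(1e-8)
    return mu_pred, sigma_pred, sigma_vis
```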
Figure 2: _Example nodule in a slice from the CT scan and the four ground-truth annotations._
Figure 4 depicts the predictions of the 2D and 3D Prob. U-Net for 2 slices from a nodule in the test set. It can be seen that the 2D model misses the nodule in Slice 34, while its 3D counterpart correctly detects it. The 2D model does express some uncertainty about the presence of the nodule, but it is rather low.
Figure 3: _Example predictions for the same data slice with a nodule from the 3D U-Net, 2D & 3D Prob.U-Net in the test set. This same slice is used to get a comparative sense of model performance._
In Figure 5 multiple consecutive slices are depicted of a CT scan from our test set and Prob. 3D U-Net predictions for a nodule. Slice 25 displays some uncertainty from the model about the presence of a lesion although no rater indicated its existence yet. In the next slice the lesion is clearly delineated by the raters and the model captures the uncertainty in a similar fashion as the disagreement between them. Slices 27-31 and 33-35 are not shown, since the model correctly segments and captures the uncertainty in comparison to the raters. Slice 38 reveals the large lesion as shown by the annotations from the raters, but it rapidly disappears towards Slice 39. However, the model still segments the lesion in Slice 39 and expresses high uncertainty.
Figure 4: _Zoomed example 3D & 2D Prob.U-Net predictions for 2 slices from a nodule in the test set._
In Table 1 we quantitatively compare the proposed approach with its 2D counterparts in resolving the ambiguity in the LIDC-IDRI dataset. We compute the 2D \(D^{2}_{\rm GED}\) and 2D Hungarian IoU on a per-slice basis and take the average across all slices of the lesion, ignoring slices with empty ground truths and predictions. The 3D \(D^{2}_{\rm GED}\) and 3D Hungarian IoU are computed directly on a per-case level and then averaged across the test set. In the case of the 2D models, during the forward pass of a single 2D slice, an image-conditional prior distribution is computed. We then draw 16 samples from this distribution. For the next slice in the series of the lesion, a completely new prior distribution is presented at inference time. As such, the uncertainty captured by this prior distribution is inconsistent over the individual slices of the lesion, and it is not possible to reconstruct a consistent and true 3D segmentation using this approach (certainly not with 3D metrics such as the 3D \(D^{2}_{\rm GED}\) and 3D IoU). For the 3D Probabilistic U-Net, we report the 2D Hungarian-matched IoU and 2D \(D^{2}_{\rm GED}\) (2D IoU distance averaged along the \(z\)-axis) and the 3D Hungarian-matched IoU and \(D^{2}_{\rm GED}\).
Figure 5: _Example 3D Prob.U-Net predictions for multiple slices from a nodule in the test set._
We compare the inference time of the 3D U-Net and the Prob. 2D and 3D U-Net per nodule volume (\(64\times 128\times 128\) voxels). We do not include additional results on the models with NF-augmented posterior networks, since the 2-step low-dimensional bijective transformation has a negligible computational time footprint in comparison to the rest of the network. Table 2 showcases the computation time per operation for the different models and with different batch sizes (BS). It can be seen that the Prob. 3D U-Net has a shorter inference time for a volume of this size, compared to its 2D counterpart.
## 5 Discussion
This research extends the Probabilistic U-Net to the 3D domain to utilize the rich 3D spatial information when resolving the uncertainty. We introduce the Probabilistic 3D U-Net and employ recent improvements in the 2D Probabilistic U-Net, by adding either a planar or radial flow to the posterior network. This augmentation with NFs enables capturing distributions of various complexity, thereby relaxing the strictly axis-aligned Gaussian constraint previously employed. To test the model's ability to capture the aleatoric uncertainty, we use the LIDC-IDRI dataset for benchmark tests.
Section 4 displays the results of the conducted experiments. For qualitative evaluations, Figure 3 showcases example predictions from all the models used in this research. It can be noted that all of the models perform well on this clearly defined lesion, except for the 3D U-Net. The 3D U-Net segments the lesion in a conservative manner, possibly due to seeing many empty ground-truth labels and not being able to capture this ambiguity in a meaningful way. The Prob. 3D U-Net with planar flow expresses more uncertainty about various parts of the lesion. Figure 4 highlights example predictions of the Prob. 3D U-Net, where information from the complete 3D volume is used to detect, segment and resolve the uncertainty about the lesion in axial Slice 34 of the CT scan. The same nodule is missed by the Prob. 2D U-Net, due to a lack of information from prior slices. These results are also reflected in the 2D \(D^{2}_{\text{GED}}\) and Hungarian IoU, as shown in Table 1, with the 3D models outperforming the 2D models. Interestingly, in Figure 5, the 3D spatial awareness of the model is showcased through the uncertainty expressed in Slice 25. A rather large lesion is coming up (iterating through the CT slices in ascending order) and the model expresses uncertainty about the exact starting position, since the raters segmented no lesion followed by a large lesion in consecutive slices. The same phenomenon can be seen moving from Slice 38 to 39, although here the model incorrectly (according to the raters) segments the lesion while it is still partially visible.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline Model & 2D \(D^{2}_{\text{GED}}\) \(\downarrow\) & 2D IoU \(\uparrow\) & 3D \(D^{2}_{\text{GED}}\) \(\downarrow\) & 3D IoU \(\uparrow\) \\ \hline Kohl _et al._ & 0.445 & 0.473 & N/A & N/A \\ Valiuddin _et al._ & 0.441 & 0.481 & N/A & N/A \\
3D U-Net & 1.283 & 0.332 & 1.263 & 0.383 \\ \hline
3D Prob.U-Net & 0.427 & 0.510 & 0.422 & 0.457 \\ + Planar Flow & **0.417** & 0.511 & **0.393** & 0.465 \\ + Radial Flow & 0.429 & **0.520** & 0.401 & **0.468** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluations on the LIDC-IDRI test set (15%) of the different methods on the \(D^{2}_{\text{GED}}\) and Hungarian IoU metric based on 16 samples.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline Operation & 3D U-Net(BS 1) & 2D PU-Net(BS 1) & 2D PU-Net(BS 64) & 3D PU-Net(BS 1) \\ \hline Forward pass & 2.34 (ms) & 5.75 (ms) \(\times 64\) & 397.27 (ms) & 124.31 (ms) \\ Sample + F-comb (\(\times 1\)) & N/A & 0.51 (ms) \(\times 64\times 16\) & 157.09 (ms)\(\times 16\) & 8.44 (ms)\(\times 16\) \\ \hline Total & 2.34 (ms) & 892.29 (ms) & 2910.51 (ms) & 259.35 (ms) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Inference time (per operation) of the 3D U-Net and 16 samples from the Probabilistic 2D and 3D U-Net per nodule (\(64\times 128\times 128\) voxels). BS is the Batch Size.
By the performance evaluations presented in Table 2, it can be observed that it is most efficient to present the uncertainty with the Prob. 3D U-Net. For a volume of 64\(\times\)128\(\times\)128 voxels, the 64 2D slices can be passed through the Prob. 2D U-Net in a large batch, but this scales poorly in comparison to a single-slice forward pass. The forward pass of the Prob. 2D U-Net with a batch size of 64 takes 397.27 ms to compute. Drawing 16 samples from the prior distribution and combining them with the 2D U-Net features through the feature combination networks takes 2513.44 ms (157.09 ms\(\times\)16). In total this approach takes 2910.51 ms to compute, compared to 259.35 ms for calculating the uncertainty with the Prob. 3D U-Net, an approximately 10\(\times\) speed improvement. It should be noted that a significant computational time penalty remains when using the Prob. 3D U-Net in comparison to the standard 3D U-Net (2.34 ms for inference), although the latter is not a realistic alternative since no uncertainty can be expressed.
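As an aside, timings like these are typically obtained with a harness along the following lines (a sketch; `prob_unet_3d` and `prob_unet_2d` are hypothetical model handles, and the CUDA synchronization is needed so that asynchronous kernel launches are actually included in the measurement):

```python
import time
import torch

def time_forward_ms(model, x, n_warmup=10, n_runs=50):
    """Average forward-pass wall time in milliseconds on the current device."""
    model.eval()
    with torch.no_grad():
        for _ in range(n_warmup):          # warm up kernels / autotuning
            model(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return 1000.0 * (time.perf_counter() - start) / n_runs

# One 3D volume versus a batch of 64 axial slices (hypothetical models):
# volume = torch.randn(1, 1, 64, 128, 128, device="cuda")
# slices = torch.randn(64, 1, 128, 128, device="cuda")
# print(time_forward_ms(prob_unet_3d, volume), time_forward_ms(prob_unet_2d, slices))
```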
## 6 Conclusion
In CAD methods, it is important to provide clinicians with an accurate measure of uncertainty when they evaluate and plan their procedures. Accurately capturing and presenting segmentation uncertainty will increase clinician confidence in model predictions and facilitates better informed decision-making. Existing CT-based segmentation methods aim to do so by quantifying the uncertainty from 2D image slices, whereas the true uncertainty resides in the full 3D CT or MRI volume. We propose a novel 3D probabilistic segmentation model that is capable of resolving and presenting the aleatoric uncertainty in 3D volumes through diverse and plausible nodule segmentations. The model consists of a Deep 3D U-Net and a 3D conditional VAE that is augmented with a Normalizing Flow (NF) in the posterior network. NFs allow for more flexible distribution modelling and, as such, we alleviate the strictly Gaussian posterior distribution that was previously enforced. We test our approach on the LIDC-IDRI lung nodule CT dataset. This is, to the best of our knowledge, the first approach that presents the 3D Squared Generalized Energy Distance (\(D_{\text{GED}}^{2}\)) and 3D Hungarian-matched IoU for lung nodule segmentation and uncertainty prediction. We quantify the uncertainty prediction performance and achieve a 0.401 3D \(D_{\text{GED}}^{2}\) and a Hungarian-matched 3D IoU of 0.468 with the radial Prob. 3D U-Net. In addition, since the model uses the full native 3D volumes, it is a step closer to the practical application of accurately segmenting and presenting uncertainty in 3D CT data. Finally, we present the aleatoric uncertainty, computed as the standard deviation across the model predictions, in a visual manner. This enables an interpretable expression of the uncertainty, potentially providing clinicians additional insight into data ambiguity and allowing for more informed decision-making.
|
2305.11685 | Recycle-and-Distill: Universal Compression Strategy for
Transformer-based Speech SSL Models with Attention Map Reusing and Masking
Distillation | Transformer-based speech self-supervised learning (SSL) models, such as
HuBERT, show surprising performance in various speech processing tasks.
However, huge number of parameters in speech SSL models necessitate the
compression to a more compact model for wider usage in academia or small
companies. In this study, we suggest to reuse attention maps across the
Transformer layers, so as to remove key and query parameters while retaining
the number of layers. Furthermore, we propose a novel masking distillation
strategy to improve the student model's speech representation quality. We
extend the distillation loss to utilize both masked and unmasked speech frames
to fully leverage the teacher model's high-quality representation. Our
universal compression strategy yields the student model that achieves phoneme
error rate (PER) of 7.72% and word error rate (WER) of 9.96% on the SUPERB
benchmark. | Kangwook Jang, Sungnyun Kim, Se-Young Yun, Hoirin Kim | 2023-05-19T14:07:43Z | http://arxiv.org/abs/2305.11685v2 | Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation
###### Abstract
Transformer-based speech self-supervised learning (SSL) models, such as HuBERT, show surprising performance in various speech processing tasks. However, the huge number of parameters in speech SSL models necessitates compression to a more compact model for wider usage in academia or small companies. In this study, we suggest reusing attention maps across the Transformer layers, so as to remove key and query parameters while retaining the number of layers. Furthermore, we propose a novel masking distillation strategy to improve the student model's speech representation quality. We extend the distillation loss to utilize both masked and unmasked speech frames to fully leverage the teacher model's high-quality representation. Our universal compression strategy yields a student model that achieves a phoneme error rate (PER) of 7.72% and a word error rate (WER) of 9.96% on the SUPERB benchmark.
Kangwook Jang\({}^{1*}\), Sungnyun Kim\({}^{2*}\), Se-Young Yun\({}^{2}\), Hoirin Kim\({}^{1}\)\({}^{1}\)School of Electrical Engineering, KAIST
\({}^{2}\)Graduate School of AI, KAIST
{dnrrkdwkd12, ksn4397, yunseyoung, hoirkim}@kaist.ac.kr
**Index Terms**: speech self-supervised learning, model compression, attention map reusing, masking distillation
## 1 Introduction
Transformer-based speech SSL models [1, 2, 3] have been actively studied in the speech processing field [4], as SSL has emerged as a successful representation learning approach in recent years [5, 6, 7, 8]. In particular, wav2vec 2.0 [9], HuBERT [10], and wavLM [11], all of which inherit from BERT [12], show surprising performance in automatic speech recognition (ASR), comparable to supervised learning approaches [13, 14]. Since the versatility of speech SSL has also become crucial, the above models have been further explored in various applications including automatic speaker verification (ASV) [15] and emotion recognition (ER) [16].
However, these models have a huge number of parameters and are trained for a very long time, which makes it hard for resource-limited groups to train their own models. For instance, wav2vec 2.0 Large, with 317M parameters, would need to be pretrained for more than 290 days on a single V100 GPU [9] on the LibriSpeech dataset [17]. This necessitates building a compressed model that allows much more parameter-efficient training and lower computational overhead.
Knowledge distillation (KD) [18] is a common model compression technique where a smaller student model is trained by distilling knowledge from a teacher model. Prior efforts in distilling large-scale speech SSL models have been made by reducing the number of Transformer layers or shrinking their width. DistilHuBERT [19] is distilled by predicting multi-layer outputs of HuBERT, with most of the Transformer layers removed. FitHuBERT [20], instead of removing the layers, suggests cutting down the width of attention and feed-forward network (FFN) in each Transformer layer. LightHuBERT [21] creates a prunable supernet through distillation and conducts architecture search to make a small student.
Despite the effectiveness of previous approaches in mitigating the performance drop by compression, they still face several issues. (1) Wide and shallow students [19, 22] still exhibit degradation on content-related downstream tasks. (2) Layer-to-layer (L2L) distillation has proved effective [20, 22]; however, it is counter-intuitive in terms of compression since every layer's parameters are required. (3) Pruning by architecture search [21] prepares an additional teacher-sized supernet using 32 GPUs, which is not end-to-end (E2E) and cannot be easily trained by resource-limited groups.
We suggest reusing attention maps across the student's Transformer layers, which is inspired by previous works [23, 24] that claimed the similarity between attention maps. Attention map reusing enables us to remove key and query parameters in certain Transformer layers, making it unnecessary to retain all layer parameters for L2L distillation. Furthermore, we can reinvest the saved parameters to other parts of Transformer.
We also propose masking with L2L distillation for better speech representation quality in our student model. Masking speech frames is a widely used technique in speech SSL models [9, 10], trained by predicting the masked representation. This technique has been simply applied to distilling HuBERT [21], but not in the L2L manner. Our novel masking distillation scheme aims to fully leverage the teacher's representation by extending the distillation loss to both masked and unmasked speech frames. We emphasize that our scheme operates in an E2E fashion and enhances the general quality of speech representation, especially in content- and semantics-related tasks.
Combining the two approaches described above (Fig. 1), we reinvest the saved parameters from attention map reusing into FFN, and create our flagship model, **ARMHuBERT** (Attention map Reused Mask HuBERT). As evaluated on the SUPERB benchmark [25], ARMHuBERT achieves an overall score [11] of 78.1, the state of the art among E2E distillation methods. It also reaches 7.72% PER in phoneme recognition (PR), and 9.96% WER in ASR.
## 2 Preliminaries
### Transformer-based Speech SSL Models
The recent dominant SSL models in the speech field are wav2vec 2.0 [9], HuBERT [10], and wavLM [11], whose model structures are identical except at the detail level. Specifically, they share 12 or 24 Transformer [26] layers and a 7-layer 1D-CNN. Their pretraining schemes are based on masked prediction, estimating codewords from the output representations of the masked frames. Despite the superiority and scalability
of speech SSL models, the large number of parameters and the computational overhead make it difficult to train these models. We thus implement model compression on HuBERT and wavLM, the two dominant SSL models in speech, to demonstrate the effectiveness of our compression strategy.
### SUPERB Benchmark
Early speech SSL models focused on content-related downstream tasks such as ASR or PR [27, 28]; recently, however, their versatility on other tasks has also been recognized as crucial [11]. In this context, the SUPERB benchmark [25] has been proposed to evaluate the generalizability of speech SSL models, covering the aspects of content, speaker, semantics, and paralinguistics. We evaluate our representation against the SUPERB benchmark to verify the generalizability of our student model. The SUPERB downstream tasks include PR, ASR, keyword spotting (KS), query-by-example spoken term detection (QbE), speaker identification (SID), ASV, speaker diarization (SD), intent classification (IC), slot filling (SF), and ER.
## 3 Methodology
### Attention Map Reusing
Attention map reusing is a technique for substituting the present layer's attention map with the previous one, which has been covered in several domains [23, 29]. Prior works [23, 24] have pointed out the similarity of the attention maps across heads and layers in pretrained Transformer models, such as BERT [12] and ViT [30]. We leverage this property by reusing the attention maps to compress the student model. Alternatively, we can reassign the amount of parameters saved by attention map reusing, without increasing the total number of parameters.
In Transformer's multi-head self-attention (MHSA) module [26], the input \(x\in\mathbb{R}^{n\times d}\) with the sequence length \(n\) is transformed to \(H\) independent queries, keys, and values by transformation matrices \(W_{h,k},W_{h,q}\in\mathbb{R}^{d\times d_{k}}\), and \(W_{h,v}\in\mathbb{R}^{d\times d_{v}}\), respectively, for each head \(h\). Here, \(d_{k}\), \(d_{v}\), and \(d\) are the widths of the keys, values, and model, respectively.
\[\begin{array}{ll}K_{h}=xW_{h,k},&K_{h}\in\mathbb{R}^{n\times d_{k}},\\ Q_{h}=xW_{h,q},&Q_{h}\in\mathbb{R}^{n\times d_{k}},\\ V_{h}=xW_{h,v},&V_{h}\in\mathbb{R}^{n\times d_{v}}\end{array} \tag{1}\]
Then, key and query are multiplied along the width axis to obtain a scaled dot-product attention map, \(A_{h}\in\mathbb{R}^{n\times n}\). Linear combinations of the attention map and value for each head are concatenated, and then projected to the original width.
\[A_{h}=\textit{softmax}\big{(}Q_{h}K_{h}^{\top}/\sqrt{d_{k}}\big{)}, \tag{2}\]
\[\textit{MHSA}(x)=\big{[}A_{1}V_{1},\dots,A_{H}V_{H}\big{]}W_{o},\ W_{o}\in\mathbb{R}^{Hd_{v}\times d} \tag{3}\]
Attention map reusing is to replace \(A_{h}\) with the previous layer's one. For instance, if we reuse the \(k\)-th previous attention map on the current layer \(\ell\), the ReuseMHSA module is
\[\textit{ReuseMHSA}(x)=\big{[}A_{1}^{\ell-k}V_{1}^{\ell},\dots,A_{H}^{\ell-k} V_{H}^{\ell}\big{]}\ W_{o}^{\ell}. \tag{4}\]
Accordingly, computing \(K_{h}\) and \(Q_{h}\) can be omitted, reducing the number of multiplications and additions by \((2nd^{2}+n^{2}d)\). Assuming \(d/H=d_{v}=d_{k}\), the omitted computation accounts for half of the original computation for MHSA, which is \((4nd^{2}+2n^{2}d)\). As a result, less parameters and multiply-accumulates (MACs) are required as more ReuseMHSA modules are employed (see Sec. 5.1).
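A minimal PyTorch sketch of the reuse mechanism in eq. (4) follows (not the authors' implementation; the module computes only values and the output projection, taking an attention map from a previous layer as input):

```python
import torch
import torch.nn as nn

class ReuseMHSA(nn.Module):
    """Multi-head self-attention that consumes a precomputed attention map,
    so no key/query projections (and no QK^T product) are needed."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.h, self.d_v = n_heads, d_model // n_heads
        self.w_v = nn.Linear(d_model, d_model)   # all heads' value projections
        self.w_o = nn.Linear(d_model, d_model)   # output projection W_o

    def forward(self, x, attn_prev):
        # x: (batch, n, d_model); attn_prev: (batch, heads, n, n), rows sum to 1
        b, n, _ = x.shape
        v = self.w_v(x).view(b, n, self.h, self.d_v).transpose(1, 2)  # (b, h, n, d_v)
        out = attn_prev @ v                                           # A^{l-k} V^l
        out = out.transpose(1, 2).reshape(b, n, self.h * self.d_v)
        return self.w_o(out)

# Example: reuse layer 1's map in layer 2 (the alternating "2by6" pattern):
x = torch.randn(2, 50, 480)
attn = torch.softmax(torch.randn(2, 12, 50, 50), dim=-1)
print(ReuseMHSA(480, 12)(x, attn).shape)   # torch.Size([2, 50, 480])
```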
### Masking Distillation
Attention map reusing has reduced the number of parameters, however, it may affect the representation quality of the student model. To improve the student's representation learning, we offer a novel masking distillation scheme that leverages the teacher's representation knowledge in a more sophisticated way.
Speech frame masking involves learning representation through masked prediction, where the model learns to represent masked frames accurately based on other unmasked frames. LightHuBERT [21], inspired by data2vec [8], has first applied the masking strategy to distilling HuBERT. In this approach, the teacher model guides the representation of masked frames. Let \(\mu(x)\) be the masked input, and \(f^{t}\) and \(f^{s}\) the teacher and
Figure 1: Our compression strategy involves reusing the attention map of the previous layer and extending the distillation process to masked (red arrow) and unmasked (blue arrow) representations. The input masked frames are identical for both teacher and student.
student model. Then, the masked loss function becomes
\[\mathcal{L}(x)=\frac{1}{|M|}\sum_{i\in M}\left\|f_{i}^{t}(x)-f_{i}^{s}(\mu(x)) \right\|_{2} \tag{5}\]
where \(f_{i}\) is the \(i\)-th frame of the speech representation, and \(M\) is the set of the masked frames.
In addition to the masked part loss (eq. 5), we suggest employing an unmasked loss, since the teacher model can provide high-quality representation even on the unmasked frames. However, if the masking process removes essential frames, distilling the intact form of \(f^{t}(x)\) can leak essential knowledge that should have been removed. This induces biased predictions of the student, as it learns information that cannot be inferred from the masked input.
To prevent this, we make the teacher model receive the same masked input as the student does when distilling the unmasked part. Hence, the entire distillation loss becomes
\[\begin{split}\mathcal{L}(x)&=\sum_{\ell}\alpha_{ \ell}\big{[}\mathcal{L}_{m,\ell}(x)+\mathcal{L}_{u,\ell}(x)\big{]}\\ &=\sum_{\ell}\frac{\alpha_{\ell}}{|M|}\sum_{i\in M}\left\|f_{i, \ell}^{t}(x)-f_{i,\ell}^{s}(\mu(x))\right\|_{2}\\ &+\sum_{\ell}\frac{\alpha_{\ell}}{n-|M|}\sum_{i\notin M}\left\|f_{ i,\ell}^{t}(\mu(x))-f_{i,\ell}^{s}(\mu(x))\right\|_{2}\end{split} \tag{6}\]
where \(\alpha_{\ell}\) is the layerwise coefficient. \(\mathcal{L}_{m,\ell}\) and \(\mathcal{L}_{u,\ell}\) represent masked loss and unmasked loss of the \(\ell\)-th layer, respectively.
In summary, our novel masking distillation strategy appropriately guides the student's knowledge acquisition, by distilling not only the masked representation of unmasked data but also the unmasked representation of masked data (see Fig. 1). In Sec. 5.2, we investigate the strength of our masking strategy compared to other types of losses.
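For concreteness, a sketch of eq. (6) in PyTorch is given below, assuming that both models return lists of per-layer frame representations, that the student's per-layer linear projections (see Sec. 4.1) already map its outputs to the teacher's width, and that \(\mu\) is approximated by simple zero-masking (HuBERT-style models instead replace masked frames with a learned embedding after the CNN front-end):

```python
import torch

def masking_distillation_loss(teacher, student, x, mask, alphas):
    """Eq. (6): the masked term distills f^t(x) on masked frames, while the
    unmasked term distills f^t(mu(x)) on unmasked frames.  `x` is a
    (batch, n, d) frame tensor, `mask` a boolean (batch, n) tensor with True
    on masked frames.  Frame averages are pooled over the batch for brevity."""
    x_masked = x.clone()
    x_masked[mask] = 0.0                     # mu(x): zero-masking stand-in
    with torch.no_grad():
        t_clean = teacher(x)                 # f^t(x)
        t_masked = teacher(x_masked)         # f^t(mu(x))
    s_out = student(x_masked)                # f^s(mu(x))
    loss = x.new_zeros(())
    for a, tc, tm, s in zip(alphas, t_clean, t_masked, s_out):
        dist_m = (tc - s).norm(dim=-1)       # per-frame L2 distances
        dist_u = (tm - s).norm(dim=-1)
        loss = loss + a * (dist_m[mask].mean() + dist_u[~mask].mean())
    return loss
```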
## 4 Results
### Implementation Details
We distilled the two dominant Transformer-based speech SSL models, HuBERT Base [10] and wavLM Base [11], which are pretrained on the LibriSpeech 960-hour dataset [17]. Our student model consists of 12 Transformer layers, as do the teachers, while the detailed design mostly follows FitHuBERT [20]: the widths of attention and FFN are reduced, and linear projections are adopted at each layer. The layerwise coefficients \(\alpha_{\ell}\) are set to 0.1, except for the last layer, where it is set to 1. Unless specified otherwise, models are distilled on LibriSpeech [17] for 200 epochs with an effective batch size of 72, including gradient accumulation.
**Reuse pattern.** We employ an alternating reuse pattern for the attention maps, whereby the attention map of an even-numbered Transformer layer is replaced by that of the previous odd-numbered layer. We denote this pattern as \(2by6\), our default setting. We examine other reuse patterns in Sec. 5.1 in terms of performance, number of parameters, and MACs.
**Model description.** To verify our masking distillation strategy, we first build a student model, MaskHuBERT, which employs masking distillation only. MaskHuBERT has the width of (attention, FFN) as (480, 640). Then, the \(2by6\) reuse pattern is applied to MaskHuBERT, leading to a 10.3% parameter reduction. We extend this model to two options: ARMHuBERT and ARMHuBERT-S. ARMHuBERT is a reinvested version of MaskHuBERT, where the saved parameters from attention map reusing are reassigned to FFN, resulting in an increased width of (480, 864). ARMHuBERT-S is a reduced version to match the parameters with previous works, having the width of (432, 816). To establish the universality of our strategy, we introduce ARMwavLM-S, which is structurally identical to ARMHuBERT-S, with the only change being the teacher, from HuBERT to wavLM.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline & & & \multicolumn{4}{c}{Content} & \multicolumn{4}{c}{Speaker} & \multicolumn{4}{c}{Semantics} & \multicolumn{2}{c}{Para.} \\ \cline{3-14} & Params & Overall & PR & ASR & KS & QbE & SID & ASV & SD & IC & SF & ER \\ \cline{2-14} Models & Millions\(\downarrow\) & Score \(\uparrow\) & PER \(\downarrow\) & WER \(\downarrow\) & Acc \(\uparrow\) & MTWV \(\uparrow\) & Acc \(\uparrow\) & EER \(\downarrow\) & DER \(\downarrow\) & Acc \(\uparrow\) & F1 \(\uparrow\) & CER \(\downarrow\) & Acc \(\uparrow\) \\ \hline _Baselines_ & & & & & & & & & & & & & & & \\ FBANK [25] & 0 & 40.5 & 82.01 & 23.18 & 8.63 & 0.0058 & 8.5E-4 & 9.56 & 10.55 & 9.1 & 69.64 & 52.94 & 35.39 \\ HuBERT Base [10] & 94.70 & 80.8 & 5.41 & 6.42 & 96.30 & 0.0736 & 81.42 & 5.11 & 5.88 & 98.34 & 88.53 & 25.20 & 64.92 \\ wavLM Base [11] & 94.70 & 81.9 & 4.84 & 6.21 & 96.79 & 0.0870 & 84.51 & 4.69 & 4.55 & 98.63 & 89.38 & 22.86 & 65.94 \\ LightHuBERT \(a_{\text{small}}\) [21] & 27.00 & 79.1 & 6.60 & 8.33 & 96.07 & 0.0764 & 69.70 & 5.42 & 5.85 & 98.23 & 87.58 & 26.90 & 64.12 \\ \hline _960h distillation \(-\) \# params: 26.4M \(\sim\) 31.6M_ & & & & & & & & & & & & & \\ FitW2V2 [20] & 31.63 & 76.5 & 12.22 & 11.44 & 96.04 & 0.0475 & 64.71 & 6.65 & 6.44 & 93.38 & 86.65 & 29.40 & 62.35 \\
3-1. On-Def [22] & 30.58 & 76.8 & 13.34 & 12.32 & 96.69 & 0.0489 & 75.71 & 6.48 & 6.56 & 94.15 & 82.89 & 34.65 & 63.95 \\
12-L Half-L2L [22] & 26.87 & 77.6 & 10.67 & 10.96 & 97.24 & 0.0604 & 69.52 & 6.13 & 6.81 & 96.97 & 86.11 & 30.93 & 63.24 \\
**MaskHuBERT (ours)** & **26.64** & 77.8 & **7.30** & **9.77** & 96.36 & **0.0664** & 62.83 & **5.38** & 6.79 & 97.05 & 87.31 & 27.01 & 62.37 \\
**ARMHuBERT (ours)** & **26.45** & **78.1** & 7.72 & 9.96 & 96.88 & 0.0635 & 65.03 & 5.68 & 7.10 & **97.07** & **87.59** & **26.06** & 62.86 \\ \hline _960h distillation \(-\) \# params: 22.4M \(\sim\) 23.5M_ & & & & & & & & & & & & & \\ DistilHuBERT [19] & 23.49 & 75.9 & 16.27 & 13.37 & 95.98 & 0.0511 & 73.54 & 8.55 & 6.19 & 94.99 & 82.57 & 35.59 & 63.02 \\ FitHuBERT [20] & 22.49 & 74.5 & 13.32 & 12.09 & 96.27 & 0.0489 & 55.71 & 8.00 & 6.84 & 91.25 & 84.06 & 32.46 & 59.82 \\
**ARMHuBERT-S (ours)** & **22.39** & 77.5 & 8.63 & 10.82 & 96.82 & 0.0720 & 63.76 & **5.58** & 7.01 & 97.02 & 86.34 & 29.02 & 62.96 \\
**ARMwavLM-S (ours)** & **22.39** & **78.9** & **7.42** & **10.03** & **97.01** & **0.0741** & 71.29 & 5.99 & 7.11 & **97.76** & **87.41** & **26.97** & **64.54** \\ \hline _100h distillation_ & & & & & & & & & & & & & \\ FitW2V2 [20] & 22.49 & 73.1 & 16.50 & 14.77 & 94.68 & 0.0380 & 51.65 & 7.43 & 6.94 & 90.03 & 81.95 & 34.74 & 62.87 \\ FitHuBERT [20] & 22.49 & 74.5 & 14.05 & 12.66 & 96.23 & 0.0579 & 54.24 & 7.88 & 7.19 & 94.20 & 83.41 & 34.00 & 61.67 \\
**ARMHuBERT-S (ours)** & **22.39** & 76.8 & 9.17 & 11.83 & 96.01 & 0.0569 & **66.48** & **5.92** & **6.23** & 95.97 & 83.89 & 33.29 & 63.29 \\
**ARMwavLM-S
### SUPERB Benchmark Results
In Table 1, we evaluate our student models on the SUPERB benchmark [25]. We follow the default fine-tuning recipes, including a learning rate scheduler, with the learning rate scaled 10\(\times\) in the SID task. MaskHuBERT outperforms 12-L Half-L2L, the previous state-of-the-art E2E distillation method, with fewer parameters used. Our observation indicates that incorporating our masking strategy into L2L distillation [20, 22] enhances the student's representation quality. In particular, MaskHuBERT greatly improves performance in content- and semantics-related tasks.
ARMHuBERT achieves a better overall score of 78.1 with fewer parameters than MaskHuBERT. Despite the removal of certain attention parameters, increasing the FFN width contributes to better quality of speech representation, achieving 7.72% PER and 9.96% WER. We find that ARMHuBERT shows promising improvements when compared to MaskHuBERT in the SF and SID tasks, exhibiting a similar level of performance in other tasks. In the end, the number of parameters and MACs in ARMHuBERT have decreased to 28% and 30% of the teacher model, HuBERT Base [10], respectively.
In a smaller parameter group, ARMHuBERT-S, the parameter-reduced version, outperforms DistilHuBERT and FitHuBERT by a large margin. Specifically, ARMHuBERT-S also shows outstanding results in content- and semantics-related tasks, which indicates the consistency of the representations produced by MaskHuBERT and ARMHuBERT-S. In addition, the result that ARMwavLM-S surpasses ARMHuBERT-S implies the universality of our strategy: without any modifications of the student model structure, replacing the teacher with a superior model creates a better student. The LibriSpeech [17] 100h distillation results are also consistent with the results demonstrated above.
## 5 Discussions
In this section, we explore which layer's attention map should be reused in other layers and how to implement the masking distillation. Unless specified, we have conducted the distillations on the 100-hour subset of LibriSpeech [17] and evaluated on the ASR, ASV, and SF tasks of the SUPERB benchmark [25].
### Where to Reuse
Table 2 summarizes the performance depending on various attention map reusing patterns, and in general, the _2by6_ pattern performs the best. Other reuse patterns have reduced the Transformer's representation capacity due to overly frequent reusing. Assigning more parameters to FFN (-_up_) still has limits in terms of performance gain. Compared to applying no reuse pattern, the performance decrease of _2by6_ is small, while it gains 9.13% and 8.16% reductions in parameters and MACs, respectively. We note that the number of MACs in a single reuse MHSA module (eq. 4) is reduced by half, from 13.2G to 6.6G.
### How to Mask
**Masking strategy.** Table 3 shows the efficacy of our masking strategy. We first eliminated the loss function on the unmasked frames (\(\mathcal{L}_{u,\ell}\)), making it equivalent to the L2L version of the LightHuBERT [21] distillation loss. This approach severely damaged performance, particularly in the ASR and ASV tasks. Next, we modified the unmasked loss function to distill from the unmasked input, _i.e._, only \(f^{t}(x)\) being distilled to the student. This also led to degraded performance in most tasks, revealing that our unmasked loss with masked input properly guides the knowledge acquisition without imposing biased predictions.
**Masking ratio.** A high masking ratio can lead to a student model producing good representations, as it has less information to infer from [10, 31]. However, it can also make the learning process more difficult. In Table 4, we examine the optimal masking ratios for each training set. For LibriSpeech 960h [17], both ratios of 0.4 and 0.8 produce excellent results. On the other hand, for the 100h dataset, a ratio of 0.4 produces the best results overall. This implies that a lower masking ratio is preferred in the low-resource distillation setting. Accordingly, in our main experiments, we have used the ratios of 0.8 and 0.4 for the 960h and 100h distillation, respectively.
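The linear schedule in the "sch" row of Table 4 can be implemented as a simple interpolation over training progress (a sketch):

```python
def masking_ratio(step, total_steps, start=0.4, end=0.8):
    """Linearly anneal the masking ratio from `start` to `end` over training."""
    t = min(step / total_steps, 1.0)
    return start + (end - start) * t
```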
## 6 Conclusion and Future Work
In summary, we have proposed the universal compression strategy which involves attention map reusing and novel masking distillation. Our parameter-reinvested model, ARMHuBERT, achieves great performance in content- and semantics-related tasks. Our strategy can be applied to any Transformer-based speech SSL models, and contributes to enhancing the general quality of speech representation. Future work can focus on further improving our model on speaker-related tasks.
## 7 Acknowledgements
The study was supported by Korea Health Technology R&D Project through the Korea Health Industry Development Institute funded by the Ministry of Health and Welfare, Republic of Korea (HR18C0016).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline models & ratio & WER \(\downarrow\) & EER \(\downarrow\) & F1 \(\uparrow\) & CER \(\downarrow\) \\ \hline MaskHuBERT-960h & 0.4 & **9.75** & 5.58 & 86.94 & **26.79** \\ MaskHuBERT-960h & 0.8 & 9.77 & **5.38** & **87.31** & 27.10 \\ \hline MaskHuBERT-100h & 0.4 & **11.56** & **5.87** & **84.31** & **32.28** \\ MaskHuBERT-100h & 0.6 & 11.99 & 6.18 & 83.42 & 33.31 \\ MaskHuBERT-100h & 0.8 & 12.74 & 6.56 & 83.68 & 33.82 \\ MaskHuBERT-100h & sch & 12.07 & 6.29 & 83.84 & 33.50 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparisons with different masking ratios. “sch” indicates linear scheduling of the ratio from 0.4 to 0.8.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline pattern & reused layers & params & MACs & WER \(\downarrow\) & EER \(\downarrow\) & F1 \(\uparrow\) & CER \(\downarrow\) \\ \hline _6by2_ & \{1,7\} & 20.90 & 423 & 13.52 & 6.30 & 83.69 & 34.92 \\ _3by4_ & \{1,4,7,10\} & 21.65 & 437 & 12.37 & **5.67** & 83.29 & 33.60 \\ _6by2-up_ & \{1,7\} & 22.39 & 440 & 13.18 & 8.59 & 83.07 & 34.79 \\ _3by4-up_ & \{1,4,7,10\} & 22.39 & 445 & 12.39 & 6.06 & 83.79 & 33.49 \\ _2by6_ & \{1,3,5,7,9,11\} & 22.39 & 450 & **12.18** & 5.95 & **84.91** & **32.29** \\ \hline None & - & 24.64 & 490 & 11.94 & 5.87 & 84.78 & 31.38 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparisons of various reusing patterns. Parameter size (M) and MACs (G) are additionally measured. The width of (attention, FFN) for each model is (432, 816), while the “-up” suffix denotes more parameters assigned to FFN to match with _2by6_. Masking is not applied here.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline methods & WER \(\downarrow\) & EER \(\downarrow\) & F1 \(\uparrow\) & CER \(\downarrow\) \\ \hline MaskHuBERT-100h & **11.56** & **5.87** & **84.31** & 32.28 \\ [–] distil. unmasked part & 13.23 & 7.96 & 82.78 & 33.53 \\ [–] distil. from masked input & 11.65 & 6.09 & 84.29 & **31.41** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on our masking strategy. |
2302.02138 | A Dombi Counterexample with Positive Lower Density | Let $r(k,A,n)$ denote the number of representations of $n$ as a sum of $k$
elements of a set $A \subseteq \mathbb{N}$. In 2002, Dombi conjectured that if
$A$ is co-infinite, then the sequence $(r(k,A,n))_{n \geq 0}$ cannot be
strictly increasing. Using tools from automata theory and logic, we give an
explicit counterexample where $\mathbb{N} \setminus A$ has positive lower
density. | Jeffrey Shallit | 2023-02-04T09:30:13Z | http://arxiv.org/abs/2302.02138v1 | # A Dombi Counterexample with Positive Lower Density
###### Abstract
Let \(r(k,A,n)\) denote the number of representations of \(n\) as a sum of \(k\) elements of a set \(A\subseteq\mathbb{N}\). In 2002, Dombi conjectured that if \(A\) is co-infinite, then the sequence \((r(k,A,n))_{n\geq 0}\) cannot be strictly increasing. Using tools from automata theory and logic, we give an explicit counterexample where \(\mathbb{N}\setminus A\) has positive lower density.
## 1 Introduction
Let \(\mathbb{N}=\{0,1,\ldots\}\) be the natural numbers, and let \(A\subseteq\mathbb{N}\). Define \(r(k,A,n)\) to be the number of \(k\)-tuples of elements of \(A\) that sum to \(n\). Dombi [5] conjectured that there is no infinite set \(F\) such that \(r(3,\mathbb{N}\setminus F,n)\) is strictly increasing. Recently Bell et al. [2] found a counterexample to this conjecture. However, the \(F\) of their example is quite sparse; it has upper density \(0\). In this note we give a simple explicit example of an \(F\) such that \(r(3,\mathbb{N}\setminus F,n)\) is strictly increasing and \(F\) has positive lower density. The novelty to our approach is the use of tools from automata theory and logic.
## 2 The example
Let \(F=\{3,12,13,14,15,48,49,50,\ldots\}\) be the set of natural numbers whose base-2 expansion is of even length and begins with \(11\). This is an example of an _automatic set_[1]; that is, there is a finite automaton accepting exactly the base-2 expansions of the numbers of \(F\). It is depicted in Figure 1. Here \(0\) is the initial state, and \(3\) is the only accepting state. The input is a binary representation of \(n\), starting with the most significant digit.
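Membership in \(F\) can be tested directly from this definition; the following sketch mirrors the automaton's acceptance condition:

```python
def in_F(n: int) -> bool:
    """True iff the base-2 expansion of n has even length and begins with 11."""
    b = bin(n)[2:]
    return len(b) % 2 == 0 and b.startswith("11")

# Reproduces the listed elements: [3, 12, 13, 14, 15, 48, 49, 50, ...]
print([n for n in range(1, 64) if in_F(n)])
```
|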
2303.02211 | RR Lyrae Visual to Infrared Absolute Magnitude Calibrations. In the
light of Gaia DR3 | A probabilistic approach has been used in combination with the parallax data
from Gaia (e)DR3 to calibrate Period-Luminosity-(Abundance) (PLZ) Relations
covering a wide range of visual to Infrared observations of RR Lyrae stars.
Absolute Magnitude Relations are given, derived from the same selection of
stars, for $V$, $G$, $I$, $K_\mathrm{s}$ and WISE $W1$ as well as for the
reddening free pseudo-magnitudes $WBV$, $WVI$ and finally also Gaia $WG$. The
classical relation between $M_V$ and [Fe/H] is redetermined and as an
illustration distances are given to a few selected objects. | K. Looijmans, J. Lub, A. G. A. Brown | 2023-03-03T20:49:49Z | http://arxiv.org/abs/2303.02211v1 | # RR Lyrae Visual to Infrared Absolute Magnitude Calibrations
###### Abstract
A probabilistic approach has been used in combination with the parallax data from _Gaia_ (e)DR3 to calibrate Period-Luminosity-(Abundance) (PLZ) Relations covering a wide range of visual to Infrared observations of RR Lyrae stars. Absolute Magnitude Relations are given, derived from the same selection of stars, for \(V\), \(G\), \(I\), \(K_{\rm s}\) and WISE \(W1\) as well as for the reddening free pseudo-magnitudes \(WBV\), \(WVI\) and finally also _Gaia_ \(WG\). The classical relation between \(M_{V}\) and [Fe/H] is redetermined and as an illustration distances are given to a few selected objects.
**Disclaimer**: this paper reflects the presentation as given by J. Lub at the RRLCEP2022 conference (september 2022). Unfortunately after preparing this report we found out that only invited contributions would be published in the Proceedings.
Stars: RR Lyrae stars, photometry, _Gaia_ (e)DR3, PLZ relations
K. Looijmans
J. Lub
A.G.A. Brown
## 1 Introduction
This talk is the fourth presentation in a series given at the RR Lyrae (and Cepheid) meetings initiated in 2015 at Visegrad (Lub, 2016, 2018, and 2021). The investigation started as an attempt to understand the origin of the \(K\)-\(\log_{10}P\) relation and then to use this to improve absolute magnitude determinations in other photometric (visual) bands, taking advantage of the tightness of the PL(Z) relations and the reduced interstellar absorption in the infrared. In the meanwhile the incredible improvements of parallax, proper motion and photometric data (_Gaia_ collaboration, 2016, 2018, 2021, 2022) have made it necessary to reconsider and extend the calibrations presented at Cloudcroft in 2019 (Lub, 2021). This progress is illustrated in Table 1. below. Much more is still to come.
It remains amusing, but not much more should be made out of this, to note how the parallax of RR Lyrae itself (in the last column) seems to increase as the precision of the determination increases with time.
## 2 Presentation of the RR Lyrae sample
Our sample of over 200 well studied RR Lyrae used before was updated with _Gaia_ (e)DR3 parallaxes and photometry as well as W1 photometry. Monson et al. (2017) have discussed a sample of 55 brighter RR Lyrae, giving us the opportunity to also add improved V and I photometry. Interstellar reddening and absorption were as before based on Schlafly and Finkbeiner (2011) with a simple correction for the pathlength within the galactic disk. An alternative approach in Muhie et al. (2021) is based upon their \(V-K\) colours. After a comparison excluding stars too close to the galactic equator and obvious incorrect determinations (large negative absorptions from \(V-K\)) we could conclude that there was no offset between the two methods.
RRc stars were fundamentalized by adding 0.1275 to \(\log_{10}P\) in conformity with the value derived from the three RRd stars in our sample and the results by Clementini et al. (2004) for M3. This gives as median values:
\[\log_{10}(P_{\rm F})=-0.28\,,\mbox{[Fe/H]}=-1.38\,,\left<V\right>=11.50\,.\]
## 3 Estimating PLZ relations.
### First approach
Our previous rather naive approach was to assume that in the PLZ relation:
\[M=a+b\log_{10}(P_{\rm F})+c(\mbox{[Fe/H]}+1.35)\]
the coefficients \(b\) and \(c\) are as given by the 2015 Framework paper by Marconi et al. (2015). Ideally photometric parallaxes \(\varpi_{\rm phot}\) based upon these PLZ relations can then be calculated from the fundamental relation:
\[(m-M)_{0}=-5\log_{10}(\varpi_{\rm phot})+10\]
where \(m\) is the observed (pseudo)-magnitude and parallaxes are given in milliarcseconds. The coefficient a is then adjusted to give a one to one relation with respect to the _Gaia_ parallaxes. The _Gaia_ zeropoint offset then follows from the differences between the _Gaia_ parallaxes and these so calculated photometric parallaxes. Our preliminary calibrations based upon _Gaia_ DR2 presented in 2019 in Cloudcroft (Lub, 2021) were:
\[K_{\rm s}+2.25\log_{10}(P_{\rm F})=-1.055+0.18({\rm[Fe/H]}+1.35)\]
\[WBV=\langle V\rangle_{\rm int}-3.06\langle B-V\rangle_{\rm mag}\]
\[=-1.035-2.49\log_{10}(P_{\rm F})\]
The _Gaia_ DR2 zeropoint bias (see e.g. Lindegren et al., 2018) was found to be \(\varpi_{0}=-0.035\pm 0.010\) mas. Please note in the appendix the change in the definition of \(WBV\) used in this work.
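As a worked example, the preliminary \(K_{\rm s}\) calibration above translates into photometric parallaxes as follows (a sketch; periods in days, parallaxes in mas, RRc periods fundamentalized by adding 0.1275 to \(\log_{10}P\) as described in Section 2, and the numerical inputs for RR Lyrae itself are illustrative):

```python
import numpy as np

def photometric_parallax_Ks(Ks0, period, feh, is_rrc=False):
    """Parallax (mas) from the preliminary DR2 K_s calibration above:
    M_Ks = -1.055 - 2.25 log10(P_F) + 0.18([Fe/H] + 1.35)."""
    logP = np.log10(period) + (0.1275 if is_rrc else 0.0)  # fundamentalize RRc
    M_Ks = -1.055 - 2.25 * logP + 0.18 * (feh + 1.35)
    mu = Ks0 - M_Ks                        # (m - M)_0, absorption-corrected
    return 10.0 ** ((10.0 - mu) / 5.0)     # from (m-M)_0 = -5 log10(varpi) + 10

# RR Lyrae itself with illustrative inputs <K_s> = 6.50, P = 0.5669 d,
# [Fe/H] = -1.39: gives ~3.97 mas, close to the Gaia value in Table 1.
print(photometric_parallax_Ks(6.50, 0.5669, -1.39))
```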
### A probabilistic unbiased procedure to estimate PLZ relations.
It is clear that in this way no use is made of the full information available, as was pointed out almost immediately when _Gaia_ DR1 (TGAS) became available, by Sesar et al. (2017). They introduced a probabilistic approach taking into account the full amount of information available in the data. Following their lead one of us (Koen Looijmans) undertook to set up and implement a procedure to allow for all measurement errors as well as a bias in the parallaxes, \(\varpi_{0}\). For the distance an exponentially decreasing volume density prior with a scale length \(L\) as proposed by Bailer-Jones (2015) was adopted.
As an example we show in Fig. 1 the corner plot for the pseudo magnitude \(WBV\).
In the summary table (Table 2) the results for \(V\) and \(WVI\) are separated from the rest, because they derive from a different source with only 55 stars. The coefficients of the period dependence in \(W1\), \(K_{\rm s}\), \(WVI\) and \(WBV\) are all very close to \(-2.50\). This might at first look surprising, but removing the effect of the interstellar absorption also reduces at the same time the effect of temperature variations, making \(WVI\) and \(WBV\) very much like an infrared magnitude, mainly measuring the stellar angular diameter. The \(I\) data also stand out because of a larger parallax offset \(\varpi_{0}\), possibly because they are brighter stars.
## 4 Distance determinations
Armed with our Period-Luminosity relations, we (re)derive the distances to selected globular clusters and the Large Magellanic Cloud as in Lub (2021). It should be kept in mind that pseudo-magnitudes such as WBV and WVI have the unfortunate property of increasing any calibration errors in the photometry. Errors are given as mean deviations of the mean.
First we discuss the Galactic Globular Clusters M3 and \(\omega\) Cen, representative of the two Oosterhoff groups (OI and OII), each with over 150 RR Lyrae stars: M3 (NGC 5272) (\(B\) and \(V\) data Cacciari et al. 2005, \(K_{\rm s}\) data Bhardwaj et al. 2020) and \(\omega\) Cen (NGC 5139) (\(B\), \(V\) and \(K_{\rm s}\) data Braga et al. 2016, 2018).
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Source & year & \(V_{\rm average}\) & \(\langle\varpi\rangle\) & \(\langle\sigma_{\varpi}\rangle\) & \(N_{\rm sample}\) & \(\varpi_{\rm RRLyrae}\) & References \\ & & & mas & mas & & mas & \\ \hline Hipparcos & 1997 and 2007 & 11.09 & 0.992 & 3.193 & 143 & 3.46\(\pm\) 0.64 & 1,2 \\ HST & 2011 & 9.21 & 2.16 & 0.16 & 5(4) & 3.77\(\pm\) 0.13 & 3 \\ _Gaia_ DR1 & 2016 & 11.13 & 0.938 & 0.312 & 132 & 3.64\(\pm\) 0.23 & 4 \\ _Gaia_ DR2 & 2018 & 11.50 & 0.791 & 0.043 & 206 & ? & 5 \\ _Gaia_ (e)DR3 & 2020 and 2022 & 11.50 & 0.824 & 0.021 & 207 & 3.985\(\pm\) 0.027 & 6,7 \\ _Gaia_ DR4 & (TBD) & 11.50 & - & 0.010 & 207 & - & \\ _Gaia_ DR5 & (TBD) & & & & & Final catalogue & \\ \hline \end{tabular} References: 1. Fernley et al. (1998), 2. Feast et al. (2008), 3. Benedict et al. (2011), 4,5,6 Brown et al. (2016), (2018), (2021), 7. Vallenari et al. (2022).
\end{table}
Table 1: Time evolution of RR Lyrae parallaxes (units: milliarcseconds)
* M3: \(WBV\) (all stars) \((m-M)_{0}=14.987\) (\(\pm 0.059\)); \(K_{\rm s}\) (all stars) \((m-M)_{0}=15.047\) (\(\pm 0.033\))
* \(\omega\) Cen: \(WBV\) (all stars) \((m-M)_{0}=13.696\) (\(\pm 0.012\)); \(K_{\rm s}\) (all stars) \((m-M)_{0}=13.720\) (\(\pm 0.004\))
Unfortunately in \(\omega\) Cen our preliminary result from \(WVI\) falls short: \((m-M)_{0}=13.578\) (\(\pm 0.009\)). This discrepancy remains for the moment unexplained.
In the Large Magellanic Cloud two fields, A and B, were studied in detail by Clementini et al. (2003), di Fabrizio et al. (2005), and Gratton et al. (2004), giving \(BVI\) lightcurves and abundances. Proceeding as before in Cloudcroft, we derive a distance modulus \((m-M)_{0}\) of 18.539 (\(\pm 0.017\)) for field A and 18.504 (\(\pm 0.017\)) for field B. The \(I\) measurements are unfortunately very much noisier and will not be discussed any further here. For
Figure 1: Cornerplot for \(WBV\). Note that the coefficients \(a\), \(b\) and \(c\) have been changed with respect to our definition to conform with the order in which they occur in Sesar et al. (2017)
these same two fields, \(K\) measurements were added by Muraveva et al. (2015). More data are provided by Szewczyk et al. (2008) and Borissova et al. (2009) for stars in the general LMC field. We derive, in the same order: 18.584 for field A, 18.548 for field B, and 18.517 and 18.485 for the two general field samples, respectively. Errors of the median are of order \(\pm 0.018\). Recently Cusano et al. (2021) collected tens of thousands of stars with \(K_{\rm s}\) measurements and, assuming an average [Fe/H] = \(-1.50\), we find directly \((m-M)_{0}=18.561\pm 0.052\). But this needs further investigation.
## 5 The \(M_{V}\) vs [Fe/H] relation
The slope of the trend of absolute magnitude with metal abundance [Fe/H] in the local RR Lyrae population was once a contentious issue, e.g. Sandage (1993), who advocated for a slope larger than 0.30. However this appeared to have been settled to a value closer to 0.20, based upon the discussion of LMC data by Gratton et al. (2004). Unfortunately they did not cover a complete range of abundances. As discussed in Lub (2016, 2018, 2021), application of the \(K\)-\(\log_{10}P\) relation (and also the \(WBV\)-\(\log_{10}P\) relation) directly leads back to the conclusion favoured by Sandage. By calculating the absolute magnitudes with the parallaxes from the \(K\)-\(\log_{10}P\) and the \(WBV\) calibrations, the aforementioned relation becomes (see also Muraveva et al. 2018):
\[M_{V}=0.624(\pm 0.008)+0.334(\pm 0.015)(\rm[Fe/H]+1.35)\]
A mean value of \(M_{V}=0.60\) has of course been in use for a long time.
## 6 _Gaia_ Magnitudes: \(WG\)
The photometry from the _Gaia_ Satellite was originally not considered for this research. The reason for this was the fact that the published data are derived as straight means over the measured intensities, without reference to their phase in the light variation. However, it is clear that, as is true for \(WBV\), the combination of the _Gaia_ magnitudes \(G\), \(G_{\rm BP}\) and \(G_{\rm RP}\), viz.
\[WG=G-1.85(G_{\rm BP}-G_{\rm RP})\]
is a reddening free pseudo-magnitude derived from simultaneously measured intensities, which will reduce the lightcurve amplitude in a similar way as for \(WBV\).
This is indeed borne out by comparing \(WG\) with \(W1\), \(K_{\rm s}\) and \(WBV\). Taking out most of the temperature dependence gives rise to a kind of pseudo IR magnitude mainly dependent on the angular diameter, as mentioned before. Apart from having a slope indistinguishable from one, the median scatter is 0.043, 0.056 and 0.072 (rms differences 0.056, 0.073 and 0.102) respectively. This is indicative of the precision of the photometric measurements, where apparently \(W1\) is superior.
As a shortcut we have used our calibrations for \(WBV\) and \(K_{\rm s}\) to predict each star's photometric parallax, taking the average value. A simple least squares approach, which of course no longer takes into account the actually measured uncertainties and priors of the full probabilistic approach, then gives us:
\[WG=-1.093-2.475\log_{10}(P_{\rm F})+0.121(\rm[Fe/H]+1.35)\]
The errors on the coefficients are respectively: 0.020, 0.065 and 0.010 and with this choice of \(WG\) the _Gaia_ DR3 zeropoint bias \(\varpi_{0}\) comes out as \(-0.008\) with an rms of 0.045 (0.0032 for the median). A comparison with the separate investigation of Garofalo et al. (2022) shows very good agreement for coefficients and zeropoint.
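In practice this shortcut amounts to the following (a sketch with hypothetical array names; a plain least-squares fit, as described, rather than the full probabilistic treatment):

```python
import numpy as np

def wg(G, G_bp, G_rp):
    """Reddening-free Gaia pseudo-magnitude WG = G - 1.85 (G_BP - G_RP)."""
    return G - 1.85 * (G_bp - G_rp)

def fit_wg_plz(WG_abs, logP_F, feh):
    """Least-squares fit of M_WG = a + b log10(P_F) + c([Fe/H] + 1.35).
    WG_abs are absolute magnitudes, M_WG = WG + 5 log10(varpi_phot) - 10,
    with varpi_phot (mas) averaged from the WBV and K_s calibrations."""
    A = np.column_stack([np.ones_like(logP_F), logP_F, feh + 1.35])
    coeffs, *_ = np.linalg.lstsq(A, WG_abs, rcond=None)
    return coeffs  # (a, b, c); cf. (-1.093, -2.475, 0.121) quoted above
```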
\begin{table}
\begin{tabular}{l c c c c c} \hline Band & \(a\) & \(b\) & \(c\) & \(\varpi_{0}\) & \(L\) \\ & & & & mas & kpc \\ \hline \(W1\) & -1.153 \(\pm\) 0.042 & -2.53 \(\pm\) 0.10 & 0.162 \(\pm\) 0.016 & -0.0189 \(\pm\) 0.0080 & 0.502 \(\pm\) 0.022 \\ \(K_{\rm s}\) & -1.088 \(\pm\) 0.043 & -2.45 \(\pm\) 0.11 & 0.171 \(\pm\) 0.018 & -0.0149 \(\pm\) 0.0080 & 0.504 \(\pm\) 0.022 \\ \(V\) & 0.393 \(\pm\) 0.049 & -0.82 \(\pm\) 0.10 & 0.273 \(\pm\) 0.022 & -0.0137 \(\pm\) 0.0091 & 0.506 \(\pm\) 0.022 \\ \(WBV\) & -1.033 \(\pm\) 0.041 & -2.46 \(\pm\) 0.10 & 0.026 \(\pm\) 0.018 & -0.0127 \(\pm\) 0.0076 & 0.505 \(\pm\) 0.022 \\ \hline \(I\) & -0.115 \(\pm\) 0.062 & -1.22 \(\pm\) 0.18 & 0.218 \(\pm\) 0.025 & -0.035 \(\pm\) 0.019 & 0.325 \(\pm\) 0.026 \\ \(WVI\) & -1.063 \(\pm\) 0.055 & -2.47 \(\pm\) 0.10 & 0.131 \(\pm\) 0.031 & -0.027 \(\pm\) 0.016 & 0.327 \(\pm\) 0.026 \\ \hline \end{tabular} The coefficients \(a\), \(b\) and \(c\) are called \(M_{\rm ref}\), \(a\) and \(b\) respectively by Sesar et al. (2017). The quoted precisions are given by the average of the differences of the 84th and 16th percentiles from the median.
\end{table}
Table 2: Summary of MCMC solutions for PLZ relations in \(W1\), \(K_{\rm s}\), \(V\),\(WBV\), \(I\) and \(WVI\)
It will be of interest to see how the full Bayesian approach will change this result when it is finally done. At any rate this calibration is fully consistent with our results for the other bands.
## 7 Conclusions
_Gaia_ has set the zeropoint of the RR Lyrae Period-Luminosity relations with great precision. Here we have presented a set of Period-Luminosity-(Abundance) relations, which are internally consistent over the range from \(V\) to \(W1\), because they are based on the same stellar sample. With only a few variables very good distances can be determined to all objects in the Local Group which contain RR Lyrae stars.
RRLCEP2022 at La Palma was a great opportunity for a re-encounter after more than two difficult years. We wish to thank the organizers, who unfortunately could not know that the conference would be delayed by an approaching tropical storm causing epic delays in arrivals on the island. This work was done by Koen Looijmans as a Bachelor research project, as the final requirement for his BSc examination. J. Lub wishes to thank the Leiden Kerkhoven Bosscha Fund and Sterrewacht Leiden for travel support to La Palma. Extensive use was made of the data from the European Space Agency (ESA) mission _Gaia_ ([http://cosmos.esa.int/gaia](http://cosmos.esa.int/gaia)) processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC) ([http://cosmos.esa.int/web/gaia/dpac/consortium](http://cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by the national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration
|
2310.02329 | Addressing type Ia supernova color variability with a linear spectral
template | Type Ia Supernovae (SNeIa) provided the first evidence of an accelerated
expansion of the universe and remain a valuable probe to cosmology. They are
deemed standardizable candles due to the observed correlations between its
luminosity and photometric quantities. This characteristic can be exploited to
estimate cosmological distances after accounting for the observed variations.
There is however a remaining dispersion unaccounted for in the current
state-of-the-art standardization methods. In an attempt to explore this issue,
we propose a simple linear 3-component rest-frame flux description for a
light-curve fitter. Since SNIa intrinsic color index variations are expected to
be time-dependent, our description builds-up upon the mathematical expression
of the well known SALT2 for rest-frame flux, whilst we drop the exponential
factor and add an extra model component with time and wavelength dependencies.
The model components are obtained by performing either Principal Component
Analysis (PCA) or Factor Analysis (FA) onto a representative training set. The
constraining power of the Pure Expansion Template for Supernovae (PETS), is
evaluated and we found compatible results with SALT2 for $\Omega_{m0}$ and
$\Omega_{\Lambda 0}$ within 68% uncertainty between the two models, with PETS'
fit parameters exhibiting non negligible linear correlations with SALT2'
parameters. For both model versions we verified that the first component
describes mainly color index variations, as a dominant effect on SNIa spectra.
The model nuisance parameter which multiplies the color index variation-like
fit parameter shows evolution with redshift in an initial binned cosmology
analysis. This behavior can be due to selection effects. Overall, our model
shows promise, as there are still a few aspects to be refined; however, it
still falls short in reducing the unaccounted dispersion. | Cássia S. Nascimento, João Paulo C. França, Ribamar R. R. Reis | 2023-10-03T18:05:33Z | http://arxiv.org/abs/2310.02329v3 | # Addressing type Ia supernova color variability with a linear spectral template
###### Abstract
We investigate the potential of a pure linear expansion for the rest-frame flux of a type Ia supernova light curve fitter based on the well known Spectral Adaptive Light Curve Template 2 (SALT2). We generate the expansion components by performing Principal Component Analysis (PCA) and Factor Analysis (FA) onto a representative training set. Then, we derive a Tripp-like expression for the distance modulus and fit the \(\Lambda\)CDM cosmological model on the Pantheon sample. The constraining power of the model, dubbed Pure Expansion Template for Supernovae (PETS), and SALT2 is evaluated and we found compatible results for \(\Omega_{m0}\) and \(\Omega_{\Lambda 0}\) within \(68\%\) uncertainty between the two models, with PETS' fit parameters exhibiting non negligible linear correlations with SALT2's parameters. We find non negligible correlations between PETS's fit parameters and the supernovae host galaxy masses, while the Hubble Diagram residues show no correlation with fit parameters, redshift or host galaxy mass. The model nuisance parameters, \(\alpha\) and \(\beta\), are slightly correlated and we find evidence for redshift evolution for \(\beta\). The intrinsic scatter, \(\sigma_{\rm in}\), shows a subtle redshift evolution that should be further investigated by increasing the number of high redshift supernovae in the cosmology sample.
supernovae: general \(\cdot\) dust, extinction \(\cdot\) methods: data analysis \(\cdot\) methods: statistical \(\cdot\) cosmological parameters
## 1 Introduction
Measuring the acceleration of the universe and its properties has been a direct consequence of modeling Type Ia supernovae light curves (Riess et al. (1998) and Perlmutter et al. (1999)). From the relationship between magnitude and decay rate of the light curves, Phillips (1993) confirmed the possibility of using Type Ia supernovae (SNe) as standard candles in cosmology. Since then, the training and fitting of Type Ia supernova models have been continually revisited and improved (Betoule, M. et al. (2014), Mosher et al. (2014), and Pierel et al. (2018)). This constant improvement is more recently powered by the need to reduce the systematic errors of the empirical standardization process as the statistics improve with dataset growth. For the next generation surveys
such as Rubin (LSST) (Ivezic et al. (2019)) and the Nancy Grace Roman Space Telescope (WFIRST) (Spergel et al. (2015)), we can expect to detect around 300,000 supernovae up to a redshift of \(z=3\) (Rose et al. (2021)). This will correspond to an increase of two orders of magnitude in the available Ia SNe sample, resulting in a substantial reduction in statistical errors when measuring cosmological parameters.
As a consequence of the increasing datasets, modeling techniques have been tailored, in particular those that make use of Spectral Energy Distributions (SEDs) such as SALT2 (Guy, J. et al. (2007) and Guy, J. et al. (2010)) and SNEMO (Saunders et al. (2018)). However, despite the well-established use of SALT2, Betoule, M. et al. (2014) found, after standardization, a remaining dispersion of 0.15 mag resulting from systematic errors associated with, among other origins, dust extinction in the host galaxy and the standardization method itself. Aiming to better understand the impact of the color variation description on Ia SNe rest-frame flux and inspired by the SALT2 light curve fitting model, in this paper we extensively explore the description of a generic Type Ia supernova spectral surface by a pure third-order phase-spectroscopic expansion model, initially proposed by Saunders et al. (2018).
Following SALT2 and SNEMO we do not separate the sources of color variation. Not separating the intrinsic and dust extinction reddening contributions can obscure one of the most important known correlations of type Ia supernovae, i.e. intrinsic color correlation with luminosity. However, a complete separation of both effects requires accurate dust distribution estimates for every host galaxy in the sample. Recent efforts were made using Ia SNe spectra to separate intrinsic and extrinsic color variations, Chotard, N. et al. (2011) and Sasdelli et al. (2016), both finding a total-to-selective extinction ratio consistent with the Milky Way value. Nevertheless, to the best of our knowledge, there is no widely accepted procedure to completely disentangle these contributions using only photometric data. We also chose not to deredden our data, avoiding the assumption that both intrinsic and dust contributions can be described by tuning an extinction law, as is usually done for most fitters. Instead, we apply the decomposition methods to the reddened data after the SEDs reconstruction process and investigate the model performance when considering \(\Lambda\)CDM cosmology with our derived expression for the distance modulus. This description can be naturally extended to both UV and infrared regions as the training sample is expected to grow.
Our model was trained on the Nearby Supernova Factory Data Release 9 (DR9) (Saunders et al. (2018)) data sample. To the spectra we applied a forward-backward filter to reduce systematic errors and then started the reconstruction of the SEDs. We perform 1-d Gaussian Process Regressions with Gaussian priors over the kernel parameters and mean behaviour information from Hsiao et al. (2007) to ensure a high resolution SED reconstruction. Next, we derive a distance modulus from the observed Ia SN rest-frame flux. Inspired by the SUGAR (Leget, P.-F. et al. (2020)) and SNEMO (Saunders et al. (2018)) models, the spectral surfaces were decomposed using methods such as Principal Component Analysis and Factor Analysis. In the end, we compared the results in the context of light curve fitting and cosmological fits, investigating, among other behaviours, the evolution of the Hubble Diagram residuals with respect to SNe redshifts.
In section 2, we present the pure expansion model and its peculiarities such as the training grid and limitations. In section 3, we illustrate the pre-training process of filtering the spectra and reconstructing the SEDs. In section 4, we present the process of decomposing the spectral surfaces via Principal Component Analysis (PCA) and Factor Analysis (FA), and analyse the light curve fits of a validation sample. In section 5, we derive our distance modulus equation and discuss the Pantheon fitting results. In sections 6 and 7, we discuss the cosmological results for both PCA and FA approaches when considering \(\Lambda\)CDM cosmology. Finally, in section 8, we present the final remarks.
## 2 The pure expansion spectral model
Based on the homogeneity hypothesis for type Ia supernovae, we analyse in detail the light curve fitting and cosmology of a pure linear expansion model, a variation of the empirical flux modeling by SALT2 (Spectral Adaptive Light-Curve Template 2; Guy, J. et al. (2007) and Guy, J. et al. (2010)), initially proposed by Saunders et al. (2018) and further investigated here. The underlying model describes the rest frame flux of a given type Ia supernova, \(\phi(p,\lambda;\mathbf{x})\), as a pure expansion,
\[\phi(p,\lambda;\mathbf{x}):=x_{0}[M_{0}(p,\lambda)+x_{1}M_{1}(p,\lambda)+x_{2} M_{2}(p,\lambda)+...]. \tag{1}\]
Here \(p\) and \(\lambda\) are, respectively, phase (i.e. number of days since maximum light in B-band) and wavelength. The parameter \(x_{0}\) controls the rest frame flux amplitude while the set of surfaces \(M_{i}(p,\lambda)\) consists of SEDs that, together with the remaining free parameters, \(x_{1}\) and \(x_{2}\), should accommodate the observed dispersion for these objects. From now on, we will refer to the pure expansion model as PETS (Pure Expansion Template for Supernovae).
This modeling arises from selecting a descriptive training sample and assuming a general type Ia supernova can be described as a linear combination of them. Next, applying a transformation to obtain an ordered basis and computing the amount of variability explained by the first few components enable us to reduce the dimensionality of the linear expansion while still describing a great amount of information about the initial set of objects.
The SALT2 model proposes an analogous expression, keeping only the first two terms and multiplying the expansion by an exponential of a free parameter, \(c\), times a function of wavelength, a color law \(CL(\lambda)\), the latter also determined by the training process. The SALT2 model exploits the two main empirical correlations seen in these objects, the correlation of luminosity with light curve shape and with color, both perceived by Phillips (1993). Their \(M_{0}\) surface is identified as the mean surface of the training sample, \(M_{1}\) allows for additional variation around the mean surface, being interpreted as a stretch-like feature, while the exponential term accounts for color variations, often without mention of the nature of this variation.
The first step when obtaining a SALT2-like spectral time series model with an exponential term is to remove the reddening effect from the spectra and light curves inputted in the model training. This process relies on the assumption that dust and intrinsic variations affect the spectra in the same fashion, so that by tuning the \(R_{V}\) parameter of an extinction law such as CCM89 (Cardelli, Clayton, and Mathis (1989)) or F99 (Fitzpatrick (1999)) it is possible to compensate for both effects. The phase-independent reddening is then entirely described by this exponential term that deforms the time series templates along the wavelength axis. In this description \(c\) contains an environment-dependent fraction of color that is not correlated with luminosity. In this case, two supernovae with the exact same parameters are not guaranteed to have the exact same luminosity.
Following SALT2 and SNEMO, we chose not to distinguish reddening due to dust from intrinsic variations when constructing the model. Instead, we argue that if our sample is representative it should comprise a good range of supernova luminosities with different dust extinction contributions, making it still possible to standardize them even with non-negligible dust extinction. On top of that, the distance modulus derived here removes the necessity of correlating the environment-dependent fraction of \(c\) with luminosity variations.
Using equation (1) we can obtain the photometric flux model (m) in the observer frame, \(F_{Y}^{(m)}(p(1+z))\), see Kessler et al. (2009a), by integrating the spectroscopic flux, \(\phi(p,\lambda;\mathbf{x})\), through a given bandpass \(Y\) with observer-frame transmission function \(T_{Y}(\lambda)\):
\[F_{Y}^{(m)}(p(1+z))=(1+z)\int_{0}^{\infty}d\lambda[\lambda\phi(p,\lambda)T_{Y} (\lambda(1+z))]\,, \tag{2}\]
where \(z\) is the redshift. It is important to reinforce that our parameters cannot be immediately recognized as stretch or color. For now, the fit parameters are non-physical parameters that correct the type Ia supernova magnitude dispersion, grounded on the shared similarities of these objects.
When considering the linear model portrayed in equation (1) without a previous dereddening, we remove the necessity of approximating the unknown intrinsic color variation behaviour by the effect of dust reddening. In addition, when performing this integration for a flux with an exponential term, one may consider approximating this term, whether by a first-order Taylor expansion or by fixing \(\lambda=\lambda_{eff}\). The former approximation starts to break down with increasing color parameter in the UV region, reaching a 30% relative discrepancy at 3000Å for \(c=0.3\), while the latter departs from the original expression in both directions around \(\lambda=\lambda_{eff}\) for each filter, reaching a 50% relative discrepancy at 3000Å for \(c=0.3\). Hence the pure linear model also avoids this approximation when evaluating equation (2).
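To make the photometric model concrete, a minimal sketch of how equation (2) can be evaluated on a discrete wavelength grid is given below; the function and array names are ours, for illustration only, and assume the \(M_{i}\) surfaces have already been interpolated to the requested phase.

```python
import numpy as np

def observer_frame_flux(wave_rf, M0, M1, M2, x0, x1, x2, z,
                        band_wave, band_trans):
    """Trapezoidal evaluation of Eq. (2) at one phase.

    wave_rf               : rest-frame wavelength grid [Angstrom]
    M0, M1, M2            : model surfaces sliced at the requested phase
    band_wave, band_trans : observer-frame transmission curve T_Y
    """
    # Rest-frame spectroscopic flux, Eq. (1), truncated at three terms.
    phi = x0 * (M0 + x1 * M1 + x2 * M2)
    # Transmission evaluated at the redshifted wavelength lambda * (1 + z).
    T = np.interp(wave_rf * (1.0 + z), band_wave, band_trans,
                  left=0.0, right=0.0)
    # Bandpass integral of Eq. (2).
    return (1.0 + z) * np.trapz(wave_rf * phi * T, wave_rf)
```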
Our training sample, which we discuss in further detail in Section 3, is the Nearby Supernova Factory Data Release 9 from Saunders et al. (2018). In Fig. 1 we show the 2-d histogram of the training sample spectra and the model boundaries chosen to ensure good data coverage.
Fig. 2 shows the density plot of the SALT2 \(M_{0}\) spectral surface, with the white dashed lines marking the region where our model is defined. We can see, at low wavelengths around the date of maximum light, a non-negligible flux outside the limits of our model. Even though this is an important region where type Ia supernovae show greater variability, it is also responsible for less precise results in every light curve fitting method and is often omitted when fitting for cosmology. Increasing the number of spectra in this region will allow for an immediate extension of the pure linear expansion model without any approximation that breaks down in the UV region.
## 3 The data set and pre-processing steps
Aiming to train our template surfaces \(\mathbf{M}=(M_{0},M_{1},M_{2})\), we chose the Data Release 9 from the Nearby Supernova Factory, Aldering et al. (2002). The sample used here for model training and validation consists of 2466 spectra from 171 spectroscopically confirmed type Ia supernovae with redshifts ranging from \(0.01\) to \(0.08\). The spectra are already shifted to a common rest-frame (\(z=0\)) and corrected for Milky Way dust extinction following Schlegel, Finkbeiner, and Davis (1998) and Cardelli, Clayton, and Mathis (1989) (see Childress et al. (2013) and Rigault, M. et al. (2020) for
more details) and the observer-frame B-band date of maximum light from a SALT2 analysis is already subtracted. For more information about these procedures see Saunders et al. (2018).
This data set consists of high-quality selected supernovae that have at least five spectra, of which at least one is prior to the date of maximum light and at least four are between 10 and 35 days. All dates are measured relative to the observer-frame B-band maximum light estimate, which defines the rest-frame phase as \(p=(t-t_{B,max})/(1+z)\). This larger amount of spectroscopic data than is usually available allows us to construct a SED for each supernova, in a similar fashion as seen in Saunders et al. (2018). These SEDs are surfaces of specific flux in units of erg/s/cm\({}^{2}\)/Å (multiplied by an arbitrary factor) as a function of phase, in units of days, and rest-frame wavelength, in units of Å. To extract the components that best explain the variability in our data set, we begin by interpolating our SEDs and evaluating them on a regular grid.
The majority of the spectral data from our training sample range from 3300Å to 8500Å in wavelength and from -15 to 50 days in phase. Based on the 2-d histogram seen in Fig. 1, we chose the model boundaries and mesh grid size as a regular grid from -10 to 50 days, with 1-day bins, and from 3400Å to 8400Å, with 10Å bins. This choice ensures good data coverage while preventing the loss of bandpasses when fitting photometric data. Our model then includes the optical region and a small portion of the near-infrared, where type Ia supernova luminosities usually show less dispersion.
### Pre-processing: Filtering the spectra
In order to avoid nonphysical fluctuations of the specific flux we applied the forward-backward digital filter filtfilt from the Scientific Python library SciPy, Virtanen et al. (2020). Following Leget, P.-F. et al. (2020) we decided to apply a cutoff of \(100\) Å, which corresponds to the smallest half-width of absorption lines in typical type Ia supernova spectra.
We derive the uncertainty on the filtered flux as the sum in quadrature of the contributions from the measurement error and a filtering error. The latter is necessary to take into account the effect seen in some highly oscillatory portions of the data, where the filtering can lead to values considerably far from the original data. This contribution is set equal to one-third of the difference between the real and filtered data. This procedure is important because, when applying the Gaussian Process Regression (GPR), oscillations around the real values combined with underestimated uncertainties can wrongly lead to a nonphysical, highly oscillatory regression.
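A minimal sketch of this step is given below. Only the \(100\) Å cutoff and the one-third uncertainty rule come from the text; the Butterworth design and the filter order are illustrative choices of ours.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_spectrum(wave, flux, flux_err, cutoff=100.0, order=3):
    """Zero-phase low-pass filtering of one spectrum (a sketch)."""
    dlam = np.median(np.diff(wave))      # wavelength sampling [Angstrom]
    nyquist = 0.5 / dlam                 # [1/Angstrom]
    b, a = butter(order, (1.0 / cutoff) / nyquist)
    flux_filt = filtfilt(b, a, flux)     # forward-backward filter
    # Filtering uncertainty: one third of the data-filter difference,
    # added in quadrature to the measurement error.
    err_filt = np.abs(flux - flux_filt) / 3.0
    return flux_filt, np.sqrt(flux_err**2 + err_filt**2)
```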
### Pre-processing: Applying Gaussian Process Regression
After the filtering process we interpolate each spectrum and its uncertainties and evaluate them on a regular wavelength grid. Then, for each supernova, we position the regularized data in a three-dimensional space of specific flux per wavelength per phase, and to completely reconstruct each supernova SED we perform a Gaussian Process Regression (GPR) on each plane of constant wavelength.
Gaussian processes (GP) are supervised learning methods used in regression and classification problems; for a more detailed discussion see Rasmussen and Williams (2005). Here we seek a different function explaining the data for each wavelength plane. Each of these functions is described by a one-dimensional GP with mean \(m(\textbf{p})=\mathbb{E}[f(\textbf{p})]\) and covariance function \(K(\textbf{p},\textbf{p}^{\prime})=\mathbb{E}[(f(\textbf{p})-m(\textbf{p}))(f( \textbf{p}^{\prime})-m(\textbf{p}^{\prime}))]\),
\[f(\textbf{p})\sim\mathcal{GP}(m(\textbf{p}),K(\textbf{p},\textbf{p}^{\prime})). \tag{3}\]
The input values in the regression, \(y_{i}\), are called the target values; apart from a Gaussian noise \(\epsilon_{i}\), they are equal to the function we wish to model via a GP,
\[y_{i}=f(p_{i})+\epsilon_{i},\epsilon_{i}\sim\mathcal{N}(0,\sigma_{i}^{2}), \tag{4}\]
In our case we perform a heteroscedastic regression, since each input point has a different noise value.
The outline is to choose a kernel (i.e. a covariance function) that generates a family of curves with the desired characteristics and retain, after several samplings, only the ones that can describe the target values. Averaging over these curves provides the predictive function and uncertainty estimates. This can be accomplished in a Bayesian modeling by describing a joint distribution for the target values \(y_{i}\) and for the function \(f\) evaluated at the test points (i.e. new points where we wish to calculate the function), \(f(p_{*})=:f_{*}\), and later conditioning the joint distribution on the observations.
We start from the assumption that the observed target values, \(y_{i}\), and the function \(f\) evaluated at the test points, \(f_{*}\), can be described by the same multivariate Gaussian distribution. This allows us to construct the joint distribution,
\[\begin{bmatrix}\textbf{y}\\ \textbf{f}_{*}\end{bmatrix}\sim\mathcal{N}\left(\textbf{m}(P),\begin{bmatrix} K(P,P)+\sigma^{2}&K(P,P_{*})\\ K(P_{*},P)&K(P_{*},P_{*})\end{bmatrix}\right), \tag{5}\]
where the covariance matrix of the multivariate distribution, \(\boldsymbol{\Sigma}\), is written in terms of the kernel function evaluated at each pair of points, whether from the training points vector, \(P\), or the test points vector, \(P_{*}\). Lastly, \(\sigma^{2}\) is a diagonal matrix carrying the target value variances. This distribution is completely defined when specifying a mean vector, **m**(P), and a covariance matrix, \(\boldsymbol{\Sigma}\).
We can then obtain the conditional distribution that provides the predictions for the function evaluated at the test points, given the test points themselves, the training points, and the corresponding observed target values,
\[\textbf{f}_{*}|P,\textbf{y},P_{*}\sim\mathcal{N}(\bar{\textbf{f}}_{*},\text{ cov}(\textbf{f}_{*})), \tag{6}\]
where the predictive mean is
\[\bar{\textbf{f}}_{*}=\textbf{m}(P_{*})+K(P_{*},P)[K(P,P)+\sigma^{2}]^{-1}( \textbf{y}-\textbf{m}(P)), \tag{7}\]
and the predictive covariance is
\[\text{cov}(\textbf{f}_{*})=K(P_{*},P_{*})-K(P_{*},P)[K(P,P)+\sigma^{2}]^{-1}K( P,P_{*}). \tag{8}\]
Depending on the data and prior knowledge about the function's behavior we can choose different kernels. A widely used one is the Radial Basis Function (RBF) kernel, which describes the covariance between two points as an exponential decline with their squared distance, with two parameters tuning the variance and the correlation range.
We employ the Matern kernel for our GPRs, which includes an additional parameter, \(\nu\), regulating the smoothness of the function: the lower this parameter, the less smooth the function becomes compared to the RBF kernel, which is recovered in the limit \(\nu\rightarrow\infty\). The Matern kernel function is expressed as
\[k(p_{i},p_{j})=\frac{\sigma^{2}}{\Gamma(\nu)2^{\nu-1}}\left[\frac{\sqrt{2\nu}\,|p_{i}-p_{j}|}{\Delta l}\right]^{\nu}K_{\nu}\left[\frac{\sqrt{2\nu}}{\Delta l}|p_{i}-p_{j}|\right], \tag{9}\]
where \(\sigma^{2}\) is the variance and \(\Delta l\) is the length scale parameter, which controls the correlation range. \(K_{\nu}\) is the modified Bessel function of the second kind and \(\Gamma(\nu)\) is the Gamma function for a given \(\nu\). Here we are especially interested in \(\nu=5/2\), which keeps a reasonable smoothness.
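Equations (7)-(9) translate directly into code. Below is a minimal NumPy transcription of ours, using the closed form of the Matern kernel for \(\nu=5/2\); in practice a Cholesky factorization would be preferred over a direct solve for numerical stability.

```python
import numpy as np

def matern52(p1, p2, var, ell):
    """Matern kernel, Eq. (9), in its closed form for nu = 5/2."""
    r = np.abs(p1[:, None] - p2[None, :]) / ell
    return var * (1.0 + np.sqrt(5.0) * r + 5.0 * r**2 / 3.0) \
               * np.exp(-np.sqrt(5.0) * r)

def gp_predict(P, y, noise_var, P_star, mean_fn, var, ell):
    """Predictive mean and covariance, Eqs. (7)-(8)."""
    K = matern52(P, P, var, ell) + np.diag(noise_var)  # K(P,P) + sigma^2
    Ks = matern52(P_star, P, var, ell)                 # K(P_*, P)
    Kss = matern52(P_star, P_star, var, ell)           # K(P_*, P_*)
    alpha = np.linalg.solve(K, y - mean_fn(P))
    f_mean = mean_fn(P_star) + Ks @ alpha              # Eq. (7)
    f_cov = Kss - Ks @ np.linalg.solve(K, Ks.T)        # Eq. (8)
    return f_mean, f_cov
```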
We apply this GPR formalism to our data via the Python library GPy (2012). A very important step when performing the GPR is using the mean information, \(\textbf{m}(P)\); otherwise, the spectra without pre-maximum data lead to a wrong behavior at lower phases, as can be seen in Fig. 3 (in this context the maximum is specific to each monochromatic light curve and does not coincide with the B-band date of maximum light). This issue affects several supernovae and directly influences the SED reconstruction, mainly in the region of low wavelengths before maximum B-band light, profoundly impacting
the forthcoming feature extraction. To avoid this incorrect regression we adopted the type Ia supernova template from Hsiao et al. (2007) as the aforementioned mean.
The original data were named by Saunders et al. (2018) as either Train or Test supernova followed by an identification number. Thus, the Train and Test flags do not represent our choice of training or validation objects; they just identify a specific supernova. From the initial sample of 171 supernovae, six were excluded: two due to problems in the GPR and four due to poor quality in their reconstructed SEDs. Regarding the latter, they had too few spectra taken after maximum, and even with template information they showed a large amount of nonphysical negative flux in this unconstrained region; they are Train_SN93, Test_SN15, Test_SN26 and Train_SN96. Train_SN30 was also excluded, since it showed several variations from the mean which are not observed in any other object in this sample.
For each GPR we perform a Maximum Likelihood Estimation (MLE) to tune the kernel parameters. A parameter inference showed that many GPRs had difficulty constraining the length scale parameter, \(\Delta l\), often reaching values as low as \(10^{-3}\) days, when it should be in the range of 9 to 30 days, since it represents the horizontal data scale. To compensate for this issue we added a Gaussian prior on both kernel parameters when performing the MLE, successfully avoiding these low values that lead to overfitted regressions.
The length scale Gaussian prior changes as wavelength grows. In general, the data show a peak before \(t_{max,B}\) at lower wavelengths with a horizontal scale of about 15 days, the value chosen as the initial prior. At around 5500Å these monochromatic light curves get wider and the peak moves smoothly, hence we readjust the prior. Growing even further in wavelength, another peak appears, changing the horizontal scale again. Essentially, at every point where we observed a specific behavior common to most SNe, we programmed changes to the prior. During this entire process the monochromatic light curves are normalized to one, avoiding readjustments to the variance Gaussian prior. This stage of the training process is carried out manually.
Another interesting behavior occurs at higher wavelengths, where the MLE favors lower length scales because of what seems to be a random fluctuation of the data not accompanied by an explanatory uncertainty. This apparent underestimation of uncertainties leads to many undesirable regressions. Therefore, for every GPR at planes of \(\lambda\geq 5000\)Å, a small fixed white kernel of variance \(5\times 10^{-4}\) was added to the Matern kernel, allowing the predictive curve to deviate slightly more from the target values. This white kernel cannot be added to every GPR: since we are using a personalized mean function, if the data are already well explained by it, the prediction turns out to be exactly equal to the mean, as all small variations in the data would be explained solely by noise. For lower wavelengths we made fewer exceptions; only when directly detecting this same behavior of a wiggly prediction (low \(\Delta l\)) was a new regression performed adding the same fixed white kernel.
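In GPy, the per-plane regression just described can be sketched as below. The prior widths, the handling of the template mean by regressing residuals about it, and the exact parameter paths follow GPy's documented patterns but are our assumptions; treat this as illustrative rather than the trained pipeline.

```python
import numpy as np
import GPy

def fit_plane(p, y, noise_var, template, p_grid, wavelength):
    """One GPR per constant-wavelength plane (a sketch).

    p, y, noise_var are (N, 1) arrays; template(p) returns the Hsiao
    monochromatic light curve evaluated at the phases p."""
    kern = GPy.kern.Matern52(input_dim=1, variance=1.0, lengthscale=15.0)
    kern.lengthscale.set_prior(GPy.priors.Gaussian(15.0, 3.0))
    kern.variance.set_prior(GPy.priors.Gaussian(1.0, 0.5))
    if wavelength >= 5000.0:               # small fixed white kernel
        white = GPy.kern.White(input_dim=1, variance=5e-4)
        white.variance.fix()
        kern = kern + white
    # Regress residuals about the Hsiao template so the prediction
    # reverts to the template where data are missing.
    m = GPy.models.GPHeteroscedasticRegression(p, y - template(p), kern)
    m.het_Gauss.variance[:] = noise_var    # per-point noise variances
    m.het_Gauss.variance.fix()
    m.optimize()                           # MLE of the kernel parameters
    mu, var = m.predict_noiseless(p_grid)  # latent-function prediction
    return mu + template(p_grid), var
```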
Figure 3: Gaussian process regression for the constant wavelength plane \(\lambda=3450\)Å. The black error bars are the filtered spectra data. In orange and blue we have GPR performed over this set of points with and without the mean information, respectively. The solid black curve represents the template used as mean in the former case.
## 4 Model Training
### Dimensionality reduction of supernovae SEDs with Principal Component Analysis
From the set of reconstructed SEDs, we select a random subsample of 147 to compose the training set and save the remaining objects for a further validation step. Based on the homogeneity hypothesis of type Ia supernovae, one may argue that most of the information in this sample should be correlated and partially redundant. Thus, following Guy, J. et al. (2007), Kim et al. (2013), Sasdelli et al. (2014), and He, Wang, and Huang (2018), we first investigate an orthonormal transformation to an ordered lower-dimensional basis via a Principal Component decomposition, initially proposed by Saunders et al. (2018) and here more thoroughly investigated.
The Principal Component Analysis (PCA), Hotelling (1933) and Pearson (1901), aims to find the uncorrelated directions that successively maximize the explained variance of the original data. This process translates into diagonalizing the sample covariance matrix, ordering the eigenvectors according to the highest eigenvalues, and projecting the original data onto the transpose eigenvectors, leading us to the ordered Principal Components (PCs). More detailed discussions can be found in Jolliffe (2002) and Barber (2012).
This process assumes the training set is representative of the general pair of type Ia supernova plus host galaxy and thus forms a suitable basis for a linear description. Additionally, it assumes that there is a lower-dimensional hyperplane capable of describing the original data with the first few Principal Components. It is important to note that PCA does not assume the existence of hidden variables; the Principal Components, which are linear combinations of the original basis, are not affected by the choice of the new basis dimension, nor are they directly associated with any physical interpretation.
To find accurate PCs using Singular Value Decomposition (SVD), the mean of each feature, in our case each supernova, is required to be zero. Therefore, we remove the mean prior to applying PCA and then transform the original vector of means to the new basis, returning a contribution for each PC. This process is necessary, otherwise the first surface, which we will further verify resembles an average type Ia supernova flux, would show nonphysical negative values. A further treatment we chose not to apply is scaling each feature such that its standard deviation is equal to one. This is common when dealing with features of different scales, in which case PCA could prioritize some over others. However, the scaling would treat the most common objects on an equal footing with some less common, diverse objects with lower relative rest-frame fluxes seen in our sample.
We perform this dimensionality reduction through the Python package Scikit-learn, Pedregosa et al. (2011). The input data is a \(30561\times 164\) matrix with the flattened SEDs placed as columns. Fig. 4 shows the cumulative explained variance as a function of the number of components, \(N_{c}\). For \(N_{c}=3\) we explain \(98.2\%\) of the sample variance, while for \(N_{c}=10\) we explain about \(99.2\%\), but at the cost of including seven new components, which later translates into seven new light-curve fit parameters. The high variance explained by the first component alone reassures us of the homogeneity of the type Ia supernovae composing our sample.
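A minimal Scikit-learn sketch of this step is given below; the handling of the transformed means follows our reading of the bookkeeping described above and is illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA

# X: (30561, 164) matrix, one flattened SED per column, so the 164
# supernovae act as features and the 61 x 501 grid pixels as samples
# (-10..50 days in 1-day bins, 3400..8400 Angstrom in 10 Angstrom bins).
pca = PCA(n_components=3)
scores = pca.fit_transform(X)      # PCA removes each column mean itself
# Transform the vector of per-supernova means to the new basis and fold
# its contribution into each projected surface (our reading of the mean
# bookkeeping described in the text).
M = scores + (pca.components_ @ pca.mean_)[None, :]
M0, M1, M2 = (M[:, i].reshape(61, 501) for i in range(3))
print(np.cumsum(pca.explained_variance_ratio_))  # cf. 98.2% at N_c = 3
```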
In Fig. 5 we see the projection of the original data onto the first three components of the ordered basis for different planes of constant wavelength. The top image of Fig. 5 shows our \(M_{0}(p,\lambda)\) element defined in equation (1); this surface has only positive values and resembles the typical behavior of a normal Ia SN SED, as occurs for most spectral template fitting models. The following \(M_{1}(p,\lambda)\) and \(M_{2}(p,\lambda)\) elements, also depicted in the same figure, add the finer details when describing a particular SED. Their main variations concentrate close to the day of maximum light. While \(M_{1}(p,\lambda)\) shows a more symmetric behavior around zero flux, \(M_{2}(p,\lambda)\) shows higher amplitudes at specific wavelengths.
We verify that the surface explaining the most variability is less dependent on the training set, in contrast to the ones that hold less explained variability. Indeed, \(M_{0}(p,\lambda)\) is the most similar across the other spectral fitting methods, such as SALT2, Guy, J. et al. (2007), SNEMO2, SNEMO7, and SNEMO15, Saunders et al. (2018), while the remaining surfaces are more sample dependent.
The first surface, \(M_{0}(p,\lambda)\), is often associated with the effective mean SED of the training sample. One could then expect the second and third elements to show some similarities with the next higher-order statistical moments, i.e. standard deviation and skewness. Nonetheless, Fig. 5 shows \(M_{1}(p,\lambda)\) is not positive definite, and substituting these statistical moments for the components showed a worse performance.
### Dimensionality reduction of supernovae SEDs with Factor Analysis
Another method that was implemented for type Ia supernova feature extraction by Saunders et al. (2018) and Leget, P.-F. et al. (2020) is Factor Analysis (FA). FA relies on an important underlying assumption that there are latent (i.e. unobserved) variables which can explain our original data, often with a reduced set of components.
Figure 4: Cumulative explained variance in terms of the number of components, both for Principal Component Analysis and Factor Analysis. As expected, first-order templates hold a higher variance. Together, \(M_{0}\), \(M_{1}\) and \(M_{2}\) dominate the variability, exceeding (PCA) or reaching (FA) \(98\%\) explained variability. PCA, as expected, explains the sample variability better than FA, since the former focuses on maximizing this quantity while FA focuses on the off-diagonal elements of the data covariance matrix.
Figure 5: The original data projected onto the first three PCA components. From top to bottom we show \(M_{0}\), \(M_{1}\) and \(M_{2}\). These components cover more than \(98\%\) of the total explained variance.
Figure 6: The original data projected onto the first three FA components in the absence of rotation.
As discussed previously, the PCs are constructed as linear combinations of the original variables, without any mention of their physical interpretations and without an explicit model. For Factor Analysis, by contrast, the observed quantities, \(\mathbf{x}\), are the ones assumed to be linear combinations of the latent variables, \(\mathbf{f}\), up to an error term, \(\mathbf{e}\) (i.e. \(\mathbf{x}=\mathbf{\Lambda f}+\mathbf{e}\)), with the matrix \(\mathbf{\Lambda}\) carrying the coefficients. This generative model also requires a number of assumptions about the distributions and correlations of the data, the latent variables (also known as common factors), and the error terms (also known as specific factors). These three quantities are assumed to have null expected values, and the common and specific factors are assumed to be uncorrelated, both among themselves and with each other.
Without loss of generality the common and specific factors are considered to be described by gaussian distributions, \(\mathbf{f}\sim\mathcal{N}(0,\mathbf{I})\) and \(\mathbf{e}\sim\mathcal{N}(0,\mathbf{\Psi})\), respectively. Then the conditional distribution of the observed variables is given by
\[p(\mathbf{x}|\mathbf{f})=\mathcal{N}(\mathbf{x}|\mathbf{\Lambda f}+\mathbf{\mu}, \mathbf{\Psi}). \tag{10}\]
Here \(\mathbf{\mu}\) is an offset and \(\mathbf{\Psi}\) is a diagonal matrix with different entries, characterizing heteroscedastic noise. This decomposition consists of an iterative maximum likelihood estimation with an SVD approach, and it was also implemented using the Python package Scikit-learn, Pedregosa et al. (2011). It is interesting to note that there is a probabilistic description of PCA closely related to FA, Probabilistic Principal Component Analysis (PPCA). This linear-Gaussian framework differs from FA only by assuming homoscedastic, isotropic noise, where \(\mathbf{\Psi}=\sigma^{2}\mathbf{I}\). In the limit \(\sigma^{2}\to 0\) it recovers the PCA results, hence the name. For a more detailed discussion of these methods see Bishop (2006) and Tipping and Bishop (1999).
Overall, as argued by Jolliffe (2002), PCA and FA can be understood as explaining different aspects of the sample covariance matrix. While PCA concentrates on explaining the diagonal elements by maximizing the explained variance, FA concentrates on the off-diagonal elements by forcing the specific factors to be uncorrelated, leaving the off-diagonal elements to be fully explained by the common factors.
In addition, the solution for \(\mathbf{\Lambda}\) (and correspondingly \(\mathbf{f}\)) is not unique: arbitrary rotations can be performed over this matrix, and some rotations can facilitate recognizing the role of the new variables. We can perform orthogonal and oblique rotations, the latter allowing for correlated factors. These rotations will be further explored in future work; for now we provide the results for FA in the absence of rotations.
Since FA searches for latent variables, it is common to analyse the correlations between the variables in the new basis and those in the old one, recognizing which of the older variables are more relevant and thus gaining some interpretation of the new projected values.
As in PCA, for accurate results it is necessary to center each feature before applying the algorithm. The mean values are stored, transformed and added back to each new basis component after the model fitting.
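The analogous Scikit-learn sketch for FA, mirroring the PCA one above (again with our reading of the mean bookkeeping; illustrative only):

```python
from sklearn.decomposition import FactorAnalysis

# Same (30561, 164) training matrix X as in the PCA sketch.
fa = FactorAnalysis(n_components=3, rotation=None)  # no rotation applied
scores_fa = fa.fit_transform(X)    # FA also centers each column (feature)
# Store, transform and add back the feature means, as described above.
M_fa = scores_fa + (fa.components_ @ fa.mean_)[None, :]
M0_fa, M1_fa, M2_fa = (M_fa[:, i].reshape(61, 501) for i in range(3))
```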
In Fig. 6 we present the surfaces that define a basis for describing a general type Ia supernova in the FA context. The input was the same subset used for the PCA feature extraction, consisting of 147 SEDs constructed with SNFactory spectra and Gaussian Processes. As previously seen for our PCA surfaces, and for SNEMO and SALT2, the first surface resembles the structure of an average Ia SN SED. The remaining surfaces are then responsible for correcting the finer details that differentiate each supernova. As the majority of the emitted flux is concentrated closer to the day of maximum light and between \(3000\)Å and \(6000\)Å, it is expected that the corrections also concentrate in the same region. The surfaces other than the first show portions of negative flux, and it is interesting to note that the second one shows a symmetry with respect to the horizontal axis, with the correction changing sign as the wavelength grows. Lastly, the third surface shows a peak around this same region where the second surface changes sign.
To have the same number of free parameters as the most widely used fitters, we assume the existence of three hidden variables. In Fig. 4 we see a comparison of the explained variance between the two approaches. As expected, the PCA performance is slightly better, since this algorithm focuses on maximizing the explained variance. The model depicted in this figure, for comparison only, is FA with 10 hidden variables. FA with three components also explains roughly \(98\%\) of this sample's variability.
### Validation fits
The remaining 17 Ia supernovae that did not take part in the feature extraction form a test set for model validation. For this step we used the Python package for supernova cosmology SNCosmo, Barbary et al. (2022). With SNCosmo we were able to create a spectral time series as a function of arbitrary parameters and fit the light curves by generating synthetic photometry from the SEDs of the remaining Ia supernovae. To draw maximum information from these data we chose the set of filters used by The Carnegie Supernova Project (CSP), Krisciunas et al. (2017). Our photometric system consists of the filters CSP-g, CSP-r and CSP-V9844 with AB magnitudes. The first two filters cover a range from about 4000Å to 7000Å but with lower coverage around 5500Å; this same region is mainly contained in CSP-V9844.
These three filters extract an important amount of information from our original SEDs and capture a good portion of the optical region.
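A custom three-component source of the form of equation (1) can be sketched in SNCosmo by subclassing its Source API; the class below is our illustration of that pattern, not the code used here.

```python
import numpy as np
import sncosmo
from scipy.interpolate import RectBivariateSpline

class PETSSource(sncosmo.Source):
    """Three-component linear source implementing Eq. (1)."""
    _param_names = ['x0', 'x1', 'x2']
    param_names_latex = ['x_0', 'x_1', 'x_2']

    def __init__(self, phase, wave, M0, M1, M2, name='pets', version='1.0'):
        self.name, self.version = name, version
        self._phase, self._wave = phase, wave
        self._surfaces = [RectBivariateSpline(phase, wave, M, kx=3, ky=3)
                          for M in (M0, M1, M2)]
        self._parameters = np.array([1.0, 0.0, 0.0])

    def _flux(self, phase, wave):
        x0, x1, x2 = self._parameters
        m0, m1, m2 = (s(phase, wave) for s in self._surfaces)
        return x0 * (m0 + x1 * m1 + x2 * m2)

# Usage (with Milky Way dust for the Pantheon fits of Section 5.3):
# model = sncosmo.Model(source=PETSSource(phase_grid, wave_grid, M0, M1, M2),
#                       effects=[sncosmo.F99Dust()], effect_names=['mw'],
#                       effect_frames=['obs'])
# model.set(mwebv=ebv_mw)
# result, fitted = sncosmo.fit_lc(data, model, ['t0', 'x0', 'x1', 'x2'])
```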
The considered models were created by truncating the rest-frame flux expansion, equation (1), either after the first three terms or after the first ten terms. The fit quality was measured through the chi-squared per degree of freedom. This fit-quality metric is not ideal, since it strongly depends on the choices made to estimate the measurement errors in this synthetic photometry. However, as the validation fits only intend to compare the fitting ability of the methods and establish that the PCA/FA process produces reasonable model components, we retain this quantity as a metric for now.
Whether using PCA or FA, the ten-component versions performed better. When adopting PCA, 12 of the 17 validation fits had very good agreement between the three- and ten-component versions, the latter performing something similar to a fine tuning. The other five fits showed a greater improvement going from the PCA version to the FA version. For three-component PCA, the chi-squared per degree of freedom was usually no lower than ten, sometimes reaching up to one extra order of magnitude, while for ten-component PCA on many occasions it was lower than one, probably indicating overfitting. Most of the detailed light curve behavior was captured by this ten-component model, but at the cost of adding seven new parameters.
It is important to note that three-component PCA has an explained variability of \(98.2\%\), but this number refers to fitting the SEDs and not the flux integrated through a given set of filters. In other words, we should not expect the ability to explain a SED as a linear combination of the model components to translate into a similar performance when dealing with correlated photometric data. We argue the three-component versions struggle with the fine details due to this highly correlated synthetic data, since there is great overlap in the filter transmissions.
Regarding the FA versions, the three-component model performs better than its PCA analogue, as can be seen in Fig. 7. The transition from the three- to the ten-component FA model seems to just tune the peak heights. Possible rotations considered for FA, such as varimax and quartimax, only provide different parameter values due to projections onto rotated axes; the fit quality itself does not change. Overall, the test set supernova portrayed in Fig. 7 sums up the main behavior observed when comparing FA and PCA fits: the former generally offers a lower \(\chi^{2}/ndof\) and is apparently better at reproducing finer details. Nevertheless, both methods produce satisfactory rest-frame model components, and we proceed to incorporate both methods with three components in our cosmology analyses and observe which performs better when fitting real photometric data and how their different approaches affect the cosmology results.
## 5 Cosmological parameters constraints
The next step is to apply the pure linear expansion model to a larger sample named Pantheon, introduced by Scolnic et al. (2018), which covers a greater redshift range. In Section 5.3 we will provide further information about the sample and the fitting process. For now, we describe how to construct and compare the Hubble diagrams for PETS and SALT2 models.
The theoretical distance modulus is obtained from the luminosity distance in units of Hubble distance today through
\[\mu_{th}(z;\boldsymbol{\theta},h)=5\log_{10}(\mathcal{D}_{L}(z;\boldsymbol{ \theta}))+\mu_{0}(h), \tag{11}\]
with \(\mu_{0}(h)=5\log_{10}(1000c/(\text{km}/\text{s}))-5\log_{10}h\). Here \(h\) is a dimensionless quantity parametrizing the Hubble constant, \(H_{0}=100h\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\boldsymbol{\theta}\) stands for the remaining cosmological parameters for a specific cosmological model. The luminosity distance in units of Hubble distance today is defined as
\[\mathcal{D}_{L}(z;\boldsymbol{\theta}):=\begin{cases}\dfrac{1+z}{\sqrt{ \Omega_{k0}}}\sinh\left(\sqrt{\Omega_{k0}}\int_{0}^{z}\dfrac{dz^{\prime}}{E(z ^{\prime};\boldsymbol{\theta})}\right)&\text{if }\Omega_{k0}>0,\\ (1+z)\int_{0}^{z}\dfrac{dz^{\prime}}{E(z^{\prime};\boldsymbol{\theta })}&\text{if }\Omega_{k0}=0,\\ \dfrac{1+z}{\sqrt{-\Omega_{k0}}}\sin\left(\sqrt{-\Omega_{k0}}\int_{0}^{z} \dfrac{dz^{\prime}}{E(z^{\prime};\boldsymbol{\theta})}\right)&\text{if }\Omega_{k0}<0,\end{cases} \tag{12}\]
where \(E(z;\boldsymbol{\theta})\) is the Hubble parameter in units of the Hubble constant and \(\Omega_{k0}\) is the dimensionless curvature parameter measured today.
The cosmological model considered is \(\Lambda\)CDM, assuming a universe with curvature, cold dark matter and a cosmological constant, which gives us the following dimensionless Hubble parameter
\[E_{\Lambda\text{CDM}}(z;\boldsymbol{\theta})=\sqrt{\Omega_{m0}(1+z)^{3}+ \Omega_{\Lambda 0}+\Omega_{k_{0}}(1+z)^{2}}. \tag{13}\]
Here \(\Omega_{k_{0}}=1-\Omega_{m_{0}}-\Omega_{\Lambda_{0}}\) and \(\mathbf{\theta}=(\Omega_{m0},\Omega_{\Lambda 0})\) represents the cosmological parameters we are interested in constraining; \(\Omega_{r0}\), the dimensionless radiation density parameter measured today, is neglected since it is about 5 orders of magnitude smaller than the density parameters in \(\mathbf{\theta}\).
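For reference, equations (11)-(13) translate directly into a numerical routine; the sketch below is ours (function names are illustrative).

```python
import numpy as np
from scipy.integrate import quad

def E_lcdm(z, Om0, OL0):
    """Dimensionless Hubble parameter, Eq. (13)."""
    Ok0 = 1.0 - Om0 - OL0
    return np.sqrt(Om0 * (1 + z)**3 + OL0 + Ok0 * (1 + z)**2)

def lum_dist(z, Om0, OL0):
    """Luminosity distance in units of the Hubble distance, Eq. (12)."""
    Ok0 = 1.0 - Om0 - OL0
    chi, _ = quad(lambda zp: 1.0 / E_lcdm(zp, Om0, OL0), 0.0, z)
    if Ok0 > 1e-8:
        return (1 + z) * np.sinh(np.sqrt(Ok0) * chi) / np.sqrt(Ok0)
    if Ok0 < -1e-8:
        return (1 + z) * np.sin(np.sqrt(-Ok0) * chi) / np.sqrt(-Ok0)
    return (1 + z) * chi

def mu_th(z, Om0, OL0, h):
    """Theoretical distance modulus, Eq. (11), with c in km/s."""
    mu0 = 5 * np.log10(1000 * 299792.458) - 5 * np.log10(h)
    return 5 * np.log10(lum_dist(z, Om0, OL0)) + mu0
```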
### SALT2 distance modulus
The SALT2 empirical distance modulus, that carries information from the derived light-curve fitting parameters, is given by
\[\mu_{\text{SALT2}}=m_{B}^{corr}-M_{B}=m_{B}^{*}-M_{B}+\alpha x_{1}-\beta c, \tag{14}\]
where \(m_{B}^{*}\) is the rest-frame B-band magnitude at maximum light; according to Mosher et al. (2014) this quantity can be obtained from the light curve parameter \(x_{0}\) using \(m_{B}^{*}=-2.5\log_{10}x_{0}+10.635\). \(M_{B}\) is the type Ia supernova absolute magnitude, and the parameters \(x_{1}\) and \(c\) correct the supernova apparent magnitude to account for the observed dispersion. Lastly, \(\alpha\) and \(\beta\) are nuisance parameters that need to be included in the parameter inference.
The statistical uncertainty on this distance modulus can be written as
\[\sigma_{\mu_{\text{SALT2}}}^{2}=\sigma_{m_{B}^{*}}^{2}+\alpha^{2}\sigma_{x_{1}}^{2}+\beta^{2}\sigma_{c}^{2}+2\alpha\sigma_{m_{B}^{*},x_{1}}-2\beta\sigma_{m_{B}^{*},c}-2\alpha\beta\sigma_{x_{1},c}+\sigma_{\mu,z}^{2}, \tag{15}\]
where \(\sigma_{m_{B}^{*}}^{2}\), \(\sigma_{x_{1}}^{2}\), \(\sigma_{c}^{2}\), \(\sigma_{m_{B}^{*},x_{1}}\), \(\sigma_{m_{B}^{*},c}\) and \(\sigma_{x_{1},c}\) are elements of the covariance matrix of the light curve parameters. The last term propagates to the distance modulus the error due to the redshift measurement and to peculiar velocity contamination. Following Kessler et al. (2009a), we adopt the distance-redshift relation for an empty universe, leading to
\[\sigma_{\mu,z}=\sigma_{z}\left(\frac{5}{\log 10}\right)\frac{1+z}{z(1+z/2)}, \tag{16}\]
Figure 7: Light curve fitting of the representative test set supernova Train_SN58, as named by Saunders et al. (2018). On the left figure we have three components and ten components PCA fits, in solid and dashed lines, respectively. On the right figure we have the three component FA. The \(\chi^{2}/ndof\) for PCA versions are respectively, 5.38 and 0.47, and 5.22 for the FA version.
where \(\sigma_{z}^{2}=\sigma_{\text{spec}}^{2}+\sigma_{\text{pec}}^{2}\). The term \(\sigma_{\text{spec}}^{2}\) represents the measurement uncertainty and \(\sigma_{\text{pec}}^{2}\) is the contribution due to peculiar velocity uncertainties, estimated as \(0.0012\) by Kessler et al. (2009b).
### PETS distance modulus
The distance modulus can be written as the difference between apparent and absolute bolometric magnitudes, \(\mu(z)=m(z)-M\). The above description of SALT2's distance modulus relies on the empirically observed correlations of stretch and color with the absolute magnitude in the B-band. Recognizing the parameters \(x_{1}\) and \(c\) proposed in the empirical flux modelling as stretch and color, respectively, allows one to insert those values directly in the distance modulus as corrections accounting for the absolute magnitude dispersion, under the assumption that these two parameters completely describe the dispersion seen in Ia SNe light curves.
PETS' \(x_{1}\) and \(x_{2}\) parameters cannot, by construction, be immediately recognized as either stretch or color; instead, we derive an expression for the distance modulus from equation (2), which already contains K-corrections. The rest-frame flux, as defined in equation (1), can be incorporated into the observed flux expression, equation (2). Taking the base-10 logarithm of this quantity divided by a reference value in the AB magnitude system yields the apparent magnitude measured by an observer in relative motion with respect to the targeted Type Ia supernova. We can manipulate this expression to recover a distance modulus that resembles that of SALT2.
If we define
\[I_{i}(p;z):=\int_{0}^{\infty}d\lambda M_{i}(p,\lambda)\lambda(1+z)T_{B}( \lambda(1+z)), \tag{17}\]
we can rewrite the observer-frame flux as
\[F_{B}(p(1+z))=x_{0}I_{0}(p;z)\left[1+x_{1}\frac{I_{1}(p;z)}{I_{0}(p;z)}+x_{2} \frac{I_{2}(p;z)}{I_{0}(p;z)}\right], \tag{18}\]
and at B-band maximum light we obtain
\[F_{B}(0;z)=x_{0}I_{0}(0;z)\left[1+x_{1}\alpha(z)+x_{2}\beta(z)\right], \tag{19}\]
where we defined two nuisance parameters dependent on redshift
\[\alpha(z):=\frac{I_{1}(0;z)}{I_{0}(0;z)},\qquad\beta(z):=\frac{I_{2}(0;z)}{I_{ 0}(0;z)}. \tag{20}\]
Using that \(\mu(z)=m_{B}(z)-M_{B}-K_{BB}(z)\) we have
\[\mu(z) =-2.5\log_{10}\left[\frac{F_{B}(0;z)}{F_{B,\text{ref}}}\right]-M_{B} \tag{21}\] \[=m_{B}(z)-2.5\log_{10}[1+x_{1}\alpha(z)+x_{2}\beta(z)]-M_{B},\]
with
\[m_{B}(z)=-2.5\log_{10}\left[\frac{x_{0}I_{0}(0;z)}{F_{B,\text{ref}}}\right]. \tag{22}\]
Now we can express the distance modulus as
\[\mu(z)=m_{B,\text{corr}}(z)-M_{B}, \tag{23}\]
with the corrected apparent magnitude
\[m_{B,\text{corr}}(z)=m_{B}(z)-2.5\log_{10}[1+x_{1}\alpha(z)+x_{2}\beta(z)], \tag{24}\]
where \(x_{1}\) and \(x_{2}\) are the same parameters used to account for dispersion in the rest-frame flux of type Ia supernovae.
Here, we need to approximate \(m_{B}\) by its rest-frame counterpart, \(m_{B}^{*}\), since when performing the integral \(I_{0}(0;z)\), as the redshift grows the overlap between \(M_{0}(0,\lambda)\) and the redshifted filter transmission rapidly goes to zero.
As a first approximation we also neglect the redshift dependence of our nuisance parameters, \(\alpha(z)=\alpha\) and \(\beta(z)=\beta\). Afterwards we allow for evolution with redshift by solving for a fixed cosmology in a redshift-binned Hubble Diagram.
The PETS distance modulus can then be written as
\[\mu_{\text{PETS}}(z)=m_{B}^{*}(z)-2.5\log_{10}(1+\alpha x_{1}+\beta x_{2})-M_{B}, \tag{25}\]
where the redshift dependence of \(m_{B}^{*}\) is fully contained in \(x_{0}\). We have the following statistical uncertainty contribution
\[\begin{split}\sigma_{\mu_{\text{PETS}}}^{2}&=\sigma_{m_{B}^{*}}^{2}+\alpha^{2}\gamma^{2}\sigma_{x_{1}}^{2}+\beta^{2}\gamma^{2}\sigma_{x_{2}}^{2}-2\alpha\gamma\sigma_{m_{B}^{*},x_{1}}-2\beta\gamma\sigma_{m_{B}^{*},x_{2}}\\ &+2\alpha\beta\gamma^{2}\sigma_{x_{1},x_{2}}+\sigma_{\mu,z}^{2},\end{split} \tag{26}\]
where the \(\gamma\) factor depends only on the nuisance parameters,
\[\gamma(\alpha,\beta):=\frac{2.5\log_{10}e}{1+\alpha x_{1}+\beta x_{2}}. \tag{27}\]
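Equations (25)-(27) amount to a one-line magnitude correction plus linear error propagation. A minimal sketch of ours (the 3x3 covariance ordering \((m_{B}^{*},x_{1},x_{2})\) is our convention):

```python
import numpy as np

def mu_pets(mB_star, x1, x2, alpha, beta, MB):
    """Corrected distance modulus, Eqs. (24)-(25)."""
    return mB_star - 2.5 * np.log10(1.0 + alpha * x1 + beta * x2) - MB

def sigma2_mu_pets(cov, x1, x2, alpha, beta, sigma2_mu_z):
    """Eq. (26) via linear propagation; cov is the 3x3 fit covariance
    ordered as (m_B^*, x_1, x_2)."""
    gamma = 2.5 * np.log10(np.e) / (1.0 + alpha * x1 + beta * x2)  # Eq. (27)
    J = np.array([1.0, -alpha * gamma, -beta * gamma])  # d(mu)/d(params)
    return J @ cov @ J + sigma2_mu_z
```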
We do not expect the parameters other than \(m_{B}^{*}\) to carry the same information as their SALT2 counterparts. In our description \(m_{B}^{*}=-2.5\log_{10}x_{0}+\text{const}\) is the rest-frame apparent magnitude, i.e. the apparent magnitude measured by an observer with no relative velocity with respect to the target. The constant term depends on the \(M_{0}(p,\lambda)\) surface of the underlying model and also on the shape of the chosen reference filter transmission, so this offset varies depending on the fitting model.
We also consider a correction due to different host galaxy masses: if \(M_{stellar}>10^{10}M_{\odot}\) we replace \(M_{B}\) with \(M_{B}+\Delta_{M}\) in the aforementioned equations.
The parameter inference is performed through a Markov Chain Monte Carlo (MCMC) using the Python package emcee, Foreman-Mackey et al. (2013). With type Ia supernova data alone we cannot simultaneously constrain \(M_{B}\) and \(h\), so we define \(\mathcal{M}(M_{B},h)=M_{B}+\mu_{0}(h)\). We can write the \(\chi^{2}\) for the PETS model, neglecting correlations between different supernovae, as
\[\chi^{2}_{\text{PETS}}(\theta,\delta,\mathcal{M})=\sum_{i}^{N}\frac{[\mu_{i, \text{PETS}}(z_{i},\delta,M_{B})-\mu_{th}(z_{i};\boldsymbol{\theta},h)]^{2}}{ \sigma_{i,\text{PETS}}^{2}(\delta)+\sigma_{\text{int}}^{2}}. \tag{28}\]
An analogous expression is defined for SALT2. The input log-likelihood is
\[\log L=-0.5\left[\chi^{2}_{\text{PETS}}(\boldsymbol{\theta},\mathcal{M}, \boldsymbol{\delta},\sigma_{\text{int}})+\sum_{i}^{N}\log\left[\sigma_{i, \text{PETS}}^{2}(\boldsymbol{\theta},\boldsymbol{\delta})+\sigma_{\text{int }}^{2}\right]\right], \tag{29}\]
where \(\boldsymbol{\delta}=(\alpha,\beta)\) are the model nuisance parameters and \(\sigma_{\text{int}}\), the intrinsic scatter, is a free parameter that stores any remaining unmodeled coherent variability.
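Putting the pieces together, the log-likelihood of equation (29) can be sketched for emcee as below, reusing `lum_dist` and `sigma2_mu_pets` from the earlier sketches; the prior bounds are illustrative assumptions of ours.

```python
import numpy as np
import emcee

def log_prob(theta, z, mB, x1, x2, cov, sig2_mu_z, high_mass):
    """Eq. (29) with flat priors; high_mass is 1 where the host has
    log10(M_stellar / M_sun) > 10, else 0."""
    Om0, OL0, Mcal, dM, alpha, beta, sig_int = theta
    if not (0.0 < Om0 < 1.5 and 0.0 < OL0 < 1.5 and 0.0 < sig_int < 1.0):
        return -np.inf
    arg = 1.0 + alpha * x1 + beta * x2
    if np.any(arg <= 0.0):
        return -np.inf
    # Observed moduli, Eq. (25); Mcal = M_B + mu_0(h) absorbs M_B and h.
    mu_obs = mB - 2.5 * np.log10(arg) - Mcal - dM * high_mass
    mu_model = np.array([5.0 * np.log10(lum_dist(zi, Om0, OL0)) for zi in z])
    var = np.array([sigma2_mu_pets(C, a, b, alpha, beta, s)   # Eq. (26)
                    for C, a, b, s in zip(cov, x1, x2, sig2_mu_z)])
    var = var + sig_int**2
    chi2 = np.sum((mu_obs - mu_model)**2 / var)               # Eq. (28)
    return -0.5 * (chi2 + np.sum(np.log(var)))                # Eq. (29)

# sampler = emcee.EnsembleSampler(32, 7, log_prob,
#                                 args=(z, mB, x1, x2, cov, sig2_mu_z, mass))
# sampler.run_mcmc(p0, 5000)
```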
### Fitting Pantheon sample
To perform the cosmological analysis we chose the Pantheon sample from Scolnic et al. (2018). This sample consists of 1048 spectroscopically confirmed SNe Ia. The low redshift region includes CfA1-CfA4, Riess et al. (1999), Jha et al. (2006), Hicken et al. (2009a), Hicken et al. (2009b), and Hicken et al. (2012), and CSP, Folatelli et al. (2009), Contreras et al. (2010), and Stritzinger et al. (2011). Populating the intermediate redshift region we have SDSS, Frieman et al. (2007), Kessler et al. (2009a), and Sako et al. (2018), SNLS, Conley et al. (2010) and Sullivan et al. (2011), and PS1, Rest et al. (2014) and Scolnic et al. (2014). In the high-z region we have data from HST, Suzuki et al. (2012), Riess et al. (2004), Riess et al. (2007), Graur et al. (2014), Rodney et al. (2014), and Riess et al. (2018). This sample is already cross-calibrated and the light curves were retrieved from SNANA, Kessler et al. (2009b).
The spectral time series model was constructed in SNCosmo using as input the first three components returned by either our Principal Component Analysis or our Factor Analysis. We restrict the fitting to the region from -10 days to 40 days and from 3400Å to 7000Å, so that the training set is representative in this region. We correct for extinction by dust in the Milky Way given the color excess, \(E(B-V)_{MW}\), following Fitzpatrick (1999). The fitting process returns for each supernova a value of \(t_{0}\) (date of maximum light in the B-band), \(x_{0}\), \(x_{1}\) and \(x_{2}\), with corresponding uncertainties.
### Fitting results for PCA PETS
From the 1048 SNe in the initial sample, 1017 had a successful fit according to SNCosmo. However, after visual inspection it is possible to note some low-quality fits. In most cases this happens when the data lie mainly before or after maximum, increasing the difficulty in constraining the date of maximum and, consequently, in obtaining accurate values for \(x_{1}\) and \(x_{2}\). It is worth mentioning an apparent underestimation of uncertainties for the low redshift sample, leading to higher values of \(\chi^{2}/ndof\) even for visually high-quality fits. Hence, we decided to apply cuts to the parameter space.
We encountered additional challenges when trying to determine the date of maximum for the low redshift sample. To address this issue, we used a flat prior \(\mathcal{U}(-5,5)\) around the date of maximum reported in the SNANA files. For the remaining objects we conducted fits for \(t_{0}\), \(x_{0}\), \(x_{1}\), and \(x_{2}\) without using any prior knowledge.
Together with this occasional underestimation of uncertainties, our lack of a complete model covariance makes the \(\chi^{2}/ndof\) parameter a poor fit-quality metric. The best combination found was to restrict the values of \(x_{1}\) and \(x_{2}\): the farther from the center of these distributions, the more likely the fit was to be of low quality.
The cuts proposed for the PCA PETS parameters are shown in Table 1, where we can also see how many supernovae are accepted at each step. The cuts on \(\chi^{2}/ndof\), \(\sigma_{x_{1}}\) and \(\sigma_{x_{2}}\) are purposefully mild, intended only to eliminate the fits that really stand out from the underlying distributions. Fig. 8 shows the distributions of each parameter, in blue for the original sample and in orange for the reduced sample after cuts. Most of the eliminated supernovae do not fail all three cuts; many fits with high values of \(x_{1}\) and/or \(x_{2}\) have low \(\chi^{2}/ndof\), leaving us with a reduced sample of 691 Ia SNe.
In Table 2 we list the number of SNe selected per survey, and in Fig. 9 we show the histograms of the redshift distributions for each survey. As already expected from the Scolnic et al. (2018) analysis, within the LOWZ sample the fit parameter cuts affect CSP most strongly; we lost almost 2/3 of the original data, presumably also due to a strong underestimation of uncertainties.
This same subsample is selected from SALT2 fits, which were also performed over the same light curves using the SNCosmo built-in "salt2 v. 2.4". In Fig. 10 we show a scatter plot of the SALT2 parameters versus the PCA PETS parameters. To address their relationships we calculate the Pearson correlation coefficient, which measures the linear correlation between a pair of parameters. Values closer to unity indicate that the measurement of one feature provides a good estimate of the second. On the other hand, values closer to zero indicate that measuring one feature provides no information about the other, as far as linear relationships are concerned.
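Computing the coefficients of Fig. 10 is a one-liner per pair; a small sketch with SciPy (the dictionary inputs are assumed containers for the post-cut parameter arrays):

```python
from scipy.stats import pearsonr

def correlation_table(pets, salt2):
    """Pearson r between each PETS/SALT2 parameter pair (cf. Fig. 10);
    pets and salt2 are dicts mapping names to equal-length arrays."""
    for name_a, a in pets.items():
        for name_b, b in salt2.items():
            r, _ = pearsonr(a, b)
            print(f'{name_a} (PETS) vs {name_b} (SALT2): r = {r:+.2f}')
```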
Figure 10: Scatter plots of SALT2 and PCA PETS parameters with the Pearson correlation coefficients for each pair of parameters.
Figure 9: Redshift distribution of the supernovae sample used in the cosmological analysis from PCA PETS model.
We see positive correlations between the pairs \(x_{1}\) (SALT2), \(x_{1}\) (PCA PETS) and \(c\) (SALT2), \(x_{2}\) (PCA PETS), and negative correlations between the pairs \(x_{1}\) (SALT2), \(x_{2}\) (PCA PETS) and \(c\) (SALT2), \(x_{1}\) (PCA PETS). The stronger the correlation, the stronger the indication of a similar origin for those features, i.e. the mechanism which drives \(x_{1}\) (SALT2) to higher or lower values also affects \(x_{1}\) (PETS) in the same direction. As we constructed our model based on a similarity hypothesis for pairs of Ia SNe plus host galaxies, without light curve dereddening or color extinction separation, we do not expect a direct association between the PCA PETS and SALT2 parameters. However, these high Pearson coefficients indicate the Brighter-Broader and Brighter-Bluer correlations are still present, but in a different fashion than in the usual Tripp-like estimators.
Models that extract Principal Components from a set of Ia SNe undergo an approximation to reduce dimensionality; this decision balances the cumulative explained variance against the number of components. In this process we necessarily discard information, which leads to unmodeled behavior. Our model eliminated other approximations at the cost of losing a direct association with physical features; nevertheless, through the scatter plots (Fig. 10), we verify that our parameters still carry important information about Ia SNe color and stretch, which are directly correlated with luminosity.
We can also analyse whether the PCA PETS parameters show a redshift dependence; this is portrayed in Fig. 11. For intermediate and high redshift there is no apparent dependence. For the low-z sample we observe a deviation toward negative values of \(x_{1}\) and positive values of \(x_{2}\). However, this apparent dependence arises because the lower region is populated by only one sample, here called LOWZ, which favors host galaxy masses, measured through \(\log_{10}(M_{stellar}/M_{\odot})\), higher than 10. This is clear from the subsample plots in Fig. 12 and in Fig. 19, which shows the same preferences in the parameter dependence on host galaxy mass, hence reflecting a selection effect.
### Fitting results for FA PETS
We now analyse the fitting results when the \(M_{i}(p,\lambda)\) surfaces in equation (1) are the FA components. Table 3 shows the cuts applied to the Pantheon sample after light curve fitting, along with the number of supernovae passing each cut. As for the PCA version, the cuts on \(\chi^{2}/ndof\), \(\sigma_{x_{1}}\) and \(\sigma_{x_{2}}\) were also purposefully mild.
Since the cut criteria were different for each training approach, we are not able to directly compare the two resulting tables. Nonetheless, we can investigate their impact on each survey. From Table 2 and Table 4 we can assert that, for all surveys (LOWZ, SDSS, SNLS, PS1, HST), the FA training process shows a better performance, giving a higher relative number of passed supernovae than PCA (# pass cut SNe / Total # of SNe).
As the FA components seemed to capture more detailed behavior in the validation fits, a better performance is expected when adopting this description. Indeed, when fitting the Pantheon sample, the parameter distributions showed less dispersion, with the number of outliers significantly reduced compared to the PCA fits.
Figure 11: Scatter plot of fit parameters for PCA decomposition components method as a function of \(m_{B}^{*}\), a quantity that correlates linearly with \(\log_{10}(z)\). In blue we have the binned scatter with corresponding standard deviations.
\begin{table}
\begin{tabular}{l c} \hline Cuts & \# pass cut SNe \\ \hline fit & 1018 \\ \(-3.5\leq x_{1}\leq 0.25\) & 888 \\ \(|x_{2}|\leq 0.35\) & 898 \\ \(\chi^{2}/\text{ndof}\leq 15\) & 994 \\ \(\sigma_{x_{1}}\leq 1\) & 998 \\ \(\sigma_{x_{2}}\leq 1\) & 999 \\ \hline \end{tabular}
\end{table}
Table 3: Cuts applied to the light curve fit parameters for the Pantheon sample with FA PETS.
Figure 12: Host galaxy mass distribution in selected samples for PCA PETS from each survey, except HST.
\begin{table}
\begin{tabular}{l c c} \hline Survey & Total \# of SNe & \# pass cut SNe \\ \hline LOWZ & 172 & 148 \\ SDSS & 315 & 243 \\ SNLS & 234 & 174 \\ PS1 & 272 & 220 \\ HST & 25 & 20 \\ \hline \end{tabular}
\end{table}
Table 4: Accepted SNe per survey when applying the cuts to the Pantheon subsample using FA PETS.
Figure 13: Scatter plots of SALT2 and FA PETS parameters and the Pearson correlation coefficients for each pair of parameters.
We can also compare the scatter plots for the FA PETS and SALT2 pairs of parameters. Fig. 13 reveals greater Pearson correlation coefficients between the two models than seen for the PCA approach. The greatest correlations are seen for the pairs \(x_{2}\) (PETS), \(x_{1}\) (SALT2) and \(x_{1}\) (PETS), \(c\) (SALT2), which may imply that, since no dereddening was performed in our model, the first-order correction is mainly due to color variations instead of light curve shape variations.
Comparing Fig. 10 and Fig. 13 we see that in PCA the stretch and color information seem to be more diluted, with each PETS fit parameter showing a non-negligible Pearson correlation with SALT2's parameters. For the FA scatter plot the situation is different: the stretch and color information seem to be less diluted, but appear at a different order of correction.
Following the analysis performed over the PCA fitting results, the evolution with \(m_{B}^{*}\), or equivalently \(\log_{10}z\), is shown in Fig. 14. We see no apparent behavior with redshift apart from the tendency of \(x_{2}\) toward positive values at low redshift, which is a consequence of the host galaxy mass distribution of the LOWZ sample portrayed in Fig. 23. Summing up, the \(x_{1}\) and \(x_{2}\) parameters show correlations with host galaxy mass when dealing with PCA components, while only \(x_{1}\) shows this correlation when considering FA components. This reinforces the behavior seen previously: PCA dilutes color and stretch information, while the FA components act on different aspects of Ia SN variability.
## 6 Cosmology results for PCA PETS
With the light curve fitting results for PCA PETS we perform a cosmological parameter inference using the emcee sampling package, Foreman-Mackey et al. (2013), with the log-likelihood described in equation (29) and flat priors over all parameters. The marginalized contours and one-dimensional marginalized distributions for the cosmological parameters are shown in Fig. 15. The constraining power of our PCA PETS model is smaller than SALT2's, but the marginalized one-dimensional distributions show good agreement within the 95% confidence region. The reduced \(\chi^{2}\) for the best fit cosmology is 0.97 for PCA PETS and 1.012 for SALT2, considering the same subsample.
Table 5 shows the MCMC results for the \(\Lambda\)CDM model, here including only this subsample of type Ia supernovae as cosmological probes. The \(\Delta_{M}\) parameters are in agreement within the 95% confidence region, but our PCA PETS model estimates a lower correction in distance due to host galaxy mass differences. It is also worth mentioning the slight correlation between the nuisance parameters seen in PCA PETS that is not observed in SALT2, as shown in Fig. 16. As the principal components are by definition uncorrelated, a pure linear expansion has uncorrelated components, unlike SALT2.
Figure 14: Scatter plot of fit parameters for FA decomposition components method as a function of \(m_{B}^{*}\), a quantity that correlates linearly with \(\log_{10}z\). In blue we have the binned scatter with corresponding standard deviations.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Model & \(\Omega_{m0}\) & \(\Omega_{\Lambda 0}\) & \(\mathcal{M}\) & \(\Delta_{M}\) & \(\alpha\) & \(\beta\) & \(\sigma_{\text{int}}\) \\ \hline PCA PETS & 0.34 \(\pm\) 0.08 & 0.49 \({}^{+0.13}_{-0.14}\) & 23.93 \(\pm\) 0.02 & -0.028 \(\pm\) 0.015 & 0.129 \(\pm\) 0.005 & 0.124 \(\pm\) 0.005 & 0.147 \(\pm\) 0.006 \\ SALT2 (PCA subsample) & 0.31 \({}^{+0.06}_{-0.07}\) & 0.54 \({}^{+0.10}_{-0.11}\) & 24.13 \(\pm\) 0.01 & -0.043 \(\pm\) 0.011 & 0.131 \(\pm\) 0.006 & 2.62 \(\pm\) 0.07 & 0.094 \(\pm\) 0.005 \\ FA PETS & 0.36 \(\pm\) 0.07 & 0.48 \({}^{+0.13}_{-0.12}\) & 23.98 \(\pm\) 0.02 & -0.030 \(\pm\) 0.014 & 1.72 \(\pm\) 0.06 & 0.88 \(\pm\) 0.05 & 0.145 \(\pm\) 0.006 \\ SALT2 (FA subsample) & 0.29 \(\pm\) 0.06 & 0.49 \(\pm\) 0.10 & 24.14 \(\pm\) 0.014 & -0.048 \(\pm\) 0.010 & 0.126 \(\pm\) 0.006 & 2.65 \(\pm\) 0.07 & 0.097 \(\pm\) 0.005 \\ \hline \end{tabular}
\end{table}
Table 5: Result of marginalized parameters for \(\Lambda\)CDM cosmological model for our PCA PETS and FA PETS models as well as for SALT2 without distance bias correction.
Figure 16: Marginalized contours in the \(\alpha\times\beta\) plane, on the left for PCA PETS and on the right for SALT2 (PCA subsample).
Figure 15: Marginalized cosmology results comparison of SALT2 and PCA PETS for \(\Lambda\)CDM model.
The intrinsic scatter, \(\sigma_{\text{int}}\), is a term added in quadrature to the distance modulus uncertainty of each supernova and is meant to capture any behavior unaccounted for by the model. PCA PETS returns an intrinsic scatter value 66% higher than SALT2's.
In the top panel of Fig. 17 we show the Hubble Diagram (HD) for our Pantheon subsample, corrected for intrinsic dispersion using PCA PETS. The black solid line is the theoretical distance modulus with parameters fixed at the best fit displayed in Table 5, and the blue scatter points are drawn from equation (25) at each supernova redshift, with statistical errors given by the square root of equation (26) with parameters fixed at their best fits. In the bottom panel we show the residuals for this HD, here defined as \(\mu_{\text{PETS}}-\mu_{th}\). A histogram of these residuals shows a mean of 0.006 mag, indicating no visible rigid translation of the distribution, which would indicate a bias or an unaccounted-for systematic error. It also shows a dispersion of 0.19 mag and a skewness of 0.21.
In Fig. 18 the dependence of the HD residuals on redshift and the fit parameters \(m_{B}^{*}\), \(x_{1}\) and \(x_{2}\) is displayed for the PCA PETS model. Each gray scatter point in the background represents a Ia SN from the underlying subsample; the binning results shown in blue help to clarify the subgroup behavior. All parameters show normal dispersion around the solid black line, which represents null residual. For intermediate redshifts, however, we see a greater dispersion, which could be related to the higher concentration of data in this region. Overall, no parameter showed correlation with higher residuals.
In Fig. 19 we show the light-curve fit parameters \(x_{1}\), \(x_{2}\) and the dependence of the HD residuals on host galaxy mass. The dependence seen here resembles that observed for the SALT2 \(x_{1}\) and \(c\) parameters reported in Scolnic et al. (2018). The binned scatter points show a preference for positive values of \(x_{1}\) and negative values of \(x_{2}\) for lower host galaxy masses and the opposite for higher masses, growing/declining in a linear fashion, while the HD residuals again do not exhibit any dependence.
Figure 17: The upper panel shows the Hubble Diagram for the PCA PETS model. The bottom panel shows the HD residuals as a function of redshift.
Figure 18: Dependence of the Hubble Diagram residuals on redshift and fit parameters for the PCA PETS model.
Our first description of the distance modulus, equation (23), shows a redshift dependence not accommodated by our analysis. Keeping the apparent magnitude as its rest-frame counterpart but allowing the nuisance parameters and the intrinsic scatter to evolve with redshift, and running an MCMC with all cosmological parameters fixed at their best fit values, yields the results displayed in Fig. 20. The binned results are calculated for groups with similar numbers of supernovae, around 40 per group. The nuisance parameters show a clearer dependence, tending to lower values with increasing redshift, while the intrinsic scatter shows a slower rise with redshift. However, we need to take into account that the error bars shown here include only statistical contributions; including systematic terms would reduce this apparent evolution, primarily for the intrinsic scatter. It is important to note that the evolution of our nuisance parameters must compensate when they are assumed constant, since the HD residuals do not absorb this redshift evolution.
The distance modulus and corresponding uncertainties for SALT2 can be obtained following the same procedure, using equation (14) and the square root of equation (15). In Fig. 21 we see no clear dependence with redshift for the difference \(\mu_{\rm SALT2}-\mu_{\text{PETS}}\). There is an offset, which is due to the differences in assumptions when calculating the additive term for \(m_{B}^{*}\), since this value will depend, among other factors, on the shape of the B-band transmission function. There is also a clear growth in dispersion at higher redshifts; this region is mainly responsible for the tail in the distribution shown on the right, which has a skewness of -0.08 and a standard deviation of 0.15 mag. These results do not include distance bias corrections, which according to the usual practice require simulations of thousands of type Ia supernovae to reproduce the selection-effect characteristics specific to each survey. As we chose to compare our results with SALT2's sharing the same input Pantheon subsample, we would have to perform a new set of simulations using BBC (BEAMS with Bias Correction), Kessler and Scolnic (2017), to account for the selection bias in our chosen subsample; the same would have to be performed for each SALT2 subsample used here, since the PCA and FA rotations each select a different set of objects, based on cuts applied to each fit parameter distribution.
Since our main goal in this paper was to evaluate the constraining power of a pure linear expansion model while observing the behavior of the nuisance parameters and possible residual correlations in contrast with SALT2, and we do not yet have a bias correction for our model, we chose to compare both models on an equal footing, i.e., we do not correct the SALT2 results for selection effects. The correction performed via BBC directly alters the values of the \(m_{B}^{*}\), \(x_{1}\) and \(c\) parameters and amounts to a total distance correction of almost \(-0.1\) mag for \(z\sim 0.6\).
Figure 19: Dependence of the fit parameters and HD residuals on host galaxy mass for PCA PETS fits.
Figure 20: Evolution of the PCA PETS nuisance parameters and intrinsic scatter with redshift.
The correction is redshift dependent and becomes more visible at \(z\sim 0.3\) and beyond. BBC outputs a distance modulus for SALT2, since the nuisance parameters are not estimated together with the cosmological parameters, which results in an increase of up to 30% compared with the non-BBC fit values. Apart from this effect, the cosmological parameters are also significantly altered, which we could already expect given that the correction is concentrated mainly on the high redshift supernovae. After applying BBC, \(\Omega_{m0}\) is driven to a slightly lower value while \(\Omega_{\Lambda 0}\) increases more significantly.
## 7 Cosmology results for FA PETS
We now analyse the cosmological results when considering the FA PETS model. As seen previously, the components retrieved from the FA method concentrate on explaining the off-diagonal elements of the sample covariance matrix and show less explained variability when fitting the Ia SN SEDs, in contrast to PCA PETS. This slightly better performance in explaining variability does not necessarily translate to a better performance when fitting correlated photometric data, as also mentioned previously. Indeed, as seen in Section 5.5, the FA PETS fit parameter distributions for the Pantheon sample are better behaved and we lose fewer objects when applying the selection cuts.
Fig. 22 shows the confidence levels in the (\(\Omega_{m0}\), \(\Omega_{\Lambda 0}\)) plane and the corresponding parameter posteriors. In addition to the confidence plot, the best fit parameters are given in Table 5 for both FA PETS and SALT2 considering the same subsample. Here the reduced \(\chi^{2}\) for the best fit cosmology is 0.96 for FA PETS and 1.013 for SALT2. For the FA PETS constraints in Fig. 22 it is possible to detect a rigid translation in parameter space towards higher \(\Omega_{m0}\) and lower \(\Omega_{\Lambda 0}\), the same behavior observed for PCA PETS relative to SALT2.
Figure 21: The left panel shows the dependence on redshift of the difference \(\mu_{\rm SALT2}-\mu_{\rm PCA\,\,PETS}\). The right panel shows the histogram of these differences for all redshifts.
Figure 22: Marginalized cosmology results comparison of SALT2 and FA PETS for \(\Lambda\)CDM model.
Fig. 23 shows the dependence of the HD residuals and light curve fit parameters on host galaxy mass. For \(x_{1}\) there is no apparent correlation, while for \(x_{2}\) there is a behavior similar to that previously seen for PCA PETS, now with a steeper slope. This reinforces the differences between the components of both methods, with the principal components and common factors clearly carrying distinct information. Regarding the HD residuals, no host galaxy mass dependence is observed.
Apart from these distinctions, performing the MCMC analysis for redshift-binned nuisance parameters and \(\sigma_{\text{int}}\), with the cosmological parameters fixed at their best fit values, we encounter a redshift evolution similar to that previously seen for PCA PETS. The HD residuals show no dependence on redshift or fit parameters. FA PETS also predicts \(\Omega_{m0}\) and \(\Omega_{\Lambda 0}\) parameters compatible with the SALT2 predictions, with a higher estimate of the coherent intrinsic scatter, \(\sigma_{\text{int}}\). On top of that, a similar \(\alpha\times\beta\) correlation is verified and no redshift dependent behaviour is observed for \(\mu_{\text{FA PETS}}-\mu_{\text{SALT2}}\).
## 8 Conclusion
In this paper we investigated two different models describing the rest-frame flux of type Ia supernovae as pure linear expansions, called PCA PETS and FA PETS. The pure linear expansion rest-frame flux was originally proposed by Saunders et al., 2018 and is here investigated in greater detail and applied to \(\Lambda\)CDM cosmology. We obtained the model components through Principal Component Analysis and Factor Analysis after a robust reconstruction of the training sample SEDs using 1-d Gaussian Process Regression for each plane of constant wavelength, with the Hsiao et al., 2007 surface as the mean. This reconstruction provided a SED resolution of 10 Å \(\times\) 1 day. Following this, we derived directly from the rest-frame flux definition an expression for the model distance modulus and fitted the Pantheon sample to constrain the \(\Lambda\)CDM cosmological parameters.
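As an illustration of the reconstruction step, the sketch below performs the 1-d GP regression in phase at a fixed wavelength with a template as the mean, using scikit-learn; the template function, kernel and hyperparameters are stand-ins for the Hsiao et al., 2007 surface slice and our actual regression settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def reconstruct_at_fixed_wavelength(phase_obs, flux_obs, flux_err,
                                    template_fn, phase_grid):
    """1-d GP regression in phase at a fixed wavelength, with a template
    slice (here `template_fn`) acting as the mean function: the GP models
    only the residual around the template."""
    resid = flux_obs - template_fn(phase_obs)
    kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-4)
    gp = GaussianProcessRegressor(kernel=kernel, alpha=flux_err**2)
    gp.fit(phase_obs[:, None], resid)
    mean, std = gp.predict(phase_grid[:, None], return_std=True)
    return template_fn(phase_grid) + mean, std

# Toy usage: a Gaussian-shaped stand-in template and noisy "observations"
template = lambda p: np.exp(-0.5 * (p / 10.0) ** 2)
rng = np.random.default_rng(2)
p_obs = np.sort(rng.uniform(-10, 30, 15))
f_err = np.full_like(p_obs, 0.02)
f_obs = 1.05 * template(p_obs) + rng.normal(0.0, f_err)
grid = np.linspace(-10, 40, 51)
flux, flux_std = reconstruct_at_fixed_wavelength(p_obs, f_obs, f_err, template, grid)
```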
The driving idea was to take advantage of Ia SNe similarities and adopt an alternative description using decomposition methods over a training set of SEDs. Unlike the usual rest-frame flux description, which includes an exponential term to model two different color-related effects, reddening due to dust and intrinsic color variations, here a pure expansion description was applied, removing the need to approximate the color variation effect by an extinction law. We verified that this pure expansion is a reasonable assumption when constructing a light curve fitter, and both the PCA and FA methods showed reliable light-curve modeling results.
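For concreteness, extracting the first two expansion components with each method can be sketched as follows, where the training matrix is a random stand-in for the 164 flattened SEDs of our actual training set:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# One flattened SED per row; a random stand-in for the 164 reconstructed
# training SEDs (in practice n_features = n_phases * n_wavelengths).
rng = np.random.default_rng(3)
X = rng.normal(size=(164, 500))

pca = PCA(n_components=2).fit(X)
fa = FactorAnalysis(n_components=2).fit(X)

# First two terms of the pure linear expansion: mean + x1*C1 + x2*C2
mean_sed = pca.mean_
pca_components = pca.components_   # principal components C1, C2
fa_components = fa.components_     # common factors (unrotated)
print("PCA explained variance ratio:", pca.explained_variance_ratio_)
```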
Figure 23: Dependence of the fit parameters and HD residuals on host galaxy mass for FA fits.
The cosmological constraints for PETS favour higher values of \(\Omega_{m0}\) and slightly lower values of \(\Omega_{\Lambda 0}\) when compared to SALT2. In this respect, further investigation of the PETS model systematics is needed to conclude whether this behaviour is related to the rest-frame flux description or whether the distance bias correction would reconcile the best fits. In this description we lost the link between the fit parameters and the effects of color and stretch, but we were able to recover a Tripp-like formula when bringing our model to the observer frame with minimal approximations.
Our analysis of the PCA and FA PETS cosmological results showed inferences of cosmological parameters within 68% agreement with the results of the most widely used fitter, SALT2, with our models predicting more conservative statistical uncertainties on the cosmological parameters. The Hubble Diagram residuals showed no evolution with either redshift or host galaxy mass for both the PCA and FA models, and the residuals with respect to the SALT2 distance modulus predictions show no dependence on redshift. In future works we plan to extend the model coverage to include the UV region and better understand the model covariance. We also plan to include higher redshift Ia SNe data, when available, to reassess the nuisance parameter evolution with redshift and apply the effects of distance bias selection to our analysis.
Overall, the performance of both the PCA and FA PETS models will heavily depend on how representative the training set is. These methods undergo an approximation when keeping only the first two terms of the rest-frame flux expansion, as do the other empirical descriptions of Ia SN rest-frame fluxes. The main differences reside in what kind of information is being neglected. In order to avoid correlating dust extinction reddening with luminosity, we decided not to separate the reddening effect from our SEDs, but at the same time this raised questions about whether important stretch and color information is being neglected. We argue that the fitting and constraining power of both PCA PETS and FA PETS are good indicators that this information is not being completely neglected but is instead more diluted over the model components.
It is also important to assess some aspects of the training procedure. SNFactory DR9 provides good quality spectra at a number of different epochs for 171 spectroscopically confirmed nearby Ia SNe, and our resulting models are trained over 164 SEDs reconstructed with these spectra. Even with this high quality data, the SED reconstruction using Gaussian process regression will often recover incorrect early-phase behavior if we do not consider a supernova template. On top of that, underestimated uncertainties can also lead to poor regressions. These wrong predictions translate into badly reconstructed SEDs that can result in nonphysical behavior recovered by the principal components or by the common factors. To obtain a description closer to reality and correctly recognize the aspects most common to the underlying group of Ia SNe, the training procedure depends not only on the quantity of good-quality spectra but also on the SED reconstruction procedure.
The high explained variance in the first component of both methods can indicate a lack of diversity in our training sample. Increasing the sample will help complete the coverage of the sample variability. We should also be attentive to the difference in the \(\Delta_{M}\) constraints: our FA PETS and PCA PETS obtained lower values than the corresponding SALT2 analysis. This result can be due to the different color treatments, but for now the difference is not significant enough to make a firm statement. It is important to note we only considered a coherent intrinsic scatter, which entered as a free parameter in the cosmological analysis. Other intrinsic scatter models should be explored, giving further insight into whether the neglected information was wavelength dependent. In this scenario, the best intrinsic scatter model may not be wavelength dependent for a model that does not correlate extinction and luminosity variations.
Regarding the \(\alpha\) and \(\beta\) evolution with redshift seen in Fig. 20, as these nuisance parameters do not describe the correlation strengths, the evolution with redshift does not offer extra information about a possible type Ia SN evolution with redshift. Using oblique FA PETS will allow for a deeper analysis of this evolution. It is important to note this behaviour does not seem to impact the HD residuals, and the inclusion of systematic uncertainties is likely to diminish the evolution seen for \(\sigma_{\text{int}}\), also in Fig. 20.
Mapping the systematics is essential in the current era of cosmological precision and therefore supports improvements to current empirical models. FA PETS can be further explored if we consider common factor rotations. These rotations do not improve the fitting quality but can provide a different insight into the hidden variables. It is possible that a specific oblique rotation, one that correlates the factors, leads to a set of hidden variables directly describing some intrinsic variation effects, like color and stretch, allowing us to recover a physical interpretation for the model components. It is important to note that even though these two quantities are known to correlate with luminosity, the explosion mechanism behind Ia SNe is not completely understood, so other hidden variables, not included by most fitters, may play an important role in these empirical descriptions.
## Acknowledgements
We would like to express our gratitude to Martin Makler for his generosity and support throughout this collaboration. We would also like to thank Kyle Barbary and Kyle Boone for their helpful contributions towards resolving SNCOSMO GitHub issues, which significantly eased the fitting analysis process.
This work has also been supported by the Brazilian funding agencies CAPES and CNPq. CSN thanks the support of CNPq for the PhD scholarship no. 155994/2019-0. JPCF thanks the Brazilian funding agencies CAPES for the MS scholarship no. 88887.336370/2019-00 and CNPq for the PhD scholarship no. 140210/2021-0. RRRR acknowledges CNPq (grant no. 309868/2021-1).
## Data Availability
The type Ia supernova spectra used in this paper to construct the PETS models are from Data Release 9 from Nearby Supernova Factory publicly available in [https://snfactory.lbl.gov/](https://snfactory.lbl.gov/). The light curves used for cosmology are from SNAMA, available for download in [https://zenodo.org/record/4015325](https://zenodo.org/record/4015325). Our codes for training, light curve fitting and cosmology will be available after publication at [https://github.com/CassialNascimento/PETS_model_for_SN_Ia_LC_fitting](https://github.com/CassialNascimento/PETS_model_for_SN_Ia_LC_fitting).
## Software Citations
This work uses the following software packages:
* Astropy (The Astropy Collaboration et al. (2013) and The Astropy Collaboration et al. (2018)).
* Emcee (Foreman-Mackey et al. (2013)).
* GetDist (Lewis (2019)).
* Matplotlib (Hunter (2007)).
* Quadpy (Schlomer et al. (2021)).
* Scipy (Virtanen et al. (2020)).
* SNANA (Kessler et al. (2009b)).
* SNCosmo (Barbary et al. (2022)).
* Scikit-learn (Buitinck et al. (2013) and Pedregosa et al. (2011)).
* Pillow (Kemenade et al. (2022)).
* Python (Van Rossum and Drake (2009)).
|
2308.00774 | Precision $e^-$ Beam Polarimetry at an $e^+e^-$ B Factory using Tau-Pair
Events | We present a new technique, `Tau Polarimetry', for measuring the longitudinal
beam polarization present in an $e^+e^-$ collider through the analysis of
$e^+e^-\rightarrow\tau^+\tau^-$ events. By exploiting the sensitivity of $\tau$
decay kinematics to the longitudinal polarization of the beams, we demonstrate
that the longitudinal polarization can be measured with a 3 per mil systematic
uncertainty at the interaction point using a technique that is independent of
spin and beam transport modeling. Using 424.2$\pm$1.8 fb$^{-1}$ of BABAR data
at $\sqrt{s}=10.58$ GeV, the average longitudinal polarization of the PEP-II
$e^+e^-$ collider has been measured to be $\langle P\rangle=0.0035 \pm
0.0024_{\textrm{stat}}\pm 0.0029_{\textrm{sys}}$. The systematic uncertainty
studies are described in detail, which can serve as a guide for future
applications of Tau Polarimetry. A proposed $e^-$ beam longitudinal
polarization upgrade to the SuperKEKB $e^+e^-$ collider would benefit from this
technique. | The BABAR Collaboration | 2023-08-01T18:28:00Z | http://arxiv.org/abs/2308.00774v1 | # Precision \(e^{-}\) Beam Polarimetry at an \(e^{+}e^{-}\) B Factory using Tau-Pair Events
###### Abstract
We present a new technique, 'Tau Polarimetry', for measuring the longitudinal beam polarization present in an \(e^{+}e^{-}\) collider through the analysis of \(\mathrm{e^{+}e^{-}}\rightarrow\tau^{+}\tau^{-}\) events. By exploiting the sensitivity of \(\tau\) decay kinematics to the longitudinal polarization of the beams, we demonstrate that the longitudinal polarization can be measured with a 3 per mil systematic uncertainty at the interaction point using a technique that is independent of spin and beam transport modeling. Using 424.2\(\pm\)1.8 fb\({}^{-1}\) of BABAR data at \(\sqrt{s}=10.58\) GeV, the average longitudinal polarization of the PEP-II \(e^{+}e^{-}\) collider has been measured to be \(\langle P\rangle=0.0035\pm 0.0024_{\mathrm{stat}}\pm 0.0029_{\mathrm{sys}}\). The systematic uncertainty studies are described in detail, which can serve as a guide for future applications of Tau Polarimetry. A proposed \(e^{-}\) beam longitudinal polarization upgrade to the SuperKEKB \(e^{+}e^{-}\) collider would benefit from this technique.
pacs: 13.88.+e, 14.60.Fg, 29.27.Hj
The BABAR Collaboration
## I Introduction
We present in this paper a novel method for measuring the average longitudinal beam polarization in an \(e^{+}e^{-}\) collider, referred to as 'Tau Polarimetry'. Tau Polarimetry uses \(\mathrm{e^{+}e^{-}}\rightarrow\tau^{+}\tau^{-}\) events measured in the detector and determines the average longitudinal beam polarization using the sensitivity of the \(\tau\) decay kinematics to the beam polarization. The technique is developed using data from the BABAR experiment at the PEP-II collider, which operated with a center-of-mass energy of 10.58 GeV and is expected to have no beam polarization. Using BABAR data, this paper reports on the statistical sensitivity of the technique and the determination of the dominant systematic uncertainties in the beam polarization. The motivation for this study is to benchmark the precision to which the beam polarization can be measured using Tau Polarimetry with Belle II at a future polarization upgrade of the SuperKEKB collider.
Precision measurements of the weak mixing angle can be performed with experimental determinations of the left-right asymmetry, \(A_{\rm LR}\), for each of the \(e^{+}e^{-}\to f\overline{f}\) processes, where \(f\) is a charged lepton or quark. The asymmetry is defined as the normalized difference between the production cross-sections for a left and right handed process:
\[A_{\rm LR}=\frac{\sigma_{\rm L}-\sigma_{\rm R}}{\sigma_{\rm L}+\sigma_{\rm R}}, \tag{1}\]
where the L and R subscripts refer to the chirality of the initial state electron in the \(e^{+}e^{-}\to f\overline{f}\) process. In the past, the SLAC Large Detector (SLD) experiment, operating at the \(Z\)-pole, used measurements of \(A_{\rm LR}\) to make the most precise determination of \(\sin^{2}\!\theta_{W}\)[1; 2]. At electron-positron colliders, a non-zero value of this asymmetry arises from \(\gamma-Z\) interference [3] and the measured value of \(A_{\rm LR}\) scales linearly with the average longitudinal polarization of the beams [4; 5]:
\[A_{\rm LR}^{f}\propto\left(\frac{sG_{F}}{\alpha}\right)g_{A}^{e}g_{V}^{f}\langle P\rangle, \tag{2}\]
where \(G_{F}\) is the Fermi constant, \(s\) is the square of the \(e^{+}e^{-}\) center-of-mass (c.m.) energy, \(\alpha\) is the fine structure constant, \(g_{A}^{e}\) is the neutral current axial coupling of the electron, \(g_{V}^{f}=T_{3}^{f}-2Q_{f}\sin^{2}\!\theta_{W}\) is the neutral current vector coupling of fermion \(f\), where \(T_{3}^{f}\) is the third component of isospin, \(Q_{f}\) is the electric charge, and \(\sin^{2}\theta_{W}\) is the weak mixing angle. \(\langle P\rangle\) is the average longitudinal polarization of the mediator in the \(e^{+}e^{-}\) collision, defined as:
\[\langle P\rangle=\frac{R^{+}L^{-}-L^{+}R^{-}}{R^{+}L^{-}+L^{+}R^{-}}, \tag{3}\]
where \(L^{\pm}\) (\(R^{\pm}\)) is the fraction of positrons (\(+\)) or electrons (\(-\)) in their respective beams that have left-handed (right-handed) spin, so that (\(L^{\pm}+R^{\pm}\equiv 1\)).
Tau Polarimetry relies on two convenient properties. The first is the linear relationship between the longitudinal polarization present in the beams and the polarization of the \(\tau\) leptons produced in the \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) process, where at \(\sqrt{s}=10.58\) GeV [6]:
\[\begin{split} P_{\tau}=& P\frac{\cos\theta}{1+\cos^{2}\theta}\\ &-\frac{8G_{F}s}{4\sqrt{2}\pi\alpha}g_{V}^{e}\left(g_{A}^{e}\frac{|\vec{p}|}{p^{0}}+2g_{A}^{e}\frac{\cos\theta}{1+\cos^{2}\theta}\right).\end{split} \tag{4}\]
\(P_{\tau}\) is the polarization of the \(\tau\), \(P\) is the longitudinal polarization of the beams, \(\theta\) is the angle between the emitted \(\tau^{-}\) and the electron beam in the c.m. frame, and \(\vec{p}\) and \(p^{0}\) are the 3-momentum and energy of the \(\tau\), respectively. At \(\sqrt{s}=10.58\) GeV the size of the electroweak correction is small and known with high precision: \((8G_{F}s)/(4\sqrt{2}\pi\alpha)g_{V}^{e}=-0.0029\pm 0.0001\). The majority of the uncertainty in the electroweak correction arises from the world average for \(g_{V}^{e}\) at \(m_{Z}\)[7]. The electroweak correction is accounted for in the analysis and the associated uncertainties are negligible compared to the systematic uncertainties in the beam polarization measurement.
The second property arises from the chirality of neutrinos and the correlation of the chirality and kinematic distributions of the \(\tau\) decay [8; 4]. This correlation has been exploited by LEP to extract precision measurements of the weak mixing angle [9; 10; 11; 7; 12]. By combining Eqn. 4 with the kinematic dependence on polarization, a precision measurement of \(\langle P\rangle\) can be made.
The longitudinal beam polarization in BABAR data is expected to be near zero due to the beam rings at PEP-II being unsuited to the build up of polarization through the Sokolov-Ternov effect [13; 14]. Any polarization that would build up under this effect would be transversely polarized and only a longitudinal component would be visible to this analysis. In addition the PEP-II design expects a depolarization time of 1.5 minutes for fully transversely polarized beams and a residual transverse polarization of less than 0.8% [14]. By measuring the near-zero average longitudinal polarization in PEP-II, BABAR is able to determine the dominant systematic uncertainties in the Tau Polarimetry method.
Due to the similarities between the BABAR and Belle II detectors, and the fact that both involve \(e^{+}e^{-}\) collisions at \(\sqrt{s}=10.58\) GeV (Belle II at SuperKEKB and BABAR at PEP-II), BABAR can demonstrate the feasibility of the Tau Polarimetry technique, and indicate the expected level of both statistical and systematic sensitivity that Belle II might achieve in a polarization-upgraded SuperKEKB collider. This polarization upgrade is being considered for SuperKEKB in an upgrade referred to as 'Chiral Belle' [15]. This upgrade would introduce polarization to the \(e^{-}\) beam only, which simplifies Eqn. 3 to \(\langle P\rangle=L^{-}-R^{-}\), as \(L^{+}=R^{+}=0.5\). This definition of \(\langle P\rangle\) is equivalent to the average longitudinal polarization of the \(e^{-}\) beam.
With the addition of \(e^{-}\) beam polarization, Belle II intends to significantly improve the precision with which the neutral current vector couplings, and hence \(\sin^{2}\!\theta_{W}\), can be determined separately for electrons, muons, \(\tau\) leptons, \(c\) quarks, and \(b\) quarks; enabling not only precision measurements of \(\sin^{2}\!\theta_{W}\) in a region away from the \(Z\)-pole, but also the world's highest precision measurements of universality. Chiral Belle intends to also measure other fundamental parameters, such as the anomalous magnetic moment of the \(\tau\)[15; 8; 16]. The largest systematic uncertainty on these proposed measurements is expected to be the precision to which \(\langle P\rangle\) is known.
The Chiral Belle upgrade includes a Compton polarimeter on the electron beam to provide continuous monitoring of the beam polarization. The Compton polarimeter must be physically located outside of the Belle II detector and as such is expected to have uncertainties related to the modeling of the spin transport when extrapolating to the polarization present at the interaction point (IP). Tau Polarimetry provides a second, and complementary, way to determine the average longitudinal polarization at the IP, although on a much longer time scale. The primary advantage of the Tau Polarimetry measurement is its independence of any spin transport modeling and an increased precision for large data sets, \(\mathcal{O}(100~{}\mathrm{fb}^{-1})\).
## II PEP-II and the BABAR detector
The BABAR detector [17; 18] operated from 1999 to 2008 at the PEP-II asymmetric \(e^{+}e^{-}\) collider, which collided 9.0 GeV electrons with 3.1 GeV positrons.
Particles in the BABAR detector were identified by combining information from its sub-detectors. Charged-particle momenta were determined using tracks measured both in a five-layer silicon vertex tracker and in a 40-layer drift chamber (DCH) operated in a 1.5 T solenoidal magnetic field. Photons and electrons had their energy and angle measured in the electromagnetic calorimeter (EMC) consisting of 6580 CsI(Tl) crystals. Muons were identified by resistive-plate chambers and streamer tubes in the instrumented magnetic-flux-return iron (IFR). Charged-particle identification (PID) was based on energy-loss measurements in the silicon vertex tracker and DCH, and on information from a ring-imaging Cherenkov detector, the EMC, and the IFR. The BABAR coordinate system features the z axis aligned with the principal axis of the solenoid field, which was offset by 20 mrad from the beam axis. The y axis was orientated upwards and the x axis was directed outwards from the center of PEP-II. The studies reported in this paper use the data collected by BABAR at a c.m. energy of 10.58 GeV, the \(\Upsilon\)(4S) resonance, with an integrated luminosity of 424.2\(\pm\)1.8 fb\({}^{-1}\)[19].
A total of 700 million polarized \(\tau^{+}\tau^{-}\) Monte Carlo (MC) simulated events, equivalent to 643 fb\({}^{-1}\), were produced for both a fully left and right handed beam polarization with the KK2f generator [20]. A number of MC generators were used to produce unpolarized samples of various processes of interest: the continuum \(\mu^{+}\mu^{-}\) and \(\tau^{+}\tau^{-}\) were produced with KK2f, which invoked TAUOLA [21] to simulate the decays of final-state \(\tau\) leptons; the \(e^{+}e^{-}\to e^{+}e^{-}\) Bhabha process was simulated using the BHWIDE [22] generator; and the EvtGen[23] generator provided the hadronic continuum MC. PHOTOS[24] was employed to calculate the final-state radiation effects. These simulated processes then underwent a detector response simulation implemented with Geant4[25; 26]. Roughly twice as much \(\mu^{+}\mu^{-}\) and \(c\overline{c}\) MC, and roughly four times as many \(u\overline{u},d\overline{d},s\overline{s},b\overline{b}\) and \(\tau^{+}\tau^{-}\) MC events were produced compared to the number expected in 424.4 fb\({}^{-1}\). As BABAR relies heavily on data-driven approaches to study and control Bhabha backgrounds, a smaller sample of Bhabha MC events was exploited for low-statistics studies.
## III Polarization sensitivity
While all \(\tau\) decay modes are sensitive to beam polarization, the hadronic decays are the most sensitive as there is only one neutrino carrying away angular momentum. In the case of the \(\tau^{-}\to\rho^{-}\nu_{\tau}\to\pi^{-}\pi^{0}\nu_{\tau}\) decay (and charge conjugate (c.c.)), which has the largest \(\tau\) decay branching fraction (25.49%) [27], three angular variables (including \(\cos\theta\), with \(\theta\) being the angle between the \(\tau^{-}\) momentum and the electron beam direction in the c.m. frame) are required to extract the beam polarization, and capture all the angular momentum information from the spin-1 \(\rho\) decay. The other two polarization sensitive variables are defined as [28]:
\[\cos\theta^{\star}=\frac{2z-1-m_{\rho}^{2}/m_{\tau}^{2}}{1-m_{\rho}^{2}/m_{\tau}^{2}},\qquad z\equiv\frac{E_{\rho}}{E_{\mathrm{beam}}} \tag{5}\]
\[\cos\psi=\frac{2x-1}{\sqrt{1-m_{\pi}^{2}/m_{\rho}^{2}}},\qquad x\equiv\frac{E_{\pi}}{E_{\rho}} \tag{6}\]
where \(E_{\pi}\) and \(E_{\rho}\) are, respectively, the reconstructed energies of the charged pion and \(\rho\) in the c.m. frame, and \(E_{\mathrm{beam}}\equiv\sqrt{s}/2\). For the mass of the charged pion and the \(\tau\) we use the world-average values [27], while for the mass of the \(\rho\), due to its large width, we use the event-by-event reconstructed \(\pi^{\pm}\pi^{0}\) mass. The observable \(\theta^{\star}\) is defined as the polar angle of the \(\rho\) momentum in the \(\tau\) rest frame, where the polar axis is the boost direction of the \(\tau\) in the c.m. frame. Similarly, \(\psi\) is the polar angle of the charged pion momentum in the \(\rho\) rest frame, where the polar axis is the boost direction of the \(\rho\) in the c.m. frame. Both \(\cos\theta^{\star}\) and \(\cos\psi\) exhibit mirrored polarization sensitivity depending on whether the \(\tau^{-}\) decays in the forward (\(\cos\theta\)>0) or backward (\(\cos\theta\)<0) hemisphere. Figure 1 illustrates the angle definitions. The distributions of these variables are depicted in Figs. 2 to 4 for both the left and right chiral states of the electron beam.
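Equations (5) and (6) translate directly into code; a minimal sketch using the world-average masses, where the argument `m_rho` is the event-by-event reconstructed \(\pi^{\pm}\pi^{0}\) mass:

```python
import numpy as np

M_TAU = 1.77686  # GeV, world-average tau mass
M_PI  = 0.13957  # GeV, world-average charged-pion mass

def cos_theta_star(E_rho, E_beam, m_rho):
    """Eq. (5): rho polar angle in the tau rest frame."""
    z = E_rho / E_beam
    r = (m_rho / M_TAU) ** 2
    return (2.0 * z - 1.0 - r) / (1.0 - r)

def cos_psi(E_pi, E_rho, m_rho):
    """Eq. (6): charged-pion polar angle in the rho rest frame."""
    x = E_pi / E_rho
    return (2.0 * x - 1.0) / np.sqrt(1.0 - (M_PI / m_rho) ** 2)
```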
Figure 1: Diagrams illustrating \(\theta\) (left) where \(f\) represents a final-state particle, \(\theta^{\star}\) (center), and \(\psi\) (right).
## IV Fitting Methodology
To extract the average beam polarization we perform a binned likelihood fit on the normalized distributions using the Barlow and Beeston method as implemented in ROOT [29, 30]. We fill three-dimensional histograms of \(\cos\theta^{*}\), \(\cos\psi\), and \(\cos\theta\) for each of the data, the \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) MC for a left polarized beam, the \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) MC for a right polarized beam, the \(e^{+}e^{-}\to e^{+}e^{-}\) MC, the \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) MC, and the \(e^{+}e^{-}\to q\overline{q}\) MC, where \(q=u,d,s,c\). The \(b\overline{b}\) final-states were found to contribute no events to the final selection in MC studies.
A linear combination of the MC sample 3D histograms is then fit to the data distribution:
\[H_{\text{data}}=a_{\text{L}}H_{\text{L}}+a_{\text{R}}H_{\text{R}}+a_{e}H_{e}+a_{ \mu}H_{\mu}+a_{uds}H_{uds}+a_{c}H_{c} \tag{7}\]
where \(H\) refers to the histograms for data or the MC samples, and \(a\) refers to the weights in the fit. The relative weights of the non-\(\tau\) backgrounds (\(a_{e},a_{\mu},a_{uds},a_{c}\)) are fixed based on MC efficiency studies. The contributions from the \(\tau^{+}\tau^{-}\) MC for left and right polarized \(e^{-}\) beams (\(a_{L}\) and \(a_{R}\)) are extracted from the fit, and the average beam polarization is calculated from the difference, \(\langle P\rangle=a_{L}-a_{R}\).
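A simplified stand-in for this fit is sketched below, neglecting the MC statistical uncertainties that the Barlow-Beeston method propagates and assuming \(H_{\text{L}}\) and \(H_{\text{R}}\) are each normalized to the expected \(\tau\)-pair yield so that \(a_{\text{L}}+a_{\text{R}}\approx 1\); the actual analysis uses the ROOT implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_polarization(h_data, h_L, h_R, h_bkgs, bkg_weights):
    """Binned fit of Eq. (7): data = a_L*H_L + a_R*H_R + fixed backgrounds.
    H_L and H_R are assumed normalized to the expected tau-pair yield."""
    fixed = sum(w * h for w, h in zip(bkg_weights, h_bkgs))

    def nll(a):
        mu = np.clip(a[0] * h_L + a[1] * h_R + fixed, 1e-12, None)
        return np.sum(mu - h_data * np.log(mu))  # Poisson NLL up to constants

    res = minimize(nll, x0=[0.5, 0.5], bounds=[(0.0, 2.0), (0.0, 2.0)])
    a_L, a_R = res.x
    return a_L - a_R  # <P> = a_L - a_R

# Usage with flattened 3D histograms of (cos theta*, cos psi, cos theta):
# P = fit_polarization(H_data.ravel(), H_L.ravel(), H_R.ravel(),
#                      [H_e.ravel(), H_mu.ravel()], [w_e, w_mu])
```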
### MC Validation
In order to validate the Tau Polarimetry technique at non-zero beam polarization states, the polarized \(\tau^{+}\tau^{-}\) MC is used to produce and measure different beam polarization states. This was done by splitting each of the left and right polarized \(\tau^{+}\tau^{-}\) MC samples in half: one half is used to fill the templates used to perform the polarization fit, and the other for mixing beam polarization states. Specific beam polarization states can be created by combining the left and right beam polarization MC samples with appropriate weights; e.g., a 70% polarized sample is made with 85% left polarized MC and 15% right polarized MC. Using this technique we tested polarization states from \(-1\) to \(1\) in steps of \(0.1\), the results of which are presented in Fig. 5. The results from fits to the MC samples are in good agreement with the input MC beam polarization states, which demonstrates the measurement technique will yield the correct polarization for any beam polarization, within uncertainties.
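The mixing weights follow from \(P=w_{\text{L}}-w_{\text{R}}\) with \(w_{\text{L}}+w_{\text{R}}=1\); a one-line sketch:

```python
def mixing_weights(P):
    """Fractions of fully left and fully right polarized MC reproducing an
    average beam polarization P = w_L - w_R, with w_L + w_R = 1."""
    return 0.5 * (1.0 + P), 0.5 * (1.0 - P)

print(mixing_weights(0.7))  # (0.85, 0.15), the example quoted in the text
```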
## V Event Selection
In order to obtain a pure sample of \(\tau^{-}\rightarrow\rho^{-}\nu_{\tau}\rightarrow\pi^{-}\pi^{0}\nu_{\tau}\) events, we tag the second \(\tau\) lepton in the event by a decay to \(\tau^{-}\to e^{-}\overline{\nu}_{e}\nu_{\tau}\) or \(\tau^{-}\rightarrow\mu^{-}\overline{\nu}_{\mu}\nu_{\tau}\) (or c.c.). Figure 6 shows the event topology for a signal event tagged with an electron. We select this topology by requiring the event to contain two charged particles, one of which is identified as a lepton, and a neutral pion.
The two charged particles are required to originate from within 3 cm of the collision point, as measured along the beam axis, and within 1.5 cm in the transverse plane. We split the event into two hemispheres based on the thrust axis [31; 32] of the event, the signal side and the tag side. The signal side is required to contain the \(\pi^{0}\) and the tag side the lepton. The lepton is required to be consistent with either a muon or electron via PID requirements on the track. Both the muon and electron selectors have been trained with machine learning techniques; a boosted decision tree for muons, and an error correcting output code utilizing bootstrap aggregate decision trees for the electrons [18].
Neutral particle candidates are required to have energy depositions in the EMC exceeding 50 MeV with no associated charged particle identified nearby. Neutral particles within 40 cm (at the EMC) of a charged particle are combined with the charged particle to reduce sensitivity to the MC modeling of split-offs arising from hadronic interactions of charged hadrons in the EMC. After this merging, the tag side of the event is required to be free of any neutral particles. Neutral pions are reconstructed from neutral clusters that exceed 100 MeV of deposited energy in one of two ways. First, BABAR is able to identify neutral pions where both photons are detected within the same EMC cluster (a 'merged \(\pi^{0}\)') [17; 18]. If no \(\pi^{0}\) is identified this way, a search for a suitable candidate is performed by evaluating the invariant masses of pairs of neutral clusters. The invariant mass of the reconstructed neutral pion is required to be within a mass window of 115 MeV to 155 MeV for the event to be selected. If multiple candidates exist, the one closest to the \(\pi^{0}\) mass is accepted as the candidate.
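The pairing logic can be sketched as below; the function name and the four-vector input format are illustrative, not the BABAR reconstruction code:

```python
import numpy as np

PI0_MASS = 0.13498  # GeV

def best_pi0_candidate(photons, window=(0.115, 0.155)):
    """Among all photon pairs (4-vectors as [E, px, py, pz] arrays), return
    the indices of the pair whose invariant mass is inside the acceptance
    window and closest to the pi0 mass."""
    best, best_dm = None, np.inf
    for i in range(len(photons)):
        for j in range(i + 1, len(photons)):
            p = photons[i] + photons[j]
            m = np.sqrt(max(p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2, 0.0))
            if window[0] < m < window[1] and abs(m - PI0_MASS) < best_dm:
                best, best_dm = (i, j), abs(m - PI0_MASS)
    return best
```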
The \(e^{+}e^{-}\to\ell^{+}\ell^{-}\) (\(\ell=e,\mu\)) and two-photon (\(e^{+}e^{-}\to e^{+}e^{-}X\)) events, where \(X\) is any allowed final state, are primarily rejected by requiring the transverse momentum of each charged particle to exceed 350 MeV, as well as the total event transverse momentum (summed over all charged and neutral particles) to exceed 350 MeV. The surviving Bhabha events are reduced by approximately a factor of two by requiring the EMC energy in the lab frame not to exceed 10 GeV, at the cost of 0.028% of signal events.
The acceptance in \(\theta\) for charged particle tracks is slightly reduced such that each track is within the calorimeter acceptance, \(0.430<\theta_{\rm lab}<2.350\) rad. This fiducial requirement improves PID performance and data/MC agreement, and reduces the contamination from Bhabha events. The Bhabha contamination is further reduced by a factor of three by requiring \(-1<\cos\theta^{*}<0.9\) and \(-0.9<\cos\psi<1\).
The event selection is further refined by reconstructing the \(\rho\) on the signal side and requiring the reconstructed mass to exceed 300 MeV. This ensures \(\cos\psi\) remains physical. The reconstructed \(\rho\) is also required to exhibit an angle between its decay products in the c.m. frame satisfying \(\cos\alpha<0.9\), where \(\alpha\) is the angle between the charged and neutral pion. This reduces sensitivity to the MC modeling of events where the hadronic shower of the charged pion can overlap with the electromagnetic showers associated with the \(\pi^{0}\). As the true \(\tau\) direction is not reconstructed because of the missing neutrino, the reconstructed \(\rho\) direction is used to determine \(\cos\theta\). This approach was found through MC studies to supply the least biased estimate of the true \(\tau\) direction.
These requirements result in a final \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) selection that is 99.9% pure and selects 1.4% of all \(\tau^{+}\tau^{-}\) events. This corresponds to a 7.8% overall efficiency for selecting \(\tau^{\pm}\tau^{\mp}\to\rho^{\pm}\nu_{\tau}+\ell^{\mp}\nu_{\ell}\overline{\nu}\) events. The largest non-\(\tau\) background sources are Bhabha and \(\mu^{+}\mu^{-}\) events, each of which makes up 0.05% of the final sample. The final event selection break-down as predicted by the MC simulations is shown in Table 1.
Figure 5: Fit validation MC study: beam polarization outputs of fits as a function of input polarization, produced by mixing polarized \(\tau^{+}\tau^{-}\) MC as described in the text. The red points correspond to the measurements for positively charged signal candidates while the blue points correspond to the negatively charged candidates. Diagonal line plotted to show optimal correlation.
There is a small but statistically significant difference between the efficiency for selecting left and right polarized events: \(\Delta\varepsilon=0.011\%\pm 0.001\%\). This is small enough that it will have a negligible effect on the extracted polarization.
## VI Fit Results
As is evident in Figures 2 to 4, a left-handed electron beam generates distributions for \(\tau^{-}\) leptons that are the same as those for \(\tau^{+}\) leptons from a right-handed beam. Consequently, we fit the positively and negatively charged distributions separately. As the BABAR data set is split into chronological periods, runs 1 through 6, each run is treated independently. We thus obtain six measurements of the beam polarization and corresponding statistical and systematic uncertainties. Table 2 shows the fit results for each run with the associated statistical uncertainty only. Taking the weighted mean of these fit results gives the overall average beam polarization for PEP-II: \(\langle P\rangle=0.0035\pm 0.0024_{\rm stat}\). The two dimensional projections of \(\cos\theta^{*}\) and \(\cos\psi\) are shown for positively charged events in Fig. 7 and for negatively charged events in Fig. 8. The one dimensional projections are included in Appendix A.
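The combination is a standard inverse-variance weighted mean, which can be checked directly against the run-by-run averages of Table 2:

```python
import numpy as np

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its uncertainty."""
    w = 1.0 / np.asarray(errors) ** 2
    return np.sum(w * values) / np.sum(w), np.sqrt(1.0 / np.sum(w))

# Run-by-run average polarizations and statistical errors from Table 2
P_runs   = [-0.0014, 0.0041, 0.0048, -0.0011, 0.0052, 0.0084]
sig_runs = [0.010, 0.0059, 0.0083, 0.0049, 0.0045, 0.0062]
print(weighted_mean(P_runs, sig_runs))  # approximately (0.0035, 0.0024)
```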
## VII Systematic Uncertainty Studies
Each of the systematic uncertainties have been evaluated with a method that best suits the particular source of systematic uncertainty. The first method used is a controlled variation of MC distributions in order to adjust the fit templates and determine the effect on the beam polarization measurements. The second method is a variation of the selection applied to the variable. This is primarily used in regions where the selection is designed to remove uncontrolled sources of backgrounds or poor MC modeling. The final method is used to evaluate the PID requirements, where different selectors are employed and the different effects on the data and MC are used as an estimator of the bias introduced by the selectors. This section discusses these methods in more detail and how they apply to each variable. For all of the approaches, the intent is to capture an approximate 68% interval on the systematic variations. The systematic uncertainties are combined in a way that accounts for correlations between runs and summed in quadrature to deliver a total uncertainty. Table 3 shows a summary of all the systematic uncertainties associated with the polarization measurement.
### Controlled variation of MC templates
#### 1. \(\pi^{0}\) efficiency correction
The \(\pi^{0}\) selection efficiency is notably different in data and MC and so is corrected in the polarized \(\tau\) MC fit templates using the lab-frame momentum and lab-frame \(\cos\theta\) distributions in the unpolarized \(\tau\) MC and data. This is done by binning the \(\pi^{0}\) lab-frame momentum and \(\cos\theta\) data/unpolarized-MC ratios, for both \(\tau\) charges combined, to obtain a set of correction factors. These corrections are then applied to the polarized MC. A systematic uncertainty is evaluated by varying the correction factors up and down by the statistical uncertainty in each bin. This process results in a systematic uncertainty of \(\sigma=0.0013\). By combining both charged states in the correction procedure, the efficiency correction is independent of polarization effects. To verify the correction does not introduce a bias to the polarization measurement, the procedure was performed on a 70% polarized MC sample, which demonstrated a negligible effect on the polarization measurement.
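A sketch of the binned correction machinery is given below, assuming the unpolarized MC has already been normalized to the data luminosity; the uncertainty on each ratio is a simple counting approximation:

```python
import numpy as np

def efficiency_corrections(x_data, x_mc, bins, mc_weight=1.0):
    """Per-bin data/MC ratios used as correction factors, with a simple
    counting uncertainty on each ratio (MC assumed luminosity-normalized
    via mc_weight)."""
    n_d, _ = np.histogram(x_data, bins=bins)
    n_m, _ = np.histogram(x_mc, bins=bins)
    ratio = n_d / np.maximum(mc_weight * n_m, 1e-9)
    err = ratio * np.sqrt(1.0 / np.maximum(n_d, 1) + 1.0 / np.maximum(n_m, 1))
    return ratio, err

def apply_corrections(weights, x_mc, bins, ratio):
    """Reweight polarized-MC events by the correction factor of their bin."""
    idx = np.clip(np.digitize(x_mc, bins) - 1, 0, len(ratio) - 1)
    return weights * ratio[idx]
```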
#### 2. Neutral particle energy calibration
The energy calibration of photons in the BABAR detector is known to within 0.3% [18]. Increasing and decreasing the energy calibration of all photons in the MC results in a systematic uncertainty of \(\sigma=0.0010\).
#### 3. Boost correction
As the beam energies in PEP-II are asymmetric, a boost is required to move between the lab and c.m. frames. As a mismodeling of the boost vector can affect the polarization measurement, a sample of \(e^{+}e^{-}\rightarrow\mu^{+}\mu^{-}\) events was studied to quantify the effect. A small offset in the acollinearity of the muon pairs between the data and MC indicated a 4 MeV discrepancy in the z component of the boost vector. Correcting this offset in the MC templates shifts the data polarization fit by the assessed systematic uncertainty of \(\sigma=0.0004\).
\begin{table}
\begin{tabular}{l c} \hline \hline MC source & Fraction \\ \hline Bhabha & 0.046\% \\ \(\mu^{+}\mu^{-}\) & 0.046\% \\ \(u\overline{u}\),\(d\overline{d}\),\(s\overline{s}\) & 0.030\% \\ \(c\overline{c}\) & 0.006\% \\ \(b\overline{b}\) & 0.000\% \\ \(\tau^{+}\tau^{-}\) & 99.871\% \\ \hline \hline Tau Signal & Fraction \\ \hline \(\tau^{-}\to e^{-}\overline{\nu}_{e}\nu_{\tau}\) & 0.018\% \\ \(\tau^{-}\rightarrow\mu^{-}\overline{\nu}_{\mu}\nu_{\tau}\) & 0.031\% \\ \(\tau^{-}\rightarrow\pi^{-}\nu_{\tau}\) & 0.035\% \\ \(\tau^{-}\rightarrow\rho^{-}\nu_{\tau}\rightarrow\pi^{-}\pi^{0}\nu_{\tau}\) & 87.858\% \\ \(\tau\rightarrow(a_{1}\rightarrow\pi^{\pm}\pi^{0}\pi^{0})\nu_{\tau}\) & 9.785\% \\ \(\tau\rightarrow\) else & 2.145\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Fraction of event types expected in data in the final event selection based on MC efficiencies. The \(\tau\) pair events are further broken down to show the decay mode composition of the events selected on the signal side.
#### 4. Momentum calibration and resolution
The same selection of \(e^{+}e^{-}\rightarrow\mu^{+}\mu^{-}\) events is used to correct and quantify the momentum calibration and momentum resolution of the charged particles. This is done by first fitting the \(p_{CM}/p_{CM}^{\rm Max}\) distribution with a Crystal Ball function [33], where \(p_{CM}^{\rm Max}\) is the beam constrained maximum muon momentum (\(\sqrt{s}/2-2m_{\mu}\)). The fit is performed on both data and MC for each run and a scaling factor, \(S_{p}\), and resolution factor, \(R_{p}\), are extracted. \(S_{p}\) is the ratio of the mean values of the Gaussian components of the fits (\(S_{p}\equiv\overline{\mu}_{\rm data}/\overline{\mu}_{\rm MC}\)), and \(R_{p}\) is similarly the ratio of the widths (\(R_{p}\equiv\sigma_{\rm data}/\sigma_{\rm MC}\)). From these two factors, the momentum is corrected as:
\[p_{\rm recon}^{\rm corr}=(p_{\rm truth}-R_{p}(p_{\rm truth}-p_{\rm recon}))S_{p}, \tag{8}\]
where recon and truth refer respectively to MC that has undergone a detector response simulation or not. Typical values of \(S_{p}\) differ from 1 by \(\pm 0.1\%\) and the statistical uncertainties are \(\sim\)0.01%. The resolution factor is more significant, \(R_{p}\sim\)0.92, and has a statistical uncertainty of 0.1%. In order to evaluate a systematic uncertainty associated with the correction, the two factors are varied by the respective uncertainties found in the Crystal Ball fit. The shifts in the corrected momentum due to these variations result in a systematic uncertainty of \(\sigma=0.0004\) for the momentum calibration, and \(\sigma=0.0003\) for the momentum resolution.
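Equation (8) in code form, with illustrative values of the size quoted above for \(S_{p}\) and \(R_{p}\):

```python
def correct_momentum(p_truth, p_recon, S_p, R_p):
    """Eq. (8): momentum scale and resolution correction for MC, with S_p
    and R_p taken from the Crystal Ball fits described above."""
    return (p_truth - R_p * (p_truth - p_recon)) * S_p

# Illustrative values: S_p within ~0.1% of 1, R_p ~ 0.92
p_corr = correct_momentum(5.000, 4.950, S_p=1.001, R_p=0.92)
```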
#### 5. \(\tau\) direction definition
In order to evaluate the level of bias in \(\cos\theta\) due to the choice of the \(\rho\) direction as the estimator, we evaluate the acollinearity between the \(\rho\) and tagging lepton direction in data and MC. Adjusting the \(\rho\) direction in each event by \(\Delta\cos\theta=\pm 0.001\), as indicated by the study, results in a systematic uncertainty of \(\sigma=0.0003\).
#### 6. Angular resolution
The angular resolution in \(\theta\) is 0.897 mrad [34]. Varying the angles by this factor and evaluating the effect on the polarization measurement results in a systematic uncertainty of \(\sigma=0.0003\).
#### 7. Background contributions
The effects of the background contributions, primarily Bhabha and \(e^{+}e^{-}\rightarrow\mu^{+}\mu^{-}\) events, are evaluated conservatively by varying the weights of their respective templates in the polarization fit by a factor of 2. This method results in a systematic uncertainty of \(\sigma=0.0003\).
#### 8. \(\tau\) branching fraction
The \(\tau\) branching fraction uncertainties are evaluated by varying the weights of the \(\tau\) decay templates in the fit. The uncertainties in the world-average branching fractions [27] are used, obtaining a systematic uncertainty of \(\sigma=0.0002\).
### Variation of selection value
#### 1. Split-off modeling
In order to reduce sensitivity to the modeling of low energy neutrals emitted by charged particles interacting hadronically in the EMC, all energy depositions in the EMC within 40 cm of the charged particle at the EMC surface are recombined with the charged energy deposition. The distance in MC modeling agrees with the data distribution to within 0.72 cm, so a \(\pm 1\) cm variation is conservatively used for the systematic study. This results in a systematic uncertainty of \(\sigma=0.0011\).
#### 2. \(\pi^{0}\) mass acceptance window
The systematic uncertainty associated with the 115-155 MeV window for the reconstructed \(\pi^{0}\) mass is expected to be partially related to the overall photon energy calibration. However, the presence of two photons in the reconstruction also brings in correlations and angular dependencies. At the risk of partially double-counting systematic uncertainties, a separate systematic uncertainty is conservatively assigned to the acceptance window as well. This is done by varying the acceptance window by \(\pm 1\) MeV, based on the agreement between the average data and MC reconstructed mass. This variation is performed on each side of the acceptance, which results in a systematic uncertainty of \(\sigma=0.0008\).
#### 3. \(\rho\) decay product collinearity
The opening angle between the charged and neutral pion in the \(\rho\) decay in the c.m. frame is a particularly sensitive variable in this analysis.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Data Set (fb\({}^{-1}\)) & Positive Charge & Negative Charge & Average Polarization \\ \hline Run 1 (20.4) & 0.0018\(\pm\)0.014 & -0.0047\(\pm\)0.014 & -0.0014\(\pm\)0.010 \\ Run 2 (61.3) & 0.0075\(\pm\)0.0083 & 0.0007\(\pm\)0.0083 & 0.0041\(\pm\)0.0059 \\ Run 3 (32.3) & 0.0151\(\pm\)0.012 & -0.0047\(\pm\)0.012 & 0.0048\(\pm\)0.0083 \\ Run 4 (99.6) & -0.0035\(\pm\)0.0072 & 0.0010\(\pm\)0.0067 & -0.0011\(\pm\)0.0049 \\ Run 5 (132.3) & -0.0028\(\pm\)0.0062 & 0.0136\(\pm\)0.0064 & 0.0052\(\pm\)0.0045 \\ Run 6 (78.3) & 0.0036\(\pm\)0.0089 & 0.0133\(\pm\)0.0088 & 0.0084\(\pm\)0.0062 \\ \hline
424.18\(\pm\)1.8 & 0.0015\(\pm\)0.0034 & 0.0055\(\pm\)0.0034 & 0.0035\(\pm\)0.0024 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average beam polarization measured for each run period of the BABAR data set. The average for each run is obtained from the weighted mean of the positive and negative fit results. The reported uncertainties are statistical only.
Removing events with approximately collinear charged pions and \(\pi^{0}\)'s, by requiring \(\cos\alpha<0.9\), improved the data/MC agreement and reduced the fit discrepancies between the separate charged fits. A study of the modeling and the selection threshold was carried out, and the systematic uncertainty of \(\sigma=0.0007\) was determined by varying the \(\cos\alpha<0.9\) requirement by \(\pm 0.001\). This uncertainty in \(\cos\alpha\) was established by studying the difference between the mean of the reconstructed \(\rho\) mass
in data and MC, which is related to the uncertainty in \(\cos\alpha\). This uncertainty in \(\cos\alpha\) was further validated by comparing the shifts in data and MC means in the \(\cos\alpha\) distribution after the cut was applied.
#### 4. Merged \(\pi^{0}\) likelihood
The merged \(\pi^{0}\) candidates are associated with a likelihood score. At low likelihood values, a significant number of \(\mu^{+}\mu^{-}\) and Bhabha events can mimic the presence of a \(\pi^{0}\) in the final state. An acceptance value for the likelihood was established at the point where nearly all di-lepton events are excluded. A systematic variation in the acceptance was determined from the level of agreement in data/MC, and this variation results in a systematic uncertainty of \(\sigma=0.0007\).
#### 5. Track event transverse momentum
The transverse momentum of the charged particle tracks is closely tied to the overall momentum scale and resolution factors. However, at low values of \(p_{T}\), Bhabha and unmodeled two-photon final-states can contaminate the data set. Based on the comparison of data and MC, the 350 MeV minimum transverse momentum selection criterion was varied by \(\pm 2\) MeV. This results in a systematic uncertainty of \(\sigma=0.0008\).
#### 6. Total transverse momentum
An additional systematic uncertainty is required for the total \(p_{T}\) to account for any unmodeled effects contributing to the data set. For the total \(p_{T}\), a \(\pm 1\) MeV variation was used to estimate a \(\sigma=0.0003\) systematic uncertainty.
#### 7. Maximum calorimeter response
The requirement for events to deposit less than 10 GeV in the EMC removes about half of the remaining Bhabha backgrounds. A \(\pm 8\) MeV variation is used to assess the systematic uncertainty of \(\sigma=0.0003\). The \(\pm 8\) MeV variation is determined from the level of data and MC agreement in the means of the EMC energy distributions for events exceeding 10 GeV.
#### 8. \(\rho\) mass acceptance
The requirement for the reconstructed \(\rho\) mass to exceed 300 MeV is needed to ensure \(\cos\psi\) remains physical. The level of data to MC agreement in the mass distribution shows agreement at a \(\pm 2\) MeV level. Varying the selection by this amount results in a systematic uncertainty of \(\sigma=0.0003\).
#### 9. \(\cos\theta^{\star}\) and \(\cos\psi\) acceptance
The acceptance for \(\cos\theta^{\star}\) and \(\cos\psi\) is constrained in order to remove Bhabha events. As the Bhabha distributions are not well modeled, a variation of the selection value is used to evaluate the systematic uncertainty. MC comparisons with data found variations of \(\pm 0.002\) and \(\pm 0.01\), respectively, in the level of agreement. Performing the polarization fits with these variations yields systematic uncertainties of \(\sigma_{\theta^{\star}}=0.0002\) and \(\sigma_{\psi}=0.0002\).
### Lepton identification
The uncertainty associated with the different criteria in the lepton identification procedures was evaluated by switching between BABAR predefined selection algorithms. For both the muon and electron selection algorithms, the fit response was evaluated with the use of lepton selectors with more stringent requirements for classifying particles as leptons. This reduces the selection efficiency by \(\sim\)5% for the muons, and \(\sim\)1% for the electrons. Systematic uncertainties of \(\sigma=0.0012\) and \(\sigma=0.0005\) are assigned for the muon and electron identification respectively. This approach was limited by the statistical uncertainties associated with the change in selection efficiency rather than a systematic bias in the polarization fit.
### Other effects
In addition to the primary systematic sources, the efficiency of the \(\tau\) trigger decision, luminosity weightings, particle quality definitions, and effects of histogram rebinning are all evaluated. All of these effects are negligible compared to the uncertainties already discussed.
### Total systematic uncertainty
The total systematic uncertainty in the polarization measurement is found by summing the uncertainties in quadrature and results in \(\sigma_{\rm sys}=0.0029\). This result, and the breakdown of the uncertainties across all runs is presented in Table 3.
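As a cross-check, the quadrature sum can be reproduced directly from the combined column of Table 3; the minimal Python snippet below (illustrative only, not part of the analysis code) uses those entries.

```python
import math

# Combined-column systematic uncertainties from Table 3 (absolute polarization).
combined = [0.0013, 0.0012, 0.0011, 0.0010, 0.0008, 0.0007, 0.0006, 0.0005,
            0.0004, 0.0004, 0.0004, 0.0003, 0.0003, 0.0003, 0.0003, 0.0003,
            0.0003, 0.0003, 0.0002, 0.0002, 0.0002]

sigma_sys = math.sqrt(sum(s * s for s in combined))
print(f"sigma_sys = {sigma_sys:.4f}")  # 0.0029, matching the quoted total
```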
## VIII Conclusions and discussion
Using Tau Polarimetry to measure the average longitudinal polarization relies on the Standard Model, and so the existence of Beyond the Standard Model (BSM) physics could potentially appear as a non-zero value in the final result. At PEP-II, no beam polarization is expected, so a significant deviation from zero could indicate a BSM bias. Even though the measurement in this analysis is in good agreement with zero polarization, a number of potential BSM effects were considered. Those related to the coupling of the electron and \(\tau\) polarization through deviations from SM expectations in \(g^{\ell}_{V,A}\) (\(\ell=e,\mu,\tau\)) are some of the most likely potential sources. The current world-average on \(g^{\nu}_{V}\) suggests BSM effects could contribute a bias on the order of 0.0001, which is negligible for this analysis but could become a small fraction of the uncertainties for future experiments. A more substantial sector where BSM effects could arise is in the \(\tau\) Michel parameter measurements, and specifically the chirality, \(\xi\), of \(\nu_{\tau}\). In the SM, \(\xi=1\), and it has been experimentally constrained to \(\xi=0.985\pm 0.030\) [27]. Any deviations from 1 in this parameter directly bias the average beam polarization measurement, and Tau Polarimetry would benefit from an improved measurement of \(\xi\).
While this analysis assumes the polarization is only present in the \(e^{-}\) beam, Tau Polarimetry measures the average polarization of the mediator in the \(e^{+}e^{-}\) collision. In the case that both beams are polarized, Tau Polarimetry cannot disentangle the individual beam polarizations
without a secondary measurement of the \(e^{+}e^{-}\to\tau^{+}\tau^{-}\) cross-section.
In the development of this analysis, a number of key features were identified from which any future deployment of Tau Polarimetry at other \(e^{+}e^{-}\) colliders would benefit. One of them is the systematic cancellation obtained from combining the results of the fits from the two electric charges. This is due to the effects of beam polarization on the kinematic observables being inverted with the sign of \(\cos\theta\), or equivalently the electric charge. This means that any non-polarization sensitive biases will affect positively and negatively charged signals in opposite ways, and the biases will largely cancel out when averaged. Therefore, a large discrepancy between the polarization fits of the separate charges can indicate an uncontrolled source of bias.
A major source of systematic uncertainty is related to the MC modeling of photon and \(\pi^{0}\) processes. Modeling issues were observed in three related variables: the angular separation of the final-state charged and neutral pions, the overall neutral pion efficiency, and the modeling of the calorimeter response to neutral particles in close proximity to charged particles. These potential sources of systematic uncertainties could be significantly reduced by the choice of a final state without a neutral pion, such as the \(\tau^{-}\to\pi^{-}\nu_{\tau}\) decay. However, we found that the dependence on PID modeling as well as the increased dilepton backgrounds introduce additional biases.
In a SuperKEKB upgraded with electron beam polarization, Belle II will benefit from having an existing unpolarized data set to compare the performance of Tau Polarimetry with and without polarization. Assuming the beam polarization is flipped in a controlled manner, Belle II will also be able to demonstrate the performance of Tau Polarimetry on arbitrary beam polarizations by using sub-sets of the polarized data. This should be considered as a necessary step in verifying the performance of Tau Polarimetry at non-zero polarizations.
The average longitudinal polarization of PEP-II has been measured to be \(\langle P\rangle=0.0035\pm 0.0024_{\rm stat}\pm 0.0029_{\rm sys}\). This measurement demonstrates that a 0.3% absolute systematic uncertainty can be achieved on the beam polarization measurement with approximately 500 fb\({}^{-1}\) of data.
## IX Acknowledgments
We are grateful for the extraordinary contributions of our PEP-II colleagues in achieving the excellent luminosity and machine conditions that have made this work possible. The success of this project also relies critically on the expertise and dedication of the computing organizations that support BABAR, including GridKa, UVic HEP-RC, CC-IN2P3, and CERN. The collaborating institutions wish to thank SLAC for its support and the kind hospitality extended to them. We also wish to acknowledge the important contributions of J. Dorfan and our deceased colleagues E. Gabathuler, W. Innes, D.W.G.S. Leith, A. Onuchin, G. Piredda, and R. F. Schwitters.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Source & Run 1 & Run 2 & Run 3 & Run 4 & Run 5 & Run 6 & Combined \\ \hline \(\pi^{0}\) efficiency (VII.1.1) & 0.0025 & 0.0016 & 0.0013 & 0.0018 & 0.0006 & 0.0017 & **0.0013** \\ Muon PID (VII.3) & 0.0018 & 0.0018 & 0.0029 & 0.0011 & 0.0006 & 0.0016 & **0.0012** \\ Split-off modeling (VII.2.1) & 0.0015 & 0.0017 & 0.0016 & 0.0006 & 0.0016 & 0.0020 & **0.0011** \\ Neutral energy calibration (VII.1.2) & 0.0027 & 0.0012 & 0.0023 & 0.0009 & 0.0014 & 0.0008 & **0.0010** \\ \(\pi^{0}\) mass (VII.2.2) & 0.0018 & 0.0028 & 0.0010 & 0.0005 & 0.0004 & 0.0004 & **0.0008** \\ \(\cos\alpha\) (VII.3.3) & 0.0015 & 0.0009 & 0.0016 & 0.0007 & 0.0005 & 0.0005 & **0.0007** \\ \(\pi^{0}\) likelihood (VII.2.4) & 0.0015 & 0.0009 & 0.0015 & 0.0006 & 0.0003 & 0.0010 & **0.0006** \\ Electron PID (VII.3) & 0.0011 & 0.0020 & 0.0008 & 0.0006 & 0.0005 & 0.0001 & **0.0005** \\ Particle transverse momentum (VII.2.5) & 0.0012 & 0.0007 & 0.0009 & 0.0002 & 0.0003 & 0.0006 & **0.0004** \\ Boost modeling (VII.3) & 0.0004 & 0.0019 & 0.0003 & 0.0004 & 0.0004 & 0.0004 & **0.0004** \\ Momentum calibration (VII.4) & 0.0001 & 0.0014 & 0.0005 & 0.0002 & 0.0001 & 0.0003 & **0.0004** \\ Max EMC acceptance (VII.2.7) & 0.0001 & 0.0011 & 0.0008 & 0.0001 & 0.0002 & 0.0005 & **0.0003** \\ \(\tau\) direction definition (VII.4.5) & 0.0003 & 0.0007 & 0.0008 & 0.0003 & 0.0001 & 0.0004 & **0.0003** \\ Angular resolution (VII.4.6) & 0.0003 & 0.0008 & 0.0003 & 0.0003 & 0.0002 & 0.0003 & **0.0003** \\ Background modeling (VII.4.7) & 0.0005 & 0.0006 & 0.0010 & 0.0002 & 0.0003 & 0.0003 & **0.0003** \\ Event transverse momentum (VII.2.6) & 0.0001 & 0.0013 & 0.0005 & 0.0002 & 0.0002 & 0.0004 & **0.0003** \\ Momentum resolution (VII.4) & 0.0001 & 0.0012 & 0.0004 & 0.0002 & 0.0001 & 0.0005 & **0.0003** \\ \(\rho\) mass acceptance (VII.2.8) & 0.0000 & 0.0011 & 0.0003 & 0.0001 & 0.0002 & 0.0005 & **0.0003** \\ \(\tau\) branching fraction (VII.4.8) & 0.0001 & 0.0007 & 0.0004 & 0.0002 & 0.0002 & & **0.0002** \\ \(\cos\theta^{\star}\) acceptance (VII.2.9) & 0.0002 & 0.0006 & 0.0004 & 0.0001 & 0.0001 & 0.0004 & **0.0002** \\ \(\cos\psi\) acceptance (VII.2.9) & 0.0002 & 0.0003 & 0.0002 & 0.0002 & 0.0002 & 0.0003 & **0.0002** \\ \hline Total & 0.0058 & 0.0062 & 0.0054 & 0.0030 & 0.0026 & 0.0038 & **0.0029** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of systematic uncertainties associated with the Tau Polarimetry polarization measurement. The combined column accounts for correlations between runs in the combination. |
2305.01496 | Gravitational Redshift Detection from the Magnetic White Dwarf Harbored
in RX J1712.6-2414 | Gravitational redshift is a fundamental parameter that allows us to determine
the mass-to-radius ratio of compact stellar objects, such as black holes,
neutron stars, and white dwarfs (WDs). In the X-ray spectra of the close binary
system, RX J1712.6$-$2414, obtained from the Chandra High-Energy Transmission
Grating observation, we detected significant redshifts for characteristic
X-rays emitted from hydrogen-like magnesium, silicon ($\Delta E/E_{\rm rest}
\sim 7 \times 10^{-4}$), and sulfur ($\Delta E/E_{\rm rest} \sim 15 \times
10^{-4}$) ions, which are over the instrumental absolute energy accuracy
(${\Delta E/E_{\rm rest} \sim 3.3} \times 10^{-4}$). Considering some possible
factors, such as Doppler shifts associated with the plasma flow, systemic
velocity, and optical depth, we concluded that the major contributor to the
observed redshift is the gravitational redshift of the WD harbored in the
binary system, which is the first gravitational redshift detection from a
magnetic WD. Moreover, the gravitational redshift provides us with a new method
of the WD mass measurement by invoking the plasma-flow theory with strong
magnetic fields in close binaries. Regardless of large uncertainty, our new
method estimated the WD mass to be $M_{\rm WD}> 0.9\,M_{\odot}$. | Takayuki Hayashi, Hideyuki Mori, Koji Mukai, Yukikatsu Terada, Manabu Ishida | 2023-04-28T16:46:06Z | http://arxiv.org/abs/2305.01496v1 | # Gravitational Redshift Detection from the Magnetic White Dwarf Harbored in RX J1712.6\(-\)2414
###### Abstract
Gravitational redshift is a fundamental parameter that allows us to determine the mass-to-radius ratio of compact stellar objects, such as black holes, neutron stars, and white dwarfs (WDs). In the X-ray spectra of the close binary system, RX J1712.6\(-\)2414, obtained from the _Chandra_ High-Energy Transmission Grating observation, we detected significant redshifts for characteristic X-rays emitted from hydrogen-like magnesium, silicon (\(\Delta E/E_{\rm rest}\sim 7\times 10^{-4}\)), and sulfur (\(\Delta E/E_{\rm rest}\sim 15\times 10^{-4}\)) ions, which are over the instrumental absolute energy accuracy (\(\Delta E/E_{\rm rest}\sim 3.3\times 10^{-4}\)). Considering some possible factors, such as Doppler shifts associated with the plasma flow, systemic velocity, and optical depth, we concluded that the major contributor to the observed redshift is the gravitational redshift of the WD harbored in the binary system, which is the first gravitational redshift detection from a magnetic WD. Moreover, the gravitational redshift provides us with a new method of the WD mass measurement by invoking the plasma-flow theory with strong magnetic fields in close binaries. Regardless of large uncertainty, our new method estimated the WD mass to be \(M_{\rm WD}>0.9\,M_{\odot}\).
Takayuki Hayashi, Hideyuki Mori, Koji Mukai, Yukikatsu Terada, and Manabu Ishida
## 1 Introduction
Main-sequence (MS) stars having a mass of less than \(8\,M_{\odot}\) will evolve into a white dwarf (WD), a compact object with a radius of the order of \(10^{4}\,\)km. Moreover, in binaries, even MS stars of more than \(8\,M_{\odot}\) can evolve into WDs as a result of binary evolution. The WD is supported against its gravity by electron degeneracy pressure. The WD in a close binary system is fed by mass accretion from its companion star. Hence, as the WD becomes more massive, its radius shrinks to reinforce the degeneracy pressure. However, the WD mass has an upper limit (the Chandrasekhar mass, \(\sim 1.38\,M_{\odot}\)) beyond which the degeneracy pressure can no longer support its gravity (Chandrasekhar 1931). A WD exceeding the mass limit is expected to bring about an explosion called a type-Ia supernova or an accretion-induced collapse into a neutron star. Therefore, the mass is a key parameter of WDs. The type-Ia supernovae are used to calculate the distances from our galaxy to their host galaxies, demonstrating the accelerating expansion of the Universe (Riess et al., 1998; Perlmutter et al., 1999).
A gravitational redshift enables us to directly measure the WD mass-to-radius ratio. In the weak gravitational regime, the gravitational redshift can be written as
\[v_{g}=cz=\frac{c\Delta E}{E_{\rm obs}}\simeq\frac{c\Delta E}{E_{\rm rest}}=0.635\,\frac{M/M_{\odot}}{R/R_{\odot}}\,{\rm km\,s^{-1}}, \tag{1}\]
where \(c\) is the speed of light, \(z\) is the redshift parameter, \(E_{\rm obs}\) and \(E_{\rm rest}\) are observed and rest-frame energies, respectively, and \(\Delta E\equiv-(E_{\rm obs}-E_{\rm rest})\). Since Einstein proposed three measurements to test general relativity, one of which is to measure the gravitational redshift from stars (Einstein, 1916), the gravitational redshift has been employed to calculate WD masses. For example, the gravitational redshift of the first known WD, Sirius B, is \(v_{g}=cz=c\times(2.688\pm 0.026)\times 10^{-4}=80.65\pm 0.77\,{\rm km\,s^{-1}}\), and thus the WD mass is \(1.017\pm 0.025\,{\rm M_{\odot}}\) (Joyce et al., 2018). This technique has been applied to WDs in common proper motion binaries (Greenstein & Trimble, 1967; Koester
1987; Silvestri et al., 2001), in binaries in which the motion of both constituent stars was well determined (Sion et al., 1998; Long and Gilliland, 1999; Smith et al., 2006; Steeghs et al., 2007; van Spaandonk et al., 2010; Parsons et al., 2012, 2017; Joyce et al., 2018), or in an open cluster (Greenstein and Trimble, 1967; Wegner et al., 1989; Claver et al., 2001; Pasquini et al., 2019), where the Doppler shift can be precisely measured to break the degeneracy with the gravitational redshift. However, there is no detection report of the gravitational redshift from highly magnetized (\(B\gtrsim 0.1\) MG), spun-up (\(P_{\rm spin}\lesssim 10^{3}\) s) WDs, or from a WD in the X-ray band.
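For orientation, the constant in equation (1) and the Sirius B value quoted above can be checked in a few lines of Python; in this sketch the Sirius B radius (~0.008 \(R_{\odot}\)) is an assumed, literature-typical value rather than a number taken from this paper.

```python
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10        # speed of light, cm s^-1
M_sun = 1.989e33    # solar mass, g
R_sun = 6.957e10    # solar radius, cm

def v_grav(m_over_msun, r_over_rsun):
    """Weak-field gravitational redshift v_g = GM/(Rc), in km/s (equation 1)."""
    return G * m_over_msun * M_sun / (r_over_rsun * R_sun * c) / 1.0e5

print(v_grav(1.0, 1.0))      # ~0.64 km/s: the constant appearing in equation (1)
print(v_grav(1.017, 0.008))  # ~81 km/s: close to the Sirius B value quoted above
```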
Magnetic cataclysmic variables (mCVs) harbor a highly magnetized WD, which is often highly spun up via mass accretion. The cataclysmic variables (CVs) are close binaries consisting of a WD and a late-type companion star (Mukai, 2017). In the CVs, the accreting gas from the companion brings angular momentum to the WD, causing the WD to spin up and frequently forming an accretion disk around the WD. Furthermore, as the WD shrinks owing to mass gain, its magnetic field should be strengthened (Das et al., 2013). Such a strong magnetic field either prevents the accretion disk from reaching the WD surface (intermediate polar: IP; Patterson, 1994) or prevents the formation of the accretion disk altogether (polar; Cropper, 1990). The accreting gas is captured along the magnetic field and accelerated by the WD potential to a velocity greater than the sound velocity. Hence, a standing shock forms near the WD surface. According to the Rankine-Hugoniot relations, the post-shock gas is heated up to \(\sim 10^{8}\) K, and then the gas is highly ionized. While the post-shock gas is descending to the WD, the plasma is cooled down by emitting X-rays. The electrons and ions are gradually recombined so that the hydrogen- and helium-like (H- and He-like) ions of various elements (e.g., neon (Ne), magnesium (Mg), silicon (Si), sulfur (S), argon (Ar), and iron (Fe)) are produced. The recombined ions return to their ground states by emitting X-ray lines, as is indicated by observed X-ray spectra. The ratio of the line intensities is contingent upon the ion population in the plasma flow. The features in the X-ray spectra, such as the cut-off energy of the continuum spectra, line intensity ratio, and line energy shift caused by the Doppler shift and/or the gravitational redshift, allow us in principle to measure the temperature, density, velocity, and gravitational potential in the post-shock plasma, all of which are linked to the WD potential, that is, the WD mass.
RX J1712.6\(-\)2414, also known as V2400 Oph, is an IP that was discovered in the _ROSAT_ All-Sky Survey (Buckley et al., 1995). Follow-up optical/near-infrared observations detected a circular polarization with a spin period of 927 s, indicating the WD's relatively strong magnetic field of \(\left(9\hbox{--}27\right)\times 10^{6}\) G, and that we always see only one of the magnetic poles; the accretion flow onto this magnetic pole is nearly parallel to our line of sight. This system has been known as a diskless IP since it does not show the 927-s spin modulation in most cases, but shows the synodic 1003-s modulation in the X-ray. On the other hand, the spin modulation was observed in 2001 by _XMM-Newton_, and in 2005 and 2014 by _Suzaku_ (Joshi et al., 2019). However, the best-fit spectral model parameters of the 2001 _XMM-Newton_ observation are consistent with those of the 2000 _XMM-Newton_ observation where the spin modulation was not detected (Joshi et al., 2019), which means that whether or not the disk forms has only a minor effect on the plasma-flow structure. We observed RX J1712.6\(-\)2414 with the High-Energy Transmission Grating (HETG) of the _Chandra_ observatory to investigate the velocity profile of the plasma in the accretion flow. The HETG spectra potentially allow us to measure Doppler shifts of \(\sim 30\) km s\({}^{-1}\) (Ishibashi et al., 2006).
## 2 Observation and Data Reduction
We carried out the X-ray observation of RX J1712.6\(-\)2414 with _Chandra_ in May 2020. Table 1 shows the observation log of RX J1712.6\(-\)2414. The observation was divided into six intervals with the following observation IDs: 21274, 23038, 23039, 23244, 23267, and 23268. The total exposure time was 169.15 ks. The HETG modules were inserted between the X-ray optics and the CCD chips. The HETG consists of two independent gratings: the Medium Energy Grating (MEG) covers the 0.4-5.0 keV band with an energy resolution \(\Delta E/E\sim 1/300\) at 2 keV, and the High Energy Grating (HEG) covers the 0.8-10 keV band with \(\Delta E/E\sim 1/200\) at 6 keV. We chose the FAINT and Timed Exposure modes for the instrumental setup. First, applying the latest calibration files of the instruments (the 4.9.5 version), we reprocessed the observational data with CIAO v4.12 (Fruscione et al., 2006). We did not apply any filters to the data. After applying the barycentric correction, we then created the averaged X-ray spectra, following the instructions of the _Chandra_ analysis1.
Footnote 1: [https://cxc.harvard.edu/ciao/](https://cxc.harvard.edu/ciao/)
Figure 1 shows X-ray HETG spectra around the H-like K\({}_{\alpha}\) lines of Ne, Mg, Si, S, Ar, and Fe. We focused on these emission lines to measure each energy shift because they consist only of K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\), whose intensity ratio is invariant to the plasma temperature or density, and can be easily separated from lines emitted by ions in other states. To determine their energy centroids, we extracted the spectra in the energy ranges of 0.98-1.07, 1.44-1.51, 1.96-2.05, 2.52-2.72, 3.18-3.44, and 6.85-7.10 keV for Ne, Mg, Si, S, Ar, and Fe, respectively. We fitted a power-law function to the continuum and two Gaussians incorporating the redshift parameter (zgauss) to the H-like K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\) lines by using Xspec (version 12.11.0; Arnaud 1996). The Gaussian centers were fixed at the rest-frame energies of the K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\) lines tabulated in Table 2, and their widths were fixed at 0. The redshift parameter was common to the H-like K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\) for each ion, and left free to fit the model to the data. Their intensity ratio was fixed at the nominal value of \(I_{\rm K_{\alpha 1}}/I_{\rm K_{\alpha 2}}=2\). The energy ranges we selected were narrow; no photoelectric absorption was introduced.
While the energy shifts of the K\({}_{\alpha}\) lines from H-like Ne, Ar, and Fe ions are marginal because of insufficient photon statistics, those from H-like Mg, Si, and S ions are statistically constrained as shown in Table 2: \(\Delta E/E_{\rm rest}=6.9^{+0.0}_{-0.2}\times 10^{-4}\) for Mg, \(7.4^{+0.0}_{-0.7}\times 10^{-4}\) for Si, and \(15.4^{+5.5}_{-4.6}\times 10^{-4}\) for S, which correspond to the line-of-sight velocities of \(2.1^{+0.0}_{-0.1}\times 10^{2}\), \(2.2^{+0.0}_{-0.2}\times 10^{2}\), and \(4.6^{+1.7}_{-1.4}\,\rm km\,s^{-1}\times 10^{2}\), respectively. The errors represent the statistical 90% confidence level.
The instrumental absolute energy accuracy is \(\Delta E/E_{\rm rest}\simeq\pm 3.3\times 10^{-4}\) (i.e., \(\pm 1\times 10^{2}\,\rm km\,s^{-1}\)) 2. Even considering the instrumental accuracy, the emission lines demonstrate an energy shift of \(\Delta E/E_{\rm rest}>3\times 10^{-4}\) and a line-of-sight velocity of \(v>1\times 10^{2}\,\rm km\,s^{-1}\).
Footnote 2: [https://cxc.harvard.edu/proposer/POG/html/index.html](https://cxc.harvard.edu/proposer/POG/html/index.html)
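The conversion of the fitted redshifts into line-of-sight velocities is a simple multiplication by the speed of light; the illustrative snippet below reproduces the numbers quoted above from the Table 2 best-fit values.

```python
c_km_s = 2.998e5  # speed of light in km/s

# Best-fit redshifts z ~ Delta E / E_rest from Table 2
for ion, z in (("Mg", 6.9e-4), ("Si", 7.4e-4), ("S", 15.4e-4)):
    print(f"{ion}: v = {c_km_s * z:.0f} km/s")      # ~207, 222, 462 km/s

print(f"accuracy: {c_km_s * 3.3e-4:.0f} km/s")      # ~99 km/s instrumental floor
```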
## 4 Discussion
\begin{table}
\begin{tabular}{c c c c} \hline Observation ID & Exposure (ks) & Start date & Instrument \\ \hline
21274 & 24.7 & 2020-05-30 07:31:32 & HETG\({}^{a}\) \\
23038 & 22.9 & 2020-05-29 03:35:56 & HETG\({}^{a}\) \\
23039 & 37.6 & 2020-05-06 04:05:57 & HETG\({}^{a}\) \\
23244 & 37.6 & 2020-05-06 21:24:29 & HETG\({}^{a}\) \\
23267 & 24.7 & 2020-05-31 01:29:08 & HETG\({}^{a}\) \\
23268 & 21.7 & 2020-05-31 18:53:25 & HETG\({}^{a}\) \\ \hline \multicolumn{4}{l}{\({}^{a}\)High-Energy Transmission Grating} \\ \end{tabular}
\end{table}
Table 1: _Chandra_ observations of RX J1712.6\(-\)2414
\begin{table}
\begin{tabular}{c c c c c c c} \hline & Ne & Mg & Si & S & Ar & Fe \\ \hline Significance (\(\sigma\)) & 3.5 & 7.2 & 11.6 & 5.8 & 2.3 & 4.9 \\ \(E_{\rm rest}\) of K\({}_{\alpha 1}\) (keV)\({}^{1}\) & 1.0220 & 1.4726 & 2.0061 & 2.6227 & 3.3230 & 6.9732 \\ \(E_{\rm rest}\) of K\({}_{\alpha 2}\) (keV)\({}^{1}\) & 1.0215 & 1.4717 & 2.0043 & 2.6197 & 3.3182 & 6.9520 \\ Energy centroid of K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\) (keV)\({}^{2}\) & 1.0218 & 1.4723 & 2.0055 & 2.6217 & 3.3214 & 6.9660 \\ \(z\simeq\Delta E/E_{\rm rest}\) (\(\times 10^{-4}\))\({}^{2}\) & 1.0\({}^{+\infty}_{-\infty}\) & 6.9\({}^{+0.0}_{-0.2}\) & 7.4\({}^{+0.0}_{-0.7}\) & 15.4\({}^{+5.5}_{-4.6}\) & 10.5\({}^{+12.2}_{-12.4}\) & 1.2\({}^{+\infty}_{-\infty}\) \\ \(v\) (\(\times 10^{2}\) km s\({}^{-1}\))\({}^{2}\) & 0.3\({}^{+\infty}_{-\infty}\) & 2.1\({}^{+0.0}_{-0.1}\) & 2.2\({}^{+0.0}_{-0.2}\) & 4.6\({}^{+1.7}_{-1.4}\) & 3.1\(\pm\) 3.7 & 0.4\({}^{+\infty}_{-\infty}\) \\ Energy centroid of K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\) (keV)\({}^{3}\) & & 1.4722 & 2.0052 & 2.6212 & & \\ \(z\simeq\Delta E/E_{\rm rest}\) (\(\times 10^{-4}\))\({}^{3}\) & & 6.4\({}^{+0.4}_{-0.0}\) & 3.4\({}^{+4.0}_{-0.4}\) & 16.0\({}^{+0.0}_{-6.1}\) & & \\ \(v\) (\(\times 10^{2}\) km s\({}^{-1}\))\({}^{3}\) & & 1.9\({}^{+0.1}_{-0.0}\) & 1.0\({}^{+1.1}_{-0.1}\) & 4.8\({}^{+0.0}_{-1.8}\) & & \\ \hline \multicolumn{4}{l}{\({}^{1}\) AtomDB: [http://www.atomdb.org](http://www.atomdb.org)} \\ \multicolumn{4}{l}{\({}^{2}\) Assuming the nominal intensity ratio \(I_{\rm K_{\alpha 1}}/I_{\rm K_{\alpha 2}}=2\)} \\ \multicolumn{4}{l}{\({}^{3}\) Assuming the optically-thick intensity ratio \(I_{\rm K_{\alpha 1}}/I_{\rm K_{\alpha 2}}=1\)} \\ \end{tabular}
\end{table}
Table 2: Summary of fitting to the H-like ion emission lines.
Figure 1: X-ray spectra obtained from the _Chandra_ High Energy Grating (HEG; black) and Medium Energy Grating (MEG; red). We show the spectra including emission lines from H-like ions of Ne, Mg, Si, S, Ar, and Fe. The thick solid lines are the best-fit models. Meanwhile, the thin solid lines are the components of the best-fit models, consisting of a power-law function and two Gaussians. The residuals from the best-fit model are shown at the bottom of each panel. We also show the position of the rest-frame energy of each emission line by a vertical dotted line and the energy shift value.
We found redshifts of \(z\simeq\Delta E/E_{\rm rest}=7\hbox{--}15\times 10^{-4}\), corresponding to line-of-sight velocities of 200-450 km s\({}^{-1}\), with the H-like K\({}_{\alpha}\) lines of Mg, Si, and S from RX J1712.6\(-\)2414. The detected redshifts are statistically significant and surpass the instrumental accuracy of \(\Delta E/E_{\rm rest}\simeq 3\times 10^{-4}\) (i.e., 100 km s\({}^{-1}\) in the line-of-sight velocity).
We realized that the measured redshifts cannot be explained by the current accretion models. The magnetically channeled accretion plasma flow is modeled by energy and momentum equations combined with the equation of state of the ideal gas, assuming that the plasma velocity is zero at the WD surface (Aizu, 1973) (see Appendix). The solutions determine the temperature, density, and velocity profiles of the plasma along the flow. The maximum temperature of the post-shock plasma in RX J1712.6\(-\)2414 was measured to be 23-26 keV (Yuasa et al., 2010; Xu et al., 2016; Joshi et al., 2019). Therefore, the WD mass is never less than 0.6 \(M_{\odot}\) based on the jump conditions of the strong shock (equations 5 and 8). Figure 2 shows the relations between the temperature and \(\Delta E/E_{\rm rest}\) (i.e., the line-of-sight velocity) along the flow with the WD mass of 0.6 \(M_{\odot}\). Note that we assume here the exact pole-on geometry, so the line-of-sight velocity equals the actual one. A simple analytic model with the isobaric approximation (Frank et al., 2002) shows that, compared at a given temperature, the accreting plasma is faster for a lighter WD, as indicated by equation 14. The plasma velocity measured with an emission line is the velocity of the local plasma whose temperature is at the emissivity maximum of the corresponding line (hereinafter called the line peak emissivity temperature), where the corresponding ion species dominates the ion population; it is obtained from the centroid energy of that line. We note that the velocity is independent of the cooling function (Equation 14) and, therefore, whether the cyclotron cooling is significant does not matter. In fact, the numerical calculation shows that the temperature\(-\)velocity relations involving and not involving the cyclotron cooling are almost identical for the high specific accretion rate (i.e., accretion rate per unit area) of \(a=1\) g cm\({}^{-2}\) s\({}^{-1}\) (Figure 2). Moreover, the plasma is more quickly decelerated than the prediction of the isobaric model because the pressure increases as the plasma descends. A small specific accretion rate (\(a=0.01\) g cm\({}^{-2}\) s\({}^{-1}\) in Figure 2) enlarges the increase in the pressure and makes the plasma velocity even slower (Hayashi and Ishida, 2014). In summary, the fastest flow is realized by the lightest WD mass (0.6 \(M_{\odot}\) for RX J1712.6\(-\)2414) and a high enough specific accretion rate. The observed line-of-sight velocities of \(\gtrsim 1\times 10^{2}\) km s\({}^{-1}\) are significantly faster than the theoretical fastest flow (\(\simeq 30\) km s\({}^{-1}\) and 80 km s\({}^{-1}\) at the line peak emissivity temperatures of Mg and S, respectively).
We investigated other possibilities that could increase the \(\Delta E/E_{\rm rest}\). RX J1712.6\(-\)2414 is located at (\(l\), \(b\)) = (+359\(\fdg\)87, +8\(\fdg\)74) in Galactic coordinates, and its distance and proper motions (\(\mu_{\rm ra}\), \(\mu_{\rm dec}\)) are 699.7\({}^{+9.7}_{-10.8}\) pc and (-1.765, 2.557) mas yr\({}^{-1}\) (Gaia Collaboration et al., 2022; Bailer-Jones et al., 2021), respectively. Assuming that the space motion along the Galactic pole is \(W=0\), the radial velocity is calculated to be \(-2\) km s\({}^{-1}\). Indeed, optical spectra obtained at the South African Astronomical Observatory revealed that the systemic velocity is less than 20 km s\({}^{-1}\) (Buckley et al., 1995). Furthermore, the line-of-sight velocity associated with the binary motion is nullified in our phase-averaged spectra and does not affect the result.
Although significant optical depth shifts the energy centroids of the emission lines (Del Zanna et al., 2002), it is not enough to explain the observed redshifts. The K\({}_{\alpha}\) lines are ensembles of the K\({}_{\alpha 2}\) and K\({}_{\alpha 1}\) lines, whose intensity ratio (\(I_{\rm K_{\alpha 2}}/I_{\rm K_{\alpha 1}}\)) affects the energy centroid of the K\({}_{\alpha}\) lines. The optical depth at the K\({}_{\alpha 1}\) line is double that at the K\({}_{\alpha 2}\) line. Therefore, the K\({}_{\alpha 1}\) line is more easily attenuated than the K\({}_{\alpha 2}\) line; thus, the K\({}_{\alpha}\) line energy centroid shifts toward the red side. This effect approximately halves \(I_{\rm K_{\alpha 1}}/I_{\rm K_{\alpha 2}}\) and makes it unity at the optically thick limit (Kastner and Kastner, 1990; Mathioudakis et al., 1999) (see Appendix). We fitted the power-law and 2-Gaussians model in the same manner as in §3 by assuming \(I_{\rm K_{\alpha 1}}/I_{\rm K_{\alpha 2}}=1\). The computed \(\Delta E/E_{\rm rest}\) and velocity are also presented in Table 2, which still shows velocities of \(\gtrsim 1\times 10^{2}\) km s\({}^{-1}\). Although all of them are consistent with the corresponding results with \(I_{\rm K_{\alpha 1}}/I_{\rm K_{\alpha 2}}=2\) within the statistical error, the H-like Si K\({}_{\alpha}\) line gave us a best-fit \(\Delta E/E_{\rm rest}\) different from the corresponding value by \(4\times 10^{-4}\). However, this line showed a fine structure in the fitting residual (see Figure 5), implying that the optically thick limit (i.e., \(I_{\rm K_{\alpha 2}}/I_{\rm K_{\alpha 1}}=1\)) is not a good assumption.
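The opacity argument can be made quantitative with the rest energies of Table 2. The short calculation below (a sketch using the H-like Si K\({}_{\alpha 1}\)/K\({}_{\alpha 2}\) energies) shows that going from the optically thin ratio of 2 to the optically thick ratio of 1 shifts the centroid by only ~1.5×10⁻⁴, well short of the measured ~7×10⁻⁴.

```python
# Rest-frame energies of H-like Si K_alpha1 and K_alpha2 (keV), from Table 2.
E1, E2 = 2.0061, 2.0043

def centroid(ratio):
    """Intensity-weighted K_alpha centroid for I_Ka1 / I_Ka2 = ratio."""
    return (ratio * E1 + E2) / (ratio + 1.0)

thin, thick = centroid(2.0), centroid(1.0)
print(thin, thick)            # 2.0055 and 2.0052 keV, as listed in Table 2
print((thin - thick) / thin)  # ~1.5e-4: apparent red shift from opacity alone
```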
The H-like K\({}_{\alpha}\) lines from the pre-shock accreting gas do not contaminate those from the post-shock plasma. The pre-shock accreting plasma close to the shock is photoionized by the X-ray irradiation from the post-shock plasma and emits H-like K\({}_{\alpha}\) lines (Luna et al., 2010). However, the velocity of the pre-shock gas is 4.1\(\times 10^{3}\) km s\({}^{-1}\) and \(\Delta E/E_{\rm rest}=1.4\times 10^{-2}\) in the pole-on geometry, even if the WD is as light as possible, i.e., 0.6 \(M_{\odot}\). Such a highly Doppler-shifted line is spectroscopically well separated from the line of the same ion emitted by the post-shock plasma.
Consequently, we conclude that the measured \(\Delta E/E_{\rm rest}\) requires a gravitational redshift caused by the WD. Adding the gravitational redshift to our accretion-flow model (Hayashi & Ishida, 2014), and taking into account the systemic velocity and the instrumental absolute energy accuracy, we estimated the WD mass of RX J1712.6\(-\)2414 to be \(>0.9\,M_{\odot}\) (Figure 3). The WD mass estimated by previous works (e.g., \(0.62^{+0.06}_{-0.05}\,M_{\odot}\) (Yuasa et al., 2010), \(0.72\pm 0.05\,M_{\odot}\) (Suleimanov et al., 2019), and \(0.67^{+0.06}_{-0.05}\,M_{\odot}\) (Shaw et al., 2020)) is lighter than ours. One plausible explanation for the discrepancy is the cyclotron cooling that softens the X-ray spectrum and was not considered in the previous mass estimations. The magnetic field of RX J1712.6\(-\)2414 is \((9-27)\times 10^{6}\,\)G and comparable to that of polars, in which the cyclotron cooling is significant (Wu et al., 1994). Another plausible cause is the X-ray reflection, which is maximized at the pole-on
Figure 2: \(\Delta E/E_{\rm rest}\) and the line-of-sight plasma velocity measured with the emission lines of H-like Mg, Si, and S ions (squares). The temperature of each data point represents the line peak emissivity temperature of the corresponding ion (see the text). Each error bar shows the sum of the statistical error at the 90% confidence level and the instrumental absolute energy uncertainty. The lines represent theoretical temperature-velocity relations (Hayashi & Ishida, 2014) of the plasma flow with the WD mass of \(0.6\,M_{\odot}\) assuming the exact pole-on geometry: thick solid lines are the cases of \(a=1\,{\rm g\,cm^{-2}\,s^{-1}}\) and \(B=0\,\)G (black), \(a=1\,{\rm g\,cm^{-2}\,s^{-1}}\) and \(B=30\,\)MG (red), and \(a=0.01\,{\rm g\,cm^{-2}\,s^{-1}}\) and \(B=0\,\)G (blue). The black dash-dotted line shows the isobaric accretion flow.
geometry and causes complicated systematic errors (Hayashi et al., 2021). X-ray reflection was not taken into account in previous studies.
Lastly, we note that more precise spectral modeling would reduce the contribution of the plasma velocity to the redshift, improving the accuracy of the gravitational redshift estimation. We assume that the H-like K\({}_{\alpha}\) lines are emitted at the corresponding line peak emissivity temperature. However, the lines are in fact emitted over a temperature range specific to each ion. Accreting plasma adjacent to the WD has a higher density and thus emits more intense X-rays. Meanwhile, this plasma is slower, so the velocity average weighted by the X-ray intensity is lower than the velocity at the line peak emissivity temperature used in Figures 2 and 3. More precise spectral modeling may therefore necessitate a greater WD mass, but this is beyond our main aim of reporting the gravitational redshift detection.
Figure 3: Same as Figure 2, except for theoretical calculations involving the binary system motion and the gravitational redshift. Thick solid lines represent the calculations with the WD mass of \(0.9\,M_{\odot}\) (red), \(1.3\,M_{\odot}\) (black), and \(1.4\,M_{\odot}\) (blue). The dashed, dotted, and thin lines show the components of the gravitational redshift, the plasma flow velocity (Hayashi & Ishida, 2014), and the binary systemic velocity, respectively. The pole-on geometry is assumed for the calculations.
## 5 Conclusion
We observed the diskless intermediate polar RX J1712.6-2414 with the High-Energy Transmission Grating (HETG) of the Chandra Observatory to study the velocity profile of the plasma in the accretion flow. We found significant redshifts for the K\(\alpha\) lines of hydrogen-like magnesium, silicon (\(\Delta E/E_{\rm rest}\sim 7\times 10^{-4}\)), and sulfur (\(\Delta E/E_{\rm rest}\sim 15\times 10^{-4}\)) ions, which are above the instrumental absolute energy accuracy (\(\Delta E/E_{\rm rest}\sim 3.3\times 10^{-4}\)). We considered several factors producing the redshift, such as the Doppler shift associated with the plasma flow velocity and the systemic velocity, the optical depth, and the gravitational redshift, and then concluded that the gravitational redshift is the major contributor to the observed redshift. This is the first gravitational redshift detection from a magnetic WD. The gravitational redshift provides us with a new method of the WD mass measurement, which estimates the WD mass to be \(M_{\rm WD}>0.9\,M_{\odot}\).
## Appendix
### Plasma flow model
An accretion plasma flow channelled by a magnetic field is modeled as a 1-dimensional flow. Fundamental equations of a 1-dimensional flow are the mass continuity equation:
\[\frac{{\rm d}}{{\rm d}z}(\rho v)=0, \tag{1}\]
the momentum equation:
\[\rho v\frac{{\rm d}v}{{\rm d}z}+\frac{{\rm d}P}{{\rm d}z}=\rho F, \tag{2}\]
and the energy equation:
\[\frac{{\rm d}}{{\rm d}z}\left[v\left(\frac{1}{2}\rho v^{2}+\frac{\gamma P}{ \gamma-1}\right)\right]=\rho vF-\varepsilon. \tag{3}\]
Here, \(\rho\) denotes the mass density, \(v\) the flow velocity, \(P\) the pressure, \(F\) an external force, \(\varepsilon\) the radiative cooling rate, and \(\gamma\) the adiabatic index of 5/3. The integral form of equation 1 is
\[\rho v=a, \tag{4}\]
where \(a\) is called the specific accretion rate, that is, the accretion rate per unit area. The simultaneous equations 2, 3 and 4 are solved under initial conditions derived from the strong-shock jump conditions given by the Rankine-Hugoniot relations with the free-fall velocity:
\[v_{0} = 0.25\sqrt{2GM_{\rm WD}/(R_{\rm WD}+h)}, \tag{5}\] \[\rho_{0} = \frac{a}{v_{0}},\] (6) \[P_{0} = 3av_{0},\] (7) \[T_{0} = 3\frac{\mu m_{\rm H}}{k}v_{0}^{2}, \tag{8}\]
where \(M_{\rm WD}\) is the WD mass, \(R_{\rm WD}\) is the WD radius, \(G\) is the gravitational constant, and \(h\) is the shock height. The equation of state for the ideal gas is used:
\[P=\frac{\rho kT}{\mu m_{\rm H}}. \tag{9}\]
A boundary condition of soft landing is also assumed at the WD surface:
\[v_{\rm WD}=0. \tag{10}\]
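To make the jump conditions concrete, the sketch below evaluates equations (5)-(8) for an assumed 0.6 \(M_{\odot}\) WD; the radius, mean molecular weight, and specific accretion rate are representative assumptions, not values fitted in this paper.

```python
import math

G, m_H, keV = 6.674e-8, 1.673e-24, 1.602e-9   # cgs units; keV in erg
M_sun = 1.989e33

# Assumed illustrative inputs:
M_wd = 0.6 * M_sun   # WD mass
R_wd = 8.7e8         # cm, a typical radius for a 0.6 Msun WD (assumption)
mu   = 0.615         # mean molecular weight, fully ionised solar plasma
a    = 1.0           # specific accretion rate, g cm^-2 s^-1

v0   = 0.25 * math.sqrt(2.0 * G * M_wd / R_wd)  # eq. (5), with h << R_WD
rho0 = a / v0                                   # eq. (6)
P0   = 3.0 * a * v0                             # eq. (7)
kT0  = 3.0 * mu * m_H * v0**2                   # eq. (8), written as k*T0

print(f"v0 = {v0/1e5:.0f} km/s, kT0 = {kT0/keV:.0f} keV")  # ~1070 km/s, ~22 keV
```

The resulting shock temperature of ~22 keV is comparable to the 23-26 keV measured for RX J1712.6\(-\)2414, consistent with the 0.6 \(M_{\odot}\) lower bound invoked in the discussion.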
By assuming that the shock height is negligibly small compared with the WD radius (\(R_{\rm WD}\gg h\), i.e., the post-shock region is low enough), Bernoulli's principle requires that the sum of the pressure and the ram pressure be constant:
\[P_{0}+\rho_{0}v_{0}^{2}=P_{\rm WD}=\rm constant, \tag{11}\]
where \(P_{\rm WD}\) is the pressure at the WD surface at which the velocity is zero. From equations 4 and 7, we obtain
\[P_{0}=3\rho_{0}v_{0}^{2}. \tag{12}\]
In other words, the pressure is increased only by a factor of 4/3 through the entire flow. Thus, an isobaric flow is a good approximation. With the constant pressure, from equations 4 and 9, the following is derived:
\[\frac{v}{v_{0}}=\frac{\rho_{0}}{\rho}=\frac{T}{T_{0}}. \tag{13}\]
From equations 5, 8 and 13, the relation between the temperature and the velocity is written as
\[v=\frac{v_{0}}{T_{0}}T=\left(\frac{3\mu m_{\rm H}}{4k}\sqrt{\frac{2GM_{\rm WD }}{R_{\rm WD}}}\right)^{-1}T. \tag{14}\]
A result of this simple analytical model is shown in the dot-dash line (labelled with "isobaric") in Figures 2 and 3.
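A minimal numerical sketch of the isobaric relation (14) follows, reusing the assumed 0.6 \(M_{\odot}\) parameters from the previous snippet; the ~1 keV and ~2 keV line peak emissivity temperatures for H-like Mg and S are rough assumed values, not those of the plasma codes used in the paper.

```python
import math

G, m_H, k_B, keV = 6.674e-8, 1.673e-24, 1.381e-16, 1.602e-9  # cgs units
M_wd, R_wd, mu = 0.6 * 1.989e33, 8.7e8, 0.615                # assumptions as above

def v_isobaric(kT_in_keV):
    """Equation (14): plasma speed (km/s) at temperature T, isobaric approximation."""
    T = kT_in_keV * keV / k_B                       # temperature in kelvin
    coeff = (3.0 * mu * m_H / (4.0 * k_B)) * math.sqrt(2.0 * G * M_wd / R_wd)
    return T / coeff / 1.0e5

print(v_isobaric(1.0), v_isobaric(2.0))  # a few tens of km/s at Mg- and S-like T
```

The resulting speeds are of the same order as the \(\simeq 30\)-80 km s\({}^{-1}\) quoted in §4 and far below the measured \(\gtrsim 1\times 10^{2}\) km s\({}^{-1}\), illustrating why a gravitational redshift is required.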
Assuming only the Bremsstrahlung for the radiation cooling process, equations 2, 3, 4, and 13 give us a simple thermal function of \(z\):
\[\frac{T}{T_{0}}=\left(\frac{z}{h}\right)^{2/5}, \tag{15}\]
where
\[h=kT_{0}^{1/2}v_{0}/(\mu m_{\rm H}\Lambda_{\rm m,br}\rho_{0}) \tag{16}\]
and
\[\Lambda_{\rm m,br}\sim 7\times 10^{20}\,{\rm erg\,g^{-1}\,s^{-1}}. \tag{17}\]
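Taking the quoted \(\Lambda_{\rm m,br}\) at face value, equations (15)-(17) give an order-of-magnitude shock height; the sketch below reuses the assumed 0.6 \(M_{\odot}\) parameters from above and should be read as illustrative only.

```python
import math

G, m_H, k_B = 6.674e-8, 1.673e-24, 1.381e-16            # cgs units
M_wd, R_wd, mu, a = 0.6 * 1.989e33, 8.7e8, 0.615, 1.0   # assumptions as above
Lam = 7.0e20                                            # eq. (17), at face value

v0   = 0.25 * math.sqrt(2.0 * G * M_wd / R_wd)          # eq. (5)
rho0 = a / v0                                           # eq. (6)
T0   = 3.0 * mu * m_H * v0**2 / k_B                     # eq. (8), in kelvin

h = k_B * math.sqrt(T0) * v0 / (mu * m_H * Lam * rho0)  # eq. (16)
print(f"h ~ {h:.1e} cm, h/R_WD ~ {h/R_wd:.2f}")         # shock height << R_WD
```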
More realistic models have been numerically calculated. The cooling via line emission is included in the cooling function according to plasma codes (Yuasa et al., 2010; Hayashi and Ishida, 2014). Some models also involve the cyclotron cooling, which is usually important in polars (this radiation mainly appears in the infrared band), by assuming optically thick radiation (Wada et al., 1980; Cropper et al., 1999; Belloni et al., 2021). Note that the difference in the cooling function does not affect the relation between the temperature and velocity, as shown in equation 14. Moreover, in a real plasma flow with finite height, the gravitational force acts; to include this effect, \(F\) in equations 2 and 3 is represented as
\[F=\frac{GM_{\rm WD}}{(R_{\rm WD}+z)^{2}}. \tag{18}\]
The calculations of the realistic model are shown in Figures 2 and 3.
to the oscillator strength (\(f\)) of the corresponding transition (Jordan, 1978). Moreover, the oscillator strength of a K\({}_{\alpha 1}\) line is double that of the K\({}_{\alpha 2}\) line as
\[\frac{\tau_{{\rm K}_{\alpha 1}}}{\tau_{{\rm K}_{\alpha 2}}}=\frac{f_{{\rm K}_{ \alpha 1}}}{f_{{\rm K}_{\alpha 2}}}=2. \tag{20}\]
The ratio of the escape probability was calculated (Kastner & Kastner, 1990) for a situation in which the emitters (i.e., excited H-like ions) and absorbers (i.e., H-like ions in the ground state) are identically distributed, as in the post-shock plasma. Figure 4 depicts the escape probability ratio for lines with an optical depth ratio of 2, i.e., K\({}_{\alpha 1}\) and K\({}_{\alpha 2}\). In the optically thick limit, the escape probability ratio converges to approximately 0.5, and thus \(I_{{\rm K}_{\alpha 1}}/I_{{\rm K}_{\alpha 2}}\simeq 1\) from equation 19.
We fitted the power-law and 2-Gaussians model for Mg, Si, and S in the same manner as in §3 by using \(I_{{\rm K}_{\alpha 1}}/I_{{\rm K}_{\alpha 2}}=1\). Figure 5 shows the best-fit models with the data, and Table 2 shows the best-fit energy shift and velocity.
## Acknowledgement
The authors are grateful to all of the _Chandra_ project members for developing the instruments and their software, the spacecraft operations, and the calibrations. We also thank Maruzen-Yushodo Co. Ltd. and Xtra Inc. for their language editing service of our English. This research has made use of data obtained from the Chandra Data Archive and the Chandra Source Catalog, and software provided by the Chandra X-ray Center (CXC) in the application packages
Figure 4: The ratio of the escape probability of K\({}_{\alpha 1}\) line to K\({}_{\alpha 2}\) line as a function of optical depth of the K\({}_{\alpha 1}\) line (log\(\tau_{{\rm K}_{\alpha 1}}\)).
CIAO and Sherpa. Support for this work was provided by the National Aeronautics and Space Administration through Chandra Award Number GO9-20022A issued by the Chandra X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060. This work was also supported by JSPS Grant-in-Aid for Scientific Research (C) Grant Number JP21K03623.
|
2308.10567 | General procedure for free-surface recovery from bottom pressure
measurements: Application to rotational overhanging waves | A novel boundary integral approach for the recovery of overhanging (or not)
rotational (or not) water waves from pressure measurements at the bottom is
presented. The method is based on the Cauchy integral formula and on an
Eulerian--Lagrangian formalism to accommodate overturning free surfaces. This
approach eliminates the need to introduce {\em a priori} a special basis of
functions, providing thus a general means of fitting the pressure data and,
consequently, recovering the free surface. The effectiveness and accuracy of
the method are demonstrated through numerical examples. | Joris Labarbe, Didier Clamond | 2023-08-21T08:52:32Z | http://arxiv.org/abs/2308.10567v1 | General procedure for free-surface recovery from bottom pressure measurements: application to rotational overhanging waves
###### Abstract.
A novel boundary integral approach for the recovery of overhanging (or not) rotational (or not) water waves from pressure measurements at the bottom is presented. The method is based on the Cauchy integral formula and on an Eulerian-Lagrangian formalism to accommodate overturning free surfaces. This approach eliminates the need to introduce _a priori_ a special basis of functions, providing thus a general means of fitting the pressure data and, consequently, recovering the free surface. The effectiveness and accuracy of the method are demonstrated through numerical examples.
## 1. Introduction
Nonlinear water waves have been extensively studied since the mid-eighteenth century, when Euler introduced his eponymous equation. Since then, the modelling of surface gravity waves has attracted much attention, although scientists quickly realised their inherent complexity. This is one reason why physicists and mathematicians are still interested in the richness of this problem, making it an endless source of research in fluid dynamics. For instance, one crucial concern in environmental and coastal engineering is to accurately measure the sea surface in order to warn about the formation of large waves near coasts and oceanic routes. One solution to this problem is reconstructing the surface using a discrete set of measurements obtained from submerged pressure transducers [17]. This approach avoids the limitations of offshore buoy systems, which are susceptible to climatic disasters, located on moving boundaries, and lacking accuracy in wave height estimates [13]. Consequently, solving the nonlinear inverse problem associated with water waves is a timely request for building practical engineering apparatus that rely on _in situ_ data.
While the hydrostatic theory was originally used to tackle this problem when first formulated [1, 12], it was only recently that nonlinear waves began to be addressed [15]. Some works, such as [9], considered conformal mapping to successfully obtain reconstruction formulae. However, the numerical cost associated with solving these implicit relations renders conformal mapping inefficient when dealing with real physical data, i.e., when pressure data are given at known abscissas of the physical plane, not of the conformally mapped one. Actually, introducing some suitable holomorphic functions, it is possible to efficiently solve this nonlinear problem while staying within the physical plane for the recovery procedure [5, 3]. These studies demonstrated the convergence of the reconstruction process and the ability to recover waves of maximum amplitude [6]. Furthermore, the method was adapted to handle cases involving linear shear currents [7], remarkably recovering the unknown magnitude of vorticity alongside the wave profile and associated parameters.
The recovery method studied in [3, 5, 6, 7] has two shortcomings, however. First, it was developed for non-overhanging waves, so it cannot directly address such waves, which can occur,
in particular, in the presence of vorticity. Second, part of this reconstruction procedure is based on analytic continuation of some well-chosen eigenfunctions. If for some waves a "good" choice of eigenfunctions is clear _a priori_, this is not necessarily the case for complicated wave profiles. By "good" choice, we mean a set of eigenfunctions that provides an accurate representation of the free surface with a minimal number of modes and, at the same time, that can be easily computed. Even though Fourier series (or integrals) can be used in principle, a large number of eigenfunctions may be required to accurately represent the free surface. Since a surface recovery from bottom pressure is intrinsically ill-conditioned, using a large number of Fourier modes may lead to numerical issues. Another basis should then preferably be employed. For instance, for irrotational long waves propagating in shallow water (i.e., cnoidal waves), the use of Jacobian elliptic functions is effective [3, 5]. However, such alternative bases are not always easily guessed. Thus, it is desirable to derive a reconstruction procedure independent of a particular basis of eigenfunctions. Here, we propose a recovery methodology addressing these two shortcomings.
In this article, we derive a general approach to the surface recovery problem using a boundary integral formulation. While a similar approach was described by Da Silva and Peregrine [11] for computing rotational waves, to our knowledge it has never been applied in the context of a recovery procedure. The Cauchy integral formula, although singular by definition, proves advantageous from a numerical perspective. Dealing with singular kernels, the integral formulation easily allows for the consideration of arbitrary steady rotational surface waves (periodic, solitary, aperiodic) without the need to select a particular basis of functions to fit the pressure data. Additionally, this method facilitates the parametrisation of the surface profile, enabling the recovery of overhanging waves with arbitrarily large amplitudes.
Overturning profiles are known to be hard to compute accurately [18], which presents a significant challenge due to the ill-posed nature of our problem. Nevertheless, by considering a mixed Eulerian-Lagrangian description at the boundaries, we demonstrate the feasibility of the recovery process. To illustrate the robustness of our method, we present two examples of rotational steady waves: a periodic wave with an overturning surface and a solitary wave. In both scenarios, we achieve good agreement in recovering the wave elevation, albeit with the necessity of using refined grids in the regions of greatest surface variation. Refining a grid where needed is much easier than finding a better basis of functions, which is a feature of considerable practical interest.
The method presented in this study consists in the first boundary integral approach to solve this nonlinear recovery problem. We expect this work to pave the way for addressing even more challenging configurations, including the extension to three-dimensional settings, which will inevitably involve Green functions in the integral kernels.
The paper is organised as follows. The mathematical model and relations of interest are introduced in section 2, with an Eulerian description of motion. In order to handle overhanging waves, their Lagrangian counterparts are introduced in section 4. In section 3, we derive an equation for the free surface, allowing us to compute reference solutions for testing our recovery procedure. This procedure is described in the subsequent section 5. Numerical implementation and examples are provided in section 6. Finally, in section 7, we discuss our general results, as well as the future extensions and implications of this work.
## 2. Mathematical settings
We give here the classical formulation of our water-wave problem in Eulerian variables. Physical assumptions and notations being identical to that of Clamond et al. [7], interested readers should refer to this paper for further details.
### Equations of motion and useful relations
We consider the steady two-dimensional motion of an incompressible inviscid fluid with constant vorticity \(\omega\). The fluid is bounded above and below by impermeable free surface and solid horizontal bed, respectively. Our focus lies on traveling waves of permanent form that propagate with a constant phase speed \(c\) and wavenumber \(k\) (\(k=0\) for solitary and more general aperiodic waves). We adopt a Galilean frame of reference moving with the wave, thus ensuring that the velocity field appears independent of time for the observer. Consequently, we can express the fluid domain, denoted as \(\Omega\), as the set of points \((x,y)\) (Cartesian coordinates) satisfying \(x\in\mathds{R}\) and \(-d\leqslant y\leqslant\eta(x)\), where \(\eta(x)\) represents the surface elevation from rest and \(d\) is the mean water depth. Thus, the mean water level is located at \(y=0\), such that
\[\left\langle\eta\right\rangle\stackrel{{\mathrm{def}}}{{=}}\frac {k}{2\pi}\int_{-\pi/k}^{\pi/k}\eta(x)\mathrm{d}x=0, \tag{1}\]
where \(\left\langle\cdot\right\rangle\) denotes the Eulerian averaging operator (c.f. Figure 1).
In this setting, the velocity field \(\mathbf{u}=(u,v)\) and pressure \(p\) (divided by the density and relative to the reference value at the surface) are governed by the stationary Euler equations
\[\mathbf{\nabla}\mathbf{\cdot}\mathbf{u}=0,\qquad\mathbf{u}\mathbf{\cdot}\mathbf{\nabla}\mathbf{u}+\mathbf{ \nabla}p+\mathbf{g}=0,\] (2 \[a,b\] )
where \(\mathbf{g}=(0,g)\), with \(g>0\) the acceleration due to gravity acting downwards. The equations of motion (2) are supplemented with kinematic and dynamic conditions at the upper and lower boundaries
\[v-u\,\eta_{x} =0\quad\text{at}\quad y=\eta(x), \tag{3a}\] \[p =0\quad\text{at}\quad y=\eta(x),\] (3b) \[\mathbf{u}\mathbf{\cdot}\mathbf{n} =0\quad\text{at}\quad y=-d, \tag{3c}\]
where \(\mathbf{n}\) is the outward normal vector (see Figure 1 for a sketch of this configuration).
Since our physical system is two-dimensional _per se_, we introduce a scalar stream function \(\psi\) such that \(u=\psi_{y}\) and \(v=-\psi_{x}\), so (2\(a\)) is satisfied identically and \(\omega=-\psi_{xx}-\psi_{yy}\) is assumed constant. Thus, the Euler equations can be integrated into the Bernoulli equation
\[2(p+gy)+u^{2}+v^{2}=B_{\mathrm{s}}-2\omega(\psi-\psi_{\mathrm{s}})\stackrel{{ \mathrm{def}}}{{=}}B(\psi), \tag{4}\]
for a constant \(B_{\mathrm{s}}\)[7]. Alternatively, we can define a Bernoulli constant at the bottom \(B_{\mathrm{b}}\stackrel{{\mathrm{def}}}{{=}}B_{\mathrm{s}}-2 \omega(\psi_{\mathrm{b}}-\psi_{\mathrm{s}})\). In (4), as in the rest of the article, subscripts's' and 'b' denote that the fields are evaluated, respectively, at the surface and at the bottom. The free surface and the seabed being both streamlines, \(\psi_{\mathrm{s}}\) and \(\psi_{\mathrm{b}}\) are constant in this problem.
Because here \(p_{\mathrm{s}}=0\) (constant atmospheric pressure set to zero without loss of generality), we have the relations relating some average bottom quantities and parameters [7]
\[\left\langle p_{\mathrm{b}}\right\rangle=gd,\qquad\left\langle u_{\mathrm{b}}^ {2}\right\rangle=B_{\mathrm{b}},\qquad\left\langle u_{\mathrm{b}}-\left(1+ \eta_{x}^{2}\right)u_{\mathrm{s}}\right\rangle=\omega d.\] (5 \[a,b,c\]
Although we decide to set the reference frame as moving with the wave celerity, it is still useful to consider Stokes' first and second definitions of phase speed
\[c_{1}\stackrel{{\mathrm{def}}}{{=}}-\left\langle u_{ \mathrm{b}}\right\rangle=-\omega d-\left\langle\left(1+\eta_{x}^{2}\right)u_{ \mathrm{s}}\right\rangle, \tag{6}\] \[c_{2}\stackrel{{\mathrm{def}}}{{=}}-\left\langle \frac{1}{d}\int_{-d}^{\eta}u\mathrm{d}y\right\rangle=\frac{\psi_{\mathrm{b}}- \psi_{\mathrm{s}}}{d}=-\frac{\omega d}{2}-\frac{\omega\left\langle\eta^{2} \right\rangle}{2d}-\frac{\left\langle\left(1+\eta_{x}^{2}\right)h\,u_{\mathrm{ s}}\right\rangle}{d}, \tag{7}\]
where \(h\stackrel{{\mathrm{def}}}{{=}}\eta(x)+d\) is the local water depth.
### Holomorphic functions
The vorticity being constant, potential theory still holds when using a Helmholtz representation to subtract the contribution of the linear shear current [7]. Therefore, it is convenient to introduce the complex relative velocity
\[W(z)\stackrel{{\mathrm{def}}}{{=}}U(x,y)-\mathrm{i}V(x,y)=(y+d) \omega+u(x,y)-\mathrm{i}v(x,y), \tag{8}\]
that is holomorphic in \(\Omega\) for the complex coordinate \(z\stackrel{{\mathrm{def}}}{{=}}x+\mathrm{i}y\). (Obviously, the complex velocity \(w\stackrel{{\mathrm{def}}}{{=}}u-\mathrm{i}v\) is not holomorphic if \(\omega\neq 0\).) The relative complex velocity (8) is related to the relative complex potential \(F(z)\stackrel{{\mathrm{def}}}{{=}}\Phi(x,y)+\mathrm{i}\Psi(x,y)\) by \(W=\mathrm{d}F/\mathrm{d}z\), where \(U=\Phi_{x}=\Psi_{y}\) and \(V=\Phi_{y}=-\Psi_{x}\) (see Clamond et al. [7] for more details).
Following Clamond and Constantin [5], we introduce a complex "pressure" function as
\[\mathfrak{P}(z)\stackrel{{\mathrm{def}}}{{=}}gd+\tfrac{1}{2}B_{ \mathrm{s}}+\omega(\psi_{\mathrm{s}}-\psi_{\mathrm{b}})-\tfrac{1}{2}W(z)^{2}= gd+\tfrac{1}{2}B_{\mathrm{b}}-\tfrac{1}{2}W(z)^{2}, \tag{9}\]
that is holomorphic in the fluid domain \(\Omega\), its restriction to the flat bed \(y=-d\) having zero imaginary part and real part \(p_{\mathrm{b}}\). Thus, \(p_{\mathrm{b}}\) determines \(\mathfrak{P}\) uniquely throughout the fluid domain, i.e., \(\mathfrak{P}(z)=p_{\mathrm{b}}(z+\mathrm{i}d)\). Note that \(p\) introduced in (2) coincides with the real part of \(\mathfrak{P}\) only on \(y=-d\) because the former is not a harmonic function in the fluid domain [8, 10].
As for irrotational waves, it is useful [3, 4, 7] to introduce the anti-derivative of \(\mathfrak{P}(z)\)
\[\mathfrak{Q}(z)\stackrel{{\mathrm{def}}}{{=}}\int_{z_{0}}^{z} \left[\mathfrak{P}(z^{\prime})-gd\right]\mathrm{d}z^{\prime}=\int_{z_{0}}^{z} \tfrac{1}{2}\left[B_{\mathrm{b}}-W(z^{\prime})^{2}\right]\mathrm{d}z^{\prime}, \tag{10}\]
where \(z_{0}\) is an arbitrary constant. For the same abscissa \(x\), the functions \(\mathfrak{Q}\) at the free surface (i.e., \(\mathfrak{Q}_{\mathrm{s}}(x)\)) and at the bottom (i.e., \(\mathfrak{Q}_{\mathrm{b}}(x)\)) satisfy the relation
\[\mathfrak{Q}_{\mathrm{s}}(x)-\mathfrak{Q}_{\mathrm{b}}(x)=\int_{x-\mathrm{i}d }^{x+\mathrm{i}\eta(x)}\left[\mathfrak{P}(z)-gd\right]\mathrm{d}z=\frac{ \mathrm{i}h(x)B_{\mathrm{b}}}{2}-\int_{x-\mathrm{i}d}^{x+\mathrm{i}\eta(x)} \frac{W(z)^{2}}{2}\mathrm{d}z. \tag{11}\]
Figure 1. Definition sketch in the referential moving with the wave.
### Cauchy integral formula
In the complex \(z\)-plane, the boundaries are analytical curves defined by \(z_{\rm s}=x+{\rm i}\eta\) and \(z_{\rm b}=x-{\rm i}d\). For a holomorphic function \(\Xi(z)\), the Cauchy integral formula applied to the fluid domain \(\Omega\) (assuming non-intersecting and non-overturning seabed and free surface) yields
\[{\rm i}\vartheta\,\Xi(z)={\rm P.V.}\oint\frac{\Xi(z^{\prime})}{z^{\prime}-z}{ \rm d}z^{\prime}=\int_{-\infty}^{\infty}\frac{\Xi_{\rm b}^{\prime}{\rm d}x^{ \prime}}{z_{\rm b}^{\prime}-z}-\int_{-\infty}^{\infty}\frac{(1+{\rm i}\eta_{x} ^{\prime})\,\Xi_{\rm s}^{\prime}{\rm d}x^{\prime}}{z_{\rm s}^{\prime}-z}, \tag{12}\]
where \(\vartheta=\{2\pi,0,\pi\}\) respectively inside, outside and at the smooth boundary of the domain. We emphasise that, in this paper, all integrals must be taken in the sense of the Cauchy principal value (P.V.), even if it is not explicitly mentioned for brevity. When \({\rm Im}\{\Xi_{\rm b}\}=0\), the bottom boundary condition can be taken into account with the method of images, yielding
\[\Xi(z)=\frac{{\rm i}}{\vartheta}\int_{-\infty}^{\infty}\frac{(1+{\rm i}\eta_{ x}^{\prime})\,\Xi_{\rm s}^{\prime}{\rm d}x^{\prime}}{z_{\rm s}^{\prime}-z}- \frac{{\rm i}}{\vartheta}\int_{-\infty}^{\infty}\frac{(1-{\rm i}\eta_{x}^{ \prime})\,\bar{\Xi}_{\rm s}^{\prime}{\rm d}x^{\prime}}{\bar{z}_{\rm s}^{ \prime}-z-2{\rm i}d}, \tag{13}\]
where an overbar denotes the complex conjugation. Note that the formula (13) is valid in finite depth (provided that \({\rm Im}\{\Xi_{\rm b}\}=0\)), and in infinite depth if \(\Xi_{\rm b}\to 0\) as \(d\to\infty\). Examples of functions satisfying these conditions are \(\Xi=W+c_{1}\), \(\Xi=W^{2}-B_{\rm b}\) and \(\Xi=\mathfrak{P}-gd\), so (13) provides an expression for computing these functions in arbitrary depth.
### Integral formulations for periodic waves
For \(L\)-periodic waves (with \(L=2\pi/k\)), the kernel is repeated in the horizontal direction, along the interval \(\mathcal{I}=[0,L]\), leading to the Cauchy integral formula with Hilbert kernel
\[\Xi(z)=\frac{{\rm i}k}{2\vartheta}\int_{\mathcal{I}}\left[\cot\!\left(k\frac{ z_{\rm s}^{\prime}-z}{2}\right)\left(1+{\rm i}\eta_{x}^{\prime}\right)\Xi_{\rm s }^{\prime}-\cot\!\left(k\frac{z_{\rm b}^{\prime}-z}{2}\right)\Xi_{\rm b}^{ \prime}\right]{\rm d}x^{\prime}. \tag{14}\]
Alternatively, using the method of images, along with the identity (61), the Cauchy integral can be written
\[\Xi(z)=\frac{k}{\vartheta}\int_{\mathcal{I}}\Big{[}\,{\rm Li}_{0}\{{\rm e}^{{\rm i}k(z_{\rm s}^{\prime}-z)}\}\left(1+{\rm i}\eta_{x}^{\prime}\right)\Xi_{\rm s}^{\prime}+\,{\rm Li}_{0}\{{\rm e}^{{\rm i}k(z-\bar{z}_{\rm s}^{\prime}+2{\rm i}d)}\}\left(1-{\rm i}\eta_{x}^{\prime}\right)\bar{\Xi}_{\rm s}^{\prime}\Big{]}\,{\rm d}x^{\prime}\] \[\qquad\qquad+\pi\vartheta^{-1}\langle(1+{\rm i}\eta_{x})\Xi_{\rm s}+(1-{\rm i}\eta_{x})\bar{\Xi}_{\rm s}\rangle, \tag{15}\]
where \(\mathrm{Li}_{\nu}\) is the \(\nu\)th polylogarithm whose definition is given in appendix A along with useful relations. It is worth noticing that the last term on the right-hand side of (15) corresponds to the zeroth Fourier coefficient (not present when the holomorphic function \(\Xi(z)\) has zero mean over the wave period). Equation (15) can be rewritten using the identity (60), yielding
\[\Xi(z)=\frac{{\rm i}}{\vartheta}\int_{\mathcal{I}}\frac{\partial}{\partial z}\Big{[}\,{\rm Li}_{1}\{{\rm e}^{{\rm i}k(z_{\rm s}^{\prime}-z)}\}\left(1+{\rm i}\eta_{x}^{\prime}\right)\Xi_{\rm s}^{\prime}-\,{\rm Li}_{1}\{{\rm e}^{{\rm i}k(z-\bar{z}_{\rm s}^{\prime}+2{\rm i}d)}\}\left(1-{\rm i}\eta_{x}^{\prime}\right)\bar{\Xi}_{\rm s}^{\prime}\Big{]}\,{\rm d}x^{\prime}\] \[\qquad\qquad+2\pi\vartheta^{-1}\left\langle{\rm Re}\left\{\Xi_{\rm s}\right\}-\eta_{x}\,{\rm Im}\left\{\Xi_{\rm s}\right\}\right\rangle. \tag{16}\]
At the free surface (where \(\vartheta=\pi\), \(z=z_{\rm s}\) and \({\rm d}z_{\rm s}=(1+{\rm i}\eta_{x}){\rm d}x\)), carefully applying the Leibniz integral rule (c.f. formula (67) in appendix B) on the singular term in the integrand, equation (16) reduces to
\[(1+{\rm i}\eta_{x})\,\Xi_{\rm s}= \frac{{\rm i}}{2\pi}\frac{{\rm d}}{{\rm d}x}\int_{\mathcal{I}}\Big{[}\,{\rm Li}_{1}\{{\rm e}^{{\rm i}k(z_{\rm s}^{\prime}-z_{\rm s})}\}\left(1+{\rm i}\eta_{x}^{\prime}\right)\Xi_{\rm s}^{\prime}-\,{\rm Li}_{1}\{{\rm e}^{{\rm i}k(z_{\rm s}-\bar{z}_{\rm s}^{\prime}+2{\rm i}d)}\}\left(1-{\rm i}\eta_{x}^{\prime}\right)\bar{\Xi}_{\rm s}^{\prime}\Big{]}\,{\rm d}x^{\prime}\] \[+(1+{\rm i}\eta_{x})\,\langle{\rm Re}\left\{\Xi_{\rm s}\right\}-\eta_{x}\,{\rm Im}\left\{\Xi_{\rm s}\right\}\rangle\,. \tag{17}\]
It should be noted that, obviously, aperiodic equations can be obtained from the periodic ones by letting \(L\to\infty\) (i.e. \(k\to 0^{+}\)). Thus, from now on, we only consider periodic waves.
## 3. Lagrangian description
This section focuses on addressing the limitation of the Eulerian framework that hinders the computation of overhanging waves, which are characterised by multi-valued surfaces. While one option to address this challenge involves employing an arclength formulation, as elaborated by Vanden-Broeck [18], we have chosen to adopt a Lagrangian formalism in this study. One benefit of the Lagrangian approach is its inherent capability to concentrate collocation points near the wave crest, where they are most crucial [11].
Our approach is similar to that used by Da Silva and Peregrine [11], although we express the integral kernels in terms of polylogarithms and take the antiderivative of the Cauchy integral formula to remove the strong singularity from the kernels, following Clamond [4]. The resulting analytical expression is characterised by a weak logarithmic singularity, and it is suitable for calculating various types of waves, including solitary and periodic waves, overhanging or not.
Considering the (rotational) complex velocity \(w=u-\mathrm{i}v\) evaluated at the surface, let us introduce the holomorphic function \(\log\left(-w/\sqrt{gd}\right)=q-\mathrm{i}\theta\) and define accordingly
\[q_{\mathrm{s}} \stackrel{{\mathrm{def}}}{{=}}\mathrm{Re}\left\{ \log\left(-w_{\mathrm{s}}/\sqrt{gd}\right)\right\}=\frac{1}{2}\ln\left[(u_{ \mathrm{s}}^{2}+v_{\mathrm{s}}^{2})/(gd)\right],\] \[\theta_{\mathrm{s}} \stackrel{{\mathrm{def}}}{{=}}-\mathrm{Im}\left\{ \log\left(-w_{\mathrm{s}}/\sqrt{gd}\right)\right\}=-\operatorname{atan2} \left(v_{\mathrm{s}},-u_{\mathrm{s}}\right), \tag{18}\]
where we notably used Bernoulli's principle at the surface.
Exploiting the impermeability and isobarity of the free surface, one gets
\[\tan\theta_{\mathrm{s}}=\eta_{x}=\frac{v_{\mathrm{s}}}{u_{\mathrm{s}}}=\frac {\sigma v_{\mathrm{s}}}{\sqrt{B_{\mathrm{s}}-2g\eta-v_{\mathrm{s}}^{2}}}= \frac{\sqrt{B_{\mathrm{s}}-2g\eta-u_{\mathrm{s}}^{2}}}{\sigma u_{\mathrm{s}}}. \tag{19}\]
Hence, extracting \(u_{\mathrm{s}}\) and \(v_{\mathrm{s}}\) from the latter relations, we have
\[u_{\mathrm{s}}=\sigma\cos(\theta_{\mathrm{s}})\sqrt{B_{\mathrm{s}}-2g\eta}, \quad v_{\mathrm{s}}=\sigma\sin(\theta_{\mathrm{s}})\sqrt{B_{\mathrm{s}}-2g \eta}. \tag{20}\]
We now consider the Lagrangian description of motion, with \(t\) denoting the time, thus \(u_{\mathrm{s}}=\mathrm{d}x/\mathrm{d}t\) and \(v_{\mathrm{s}}=\mathrm{d}\eta/\mathrm{d}t\) (\(\mathrm{d}/\mathrm{d}t\) the temporal derivative following the motion). From the second expression in (20), we deduce that
\[\frac{\mathrm{d}}{\mathrm{d}t}\sqrt{B_{\mathrm{s}}-2g\eta}=-\sigma g\sin( \theta_{\mathrm{s}}), \tag{21}\]
and hence, considering a crest at \(t=0\) (where \(\eta(0)=a\) is the wave amplitude), we have
\[\sqrt{B_{\mathrm{s}}-2g\eta}=\mu-g\sigma\int_{0}^{t}\sin(\theta_{\mathrm{s}}^ {\prime})\,\mathrm{d}t^{\prime},\quad\mu\stackrel{{\mathrm{def}} }{{=}}\sqrt{B_{\mathrm{s}}-2ga}, \tag{22}\]
where \(\theta_{\mathrm{s}}^{\prime}\stackrel{{\mathrm{def}}}{{=}}\theta _{\mathrm{s}}(t^{\prime})\). Therefore, all quantities at the free surface can be expressed in terms of the surface angle \(\theta_{\mathrm{s}}\), using \(t\) as independent variable, e.g.
\[z_{\mathrm{s}}(t) =\sigma\int_{0}^{t}\left[\mu-g\sigma\int_{0}^{t^{\prime}}\sin( \theta_{\mathrm{s}}^{\prime\prime})\,\mathrm{d}t^{\prime\prime}\right]\exp \left(\mathrm{i}\theta_{\mathrm{s}}^{\prime}\right)\!\mathrm{d}t^{\prime}+ \mathrm{i}a, \tag{23}\] \[w_{\mathrm{s}}(t) =\frac{\mathrm{d}\bar{z}_{\mathrm{s}}}{\mathrm{d}t}=\sigma\left[ \mu-g\sigma\int_{0}^{t}\sin(\theta_{\mathrm{s}}^{\prime})\,\mathrm{d}t^{ \prime}\right]\exp\left(-\mathrm{i}\theta_{\mathrm{s}}\right)\!,\] (24) \[\eta(t) =\mathrm{Im}\,z_{\mathrm{s}}=\frac{B_{\mathrm{s}}}{2g}-\frac{1}{2 g}\left[\mu-g\sigma\int_{0}^{t}\sin(\theta_{\mathrm{s}}^{\prime})\,\mathrm{d}t^{ \prime}\right]^{2}. \tag{25}\]
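These reconstruction formulae are straightforward to implement by cumulative quadrature. Below is a minimal Python sketch (the angle history \(\theta_{\mathrm{s}}\) is an illustrative ansatz, not a solution of the governing equations) that also checks the mutual consistency of (23) and (25):

```python
import numpy as np

# Minimal sketch of the Lagrangian reconstruction (22)-(25): given a surface
# angle history theta_s(t) (illustrative ansatz, *not* a solution of (36)),
# all surface quantities follow from cumulative quadrature.
g, sigma, Bs, a = 9.81, -1.0, 10.0, 0.1      # hypothetical parameters
TL, N = 2.0, 2001
t = np.linspace(0.0, TL, N)
theta_s = 0.2 * np.sin(2 * np.pi * t / TL)   # crest at t = 0, theta_s(0) = 0

def cumtrapz0(f, t):
    """Cumulative trapezoidal integral, equal to 0 at t[0]."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
    return out

mu = np.sqrt(Bs - 2 * g * a)                                        # eq. (22)
speed = mu - g * sigma * cumtrapz0(np.sin(theta_s), t)              # = sqrt(Bs - 2 g eta)
z_s = sigma * cumtrapz0(speed * np.exp(1j * theta_s), t) + 1j * a   # eq. (23)
eta = Bs / (2 * g) - speed**2 / (2 * g)                             # eq. (25)
assert np.allclose(eta, z_s.imag, atol=1e-4)                        # (23) and (25) agree
```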
In the Eulerian description of motion, the wave period is constant and depends on the reference frame (moving at a constant phase speed) where one observes the fluid. On the other hand, in the Lagrangian context, the period \(T_{L}\) of the free surface differs from the Eulerian period due to the Stokes drift [14]. The Lagrangian period being such that \(\eta(t+T_{L})=\eta(t)\), we exploit expression (25) which, after some elementary algebra and simplifications, yields
\[\left[2\mu-g\sigma\int_{t}^{t+T_{L}}\sin(\theta_{\rm s}^{\prime})\,{\rm d}t^{ \prime}-2g\sigma\int_{0}^{t}\sin(\theta_{\rm s}^{\prime})\,{\rm d}t^{\prime} \right]\int_{t}^{t+T_{L}}\sin(\theta_{\rm s}^{\prime})\,{\rm d}t^{\prime}=0, \tag{26}\]
that is necessarily satisfied for all times if and only if
\[\int_{0}^{T_{L}}\sin(\theta_{\rm s}^{\prime})\,{\rm d}t^{\prime}=0, \tag{27}\]
thus defining the Lagrangian period \(T_{L}\).
## 4. Equations for the free surface
Since we do not have access to measurements of the bottom pressure for rotational waves, we must generate these data from numerical solutions of the exact equations. Thus, this section aims to derive a comprehensive formulation for the computation of surface waves using a boundary integral method. From these solutions, the bottom pressure is subsequently obtained and used as input for our surface recovery procedure.
### Eulerian formulation
From expressions (4) and (8), the complex (irrotational part of the) velocity at the surface is given explicitly by
\[W_{\rm s}=\omega h+\sigma\left(1-{\rm i}\eta_{x}\right)\sqrt{(B_{\rm s}-2g \eta)/(1+\eta_{x}^{2})}, \tag{28}\]
\(\sigma=\mp 1\) denoting waves propagating upstream or downstream, respectively. The parameter \(\sigma\) is introduced for convenience in order to characterise the (arbitrarily chosen) direction of the wave propagation in a 'fixed' frame of reference, i.e., \(\sigma=-1\) if the wave travels toward the increasing \(x\)-direction in this frame and, obviously, \(\sigma=+1\) if the wave travels toward the decreasing \(x\)-direction.
Considering the holomorphic function \(\Xi=W+c\) (\(c\) being an arbitrary definition of the phase speed), the left-hand side of (17) follows directly from (28)
\[\left(1+{\rm i}\eta_{x}\right)\Xi_{\rm s}=\left(\omega h+c\right)\left(1+{\rm i }\eta_{x}\right)+\sigma\sqrt{(B_{\rm s}-2g\eta)(1+\eta_{x}^{2})}, \tag{29}\]
where the radicand is non-negative since \(B_{\rm s}\geqslant\max(2g\eta)\) for all waves. Substituting expression (29) in (17), the integral term splits into several contributions as
\[\omega\eta\left(1+{\rm i}\eta_{x}\right)= \frac{{\rm i}}{2\pi}\frac{{\rm d}}{{\rm d}x}\left[\int_{\mathcal{C}}\,{\rm Li}_{1}\{{\rm e}^{{\rm i}k(z_{\rm s}^{\prime}-z_{\rm s})}\}\left(\omega h^{\prime}+c\right){\rm d}z_{\rm s}^{\prime}+\int_{\mathcal{C}}\,{\rm Li}_{1}\{{\rm e}^{{\rm i}k(z_{\rm s}-\bar{z}_{\rm s}^{\prime}+2{\rm i}d)}\}\left(\omega h^{\prime}+c\right){\rm d}\bar{z}_{\rm s}^{\prime}\right.\] \[+\left.\sigma\int_{\mathcal{I}}\Big{[}\,{\rm Li}_{1}\Big{(}{\rm e}^{{\rm i}k(z_{\rm s}^{\prime}-z_{\rm s})}\Big{)}-\,{\rm Li}_{1}\Big{(}{\rm e}^{{\rm i}k(z_{\rm s}-\bar{z}_{\rm s}^{\prime}+2{\rm i}d)}\Big{)}\Big{]}\sqrt{(B_{\rm s}-2g\eta^{\prime})(1+\eta_{x}^{\prime 2})}\,{\rm d}x^{\prime}\right]\] \[+(1+{\rm i}\eta_{x})\left\langle u_{\rm s}\left(1+\eta_{x}^{2}\right)\right\rangle-\sigma\sqrt{(B_{\rm s}-2g\eta)(1+\eta_{x}^{2})}, \tag{30}\]
where \(\mathcal{C}\) represents the free surface path, i.e., we use the brief notations \(\int_{\mathcal{C}}(\cdots){\rm d}z_{\rm s}^{\prime}\stackrel{{ \rm def}}{{=}}\int_{\mathcal{I}}(\cdots)(1+{\rm i}\eta_{x}^{\prime}){\rm d }x^{\prime}\) and \(\int_{\mathcal{I}}(\cdots){\rm d}x^{\prime}\stackrel{{\rm def}}{{= }}\int_{0}^{L}(\cdots){\rm d}x^{\prime}\). The first term inside the square bracket of
the right-hand side of (30) reduces to
\[J_{1} \stackrel{{\rm def}}{{=}}\int_{\mathcal{C}}\,\mathrm{Li} _{1}\Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}^{\prime}-z_{\mathrm{s}})}\Big{)} \left(\omega h^{\prime}+c\right)\mathrm{d}z_{\mathrm{s}}^{\prime}=\omega\int_{ \mathcal{C}}\,\mathrm{Li}_{1}\Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}^{ \prime}-z_{\mathrm{s}})}\Big{)}\,\eta^{\prime}\,\mathrm{d}z_{\mathrm{s}}^{\prime}\] \[=-\frac{\omega}{k}\int_{\mathcal{I}}\,\mathrm{Li}_{2}\Big{(} \mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}^{\prime}-z_{\mathrm{s}})}\Big{)}\, \mathrm{d}x^{\prime},\]
where we exploited the property [4]
\[\int_{\mathcal{C}}\,\mathrm{Li}_{\nu}\Big{(}\mathrm{e}^{\mathrm{i}kz_{\mathrm{ s}}}\Big{)}\,\mathrm{d}z_{\mathrm{s}}=0. \tag{31}\]
Similarly, we have
\[J_{2}\stackrel{{\rm def}}{{=}}\int_{\mathcal{C}}\, \mathrm{Li}_{1}\Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}-\bar{z}_{\mathrm{s }}^{\prime}+2\mathrm{i}d)}\Big{)}\left(\omega h^{\prime}+c\right)\mathrm{d} \bar{z}_{\mathrm{s}}^{\prime}=-\frac{\omega}{k}\int_{\mathcal{I}}\,\mathrm{Li} _{2}\Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}-\bar{z}_{\mathrm{s}}^{ \prime}+2\mathrm{i}d)}\Big{)}\,\mathrm{d}x^{\prime}. \tag{32}\]
Finally, after integrating the whole expression (30) and retaining the imaginary part only, we obtain the equation for the free surface
\[K= \,\frac{\omega\eta^{2}}{2}-\eta\left\langle u_{\mathrm{s}}(1+\eta _{x}^{2})\right\rangle+\frac{\omega}{2\pi k}\int_{\mathcal{I}}\mathrm{Re}\{ \mathcal{L}_{2}\}\,\mathrm{d}x^{\prime}\] \[-\frac{\sigma}{2\pi}\int_{\mathcal{I}}\mathrm{Re}\{\mathcal{L}_{1 }\}\sqrt{(B_{\mathrm{s}}-2g\eta^{\prime})(1+\eta_{x}^{\prime 2})}\,\mathrm{d}x^{ \prime}, \tag{33}\]
where \(K\) is an integration constant obtained by enforcing the mean-level condition (1), i.e.,
\[K=\left\langle\frac{\omega\eta^{2}}{2}+\frac{\omega}{2\pi k}\int_{\mathcal{I}} \mathrm{Re}\{\mathcal{L}_{2}\}\,\mathrm{d}x^{\prime}\,-\frac{\sigma}{2\pi} \int_{\mathcal{I}}\mathrm{Re}\{\mathcal{L}_{1}\}\sqrt{(B_{\mathrm{s}}-2g\eta^ {\prime})(1+\eta_{x}^{\prime 2})}\,\mathrm{d}x^{\prime}\right\rangle. \tag{34}\]
From now on, the same notation \(K\) is used to denote integration constants in the different surface recovery formulas. The exact values of these constants are obtained, as above, by enforcing the condition (1). Moreover, we have introduced, for brevity, the notation for the kernels
\[\mathcal{L}_{\nu}\stackrel{{\rm def}}{{=}}\,\mathrm{Li}_{\nu} \Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}^{\prime}-z_{\mathrm{s}})}\Big{)} -\,\mathrm{Li}_{\nu}\Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}-\bar{z}_{ \mathrm{s}}^{\prime}+2\mathrm{i}d)}\Big{)}\,. \tag{35}\]
Equation (33) is a nonlinear integro-differential equation for the computation of the surface elevation \(\eta\). Once \(\eta\) is obtained by solving (33) numerically, one can compute the corresponding bottom pressure. This bottom pressure can then be considered as "experimental data" to illustrate the reconstruction procedure described below. This is necessary when such data are not available from physical measurement, as is the case with rotational waves.
### Lagrangian formulation
Expression (27) provides an implicit definition of \(T_{L}\) if the wavelength \(L\) is fixed _a priori_. It is now of interest to convert our formula for the surface elevation (33) using the Lagrangian description introduced previously (the variable of interest now becomes \(\theta_{\mathrm{s}}\)). After some simplifications, we obtain
\[\frac{\omega\sigma}{2}\eta^{2}-\frac{k\eta}{2\pi}\int_{0}^{T_{L}} (B_{\mathrm{s}}-2g\eta)\mathrm{d}t-\frac{1}{2\pi}\int_{0}^{T_{L}}\mathrm{Re}\, \{\mathcal{L}_{1}\}(B_{\mathrm{s}}-2g\eta^{\prime})\mathrm{d}t^{\prime}\] \[\qquad+\frac{\omega\sigma}{2\pi k}\int_{0}^{T_{L}}\mathrm{Re}\, \{\mathcal{L}_{2}\}\cos\!\left(\theta_{\mathrm{s}}^{\prime}\right)\sqrt{B_{ \mathrm{s}}-2g\eta^{\prime}}\,\mathrm{d}t^{\prime}=K, \tag{36}\]
where \(K\) is recovered using condition (1) in the same way as to obtain (34).
The computation of (36) involves weak logarithmic singularities in the kernel of the \(\mathcal{L}_{1}\) operator. In the numerical implementation, we use a similar approach to the one presented by Clamond [4], by subtracting the regular part of the operator. Thence, we obtain an explicit expression for the regularized finite integral as
\[\int_{0}^{T_{L}}\operatorname{Re}\left\{\mathcal{L}_{1}\right\}(B_ {\mathrm{s}}-2g\eta^{\prime})\mathrm{d}t^{\prime}= -2g\int_{0}^{T_{L}}\operatorname{Re}\left\{\mathcal{L}_{1}\right\} (\eta^{\prime}-\eta)\mathrm{d}t^{\prime}\] \[+(B_{\mathrm{s}}-2g\eta)\int_{0}^{T_{L}}\operatorname{Re}\left\{ \mathcal{L}_{1}-\widehat{\mathcal{L}}_{1}\right\}\mathrm{d}t^{\prime}, \tag{37}\]
where we introduced
\[\widehat{\mathcal{L}}_{\nu}\stackrel{{\mathrm{def}}}{{=}}\,\mathrm{Li}_{\nu}\!\big{(}\mathrm{e}^{\mathrm{i}\tau(t^{\prime}-t)}\big{)}-\,\mathrm{Li}_{\nu}\!\big{(}\mathrm{e}^{\mathrm{i}\tau(t-t^{\prime})}\mathrm{e}^{-2kd}\big{)}, \tag{38}\]
with \(\tau\stackrel{{\mathrm{def}}}{{=}}2\pi/T_{L}\). Considering \(t\to t^{\prime}\) in both integrands in (37), we have
\[\lim_{t\to t^{\prime}}\operatorname{Re}\left\{\mathcal{L}_{1}\right\}(\eta- \eta^{\prime}) =0, \tag{39}\]
\[\lim_{t\to t^{\prime}}\operatorname{Re}\left\{\mathcal{L}_{1}-\widehat{ \mathcal{L}}_{1}\right\} =\log\!\left[\left(\frac{1-\mathrm{e}^{-2kh}}{1-\mathrm{e}^{-2kd}} \right)\frac{\tau/k}{\sqrt{B_{\mathrm{s}}-2g\eta}}\right]. \tag{40}\]
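The closed-form limit (40) is easy to verify numerically. In the sketch below (a verification under stated assumptions, not part of the solver), the surface is generated from an illustrative \(\theta_{\mathrm{s}}\) through the Lagrangian parametrisation (22)–(23), the standard closed form \(\mathrm{Li}_{1}(z)=-\log(1-z)\) is used for the kernels, and \(\mathcal{L}_{1}-\widehat{\mathcal{L}}_{1}\) evaluated at two adjacent samples is compared with (40):

```python
import numpy as np

# Numerical sanity check of the coincidence limit (40); illustrative theta_s,
# kernels built from Li_1(z) = -log(1 - z) on the principal branch.
g, sigma, Bs, a, d, k = 9.81, -1.0, 12.0, 0.1, 1.0, 1.0
TL, N = 2.0, 20_001
t = np.linspace(0.0, TL, N)
theta = 0.3 * np.sin(2 * np.pi * t / TL)

cum = lambda f: np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))])
speed = np.sqrt(Bs - 2 * g * a) - g * sigma * cum(np.sin(theta))   # eq. (22)
z = sigma * cum(speed * np.exp(1j * theta)) + 1j * a               # eq. (23)
eta, h, tau = z.imag, z.imag + d, 2 * np.pi / TL

Li1 = lambda w: -np.log(1 - w)
i, j = N // 3, N // 3 + 1                            # t' -> t: adjacent samples
L1 = (Li1(np.exp(1j * k * (z[j] - z[i])))
      - Li1(np.exp(1j * k * (z[i] - np.conj(z[j]) + 2j * d)))).real   # eq. (35)
L1_hat = (Li1(np.exp(1j * tau * (t[j] - t[i])))
          - Li1(np.exp(1j * tau * (t[i] - t[j]) - 2 * k * d))).real   # eq. (38)
limit = np.log((1 - np.exp(-2 * k * h[i])) / (1 - np.exp(-2 * k * d))
               * (tau / k) / np.sqrt(Bs - 2 * g * eta[i]))            # eq. (40)
print(L1 - L1_hat, limit)                            # agree up to O(t' - t)
```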
The numerical computation of Lagrangian surface waves is thus done by solving the nonlinear expression (36), for a given wave amplitude \(\left.\eta\right|_{t=0}=a\). The wave period \(T_{L}\), as well as the Bernoulli constant \(B_{\mathrm{s}}\), are both obtained from the Lagrangian counterpart of (1) and the requirement that \(\int_{0}^{T_{L}}\mathrm{d}z_{\mathrm{s}}=2\pi\sigma/k\). Moreover, since the abscissa \(x(t)\) is given explicitly from the definition of the wave profile \(\eta(t)\) through the real part of formula (23), we only have to solve expression (36) for \(\eta\) -- and not for both \(x\) and \(\eta\), as done previously by Da Silva and Peregrine [11] and by Vanden-Broeck [18] -- thus allowing us to further reduce the numerical cost.
## 5. Surface recovery from bottom pressure
Building upon the last three sections, we propose a nonlinear integral equation to recover the surface elevation in terms of a given measurement of the pressure \(p_{\mathrm{b}}(x)\) at the seabed. This 'measurement' is generated here numerically by solving expression (36) and, subsequently, computing the pressure from the surface profile (exploiting the boundary integrals and the Bernoulli equation).
The principal benefit in establishing an integral formulation lies in the fact that no specific eigenfunctions are needed to fit the pressure data. In fact, our derived equation remains applicable regardless of the nature of the wave under consideration, be it periodic or not, overhanging or not. We reemphasise here that, although the equations consider \((2\pi/k)\)-periodic waves, their aperiodic counterparts are easily obtained by letting \(k\to 0^{+}\).
### Eulerian boundary integral formulation
Let us consider the Cauchy integral formula (17) (without the method of images) for the holomorphic function \(\Xi(z)=\mathfrak{P}(z)-gd\). It yields
\[\mathfrak{P}-gd= \frac{k}{\vartheta}\int_{\mathcal{I}}\Big{[}\operatorname{Li}_{0} \!\left(\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}^{\prime}-z)}\right)(1+ \mathrm{i}\eta_{x}^{\prime})(\mathfrak{P}_{\mathrm{s}}^{\prime}-gd)- \operatorname{Li}_{0}\!\left(\mathrm{e}^{\mathrm{i}k(z_{\mathrm{b}}^{\prime} -z)}\right)(\mathfrak{P}_{\mathrm{b}}^{\prime}-gd)\Big{]}\,\mathrm{d}x^{\prime}\] \[+\langle(1+\mathrm{i}\eta_{x})(\mathfrak{P}_{\mathrm{s}}-gd)-( \mathfrak{P}_{\mathrm{b}}^{\prime}-gd)\rangle. \tag{41}\]
In order to simplify the latter expression, we exploit both definitions of \(\mathfrak{P}_{\mathrm{b}}\stackrel{{\mathrm{def}}}{{=}}p_{\mathrm{b}}(x)\) and \(\mathrm{d}\mathfrak{Q}_{\mathrm{s}}/\mathrm{d}x=(\mathfrak{P}_{\mathrm{s}}-gd)(1+\mathrm{i}\eta_{x})\) (we recall that \(\mathfrak{Q}_{\mathrm{s}}\) is a periodic function). Hence, expression
(41) can be rewritten
\[\mathfrak{P}-gd=\frac{\mathrm{i}}{\vartheta}\int_{\mathcal{I}}\frac{\partial}{\partial z}\left[\,\mathrm{Li}_{1}\!\left(\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}^{\prime}-z)}\right)(1+\mathrm{i}\eta_{x}^{\prime})(\mathfrak{P}_{\mathrm{s}}^{\prime}-gd)-\,\mathrm{Li}_{1}\!\left(\mathrm{e}^{\mathrm{i}k(z_{\mathrm{b}}^{\prime}-z)}\right)(p_{\mathrm{b}}^{\prime}-gd)\right]\mathrm{d}x^{\prime}. \tag{42}\]
Before proceeding, let us introduce the compact notations for the polylogarithmic kernels
\[\mathcal{K}_{\nu}\stackrel{{\mathrm{def}}}{{=}}\mathrm{Li}_{\nu} \Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{s}}^{\prime}-z_{\mathrm{s}})}\Big{)} \quad\text{and}\quad\mathcal{J}_{\nu}\stackrel{{\mathrm{def}}}{{=}} \mathrm{Li}_{\nu}\Big{(}\mathrm{e}^{\mathrm{i}k(z_{\mathrm{b}}^{\prime}-z_{ \mathrm{s}})}\Big{)}\,. \tag{43}\]
We now evaluate expression (42) at the free surface, reducing it to
\[(\mathfrak{P}_{\mathrm{s}}-gd)(1+\mathrm{i}\eta_{x})=\frac{\mathrm{i}}{2\pi} \frac{\mathrm{d}}{\mathrm{d}x}\int_{\mathcal{I}}\left[\mathcal{K}_{1}(1+ \mathrm{i}\eta_{x}^{\prime})(\mathfrak{P}_{\mathrm{s}}^{\prime}-gd)-\mathcal{ J}_{1}(\mathfrak{P}_{\mathrm{b}}^{\prime}-gd)\right]\mathrm{d}x^{\prime}. \tag{44}\]
As one can notice, the left-hand side of (44) is the \(x\)-derivative of \(\mathfrak{Q}_{\mathrm{s}}\). For that reason, we first integrate the whole expression over the \(x\)-coordinate and then split the contributions from the surface and the bottom
\[\mathfrak{Q}_{\mathrm{s}}=\frac{\mathrm{i}}{2\pi}\int_{\mathcal{I}}\mathcal{K }_{1}(1+\mathrm{i}\eta_{x}^{\prime})(\mathfrak{P}_{\mathrm{s}}^{\prime}-gd) \mathrm{d}x^{\prime}-\frac{\mathrm{i}}{2\pi}\int_{\mathcal{I}}\mathcal{J}_{1}( p_{\mathrm{b}}^{\prime}-gd)\mathrm{d}x^{\prime}+K, \tag{45}\]
for a constant of integration \(K\), also obtained by applying condition (1).
The next step is to decompose the integrand in the first integral of (45). To do so, we merely replace the complex velocity by its expression (8) and exploit the identity (31) to cancel out some constant terms, i.e.
\[\int_{\mathcal{C}}\mathcal{K}_{1}(\mathfrak{P}_{\mathrm{s}}^{ \prime}-gd)\mathrm{d}z_{\mathrm{s}}^{\prime}= -\frac{1}{2}\int_{\mathcal{C}}\mathcal{K}_{1}W_{\mathrm{s}}^{ \prime 2}\mathrm{d}z_{\mathrm{s}}^{\prime}=-\frac{1}{2}\int_{\mathcal{C}} \mathcal{K}_{1}\left[\omega h^{\prime}+(1-\mathrm{i}\eta_{x}^{\prime})\sqrt{ \frac{(B_{\mathrm{s}}-2g\eta^{\prime})}{(1+\eta_{x}^{\prime 2})}}\right]^{2} \mathrm{d}z_{\mathrm{s}}^{\prime}\] \[= -\frac{\omega^{2}}{2}\int_{\mathcal{C}}\mathcal{K}_{1}h^{\prime 2} \mathrm{d}z_{\mathrm{s}}^{\prime}-\omega\sigma\int_{\mathcal{C}}\mathcal{K}_{ 1}h^{\prime}(1-\mathrm{i}\eta_{x}^{\prime})\sqrt{\frac{(B_{\mathrm{s}}-2g \eta^{\prime})}{(1+\eta_{x}^{\prime 2})}}\mathrm{d}z_{\mathrm{s}}^{\prime}\] \[-\frac{1}{2}\int_{\mathcal{C}}\mathcal{K}_{1}(1-\mathrm{i}\eta_{ x}^{\prime})^{2}\frac{(B_{\mathrm{s}}-2g\eta^{\prime})}{(1+\eta_{x}^{\prime 2})} \mathrm{d}z_{\mathrm{s}}^{\prime}. \tag{46}\]
After some algebraic manipulations, it further reduces to
\[\int_{\mathcal{C}}\mathcal{K}_{1}(\mathfrak{P}_{\mathrm{s}}^{ \prime}-gd)\mathrm{d}z_{\mathrm{s}}^{\prime}= -\frac{\omega^{2}}{2}\int_{\mathcal{C}}\mathcal{K}_{1}\eta^{\prime 2} \mathrm{d}z_{\mathrm{s}}^{\prime}-\omega^{2}d\int_{\mathcal{C}}\mathcal{K}_{1} \eta^{\prime}\mathrm{d}z_{\mathrm{s}}^{\prime}\] \[-\frac{1}{2}\int_{\mathcal{I}}\mathcal{K}_{1}(B_{\mathrm{s}}-2g \eta^{\prime})(1-\mathrm{i}\eta_{x}^{\prime})\mathrm{d}x^{\prime}\] \[-\omega\sigma\int_{\mathcal{I}}\mathcal{K}_{1}h^{\prime}\sqrt{(B_{ \mathrm{s}}-2g\eta^{\prime})(1+\eta_{x}^{\prime 2})}\mathrm{d}x^{\prime}\] \[= -\frac{\mathrm{i}\omega^{2}}{k}\int_{\mathcal{I}}\mathcal{K}_{2} \eta^{\prime}\eta_{x}^{\prime}\mathrm{d}x^{\prime}+\frac{\omega^{2}d+g}{k}\int_ {\mathcal{I}}\mathcal{K}_{2}\mathrm{d}x^{\prime}\] \[-\int_{\mathcal{I}}\mathcal{K}_{1}(B_{\mathrm{s}}-2g\eta^{\prime })\mathrm{d}x^{\prime}\] \[-\omega\sigma\int_{\mathcal{I}}\mathcal{K}_{1}h^{\prime}\sqrt{(B_ {\mathrm{s}}-2g\eta^{\prime})(1+\eta_{x}^{\prime 2})}\mathrm{d}x^{\prime}. \tag{47}\]
Substituting (47) into (45), the imaginary part yields the Eulerian integral formulation for the surface recovery
\[2\pi\operatorname{Im}\{\mathfrak{Q}_{\mathrm{s}}\}= \frac{\omega^{2}}{2k}\int_{\mathcal{I}}\operatorname{Im}\{ \mathcal{K}_{2}\}(\eta^{\prime 2})_{x}\mathrm{d}x^{\prime}-\int_{\mathcal{I}} \operatorname{Re}\{\mathcal{K}_{1}\}(B_{\mathrm{s}}-2g\eta^{\prime})\mathrm{d}x ^{\prime}\] \[+\frac{\omega^{2}d+g}{k}\int_{\mathcal{I}}\operatorname{Re}\{ \mathcal{K}_{2}\}\mathrm{d}x^{\prime}-\int_{\mathcal{I}}\operatorname{Re}\{ \mathcal{J}_{1}\}(\mathfrak{P}_{\mathrm{b}}^{\prime}-gd)\mathrm{d}x^{\prime}\] \[-\omega\sigma\int_{\mathcal{I}}\operatorname{Re}\{\mathcal{K}_{1 }\}h^{\prime}\sqrt{(B_{\mathrm{s}}-2g\eta^{\prime})(1+\eta_{x}^{\prime 2})} \mathrm{d}x^{\prime}-2\pi\operatorname{Im}\{K\}, \tag{48}\]
where \(K\) is the same constant as in (45).
Equation (48) is a nonlinear integral equation for the free surface recovery from the bottom pressure. Being strictly Eulerian, this equation is not suitable for overhanging waves. For the latter, one can proceed as follows.
### Hybrid formulation
Equation (48) involves integrals at the free surface and at the bottom. In practice, the bottom pressure is given at some known abscissa \(x\), so the bottom integral must be kept in Eulerian form. However, Eulerian integrals are not suitable for overhanging waves, so we rewrite surface integrals in their Lagrangian counterparts. Doing so, the inverse problem is described by a mixed Eulerian-Lagrangian formalism.
Thus, rewriting the surface integrals in (48) in the Lagrangian description while keeping the bottom integral in Eulerian form, one gets the general expression for the surface recovery
\[2\pi\operatorname{Im}\{\mathfrak{Q}_{\mathrm{s}}\}= \frac{\omega^{2}}{k}\int_{0}^{T_{L}}\left[k^{-1}\operatorname{Re} \{\mathcal{K}_{3}\}+\left(h^{\prime}+g\omega^{-2}\right)\operatorname{Re}\{ \mathcal{K}_{2}\}\right]\cos(\theta_{\mathrm{s}}^{\prime})\sqrt{B_{\mathrm{s }}-2g\eta^{\prime}}\mathrm{d}t^{\prime}\] \[-\int_{0}^{T_{L}}\operatorname{Re}\{\mathcal{K}_{1}\}\left[\cos (\theta_{\mathrm{s}}^{\prime})\sqrt{B_{\mathrm{s}}-2g\eta^{\prime}}+\sigma \omega h^{\prime}\right]\left(B_{\mathrm{s}}-2g\eta^{\prime}\right)\mathrm{d}t ^{\prime}\] \[-\int_{0}^{T_{L}}\operatorname{Re}\{\mathcal{J}_{1}\}(p_{ \mathrm{b}}^{\prime}-gd)\mathrm{d}x^{\prime}-2\pi\operatorname{Im}\{K\}. \tag{49}\]
This reformulation is necessary for practical recovery of overhanging waves. It is of course also suitable for non-overhanging waves.
## 6. Numerical illustrations
### Details on the overall methodology
From a numerical standpoint, the nonlinear integral equations (33) and (49), employed respectively for computing the surface profile and recovering it from the bottom pressure, possess notable characteristics. First, these equations eliminate the need for evaluating the derivative of \(\theta_{\mathrm{s}}\) at any point. Second, we can rely on Fourier analysis since the kernels are periodic and use the trapezoidal rule for numerical integration.
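The spectral accuracy of the trapezoidal rule for smooth periodic integrands, which motivates this choice, can be seen in a short Python experiment (using SciPy's modified Bessel function as the exact reference; the test integrand is ours):

```python
import numpy as np
from scipy.special import i0

# Spectral accuracy of the trapezoidal rule for smooth periodic integrands,
# as exploited for the kernel integrals in (33) and (49): the mean of
# exp(cos x) over one period equals the Bessel function I_0(1).
for N in (4, 8, 16, 32):
    x = 2 * np.pi * np.arange(N) / N
    print(N, abs(np.mean(np.exp(np.cos(x))) - i0(1)))   # error decays geometrically
```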
Since we already know the value of \(g\) and the wavenumber \(k\) (or equivalently the wavelength \(L\)) as inputs in our numerical scheme, we can deduce the depth of the layer \(d\) from the definition of the hydrostatic law [3, 7]. For simplicity, we consider that the constant vorticity \(\omega\) is known. However, \(\omega\) can also be determined adapting the procedure described by Clamond et al. [7]. Then, initialising our numerical schemes with an appropriate initial guess, either from linear theory or by a previous iteration, as done by Da Silva and Peregrine [11], we observe fast convergence for a discrete set of \(N\) equidistant points. In our simulations,
we typically use \(N=128\) for moderate waves and \(N=512\) for large-amplitude waves (whether they exhibit overturning or not).
When investigating overhanging waves, the non-algebraic nonlinearities inherent in the problem often pose numerical challenges. The most troublesome issue arises from aliasing errors in the function spectra, a phenomenon occasionally referred to as "spectral blocking" [2], leading to exponential growth of high frequencies. As a consequence, this aliasing effect prevents us from attaining the spectral accuracy inherent in our formulation. To address this issue, we employ the "zero padding" method, which involves increasing the size of the quadrature in Fourier space while appending zeros above the Nyquist frequency. Subsequently, we transform the functions back to physical space, compute the nonlinear terms, and filter out the previously introduced zero frequencies from the spectrum. For quadratic nonlinear terms, enlarging the degree of quadrature by a factor of \(3/2\) has proven sufficient to mitigate this phenomenon [16]. However, in the context of non-algebraic nonlinearities encountered in this study, this argument does not hold, and the exact value of the enlargement factor remains unknown. Instead, we utilize a factor of \(2\) (typically suitable for cubic nonlinearities) when performing products throughout the algorithm. Although we experimented with larger factors in our
Figure 2. Numerical demonstration of the surface recovery procedure for an overhanging and periodic steady wave with a constant vorticity \(\omega\sqrt{d/g}=3\sqrt{2}\). (a) Eulerian representation of the bottom pressure. The dashed dot line corresponds to the mean bottom pressure. (b) Surface wave profile (blue circles) obtained from expression (33). The red line represents the surface reconstruction achieved from \(p_{\mathrm{b}}\) through equation (49), while the dashed dot line indicates the mean water level.
simulations, it appeared that this value was adequate for eliminating spurious frequencies in most aliased spectra.
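For reference, a minimal Python sketch of the zero-padding procedure (here with the factor 2 used in this work); the function name and the test signal are our own illustrative choices:

```python
import numpy as np

# Sketch of the zero-padding de-aliasing for products: pad spectra by a factor
# p, multiply in physical space on the fine grid, then truncate back below the
# original Nyquist frequency.
def dealiased_product(f, g, p=2):
    N = f.size
    M = p * N
    F, G = np.fft.fft(f), np.fft.fft(g)
    pad = lambda S: np.concatenate([S[: N // 2], np.zeros(M - N), S[N // 2 :]]) * (M / N)
    prod_fine = np.fft.ifft(pad(F)) * np.fft.ifft(pad(G))   # product on fine grid
    P = np.fft.fft(prod_fine) * (N / M)
    return np.fft.ifft(np.concatenate([P[: N // 2], P[-N // 2 :]])).real

x = 2 * np.pi * np.arange(64) / 64
u = np.cos(31 * x)                 # mode close to the Nyquist frequency
uu = dealiased_product(u, u)       # ~0.5 everywhere: the cos(62x) part is
                                   # truncated; a naive 64-point product would
                                   # alias it onto a spurious cos(2x).
```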
In addition to aliasing, we noticed that it is numerically more efficient to perform a change of coordinates in evaluating the finite integrals. Particularly, significant variations in \(\theta_{\text{s}}\) occur where the wave undergoes overturning, indicating a requirement for additional collocation points in these regions. Thus, rather than evaluating the previous integrals with respect to the time variable \(t\in[0,T_{L}]\), we introduce a new integration variable \(\xi(t)\), defined as
\[\frac{\text{d}\xi}{\text{d}t}\overset{\text{\tiny def}}{=}\sqrt{1+\beta\sin( \theta_{\text{s}})^{2}}, \tag{50}\]
where \(\beta\) is a scaling parameter arbitrarily set. A similar change of coordinate can be found in [11]. In most simulations involving overhanging waves, we employ a value of \(\beta=2\pi/(kdN)\).
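In practice, \(\xi(t)\) can be obtained by cumulative quadrature of (50) and inverted by interpolation, so that equispaced points in \(\xi\) cluster in \(t\) where the surface is steepest. A minimal Python sketch (with an illustrative \(\theta_{\mathrm{s}}\)):

```python
import numpy as np

# Sketch of the reparametrisation (50): integrate d(xi)/dt by cumulative
# quadrature and invert by interpolation; equispaced xi-nodes then cluster
# in t where |sin(theta_s)| is largest (the steep, possibly overhanging, parts).
beta, TL, N = 4.0, 1.0, 512
t_fine = np.linspace(0.0, TL, 8 * N)               # fine auxiliary grid
theta_s = 1.2 * np.sin(2 * np.pi * t_fine / TL)    # illustrative angle history
dxi_dt = np.sqrt(1 + beta * np.sin(theta_s) ** 2)  # eq. (50)
xi = np.concatenate([[0.0],
                     np.cumsum(0.5 * (dxi_dt[1:] + dxi_dt[:-1]) * np.diff(t_fine))])
t_nodes = np.interp(np.linspace(0.0, xi[-1], N), xi, t_fine)   # collocation times
```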
Finally, we solve the whole system of equations (33) and (49) with the built-in iterative solver fsolve from Matlab software, using the Levenberg-Marquardt algorithm.
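An open-source stand-in (our illustrative choice, not the code actually used here) is SciPy's `root` with `method='lm'`, shown on a toy residual in place of the discretised equations (33) or (49):

```python
import numpy as np
from scipy.optimize import root

# Levenberg-Marquardt solve of a generic discretised residual R(eta) = 0;
# the residual below is a placeholder, one equation per collocation node.
def residual(eta):
    return eta**3 + eta - 1.0

sol = root(residual, x0=np.zeros(8), method='lm')
print(sol.success, sol.x[0])    # each node converges to ~0.6823
```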
### Periodic overhanging wave
The first case of interest, as shown in figure 2, depicts a periodic overturning wave of large amplitude. This particular scenario poses significant computational challenges, making it an ideal benchmark for evaluating the robustness of our
recovery procedure. Utilizing the pressure data highlighted in figure 2(a), we successfully reconstruct the surface profile using expression (49), resulting in excellent agreement, as evident from figure 2(b). Notably, the change of variables achieved through equation (50) effectively concentrates the points in regions where \(\theta_{\mathrm{s}}\) varies the most. Consequently, we attain a high level of accuracy, with \(||\eta-\eta^{\text{ex}}||_{\infty}\approx 4.48\times 10^{-3}\), where \(\eta^{\text{ex}}\) corresponds to a numerical solution obtained by solving equation (36). Furthermore, for the computation of the unknown Bernoulli constant at the surface, the error is approximately \(|B_{\text{s}}-B_{\text{s}}^{\text{ex}}|\approx 5.83\times 10^{-3}\).
We emphasize the effectiveness of our recovery procedure by presenting the velocity field within the fluid layer in both upper panels of figure 3. Notably, a pair of stagnation points is observed in panel 3(c) on the flat bottom boundary, which often poses challenges when utilizing conformal mapping techniques. In the present context, since our methodology operates solely in the physical plane, our general approach can readily reconstruct the surface profile regardless of the presence or absence of stagnation points.
### Rotational solitary surface wave
Computing solitary waves using our procedure is straightforward but necessitates considering a relatively large numerical domain to accurately capture their behavior far from the crest. Meanwhile, to ensure that the wave is located above the mean water level, we substitute the condition (1) with
\[\eta^{(n+1)}=\eta^{(n)}-\min\eta^{(n)}, \tag{51}\]
instead, at each numerical iteration \(n\).
Figure 4. Surface recovery procedure for a solitary steady wave with a constant vorticity \(\omega\sqrt{d/g}=1\). Both panels have the same legend as figure 2.
The recovery process for a solitary wave is presented in figure 4, which illustrates the pressure data in the upper panel and the corresponding surface profile in the lower panel. We consider here a solitary wave of relatively small amplitude because this case is quite challenging. Indeed, the larger the solitary wave, the faster its decay (thus requiring a smaller computational box) and the larger the signal-to-noise ratio (for field data). So, in that respect, the recovery of small-amplitude solitary waves is more challenging.
In the present specific case, the agreement with a given numerical solution is excellent, yielding numerical errors of \(||\eta-\eta^{\mathrm{ex}}||_{\infty}\approx 1.46\times 10^{-4}\) and \(|B_{\mathrm{s}}-B_{\mathrm{s}}^{\mathrm{ex}}|\approx 4.21\times 10^{-5}\) for the surface profile and the Bernoulli constant, respectively. Unfortunately, spurious infinitesimal oscillations (located far from the wave crest) prevent us from reaching the much better accuracy that we would expect from our method. In order to facilitate these computations and remove unwanted oscillations, another change of variables can be implemented (similar to the approach (50) used for the periodic case) to concentrate the quadrature points near the crest, rather than far away where the elevation is infinitesimal. However, we reserve this task for future investigations, which will provide more comprehensive details on the efficient computation of solitary waves within this context.
## 7. Discussion
This work presents a novel and comprehensive boundary integral method for recovering surface water-waves from bottom pressure measurements. Despite the inherent complexity of this inverse problem, we successfully formulate the relatively simple expression (49) for surface recovery, enabling the computation of a wide range of rotational steady waves. A significant advantage of this approach lies in the integral formulation, which eliminates the need for arbitrarily selecting a basis of functions to fit the pressure data, as done previously in [3, 5, 7].
To demonstrate the robustness and efficiency of our method, we showcased two challenging examples: an overturning wave with a large amplitude and a solitary wave. In both cases, we accurately recovered the surface profile and the hydrodynamic parameters with good agreement. Although it might be possible to adapt our numerical procedure to compute extreme waves (with angular surface) with (or without) overhanging profiles, this task is left for future investigations. In fact, our main goal here is a proof of concept and to provide clear evidence on the effectiveness of this new formulation.
In conclusion, this article, along with the proposed boundary integral formulation, represents a significant milestone in solving the surface wave recovery problem, providing a solid foundation for future extensions, such as its potential application to three-dimensional configurations. Indeed, in 3D holomorphic functions cannot be employed but integral representations via Green functions remain, so an efficient fully nonlinear surface recovery is conceivable.
**Funding.** Joris Labarbe has been supported by the French government, through the UCA\({}^{\mathrm{JEDI}}\)_Investments in the Future_ project managed by the National Research Agency (ANR) with the reference number ANR-15-IDEX-01.
**Declaration of interests.** The authors report no conflict of interest.
## Appendix A Logarithms and Polylogarithms
The function \(\ln(x)\) denotes the _natural logarithm_ (Napierian logarithm) of a real positive variable \(x\) (\(x\in\mathds{R}^{+}\)), and \(\log(z)\) denotes the _principal logarithm_ of a complex variable \(z\in\mathds{C}\), i.e.,
\[\log(z)\stackrel{{\mathrm{def}}}{{=}}\ln|z|+\mathrm{i}\arg(z), \qquad-\pi<\arg(z)\leqslant\pi. \tag{52}\]
This definition requires that the argument of any complex number lies in \(\,]-\pi;\pi]\). It implies, in particular, that \(\arg(z^{-1})=-\arg(z)\) if \(z\not\in\mathds{R}^{-}\) and that \(\arg(z^{-1})=\arg(z)=\pi\) if \(z\in\mathds{R}^{-}\). We have the special relations
\[\log(-z) =\mathrm{i}\pi+\log(z)+2\mathrm{i}\pi\left\lfloor-\arg(z)/2\pi \right\rfloor, \tag{53}\] \[\log\!\left(\mathrm{e}^{\mathrm{i}z}\right) =\mathrm{i}z+2\mathrm{i}\pi\left\lfloor\tfrac{1}{2}-\mathrm{Re}( z/2\pi)\right\rfloor,\] (54) \[\log\!\left(-\mathrm{e}^{\mathrm{i}z}\right) =\mathrm{i}(\pi+z)+2\mathrm{i}\pi\left\lfloor-\mathrm{Re}(z/2\pi )\right\rfloor, \tag{55}\]
where \(\left\lfloor\cdot\right\rfloor\) is the rounding toward \(-\infty\).
The polylogarithms can be defined, for \(|z|<1\) and \(\nu\in\mathds{C}\), by
\[\mathrm{L}\mathrm{i}_{\nu}(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{n^{\nu}}, \tag{56}\]
and for all complex \(z\) by analytic continuation [19]. With the above definition of the complex logarithm, we have the special inversion formulae
\[\mathrm{L}\mathrm{i}_{0}(z)+\,\mathrm{L}\mathrm{i}_{0}(z^{-1})+1 =0, \tag{57}\] \[\mathrm{L}\mathrm{i}_{1}(z)-\,\mathrm{L}\mathrm{i}_{1}(z^{-1})+ \log(-z) =\left\{\begin{array}{ll}0&\mathrm{if}\quad z\not\in[0;1],\\ 2\mathrm{i}\pi&\mathrm{if}\quad z\in[0;1],\end{array}\right.\] (58) \[\mathrm{L}\mathrm{i}_{2}(z)+\,\mathrm{L}\mathrm{i}_{2}(z^{-1})+ \frac{1}{2}\log^{2}(-z)+\frac{1}{6}\pi^{2} =\left\{\begin{array}{ll}0&\mathrm{if}\quad z\not\in[0;1],\\ 2\mathrm{i}\pi\log(z)&\mathrm{if}\quad z\in[0;1],\end{array}\right. \tag{59}\]
The polylogarithms for \(\nu\in\mathds{N}^{*}\) are single-valued functions in the cut plane \(z\in\mathds{C}\backslash[1;+\infty[\) and the inversion formula can be used to extend their definition for \(z\in[1;+\infty[\). Note that the inversion formula depends on the definition of the principal logarithm that is not unique, thus several variants can be found in the literature.
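Given this branch sensitivity, the low-order formulae can be checked numerically with the standard closed forms \(\mathrm{Li}_{0}(z)=z/(1-z)\) and \(\mathrm{Li}_{1}(z)=-\log(1-z)\) together with NumPy's principal-branch complex logarithm, consistent with (52). A minimal sketch:

```python
import numpy as np

# Numerical check of the inversion formulae (57)-(58), using the closed forms
# Li_0(z) = z/(1-z) and Li_1(z) = -log(1-z) on the principal branch.
Li0 = lambda z: z / (1 - z)
Li1 = lambda z: -np.log(1 - z)

z = 1.7 - 0.4j                                  # arbitrary point, z not in [0,1]
print(abs(Li0(z) + Li0(1 / z) + 1))             # eq. (57): ~0
print(abs(Li1(z) - Li1(1 / z) + np.log(-z)))    # eq. (58), case z not in [0,1]: ~0
```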
We finally note other relations useful in this paper
\[\mathrm{L}\mathrm{i}_{\nu}\!\left(\mathrm{e}^{\pm\mathrm{i}z}\right) =\mp\mathrm{i}\mathrm{d}\,\mathrm{L}\mathrm{i}_{\nu+1}\!\left( \mathrm{e}^{\pm\mathrm{i}z}\right)/\mathrm{d}z, \tag{60}\] \[\mathrm{L}\mathrm{i}_{0}\!\left(\mathrm{e}^{\pm\mathrm{i}z}\right) =\pm\tfrac{\mathrm{i}}{2}\cot(z/2)-\tfrac{1}{2},\] (61) \[\mathrm{L}\mathrm{i}_{1}\!\left(\mathrm{e}^{\pm\mathrm{i}z}\right) =\pm\mathrm{i}\pi\left[\mathrm{Re}\{z\}/2\pi\right]+\tfrac{ \mathrm{i}}{2}\arg(z^{2})-\mathrm{i}\arg(\mp\mathrm{i}z)-\tfrac{1}{2}\log\! \left(4\sin^{2}(z/2)\right)\mp\tfrac{\mathrm{i}}{2}z\] (62) \[\mathrm{L}\mathrm{i}_{2}\!\left(\mathrm{e}^{\pm\mathrm{i}z}\right) =\frac{\pi^{2}}{6}+\frac{z^{2}}{4}\mp\left(\arg(z^{2})-2\arg( \mp\mathrm{i}z)\right)\frac{z}{2}\mp\frac{\mathrm{i}}{2}\int_{0}^{z}\log\! \left(4\sin^{2}\!\left(\frac{z^{\prime}}{2}\right)\right)\mathrm{d}z^{\prime}, \tag{63}\]
where \([\cdot]\) denotes the rounding toward zero, the last relation being valid for \(-2\pi<\mathrm{Re}\{z\}<2\pi\).
## Appendix B Singular Leibniz integral rule
Let \(K(x,y)\) be a function regular everywhere for \(y\in[a,b]\), except perhaps at \(y=y_{0}\in]a,b[\) (\(y_{0}\) generally depending on \(x\), as well as \(a\) and \(b\)) where \(K\) may be singular, its finite integral
being taken in the sense of Cauchy principal value, i.e.
\[J(x)=\mathrm{P.V.}\!\int_{a(x)}^{b(x)}K(x,y)\,\mathrm{d}y\stackrel{{\mathrm{def}}}{{=}}\lim_{\epsilon\to 0^{+}}\left\{\int_{a(x)}^{y_{0}(x)-\epsilon}K(x,y)\,\mathrm{d}y+\int_{y_{0}(x)+\epsilon}^{b(x)}K(x,y)\,\mathrm{d}y\right\}, \tag{64}\]
exists. The first derivative of \(J\) is thus
\[\frac{\mathrm{d}J}{\mathrm{d}x} =\lim_{\epsilon\to 0^{+}}\left\{\frac{\mathrm{d}}{\mathrm{d}x}\int_{a}^{y_{0}-\epsilon}K(x,y)\mathrm{d}y+\frac{\mathrm{d}}{\mathrm{d}x}\int_{y_{0}+\epsilon}^{b}K(x,y)\mathrm{d}y\right\}\] \[=\lim_{\epsilon\to 0^{+}}\left\{\int_{a}^{y_{0}-\epsilon}\frac{\partial K(x,y)}{\partial x}\mathrm{d}y+\frac{\mathrm{d}y_{0}}{\mathrm{d}x}K(x,y_{0}-\epsilon)-\frac{\mathrm{d}a}{\mathrm{d}x}K(x,a)\right.\] \[\qquad\qquad\left.+\int_{y_{0}+\epsilon}^{b}\frac{\partial K(x,y)}{\partial x}\mathrm{d}y+\frac{\mathrm{d}b}{\mathrm{d}x}K(x,b)-\frac{\mathrm{d}y_{0}}{\mathrm{d}x}K(x,y_{0}+\epsilon)\right\}\] \[=\mathrm{P.V.}\!\int_{a}^{b}\frac{\partial K(x,y)}{\partial x}\mathrm{d}y+\frac{\mathrm{d}b}{\mathrm{d}x}K(x,b)-\frac{\mathrm{d}a}{\mathrm{d}x}K(x,a)-\frac{\mathrm{d}y_{0}}{\mathrm{d}x}\lim_{\epsilon\to 0^{+}}\left\{K(x,y_{0}+\epsilon)-K(x,y_{0}-\epsilon)\right\}. \tag{65}\]
For instance, \(a\) and \(b\) being constant and for a sufficiently well-behaving function \(\varphi\), we have
\[\mathrm{P.V.}\!\int_{a}^{b}\frac{\varphi(y)}{x-y}\,\mathrm{d}y= \mathrm{P.V.}\!\int_{a}^{b}\frac{\partial}{\partial x}\left[\,\ln|x-y|\,\varphi(y)\,\right]\mathrm{d}y\] \[= \frac{\mathrm{d}}{\mathrm{d}x}\,\mathrm{P.V.}\!\int_{a}^{b}\ln|x-y|\,\varphi(y)\,\mathrm{d}y+\lim_{\epsilon\to 0^{+}}\ln(\epsilon)\left[\varphi(x+\epsilon)-\varphi(x-\epsilon)\right], \tag{66}\]
where the limit is zero if \(\varphi\) is Holder continuous (sufficient but unnecessary condition). This formula is a consequence of (65) but also of the choice of the antiderivative of \(1/(x-y)\). Indeed, one can also write
\[\mathrm{P.V.}\!\int_{a}^{b}\frac{\varphi(y)}{x-y}\,\mathrm{d}y= \mathrm{P.V.}\!\int_{a}^{b}\frac{\partial}{\partial x}\left[\,\log(x-y)\,\varphi(y)\,\right]\mathrm{d}y\] \[= \frac{\mathrm{d}}{\mathrm{d}x}\,\mathrm{P.V.}\!\int_{a}^{b}\log(x-y)\,\varphi(y)\,\mathrm{d}y+\mathrm{i}\pi\varphi(x)+\lim_{\epsilon\to 0^{+}}\ln(\epsilon)\left[\varphi(x+\epsilon)-\varphi(x-\epsilon)\right], \tag{67}\]
where we have used the relations \(\log(\epsilon)=\ln(\epsilon)\) and \(\log(-\epsilon)=\ln(\epsilon)+\mathrm{i}\pi\), since \(\epsilon\in\mathds{R}^{+}\). The relation (66) is more convenient when dealing only with real variables, while (67) is more suitable for complex formulations. The reason is that \(\log(x)\) can be continued analytically in the complex plane, which is not the case with \(\ln|x|\).
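To make the principal-value definition (64) concrete, a minimal Python sketch compares the symmetric-exclusion limit with an independent reference, namely the Cauchy-weight quadrature of QUADPACK as exposed by SciPy (our choice of reference and of test integrand; neither appears in this paper):

```python
import numpy as np
from scipy.integrate import quad

# P.V. integral (64) via symmetric exclusion, checked against QUADPACK's
# Cauchy-weight rule, which computes P.V. of f(y)/(y - wvar).
phi, a, b, x = np.exp, -1.0, 1.0, 0.3

def pv_symmetric(eps):
    left = quad(lambda y: phi(y) / (x - y), a, x - eps)[0]
    right = quad(lambda y: phi(y) / (x - y), x + eps, b)[0]
    return left + right

reference = -quad(phi, a, b, weight='cauchy', wvar=x)[0]   # sign: 1/(x-y) = -1/(y-x)
print(pv_symmetric(1e-4), reference)                       # agree to O(eps)
```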
|
2303.09344 | First time-resolved measurement of infrared scintillation light in
gaseous xenon | Xenon is a widely used detector target material due to its excellent
scintillation properties in the ultraviolet (UV) spectrum. The additional use
of infrared (IR) scintillation light could improve future detectors. However, a
comprehensive characterization of the IR component is necessary to explore its
potential. We report on the first measurement of the time profile of the IR
scintillation response of gaseous xenon. Our setup consists of a gaseous xenon
target irradiated by an alpha particle source and is instrumented with one IR-
and two UV-sensitive photomultiplier tubes. Thereby, it enables IR timing
measurements with nanosecond resolution and simultaneous measurement of UV and
IR signals. We find that the IR light yield is in the same order of magnitude
as the UV yield. We observe that the IR pulses can be described by a fast and a
slow component and demonstrate that the size of the slow component decreases
with increasing levels of impurities in the gas. Moreover, we study the IR
emission as a function of pressure. These findings confirm earlier observations
and advance our understanding of the IR scintillation response of gaseous
xenon, which could have implications for the development of novel xenon-based
detectors. | Mona Piotter, Dominick Cichon, Robert Hammann, Florian Jörg, Luisa Hötzsch, Teresa Marrodán Undagoitia | 2023-03-16T14:24:01Z | http://arxiv.org/abs/2303.09344v2 | # First time-resolved measurement of infrared scintillation light in gaseous xenon
###### Abstract
Xenon is a widely used detector target material due to its excellent scintillation properties in the ultraviolet (UV) spectrum. The additional use of infrared (IR) scintillation light could improve future detectors. However, a comprehensive characterization of the IR component is necessary to explore its potential. We report on the first measurement of the time profile of the IR scintillation response of gaseous xenon. Our setup consists of a gaseous xenon target irradiated by an alpha particle source and is instrumented with one IR- and two UV-sensitive photomultiplier tubes. Thereby, it enables IR timing measurements with nanosecond resolution and simultaneous measurement of UV and IR signals. We find that the IR light yield is in the same order of magnitude as the UV yield. We observe that the IR pulses can be described by a fast and a slow component and demonstrate that the size of the slow component decreases with increasing levels of impurities in the gas. Moreover, we study the IR emission as a function of pressure. These findings confirm earlier observations and advance our understanding of the IR scintillation response of gaseous xenon, which could have implications for the development of novel xenon-based detectors.
## 1 Introduction
Xenon is employed as a target medium in different applications including particle-, astroparticle-, neutrino-, and medical physics [1; 2]. While most detectors use it in its liquid state [3; 4; 5; 6; 7; 8], there are also experiments utilizing its solid state [9] or its gaseous state at high pressures [10; 11]. All these applications use the 175 nm [12] ultraviolet (UV) light emitted in the xenon scintillation process.
Infrared (IR) photons are also emitted in the de-excitation process that follows an energy deposition in the xenon medium. This radiation was first detected in the gaseous phase [13] and, shortly after, in liquid xenon [14]. From these early studies, the infrared signal appeared to be influenced only slightly by the presence of electronegative impurities, while electrons in the charge signal were lost due to oxygen present in the system [14]. The infrared emission in the gas has been measured to be mainly at a wavelength of \(\sim\)1.3 um, shifting towards longer wavelengths as the pressure increases [15; 16]. The infrared light yield in gaseous xenon was found to be of a similar magnitude compared to the UV yield [17], constituting a promising additional signal for future applications. First studies point out that the yield in liquid xenon could be significantly lower than in gas [18]. Studies of the IR scintillation light of xenon can also be relevant for dual-phase time projection chambers [1] which record their charge signal via proportional scintillation in the gas phase. The additional detection of IR photons emitted in this process could help, for instance, to improve the energy resolution of this signal, or possibly to improve the discrimination between different types of particles.
In this paper, we report on the observation of infrared photons emitted by gaseous xenon after excitation by alpha particles. We find a strong dependence of the IR signal on the level of outgassing impurities. In addition, we analyze the time profile of the IR scintillation response in all measurements. Our observations reveal at least two components with distinct time scales. Furthermore, we estimate the IR light yield and study the IR signal for gas pressures ranging from 500 mbar to 1050 mbar.
The document is organized as follows: in section 2, we start introducing the general scintillation mechanism in xenon. Sections 3 and 4 describe the experimental setup and the measurements performed. Finally, the results are presented and discussed in sections 5 and 6.
## 2 Xenon scintillation process
Energy deposited by charged particles in xenon leads to the emission of scintillation light. The target atoms get excited or ionized as shown by process A in figure 1. Ionized xenon atoms form Xe\({}_{2}^{+}\) excimers (B). These can recombine with surrounding electrons into excited atomic states (C) which can also be reached by direct excitation.
The de-excitation can happen via different processes like atomic transitions (producing narrow IR lines), via collisions with neighboring atoms, or via an excimer transition, emitting a broad spectrum centered approximately at 1.3 um (D). The measured infrared spectrum of gaseous xenon (redrawn from [21]) is shown later in figure 4 (blue curve) together with the sensitivity of the PMT employed for the measurement. The spectrum contains both the broad excimer emission and a collection of sharp lines from atomic transitions. Finally, all excitations end at the lowest lying excimer states Xe\({}_{2}^{*}\) (E). These decay via the emission of ultraviolet photons with a central wavelength of 175 nm, with two characteristic lifetimes corresponding to a singlet and a triplet transition (labeled F in figure 1).
## 3 Experimental setup
The apparatus employed for the measurement consists of a cylindrical chamber of 64 cm length and 10 cm diameter (see figure 2), hosting two UV-sensitive photo-multiplier tubes (PMTs), and a connection to an infrared-sensitive PMT (see subsection 3.2).
The chamber volume is filled with gaseous xenon (GXe) and is irradiated with an \({}^{241}\)Am alpha source of 3.7 kBq activity and an energy of 5.49 MeV. An aperture with a diameter of 2.5 mm is placed 3 mm in front of the source. In this way, alpha tracks are collimated and deposit their energy between the UV PMTs in the gaseous volume.
The signals of all three photomultipliers are recorded and digitized using a CAEN V1743 ADC board, featuring a 3.2 GS/s effective sampling frequency. To exploit the excellent time resolution of the infrared PMT with narrow pulses of just a few ns, a digitizer with a high sampling frequency is preferred. We already employed and characterized this digitizer in [22].
Figure 1: Illustration of excitation and relaxation processes in xenon (not to scale). Infrared scintillation is emitted by atomic or excimer transitions. Figure inspired by [15] as well as [19; 20]
Figure 2: Top: Design of the setup employed to measure infrared radiation in gaseous xenon. Bottom: Electric field simulation (using COMSOL) of the setup’s interior. A few typical alpha trajectories (from SRIM) are shown in white
To estimate the magnitude of the electric field in the region where alpha particles deposit their energy, a COMSOL simulation is used [23]. The simulation is performed in three dimensions and contains all relevant elements of the setup. Figure 2 (bottom) displays the simulated electric field norm in the central plane. The length of the alpha tracks (in white) is calculated using the SRIM simulation toolkit [24]. The alpha particles travel mostly along the middle axis between the UV PMTs (about 2 cm for 1 bar pressure), crossing field regions close to 0 between the source and the aperture, and values from \(\sim 400\) V/cm to about 170 V/cm behind the aperture. This electric field caused by the high voltage of the PMTs prevents part of the ionization electrons from recombining.
### Gas handling system
To maintain a high level of purity of the gaseous xenon, we use a purification system that continuously removes impurities released from material outgassing. This is important since impurities like O\({}_{2}\) or water vapor are known to decrease the UV signal in xenon [25]. For this purpose, the xenon is recirculated and purified through a zirconium hot getter (SAES MonoTorr PS4). By running the xenon either through the purifier or through its bypass, the effect of the impurities on the UV and IR light yields can be investigated. Figure 3 shows a schematic drawing of the purification system.
A small recirculation pump is employed to extract the xenon gas from the chamber and push it through the purifier at a mass flow of \(\sim 650\) sccm at 1000 mbar. Throughout a run, the flow is kept at a constant level by a mass flow controller (MFC). The system also includes a vacuum pump, a temperature sensor, and a pressure sensor (P).
### Response of photosensors
For the measurement of the IR photons, a Hamamatsu H10330C PMT was procured. This PMT has an internal thermoelectric cooler that keeps it at a temperature of about \(-60^{\circ}\)C. Compared to photodiodes used in previous experiments [17; 18], the IR-PMT has two distinct advantages. Firstly, it is sensitive to single photons, and secondly, it has a fast time response with a rise time and a fall time of 0.9 ns and 1.7 ns, respectively. The fast timing allows studying the time structure of the infrared pulse (see section 5). Figure 4 shows the quantum efficiency (\(QE\)) of our IR-PMT overlaid on top of the gaseous xenon spectrum as measured in [21].
The PMT is sensitive to wavelengths from 950 to \(\sim 1\,650\) nm, which includes the excimer transition at 1.3 um and several atomic transition lines.
To detect the UV xenon scintillation light, two 3-inch round Hamamatsu R11410 PMTs [26] are utilized. This sensor type was optimized together with the company, and employed in the XENON1T and XENONnT experiments [27; 28; 29]. Its sensitivity ranges from about 160 nm to about 600 nm.
The response of the three photosensors is characterized via in-situ calibrations with either an ultraviolet (365 nm) or an infrared (950 nm) LED connected to
Figure 4: Quantum efficiency (red) of our H10330C PMT (measured by Hamamatsu) together with the infrared emission spectrum of gaseous xenon (blue). Spectral data for 1 bar xenon redrawn from [21] (arbitrary units). The filled area represents the excimer de-excitation (D in figure 1)
Figure 3: Schematics of the gas handling system of the setup. Continuous removal of impurities can be achieved by recirculating the gas through a purifier.
the system via optical fibers. The PMTs are illuminated with pulsed light signals from either of the two LEDs. The gain is determined in a model-independent way using a subsequent blank measurement without LED light as described in [30]. The UV PMTs were operated at a voltage of \(-1300\,\mathrm{V}\) (below the typical voltage of \(-1500\,\mathrm{V}\)) to avoid saturation due to large UV scintillation signals produced by the alpha interactions. The gains of the two UV PMTs at the operating voltage were determined to be \(1.6\times 10^{6}\) and \(0.8\times 10^{6}\) for the left and right PMT, respectively. The gain of the IR-sensitive PMT was measured to be \(3.52\times 10^{6}\) at an operating voltage of \(-800\,\mathrm{V}\). In figure 5, we show the response of the IR PMT to the LED calibration. A peak-to-valley ratio of about 1.1 is found. We also measure the dark count rate of the PMT which is \(1.8\times 10^{5}\,\mathrm{Hz}\) at the operational temperature of about \(-60^{\circ}\mathrm{C}\).
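For illustration, a minimal Python sketch of one common model-independent gain estimator on toy data (a mean-over-occupancy method assuming Poisson-distributed photoelectrons; the exact estimator of ref. [30] may differ in detail, and all numbers below are illustrative):

```python
import numpy as np

# Toy LED-ON / LED-OFF gain estimate: occupancy from the fraction of
# below-threshold ("empty") triggers, SPE charge from the background-
# subtracted mean divided by the occupancy.
rng = np.random.default_rng(1)
e = 1.602e-19
gain_true, occupancy, n_ev = 3.5e6, 0.4, 20_000
q_spe = gain_true * e

n_pe = rng.poisson(occupancy, n_ev)                               # PE per trigger
noise = rng.normal(0.0, 0.05 * q_spe, n_ev)                       # pedestal noise
q_on = noise + np.array([rng.normal(q_spe, 0.4 * q_spe, k).sum() for k in n_pe])
q_off = rng.normal(0.0, 0.05 * q_spe, n_ev)                       # blank (LED OFF)

thr = 3 * 0.05 * q_spe                                            # pedestal cut
lam = -np.log(np.mean(q_on < thr) / np.mean(q_off < thr))         # Poisson: P(0) = exp(-lam)
gain = (q_on.mean() - q_off.mean()) / lam / e
print(f"estimated gain: {gain:.3g}")                              # close to 3.5e6
```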
## 4 Measurements and data analysis
Before each measurement, the chamber is pumped down to a pressure of about \(10^{-4}\,\mathrm{mbar}\). Then, fresh xenon is filled into the chamber at \(1\,\mathrm{bar}\), the photosensors are turned on, and the data acquisition is started.
The events are triggered by the analog coincidence of the two UV PMTs. The acquired raw waveforms from each PMT are processed with the HeXe processor [31], which identifies peaks and computes various characteristics, including peak width and area. To have a homogeneous alpha selection, two selection criteria are applied to the UV PMT signal: a PMT signal asymmetry cut and an energy cut. The asymmetry cut has the purpose of choosing alpha tracks traveling almost exactly between the two UV PMTs. Figure 6 shows the fraction of UV signal in the left PMT as a function of the total UV signal size. This parameter is defined as:
\[\text{Area fraction left}=\frac{A_{\ell}}{A_{r}+A_{\ell}}, \tag{1}\]
where \(A_{\ell}\) and \(A_{r}\) represent the signal pulse areas in the left and right PMT, respectively.
The event distribution is centered slightly above 0.5 (half of the signal in the left PMT) and has a width corresponding to the events that leave the source under an angle smaller than 90 degrees. The shift above 0.5 is either due to a small misalignment of the source from the center of the two UV PMTs or due to uncertainties on PMT properties: PMT \(QE\)s or gain estimation. Asymmetry values between the 30% and 70% quantiles of the distribution are chosen to select alpha tracks traveling through the middle of the setup. We also make a selection on the total UV area in order to remove pile-up events (at \(20\,000\,\mathrm{PE}\)) and alphas with energy losses (tail toward low areas). Both cuts result in the selection of the red box in figure 6.
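A minimal sketch of this two-cut selection is given below; the quantile boundaries follow the text, while the numerical area bounds are placeholders read off figure 6.

```python
import numpy as np

def select_alphas(a_left, a_right, q=(0.30, 0.70), area_min=14_000, area_max=20_000):
    # Eq. (1): fraction of the summed UV area seen by the left PMT
    total = a_left + a_right
    frac = a_left / total
    lo, hi = np.quantile(frac, q)  # keep tracks through the middle of the setup
    return (frac > lo) & (frac < hi) & (total > area_min) & (total < area_max)
```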
To obtain the IR time profile, we extract the arrival time of each photon in the waveform separately. The time distribution is then obtained by calculating the times of all IR photons relative to the time of the large UV signals, where the pulse time is defined as the first sample to the left of the 10% pulse height in both cases. For this method, we need to be able to resolve individual photons. This is feasible since the time resolution of the IR PMT is sufficiently sharp (rise- and fall-times of
Figure 5: Gain calibration of the H10330C IR PMT. The single photo electron (SPE) spectrum is obtained by subtracting the weighted LED OFF spectrum from the LED ON spectrum
Figure 6: Area fraction of the UV PMT area seen in the left PMT as a function of the total UV area. The red box represents the region selected for the timing analysis and light yield calculation
\(\sim 1\,\mathrm{ns}\)) and the number of photons in one time window is comparably small.
In order to remove PMT noise, the area of each IR photon is required to be larger than \(0.17\,\mathrm{PE}\) and the pulse width, defined as the time duration containing \(80\%\) of the pulse area, should be larger than \(1.85\,\mathrm{ns}\). To calculate the IR signal strength, we sum over all photons detected in the DAQ window. The infrared photons located just at the end of the digitized window are removed to avoid a biased reconstruction.
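In code, the per-photon treatment could look as follows (a sketch; the pulse dictionaries stand in for the output of the peak finder, whose actual interface is not reproduced here):

```python
import numpy as np

def pulse_time(wf, start, stop, dt=1.0):
    # Pulse time: first sample to the left of the 10% pulse-height crossing.
    seg = np.asarray(wf[start:stop], dtype=float)
    cross = np.nonzero(seg >= 0.1 * seg.max())[0][0]
    return (start + cross - 1) * dt

def accept_ir_photon(p):
    # Noise cuts: area above 0.17 PE and width (the duration containing
    # 80% of the pulse area) above 1.85 ns.
    return p["area_pe"] > 0.17 and p["width80_ns"] > 1.85

def ir_arrival_times(ir_pulses, t_uv):
    # Times of accepted IR photons relative to the UV pulse time.
    return np.array([p["time_ns"] - t_uv for p in ir_pulses if accept_ir_photon(p)])
```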
The data presented in the following comprises two runs: one at constant pressure of \(1\,\mathrm{bar}\), where the level of impurities in the gas was varied, and the other at high purity, where the pressure was varied between \(500\) and \(1050\,\mathrm{mbar}\).
## 5 Results and discussion
This section contains the results derived from the measurements described above. It includes a study of the IR signal for different gas purity conditions, the investigation of the signal time profile, an estimation of the IR light yield, and a study of its pressure dependence.
### Effect of impurities on the IR signal
First, we study the dependence of the infrared signal strength on the xenon purity. The chamber was filled with xenon from a storage bottle, and the measurement was started at time \(t=0\,\mathrm{min}\). Figure 7 shows the signal size as a function of time, with the IR and UV signals displayed in the top and bottom panels, respectively. The two-dimensional distributions show all events after the asymmetry cut. For the IR signals, we show the total number of photons in the waveform and the black solid line indicates the mean of the distribution. In the first measurement phase, we operated without purification. Both the UV signal size and the mean number of IR photons decreased slightly with time in this phase, due to outgassing of impurities by the chamber and the materials inside. The xenon purification was started by circulating it through a hot getter at about \(185\,\mathrm{min}\) after the start of the run (dashed line). This was followed by a prompt increase of the signals by a factor of about \(12\) and \(10\) for IR and UV, respectively. At time \(260\,\mathrm{min}\) (dotted line), the purification was stopped by bypassing the getter, and both signals started deteriorating immediately. It can be seen that, right before the purification was stopped, the UV signal size was still slightly increasing. Therefore, somewhat higher signals can be expected for longer purification times. Although this should be a small effect, we plan to investigate it further in future measurements.
Using a photomultiplier tube as an infrared sensor allows studying the time structure of this signal with \(\mathcal{O}\)(ns) resolution. To investigate the time distribution of IR photons, we calculate the time of each photon as explained in section 4. We study the timing profile of the IR photons for different purity levels by choosing data in various time intervals of figure 7. Figure 8 (left) shows the time distribution of IR photons. For the time period before the purification was started (red curve), the tail of the distribution is primarily composed of a fast component with a decay time of \(\sim 2\,\mathrm{ns}\). Once the purification started (times \(>200\,\mathrm{min}\)), an additional slow component with a decay time of about \(1\,\mathrm{\SIUnitSymbolMicro s}\) becomes evident (blue). At the end of the run, the purifier is bypassed and impurities slowly accumulate again. This results in a decreasing time constant for the slow component (purple).
For comparison, we show in figure 8 (right) the average waveform of the UV pulses for the same purity conditions. For the period of bad purity (red), the signal is lower in amplitude and has a narrower pulse width. This is qualitatively similar to the observation in the infrared signal.
Our results show a notable increase in the intensity of the IR signal (factor 12) as the level of impurities in the xenon decreases, which is in tension with early
Figure 7: Evolution of the number of IR photons (top panel) and UV signal size (bottom panel) for different xenon purity conditions. The solid line in the top panel indicates the mean number of IR photons. The dashed line represents the time at which purification through the getter was started and the dotted line when it was stopped
measurements in liquid xenon [14]. There, it was stated that the infrared signal only slightly depends on the presence of impurities. Our hypothesis is that the impurity dependence of the IR signal arises, at least partly, from suppressed electron recombination. Since electrons are more likely to attach to impurities at higher impurity concentrations, they are less likely to recombine and contribute to the scintillation signal. To a certain degree, we would expect an impurity dependence of the signal also in liquid, as electron recombination is present both in liquid and gaseous xenon.
Looking at the arrival time of IR photons, we find two time components. We attribute the fast component to excitation luminescence from either the decay of xenon excimers (D in figure 1) or from atomic transitions. The slow component could be driven by electron thermalization and recombination (B in figure 1). We observe that the slow component is strongly affected by the purity, which could be interpreted as a reduction of recombination electrons due to attachment to electronegative impurities, such as O\({}_{2}\). We can exclude that secondary scintillation by the drifting electrons is a major contribution to the observed signal. The electric field in the analysis volume is below 400 V/cm and, therefore, below the \(\sim 1\) kV/cm\(\cdot\)bar required for this process (see [32] and references therein). The fast component is almost unaffected by the change in impurity concentration. Similarly, we find that the size and shape of the UV pulse vary with purity, becoming smaller and narrower for decreasing purity. This is likely the result of absorption of UV light by impurities [25] as well as quenching processes affecting the slow scintillation decay component (triplet), which has a lifetime of \(\sim 100\) ns [33]. Furthermore, the absence of recombination signal at delayed times can be partially responsible for the narrower distribution, as electrons are attached to impurities.
Following our hypothesis that the slow IR component primarily originates from recombination luminescence, we can estimate the amount of photons arising from direct excitation relative to the ones coming from recombination. Note that, since our digitizer has a small acquisition window compared to the length of the pulse, we need to extrapolate and add the missing part of the slow component (see section 5.2). We find that the ratio of excitation luminescence to recombination luminescence in the IR would be below 1%.
### Estimation of the infrared light yield
The light yield, defined as the number of photons emitted per unit energy deposited, can be expressed as:
\[Y=\frac{\mu}{E_{\alpha}\cdot\epsilon\cdot QE}, \tag{2}\]
where \(\mu\) is the number of detected photons per event, \(E_{\alpha}\) is the deposited energy, \(QE\) is the PMT quantum efficiency, and \(\epsilon\) is the solid angle acceptance.
The number of detected photons per event, \(\mu\), is estimated via the mean value \(\tilde{\mu}\). For the infrared signal, \(\tilde{\mu}\) is the mean number of observed photons per event.
Figure 8: Scintillation response in gaseous xenon at 1 bar for different purity levels (see regions in figure 7). Left: Arrival time of each IR photon relative to the UV time. Right: Average waveform of the UV pulses
For the UV spectrum, \(\tilde{\mu}\) is the mean of the summed UV area after applying a strict energy cut (as shown in figure 6) such that alphas with losses in the source or the aperture are not considered. Due to the short acquisition window of the DAQ (total of \(320\,\mathrm{ns}\)), the number of registered IR photons needs to be corrected for the amount of missing signal to obtain \(\mu\). To estimate this, we fit a function to the tail of the IR photon time distribution. Since the underlying distribution of the slow component is unknown, we use three simple fitting functions: linear, exponential, and a function proportional to \(1/(1+\Delta t/T_{\mathrm{r}})^{2}\) with the recombination time \(T_{\mathrm{r}}\), as motivated by the recombination process in [34]. The computed light yields are scaled based on the extrapolated fraction of the signal outside the acquisition window, which varies between different fitting models. For \(1\,\mathrm{bar}\) xenon pressure, the linear fit (\(lin\)) yields a correction factor of about 2, while the exponential fit (\(exp\)) yields about 3.5, and the recombination model fit (\(rec\)) yields about 6.5.
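A sketch of this extrapolation procedure is given below, assuming tail histograms `(t_tail, rate_tail)` extracted from the IR time distribution; the initial guesses and the integration cutoff `t_max` are illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def f_lin(t, a, b):   return a - b * t
def f_exp(t, a, tau): return a * np.exp(-t / tau)
def f_rec(t, a, T_r): return a / (1.0 + t / T_r) ** 2  # recombination model, cf. [34]

def correction_factor(n_in, t_tail, rate_tail, model, p0, t_end=320.0, t_max=1e5):
    # Fit the chosen model to the tail of the IR photon time distribution and
    # scale the in-window count n_in by the extrapolated signal beyond the
    # acquisition window [0, t_end] ns (clipped at zero, relevant for f_lin).
    popt, _ = curve_fit(model, t_tail, rate_tail, p0=p0)
    n_out, _ = quad(lambda x: max(model(x, *popt), 0.0), t_end, t_max, limit=500)
    return (n_in + n_out) / n_in
```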
The solid angle acceptance is estimated via a simplified Monte Carlo simulation. In the simulation, photons are emitted isotropically from the entire alpha track and the intersections with the UV PMT photo-cathode or the lens of the IR PMT are recorded. The acceptance for \(1\,\mathrm{bar}\) xenon pressure is \(15.5\%\) for each UV PMT and \(0.25\%\) for the IR PMT. This approach neglects possible additional signals due to reflections on the surrounding surfaces, constituting only an approximate estimate of the respective solid angle acceptances. Note, however, that the inside of the chamber is not polished, likely making it a bad reflector. In addition, quartz (the material of the UV PMT windows) transmits about \(90\%\) of the infrared light and can thus also be considered a bad reflector. The length of the alpha track is calculated separately using the SRIM simulation toolkit [24]. The calculation of the deposited energy takes into account the small fraction of energy that is lost in the region between the source surface and the aperture in front of it. For \(1\,\mathrm{bar}\) pressure the visible energy of the alpha corresponds to \(5.1\,\mathrm{MeV}\) which is about \(92\%\) of the total alpha energy. Finally, the quantum efficiency of the sensors is considered: \((35\pm 3)\%\) for the UV PMTs and \((9.0\pm 0.5)\%\) for the IR PMT.
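The acceptance estimate can be sketched as follows; the geometry (a straight track on the z-axis, a circular aperture in a plane at distance `det_dist`) is a placeholder for the real chamber layout and neglects reflections, as in the text.

```python
import numpy as np

def acceptance(track_len, det_dist, det_radius, det_z, n=2_000_000, seed=1):
    # Photons start uniformly along a straight alpha track on the z-axis and
    # fly isotropically; count the fraction crossing a disc of radius
    # det_radius centred at (det_dist, 0, det_z), without reflections.
    rng = np.random.default_rng(seed)
    z0 = rng.uniform(0.0, track_len, n)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    cos_t = rng.uniform(-1.0, 1.0, n)
    sin_t = np.sqrt(1.0 - cos_t**2)
    dx, dy, dz = sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t
    fwd = dx > 0.0                      # only rays heading towards the disc
    s = det_dist / dx[fwd]              # path length to the plane x = det_dist
    y, z = s * dy[fwd], z0[fwd] + s * dz[fwd]
    return np.count_nonzero(y**2 + (z - det_z)**2 < det_radius**2) / n

# e.g. acceptance(2.3, 10.0, 0.9, 1.15) gives a fraction of order 0.2-0.3%
```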
The measurement of the IR light yields is subject to a range of systematic and statistical uncertainties. Systematic uncertainties include those related to the gain of the PMT, energy deposition before the aperture, solid angle acceptance, and PMT \(QE\). The relative PMT \(QE\) uncertainty, at \(5.6\%\), is the largest of these. However, the dominant systematic uncertainty is due to the waveform correction, resulting from the short acquisition window of the digitizer as described above. For this reason, we give the results for all three signal extrapolations. Statistical uncertainties arise from the estimation of the mean number of PE and the fit to the slow component.
In this study, our focus is on investigating the IR scintillation response, and we study the UV signal just to validate our procedure. Thus, we only give the signal size in the observed window as a lower limit.
To calculate the light yields, we use a dedicated measurement taken over \(30\,\mathrm{min}\) and with a constant high purity. The light yield results for the IR and UV signals of gaseous xenon at \(1\,\mathrm{bar}\) and room temperature are reported in table 1.
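As a rough cross-check of Eq. (2), the quoted numbers can be combined directly; the sketch below reproduces the order of magnitude of the tabulated yields (the inputs are rounded, and the IR value still lacks the window-extrapolation factors of section 5.2).

```python
E_ALPHA = 5.1                                   # MeV, visible alpha energy at 1 bar

mu_ir, eps_ir, qe_ir = 1.7456, 0.0025, 0.090
print(f"IR, window only: {mu_ir / (E_ALPHA * eps_ir * qe_ir):.0f} ph/MeV")
# ~1.5e3 ph/MeV; times the lin/exp/rec correction factors (~2, ~3.5, ~6.5)

mu_uv, eps_uv, qe_uv = 10155.0, 2 * 0.155, 0.35  # both UV PMTs summed
print(f"UV (lower value): {mu_uv / (E_ALPHA * eps_uv * qe_uv):.0f} ph/MeV")
# ~1.8e4 ph/MeV, consistent with the lower value quoted in table 1
```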
The yield of the IR scintillation light is of the same order of magnitude as that of the UV even though the number of IR photons detected per event is low compared to the UV. This is due to the very small solid angle for detection and the low \(QE\) of the IR-PMT (see section 4). In addition, a larger fraction of the IR signal is outside the acquisition window. Our reported IR light yield values may underestimate the actual yield as our PMT covers only part of the IR emission spectrum (see figure 4). Although the PMT's spectral sensitivity window covers the entire continuous excimer emission, there may be additional atomic transitions outside of this range that could contribute to the overall yield.
The infrared light yield values are all lower than the value of \(20\,000\,\mathrm{ph}\)/MeV measured in [17]. The reason for the disagreement is currently under investigation. One difference between the measurements is the xenon pressure, \(2\,\mathrm{bar}\) in [17] and \(1\,\mathrm{bar}\) in our case. We observe a small dependence of the signal with pressure (see section 5.3), but we have no measurements above \(1.05\,\mathrm{bar}\) and therefore cannot judge if this is the reason. Another difference is that our setup does not have, at the moment, a homogeneous electric field along the particle track. An upgrade of the system is planned to repeat the measurements at well-defined field values. Finally, we note that the spectral response of the infrared sensors is different in both setups. While ours is sensitive from \((950-1650)\,\mathrm{nm}\), the one used in [17] is sensitive
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Signal** & **Ph. det. \(\tilde{\mu}\) [PE]** & **Yield \(Y\) [ph/MeV]** \\ \hline IR (\(lin\)) & & \(3620\pm 50\,_{\mathrm{stat}}\pm 230\,_{\mathrm{syst}}\) \\ IR (\(exp\)) & \(1.7456\pm 0.0026\) & \(6120\pm 110\,_{\mathrm{stat}}\pm 400\,_{\mathrm{syst}}\) \\ IR (\(rec\)) & & \(11150\pm 220\,_{\mathrm{stat}}\pm 700\,_{\mathrm{syst}}\) \\ UV & \(10155.0\pm 0.4\) & \(>18700\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: IR and UV light yields for gaseous xenon at \(1\,\mathrm{bar}\) pressure and room temperature. The IR light yield is given for three different extrapolations of the signal beyond the acquisition window
from \((700-1600)\,\)nm. We expect a large fraction of the IR signal to come from the excimer continuum at \(\sim 1.3\,\mathrm{\SIUnitSymbolMicro m}\). Nevertheless, we cannot rule out the possibility that unknown strong lines with wavelengths below \(800\,\)nm could account for part of the observed difference.
For the UV component, the average energy required to create a UV photon in xenon for an alpha particle is reported to be \((35-50)\,\)eV [35; 36]. This would correspond to a light yield of \(\sim(20\,000-28\,000)\,\)ph/MeV, which is of the same order of magnitude as our measured lower limit of \(\sim 18\,700\,\)ph/MeV.
### Pressure dependence of the IR signal
We also study the dependence of the light yield on the xenon gas pressure. For this purpose, we vary the gas pressure in the range of 500 to 1050 mbar and determine the light yield with the same procedure as described above. Figure 9 shows the IR light yields as a function of pressure for the three signal extrapolation models described in section 5.2.
The data presented in the plot were acquired over about 5.5 hours, with a 30-minute measurement for each data point. We started the run at a pressure of about 750 mbar and gradually increased it up to about 1050 mbar (two consecutive 30 min acquisitions, combined in the plot), before decreasing the pressure again with the same steps. In this way, there are two measurements for each pressure, with a time difference in between. The first data point at 750 mbar is slightly lower than the second one because the purifier had only been running for 10 min and the signal was still slightly growing. At the end of the run, the pressure was lowered to 500 mbar. However, due to the low pressure, the UV PMTs experienced a high voltage breakdown after about five minutes. As a result, the uncertainties of the data points at 500 mbar are statistics-dominated.
All values have been corrected to account for the solid angle acceptance of the IR PMT due to the longer alpha track at low pressures. Indeed, the average length of the alpha track (calculated by SRIM) varies from 2.3 cm at 1050 mbar to 4.8 cm at 500 mbar. Thus, the collection efficiency increases for decreasing pressures from 0.22% at 1050 mbar to 0.30% at 500 mbar. We also correct the alpha energy deposition, since the fraction of energy deposited after the aperture is pressure dependent as well. The fraction of the alpha energy observed after the aperture changes from 91.7% at 1050 mbar to 96.9% at 500 mbar.
For the pressure range studied here, we observe an increase in the light yield with pressure, which is noticeable across all signal extrapolation models. At low pressures, the average distance between the freed electrons and the ions is larger, increasing the probability of the electrons escaping recombination [37]. Additionally, due to the presence of a non-negligible electric field in the gaseous volume, we anticipate that at lower pressures, an even greater number of electrons will be drifted away, leading to a slightly reduced signal from recombination (see similar effect in the UV [38]). The change in pressure also leads to a shift in the emission spectrum, as pointed out in [16]. The range tested here is, however, small enough that the complete excimer spectrum is contained in the sensitive region of the IR-PMT and a minor effect is expected.
Figure 10 (left) shows the arrival time of the IR photons for three of the pressure values studied. We observe that the slow component is slightly suppressed for lower pressure and that the time constant is longer. This is in agreement with our hypothesis that recombination probability decreases with decreasing pressures.
At low pressures, the UV pulses show a double-bump shape (see figure 10 right). This behavior is also observed in the data of [39; 40; 41]. As in these early publications, we would interpret the delayed bump ("xenon afterglow") in the time spectrum as originating from electron recombination processes. For decreasing pressure values, the recombination time increases and the second bump is further delayed. Other processes that could play a role in the UV time structure are the excimer formation, whose time should become longer for low pressures, or a contribution from the 3\({}^{\rm rd}\) continuum (decay of Xe\({}_{2}^{2+}\), for instance, at 270 nm) [35; 42]. Future measurements are planned to investigate this further. For the lower pressure time profiles in figure 10 (left)
Figure 9: IR light yield as a function of pressure for three different extrapolation methods. Uncertainties are shown as error bars (statistical uncertainty) and shaded bands (systematic uncertainty)
one can also see a slight indication of a dip for arrival times between \(\sim\)10 and 50 ns. This feature might be an effect similar to the double bump structure in the UV signal (see figure 10 right).
## 6 Discussion and outlook
Our measurements confirm the emission of infrared light by gaseous xenon as measured in earlier works [13; 17; 15; 16]. We find that the IR signal is strongly affected by electronegative impurities, in contrast to early measurements in liquid xenon [14]. Looking at the time structure of the signal, we observe two main components. Although we do not have a consistent model describing all our data, we think that those components likely originate from direct excitation (fast component) and from electron recombination (slow). We show that the time constant of the slow component decreases with an increasing impurity concentration in the xenon, while the fast component stays almost constant. Two distinctive ionization regions for energy depositions from alpha particles are reported in the literature: a dense core and a penumbra around it. Accordingly, recombination processes have two different components: a fast one for the core and a slow one for the penumbra [37]. While our data does not provide conclusive evidence, it is possible that the slow component originates from the core region. The electrons in the penumbra are likely drifted away by the electric field in the chamber. Independent of the extrapolation method used to calculate the IR light yield, our measurements indicate that the IR light yield is lower than that of the UV at a gas pressure of 1 bar. This might be related to the different population of states in direct excitation and recombination. As the energy deposition of charged particles populates all xenon excited states, some states might decay without the emission of IR photons. Since recombined electrons populate higher-lying excited states, we expect their decays to emit both IR and UV photons. Furthermore, some of the de-excitation paths through IR atomic levels are outside the sensitivity of our sensor. Our data additionally shows that the light yield increases with pressure between 500 mbar and 1050 mbar.
While the hypothesis that the majority of the infrared signal (slow component) comes from electron recombination is supported by all IR measurements, this interpretation does not align with the observed shape of the UV pulses, which lack a long-lived component. Additionally, we find that the fraction of the fast IR component is consistently below 1% in all our measurements, which differs from the exciton-to-ion ratio obtained via the UV component in the literature. The UV ratio approaches a value around \((60-65)\%\) for decreasing pressure values [36; 39]. The difference could be partly explained by strong atomic lines outside the sensitivity window of our PMT in the case of direct excitation.
Measurements of the infrared signal with a homogeneous electric field (zero-field and higher electric field)
Figure 10: Left: Arrival time of IR photons (like figure 8 left) for different xenon gas pressures. At the lowest pressure, we acquired data for only about five minutes, whereas for the other two pressures, one hour of data is used. Right: Average waveform of the UV pulses for gaseous xenon for different pressures
are planned for the upcoming runs with an upgraded setup. In the future, we plan to investigate the dependence of the infrared emission in gaseous xenon on the electric field strength and particle type, similar to our study of UV yields in liquid xenon [43]. We also aim to acquire data with a longer time window to alleviate the dominant systematic uncertainty and, thus, allow for a more precise quantification of the IR light yield. The goal of our measurements is to explore the potential of infrared radiation to advance current astroparticle physics detectors. Ultimately, we aim to investigate if this can improve energy resolution and/or particle separation in double-phase liquid xenon detectors [44; 45].
## 7 Acknowledgments
We thank Andreas Ulrich and Edgar Sanchez Garcia for the valuable discussions regarding the scintillation process in noble gases. We thank our technicians Steffen Form, Michael Reissfelder, and Hannes Bonet for their technical support during the construction of the system. We acknowledge the support of the Max Planck Society.
|
2307.14184 | Lagrangian statistics of dense emulsions | The dynamics of dense stabilized emulsions presents a rich phenomenology
including chaotic emulsification, non-Newtonian rheology and ageing dynamics at
rest. Macroscopic rheology results from the complex droplet microdynamics and,
in turn, droplet dynamics is influenced by macroscopic flows via the competing
action of hydrodynamic and interfacial stresses, giving rise to a complex
tangle of elastoplastic effects, diffusion, breakups and coalescence events.
This tight multiscale coupling, together with the daunting challenge of
experimentally investigating droplets under flow, hindered the understanding of
dense emulsions dynamics. We present results from 3D numerical simulations of
dense stabilised emulsions, resolving the shape and dynamics of individual
droplets, along with the macroscopic flows. We investigate droplet dispersion
statistics, measuring probability density functions (PDF) of droplet
displacements and velocities, changing the concentration, in the stirred and
ageing regimes. We provide the first measurements ever, in concentrated
emulsions, of the relative droplet-droplet separations PDF and of the droplet
acceleration PDF, which becomes strongly non-Gaussian as the volume fraction is
increased above the jamming point. Cooperative effects, arising when droplets
are in contact, are argued to be responsible of the anomalous superdiffusive
behaviour of the mean square displacement and of the pair separation at long
times, in both the stirred and in the ageing regimes. This superdiffusive
behaviour is reflected in a non-Gaussian pair separation PDF, whose analytical
form is investigated, in the ageing regime, by means of theoretical arguments.
This work paves the way to developing a connection between Lagrangian dynamics
and rheology in dense stabilised emulsions. | Ivan Girotto, Andrea Scagliarini, Roberto Benzi, Federico Toschi | 2023-07-26T13:22:25Z | http://arxiv.org/abs/2307.14184v1 | # Lagrangian statistics of dense emulsions
###### Abstract
The dynamics of dense stabilized emulsions presents a rich phenomenology including chaotic emulsification, non-Newtonian rheology and ageing dynamics at rest. Macroscopic rheology results from the complex droplet microdynamics and, in turn, droplet dynamics is influenced by macroscopic flows via the competing action of hydrodynamic and interfacial stresses, giving rise to a complex tangle of elastoplastic effects, diffusion, breakups and coalescence events. This tight multiscale coupling, together with the daunting challenge of experimentally investigating droplets under flow, hindered the understanding of dense emulsions dynamics. We present results from 3D numerical simulations of dense stabilised emulsions, resolving the shape and dynamics of individual droplets, along with the macroscopic flows. We investigate droplet dispersion statistics, measuring probability density functions (PDF) of droplet displacements and velocities, changing the concentration, in the stirred and ageing regimes. We provide the first measurements ever, in concentrated emulsions, of the relative droplet-droplet separations PDF and of the droplet acceleration PDF, which becomes strongly non-Gaussian as the volume fraction is increased above the jamming point. Cooperative effects, arising when droplets are in contact, are argued to be responsible for the anomalous superdiffusive behaviour of the mean square displacement and of the pair separation at long times, in both the stirred and in the ageing regimes. This superdiffusive behaviour is reflected in a non-Gaussian pair separation PDF, whose analytical form is investigated, in the ageing regime, by means of theoretical arguments. This work paves the way to developing a connection between Lagrangian dynamics and rheology in dense stabilized emulsions.
## I Introduction
Understanding the dynamics of dense suspensions of soft, athermal particles such as emulsions, foams or gels is crucial for many natural and industrial processes [1; 2; 3]. A key question concerns the connection between mechanisms occurring at the microstructure level with the macroscopic flow and rheological properties in these systems [4; 5; 6; 7]. For instance, irreversible topological rearrangements, corresponding to local yielding events, are known to be directly related to the inhomogeneous fluidisation of soft glassy materials [8; 9; 10; 11; 12]. A clear comprehension of the relevant processes and time-scales characterizing the microdynamics relies on tracking single material meso constituents (droplets, bubbles, etc) [13; 14; 15; 16; 17; 18; 19]. Highly packed emulsions/foams are typically characterized in simple flows (oscillatory Couette, Poiseuille, etc), or even at rest, in the ageing regime [20; 21; 22; 23; 24]. Lagrangian studies of dispersions in complex, high Reynolds number flows, on the other hand, are widely represented in the literature, but in extremely diluted conditions [25; 26]. The investigation of the microdynamics of concentrated systems in complex flows, of relevance, e.g., for emulsification processes [27; 28; 29], is, in fact, a formidable task due to the need to cope, at the same time, with the interface dynamics two-way-coupled to the hydrodynamics and with the droplet/bubble tracking. This is what we address here, namely the statistical Lagrangian dynamics of droplets in dense emulsions subjected to chaotic flows. We remark that this is the first investigation of this kind. We employ a mesoscopic numerical method recently developed to simulate the hydrodynamics of immiscible fluid mixtures, stabilized against full phase separation. In a previous contribution, we showed how, by means of a suitable combination of chaotic stirring and injection of the disperse phase, it is possible to prepare a three-dimensional dense emulsion, which was then rheologically characterized, evidencing its yield stress character [30]. In the present paper, we discuss and employ a tracking algorithm for the trajectories of individual droplets to investigate the droplet dynamics, in both semi-diluted and highly concentrated conditions, under stirring and during ageing. We study the statistics of droplet velocities and accelerations, focusing on the detection of non-Gaussian signatures and how they are related to the nature of droplet-droplet interactions. We discuss single and pair droplet dispersion, showing that at high volume fractions a superdiffusive behaviour is observed in both the stirred and ageing regimes. For pair dispersion in the ageing regime we propose theoretical models that show good agreement with the measurements. Let us underline that measurements of the droplet acceleration PDFs and of the droplet pair dispersion in densely packed emulsions have not been addressed before. Remarkably, our results suggest that both non-Gaussian statistics and superdiffusion emerge as soon as the volume fraction exceeds a value comparable with that of random close packing of spheres, to be considered a proxy of the jamming point. Therefore, this phenomenology is likely to be ascribable to cooperative effects resulting from the complex elastoplastic
dynamics of the emulsion. The paper is organized as follows. In section II we present the numerical method and we provide an extensive introduction to the tracking algorithm. The main results are reported in section III, organized in subsections relative to the stirred and ageing regimes. Conclusions and perspectives are drawn in section IV.
## II Methods
### Multicomponent emulsion modeling
Our numerical model is based on a three-dimensional (3D) two-component lattice Boltzmann method [31] in the Shan-Chen formulation [32; 33]. The lattice Boltzmann equation for the discrete probability distribution functions, \(f_{\sigma l}\), reads (the time step is set to unity, \(\Delta t=1\))
\[f_{\sigma l}(\mathbf{x}+\mathbf{e}_{l},t+1)-f_{\sigma l}(\mathbf{x},t)=-\frac {1}{\tau_{\sigma}}\left(f_{\sigma l}(\mathbf{x},t)-f_{\sigma l}^{\rm eq}( \mathbf{x},t)\right)+S_{\sigma l}^{\rm(tot)}(\mathbf{x},t) \tag{1}\]
where the index \(l\) runs over the discrete set of nineteen 3D lattice velocities (\(D3Q19\) model) \(\{\mathbf{e}_{l}\}\) (\(l=0,1,\ldots,18\)), and \(\sigma\) labels each of the two immiscible fluids, conventionally indicated as \(O\) and \(W\) (for, e.g., 'oil' and 'water'). The equilibrium distribution function is given by the usual polynomial expansion of the Maxwell-Boltzmann distribution, valid in the limit of small fluid velocity, namely:
\[f_{\sigma l}^{\rm eq}(\mathbf{x},t)=\rho_{\sigma}\omega_{l}\left(1+\frac{ \mathbf{e}_{l}\cdot\mathbf{u}}{c_{s}^{2}}+\frac{(\mathbf{e}_{l}\cdot\mathbf{u} )^{2}}{2c_{s}^{4}}-\frac{\mathbf{u}\cdot\mathbf{u}}{2c_{s}^{2}}\right) \tag{2}\]
with \(\omega_{l}\) being the usual set of suitably chosen weights, so as to maximise the algebraic degree of precision in the computation of the hydrodynamic fields, and \(c_{s}=1/\sqrt{3}\) a characteristic molecular velocity (a constant of the model). The hydrodynamical fields (densities and total momentum) can be computed out of the lattice probability density functions \(f_{\sigma l}\) as \(\rho_{\sigma}=\sum_{l}f_{\sigma l}\) and \(\rho\mathbf{u}=\sum_{\sigma l}f_{\sigma l}\mathbf{e}_{l}\) (where \(\rho=\sum_{\sigma}\rho_{\sigma}\) is the total fluid density). The source term \(S_{\sigma l}^{\rm(tot)}=\omega_{l}\mathbf{e}_{l}\cdot\mathbf{F}_{\sigma}^{\rm(tot)}/c_{s}^{2}\) stems from all the forces (internal and external) acting in the system, \(\mathbf{F}_{\sigma}^{\rm(tot)}=\mathbf{F}_{\sigma}+\mathbf{F}_{\sigma}^{\rm(ext)}\). In particular, \(\mathbf{F}_{\sigma}\) incorporates the two kinds of lattice interaction forces, \(\mathbf{F}_{\sigma}=\mathbf{F}_{\sigma}^{(r)}+\mathbf{F}_{\sigma}^{(f)}\), where \(\mathbf{F}_{\sigma}^{(r)}\) is the standard Shan-Chen inter-species repulsion of amplitude \(G_{\rm OW}>0\), which is responsible for phase separation, and reads:
\[\mathbf{F}_{\sigma}^{\rm(r)}(\mathbf{x},t)=-G_{\rm OW}\rho_{\sigma}(\mathbf{x},t)\sum_{l,\sigma\neq\bar{\sigma}}\omega_{l}\rho_{\bar{\sigma}}(\mathbf{x}+\mathbf{e}_{l},t)\mathbf{e}_{l}. \tag{3}\]
The second term, \(\mathbf{F}_{\sigma}^{\rm(f)}\), consists of a short range intra-species attraction, involving only nearest-neighbouring sites (\(\mathcal{I}_{1}\)), and a long range self repulsion, extended up to next-to-nearest-neighbours (\(\mathcal{I}_{2}\)) [34], namely:
\[\begin{split}\mathbf{F}_{\sigma}^{\rm(f)}(\mathbf{x},t)=&-G_{\sigma\sigma,1}\rho_{\sigma}(\mathbf{x},t)\sum_{l\in\mathcal{I}_{1}}\omega_{l}\rho_{\sigma}(\mathbf{x}+\mathbf{e}_{l},t)\mathbf{e}_{l}\\ &-G_{\sigma\sigma,2}\rho_{\sigma}(\mathbf{x},t)\sum_{l\in\mathcal{I}_{1}\bigcup\mathcal{I}_{2}}p_{l}\rho_{\sigma}(\mathbf{x}+\mathbf{e}_{l},t)\mathbf{e}_{l},\end{split} \tag{4}\]

where \(G_{\rm OO,1},G_{\rm WW,1}<0\) and \(G_{\rm OO,2},G_{\rm WW,2}>0\) and \(p_{l}\) are the weights of the \(D3Q125\) model. This type of repulsive interaction \(G_{\sigma\sigma,2}\) represents a mesoscopic phenomenological modelling of surfactants and provides a mechanism of stabilisation against coalescence of close-to-contact droplets (the superscript 'f' stands in fact for 'frustration'), promoting the emergence of a positive disjoining pressure, \(\Pi>0\), within the liquid film separating the approaching interfaces [34; 35; 36].
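To illustrate how these interaction terms translate into code, the sketch below evaluates the inter-species repulsion of Eq. (3) on a periodic grid with the D3Q19 stencil (an unoptimised NumPy transcription; the frustration term of Eq. (4) would follow the same pattern with its own couplings and the extended stencil).

```python
import numpy as np
from itertools import product

# D3Q19 neighbour stencil: 6 axis vectors (weight 1/18), 12 diagonals (1/36)
E = np.array([v for v in product((-1, 0, 1), repeat=3)
              if sum(map(abs, v)) in (1, 2)])
W = np.where(np.abs(E).sum(axis=1) == 1, 1.0 / 18.0, 1.0 / 36.0)

def repulsion_force(rho_o, rho_w, g_ow):
    # Eq. (3): F_sigma(x) = -G * rho_sigma(x) * sum_l w_l rho_other(x + e_l) e_l
    f_o = np.zeros(rho_o.shape + (3,))
    f_w = np.zeros_like(f_o)
    for e, w in zip(E, W):
        f_o += w * np.roll(rho_w, tuple(-e), axis=(0, 1, 2))[..., None] * e
        f_w += w * np.roll(rho_o, tuple(-e), axis=(0, 1, 2))[..., None] * e
    return -g_ow * rho_o[..., None] * f_o, -g_ow * rho_w[..., None] * f_w
```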
The large-scale forcing needed to generate the chaotic stirring that mixes the two fluids enters into the model through \(\mathbf{F}_{\sigma}^{\rm(ext)}\), which takes the following form:
\[F_{\sigma i}^{\rm(ext)}(\mathbf{x},t)=A\rho_{\sigma}\sum_{j\neq i}\left[\sin( k_{j}r_{j}+\Phi_{k}^{(j)}(t))\right], \tag{5}\]
where \(i,j=1,2,3\), \(A\) is the forcing amplitude, \(k_{j}\) are the wavevector components, and the sum is limited to \(k^{2}=k_{1}^{2}+k_{2}^{2}+k_{3}^{2}\leq 2\). The phases \(\Phi_{k}^{(j)}\) are evolved in time according to independent Ornstein-Uhlenbeck processes with the same relaxation times \(T=L/u_{\rm rms}\), where \(L\) is the cubic box edge and \(u_{\rm rms}\) is a typical large-scale velocity [37; 38].
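A minimal sketch of the phase dynamics (an Euler-Maruyama update; the unit stationary variance implied by the noise amplitude is our own normalisation choice):

```python
import numpy as np

def ou_step(phases, T, dt=1.0, rng=None):
    # Euler-Maruyama step of independent Ornstein-Uhlenbeck processes with
    # common relaxation time T = L/u_rms and unit stationary variance.
    rng = rng or np.random.default_rng()
    return phases * (1.0 - dt / T) + np.sqrt(2.0 * dt / T) * rng.normal(size=np.shape(phases))
```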
### Droplet tracking
We present now the tracking algorithm and discuss its implementation. The algorithm combines (i) the process of identification of all droplets constituting the emulsion at two different and consecutive time steps (hereafter called _labeling_), with (ii) a stage describing the kinematics of each droplet (the actual _tracking_).
In the labeling step, individual droplets, defined as connected clusters of lattice points such that the local density exceeds a prescribed threshold (equal to \(\rho_{O}^{(\max)}-\rho_{O}^{(\min)}\)), are identified by means of the Hoshen-Kopleman algorithm [39; 40]. This approach echoes what is known in the image processing jargon as Connected Component Labeling [41]; similar techniques have been recently applied to multiphase fluid dynamics in Volume of Fluid simulations [42; 43].
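A compact sketch of this labeling step, using scipy's connected-component labeling plus a union-find pass to merge clusters across the periodic faces (the production code uses a Hoshen-Kopelman implementation instead):

```python
import numpy as np
from scipy import ndimage

def label_droplets(rho_o, threshold):
    # Droplets = face-connected clusters of sites with rho_o above threshold.
    lab, n = ndimage.label(rho_o > threshold)
    parent = np.arange(n + 1)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    # merge labels that touch across each periodic face of the box
    for ax in range(3):
        lo, hi = np.take(lab, 0, axis=ax), np.take(lab, -1, axis=ax)
        both = (lo > 0) & (hi > 0)
        for a, b in zip(lo[both], hi[both]):
            ra, rb = find(a), find(b)
            parent[max(ra, rb)] = min(ra, rb)

    roots = np.array([find(i) for i in range(n + 1)])
    _, consecutive = np.unique(roots, return_inverse=True)
    return consecutive[lab]                  # 0 stays the background label
```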
The tracking is based on the computation of the probability to obtain volume transfer among droplets in space and time as described in [44; 45; 46]. This is performed as follows. Let us suppose that in the domain, at time \(t_{1}\), there are \(N_{1}\) droplets and at a later dump \(t_{2}=t_{1}+\Delta t\) there are \(N_{2}\) droplets. We define a set of droplet indicator fields \(\rho_{k}(\mathbf{x},t)\) (with \(k\) running over all droplets), which are equal to \(1\) if, at time \(t\), \(\mathbf{x}\) is inside the \(k\)-droplet, and are equal to \(0\) elsewhere. In the following, the "state" representing the droplet will be denoted in the ket notation (reminiscent of quantum mechanics states) as \(|k,t\rangle\) (the state is assumed to be normalized by the square root of the droplet volume, \(\sqrt{V_{k}}\)), such that the transition probability for a droplet \(k_{1}\) at a time \(t_{1}\) to end up in a droplet \(k_{2}\) at a time \(t_{2}\) is given by the bra-ket expression:
\[P_{k_{1}\to k_{2}}=\langle k_{2},t_{2}|k_{1},t_{1}\rangle=\frac{1}{V}\int \rho_{k_{2}}(\mathbf{x},t_{2})\rho_{k_{1}}(\mathbf{x},t_{1})d\mathbf{x},\quad V =\sqrt{V_{k_{1}}V_{k_{2}}} \tag{6}\]
This transition probability is equal to \(1\) in the case droplets \(k_{1}\) and \(k_{2}\) perfectly overlap and it is zero if they do not overlap at all. A high \(P\) value gives us, therefore, the confidence in having re-identified the same droplet at two different time steps. What happens if a droplet is not deforming and just translating with uniform velocity \(\mathbf{v}\)? We expect that the probability will decrease due to an imperfect overlap between the droplet \(k\) at time \(t\) and the same droplet \(k\), displaced by \(\mathbf{v}\Delta t\) at time \(t+\Delta t\), where \(\mathbf{v}\) is the average velocity of all grid points included into a droplet. Therefore, we expect that the maximal correlation will occur for:
\[\langle k,t+\Delta t|k,t\rangle=\frac{1}{V}\int\rho_{k}(\mathbf{x}+\mathbf{v} \Delta t,t)\rho_{k}(\mathbf{x},t+\Delta t)d\mathbf{x} \tag{7}\]
Of course the magnitude of this effect is proportional to \(\Delta t\). In order to reduce the effect of the translation of the droplet \(k\) at a given \(\Delta t\), we implement a Kalman filter, evaluating the overlap against the droplet configuration predicted at time \(t+\Delta t\), obtained by shifting the initial position at time \(t\) forwards by \(\mathbf{v}\Delta t\). For all the data shown, the tracking is implemented with \(\Delta t=100\) lattice Boltzmann time steps.
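The core of the tracking step then reduces to an overlap matrix between two labeled frames, with the predictor shift applied first; in the sketch below, `velocity[k]` stands for the mean lattice velocity over the sites of droplet `k`.

```python
import numpy as np

def overlap_matrix(lab1, lab2, velocity, dt):
    # P[k1, k2] = <k2,t2|k1,t1> of Eq. (6): shared volume of droplet k1
    # (advected by its mean velocity, the predictor step) and droplet k2,
    # normalised by sqrt(V_k1 * V_k2).
    n1, n2 = lab1.max(), lab2.max()
    vol1 = np.bincount(lab1.ravel(), minlength=n1 + 1)
    vol2 = np.bincount(lab2.ravel(), minlength=n2 + 1).clip(min=1)
    shape = np.array(lab1.shape)
    P = np.zeros((n1 + 1, n2 + 1))
    for k in range(1, n1 + 1):
        sites = np.argwhere(lab1 == k)
        shift = np.rint(velocity[k] * dt).astype(int)
        sites = (sites + shift) % shape                 # periodic advection
        counts = np.bincount(lab2[tuple(sites.T)], minlength=n2 + 1)
        P[k] = counts / np.sqrt(vol1[k] * vol2)
    return P
```

The row-wise argmax of `P` links droplets between frames, while rows or columns with several significant entries flag breakup and coalescence events.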
## III Results
In this section we provide results aimed at characterizing the dynamics of individual droplets (e.g., their velocities and accelerations, as well as absolute and relative dispersion), at changing the volume fraction of the dispersed phase from \(\phi=38\%\) to \(\phi=77\%\). All simulations were performed on a cubic grid of side \(L=512\) lattice points, the kinematic viscosity was \(\nu=1/6\) for both components (in lattice Boltzmann units; hereafter dimensional quantities will be all expressed in such units) and the total density \(\rho_{f}=1.36\) (giving a dynamic viscosity \(\eta=\rho_{f}\nu\approx 0.23\)). With reference to Fig. 1, where we plot the temporal variation of the number density of droplets \(N_{D}/L^{3}\), we give first a cursory description of the _emulsification_ process, indicated as phase (\(I\)) in the figure (further details can be found in Girotto _et al._ [30]). All simulations are run for a total of \(9\cdot 10^{6}\) time steps. Starting from an initial condition with \(\phi=30\%\), where the two components are fully separated by a flat interface, the emulsion is created applying the large-scale stirring force, Eq. (5), with magnitude \(A=4.85\cdot 10^{-7}\), while injecting the dispersed phase until the desired value is reached. The duration of the injection phase, \(t_{\rm inj}\), depends, then, on the target volume fraction (see Table 1). The forcing is applied up to \(t_{F}=3\cdot 10^{6}\). The evolution of the system is then monitored for further \(6\cdot 10^{6}\) time steps. The tracking algorithm is activated at \(t\geq t_{0}=1.25\cdot 10^{6}\); for what we call, hereafter, the _stirred regime_ (phase (II) in Fig. 1) we collect statistics over the interval \(t\in[t_{0},t_{F}]\) (which is statistically stationary, as can be appreciated from the figure of \(N_{D}(t)\), but also by looking at other observables, such as the mean square velocity), whereas, for the _ageing regime_ (phase (III) in Fig. 1), we consider data in the interval \(t\in[t_{A}^{(i)},t_{A}^{(f)}]\), with \(t_{A}^{(i)}=7\cdot 10^{6}\) and \(t_{A}^{(f)}=9\cdot 10^{6}\) (the intermediate relaxation phase, \(t\in[t_{F},t_{A}^{(i)}]\), is not shown in Fig. 1, for the sake of clarity of visualisation).

In Fig. 2 we show snapshots of the morphology of the emulsions at \(t=t_{F}\), for different volume fractions. Semi-diluted emulsions present a high number of small spherical droplets, whereas densely packed emulsions are constituted of a
Figure 1. Number density of droplets, \(N_{D}/L^{3}\), as a function of time for a set of four simulations, labelled with the corresponding target (steady state) volume fractions (\(\phi\)) of the dispersed phase (see Table 1 for further details). The solid lines (color coded as for the droplet number density) indicate the time evolution of the volume fraction during emulsification. The vertical dashed line highlights the starting time of tracking, \(t_{0}=1.25\cdot 10^{6}\). All simulations are stirred with the same forcing amplitude parameter (\(A=4.85\cdot 10^{-7}\), see Eq. (5)), except for the case indicated with hollow circles (\(A=4.05\cdot 10^{-7}\), run labelled as \(\phi_{6}\) in Table 1). The relaxation phase \(t\in[t_{F},t_{A}^{(i)}]\) is omitted for the sake of clarity of visualisation.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \(\phi\) & \(t_{\rm inj}\) & \(\overline{N_{D}}^{(s)}\) & \(U_{\rm rms}^{(s)}\) & \(a_{\rm rms}^{(s)}\) & \(T_{L}^{(s)}\) & \(\langle d\rangle\) & \(d_{\rm rms}\) & \(t_{G}\) & \(U_{\rm rms}^{(a)}\) & \(a_{\rm rms}^{(a)}\) & \(T_{L}^{(a)}\) \\ \hline \hline \(\phi_{1}=38\%\) & \(1.5\cdot 10^{5}\) & \(3300\) & \(2.8\cdot 10^{-2}\) & \(4.6\cdot 10^{-4}\) & \(1.8\cdot 10^{4}\) & \(30\) & \(4\) & \(180\) & \(2.0\cdot 10^{-5}\) & \(1.2\cdot 10^{-7}\) & \(2.6\cdot 10^{7}\) \\ \(\phi_{2}=49\%\) & \(3\cdot 10^{5}\) & \(3100\) & \(2.5\cdot 10^{-2}\) & \(3.6\cdot 10^{-4}\) & \(2.0\cdot 10^{4}\) & \(34\) & \(5\) & \(97\) & \(1.7\cdot 10^{-5}\) & \(7.7\cdot 10^{-8}\) & \(3.0\cdot 10^{7}\) \\ \(\phi_{3}=64\%\) & \(4.6\cdot 10^{5}\) & \(2400\) & \(2.0\cdot 10^{-2}\) & \(2.6\cdot 10^{-4}\) & \(2.6\cdot 10^{4}\) & \(40\) & \(6\) & \(80\) & \(5.1\cdot 10^{-5}\) & \(2.6\cdot 10^{-7}\) & \(10^{7}\) \\ \(\phi_{4}=70\%\) & \(5.2\cdot 10^{5}\) & \(1600\) & \(1.8\cdot 10^{-2}\) & \(2.2\cdot 10^{-4}\) & \(2.8\cdot 10^{4}\) & \(48\) & \(8\) & \(1.1\) & \(6.0\cdot 10^{-5}\) & \(4.2\cdot 10^{-7}\) & \(8.5\cdot 10^{6}\) \\ \(\phi_{5}=77\%\) & \(5.8\cdot 10^{5}\) & \(800\) & \(1.7\cdot 10^{-2}\) & \(2.1\cdot 10^{-4}\) & \(3.0\cdot 10^{4}\) & \(60\) & \(17\) & \(0.1\) & \(-\) & \(-\) & \(-\) \\ \(\phi_{6}=77\%\) & \(5.8\cdot 10^{5}\) & \(970\) & \(-\) & \(-\) & \(-\) & \(-\) & \(58\) & \(12\) & \(-\) & \(7.0\cdot 10^{-5}\) & \(6.7\cdot 10^{-7}\) & \(7.3\cdot 10^{6}\) \\ \end{tabular}
\end{table}
Table 1: Relevant averaged quantities from the simulations at the different volume fractions \(\phi\). The overline, \(\overline{()}\), indicates time average, the brackets, \(\langle()\rangle\), indicate an average over time and over the ensemble of droplets. The superscripts \((s,a)\) indicate that the averages have been taken over the statistically steady stirring regime (\(t\in[t_{0},t_{F}]\)) and over the ageing regime (\(t\in[7\cdot 10^{6},9\cdot 10^{6}]\)), respectively. The various columns contain: \(t_{\rm inj}\), injection endtime; \(\overline{N_{D}}^{(s)}\), mean droplet number (stirring); \(U_{\rm rms}^{(s)}\), root mean square droplet velocity (stirring); \(a_{\rm rms}^{(s)}\), root mean square droplet acceleration (stirring); \(T_{L}^{(s)}=L/U_{\rm rms}^{(s)}\), large eddy turnover time (stirring); \(\langle d\rangle\), mean droplet diameter; \(d_{\rm rms}\), standard deviation of the droplet diameter; \(t_{G}\), mean (dimensionless) droplet life time; \(U_{\rm rms}^{(a)}\), root mean square droplet velocity (ageing); \(a_{\rm rms}^{(a)}\), root mean square droplet acceleration (ageing); \(T_{L}^{(a)}=L/U_{\rm rms}^{(a)}\), large eddy turnover time (ageing). Numerical values of the simulations parameters are: kinematic viscosity, \(\nu=1/6\); total fluid density, \(\rho_{f}=1.36\); surface tension, \(\Gamma=0.0238\); forcing amplitude, \(A=4.85\cdot 10^{-7}\) (except for the run \(\phi_{6}\) for which \(A=4.05\cdot 10^{-7}\)); simulation box side, \(L=512\).
smaller number of larger, non-spherical droplets. We report in Table 1 the mean and root mean square (rms) values of some relevant observables (averaged in space and in time over the stirred phase, \(t\in[t_{0},t_{F}]\), and over the ageing phase, \(t\in[7\cdot 10^{6},9\cdot 10^{6}]\), respectively), for the various volume fractions considered. One can immediately notice that, as the volume fraction increases, the rms accelerations and velocities decrease, implying a higher effective viscosity, while, at the same time, the trend of the root mean square droplet diameter, \(d_{\rm rms}\), shows an increase in polydispersity.
Fig. 3 shows the rates of breakup (\(\overline{\beta}\)) and coalescence (\(\overline{\kappa}\)), i.e. the number of events per unit time, averaged over the stirred regime as a function of the volume fraction \(\phi\) (in the inset we report \(\beta(t)\) and \(\kappa(t)\) as a function of time for \(\phi=70\%,77\%\)). In the steady state the system is in a dynamical equilibrium with \(N_{D}(t)\) essentially constant (see Fig. 1), therefore breakup and coalescence rates approximately balance each other, \(\beta(t)\approx\kappa(t)\); moreover, both mean rates are extremely low (\(\sim 10^{-3}\)) for \(\phi<\phi_{c}=64\%\), i.e. below jamming, and increase steeply with \(\phi\) for \(\phi>\phi_{c}\). Interestingly \(\phi_{c}\) is in the expected range of volume fractions for random close packing of spheres in 3D [47]. The growth of \(\overline{\beta}\) and \(\overline{\kappa}\) above \(\phi_{c}\) is particularly steep, suggesting a divergent behaviour as the volume fraction approaches a "critical" value which can be arguably identified with the occurrence of the catastrophic phase inversion, \(\phi_{\rm cpi}\). In particular, the divergence turns out to have a power-law character (as highlighted in the inset where we plot the
logarithm of \(\overline{\beta}\) and \(\overline{\kappa}\) vs \(\log(\phi_{\rm cpi}-\phi)\)): the solid line depicts the fitting function \(g(\phi)=\frac{C}{(\phi_{\rm cpi}-\phi)^{q}}\), with fitted values \(\phi_{\rm cpi}=90.5\%\) and \(q=4.5\).
The number of breakup and coalescence events depends, of course, on the intensity of the hydrodynamic stresses involved. To see how these quantities can give a flavour of the stability of the emulsion to the applied forcing, let us introduce an average _droplet life time_, \(t_{G}=\frac{\overline{N}_{\rm D}U_{\rm rms}}{\beta L}\), nondimensionalized by the large eddy turnover time; from Table 1, we see that, below jamming (\(\phi\leq\phi_{c}\)), \(t_{G}\gg 1\), i.e. the droplets, on average, tend to preserve their integrity along the whole simulation, whereas in the densely packed systems (\(\phi>\phi_{c}\)) \(t_{G}\) abruptly drops (down to \(t_{G}\approx 0.1\) for the largest volume fraction, \(\phi=77\%\)).
### Velocity and acceleration statistics: stirred regime
In Fig. 4 we report the PDFs of the droplet velocities for volume fractions \(\phi=38\%\), \(\phi=64\%\) and \(\phi=77\%\). In all cases the PDFs show a bell shape, but in a range of intermediate values of velocities \(|v|\) they develop regions with non-monotonic curvature, which are more pronounced in the concentrated case. We ascribe this peculiar shape to the droplet-droplet interactions (collisions and/or elastoplastic deformations). The characteristic velocity \(v_{c}\) can then be estimated from the balance of the elastic force and the Stokesian drag for a spherical droplet of diameter \(d\), \(F_{S}=3\pi\eta dv_{c}\). The elastic force acting on droplets squeezed against each other is due to the disjoining pressure \(\Pi\) stabilising the inter-droplet thin film; at mechanical equilibrium the disjoining pressure equals the capillary pressure at the curved droplet interface [48], therefore the force can be estimated as the Laplace pressure times the cross sectional area, \(F_{\rm el}\sim\frac{2\Gamma}{d}\pi\left(\frac{d}{2}\right)^{2}\), where \(\Gamma\) is the surface tension. Letting \(F_{\rm el}\sim F_{S}\) gives \(v_{c}\sim\frac{\Gamma}{6\eta}=0.0175\); from Fig. 4 we see that, indeed, the inflections are located around \(v\sim\pm v_{c}\). To test our conjecture further, we have run a simulation setting the competing lattice interactions responsible for the emergence of the disjoining pressure to zero (\(G_{\rm OO,1}=G_{\rm WW,1}=G_{\rm OO,2}=G_{\rm WW,2}=0\) in Eq. 4), thus, effectively, we enforce \(\Pi=0\). The resulting velocity PDF is reported in Fig. 4 where we observe that, in fact, the inflectional regions disappear.
The PDF of droplet accelerations, reported in Fig. 5, is Gaussian in the (semi)diluted emulsion (\(\phi=38\%\)) but, as the volume fraction is increased above \(\phi_{c}=64\%\), the PDFs tend to develop _fat_ (non-Gaussian) tails. A working fitting function is a stretched exponential of the type:
\[P(\tilde{a})=C\exp\left(-\frac{\tilde{a}^{2}}{\left(1+\left(\frac{\beta|\tilde {a}|}{\sigma}\right)^{\gamma}\right)\sigma^{2}}\right), \tag{8}\]
where \(\tilde{a}=a/a_{\rm rms}\). The non-Gaussianity here, unlike in turbulence, cannot be grounded in the complexity of the velocity field and of the associated multifractal distribution of the turbulent energy dissipation [49]. We are not facing, in fact, a fully developed turbulent flow and, moreover, the non-Gaussian signatures become evident at increasing the volume fraction above the jamming point \(\phi_{c}\), where the effective viscosity is higher (and, hence, the effective Reynolds number is lower). The origin of the non-Gaussianity is to be sought in the complex elastoplastic dynamics of the system, which is driven by long-range correlated irreversible stress relaxation events. Remarkably, in concentrated emulsions it has been shown that the spatial distribution of stress drops in the system displays a multifractal character [50], which might be responsible for the acceleration statistical properties at high volume fraction, in a formal analogy with the phenomenology of turbulence. The curves corresponding to Eq. (8) are reported in Fig. 5, for various values of the parameter \(\gamma\), which gauges the deviations from the Gaussian form, and fixed \(\beta=0.35\) (obtained from best fitting) and \(\sigma=1\), the standard deviation of the Gaussian limit, such that for \(\gamma=0\) the PDF reduces to the normal distribution \(\propto e^{-x^{2}/2}\). This is, indeed, the case for the lowest volume fraction, \(\phi=38\%\); the non-Gaussianity parameter \(\gamma\) then increases monotonically with \(\phi\) up to \(\gamma=1.6\) for \(\phi=77\%\).
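A sketch of how \(\gamma\) can be extracted from the droplet acceleration sample (the production fit may differ, e.g. by floating \(\beta\) and weighting the tails):

```python
import numpy as np
from scipy.optimize import curve_fit

def eq8(a_tilde, gamma, amp, beta=0.35, sigma=1.0):
    # Eq. (8) with an overall amplitude; gamma = 0 recovers a Gaussian.
    return amp * np.exp(-a_tilde**2 /
                        ((1.0 + (beta * np.abs(a_tilde) / sigma)**gamma) * sigma**2))

def fit_gamma(accelerations, bins=101):
    # Histogram the normalised accelerations and fit the stretched exponential.
    a_t = accelerations / accelerations.std()
    pdf, edges = np.histogram(a_t, bins=bins, density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    (gamma, _), _ = curve_fit(eq8, centres, pdf, p0=(1.0, pdf.max()))
    return gamma
```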
### Dispersion: stirred regime
We focus now on the spatial dispersion of both single droplets as well as of droplet pairs. The single droplet (or absolute) dispersion is defined in terms of the statistics of displacements, \(\Delta{\bf X}={\bf X}(t_{0}+T)-{\bf X}(t_{0})\), where \({\bf X}(t)\) is the position of the droplet centre of mass at time \(t\) (\(t_{0}=1.25\cdot 10^{6}\), let us recall, is the starting time of tracking). In Fig. 6 we show the mean square displacement (MSD), \(\langle\Delta X^{2}\rangle\) for volume fractions \(\phi=38\%,64\%\) and \(77\%\). Values are reported in logarithmic scale, with the time normalised (hereafter) by a characteristic large scale time defined
Figure 5. PDFs of the droplet accelerations, for \(\phi=38\%,64\%,77\%\), computed over the statistically steady forced state. The PDFs are normalized to have unitary area. The values of \(a_{\rm rms}\) are given in Table 1. The solid lines are fits from Eq. (8) with \(\sigma=1\) and \(\beta=0.35\), whereby the non-Gaussianity parameter \(\gamma\) takes the values \(\gamma=0\) (corresponding to the Gaussian distribution) for \(\phi=38\%\), \(\gamma=1.32\) for \(\phi=64\%\) and \(\gamma=1.6\) for \(\phi=77\%\).
Figure 6. Mean square displacement (MSD), \(\langle\Delta X^{2}\rangle\) for \(\phi=38\%,64\%\) and \(77\%\), in the forced regime. The MSD goes initially as \(T^{2}\), indicating a ballistic dynamics, followed by a diffusive growth, \(T\), for \(\phi<\phi_{c}\), whereas the densely packed system (\(\phi=77\%\)) displays a superdiffusive behaviour \(T^{\alpha}\), with \(\alpha=1.15\).
as \(t_{L}=\sqrt{L/A}\approx 32500\) (which is independent of the volume fraction), where \(A\) is the amplitude of the applied forcing. In the diluted case, the MSD (Fig. 6) shows a crossover at around \(T\sim 5t_{d}\) between an initial ballistic motion, \(\langle\Delta X^{2}\rangle\sim T^{2}\), and a diffusive behaviour at later times, \(\langle\Delta X^{2}\rangle\sim T\). This is consistent with the typical Lagrangian dynamics of particles advected by chaotic and turbulent flows [51]; for intermediate times, though, we observe a transitional region, approximately in correspondence with the crossover, where the curve presents an inflection point with locally super-ballistic slope. This is an interesting, non-trivial behaviour that certainly deserves further investigation. At increasing volume fraction, the long-time growth becomes steeper, suggesting that a superdiffusive regime, \(T^{\alpha}\) with \(\alpha=1.15\), may occur.
A deeper insight on the small scale dynamics can be grasped by looking at the pair dispersion, namely the statistics of separations
\[\mathbf{R}_{ij}(t)=\mathbf{X}_{i}(t)-\mathbf{X}_{j}(t), \tag{9}\]
at time \(t\) for all pairs of droplets \(i,j\) that are nearest neighbours (i.e. such that their corresponding cells in a Voronoi tessellation of the centres of mass are in contact) at \(t=t_{0}\). The observable in Eq. (9) is, in fact, insensitive to contamination from mean homogeneous large scale flows, if present. In Fig. 7 we report the mean square value \(R^{2}(t)\equiv\langle|\mathbf{R}_{ij}|^{2}\rangle_{\{ij\}}\) (where the average is over the initially neighbouring pairs) as a function of time, for \(\phi=38\%,64\%\) and \(77\%\). Analogously to the MSD, \(R^{2}(t)\) grows in time and, after an initial ballistic transient, it follows a \(t^{\alpha}\) law, which is diffusive (\(\alpha=1\)) for concentrations below \(\phi_{c}\) and superdiffusive (\(\alpha=1.2\)) for \(\phi>\phi_{c}\).
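The measurement can be sketched as follows, assuming unwrapped centre-of-mass trajectories `X` of shape `(time, droplet, 3)` and neglecting periodic images in the neighbour search; the Delaunay triangulation is the dual of the Voronoi tessellation and directly supplies the neighbour pairs.

```python
import numpy as np
from scipy.spatial import Delaunay
from itertools import combinations

def voronoi_neighbour_pairs(X0):
    # Pairs of droplets whose Voronoi cells touch at t0: edges of the
    # Delaunay triangulation of the centres of mass.
    pairs = set()
    for simplex in Delaunay(X0).simplices:
        pairs.update(tuple(sorted(p)) for p in combinations(simplex, 2))
    return np.array(sorted(pairs))

def mean_square_separation(X, pairs):
    # Eq. (9) averaged over the initially neighbouring pairs.
    d = X[:, pairs[:, 0], :] - X[:, pairs[:, 1], :]
    return (d**2).sum(axis=-1).mean(axis=-1)
```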
### Velocity and acceleration statistics: ageing regime
When the large scale forcing is switched off, in diluted conditions (below the close packing volume fraction), the system relaxes via a long transient where the kinetic energy decays to zero. Instead, at high volume fraction (in the jammed phase), the emulsion is never completely at rest, due to diffusion and droplet elasticity favouring the occurrence of plastic events, i.e. local topological rearrangements of a few droplets, during the "ageing" of the material. Therefore, we consider here only the latter situation and focus on the case \(\phi=77\%\); hereafter, we present data obtained with forcing amplitude \(A=4.05\cdot 10^{-7}\) (see the \(\phi_{6}\) row in Table 1), which yielded a larger number of droplets (\(\sim 10^{3}\)) in the steady state, so as to improve the statistics. The PDF of the droplet velocities is reported in Fig. 8.
Since there is no mean flow, the PDF is an even function of its argument. We show, therefore, the distribution of the absolute values in logarithmic scale, in order to highlight the power-law behaviour \(v^{-3}\). Interestingly, the PDF of acceleration also develops a power law tail \(P(a)\sim a^{-3}\) (the PDFs for velocity and acceleration do, in fact, overlap, upon rescaling by the respective standard deviations, see Fig. 8), reflecting the fact that, when stirring is switched off, the high effective viscosity overdamps the dynamics, thus enslaving the acceleration to the velocity (by Stokesian drag), \(a\sim u/\tau_{s}\) (assuming Stokes time equal for all droplets, which is reasonable given the very low spread of size distribution in the ageing regime [30]).
### Dispersion: ageing regime
In the ageing regime at the largest volume fraction, \(\phi=77\%\), the MSD goes as \(\langle\Delta X^{2}\rangle\sim T^{2}\) for short times, signalling a ballistic regime, followed by a super-diffusive regime \(\langle\Delta X^{2}\rangle\sim T^{3/2}\) (see inset of Fig. 9). The short-time ballistic regime is consistent with a theoretical prediction based on the superposition of randomly distributed elastic dipoles (following structural micro-collapses) [52] and with results from experiments with colloidal gels [53] and foams [24]. The scaling \(\langle\Delta X^{2}\rangle\sim T^{3/2}\) is, instead, slightly steeper than the experimentally measured \(\sim T^{1.2}\) [24]. The ballistic regime \(\langle\Delta X^{2}\rangle\sim T^{2}\) entails a power-law tail of the PDF of separations for short times, \(P(\Delta X)\sim\Delta X^{-3}\), corresponding to the self part of the van Hove distribution [54], as reported in Fig. 9 [24; 17]. This observation finds correspondence, as one could expect, in the PDF of the droplet velocities, shown in Fig. 8.
The study of pair dispersion, reported in Fig. 10, evidences, for the mean square pair separation (in the inset), a ballistic regime, \(\langle R^{2}\rangle\sim t^{2}\), followed by a superdiffusive behaviour, \(\langle R^{2}\rangle\sim t^{4/3}\). The persistence of ballistic motion is expected to last until trajectories decorrelate following a plastic event; therefore, the crossover time, \(t_{c}\), can be approximately estimated as the time taken by a droplet to travel over the typical size of a rearrangement, \(\xi\approx 2d\) (which is an intrinsic scale for correlation lengths in soft glassy materials [8; 11]). Since the characteristic velocity is \(v_{c}\sim\frac{\Gamma}{6\eta}\) (see Fig. 4 and the discussion thereof), we get \(t_{c}\sim\frac{12d\eta}{\Gamma}\approx 7\cdot 10^{3}\), indicated in the inset of Fig. 10 with a dashed line.
By analogy with Richardson's description of turbulent diffusion [55; 56; 51], we propose a phenomenological approach to derive the full pair-separation PDF in the superdiffusive regime. We assume that this PDF evolves according to a generalized diffusion equation with a scale-dependent effective diffusivity \(D_{\rm eff}\), which, dimensionally, should be proportional to \(\frac{d\langle R^{2}\rangle}{dt}\). Since \(\langle R^{2}\rangle\sim t^{4/3}\) (and, consequently, \(t\sim R^{3/2}\)), we have
\[D_{\rm eff}\propto\frac{d\langle R^{2}\rangle}{dt}\sim t^{1/3}\quad\Rightarrow\quad D_{\rm eff}=cR^{1/2}, \tag{10}\]
with \(c\) a constant prefactor.
The diffusion equation, thus, reads
\[\partial_{t}P(R,t)=\partial_{R}\left(cR^{1/2}\partial_{R}P(R,t)\right), \tag{11}\]
that admits as solution (with the condition of unit area at all times) the following non-Gaussian distribution
\[P(R,t)=\mathcal{A}\frac{2R^{2}}{27c^{2}t^{2}}\exp\left(-\frac{4R^{3/2}}{9ct} \right). \tag{12}\]
The PDFs of pair separations measured at two instants of time in the superdiffusive regime, \(t_{1}\approx 150t_{c}\) and \(t_{2}\approx 300t_{c}\), are shown in Fig. 10 together with the prediction of Eq. (12), with fitting parameter \(c=0.25t_{c}^{-1}\), plotted as solid lines. The agreement obtained between theory and numerics is quite remarkable.
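As an elementary numerical cross-check of Eq. (12) (a simple verification, not part of the original analysis), one can confirm that the self-similar profile integrates to a \(t\)-independent constant, which fixes the normalisation \(\mathcal{A}\):

```python
import numpy as np
from scipy.integrate import quad

c, t = 0.25, 1.0e3        # illustrative values of the fit constant and time

def unnormalised(R):      # Eq. (12) with A = 1
    return (2 * R**2 / (27 * c**2 * t**2)) * np.exp(-4 * R**1.5 / (9 * c * t))

area, _ = quad(unnormalised, 0.0, np.inf)
print("area =", area, "=> A =", 1.0 / area)   # one finds 1/4, hence A = 4
```

The value of the integral is independent of \(c\) and \(t\), consistent with conservation of probability under Eq. (11).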
## Conclusions
We presented results on the statistics of droplet velocities and accelerations and of droplet absolute and relative dispersion in stabilized emulsions at various volume fractions, from semi-diluted to highly concentrated systems. We employed a recently developed method for _in silico_ emulsification of binary immiscible liquid mixtures with high volume fractions of the dispersed phase, equipped with a novel tracking algorithm which allowed us to study the emulsion physics at the droplet-resolved scale from a Lagrangian viewpoint, across various concentrations, from the semi-dilute to the jammed regime. Our results highlighted how the elastic properties and the plastic microdynamics of densely packed ensembles of droplets in close contact are responsible for the non-Gaussian character of the droplet acceleration and, more moderately, of the velocity statistics. We further investigated single-droplet diffusion in terms of both the mean square displacement and the self part of the van Hove distribution functions, finding that, while in the semi-dilute stirred case a ballistic-to-diffusive crossover is observed, in the highly concentrated case a super-diffusive behaviour seems to emerge. Super-diffusion also characterizes the ageing regime, where agreement is found with previous theoretical and experimental results. Further investigations will focus on the dispersion properties in larger systems and over longer observation times, as well as on the relation of the droplet Lagrangian properties to the stress distribution across the system. In perspective, we foresee extending the reach of the present work to extreme conditions of volume fraction and forcing amplitude, whereby the emulsion tends to lose stability and to undergo a catastrophic phase inversion. In this limit, too, the Lagrangian approach is of invaluable utility. Overall, our approach suggests a bridge between classical tools for Lagrangian high-Reynolds-number flows and complex fluid rheology, which paves the way to the inspection of unexplored aspects of the physics of soft materials.
## Acknowledgements
We are thankful to Chao Sun and Lei Yi for useful discussions and to Prasad Perlekar for a fruitful interaction at the initial stage of the work. Numerical simulations were performed thanks to granted PRACE projects (ID: 2018184340 & 2019204899) along with CINECA and BSC for access to their HPC systems. This work was partially sponsored by NWO domain Science for the use of supercomputer facilities.
|
2301.08623 | Ergodic properties of a parameterised family of symmetric golden maps:
the matching phenomenon revisited | We study a one-parameter family of interval maps
$\{T_\alpha\}_{\alpha\in[1,\beta]}$, with $\beta$ the golden mean, defined on
$[-1,1]$ by $T_\alpha(x)=\beta^{1+|t|}x-t\beta\alpha$ where $t\in\{-1,0,1\}$.
For each $T_\alpha,\ \alpha>1$, we construct its unique, absolutely continuous
invariant measure and show that on an open, dense subset of parameters
$\alpha$, the corresponding density is a step function with finitely many
jumps. We give an explicit description of the maximal intervals of parameters
on which the density has at most the same number of jumps. A main tool in our
analysis is the phenomenon of matching, where the orbits of the left and right
limits of discontinuity points meet after a finite number of steps. Each
$T_\alpha$ generates signed expansions of numbers in base $1/\beta$; via
Birkhoff's ergodic theorem, the invariant measures are used to determine the
asymptotic relative frequencies of digits in generic $T_\alpha$-expansions. In
particular, the frequency of $0$ is shown to vary continuously as a function of
$\alpha$ and to attain its maximum $3/4$ on the maximal interval
$[1/2+1/\beta,1+1/\beta^2]$. | Karma Dajani, Slade Sanderson | 2023-01-20T15:08:36Z | http://arxiv.org/abs/2301.08623v1 | Ergodic properties of a parameterised family of symmetric golden maps: the matching phenomenon revisited
###### Abstract.
We study a one-parameter family of interval maps \(\{T_{\alpha}\}_{\alpha\in[1,\beta]}\), with \(\beta\) the golden mean, defined on \([-1,1]\) by \(T_{\alpha}(x)=\beta^{1+|t|}x-t\beta\alpha\) where \(t\in\{-1,0,1\}\). For each \(T_{\alpha},\ \alpha>1\), we construct its unique, absolutely continuous invariant measure and show that on an open, dense subset of parameters \(\alpha\), the corresponding density is a step function with finitely many jumps. We give an explicit description of the maximal intervals of parameters on which the density has at most the same number of jumps. A main tool in our analysis is the phenomenon of matching, where the orbits of the left and right limits of discontinuity points meet after a finite number of steps. Each \(T_{\alpha}\) generates signed expansions of numbers in base \(1/\beta\); via Birkhoff's ergodic theorem, the invariant measures are used to determine the asymptotic relative frequencies of digits in generic \(T_{\alpha}\)-expansions. In particular, the frequency of \(0\) is shown to vary continuously as a function of \(\alpha\) and to attain its maximum \(3/4\) on the maximal interval \([1/2+1/\beta,1+1/\beta^{2}]\).
Key words and phrases: invariant measure, ergodic theory, matching, interval map, number expansions, digit frequency.
2020 Mathematics Subject Classification: 37E05 (Primary), 28D05, 37A05 (Secondary)
## 1. Introduction
Dynamical systems given by piecewise monotone maps \(T:I\to I\) of an interval have a rich history: besides having applications in various fields--including population ecology ([3]) and controlled switching circuits ([1])--these systems are often used to produce expansions of numbers from the underlying interval \(I\). Examples include decimal, \(n\)-ary, continued fraction, (generalised) Luroth and \(\beta\)-expansions, though this list is far from exhaustive. A common theme in the study of these expansions is the investigation of asymptotic relative frequencies of digits occurring in typical (i.e. Lebesgue-almost all) expansions. To this end, the standard procedure is the construction of an ergodic, \(T\)-invariant measure \(\mu\) equivalent to Lebesgue measure \(\lambda\) and a calculation of the \(\mu\)-measure of the subinterval of \(I\) corresponding to the digit(s) in question. Birkhoff's ergodic theorem asserts that the measure of this subinterval equals the desired asymptotic frequency.
In [13], invariant measures and frequencies of digits are studied for a family of _symmetric doubling maps_\(\{D_{\eta}\}_{\eta\in[1,2]}\) defined on \([-1,1]\) by \(D_{\eta}(x)=2x-d(x)\eta\) with \(d(x)\in\{-1,0,1\}\). These maps produce _signed binary expansions_ of numbers \(x\in[-1,1]\) of the form \(x=\eta\sum_{n\geq 1}d_{n}/2^{n}\) with each \(d_{n}\in\{-1,0,1\}\). It is shown that each \(D_{\eta}\), \(\eta>1\), admits an ergodic, invariant measure equivalent to Lebesgue measure. The authors use a curious property called _matching_--defined in the sequel--to prove that there is a countable collection of disjoint, open subintervals of \([1,2]\) whose union has full measure, and such that on each such subinterval, the densities of the corresponding invariant measures are step functions with at most the same, finite number of jumps. These explicitly constructed measures are then used to study the asymptotic frequency of the digit \(0\) in generic expansions. This frequency is shown to be continuous as a function of \(\eta\) and attains a maximal value of \(2/3\) on the maximal interval \([6/5,3/2]\). Moreover, the frequency function is either constant, strictly increasing or strictly decreasing on each of the aforementioned subintervals of \([1,2]\).
The present article continues these themes of inquiry with a parameterised family of _skewed symmetric golden maps_\(\{T_{\alpha}\}_{\alpha\in[1,\beta]}\), with \(\beta=(\sqrt{5}+1)/2\) the golden mean, i.e. the positive real solution to \(\beta^{2}=\beta+1\).
Each \(T_{\alpha}:[-1,1]\to[-1,1]\) is defined by
\[T_{\alpha}(x):=\begin{cases}\beta^{2}x+\beta\alpha,&x\in[-1,-1/\beta)\\ \beta x,&x\in[-1/\beta,1/\beta]\\ \beta^{2}x-\beta\alpha,&x\in(1/\beta,1]\end{cases};\]
see Figure 1. Setting \(J_{-1}:=[-1,-1/\beta),\ J_{0}:=[-1/\beta,1/\beta]\) and \(J_{1}:=(1/\beta,1]\), the map \(T_{\alpha}\) may be written more succinctly as
\[T_{\alpha}(x)=\beta^{1+|t(x)|}x-t(x)\beta\alpha, \tag{1}\]
where \(t(x)\in\{-1,0,1\}\) is the unique index for which \(x\in J_{t(x)}\). For \(j\geq 1\), set \(t_{\alpha,j}(x):=t(T_{\alpha}^{j-1}(x))\); the sequence of digits \((t_{\alpha,j}(x))_{j\geq 1}\in\{-1,0,1\}^{\mathbb{N}}\) records indices of the subsequent subintervals \(J_{-1},\ J_{0}\) or \(J_{1}\) entered by the forward orbit of \(x\). With this notation, equation (1) gives for each \(j\geq 1\)
\[T_{\alpha}^{j}(x)=\beta^{1+|t_{\alpha,j}(x)|}T_{\alpha}^{j-1}(x)-t_{\alpha,j} (x)\beta\alpha.\]
Solving this for \(T_{\alpha}^{j-1}(x)\), induction shows that for any \(n\geq 1\),
\[x=\alpha\sum_{j=1}^{n}\frac{t_{\alpha,j}(x)}{\beta^{j-1+\sum_{k=1}^{j}|t_{ \alpha,k}(x)|}}+\frac{T_{\alpha}^{n}(x)}{\beta^{n+\sum_{k=1}^{n}|t_{\alpha,k}(x )|}}.\]
Taking the limit \(n\to\infty\) and recalling that \(|T_{\alpha}^{n}(x)|\leq 1\) gives
\[x=\alpha\sum_{j\geq 1}\frac{t_{\alpha,j}(x)}{\beta^{j-1+\sum_{k=1}^{j}|t_{ \alpha,k}(x)|}}.\]
Note that for fixed \(\alpha\), this process determines a unique expansion for each \(x\in[-1,1]\). We refer to both this expansion and the corresponding sequence of digits \((t_{\alpha,j}(x))_{j\geq 1}\) as the \(T_{\alpha}\)_-expansion_ of \(x\).
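Readers wishing to experiment with these expansions can iterate Eq. (1) directly. The sketch below (a minimal Python illustration, not part of the paper; floating-point iteration is unreliable precisely at the branch points \(\pm 1/\beta\)) computes the first digits \(t_{\alpha,j}(x)\):

```python
BETA = (5 ** 0.5 + 1) / 2        # golden mean

def t_digit(x):
    # index of the branch interval J_{-1}, J_0 or J_1 containing x
    return -1 if x < -1 / BETA else (1 if x > 1 / BETA else 0)

def T(alpha, x):                 # Eq. (1)
    t = t_digit(x)
    return BETA ** (1 + abs(t)) * x - t * BETA * alpha

def T_expansion(alpha, x, n):
    digits = []
    for _ in range(n):
        digits.append(t_digit(x))
        x = T(alpha, x)
    return digits

print(T_expansion(1.2, 1.0, 10))
```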
Phenomena analogous to those observed in [13] are found to occur for the skewed symmetric golden maps \(T_{\alpha}\). In particular, we prove:
**Theorem 1.1**.: _For each \(\alpha\in(1,\beta]\), the map \(T_{\alpha}\) has a unique--hence ergodic--absolutely continuous invariant probability measure \(\mu_{\alpha}\). Moreover, \(\mu_{\alpha}\) is equivalent to Lebesgue measure \(\lambda\), and there is a countable collection \(\{I_{\mathbf{d}}\}_{\mathbf{d}\in\mathcal{M}}\) of disjoint open subintervals of \([1,\beta]\) of full Lebesgue measure, such that for fixed \(\mathbf{d}\in\mathcal{M}\) the density of each \(\mu_{\alpha}\) with \(\alpha\in I_{\mathbf{d}}\) is a step function with at most the same, finite number of jumps._
Via Birkhoff's ergodic theorem, these measures are employed to show the following:
**Theorem 1.2**.: _The asymptotic relative frequency of the digit \(0\) in Lebesgue-a.e. \(T_{\alpha}\)-expansion depends continuously on \(\alpha\in[1,\beta]\) and attains a maximum value of \(3/4\) on the (maximal) interval \([1/2+1/\beta,1+1/\beta^{2}]\). Furthermore, the frequency function is either constant, strictly increasing or strictly decreasing on each \(I_{\mathbf{d}}\)._
As in [13], the main tool used to construct the \(T_{\alpha}\)-invariant measures is a property called matching. An interval map \(T:I\to I\) is said to have _matching_ if for each critical point \(c\in I\), the orbits of the left and right limits \(y_{\pm}:=\lim_{x\to c^{\pm}}T(x)\) agree after some finite number of steps.1 That is, for each critical point \(c\in I\) there are integers \(M,N\geq 0\) for which \(T^{M}(y_{-})=T^{N}(y_{+})\).
Footnote 1: Some authors require that the one-sided derivatives also agree at these times, in which case the map may be said to have _strong matching_ ([15]). This extra condition is not needed for our purposes.
Matching has gained considerable attention in recent years. Intricacies of the metric entropy function of Nakada's \(\alpha\)-continued fraction maps have been studied using matching in [20], [7], [8], [18], [2] and [9]. In particular, matching is used in [18] to determine the natural extension for each \(\alpha\)-continued fraction transformation, and it is shown that the set of \(\alpha\in[0,1]\) for which matching does not occur has zero Lebesgue measure. The Lebesgue measure of this set of non-matching parameters--in addition to the fact that its Hausdorff dimension is 1--is also shown in [8]. Matching is used in [16] to determine invariant measures for the related family of \(\alpha\)-Rosen continued fraction transformations. A parameterised family of linear maps with one increasing and one decreasing branch are considered in [4], and matching is used to show that in some parameter regions, the Lyapunov exponent and topological entropy are constant. A geometric explanation of matching for a similar family of maps is given in [12], and further implications of matching for these maps--including smoothness of entropy on an open dense subset of parameters--is considered in [6].
The notion of matching is extended to random dynamical systems in [15] and is used to study the asymptotic frequency of the digit \(0\) in typical signed binary expansions arising from a family of random interval maps. Matching has also been investigated for generalised \(\beta\)-transformations, a certain class of continued fraction expansions with finite digit sets, and Lorenz maps (see [5], [10] and [11], respectively).
The present paper exploits the phenomenon of matching in a fashion similar to that of [13]. There the authors use results of [17], which gives formulas for densities of the absolutely continuous invariant measures of piecewise linear expanding interval maps. These densities are--in general--infinite sums of (finite) step functions which are determined by the orbits of the left and right limits at critical points of the underlying interval map. However, when matching occurs the infinite sum becomes finite, and the density itself is a finite step function depending only on these orbits before matching. In [13], it is shown that matching occurs for the symmetric doubling map \(D_{\eta}\) on a set of parameters \(\eta\) in \([1,2]\) of full Lebesgue measure. For these matching parameters, the orbits of the left and right limits at the critical points before matching are studied in detail, and this information is used to provide an explicit formula for the density of the (unique) absolutely continuous invariant probability measure for each \(D_{\eta}\) with matching. The parameter space \([1,2]\) is divided into a countable union of (maximal) open intervals--called _matching intervals_--where each \(D_{\eta}\) has matching, and a Lebesgue-null set of non-matching parameters with Hausdorff dimension \(1\). On each matching interval, matching occurs after the same number of steps, and for each left/right limit at a critical point, the digits of the corresponding signed binary expansions agree before matching.
While the results of the present paper imply that the same direct approach of understanding matching for the skewed symmetric golden maps \(T_{\alpha}\) can be applied to construct the invariant measures asserted in Theorem 1.1, we find that the unequal slopes of the different branches present difficulties. To circumvent these, we instead study matching for a family of _symmetric golden maps_\(\{S_{\alpha}\}_{\alpha\in[1,\beta]}\) of constant slope for which the skewed symmetric golden maps \(\{T_{\alpha}\}_{\alpha\in[1,\beta]}\) are jump transformations, and it is subsequently shown that the parameters \(\alpha\) for which the maps \(T_{\alpha}\) and \(S_{\alpha}\) have matching coincide (Proposition 2.9). Equipped with this result, one could then use the formulas from [17] to determine invariant densities for the \(T_{\alpha}\) with matching; however, we proceed in the simpler setting of the symmetric maps \(S_{\alpha}\)--determining invariant densities and the frequencies of digits for these--and finally use the fact that \(T_{\alpha}\) is the jump transformation of \(S_{\alpha}\) to determine invariant measures and frequencies of digits for the original skewed symmetric golden maps.
The paper is organised as follows. In §2 the symmetric golden maps \(\{S_{\alpha}\}_{\alpha\in[1,\beta]}\) are introduced. These are shown in §2.1 to have matching for Lebesgue-a.e. \(\alpha\in[1,\beta]\), and we also prove here that the matching parameters of both families \(\{S_{\alpha}\}_{\alpha\in[1,\beta]}\) and \(\{T_{\alpha}\}_{\alpha\in[1,\beta]}\) coincide. Subsections 2.2 and 2.3 are devoted to understanding the finer structure of the set of matching parameters. The former provides a classification of all matching intervals and of the orbits of all left and right limits at critical points before matching occurs. In the latter, it is shown that all (but two) of the matching intervals generate in a natural fashion a whole 'cascade' of countably many matching intervals with adjacent endpoints. In §3 we use the results of the preceding section to prove Theorems 1.1 and 1.2. In particular, explicit formulas for densities of the unique, absolutely continuous invariant measures of the symmetric golden maps \(S_{\alpha}\) are provided in §3.1, and the invariant measures of the skewed maps \(T_{\alpha}\) are expressed in terms of these. These measures are used in §3.2 to determine expressions for the asymptotic frequencies of the digit \(0\) in typical \(S_{\alpha}\)- and \(T_{\alpha}\)-expansions. The maximal frequencies of the digit \(0\) as functions of \(\alpha\) are considered in §3.3. Proofs of some technical results are provided in an appendix (§4).
### Acknowledgments
This work is part of project number 613.009.135 of the research programme Mathematics Clusters which is financed by the Dutch Research Council (NWO).
## 2. Symmetric golden maps \(S_{\alpha}\)
As mentioned in §1, we determine invariant measures and the frequencies of digits for a family of _symmetric golden maps_\(\{S_{\alpha}\}_{\alpha\in[1,\beta]}\) for which the \(\{T_{\alpha}\}_{\alpha\in[1,\beta]}\) are jump transformations. These invariant measures and frequencies are then used to determine the invariant measures and frequencies of digits for the original \(T_{\alpha}\). The maps \(S_{\alpha}\) are defined as follows: for \(\alpha\in[1,\beta]\), let \(S_{\alpha}:[-1,1]\to[-1,1]\) be given by
\[S_{\alpha}(x):=\beta x-t(x)\alpha,\]
with \(t(x)\in\{-1,0,1\}\) as in §1; see Figure 1. Note that \(S_{\alpha}(x)\in J_{0}\) for each \(x\in J_{-1}\cup J_{1}\). Using this, one readily verifies that
\[T_{\alpha}(x)=\begin{cases}S_{\alpha}(x),&x\in J_{0}\\ S_{\alpha}^{2}(x),&x\in J_{-1}\cup J_{1}\end{cases}, \tag{2}\]
i.e. \(T_{\alpha}\) is the jump transformation for \(S_{\alpha}\) with respect to the sweep-out set \(J_{0}=[-1/\beta,1/\beta]\) (see, e.g. §11.4 of [14]). For each \(j\geq 1\), let \(s_{\alpha,j}(x):=t(S_{\alpha}^{j-1}(x)).\) With induction one finds that for each \(k\geq 0\),
\[S_{\alpha}^{k}(x)=\beta^{k}\left(x-\alpha\sum_{j=1}^{k}s_{\alpha,j}(x)/\beta^ {j}\right) \tag{3}\]
(with the summation for \(k=0\) understood to be \(0\)). Since \(|S_{\alpha}^{k}|\leq 1\), dividing both sides by \(\beta^{k}\) and taking the limit as \(k\) approaches infinity gives
\[x=\alpha\sum_{j\geq 1}s_{\alpha,j}(x)/\beta^{j}. \tag{4}\]
Following our convention from §1, we refer to both the right-hand side of Equation (4) and the corresponding sequence \((s_{\alpha,j}(x))_{j\geq 1}\) of digits in \(\{0,\pm 1\}^{\mathbb{N}}\) as the \(S_{\alpha}\)_-expansion_ of \(x\). Again this process determines--for fixed \(\alpha\)--a unique expansion for each \(x\in[-1,1]\); moreover, if \(x,y\in[-1,1]\) have the same \(S_{\alpha}\)-expansion, then Equation (3) can be used to show that \(x=y\). Also note that not every sequence in \(\{0,\pm 1\}^{\mathbb{N}}\) is an \(S_{\alpha}\)-expansion; in particular, a \(1\) or \(-1\) is necessarily followed by a \(0\).
As the orbits of \(1\) and \(1-\alpha\) will be studied in detail below, we fix special notation for their \(S_{\alpha}\)-expansions: let \(d_{\alpha,j}:=s_{\alpha,j}(1)\) and \(e_{\alpha,j}:=s_{\alpha,j}(1-\alpha)\) for each \(\alpha\in[1,\beta]\) and \(j\geq 1\). When \(\alpha\) is understood, it is suppressed from the notation, and we simply write \(d_{j}:=d_{\alpha,j}\) and \(e_{j}:=e_{\alpha,j}\).
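The identity (2) can be spot-checked numerically; a minimal sketch (same floating-point caveats as above, with all names illustrative):

```python
BETA = (5 ** 0.5 + 1) / 2

def t_digit(x):
    return -1 if x < -1 / BETA else (1 if x > 1 / BETA else 0)

def S(alpha, x):
    return BETA * x - t_digit(x) * alpha

def T(alpha, x):
    t = t_digit(x)
    return BETA ** (1 + abs(t)) * x - t * BETA * alpha

alpha = 1.3
for x in (-0.9, -0.3, 0.2, 0.7, 0.95):
    jump = S(alpha, x) if t_digit(x) == 0 else S(alpha, S(alpha, x))
    assert abs(T(alpha, x) - jump) < 1e-12
print("identity (2) holds on the sample points")
```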
### Matching almost everywhere
In this section, we show that the maps \(S_{\alpha}\) (and \(T_{\alpha}\)) have matching on a set of full Lebesgue measure.2 The map \(S_{\alpha}\) has two critical points \(\pm 1/\beta\). Due to symmetry, it suffices to consider the matching criteria only for the positive critical point \(1/\beta\). Note that \(\lim_{x\to 1/\beta^{-}}S_{\alpha}(x)=1\) and \(\lim_{x\to 1/\beta^{+}}S_{\alpha}(x)=1-\alpha\). Hence \(S_{\alpha}\) has matching if and only if there are integers \(M,N\geq 1\) for which \(S_{\alpha}^{M}(1)=S_{\alpha}^{N}(1-\alpha)\).
Footnote 2: The general approach to proving this result largely follows that of §2.2 of [13]; however, we shall see that the dynamics of the symmetric golden maps \(S_{\alpha}\) are—in a sense—more delicate than those of the previously studied symmetric binary maps (compare, e.g. Proposition 2.1 below with Proposition 2.1 of [13]).
We begin by investigating matching in a number of specific cases. First, note that \(1\in J_{1}\) and \(1-\alpha\in J_{0}\) for all \(\alpha\in[1,\beta]\).
1. If \(\alpha\in(1+1/\beta^{2},\beta]\), then \[S_{\alpha}(1)=\beta-\alpha\in[0,1/\beta^{3})\subset J_{0}, S_{\alpha}(1-\alpha)=\beta-\beta\alpha\in[-1,-1/\beta)\subset J_{-1},\] \[S_{\alpha}^{2}(1)=\beta^{2}-\beta\alpha\in J_{0} \qquad\text{ and } S_{\alpha}^{2}(1-\alpha)=\beta^{2}-\beta^{2}\alpha+\alpha=\beta^{2}- \beta\alpha\in J_{0}\] shows that \(S_{\alpha}\) has matching with \(M=N=2\).
2. If \(\alpha=1+1/\beta^{2}\), then \[S_{\alpha}(1)=\beta-\alpha=1/\beta^{3}\in J_{0}, S_{\alpha}(1-\alpha)=\beta-\beta\alpha=-1/\beta\in J_{0},\] \[S_{\alpha}^{2}(1)=1/\beta^{2}\in J_{0}, S_{\alpha}^{2}(1-\alpha)=-1\in J_{-1},\] \[S_{\alpha}^{3}(1)=1/\beta\in J_{0}, S_{\alpha}^{3}(1-\alpha)=-1/\beta^{3}\in J_{0},\] \[S_{\alpha}^{4}(1)=1\in J_{1}\qquad\quad\text{ and } S_{\alpha}^{4}(1-\alpha)=-1/\beta^{2}=1-\alpha\in J_{0},\] so \(S_{\alpha}\) has a Markov partition, namely \[\left\{[-1/\beta^{3},1/\beta^{3}],\;\pm(1/\beta^{3},1/\beta^{2}],\;\pm(1/\beta^{2},1/\beta],\;\pm(1/\beta,1]\right\},\] and no matching.
3. If \(\alpha\in(1+1/\beta^{3},1+1/\beta^{2})\), \[S_{\alpha}(1)=\beta-\alpha\in(1/\beta^{3},1/\beta^{2})\subset J_{0}, S_{\alpha}(1-\alpha)=\beta-\beta\alpha\in(-1/\beta,-1/\beta^{2}) \subset J_{0},\] \[S_{\alpha}^{2}(1)=\beta^{2}-\beta\alpha\in(1/\beta^{2},1/\beta) \subset J_{0}, S_{\alpha}^{2}(1-\alpha)=\beta^{2}-\beta^{2}\alpha\in(-1,-1/ \beta)\subset J_{-1},\] \[S_{\alpha}^{3}(1)=\beta^{3}-\beta^{2}\alpha\in(1/\beta,1) \subset J_{1}, S_{\alpha}^{3}(1-\alpha)=\beta^{3}-(\beta^{3}-1)\alpha\in(-1/ \beta^{3},1/\beta^{3})\subset J_{0},\] \[S_{\alpha}^{4}(1)=\beta^{4}-(\beta^{3}+1)\alpha\in J_{0}\qquad \text{ and } S_{\alpha}^{4}(1-\alpha)=\beta^{4}-(\beta^{4}-\beta)\alpha\in J_{0}.\] Since \(\beta^{4}-\beta^{3}=\beta^{2}=\beta+1\), we find that \(S^{4}(1)=S^{4}(1-\alpha)\), so \(S_{\alpha}\) has matching with \(M=N=4\).
4. If \(\alpha=1+1/\beta^{3}\), \[S_{\alpha}(1)=\beta-\alpha=1/\beta^{2}\in J_{0}, S_{\alpha}(1-\alpha)=\beta-\beta\alpha=-1/\beta^{2}\in J_{0},\] \[S_{\alpha}^{2}(1)=1/\beta\in J_{0}, S_{\alpha}^{2}(1-\alpha)=-1/\beta\in J_{0},\] \[S_{\alpha}^{3}(1)=1\in J_{1}, S_{\alpha}^{3}(1-\alpha)=-1\in J_{-1}\quad\text{and}\] \[S_{\alpha}^{4}(1-\alpha)=-1/\beta^{2}\in J_{0},\] so \(S_{\alpha}\) has a Markov partition and no matching.
5. If \(\alpha\in(1,1+1/\beta^{3})\), then \[S_{\alpha}(1)=\beta-\alpha\in(1/\beta^{2},1/\beta)\subset J_{0}, S_{\alpha}(1-\alpha)=\beta-\beta\alpha\in(-1/\beta^{2},0)\subset J_{0},\] \[S_{\alpha}^{2}(1)=\beta^{2}-\beta\alpha\in(1/\beta,1)\subset J_{1}, S_{\alpha}^{2}(1-\alpha)=\beta^{2}-\beta^{2}\alpha\in(-1/ \beta,0)\subset J_{0},\] \[S_{\alpha}^{3}(1)=\beta^{3}-(\beta^{2}+1)\alpha\in(-1/\beta^{3},1/ \beta)\subset J_{0}\quad\text{ and } S_{\alpha}^{3}(1-\alpha)=\beta^{3}-\beta^{3}\alpha\in(-1,0)\subset J_{-1} \cup J_{0}.\] This case will be considered more closely in what follows.
6. If \(\alpha=1\), then \(S_{\alpha}(1)=1/\beta\in J_{0},\;S_{\alpha}^{2}(1)=1\in J_{1}\) and \(S_{\alpha}(1-\alpha)=0=1-\alpha\in J_{0}\). Thus there is a Markov partition and no matching.
Note that in the cases above in which there is matching--namely (i) and (iii)--we have \(M=N\) (a property called _neutral matching_ in [6]). We shall see below that this is always the case, i.e. \(S_{\alpha}\) has matching if and only if there is some \(m\geq 1\) for which \(S_{\alpha}^{m}(1)=S_{\alpha}^{m}(1-\alpha)\). For this we need the following proposition--key to a number of arguments throughout--which states that the difference between subsequent points in the orbits of \(1\) and \(1-\alpha\) can take on at most four values. Recall that \((d_{j})_{j\geq 1}\) and \((e_{j})_{j\geq 1}\) denote the \(S_{\alpha}\)-expansions of \(1\) and \(1-\alpha\), respectively.
**Proposition 2.1**.: _For every \(\alpha\in[1,\beta]\) and \(j\geq 0\),_
\[S_{\alpha}^{j}(1)-S_{\alpha}^{j}(1-\alpha)\in\{0,\alpha/\beta,\alpha,\beta \alpha\}.\]
Proof.: For \(\alpha\notin(1,1+1/\beta^{3})\), the statement is verified with the cases above, so assume \(\alpha\in(1,1+1/\beta^{3})\). We use induction on \(j\). The result clearly holds for \(j=0\); assume for some \(j=k-1\geq 0\) that
\[S_{\alpha}^{k-1}(1)-S_{\alpha}^{k-1}(1-\alpha)=y\]
for some \(y\in\{0,\alpha/\beta,\alpha,\beta\alpha\}\). If \(y=0\), then also \(S^{j}_{\alpha}(1)-S^{j}_{\alpha}(1-\alpha)=0\) for all \(j\geq k-1\). Suppose \(y\neq 0\), and note that
\[S^{k}_{\alpha}(1)-S^{k}_{\alpha}(1-\alpha)=(\beta S^{k-1}_{\alpha}(1)-d_{k} \alpha)-(\beta S^{k-1}_{\alpha}(1-\alpha)-e_{k}\alpha)=\beta y-(d_{k}-e_{k})\alpha.\]
We determine the difference above for each \(y\in\{\alpha/\beta,\alpha,\beta\alpha\}\):
1. \(y=\alpha/\beta\): Since \(1/\beta<y<2/\beta\), we have \((d_{k},e_{k})=(1,0),(0,-1)\) or \((0,0)\). In the first two cases \[S^{k}_{\alpha}(1)-S^{k}_{\alpha}(1-\alpha)=0,\] and in the third \[S^{k}_{\alpha}(1)-S^{k}_{\alpha}(1-\alpha)=\alpha.\]
2. \(y=\alpha\): Since \(1/\beta<y<1+1/\beta^{3}=2/\beta\), we again have \((d_{k},e_{k})=(1,0),(0,-1)\) or \((0,0)\). In the first two cases \[S^{k}_{\alpha}(1)-S^{k}_{\alpha}(1-\alpha)=\beta\alpha-\alpha=\alpha/\beta,\] and in the third \[S^{k}_{\alpha}(1)-S^{k}_{\alpha}(1-\alpha)=\beta\alpha.\]
3. \(y=\beta\alpha\): Since \(y>2/\beta\), we must have \((d_{k},e_{k})=(1,-1)\), and hence \[S^{k}_{\alpha}(1)-S^{k}_{\alpha}(1-\alpha)=\beta^{2}\alpha-2\alpha=\alpha/\beta.\]
The previous proposition can be used to give an equivalent definition of matching:
**Proposition 2.2**.: _The map \(S_{\alpha}\) has matching if and only if there is some \(m\geq 1\) for which \(S^{m}_{\alpha}(1)=S^{m}_{\alpha}(1-\alpha)\)._
Proof.: One direction is immediate; for the other, suppose there are distinct \(M,N\geq 1\) for which \(S^{M}_{\alpha}(1)=S^{N}_{\alpha}(1-\alpha)\). Assume for the sake of contradiction that \(S^{j}_{\alpha}(1)\neq S^{j}_{\alpha}(1-\alpha)\) for all \(j\geq 1\). By Proposition 2.1,
\[S^{j}_{\alpha}(1)-S^{j}_{\alpha}(1-\alpha)\geq\alpha/\beta\geq 1/\beta,\]
and hence
\[S^{j}_{\alpha}(1-\alpha)\leq S^{j}_{\alpha}(1)-1/\beta\leq 1-1/\beta=1/\beta^{2}\]
for each \(j\). If \(S^{j}_{\alpha}(1-\alpha)\in(0,1/\beta^{2}]\), then there is some \(k\geq 0\) for which \(S^{j+k}_{\alpha}(1-\alpha)=\beta^{k}S^{j}_{\alpha}(1-\alpha)>1/\beta^{2}\), contradicting the above, and thus \(S^{j}_{\alpha}(1-\alpha)\leq 0\) for each \(j\). A similar argument implies \(S^{j}_{\alpha}(1)\geq 0\) for each \(j\). But \(S^{M}_{\alpha}(1)=S^{N}_{\alpha}(1-\alpha)\), so this common value must be \(0\). Since \(0\) is fixed by \(S_{\alpha}\), we have the contradiction that \(S^{m}_{\alpha}(1)=0=S^{m}_{\alpha}(1-\alpha)\) with \(m=\max\{M,N\}\).
We can now define a canonical index to describe when matching occurs:
**Definition 2.1**.: The _matching index_ of \(S_{\alpha}\) is
\[m(\alpha):=\inf\{m\geq 1\ |\ S^{m}_{\alpha}(1)=S^{m}_{\alpha}(1-\alpha)\}\in \mathbb{N}\cup\{\infty\}.\]
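Numerically, \(m(\alpha)\) can be estimated by iterating the orbits of \(1\) and \(1-\alpha\) in parallel until they approximately meet. The sketch below uses a tolerance, so it is only indicative near the endpoints of matching intervals and at Markov parameters such as \(\alpha=1+1/\beta^{2}\), where no matching occurs:

```python
BETA = (5 ** 0.5 + 1) / 2

def S(alpha, x):
    t = -1 if x < -1 / BETA else (1 if x > 1 / BETA else 0)
    return BETA * x - t * alpha

def matching_index(alpha, max_iter=200, tol=1e-9):
    u, v = 1.0, 1.0 - alpha
    for m in range(1, max_iter + 1):
        u, v = S(alpha, u), S(alpha, v)
        if abs(u - v) < tol:
            return m
    return None                      # no matching detected

# e.g. cases (i) and (iii) of the list above give m = 2 and m = 4
for alpha in (1.5, 1.3, 1.1):
    print(alpha, matching_index(alpha))
```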
The cases above together with the proof of Proposition 2.1 reveal a strong interdependence between the orbits of \(1\) and \(1-\alpha\), which is summarised in the graph of Figure 2. In particular, note that if matching occurs with matching index \(m:=m(\alpha)\), then \(S^{m-1}_{\alpha}(1)-S^{m-1}_{\alpha}(1-\alpha)=\alpha/\beta\) and \((d_{m},e_{m})\in\{(1,0),(0,-1)\}\). Since \(S_{\alpha}\)-expansions cannot contain consecutive non-zero digits, this implies \(S^{m-2}_{\alpha}(1)-S^{m-2}_{\alpha}(1-\alpha)=\alpha\) and \((d_{m-1},e_{m-1})\in\{(1,0),(0,-1)\}\). For \(m>2\), this further implies \(S^{m-3}_{\alpha}(1)-S^{m-3}_{\alpha}(1-\alpha)=\alpha/\beta\) and \((d_{m-2},e_{m-2})=(0,0)\). Thus if \(S_{\alpha}\) has matching with index \(m>2\), then the final three digits of the \(S_{\alpha}\)-expansions of \(1\) and \(1-\alpha\) before matching are given by
\[\begin{pmatrix}d_{m-2}d_{m-1}d_{m}\\ e_{m-2}e_{m-1}e_{m}\end{pmatrix}\in\left\{\begin{pmatrix}010\\ \overline{001}\end{pmatrix},\begin{pmatrix}001\\ \overline{010}\end{pmatrix}\right\}, \tag{5}\]
where \(\overline{w}:=-w\) for \(w\in\{0,\pm 1\}\). Conversely, if for some \(m>2\), three consecutive digits of the \(S_{\alpha}\)-expansions of \(1\) and \(1-\alpha\) are given by (5), then the proof implies that \(S_{\alpha}\) has matching with index \(m\).
A number of characterisations of matching for \(S_{\alpha}\) can be derived from Proposition 2.1 and Figure 2. For these we fix some notation: for each \(x\in[-1,1]\) and \(\alpha\neq 1\), let
\[\ell_{\alpha}(x):=\inf_{j\geq 0}\{S^{j}_{\alpha}(|x|)\leq 0\}-1,\]
and set
\[\ell_{\alpha}:=\min\{\ell_{\alpha}(1),\ell_{\alpha}(1-\alpha)\}.\]
**Lemma 2.3**.: _For \(\alpha\neq 1\), \(S_{\alpha}\) has matching if and only if \(\ell_{\alpha}<\infty\). Moreover, if \(\ell_{\alpha}<\infty\), then \(m(\alpha)\in\{\ell_{\alpha}+1,\ell_{\alpha}+2\}\)._
Proof.: Let \(\ell:=\ell_{\alpha}\). That matching implies \(\ell<\infty\) is immediate. Now suppose \(\ell<\infty\), and assume without loss of generality that \(\ell=\ell_{\alpha}(1-\alpha)\) and thus \(S_{\alpha}^{\ell+1}(1-\alpha)\geq 0\) (the other case is similar). The definitions of \(\ell\) and \(m(\alpha)\) give \(\ell+1\leq m(\alpha)\). By Proposition 2.1, \(S_{\alpha}^{\ell+1}(1-\alpha)\geq 0\) and \(\alpha>1\) imply
\[S_{\alpha}^{\ell+1}(1)-S_{\alpha}^{\ell+1}(1-\alpha)\in\{0,\alpha/\beta\}.\]
The result holds if the difference is \(0\). If the difference is \(\alpha/\beta\), we must have \((d_{\ell+2},e_{\ell+2})=(1,0)\). From Figure 2, this implies
\[S_{\alpha}^{\ell+2}(1)-S_{\alpha}^{\ell+2}(1-\alpha)=0.\]
**Corollary 2.4**.: _For \(\alpha\neq 1\), \(S_{\alpha}\) has matching if and only if there exists some \(j\geq 1\) such that_
\[S_{\alpha}^{j}(1)\in(1/\beta,\alpha/\beta]\quad\text{or}\quad S_{\alpha}^{j}( 1-\alpha)\in[-\alpha/\beta,-1/\beta).\]
_Moreover, \(\ell_{\alpha}(1)\) and \(\ell_{\alpha}(1-\alpha)\), respectively, are the infimums over all \(j\) for which the above inclusions hold._
Proof.: This follows from Lemma 2.3 and the facts that
\[S_{\alpha}^{-1}([-1,0])\cap(0,1]=(1/\beta,\alpha/\beta]\]
and
\[S_{\alpha}^{-1}([0,1])\cap[-1,0)=[-\alpha/\beta,-1/\beta).\]
Due to symmetry, the above corollary states that \(S_{\alpha}\) has matching if and only if the orbit of either \(1\) or of \(\alpha-1\) enters the region \((1/\beta,\alpha/\beta]\). We shall see that this occurs for Lebesgue-a.e. \(\alpha\) by relating the beginnings of these orbits to the beginnings of certain orbits of the (ergodic) \(\beta\)_-transformation_\(B:[0,1]\to[0,1]\) defined by \(B(x)=\beta x\ (\text{mod}\ 1)\). Set
\[b(x):=\begin{cases}0,&x<1/\beta\\ 1,&x\geq 1/\beta\end{cases},\]
and for each \(j\geq 1\), let
\[b_{j}(x):=b(B^{j-1}(x)).\]
We call the sequence \((b_{j}(x))_{j\geq 1}\) the \(\beta\)_-expansion_ (also referred to as the _greedy-expansion_) of \(x\). Via induction, one finds that for each \(k\geq 0\),
\[B^{k}(x)=\beta^{k}\left(x-\sum_{j=1}^{k}b_{j}(x)/\beta^{j}\right). \tag{6}\]
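The greedy digits are straightforward to generate numerically; a minimal sketch (illustrative only, with the usual floating-point caveat at the discontinuity \(1/\beta\)) follows. By Lemma 2.5(ii) below, the printed digits agree with the initial \(S_{\alpha}\)-digits of \(1\) for \(\alpha=1.3\), up to index \(\ell_{\alpha}(1)\):

```python
BETA = (5 ** 0.5 + 1) / 2

def B(x):                        # golden beta-transformation
    return (BETA * x) % 1.0

def beta_digits(x, n):
    digits = []
    for _ in range(n):
        digits.append(1 if x >= 1 / BETA else 0)
        x = B(x)
    return digits

print(beta_digits(1 / 1.3, 8))
```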
**Lemma 2.5**.: _Let \(x\in\{1,\alpha-1\},\ \alpha\neq 1\). Then_
1. \(S^{j}_{\alpha}(x)=\alpha B^{j}(x/\alpha)\) _for each_ \(0\leq j\leq\ell_{\alpha}(x)\)_,_
2. \(s_{\alpha,j}(x)=b_{j}(x/\alpha)\) _for each_ \(1\leq j\leq\ell_{\alpha}(x)\) _and_
3. \(\ell_{\alpha}(x)\) _is the infimum over all_ \(j\) _for which_ \(B^{j}(x/\alpha)\in(1/\beta\alpha,1/\beta]\)_._
Proof.: Claim (iii) will follow from claim (i), Corollary 2.4 and the fact that \(\ell_{\alpha}(x)=\ell_{\alpha}(-x)\). We prove claim (i) via induction on \(j\). Certainly \(S^{j}_{\alpha}(x)=\alpha B^{j}(x/\alpha)\) for \(j=0\). Now suppose this equality holds for some \(j=k-1\) with \(0\leq k-1<\ell_{\alpha}(x)\). By Corollary 2.4, \(S^{k-1}_{\alpha}(x)\in[0,1]\backslash(1/\beta,\alpha/\beta]\), and we find
\[S^{k}_{\alpha}(x) =\begin{cases}\beta S^{k-1}_{\alpha}(x),&S^{k-1}_{\alpha}(x)\in[0,1/\beta]\\ \beta S^{k-1}_{\alpha}(x)-\alpha,&S^{k-1}_{\alpha}(x)\in(\alpha/\beta,1]\end{cases}\] \[=\begin{cases}\beta\alpha B^{k-1}(x/\alpha),&B^{k-1}(x/\alpha)\in[0,1/\beta\alpha]\\ \beta\alpha B^{k-1}(x/\alpha)-\alpha,&B^{k-1}(x/\alpha)\in(1/\beta,1/\alpha]\end{cases}\] \[=\alpha B^{k}(x/\alpha),\]
so the first claim holds. Furthermore, the equality in (i) gives for each \(1\leq j\leq\ell_{\alpha}(x)\) that \(S^{j-1}_{\alpha}(x)\in[0,1/\beta]\) if and only if \(B^{j-1}(x/\alpha)\in[0,1/\beta\alpha]\) and \(S^{j-1}_{\alpha}(x)\in(\alpha/\beta,1]\) if and only if \(B^{j-1}(x/\alpha)\in(1/\beta,1/\alpha]\). Thus \(s_{\alpha,j}(x)=b_{j}(x/\alpha)\) for such \(j\), proving claim (ii).
Corollary 2.4, Lemma 2.5 and symmetry of \(S_{\alpha}\) give yet another characterisation of matching in terms of the map \(B\):
**Corollary 2.6**.: _For \(\alpha\neq 1\), \(S_{\alpha}\) has matching if and only if there exists some \(j\geq 0\) such that_
\[B^{j}(1/\alpha)\in(1/\beta\alpha,1/\beta]\quad\text{or}\quad B^{j}(1-1/\alpha )\in(1/\beta\alpha,1/\beta].\]
_Moreover, \(\ell_{\alpha}(1)\) and \(\ell_{\alpha}(1-\alpha)\), respectively, are the infimums over all \(j\) for which the above inclusions hold._
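The criterion of Corollary 2.6 is convenient for experiments: one simply searches the two \(B\)-orbits for the first entry into the window \((1/\beta\alpha,1/\beta]\). A sketch (illustrative only; subject to floating-point drift over long orbits):

```python
BETA = (5 ** 0.5 + 1) / 2

def B(x):
    return (BETA * x) % 1.0

def first_entry(alpha, x, max_iter=500):
    # smallest j >= 0 with B^j(x) in (1/(beta*alpha), 1/beta], else None
    for j in range(max_iter + 1):
        if 1 / (BETA * alpha) < x <= 1 / BETA:
            return j
        x = B(x)
    return None

alpha = 1.3
# here the orbit of 1 - 1/alpha enters at j = 2, so ell_alpha = 2 and, by
# Lemma 2.3, m(alpha) lies in {3, 4}
print(first_entry(alpha, 1 / alpha), first_entry(alpha, 1 - 1 / alpha))
```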
The previous results together with ergodicity of \(B\) can now be used to prove that \(S_{\alpha}\) has matching for a set of parameters \(\alpha\) of full Lebesgue measure. The proof is nearly identical to that of Proposition 2.3 of [13] but is included here for the reader's convenience.
**Proposition 2.7**.: _The map \(S_{\alpha}\) has matching for Lebesgue-a.e. \(\alpha\in[1,\beta]\)._
Proof.: Let \(\alpha\in(1,\beta]\) and \(k\in\mathbb{N}\) with \(k>\beta^{3}\). By ergodicity of \(B\) with respect to Lebesgue measure (§4 of [22]), for Lebesgue-a.e. \(x\in[0,1]\) there exists some \(j\geq 1\) such that \(B^{j}(x)\in(1/\beta-1/k,1/\beta]\). Note that \(1/\beta\alpha<1/\beta-1/k\) if and only if \(\alpha>k/(k-\beta)\). Thus for Lebesgue-a.e. \(\alpha\in(k/(k-\beta),\beta]\), there exists some \(j\geq 1\) such that
\[B^{j}(1/\alpha)\in(1/\beta-1/k,1/\beta]\subset(1/\beta\alpha,1/\beta].\]
By Corollary 2.6, \(S_{\alpha}\) has matching for Lebesgue-a.e. \(\alpha\in(k/(k-\beta),\beta]\). Let \(A_{k}\) denote the set of \(\alpha\in(k/(k-\beta),\beta]\) for which \(S_{\alpha}\) does not have matching. Then \(\cup_{k>\beta^{3}}A_{k}\) has Lebesgue measure \(0\) and equals the set of all \(\alpha\in(1,\beta]\) for which \(S_{\alpha}\) does not have matching.
The finer structure of the set of matching parameters \(\alpha\in[1,\beta]\) is considered in §§2.2 and 2.3 below. Before investigating this structure, we show that matching occurs for \(S_{\alpha}\) if and only if it occurs for the corresponding jump transformation \(T_{\alpha}\). The following lemma may be deduced from the general theory of jump transformations, but a proof is included for completeness.
**Lemma 2.8**.: _Fix \(x\in[-1,1]\) and let \(j_{1}<j_{2}<j_{3}<\dots\) be an enumeration of the set_
\[\{j\geq 0\ |\ S_{\alpha}^{j}(x)\in J_{0}\}.\]
_Then \(T_{\alpha}^{k}(x)=S_{\alpha}^{j_{k}+1}(x)\) for all \(k\geq 1\)._
Proof.: The claim is immediate for \(k=1\) by (2) and the fact that \(S_{\alpha}(J_{-1}\cup J_{1})\subset J_{0}\). Now suppose the result holds for some \(k\geq 1\), and let \(i\in\{0,1\}\) be minimal such that \(S_{\alpha}^{i}(S_{\alpha}^{j_{k}+1}(x))\in J_{0}.\) By definition, then, \(j_{k+1}=j_{k}+i+1\), and
\[T_{\alpha}^{k+1}x=T_{\alpha}(S_{\alpha}^{j_{k}+1}x)=S_{\alpha}^{i+1}(S_{ \alpha}^{j_{k}+1}x)=S_{\alpha}^{j_{k+1}+1}x.\]
**Proposition 2.9**.: _The matching parameters \(\alpha\in[1,\beta]\) for \(T_{\alpha}\) and for \(S_{\alpha}\) coincide._
Proof.: Recall that \(T_{\alpha}\) has critical points at \(\pm 1/\beta\), and note that \(\lim_{x\to 1/\beta^{-}}T_{\alpha}(x)=1\) while \(\lim_{x\to 1/\beta^{+}}T_{\alpha}(x)=\beta(1-\alpha)\). Due to symmetry, \(T_{\alpha}\) has matching if and only if there are integers \(M,N>0\) for which \(T_{\alpha}^{M}(1)=T_{\alpha}^{N}(\beta(1-\alpha))\).
Suppose first that \(T_{\alpha}\) has matching. Then \(T_{\alpha}^{M}(1)=T_{\alpha}^{N}(\beta(1-\alpha))\) for some \(M,N>0\). By (2) and the fact that \(S_{\alpha}(1-\alpha)=\beta(1-\alpha)\), this implies the existence of some \(M^{\prime},N^{\prime}>0\) for which \(S_{\alpha}^{M^{\prime}}(1)=S_{\alpha}^{N^{\prime}}(1-\alpha)\).
Conversely, suppose \(S_{\alpha}\) has matching with matching index \(m:=m(\alpha)\). From the proof of Proposition 2.1 it is clear that \(S_{\alpha}^{m}(1)=S_{\alpha}^{m}(1-\alpha)\in J_{0}\). By Lemma 2.8, there are \(M,N>0\) for which
\[T_{\alpha}^{M}(1)=S_{\alpha}^{m+1}(1)=S_{\alpha}^{m+1}(1-\alpha)=S_{\alpha}^{ m}(\beta(1-\alpha))=T_{\alpha}^{N}(\beta(1-\alpha)).\]
### Matching words and intervals
When \(S_{\alpha}\) has matching, we call the first \(m(\alpha)<\infty\) digits of the \(S_{\alpha}\)-expansion of \(1\) the _matching word_ corresponding to \(\alpha\). A maximal subinterval of \([1,\beta]\) on which matching words coincide is called a _matching interval_ corresponding to the common matching word. Here we classify matching words and matching intervals (Corollary 2.20); as all matching parameters belong to some matching interval, this gives a complete classification of matching parameters \(\alpha\in[1,\beta]\). (Propositions 2.13, 2.18 and 2.19 imply that this also classifies the first \(m(\alpha)<\infty\) digits of the \(S_{\alpha}\)-expansions of \(1-\alpha\) for \(S_{\alpha}\) with matching and the maximal subintervals of parameters \(\alpha\) on which these digits coincide.) Note that matching words and intervals for \(\alpha\in[1,\beta]\backslash(1,1+1/\beta^{3})\) have been implicitly determined via the cases considered in §2.1. For instance, \((1+1/\beta^{2},\beta]\) is the matching interval corresponding to the matching word \(10\), and the \(S_{\alpha}\)-expansion of \(1-\alpha\) for each \(\alpha\in(1+1/\beta^{2},\beta]\) begins with \(0(-1)\). Similarly, \((1+1/\beta^{3},1+1/\beta^{2})\) is the matching interval corresponding to the matching word \(1001\), and the \(S_{\alpha}\)-expansion of \(1-\alpha\) for each \(\alpha\) in this interval begins with \(00(-1)0\).
Denote by \(\prec\) the lexicographical ordering on \(\{0,\pm 1\}^{\mathbb{N}}\). Note that \(\prec\) may also be defined on the set \(\{0,\pm 1\}^{*}\) of finite words with alphabet \(-1,0,1\) by first sending \(\mathbf{w}\in\{0,\pm 1\}^{*}\) to \(\mathbf{w}0^{\infty}\).
**Definition 2.2**.: Let
\[\mathbf{w}_{0}:=00\prec\mathbf{w}_{1}:=001\prec\mathbf{w}_{2}:=01.\]
We say that \(\mathbf{d}\in\{0,1\}^{*}\) is in _admissible block form_ if \(\mathbf{d}=10\) or
\[\mathbf{d}=1\mathbf{w}_{i_{1}}\mathbf{w}_{i_{2}}\cdots\mathbf{w}_{i_{n}}(1-i_ {n}/2)\]
for some \(i_{1},\dots,i_{n}\in\{0,1,2\},\ n\geq 1\) with \(i_{n}\neq 1\), and, when \(n\geq 2\), \(i_{1}=2\). The collection of all words in admissible block form is denoted \(\mathcal{B}\).
The condition that a word in admissible block form ends in \(\mathbf{w}_{i_{n}}(1-i_{n}/2),\ i_{n}\neq 1\), guarantees that the final three digits are either \(001\) or \(010\) (recall (5)); however, not every word ending this way belongs to \(\mathcal{B}\):
**Example 2.10**.: _One verifies that_
\[\mathbf{d}:=1\mathbf{w}_{2}\mathbf{w}_{0}\mathbf{w}_{1}\mathbf{w}_{0}1=10100001001\in\mathcal{B},\]
_whereas_
\[\mathbf{d}^{\prime}:=1010001\notin\mathcal{B}.\]
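Membership in \(\mathcal{B}\) can be tested mechanically by trying all decompositions into the blocks \(\mathbf{w}_{0},\mathbf{w}_{1},\mathbf{w}_{2}\). The following sketch (a hypothetical helper, not from the paper) reproduces Example 2.10:

```python
BLOCKS = {"00": 0, "001": 1, "01": 2}   # w_0, w_1, w_2

def block_decompositions(w):
    """All index lists [i_1, ..., i_n] with w = w_{i_1} ... w_{i_n}."""
    if w == "":
        yield []
        return
    for b, i in BLOCKS.items():
        if w.startswith(b):
            for rest in block_decompositions(w[len(b):]):
                yield [i] + rest

def in_admissible_block_form(d):
    if d == "10":
        return True
    if len(d) < 3 or d[0] != "1":
        return False
    for idx in block_decompositions(d[1:-1]):
        if not idx:
            continue
        n = idx[-1]                       # i_n
        ok_tail = n != 1 and d[-1] == str(1 - n // 2)
        ok_head = len(idx) < 2 or idx[0] == 2
        if ok_tail and ok_head:
            return True
    return False

print(in_admissible_block_form("10100001001"))  # d  of Example 2.10 -> True
print(in_admissible_block_form("1010001"))      # d' of Example 2.10 -> False
```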
Note that the indices \(i_{j}\) for \(\mathbf{d}\in\mathcal{B}\) are uniquely determined; that is, if
\[1\mathbf{w}_{i_{1}}\mathbf{w}_{i_{2}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)=1 \mathbf{w}_{j_{1}}\mathbf{w}_{j_{2}}\cdots\mathbf{w}_{j_{m}}(1-j_{m}/2),\]
then \(m=n\) and \(i_{k}=j_{k}\) for each \(1\leq k\leq n\). Define \(\varphi:\mathcal{B}\to\{0,-1\}^{*}\) by \(\varphi(10)=\overline{01}\) and for each \(\mathbf{d}\in\mathcal{B}\) of the form
\[\mathbf{d}=1\mathbf{w}_{i_{1}}\mathbf{w}_{i_{2}}\cdots\mathbf{w}_{i_{n}}(1-i_ {n}/2),\]
by
\[\varphi(\mathbf{d}):=\overline{0\mathbf{w}_{2-i_{1}}\mathbf{w}_{2-i_{2}} \cdots\mathbf{w}_{2-i_{n}}(i_{n}/2)},\]
where \(\overline{\mathbf{w}}:=-\mathbf{w}\) for each \(\mathbf{w}\in\{0,\pm 1\}^{*}\).
Let \(\sigma:\{0,\pm 1\}^{\mathbb{N}}\to\{0,\pm 1\}^{\mathbb{N}}\) denote the left shift defined by \(\sigma((w_{j})_{j\geq 1})=(w_{j+1})_{j\geq 1}\) for each \((w_{j})_{j\geq 1}\in\{0,\pm 1\}^{\mathbb{N}}\); as with the lexicographical ordering, \(\sigma\) is also defined on the set \(\{0,\pm 1\}^{*}\) of finite words by sending \(\mathbf{w}\in\{0,\pm 1\}^{*}\) to \(\mathbf{w}0^{\infty}\). We remark that for each \(T\in\{S_{\alpha},T_{\alpha},B\}\), the left shift of the \(T\)-expansion of \(x\) equals the \(T\)-expansion of \(T(x)\).
**Definition 2.3**.: A word \(\mathbf{d}\in\mathcal{B}\) satisfies _Property \(M\)_ if, for each \(j\geq 0\), both \(\sigma^{j}(\mathbf{d})\preceq\mathbf{d}\) and \(\sigma^{j}(\overline{\varphi(\mathbf{d})})\preceq\mathbf{d}\). Denote by \(\mathcal{M}\subset\mathcal{B}\) the collection of all words \(\mathbf{d}\) satisfying Property \(M\). We call \(10\) and \(1001\) the _exceptional_ words in \(\mathcal{M}\) and denote by \(\mathcal{M}_{U}:=\mathcal{M}\backslash\{10,1001\}\) the collection of _unexceptional_ words in \(\mathcal{M}\).
**Example 2.11**.: _Let \(\mathbf{d}\in\mathcal{B}\) be as in Example 2.10. Then_
\[\varphi(\mathbf{d})=\overline{0\mathbf{w}_{0}\mathbf{w}_{2}\mathbf{w}_{1} \mathbf{w}_{2}0}=\overline{00001001010},\]
_and since both \(\sigma^{j}(\mathbf{d})\preceq\mathbf{d}\) and \(\sigma^{j}(\overline{\varphi(\mathbf{d})})\preceq\mathbf{d}\) for all \(j\geq 0\), we have \(\mathbf{d}\in\mathcal{M}\)._
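Property \(M\) is likewise finitely checkable. In the sketch below (again a hypothetical helper), a word is specified by its block indices, \(\varphi\) follows the definition above, and the shift comparisons are carried out on zero-padded words:

```python
BLOCK = {0: "00", 1: "001", 2: "01"}

def word(indices):
    """d = 1 w_{i_1} ... w_{i_n} (1 - i_n/2), as a string over {0,1}."""
    return "1" + "".join(BLOCK[i] for i in indices) + str(1 - indices[-1] // 2)

def phi(indices):
    """phi(d) as a list over {0,-1} (the overline is a sign change)."""
    s = "0" + "".join(BLOCK[2 - i] for i in indices) + str(indices[-1] // 2)
    return [-int(ch) for ch in s]

def shifts_leq(w, d):
    """sigma^j(w) <= d lexicographically for all j >= 0 (zero-padded)."""
    dd = [int(ch) for ch in d]
    for j in range(len(w)):
        tail = w[j:] + [0] * len(dd)
        if tail[:len(dd)] > dd:
            return False
    return True

idx = [2, 0, 1, 0]                        # d = 1 w2 w0 w1 w0 1 (Example 2.10)
d = word(idx)
neg_phi = [-x for x in phi(idx)]          # overline{phi(d)}
print(d, shifts_leq([int(ch) for ch in d], d) and shifts_leq(neg_phi, d))
```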
We shall see that Property \(M\) classifies matching words of the maps \(S_{\alpha}\). To show that \(\mathcal{M}\) contains all matching words we need the following observation, which is not novel, but for which a proof is included for completeness:
**Lemma 2.12**.: _Fix \(\alpha\in[1,\beta]\) and \(x,y\in[-1,1]\). Then \(x<y\) if and only if \((s_{\alpha,j}(x))_{j\geq 1}\prec(s_{\alpha,j}(y))_{j\geq 1}\). Similarly, for \(x,y\in[0,1]\), \(x<y\) if and only if \((b_{j}(x))_{j\geq 1}\prec(b_{j}(y))_{j\geq 1}\)._
Proof.: Suppose \(x,y\in[-1,1]\) with \(x<y\), and let \(n:=\min_{j\geq 1}\{s_{\alpha,j}(x)\neq s_{\alpha,j}(y)\}\). We first claim for each \(0\leq j<n\) that \(S_{\alpha}^{j}(x)<S_{\alpha}^{j}(y)\). This is true by assumption for \(j=0\). If \(n=1\), we're finished. Assume \(n>1\) and that the claim holds for some \(j=k-1\) with \(0\leq k-1<n-1\). Since \(s_{\alpha,k}(x)=s_{\alpha,k}(y)\), we have that \(S_{\alpha}\) restricts to a linear function with positive slope on an interval containing \(S_{\alpha}^{k-1}(x)\) and \(S_{\alpha}^{k-1}(y)\). But \(S_{\alpha}^{k-1}(x)<S_{\alpha}^{k-1}(y)\) by assumption, so also \(S_{\alpha}^{k}(x)<S_{\alpha}^{k}(y)\) and the claim holds. Since \(s_{\alpha,n}(x)\neq s_{\alpha,n}(y)\) and \(S_{\alpha}^{n-1}(x)<S_{\alpha}^{n-1}(y)\), it must be true that \(s_{\alpha,n}(x)<s_{\alpha,n}(y)\) and hence \((s_{\alpha,j}(x))_{j\geq 1}\prec(s_{\alpha,j}(y))_{j\geq 1}\).
Now suppose \(x\geq y\). If equality holds, then by uniqueness of \(S_{\alpha}\)-expansions, \((s_{\alpha,j}(x))_{j\geq 1}=(s_{\alpha,j}(y))_{j\geq 1}\). If the inequality is strict, the argument above applies with \(x\) and \(y\) interchanged.
The proof of the second statement is identical, _mutatis mutandis_.
**Proposition 2.13**.: _Suppose for some \(\alpha\in[1,\beta]\) that \(S_{\alpha}\) has matching with index \(m:=m(\alpha)\), and let \(\mathbf{d}:=d_{1}\cdots d_{m}\) denote the corresponding matching word. Then \(\mathbf{d}\in\mathcal{M}\), and \(\mathbf{e}:=\varphi(\mathbf{d})\) agrees with the first \(m\) digits \(e_{1}\cdots e_{m}\) of the \(S_{\alpha}\)-expansion of \(1-\alpha\)._
Proof.: From the cases of §2.1, the result holds for \(\alpha\notin(1,1+1/\beta^{3})\); in particular, \(\alpha\in(1+1/\beta^{2},\beta]\) and \(\alpha\in(1+1/\beta^{3},1+1/\beta^{2})\) correspond to the exceptional words \(\mathbf{d}=10\) and \(\mathbf{d}=1001\), respectively, in \(\mathcal{M}\), and \(\varphi(10)=\overline{01},\ \varphi(1001)=\overline{0010}\). Now assume \(\alpha\in(1,1+1/\beta^{3})\). Note that \(d_{1}=1,\ e_{1}=0\), and
\[S_{\alpha}(1)-S_{\alpha}(1-\alpha)=(\beta-\alpha)-\beta(1-\alpha)=\alpha/\beta.\]
Recall from Equation (5) and the discussion preceding it that
\[\left(\frac{d_{m-2}d_{m-1}d_{m}}{e_{m-2}e_{m-1}e_{m}}\right)\in\left\{\left(\frac{001}{\overline{010}}\right),\left(\frac{010}{\overline{001}}\right)\right\}=\left\{\left(\frac{\mathbf{w}_{0}1}{\overline{\mathbf{w}_{2}0}}\right),\left(\frac{\mathbf{w}_{2}0}{\overline{\mathbf{w}_{0}1}}\right)\right\},\]
and \(S_{\alpha}^{m-3}(1)-S_{\alpha}^{m-3}(1-\alpha)=\alpha/\beta\). The remaining digits
\[\left(\frac{d_{2}d_{3}\cdots d_{m-3}}{e_{2}e_{3}\cdots e_{m-3}}\right)\]
are thus determined by edge labels of cycles in the graph of Figure 2 beginning and ending at vertex \(\alpha/\beta\). There are three possible cycles, whose edge labels give
\[\left(\frac{d_{j}d_{j+1}}{e_{j}e_{j+1}}\right)=\left(\frac{01}{00}\right)=\left(\frac{\mathbf{w}_{2}}{\overline{\mathbf{w}}_{0}}\right),\ \left(\frac{d_{j}d_{j+1}}{e_{j}e_{j+1}}\right)=\left(\frac{00}{\overline{01}}\right)=\left(\frac{\mathbf{w}_{0}}{\overline{\mathbf{w}}_{2}}\right),\text{ and }\left(\frac{d_{j}d_{j+1}d_{j+2}}{e_{j}e_{j+1}e_{j+2}}\right)=\left(\frac{001}{\overline{001}}\right)=\left(\frac{\mathbf{w}_{1}}{\overline{\mathbf{w}}_{1}}\right).\]
It follows that \(\mathbf{d}=1\mathbf{w}_{i_{1}}\mathbf{w}_{i_{2}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)\) and \(e_{1}\cdots e_{m}=\overline{0\mathbf{w}_{2-i_{1}}\mathbf{w}_{2-i_{2}}\cdots\mathbf{w}_{2-i_{n}}(i_{n}/2)}\) for some \(i_{1},\ldots,i_{n}\in\{0,1,2\},\ n\geq 1\) and \(i_{n}\neq 1\). Moreover, note from case (v) of §2.1 that \(d_{1}d_{2}d_{3}d_{4}=1010\), so \(i_{1}=2\). Thus \(\mathbf{d}\in\mathcal{B}\) and \(\mathbf{e}=e_{1}\cdots e_{m}=\varphi(\mathbf{d})\). From Lemma 2.12, the facts that \(S_{\alpha}^{j}(1),\ S_{\alpha}^{j}(1-\alpha)\in[-1,1]\) for each \(j\geq 0\) imply that \(\sigma^{j}(\mathbf{d}),\sigma^{j}(\overline{\mathbf{e}})\preceq\mathbf{d}\) for each \(j\geq 0\). Thus \(\mathbf{d}\in\mathcal{M}\).
The previous result states that every matching word belongs to \(\mathcal{M}\). Before proving the converse (Propositions 2.16 and 2.18), we define and investigate properties of the _valuation function_\(v:\mathcal{S}\rightarrow\mathbb{R}\) given by the (absolutely) convergent series
\[v((w_{j})_{j\geq 1}):=\sum_{j\geq 1}w_{j}/\beta^{j},\]
where \(\mathcal{S}\subset\mathbb{Z}^{\mathbb{N}}\) consists of all sequences \((w_{j})_{j\geq 1}\) whose entries are bounded above and below. The valuation function is also defined on the set \(\mathcal{S}^{*}\subset\mathcal{S}\) of finite words by considering the corresponding finite sum and setting \(v(\varepsilon)=0\) for the empty word \(\varepsilon\). It is not difficult to check for finite words \(\mathbf{w},\mathbf{w}^{\prime}\in\{0,\pm 1\}^{*}\) with no consecutive nonzero digits that \(\mathbf{w}\prec\mathbf{w}^{\prime}\) if and only if \(v(\mathbf{w})<v(\mathbf{w}^{\prime})\).
**Lemma 2.14**.: _If \(\mathbf{w}:=w_{1}w_{2}\cdots w_{k}\in\{0,1,2\}^{*}\) is \(\varepsilon\) (in which case we set \(k=0\)) or consists solely of blocks of \(01\)'s and \(002\)'s, then_
\[v(\mathbf{w})=1/\beta-1/\beta^{k+1}.\]
Proof.: The case that \(\mathbf{w}=\varepsilon\) is trivial, so suppose \(\mathbf{w}\neq\varepsilon\). One easily verifies that
\[v((01)^{3})=v((002)^{2})\text{ \ \ \ \ and \ \ \ \ }v(01002)=v(00201).\]
These observations, together with the fact that for each \(1\leq j\leq k\),
\[v(\mathbf{w})=v(w_{1}\cdots w_{j})+(1/\beta^{j})v(w_{j+1}\cdots w_{k}),\]
imply that
\[v(\mathbf{w})=\begin{cases}v((002)^{k/3}),&k\equiv 0\ (\text{mod}\ 3)\\ v((002)^{(k-4)/3}(01)^{2}),&k\equiv 1\ (\text{mod}\ 3)\\ v((002)^{(k-2)/3}01),&k\equiv 2\ (\text{mod}\ 3)\end{cases}.\]
Notice that for any \(j\geq 1\),
\[v((002)^{j}) =2\sum_{i=1}^{j}(1/\beta^{3})^{i}\] \[=2\cdot\frac{1/\beta^{3}-1/\beta^{3j+3}}{1-1/\beta^{3}}\] \[=2\cdot\frac{1-1/\beta^{3j}}{\beta^{3}-1}\] \[=2\cdot\frac{1-1/\beta^{3j}}{2\beta}\] \[=1/\beta-1/\beta^{3j+1}.\]
If \(k\equiv 0\) (mod 3), setting \(j=k/3\) gives the result. If \(k\equiv 1\) (mod 3), we compute
\[v(\mathbf{w}) =v((002)^{(k-4)/3}(01)^{2})\] \[=v((002)^{(k-4)/3})+(1/\beta^{k-4})v((01)^{2})\] \[=1/\beta-1/\beta^{k-3}+(1/\beta^{k-4})(1/\beta^{2}+1/\beta^{4})\] \[=1/\beta-1/\beta^{k-3}+1/\beta^{k-2}+1/\beta^{k}\] \[=1/\beta-1/\beta^{k+1}.\]
Similarly, if \(k\equiv 2\) (mod 3),
\[v(\mathbf{w}) =v((002)^{(k-2)/3}01)\] \[=v((002)^{(k-2)/3})+(1/\beta^{k-2})v(01)\] \[=1/\beta-1/\beta^{k-1}+1/\beta^{k}\] \[=1/\beta-1/\beta^{k+1}.\]
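Lemma 2.14 is easy to confirm numerically; the sketch below checks every concatenation of three blocks from \(\{01,002\}\):

```python
import itertools

BETA = (5 ** 0.5 + 1) / 2

def v(w):
    return sum(d / BETA ** (j + 1) for j, d in enumerate(w))

for blocks in itertools.product([(0, 1), (0, 0, 2)], repeat=3):
    w = [d for b in blocks for d in b]
    k = len(w)
    assert abs(v(w) - (1 / BETA - 1 / BETA ** (k + 1))) < 1e-12
print("Lemma 2.14 verified for all 3-block words")
```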
For equal-length words \(\mathbf{x},\mathbf{y}\in\{0,\pm 1\}^{*}\), define \(\mathbf{x}+\mathbf{y},\mathbf{x}-\mathbf{y}\in\{0,\pm 1,\pm 2\}^{*}\) where addition and subtraction, respectively, are performed entry-wise. Note that
\[\mathbf{w}_{2}-\overline{\mathbf{w}_{0}}=01,\quad\mathbf{w}_{0}-\overline{ \mathbf{w}_{2}}=01,\quad\text{and}\quad\mathbf{w}_{1}-\overline{\mathbf{w}_{1} }=002.\]
Suppose \(\mathbf{d}\) satisfies Property \(M\) with \(m:=\operatorname{len}(\mathbf{d})\). Since \(\mathbf{d}\) is in admissible block form, the definition of \(\mathbf{e}:=\varphi(\mathbf{d})\) implies that \(\mathbf{d}-\mathbf{e}=1\mathbf{w}1\) for some word \(\mathbf{w}\) consisting solely of blocks of 01's and 002's or \(\mathbf{w}=\varepsilon\). Using Lemma 2.14, we compute
\[v(\mathbf{d}-\mathbf{e})=v(1\mathbf{w}1)=1/\beta+(1/\beta)(1/\beta-1/\beta^{m -1})+1/\beta^{m}=1.\]
This proves the following:
**Proposition 2.15**.: _If \(\mathbf{d}\in\mathcal{M}\) and \(\mathbf{e}:=\varphi(\mathbf{d})\), then_
\[v(\mathbf{d})-v(\mathbf{e})=v(\mathbf{d}-\mathbf{e})=1.\]
For \(\mathbf{d}=10\), set \(I_{\mathbf{d}}=(\alpha_{\mathbf{d}}^{-},\alpha_{\mathbf{d}}^{+}]:=(1+1/\beta^{ 2},\beta]\), and for all other \(\mathbf{d}=d_{1}\cdots d_{m}\in\mathcal{M}\), define
\[I_{\mathbf{d}}=(\alpha_{\mathbf{d}}^{-},\alpha_{\mathbf{d}}^{+}):=\left( \frac{\beta^{m}+\beta^{d_{m}}}{\beta^{m}v(\mathbf{d})+\beta^{d_{m}}},\frac{ \beta^{m}-\beta^{1-d_{m}}}{\beta^{m}v(\mathbf{d})-\beta^{1-d_{m}}}\right). \tag{7}\]
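The endpoints in (7) are elementary to evaluate. As a sanity check (numerical only), the word \(1001\) recovers the interval \((1+1/\beta^{3},1+1/\beta^{2})\) found in §2.1:

```python
BETA = (5 ** 0.5 + 1) / 2

def v(word):                              # valuation of a 0/1 word
    return sum(int(ch) / BETA ** (j + 1) for j, ch in enumerate(word))

def interval(d):                          # Eq. (7)
    m, dm = len(d), int(d[-1])
    lo = (BETA ** m + BETA ** dm) / (BETA ** m * v(d) + BETA ** dm)
    hi = (BETA ** m - BETA ** (1 - dm)) / (BETA ** m * v(d) - BETA ** (1 - dm))
    return lo, hi

print(interval("1001"))
print(1 + 1 / BETA ** 3, 1 + 1 / BETA ** 2)   # matches, up to rounding
```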
**Proposition 2.16**.: _For each \(\mathbf{d}\in\mathcal{M}\), \(I_{\mathbf{d}}\) is a nonempty subinterval of \((1,\beta]\)._
Proof.: The result is true for \(\mathbf{d}=10\), so assume \(\mathbf{d}\neq 10\). We first show that \(I_{\mathbf{d}}\neq\varnothing\), i.e. that
\[\frac{\beta^{m}+\beta^{d_{m}}}{\beta^{m}v(\mathbf{d})+\beta^{d_{m}}}<\frac{ \beta^{m}-\beta^{1-d_{m}}}{\beta^{m}v(\mathbf{d})-\beta^{1-d_{m}}},\]
or
\[(\beta^{m}+\beta^{d_{m}})(\beta^{m}v(\mathbf{d})-\beta^{1-d_{m}})<(\beta^{m}- \beta^{1-d_{m}})(\beta^{m}v(\mathbf{d})+\beta^{d_{m}}).\]
Distributing and cancelling terms gives that this is equivalent to
\[\beta^{m+d_{m}}v(\mathbf{d})-\beta^{m+1-d_{m}}<\beta^{m+d_{m}}-\beta^{m+1-d_{ m}}v(\mathbf{d}),\]
or \(v(\mathbf{d})<1\). Since \(\mathbf{d}\) has no consecutive 1's, one finds that \(v(\mathbf{d})<v((10)^{\infty})=1\) (see also Lemma 1 of [21]).
Next we show that \(I_{\mathbf{d}}\subset(1,\beta]\). The left endpoint of \(I_{\mathbf{d}}\) is greater than 1 again since \(v(\mathbf{d})<1\). It remains to show that
\[\frac{\beta^{m}-\beta^{1-d_{m}}}{\beta^{m}v(\mathbf{d})-\beta^{1-d_{m}}}\leq\beta.\]
Recall that \(d_{1}=1\), and if \(d_{m}=0\), then \(d_{m-1}=1\); thus \(v(\mathbf{d})\geq 1/\beta+\beta^{1-d_{m}}/\beta^{m}\), and
\[\beta^{m+1}v(\mathbf{d})-\beta^{2-d_{m}}\geq\beta^{m+1}(1/\beta+\beta^{1-d_{m} }/\beta^{m})-\beta^{2-d_{m}}>\beta^{m}-\beta^{1-d_{m}}.\]
Dividing both sides by \(\beta^{m}v(\mathbf{d})-\beta^{1-d_{m}}\) gives the desired inequality.
For each \(\mathbf{u}\in\{0,1\}^{*}\), let \(\Delta(\mathbf{u})\) denote the cylinder of points \(x\in[0,1]\) for which the \(\beta\)-expansion of \(x\) begins with \(\mathbf{u}\). One finds for each \(\mathbf{u}=u_{1}\cdots u_{n}\) with \(u_{j}u_{j+1}=0,\ 1\leq j<n\), that
\[\Delta(\mathbf{u})=\begin{cases}[v(\mathbf{u}),v(\mathbf{u})+1/\beta^{n}),&u_{n}=0\\ [v(\mathbf{u}),v(\mathbf{u})+1/\beta^{n+1}),&u_{n}=1\end{cases}. \tag{8}\]
The following lemma is needed in Proposition 2.18 below.
**Lemma 2.17**.: _Let \(\mathbf{d}\in\mathcal{M}_{U}\). Then \(B^{j}(1/\alpha_{\mathbf{d}}^{-})\leq 1/\alpha_{\mathbf{d}}^{-}\) and \(B^{j}(1-1/\alpha_{\mathbf{d}}^{+})\leq 1/\alpha_{\mathbf{d}}^{+}\) for all \(j>0\)._
Proof.: This is a corollary of two technical results (Lemmas 4.1 and 4.2), whose statements and proofs are provided in the appendix.
The next result--together with Proposition 2.16--states that every word \(\mathbf{d}\in\mathcal{M}\) is in fact a matching word, thus completing our classification of matching words as the set \(\mathcal{M}\). Moreover, it states that the interval \(I_{\mathbf{d}}\) is contained in a matching interval corresponding to the matching word \(\mathbf{d}\).
**Proposition 2.18**.: _For any \(\mathbf{d}\in\mathcal{M}\) and \(\alpha\in I_{\mathbf{d}}\), the \(S_{\alpha}\)-expansions of \(1\) and \(1-\alpha\) begin with \(\mathbf{d}\) and \(\varphi(\mathbf{d})\), respectively. Moreover, \(S_{\alpha}\) has matching with matching index \(m(\alpha)=\text{len}(\mathbf{d})\)._
Proof.: The result is shown for exceptional words \(\mathbf{d}\in\{10,1001\}\) in §2.1, so assume \(\mathbf{d}\in\mathcal{M}_{U}\). If the first statement holds, then that \(S_{\alpha}\) has matching with index \(m(\alpha)=\text{len}(\mathbf{d})\) is implied by the final three digits of \(\mathbf{d}\) and \(\mathbf{e}\) (see the discussion surrounding Equation (5)), so we need only prove the first statement. Let \(\alpha\in I_{\mathbf{d}}\), and write \(\mathbf{d}=d_{1}\cdots d_{m}\) and \(\mathbf{e}:=\varphi(\mathbf{d})=e_{1}\cdots e_{m}\). We must show that
\[d_{\alpha,1}\cdots d_{\alpha,m}=d_{1}\cdots d_{m}\]
and
\[e_{\alpha,1}\cdots e_{\alpha,m}=e_{1}\cdots e_{m}.\]
Assume that
\[\begin{pmatrix}d_{m-2}d_{m-1}d_{m}\\ e_{m-2}e_{m-1}e_{m}\end{pmatrix}=\begin{pmatrix}001\\ \overline{010}\end{pmatrix},\]
and set \(\alpha_{0}:=1/v(\mathbf{d})\) (the case that \(d_{m}=0\) is similar). Proposition 2.16 together with the fact that \(v(\mathbf{d})<1\) imply \(\alpha^{-}<\alpha_{0}<\alpha^{+}\), where, for ease of notation, \(\alpha^{\pm}:=\alpha_{\mathbf{d}}^{\pm}\). We claim that it suffices to show the following:
1. if \(\alpha\in(\alpha^{-},\alpha_{0})\), then \(\ell_{\alpha}(1)>m-1,\ \ell_{\alpha}(1-\alpha)=m-2\), \[b_{1}(1/\alpha)\cdots b_{m}(1/\alpha)=d_{1}\cdots d_{m},\] and \[b_{1}(1-1/\alpha)\cdots b_{m-2}(1-1/\alpha)=\overline{e_{1}\cdots e_{m-2}};\]
2. if \(\alpha\in(\alpha_{0},\alpha^{+})\), then \(\ell_{\alpha}(1)=m-1,\ \ell_{\alpha}(1-\alpha)>m-2\), \[b_{1}(1/\alpha)\cdots b_{m-1}(1/\alpha)=d_{1}\cdots d_{m-1},\] and \[b_{1}(1-1/\alpha)\cdots b_{m}(1-1/\alpha)=\overline{e_{1}\cdots e_{m}};\] and
3. if \(\alpha=\alpha_{0}\), then \(\ell_{\alpha}(1)=m-1,\ \ell_{\alpha}(1-\alpha)=m-2\), \[b_{1}(1/\alpha)\cdots b_{m-1}(1/\alpha)=d_{1}\cdots d_{m-1},\] \[b_{1}(1-1/\alpha)\cdots b_{m-2}(1-1/\alpha)=\overline{e_{1}\cdots e_{m-2}},\] and \(B^{m-1}(1/\alpha)=B^{m-2}(1-1/\alpha)=1/\beta\).
Indeed, suppose (i) holds. Lemma 2.5 implies
\[d_{\alpha,1}\cdots d_{\alpha,m}=d_{1}\cdots d_{m}\]
and
\[e_{\alpha,1}\cdots e_{\alpha,m-2}=e_{1}\cdots e_{m-2}.\]
Since \(\ell_{\alpha}(1-\alpha)=m-2\), Corollary 2.4 gives \(S_{\alpha}^{m-2}(1-\alpha)\in[-\alpha/\beta,-1/\beta)\), so \(e_{\alpha,m-1}=-1\) and \(e_{\alpha,m}=0\). In case (ii), Lemma 2.5 again gives
\[d_{\alpha,1}\cdots d_{\alpha,m-1}=d_{1}\cdots d_{m-1}\]
and
\[e_{\alpha,1}\cdots e_{\alpha,m-1}=e_{1}\cdots e_{m-1}.\]
Moreover, \(\ell_{\alpha}(1)=m-1\) implies \(S_{\alpha}^{m-1}(1)\in(1/\beta,\alpha/\beta]\) and hence \(d_{\alpha,m}=1\). Since \(e_{\alpha,m-1}=e_{m-1}=-1\), it follows that \(e_{\alpha,m}=0=e_{m}\). In (iii), we have
\[d_{\alpha,1}\cdots d_{\alpha,m-1}=d_{1}\cdots d_{m-1}\]
and
\[e_{\alpha,1}\cdots e_{\alpha,m-2}=e_{1}\cdots e_{m-2}.\]
Moreover, Lemma 2.5 gives \(S_{\alpha}^{m-1}(1)=-S_{\alpha}^{m-2}(1-\alpha)=\alpha/\beta\), so \(d_{\alpha,m}=\overline{e_{\alpha,m-1}}=1\) and \(e_{\alpha,m}=0\).
By Corollary 2.6, (i), (ii) and (iii) are implied by showing:
1. \(1/\overline{I_{\mathbf{d}}}\subsetneq\Delta(d_{1}\cdots d_{m-1})\) and \(1-1/\overline{I_{\mathbf{d}}}\subsetneq\Delta(\overline{e_{1}\cdots e_{m-2}})\);
2. \(B^{j}(1/\alpha)\notin(1/\beta\alpha,1/\beta]\) for each \(0\leq j<m-1\), and \(B^{j}(1-1/\alpha)\notin(1/\beta\alpha,1/\beta]\) for each \(0\leq j<m-2\);
3. if \(\alpha\in(\alpha^{-},\alpha_{0})\), then \(B^{m-1}(1/\alpha)>1/\beta\) and \(B^{m-2}(1-1/\alpha)\in(1/\beta\alpha,1/\beta]\);
4. if \(\alpha\in(\alpha_{0},\alpha^{+})\), then \(B^{m-1}(1/\alpha)\in(1/\beta\alpha,1/\beta]\) and \(B^{m-2}(1-1/\alpha)>1/\beta\); and
5. if \(\alpha=\alpha_{0}\), then \(B^{m-1}(1/\alpha)=B^{m-2}(1-1/\alpha)=1/\beta\).
We prove each of (a), (b), (c), (d) and (e):
1. The first inclusion is equivalent to \[v(d_{1}\cdots d_{m-1})<1/\alpha^{+}<1/\alpha^{-}<v(d_{1}\cdots d_{m-1})+1/ \beta^{m-1}.\] (9) Note that \(v(d_{1}\cdots d_{m-1})<1/\alpha^{+}\) if and only if \[v(\mathbf{d})-1/\beta^{m}<\frac{\beta^{m}v(\mathbf{d})-1}{\beta^{m}-1}.\] Multiplying both sides by \(\beta^{m}-1\), cancelling and rearranging terms, this is equivalent to \(v(\mathbf{d})>1/\beta^{m}\). This latter inequality holds since \(v(\mathbf{d})\geq v(d_{1})=1/\beta\) and \(m>1\). Next, \(1/\alpha^{-}<v(d_{1}\cdots d_{m-1})+1/\beta^{m-1}\) if and only if \[\frac{\beta^{m}v(\mathbf{d})+\beta}{\beta^{m}+\beta}<v(\mathbf{d})-1/\beta^{m }+1/\beta^{m-1}.\] Using the fact that \(1/\beta^{m-1}=1/\beta^{m}+1/\beta^{m+1}\) and multiplying both sides by \(\beta^{m}+\beta\), this is equivalent to \[\beta^{m}v(\mathbf{d})+\beta<(\beta^{m}+\beta)(v(\mathbf{d})+1/\beta^{m+1}),\] or \[\beta^{m}v(\mathbf{d})+\beta<\beta^{m}v(\mathbf{d})+1/\beta+\beta v(\mathbf{d })+1/\beta^{m}.\] Simplifying, this is equivalent to showing \(1<\beta v(\mathbf{d})+1/\beta^{m}\), which again holds since \(v(\mathbf{d})\geq 1/\beta\). Thus \(1/\overline{I_{\mathbf{d}}}\subsetneq\Delta(d_{1}\cdots d_{m-1})\). The second inclusion is equivalent to \[v(\overline{e_{1}\cdots e_{m-2}})<1-1/\alpha^{-}<1-1/\alpha^{+}<v(\overline{e _{1}\cdots e_{m-2}})+1/\beta^{m-2}.\] Now \(v(\overline{e_{1}\cdots e_{m-2}})<1-1/\alpha^{-}\) if and only if \(1/\alpha^{-}<1-(v(\overline{\mathbf{e}})-1/\beta^{m-1})\). By Proposition 2.15, the fact that \(v(\overline{\mathbf{e}})=-v(\mathbf{e})\) and (9), \[1-(v(\overline{\mathbf{e}})-1/\beta^{m-1})=v(\mathbf{d})+1/\beta^{m-1}>v(d_{1} \cdots d_{m-1})+1/\beta^{m-1}>1/\alpha^{-}.\] Lastly, \(1-1/\alpha^{+}<v(\overline{e_{1}\cdots e_{m-2}})+1/\beta^{m-2}\) if and only if \(1-1/\alpha^{+}<v(\overline{\mathbf{e}})-1/\beta^{m-1}+1/\beta^{m-2}\), or \(v(\mathbf{d})<1/\alpha^{+}+1/\beta^{m}\). From (9), we find \[v(\mathbf{d})-1/\beta^{m}=v(d_{1}\cdots d_{m-1})<1/\alpha^{+}.\] Thus \(1-1/\overline{I_{\mathbf{d}}}\subsetneq\Delta(\overline{e_{1}\cdots e_{m-2}})\).
2. Fix \(0\leq j<m-1\). If \(d_{j+1}=1\), then part (a) and Lemma 2.12 imply that \(B^{j}(1/\alpha)>B^{j}(1/\alpha^{+})\geq 1/\beta\). Now suppose \(d_{j+1}=0\). By (a), \(B^{j}(1/\alpha^{-})\in(1/\beta\alpha^{-},1/\beta]\) if and only if \(B^{j+1}(1/\alpha^{-})\in(1/\alpha^{-},1]\). Lemma 2.17 thus implies \(B^{j}(1/\alpha^{-})\notin(1/\beta\alpha^{-},1/\beta]\). By Equation (6), it also holds for each \(x\in\Delta(d_{1}\cdots d_{m-1})\) that \(B^{j}(x)\notin(x/\beta,1/\beta]\) if and only if \[\beta^{j}(x-v(d_{1}\cdots d_{j}))\leq x/\beta,\]
or
\[x\leq\frac{\beta^{j}v(d_{1}\cdots d_{j})}{\beta^{j}-1/\beta}.\] Since \(1/\alpha,1/\alpha^{-}\in\Delta(d_{1}\cdots d_{m-1})\) and \(B^{j}(1/\alpha^{-})\notin(1/\beta\alpha^{-},1/\beta]\), we have \[1/\alpha<1/\alpha^{-}\leq\frac{\beta^{j}v(d_{1}\cdots d_{j})}{\beta^{j}-1/\beta},\] which implies \(B^{j}(1/\alpha)\notin(1/\beta\alpha,1/\beta]\). Thus \(B^{j}(1/\alpha)\notin(1/\beta\alpha,1/\beta]\) for each \(0\leq j<m-1\). The proof that \(B^{j}(1-1/\alpha)\notin(1/\beta\alpha,1/\beta]\) for each \(0\leq j<m-2\) is similar.
3. Suppose \(\alpha\in(\alpha^{-},\alpha_{0})\). From Equation (6) and part (a), we have for each \(x\in 1/\overline{I_{\mathbf{d}}}\) that \[B^{m-1}(x) =\beta^{m-1}(x-v(d_{1}\cdots d_{m-1}))\] (10) \[=\beta^{m-1}(x-(v(\mathbf{d})-1/\beta^{m}))\] Since \(1/\alpha>1/\alpha_{0}=v(\mathbf{d})\), we have \(B^{m-1}(1/\alpha)>1/\beta\). Also from Equation (6), part (a) and Proposition 2.15, for each \(x\in 1/\overline{I_{\mathbf{d}}}\), \[B^{m-2}(1-x) =\beta^{m-2}(1-x-v(\overline{e_{1}\cdots e_{m-2}}))\] (11) \[=\beta^{m-2}(1-x+v(\mathbf{e})+1/\beta^{m-1})\] \[=\beta^{m-2}(-x+v(\mathbf{d})+1/\beta^{m-1})\] \[=-\beta^{m-2}x+\beta^{m-2}v(\mathbf{d})+1/\beta.\] Hence \[B^{m-2}(1-1/\alpha)<B^{m-2}(1-1/\alpha_{0})=1/\beta,\] and \(B^{m-2}(1-1/\alpha)>1/\beta\alpha\) if and only if \[\frac{\beta^{m-2}v(\mathbf{d})+1/\beta}{\beta^{m-2}+1/\beta}>1/\alpha.\] But the left hand side equals \(1/\alpha^{-}\), so the inequality holds.
4. Suppose \(\alpha\in(\alpha_{0},\alpha^{+})\). From Equation (10), \(1/\alpha<1/\alpha_{0}=v(\mathbf{d})\) implies \(B^{m-1}(1/\alpha)<1/\beta\). Moreover, \(B^{m-1}(1/\alpha)>1/\beta\alpha\) if and only if \[1/\alpha>\frac{\beta^{m-1}v(\mathbf{d})-1/\beta}{\beta^{m-1}-1/\beta}.\] The right-hand side equals \(1/\alpha^{+}\), and \(\alpha<\alpha^{+}\) by assumption. We also find from Equation (11) that \[B^{m-2}(1-1/\alpha)=\beta^{m-2}(v(\mathbf{d})-1/\alpha)+1/\beta>1/\beta\] since \(1/\alpha<1/\alpha_{0}=v(\mathbf{d})\).
5. This again follows from Equations (10) and (11), setting \(x=1/\alpha_{0}=v(\mathbf{d})\).
The following proposition states that the interval \(I_{\mathbf{d}}\) contains the matching intervals corresponding to the matching word \(\mathbf{d}\); together with Proposition 2.18, this characterises matching intervals as the collection \(\{I_{\mathbf{d}}\}_{\mathbf{d}\in\mathcal{M}}\).
**Proposition 2.19**.: _If \(S_{\alpha}\) has matching with \(m(\alpha)=m\), then \(\alpha\in I_{\mathbf{d}}\), where \(\mathbf{d}=d_{1}\cdots d_{m}\) is the beginning of the \(S_{\alpha}\)-expansion of \(1\)._
Proof.: By Proposition 2.13, \(\mathbf{d}\in\mathcal{M}\), so \(I_{\mathbf{d}}\) is defined. The result holds for \(m\leq 2\) by the cases in §2.1, so assume \(m>2\) and let \(\mathbf{e}=e_{1}\cdots e_{m}\) denote the beginning of the \(S_{\alpha}\)-expansion of \(1-\alpha\). Recall from Equation (5) that
\[\begin{pmatrix}d_{m-2}d_{m-1}d_{m}\\ e_{m-2}e_{m-1}e_{m}\end{pmatrix}\in\left\{\begin{pmatrix}010\\ \overline{001}\end{pmatrix},\begin{pmatrix}001\\ \overline{010}\end{pmatrix}\right\}.\]
Assume \(d_{m}=0\) (the other case is similar). Lemma 2.3, Corollary 2.4 and the final digits of \(\mathbf{d}\) and \(\mathbf{e}\) imply that either
\[\text{(i)}\ \ S_{\alpha}^{m-2}(1)\in(1/\beta,\alpha/\beta]\quad\text{ or }\quad \text{(ii)}\ \ S_{\alpha}^{m-1}(1-\alpha)\in[-\alpha/\beta,-1/\beta).\]
It suffices to show that both (i) and (ii) imply \[\alpha\in I_{\mathbf{d}}=\left(\frac{\beta^{m}+1}{\beta^{m}v(\mathbf{d})+1},\frac {\beta^{m}-\beta}{\beta^{m}v(\mathbf{d})-\beta}\right).\]
1. Equation (3) gives \[S_{\alpha}^{m-2}(1)=\beta^{m-2}(1-\alpha v(d_{1}\cdots d_{m-2}))\in(1/\beta, \alpha/\beta].\] Note that \(v(d_{1}\cdots d_{m-2})=v(\mathbf{d})-1/\beta^{m-1}\), so \[1-\alpha(v(\mathbf{d})-1/\beta^{m-1})\in(1/\beta^{m-1},\alpha/\beta^{m-1}].\] Now \[1-\alpha(v(\mathbf{d})-1/\beta^{m-1})>1/\beta^{m-1}\] implies \[\alpha<\frac{1-1/\beta^{m-1}}{v(\mathbf{d})-1/\beta^{m-1}}=\frac{\beta^{m}- \beta}{\beta^{m}v(\mathbf{d})-\beta}.\] Moreover, \[1-\alpha(v(\mathbf{d})-1/\beta^{m-1})\leq\alpha/\beta^{m-1}\] gives \(1\leq\alpha v(\mathbf{d})\). Thus we have \[\alpha\in\left[\frac{1}{v(\mathbf{d})},\frac{\beta^{m}-\beta}{\beta^{m}v( \mathbf{d})-\beta}\right),\] and it suffices to show that \[\frac{\beta^{m}+1}{\beta^{m}v(\mathbf{d})+1}<\frac{1}{v(\mathbf{d})}.\] But this is true since \(v(\mathbf{d})<v((10)^{\infty})=1\).
2. Again from Equation (3), \[S_{\alpha}^{m-1}(1-\alpha)=\beta^{m-1}(1-\alpha(1+v(e_{1}\cdots e_{m-1})))\in[ -\alpha/\beta,-1/\beta).\] The assumption that \(e_{m}=-1\) together with Proposition 2.15 give \[1+v(e_{1}\cdots e_{m-1})=1+v(\mathbf{e})+1/\beta^{m}=v(\mathbf{d})+1/\beta^{m},\] so \[1-\alpha(v(\mathbf{d})+1/\beta^{m})\in[-\alpha/\beta^{m},-1/\beta^{m}).\] Now \[1-\alpha(v(\mathbf{d})+1/\beta^{m})\geq-\alpha/\beta^{m}\] implies \(1\geq\alpha v(\mathbf{d})\). Furthermore, \[1-\alpha(v(\mathbf{d})+1/\beta^{m})<-1/\beta^{m}\] gives \[\alpha>\frac{1+1/\beta^{m}}{v(\mathbf{d})+1/\beta^{m}}=\frac{\beta^{m}+1}{\beta ^{m}v(\mathbf{d})+1}.\] Hence \[\alpha\in\left(\frac{\beta^{m}+1}{\beta^{m}v(\mathbf{d})+1},\frac{1}{v( \mathbf{d})}\right],\] and it suffices to show \[\frac{1}{v(\mathbf{d})}<\frac{\beta^{m}-\beta}{\beta^{m}v(\mathbf{d})-\beta}.\] This is true again since \(v(\mathbf{d})<1\).
The implications of Propositions 2.13, 2.16, 2.18 and 2.19 are summarised in the following:
**Corollary 2.20**.: _The sets \(\mathcal{M}\) and \(\{I_{\mathbf{d}}\}_{\mathbf{d}\in\mathcal{M}}\) classify the matching words and intervals, respectively, of the maps \(S_{\alpha}\)._
**Remark 2.21**.: _The results of this subsection also imply that \(\varphi(\mathcal{M})\) classifies the first \(m(\alpha)<\infty\) digits of the \(S_{\alpha}\)-expansions of \(1-\alpha\) for matching parameters \(\alpha\in[1,\beta]\). Moreover, the intervals \(I_{\mathbf{d}}\) in \(\{I_{\mathbf{d}}\}_{\mathbf{d}\in\mathcal{M}}=\{I_{\varphi^{-1}(\mathbf{e})}\}_{\mathbf{e}\in\varphi(\mathcal{M})}\) classify the maximal subintervals of matching parameters \(\alpha\) for which these first \(m(\alpha)\) digits coincide (and equal \(\mathbf{e}=\varphi(\mathbf{d})\))._
**Remark 2.22**.: _While not needed for our purposes, we briefly mention that the sets \(\mathcal{M}\) (or \(\varphi(\mathcal{M})\)) and \(\{I_{\mathbf{d}}\}_{\mathbf{d}\in\mathcal{M}}\) also give rise to classifications of the \(T_{\alpha}\)-expansions of \(1\) (resp. \(\beta(1-\alpha)\)) before matching and the maximal intervals of parameters \(\alpha\) on which these expansions coincide. In particular, if \(\mathbf{d}\in\mathcal{M}\) (resp. \(\mathbf{e}:=\varphi(\mathbf{d})\in\varphi(\mathcal{M})\)), then the corresponding \(T_{\alpha}\)-word \(\mathbf{d}^{\prime}\) (resp. \(\mathbf{e}^{\prime}\)) 'forgets' each non-terminal \(0\) which immediately follows a \(1\) (resp. \(-1\), and \(\mathbf{e}^{\prime}\) also forgets the initial \(0\) of \(\mathbf{e}\)). The matching intervals \(I_{\mathbf{d}}\) are unchanged. For instance, \(\mathbf{d}=10100001\) and \(\mathbf{e}=\varphi(\mathbf{d})=\overline{00001010}\) give rise to the words \(\mathbf{d}^{\prime}=110001\) and \(\mathbf{e}^{\prime}=\overline{000110}\) for \(T_{\alpha}\), and each of these words corresponds to the matching interval \(I_{\mathbf{d}}=\left(\frac{\beta^{8}+\beta}{\beta^{7}+\beta^{5}+\beta^{2}},\frac{\beta^{8}-1}{\beta^{7}+\beta^{5}}\right)\)._
### Cascades of matching intervals
Here it is shown that each _unexceptional matching interval_\(I_{\mathbf{d}},\ \mathbf{d}\in\mathcal{M}_{U}\), generates a whole 'cascade' of unexceptional matching intervals with adjacent endpoints. Define \(\psi:\mathcal{M}_{U}\to\{0,1\}^{*}\), where for \(\mathbf{d}=d_{1}\cdots d_{m}\in\mathcal{M}_{U}\) and \(\mathbf{e}:=\varphi(\mathbf{d})=e_{1}\cdots e_{m}\),
\[\psi(\mathbf{d})=\begin{cases}\mathbf{d}\overline{\mathbf{e}},&d_{m}=0\\ \mathbf{d}\overline{e_{2}\cdots e_{m}},&d_{m}=1\end{cases}.\]
Recall the definition of the matching interval \(I_{\mathbf{d}}=(\alpha_{\mathbf{d}}^{-},\alpha_{\mathbf{d}}^{+})\) from (7).
**Proposition 2.23**.: _The map \(\psi\) preserves Property \(M\), i.e. \(\psi(\mathcal{M}_{U})\subset\mathcal{M}_{U}\). Moreover, \(\alpha_{\mathbf{d}}^{-}=\alpha_{\psi(\mathbf{d})}^{+}\) for each \(\mathbf{d}\in\mathcal{M}_{U}\)._
Proof.: Let \(\mathbf{d}=d_{1}\cdots d_{m}\in\mathcal{M}_{U}\), and assume \(d_{m}=0\) (the other case is similar). We first show \(\alpha_{\mathbf{d}}^{-}=\alpha_{\psi(\mathbf{d})}^{+}\), assuming \(\psi(\mathcal{M}_{U})\subset\mathcal{M}_{U}\). We compute
\[\alpha_{\psi(\mathbf{d})}^{+} =\frac{\beta^{2m}-1}{\beta^{2m}v(\mathbf{d}\overline{\mathbf{e}})-1}\] \[=\frac{(\beta^{m}+1)(\beta^{m}-1)}{\beta^{2m}(v(\mathbf{d})-(1/\beta^{m})v(\mathbf{e}))-1}\] \[=\frac{(\beta^{m}+1)(\beta^{m}-1)}{\beta^{2m}v(\mathbf{d})-\beta^{m}(v(\mathbf{d})-1)-1}\] \[=\frac{(\beta^{m}+1)(\beta^{m}-1)}{(\beta^{m}v(\mathbf{d})+1)(\beta^{m}-1)}\] \[=\frac{\beta^{m}+1}{\beta^{m}v(\mathbf{d})+1}\] \[=\alpha_{\mathbf{d}}^{-}\]
as desired. Now we prove that \(\mathbf{d}^{\prime}:=\psi(\mathbf{d})\in\mathcal{M}_{U}\). Clearly \(\mathbf{d}^{\prime}\notin\{10,1001\}\), so we need only show \(\mathbf{d}^{\prime}\in\mathcal{M}\). Write
\[\mathbf{d}=1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}0\]
with \(i_{n}=2\) and
\[\mathbf{e}=\varphi(\mathbf{d})=\overline{0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w }_{2-i_{n}}1}.\]
Then
\[\mathbf{d}^{\prime} =\mathbf{d}\overline{\mathbf{e}}\] \[=1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}0\mathbf{w}_{2-i_{1} }\cdots\mathbf{w}_{2-i_{n}}1\] \[=1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}\mathbf{w}_{0} \mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}1,\]
so \(\mathbf{d}^{\prime}\in\mathcal{B}\) is in admissible block form. To prove \(\mathbf{d}^{\prime}\in\mathcal{M}\), it remains to show for each \(j\geq 0\) that (i) \(\sigma^{j}(\mathbf{d}^{\prime})\preceq\mathbf{d}^{\prime}\) and (ii) \(\sigma^{j}(\overline{\varphi(\mathbf{d}^{\prime})})\preceq\mathbf{d}^{\prime}\). (Recall that \(\mathbf{d}\in\mathcal{M}\) implies the analogous inequalities hold for \(\mathbf{d}\).)
1. If \(j\geq m\), then \[\sigma^{j}(\mathbf{d}^{\prime})=\sigma^{j}(\mathbf{d}\overline{\mathbf{e}})= \sigma^{j-m}(\overline{\mathbf{e}})\preceq\mathbf{d}\preceq\mathbf{d}^{\prime}.\] Assume \(j<m\), and suppose for the sake of contradiction that \(\sigma^{j}(\mathbf{d}^{\prime})\succ\mathbf{d}^{\prime}\). Since \(\mathbf{d}^{\prime}\) begins with \(1\), so does \(\sigma^{j}(\mathbf{d}^{\prime})\). Thus either \[\sigma^{j}(\mathbf{d}^{\prime})=1\mathbf{w}_{i_{\ell}}\cdots\mathbf{w}_{i_{n}} \mathbf{w}_{0}\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}1\] for some \(1<\ell\leq n\), or \[\sigma^{j}(\mathbf{d}^{\prime})=1\mathbf{w}_{0}\mathbf{w}_{2-i_{1}}\cdots \mathbf{w}_{2-i_{n}}1.\] Since \(\mathbf{w}_{0}\prec\mathbf{w}_{2}=\mathbf{w}_{i_{1}}\), the second case is impossible and we must have \[1\mathbf{w}_{i_{\ell}}\cdots\mathbf{w}_{i_{n}}\mathbf{w}_{0}\mathbf{w}_{2-i_{ 1}}\cdots\mathbf{w}_{2-i_{n}}1\succ 1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}} \mathbf{w}_{0}\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}1\] for some \(\ell\). Since \(\sigma^{j}(\mathbf{d})\preceq\mathbf{d}\), it follows that \[1\mathbf{w}_{i_{\ell}}\cdots\mathbf{w}_{i_{n}}=1\mathbf{w}_{i_{1}}\cdots \mathbf{w}_{i_{n-\ell+1}}\] and thus \[\mathbf{w}_{0}\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}1\succ\mathbf{w }_{i_{n-\ell+2}}\cdots\mathbf{w}_{i_{n}}\mathbf{w}_{0}\mathbf{w}_{2-i_{1}} \cdots\mathbf{w}_{2-i_{n}}1.\] Then either there is some \(1\leq p\leq\ell-3\) for which \[(0,2-i_{1},\ldots,2-i_{p-1})=(i_{n-\ell+2},i_{n-\ell+3},\ldots,i_{n+p-\ell+1})\] and \(2-i_{p}>i_{n+p-\ell+2}\), or \[(0,2-i_{1},\ldots,2-i_{\ell-2})=(i_{n-\ell+2},i_{n-\ell+3},\ldots,i_{n}).\] In the first case, \[(2-i_{n-\ell+2},2-i_{n-\ell+3},\ldots,2-i_{n+p-\ell+1})=(2,i_{1},\ldots,i_{p-1})\] and \(2-i_{n+p-\ell+2}>i_{p}\). Thus there exists some \(k\geq 0\) for which \[\sigma^{k}(\overline{\mathbf{e}}) =1\mathbf{w}_{2-i_{n-\ell+3}}\cdots\mathbf{w}_{2-i_{n+p-\ell+1}} \mathbf{w}_{2-i_{n+p-\ell+2}}\cdots\mathbf{w}_{2-i_{n}}1\] \[\succ 1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{p-1}}\mathbf{w}_{i_{p}} \cdots\mathbf{w}_{i_{n}}0\] \[=\mathbf{d},\] contradicting the fact that \(\mathbf{d}\in\mathcal{M}\). In the second case, \[(2-i_{n-\ell+2},2-i_{n-\ell+3},\ldots,2-i_{n})=(2,i_{1},\ldots,i_{\ell-2}).\] Since \(i_{n}=2\) implies \(i_{\ell-2}=0\), there is again some \(k\geq 0\) for which \[\sigma^{k}(\overline{\mathbf{e}}) =1\mathbf{w}_{2-i_{n-\ell+3}}\cdots\mathbf{w}_{2-i_{n-1}}\mathbf{ w}_{2-i_{n}}1\] \[=1\mathbf{w}_{2-i_{n-\ell+3}}\cdots\mathbf{w}_{2-i_{n-1}}\mathbf{ w}_{1}\] \[\succ 1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{\ell-3}}\mathbf{w}_{i_{ \ell-2}}\cdots\mathbf{w}_{i_{n}}0\] \[=\mathbf{d},\] contradicting \(\mathbf{d}\in\mathcal{M}\).
2. Set \(\mathbf{e}^{\prime}:=\varphi(\mathbf{d}^{\prime})\), write \(\mathbf{e}=e_{1}\cdots e_{m}\), and recall that \(d_{m}=0\) implies \(e_{m}=-1\). Then \[\mathbf{e}^{\prime} =\overline{0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}\mathbf{w}_{2}\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}0}\] \[=e_{1}\cdots e_{m-1}0\overline{\mathbf{d}}.\] If \(j<m-1\), then \[\sigma^{j}(\overline{\mathbf{e}^{\prime}})=\overline{e_{j+1}\cdots e_{m-1}}\,0\mathbf{d}\prec\overline{e_{j+1}\cdots e_{m}}=\sigma^{j}(\overline{\mathbf{e}})\preceq\mathbf{d}\preceq\mathbf{d}^{\prime}.\] If \(j=m-1\), then \[\sigma^{j}(\overline{\mathbf{e}^{\prime}})=0\mathbf{d}\prec\mathbf{d}^{\prime},\] and if \(j\geq m\), then \[\sigma^{j}(\overline{\mathbf{e}^{\prime}})=\sigma^{j-m}(\mathbf{d})\preceq\mathbf{d}\preceq\mathbf{d}^{\prime}.\] This concludes the proof that \(\mathbf{d}^{\prime}=\psi(\mathbf{d})\in\mathcal{M}\) and thus \(\psi(\mathcal{M}_{U})\subset\mathcal{M}_{U}\).
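For instance, for \(\mathbf{d}=1010\) one has \(\mathbf{e}=\varphi(\mathbf{d})=\overline{0001}\) and \(d_{m}=0\), so \(\psi(1010)=1010\,0001=10100001\), the word considered in Remark 2.22; and indeed
\[\alpha_{10100001}^{+}=\frac{\beta^{8}-1}{\beta^{7}+\beta^{5}}=\frac{(\beta^{4}+1)\beta(\beta^{2}+1)}{\beta^{5}(\beta^{2}+1)}=\frac{\beta^{4}+1}{\beta^{4}}=\frac{\beta^{4}+1}{\beta^{3}+\beta+1}=\alpha_{1010}^{-},\]
using \(\beta^{4}-1=\beta(\beta^{2}+1)\) and \(\beta^{4}=\beta^{3}+\beta^{2}=\beta^{3}+\beta+1\).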
## 3. Invariant measures and frequencies of digits
As noted above, our main interest in matching arises from results of [17] which provide explicit expressions for the densities of absolutely continuous invariant measures. These densities depend on the orbits of the left and right limits at critical points and are in general infinite sums of (finite) step functions; however, the infinite sum becomes finite when either matching or a Markov partition occurs. These observations are used in this section to obtain explicit invariant measures \(\nu_{\alpha}\) and \(\mu_{\alpha}\) for the maps \(S_{\alpha}\) and \(T_{\alpha}\), respectively, and asymptotic relative frequencies of digits occurring in their respective generic expansions. These measures and frequencies are used in the proofs of Theorems 1.1 and 1.2.
Recall that \(B(x):=\beta x\ (\text{mod }1)\). It is well known that
\[h(x):=\begin{cases}\frac{5+3\sqrt{5}}{10},&x\in[0,1/\beta)\\ \frac{5+\sqrt{5}}{10},&x\in[1/\beta,1]\end{cases}\]
is the density of a unique, ergodic, \(B\)-invariant probability measure which is equivalent to Lebesgue measure \(\lambda\) ([22]). By Birkhoff's ergodic theorem, the frequency of \(0\) in \(\lambda\)-a.e. \(\beta\)-expansion is \(\int_{[0,1/\beta)}hd\lambda=(5+\sqrt{5})/10\). When \(\alpha=1\), the map \(S_{\alpha}=S_{1}\) restricts on \([0,1]\backslash\{1/\beta\}\) to \(B\) and on \([-1,0]\backslash\{-1/\beta\}\) to \(-B(-x)\). Since \(S_{1}\) leaves each of \([0,1]\) and \([-1,0]\) invariant, we find that the frequency of \(0\) in \(\lambda\)-a.e. \(S_{1}\)-expansion is also \((5+\sqrt{5})/10\). Define \(f_{1}:[-1,1]\to[-1,1]\) by \(f_{1}(x)=h(|x|)/2\), and recall the definitions of the subintervals \(J_{i}\subset[-1,1],\;i\in\{-1,0,1\}\) from §1. Note, then, that the measure \(\nu_{1}\) defined on Lebesgue-measurable \(A\subset[-1,1]\) by \(\nu_{1}(A)=\int_{A}f_{1}d\lambda\) satisfies \(\nu_{1}(J_{0})=(5+\sqrt{5})/10\).
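As a quick check of the normalisation, \(\lambda([0,1/\beta))=1/\beta=(\sqrt{5}-1)/2\) and \(\lambda([1/\beta,1])=1/\beta^{2}=(3-\sqrt{5})/2\), so
\[\int_{0}^{1}h\,d\lambda=\frac{5+3\sqrt{5}}{10}\cdot\frac{\sqrt{5}-1}{2}+\frac{5+\sqrt{5}}{10}\cdot\frac{3-\sqrt{5}}{2}=\frac{5+\sqrt{5}}{10}+\frac{5-\sqrt{5}}{10}=1.\]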
A similar analysis (with Lebesgue measure) reveals that the frequency of \(0\) in \(\lambda\)-a.e. \(T_{1}\)-expansion is \(1/\beta\). Setting \(\mu_{1}:=\lambda/2\) as normalised Lebesgue measure gives \(\mu_{1}(J_{0})=1/\beta\). In what follows we consider \(\alpha\neq 1\).
### Invariant measures
Let \(\alpha\in(1,\beta]\). Following a procedure completely analogous to that in §2.1 of [13], results of [17] imply that the collection of absolutely continuous \(S_{\alpha}\)-invariant measures forms a one-dimensional real linear space, and thus there is a unique--and hence ergodic--absolutely continuous invariant probability measure \(\nu_{\alpha}\). Moreover, its corresponding probability density is given explicitly by
\[f_{\alpha}(x):=\frac{1}{C}\sum_{t\geq 0}\frac{1}{\beta^{t+1}}\left(1_{[-1,S_{ \alpha}^{t}(\alpha-1))}(x)-1_{[-1,S_{\alpha}^{t}(-1))}(x)+1_{[-1,S_{\alpha}^{ t}(1))}(x)-1_{[-1,S_{\alpha}^{t}(1-\alpha))}(x)\right),\]
where \(C\in\mathbb{R}\) is some normalising constant. Symmetry of \(S_{\alpha}\) together with Proposition 2.1 allow us to rewrite \(f_{\alpha}(x)\) as
\[f_{\alpha}(x)=\frac{1}{C}\sum_{t\geq 0}\frac{1}{\beta^{t+1}}\left(1_{[S_{ \alpha}^{t}(-1),S_{\alpha}^{t}(\alpha-1))}(x)+1_{[S_{\alpha}^{t}(1-\alpha),S_{ \alpha}^{t}(1))}(x)\right). \tag{12}\]
Note that \(f_{\alpha}\) is bounded away from \(0\) on \([-1,1)\), so \(\nu_{\alpha}\) is in fact equivalent to Lebesgue measure \(\lambda\). Also observe that when matching (or a Markov partition) occurs, the summation becomes a finite sum and \(f_{\alpha}(x)\) is a (finite) step function (see Figure 3).
The measure \(\nu_{\alpha}\) can now be used to obtain a unique, absolutely continuous \(T_{\alpha}\)-invariant measure \(\mu_{\alpha}=\int g_{\alpha}d\lambda\). For each \(\alpha\in(1,\beta]\), define a probability measure
\[\mu_{\alpha}(A):=\frac{\nu_{\alpha}\left(S_{\alpha}^{-1}(A)\cap J_{0}\right)}{\nu_{\alpha}(J_{0})} \tag{13}\]
on \([-1,1]\), where \(A\subset[-1,1]\) is Lebesgue-measurable. Note that \(S_{\alpha}^{-1}(A)\cap J_{0}=\frac{1}{\beta}A\), so \(\mu_{\alpha}\) may also be written \(\mu_{\alpha}(A)=\nu_{\alpha}(\frac{1}{\beta}A)/\nu_{\alpha}(J_{0})\).
**Theorem 3.1**.: _The measure \(\mu_{\alpha}\) is the unique--hence ergodic--invariant probability measure for \(T_{\alpha}\) which is absolutely continuous with respect to Lebesgue measure. Moreover, \(\mu_{\alpha}\) is equivalent to Lebesgue measure._
Proof.: Since \(T_{\alpha}\) is an expanding, piecewise \(C^{2}\) monotone map, results of [19] imply the existence of an invariant probability measure \(\rho_{\alpha}\) for \(T_{\alpha}\) which is absolutely continuous with respect to Lebesgue measure. Let \(J_{\pm 1}:=J_{-1}\cup J_{1}\). As \(T_{\alpha}\) is a jump transformation for \(S_{\alpha}\), the measure \(\rho_{\alpha}\) induces an \(S_{\alpha}\)-invariant measure defined by
\[\tilde{\rho}_{\alpha}(A):=\rho_{\alpha}(A)+\rho_{\alpha}\left(S_{\alpha}^{-1} (A)\cap J_{\pm 1}\right) \tag{14}\]
(see, e.g. Proposition 11.4.1 of [14]). Note that for any \(A\subset J_{\pm 1}\) we have \(S_{\alpha}^{-1}(A)\subset J_{0}\), so (14) gives \(\tilde{\rho}_{\alpha}(A)=\rho_{\alpha}(A)\). Then for any measurable \(A\subset[-1,1]\),
\[\tilde{\rho}_{\alpha}\left(S_{\alpha}^{-1}(A)\cap J_{\pm 1}\right)=\rho_{ \alpha}\left(S_{\alpha}^{-1}(A)\cap J_{\pm 1}\right)\]
and (14) gives
\[\rho_{\alpha}(A)=\tilde{\rho}_{\alpha}(A)-\tilde{\rho}_{\alpha}\left(S_{ \alpha}^{-1}(A)\cap J_{\pm 1}\right).\]
Since \(\tilde{\rho}_{\alpha}\) is \(S_{\alpha}\)-invariant, the previous line may be rewritten
\[\rho_{\alpha}(A)=\tilde{\rho}_{\alpha}(S_{\alpha}^{-1}(A))-\tilde{\rho}_{ \alpha}\left(S_{\alpha}^{-1}(A)\cap J_{\pm 1}\right)=\tilde{\rho}_{\alpha}(S_{ \alpha}^{-1}(A)\cap J_{0}).\]
Recall that \(\nu_{\alpha}\) is the unique invariant, absolutely continuous probability measure for \(S_{\alpha}\), so \(\tilde{\rho}_{\alpha}=c\nu_{\alpha}\) for some \(c>0\). Thus
\[\rho_{\alpha}(A)=c\nu_{\alpha}\left(S_{\alpha}^{-1}(A)\cap J_{0}\right),\]
and setting \(A=[-1,1]\) gives \(c=1/\nu_{\alpha}(J_{0})\). Hence \(\rho_{\alpha}=\mu_{\alpha}\).
That \(\mu_{\alpha}\) is equivalent to Lebesgue measure \(\lambda\) follows immediately from the fact that \(\nu_{\alpha}\) is equivalent to \(\lambda\) and the observation above that \(\mu_{\alpha}(A)=\nu_{\alpha}(\frac{1}{\beta}A)/\nu_{\alpha}(J_{0})\).
We are now ready to prove Theorem 1.1:
Proof of Theorem 1.1.: Theorem 3.1 asserts the existence of a unique, absolutely continuous \(T_{\alpha}\)-invariant probability measure \(\mu_{\alpha}\) which is in fact equivalent to Lebesgue measure. It remains to show that for fixed \(\mathbf{d}\in\mathcal{M}\), the density \(g_{\alpha}\) of each \(\mu_{\alpha},\ \alpha\in I_{\mathbf{d}}\), is a step function with at most the same, finite number of jumps. Using a change of variables, one finds that
\[\mu_{\alpha}(A)=\frac{\nu_{\alpha}(\frac{1}{\beta}A)}{\nu_{\alpha}(J_{0})}= \frac{1}{\nu_{\alpha}(J_{0})}\int_{\frac{1}{\beta}A}f_{\alpha}(x)d\lambda(x)= \frac{1}{\beta\nu_{\alpha}(J_{0})}\int_{A}f_{\alpha}(x/\beta)d\lambda(x),\]
so
\[g_{\alpha}(x)=\frac{f_{\alpha}(x/\beta)}{\beta\nu_{\alpha}(J_{0})}.\]
Since, by (12), \(f_{\alpha}\) is a linear combination of at most \(2m(\alpha)\) indicator functions and \(m(\alpha)\) is constant on \(I_{\mathbf{d}}\), the result follows.
**Remark 3.2**.: _The number of jumps of the invariant densities \(f_{\alpha}\) and \(g_{\alpha}\) for \(S_{\alpha}\) and \(T_{\alpha}\), respectively, is not constant on matching intervals \(I_{\mathbf{d}}\). Figure 3 shows these densities for three values of \(\alpha\) in the matching interval \(I_{\mathbf{d}}\approx(1.14589\ldots,1.23606\ldots)\) with \(\mathbf{d}=1010\). Note that the number of jumps is smaller for \(\alpha=1/v(\mathbf{d})\). One can show that this phenomenon generalises to all matching intervals; in fact, for each \(\mathbf{d}\in\mathcal{M}\), the number of jumps of \(f_{\alpha}\) and \(g_{\alpha}\), respectively, is constant for all but finitely many \(\alpha\in I_{\mathbf{d}}\), and the number of jumps decreases for \(\alpha=1/v(\mathbf{d})\in I_{\mathbf{d}}\)._
### Frequencies of digits
We are now in a position to determine the frequencies of digits in generic \(S_{\alpha}\)- and \(T_{\alpha}\)-expansions. Define \(\mathfrak{f}_{S},\mathfrak{f}_{T}:[1,\beta]\to[0,1]\) by
\[\mathfrak{f}_{S}(\alpha):=\nu_{\alpha}(J_{0})\quad\text{ and }\quad\mathfrak{f}_{T} (\alpha):=\mu_{\alpha}(J_{0}).\]
For \(\alpha\neq 1\), Birkhoff's ergodic theorem--together with the equivalence of the ergodic measures \(\nu_{\alpha}\) and \(\mu_{\alpha}\) with Lebesgue measure \(\lambda\)--implies that the asymptotic frequencies
\[\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}1_{J_{0}}(S_{\alpha}^{i}(x))\quad \text{ and }\quad\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}1_{J_{0}}(T_{\alpha}^{i}(x))\]
of the digit \(0\) in Lebesgue-a.e. \(S_{\alpha}\)- and \(T_{\alpha}\)-expansion are given by \(\mathfrak{f}_{S}(\alpha)\) and \(\mathfrak{f}_{T}(\alpha)\), respectively. Indeed, with the discussion and notation given at the beginning of §3, \(\mathfrak{f}_{S}(1)\) and \(\mathfrak{f}_{T}(1)\) also give the generic asymptotic frequencies of the digit \(0\). Note, too, that the frequencies of the digits \(\pm 1\) are readily obtained from the frequency of \(0\).
As in the proof of Theorem 3.1, set \(J_{\pm 1}:=J_{-1}\cup J_{1}\). Using (13) and the \(S_{\alpha}\)-invariance of \(\nu_{\alpha}\), one has for any measurable \(A\subset[-1,1]\),
\[\mu_{\alpha}(A)=\frac{\nu_{\alpha}(S_{\alpha}^{-1}(A))-\nu_{\alpha}(S_{\alpha }^{-1}(A)\cap J_{\pm 1})}{\nu_{\alpha}(J_{0})}=\frac{\nu_{\alpha}(A)-\nu_{ \alpha}(S_{\alpha}^{-1}(A)\cap J_{\pm 1})}{\nu_{\alpha}(J_{0})}.\]
Setting \(A=J_{0}\) and using the fact that \(S_{\alpha}^{-1}(J_{0})\cap J_{\pm 1}=J_{\pm 1}\), we find
\[\mu_{\alpha}(J_{0})=\frac{\nu_{\alpha}(J_{0})-\nu_{\alpha}(J_{\pm 1})}{\nu_{ \alpha}(J_{0})}=\frac{\nu_{\alpha}(J_{0})-(1-\nu_{\alpha}(J_{0}))}{\nu_{ \alpha}(J_{0})}\]
or
\[\mathfrak{f}_{T}(\alpha)=2-\frac{1}{\mathfrak{f}_{S}(\alpha)}. \tag{15}\]
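For instance, anticipating Example 3.4 below, a matching parameter with \(\mathfrak{f}_{S}(\alpha)=4/5\) has \(\mathfrak{f}_{T}(\alpha)=2-5/4=3/4\).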
**Proposition 3.3**.: _The frequency functions \(\mathfrak{f}_{S}\) and \(\mathfrak{f}_{T}\) are continuous._
Proof.: Arguments completely analogous to those in §4 of [13] give that \(\mathfrak{f}_{S}\) is continuous. Continuity of \(\mathfrak{f}_{T}\) is immediate from (15).
The remainder of this subsection is devoted to finding--for matching parameters \(\alpha\)--an explicit expression for \(\mathfrak{f}_{S}(\alpha)\) in terms of \(\alpha\) and its corresponding matching word \(\mathbf{d}\) (see Figure 4). Density of matching parameters in \([1,\beta]\), continuity of \(\mathfrak{f}_{S}\) and equation (15) then allow us to determine \(\mathfrak{f}_{S}(\alpha)\) and \(\mathfrak{f}_{T}(\alpha)\) for any \(\alpha\in[1,\beta]\) as limits of these explicit expressions. These expressions are then used in §3.3 to determine the maximal
frequency of the digit \(0\) occurring in generic \(S_{\alpha}\)- and \(T_{\alpha}\)-expansions, and it is shown that these maximal values are attained for \(\alpha\) in the interval \([1/2+1/\beta,1+1/\beta^{2}]\).
Assume that \(\alpha\in I_{\mathbf{d}},\ \mathbf{d}\in\mathcal{M}\), with matching index \(m:=m(\alpha)<\infty\), and recall the density \(f_{\alpha}\) from equation (12). We first find an expression for the normalising constant \(C\). By symmetry of \(S_{\alpha}\),
\[1 =\nu_{\alpha}([-1,1])\] \[=\int_{-1}^{1}f_{\alpha}(x)d\lambda(x)\] \[=\frac{2}{C}\sum_{t=0}^{m-1}\int_{-1}^{1}\frac{1}{\beta^{t+1}}1_ {[S_{\alpha}^{t}(1-\alpha),S_{\alpha}^{t}(1))}(x)d\lambda(x)\] \[=\frac{2}{C}\sum_{t=0}^{m-1}\frac{1}{\beta^{t+1}}\left(S_{\alpha} ^{t}(1)-S_{\alpha}^{t}(1-\alpha)\right).\]
Assume \(\alpha<1+1/\beta^{2}\) and write
\[\mathbf{d}=d_{1}\cdots d_{m}=1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}(1-i_ {n}/2).\]
For each \(i\in\{0,1,2\}\), let \(\ell(i)\in\{2,3\}\) denote the length of the block \(\mathbf{w}_{i}\)--explicitly, \(\ell(0)=\ell(2)=2\) and \(\ell(1)=3\)--and let \(p:=p_{\mathbf{d}}:\{1,\ldots,n\}\to\{1,\ldots,m-3\}\) be defined by \(p(k)=1+\sum_{j=1}^{k-1}\ell(i_{j})\) so that \(\sigma^{p(k)}(\mathbf{d})=\mathbf{w}_{i_{k}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)\). Recall from Figure 2 that \(S_{\alpha}^{0}(1)-S_{\alpha}^{0}(1-\alpha)=\alpha,\ S_{\alpha}^{m-1}(1)-S_{\alpha}^{m-1}(1-\alpha)=\alpha/\beta\), and that the remaining differences \(S_{\alpha}^{t}(1)-S_{\alpha}^{t}(1-\alpha)\) are determined by cycles of length two or three beginning at vertex \(\alpha/\beta\). In particular, if \(i_{k}\in\{0,2\}\), then \(S_{\alpha}^{p(k)}(1)-S_{\alpha}^{p(k)}(1-\alpha)=\alpha/\beta\) and \(S_{\alpha}^{p(k)+1}(1)-S_{\alpha}^{p(k)+1}(1-\alpha)=\alpha\) give a cycle of length two, while if \(i_{k}=1\), \(S_{\alpha}^{p(k)}(1)-S_{\alpha}^{p(k)}(1-\alpha)=\alpha/\beta,\ S_{\alpha}^{p(k)+1}(1)-S_{\alpha}^{p(k)+1}(1-\alpha)=\alpha\) and \(S_{\alpha}^{p(k)+2}(1)-S_{\alpha}^{p(k)+2}(1-\alpha)=\beta\alpha\) give a cycle of length three. We find for each \(k\in\{1,\ldots,n\}\) that
\[\sum_{t=p(k)}^{p(k)+\ell(i_{k})-1}\frac{1}{\beta^{t+1}}\left(S_{\alpha}^{t}(1 )-S_{\alpha}^{t}(1-\alpha)\right)=\frac{\ell(i_{k})}{\beta^{p(k)+2}}\alpha,\]
and thus
\[1 =\frac{2}{C}\sum_{t=0}^{m-1}\frac{1}{\beta^{t+1}}\left(S_{\alpha} ^{t}(1)-S_{\alpha}^{t}(1-\alpha)\right)\] \[=\frac{2}{C}\left(\frac{\alpha}{\beta}+\sum_{k=1}^{n}\sum_{t=p(k) }^{p(k)+\ell(i_{k})-1}\frac{1}{\beta^{t+1}}\left(S_{\alpha}^{t}(1)-S_{\alpha} ^{t}(1-\alpha)\right)+\frac{\alpha}{\beta^{m+1}}\right)\] \[=\frac{2\alpha}{C}\left(\frac{1}{\beta}+\sum_{k=1}^{n}\frac{\ell( i_{k})}{\beta^{p(k)+2}}+\frac{1}{\beta^{m+1}}\right). \tag{16}\]
Note that (16) also holds for \(\alpha>1+1/\beta^{2}\) (i.e. \(\mathbf{d}=10\)) with the summation over \(k\) set to zero. Define a substitution \(\xi:\{\mathbf{w}_{0},\mathbf{w}_{1},\mathbf{w}_{2}\}\to\{02,030\}\) by \(\xi(\mathbf{w}_{0})=\xi(\mathbf{w}_{2})=02\) and \(\xi(\mathbf{w}_{1})=030\), and let \(\Xi:\mathcal{M}\to\{0,1,2,3\}^{*}\) be given by \(\Xi(\mathbf{d})=101\) if \(\mathbf{d}=10\), and
\[\Xi(\mathbf{d})=1\xi(\mathbf{w}_{i_{1}})\cdots\xi(\mathbf{w}_{i_{n}})01\]
if \(\mathbf{d}=1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)\in\mathcal{M} \backslash\{10\}\). The left- and right-most sides of (16) may be written more succinctly as \(1=\frac{2\alpha}{C}v(\Xi(\mathbf{d}))\), and thus \(C=2\alpha v(\Xi(\mathbf{d}))\).
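For instance, writing \(1001=1\mathbf{w}_{0}1\) and \(1010=1\mathbf{w}_{2}0\), we get
\[\Xi(1001)=\Xi(1010)=10201,\]
so these two matching words produce the same normalising constant \(C=2\alpha v(10201)\) in (16); compare Example 3.4 below.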
Having found \(C\), we are now in a position to determine \(\mathfrak{f}_{S}(\alpha)\). Again by symmetry of \(S_{\alpha}\),
\[\mathfrak{f}_{S}(\alpha) =\nu_{\alpha}(J_{0})\] \[=1-\nu_{\alpha}(J_{-1})-\nu_{\alpha}(J_{1})\] \[=1-\int_{-1}^{-1/\beta}f_{\alpha}(x)d\lambda(x)-\int_{1/\beta}^{1 }f_{\alpha}(x)d\lambda(x)\] \[=1-\frac{2}{C}\sum_{t=0}^{m-1}\left(\int_{-1}^{-1/\beta}\frac{1}{ \beta^{t+1}}1_{[S_{\alpha}^{t}(1-\alpha),S_{\alpha}^{t}(1))}(x)d\lambda(x)+ \int_{1/\beta}^{1}\frac{1}{\beta^{t+1}}1_{[S_{\alpha}^{t}(1-\alpha),S_{\alpha}^ {t}(1))}(x)d\lambda(x)\right).\]
Write \(\mathbf{e}:=\varphi(\mathbf{d})=e_{1}\cdots e_{m}\). Since by Proposition 2.1, \(S_{\alpha}^{t}(1)\notin J_{-1}\) and \(S_{\alpha}^{t}(1-\alpha)\notin J_{1}\) for \(t<m\), the previous line may be rewritten as
\[\mathfrak{f}_{S}(\alpha) =1-\frac{2}{C}\left(\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ e_{t+1}=-1\end{subarray}}\frac{1}{\beta^{t+1}}(-1/\beta-S_{\alpha}^{t}(1-\alpha))+\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ d_{t+1}=1\end{subarray}}\frac{1}{\beta^{t+1}}(S_{\alpha}^{t}(1)-1/\beta)\right)\] \[=1-\frac{2}{C}\left(\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ d_{t+1}=1\end{subarray}}\frac{1}{\beta^{t+1}}S_{\alpha}^{t}(1)-\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ e_{t+1}=-1\end{subarray}}\frac{1}{\beta^{t+1}}S_{\alpha}^{t}(1-\alpha)-1/\beta\right),\]
where we have used Proposition 2.15 together with the facts that
\[\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ e_{t+1}=-1\end{subarray}}1/\beta^{t+1}=-v(\mathbf{e})\quad\text{ and }\quad\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ d_{t+1}=1\end{subarray}}1/\beta^{t+1}=v(\mathbf{d}).\]
Let \(\mathbf{d}_{1}^{0}=\mathbf{e}_{1}^{0}=\varepsilon\) be the empty word, and for \(1\leq t\leq m-1\) set \(\mathbf{d}_{1}^{t}:=d_{1}\cdots d_{t}\) and \(\mathbf{e}_{1}^{t}:=e_{1}\cdots e_{t}\). For each \(0\leq t\leq m-1\), equation (3) gives \(S_{\alpha}^{t}(1)=\beta^{t}(1-\alpha v(\mathbf{d}_{1}^{t}))\) and \(S_{\alpha}^{t}(1-\alpha)=\beta^{t}(1-\alpha-\alpha v(\mathbf{e}_{1}^{t}))\). Setting
\[\mathfrak{n}(\mathbf{d}):=\#\{1\leq j\leq m\ |\ d_{j}=1\}-\#\{1\leq j\leq m\ |\ e_{j}=-1\}, \tag{17}\]
the frequency function may be written as
\[\mathfrak{f}_{S}(\alpha) =1-\frac{2}{C}\left(\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ d_{t+1}=1\end{subarray}}\frac{1}{\beta^{t+1}}\beta^{t}(1-\alpha v(\mathbf{d}_{ 1}^{t}))-\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ e_{t+1}=-1\end{subarray}}\frac{1}{\beta^{t+1}}\beta^{t}(1-\alpha-\alpha v( \mathbf{e}_{1}^{t}))-1/\beta\right)\] \[=1-\frac{2}{\beta C}\left(\sum_{\begin{subarray}{c}0\leq t\leq m-1 \\ d_{t+1}=1\end{subarray}}(1-\alpha v(\mathbf{d}_{1}^{t}))-\sum_{\begin{subarray}{c }0\leq t\leq m-1\\ e_{t+1}=-1\end{subarray}}(1-\alpha-\alpha v(\mathbf{e}_{1}^{t}))-1\right)\] \[=1-\frac{2}{\beta C}\left(\mathfrak{n}(\mathbf{d})-\alpha\left( \sum_{\begin{subarray}{c}0\leq t\leq m-1\\ d_{t+1}=1\end{subarray}}v(\mathbf{d}_{1}^{t})-\sum_{\begin{subarray}{c}0\leq t \leq m-1\\ e_{t+1}=-1\end{subarray}}(1+v(\mathbf{e}_{1}^{t}))\right)-1\right).\]
Letting
\[K_{\mathbf{d}}:=\sum_{\begin{subarray}{c}0\leq t\leq m-1\\ d_{t+1}=1\end{subarray}}v(\mathbf{d}_{1}^{t})-\sum_{\begin{subarray}{c}0\leq t \leq m-1\\ e_{t+1}=-1\end{subarray}}(1+v(\mathbf{e}_{1}^{t}))\]
and recalling that \(C=2\alpha v(\Xi(\mathbf{d}))\), we find
\[\mathfrak{f}_{S}(\alpha)=1-\frac{1}{\beta v(\Xi(\mathbf{d}))}\left(\frac{ \mathfrak{n}(\mathbf{d})-1}{\alpha}-K_{\mathbf{d}}\right). \tag{18}\]
**Example 3.4**.: _Let \(\mathbf{d}=1001\). Then \(\mathbf{e}=\overline{0010}\), so \(\mathfrak{n}(\mathbf{d})=1\). Moreover,_
\[v(\Xi(\mathbf{d}))=v(10201)=\frac{1}{\beta}+\frac{2}{\beta^{3}}+\frac{1}{\beta^ {5}}\]
_and_
\[K_{\mathbf{d}}=v(\varepsilon)+v(100)-(1+v(\overline{00}))=-\frac{1}{\beta^{2}}.\]
_Thus for all \(\alpha\in I_{1001}\),_
\[\mathfrak{f}_{S}(\alpha)=1-\frac{1}{\beta^{3}(1/\beta+2/\beta^{3}+1/\beta^{5} )}=4/5.\]
_A similar calculation with \(\mathbf{d}=1010\) reveals that \(\mathfrak{f}_{S}(\alpha)=4/5\) also for all \(\alpha\in I_{1010}\)._
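That the common value is exactly \(4/5\) reflects the identities \(\beta^{2}=\beta+1\) and \(1/\beta^{2}=2-\beta\): indeed,
\[\beta^{3}\left(\frac{1}{\beta}+\frac{2}{\beta^{3}}+\frac{1}{\beta^{5}}\right)=\beta^{2}+2+\frac{1}{\beta^{2}}=(\beta+1)+2+(2-\beta)=5.\]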
Before turning toward the maximal frequency of the digit \(0\), we give an alternate expression for \(K_{\mathbf{d}}\) which will be helpful below. Note that the first summation in the definition of \(K_{\mathbf{d}}\) may be rewritten as the sum of all \(v(\mathbf{d}_{1}^{t}),\ 1\leq t\leq m\), for which \(d_{t}=1\), excluding the greatest such index \(t\). The second sum may be similarly rewritten (though an extra term \(1\) appears from the first non-zero summand of the original sum). Now suppose \(\mathbf{d}\neq 10\). Recalling that \(\{d_{m-2}d_{m-1}d_{m},\overline{e_{m-2}e_{m-1}e_{m}}\}=\{001,010\}\), we have
\[K_{\mathbf{d}} =\sum_{\begin{subarray}{c}1\leq t\leq m-3\\ d_{t}=1\end{subarray}}v(\mathbf{d}_{1}^{t})-\left(1+\sum_{\begin{subarray}{c}1 \leq t\leq m-3\\ e_{t}=-1\end{subarray}}(1+v(\mathbf{e}_{1}^{t}))\right)\] \[=v(1)+\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ i_{k}\in\{1,2\}\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})- \left(1+\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ 2-i_{k}\in\{1,2\}\end{subarray}}(1-v(0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2- i_{k}}))\right).\]
Recall that \(p(k+1),\ 1\leq k\leq n-1\), gives the power for which \(\sigma^{p(k+1)}(\mathbf{d})=\mathbf{w}_{i_{k+1}}\cdots\mathbf{w}_{i_{n}}(1-i_ {n}/2)\); in particular, \(p(k+1)\) equals the length of \(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}}\). By Lemma 2.14,
\[v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})+v(0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{k}})=\frac{1}{\beta}+\frac{1}{\beta}\left(\frac{1}{\beta}-\frac{1}{\beta^{p(k+1)}}\right)=1-1/\beta^{p(k+1)+1}.\]
Then
\[K_{\mathbf{d}} =\frac{1}{\beta}+\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ i_{k}\in\{1,2\}\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})- \left(1+\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ 2-i_{k}\in\{1,2\}\end{subarray}}\left(v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{ i_{k}})+1/\beta^{p(k+1)+1}\right)\right)\] \[=-\frac{1}{\beta^{2}}+\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}1\leq k\leq n-1\\ 2-i_{k}\in\{1,2\}\end{subarray}}1/\beta^{p(k+1)+1}.\]
The latter summation equals
\[\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ 2-i_{k}\in\{1,2\}\end{subarray}}1/\beta^{p(k+1)+1} =\frac{1}{\beta}v(0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n-1 }})\] \[=\frac{1}{\beta}\left(v(\overline{\mathbf{e}})-\frac{1}{\beta^{m-3 }}v(\mathbf{w}_{2-i_{n}}i_{n}/2)\right)\] \[=\frac{1}{\beta}\left(1-v(\mathbf{d})-\frac{1}{\beta^{m-3}}v( \mathbf{w}_{2-i_{n}}i_{n}/2)\right)\] \[=\frac{1}{\beta}\left(1-v(d_{1}\cdots d_{m-3})-\frac{1}{\beta^{m- 3}}v(011)\right)\] \[=\frac{1}{\beta}-\frac{1}{\beta}v(d_{1}\cdots d_{m-3})-\frac{1}{ \beta^{m-1}},\]
and thus for \(\mathbf{d}\in\mathcal{M}\backslash\{10\}\),
\[K_{\mathbf{d}}=-1+\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})+\frac{1} {\beta}v(d_{1}\cdots d_{m-3})+\frac{1}{\beta^{m-1}}. \tag{19}\]
### Maximal frequency of zero
Here we prove that the frequency functions \(\mathfrak{f}_{S}\) and \(\mathfrak{f}_{T}\) attain their maxima on the (maximal) interval \([1/2+1/\beta,1+1/\beta^{2}]\). We first need some preliminary results. Note that by (18), on the matching interval \(I_{\mathbf{d}}\) the frequency function \(\mathfrak{f}_{S}\) is strictly increasing in \(\alpha\) for \(\mathfrak{n}(\mathbf{d})>1\), strictly decreasing for \(\mathfrak{n}(\mathbf{d})<1\) and constant for \(\mathfrak{n}(\mathbf{d})=1\). By (15), the same monotonicity conditions hold for \(\mathfrak{f}_{T}\).
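Indeed, \(v(\Xi(\mathbf{d}))>0\) and \(K_{\mathbf{d}}\) do not depend on \(\alpha\), so differentiating (18) on \(I_{\mathbf{d}}\) gives
\[\frac{d}{d\alpha}\,\mathfrak{f}_{S}(\alpha)=\frac{\mathfrak{n}(\mathbf{d})-1}{\beta v(\Xi(\mathbf{d}))\,\alpha^{2}},\]
whose sign is that of \(\mathfrak{n}(\mathbf{d})-1\).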
The first of our preliminary results states that \(\mathfrak{f}_{S}\) (and hence \(\mathfrak{f}_{T}\)) is constant on 'cascade' intervals:
**Lemma 3.5**.: _For each \(\mathbf{d}\in\mathcal{M}_{U}\), we have \(\mathfrak{n}(\psi(\mathbf{d}))=1\). In particular, for each \(\mathbf{d}\in\mathcal{M}_{U}\), the frequency function \(\mathfrak{f}_{S}\) is constant on \([\lim_{n\to\infty}\alpha_{\psi^{n}(\mathbf{d})}^{-},\alpha_{\mathbf{d}}^{-}]\)._
Proof.: It suffices to prove the first statement; the second follows immediately from this, Proposition 2.23 and continuity of \(\mathfrak{f}_{S}\). Write
\[\mathbf{d}=d_{1}\cdots d_{m}=1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}(1-i_ {n}/2)\quad\text{and}\quad\mathbf{e}:=\varphi(\mathbf{d})=e_{1}\cdots e_{m}= \overline{0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}(i_{n}/2)}.\]
Observe that
\[\mathbf{d}^{\prime}:=\psi(\mathbf{d}) =\begin{cases}\mathbf{d}\overline{\mathbf{e}},&d_{m}=0\\ \mathbf{d}\overline{e_{2}\cdots e_{m}},&d_{m}=1\end{cases}\] \[=\begin{cases}1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}00 \mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}(i_{n}/2),&d_{m}=0\\ 1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n-1}}001\mathbf{w}_{2-i_{1}}\cdots \mathbf{w}_{2-i_{n}}(i_{n}/2),&d_{m}=1\end{cases}\] \[=\begin{cases}1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}\mathbf{ w}_{0}\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}(i_{n}/2),&d_{m}=0\\ 1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n-1}}\mathbf{w}_{1}\mathbf{w}_{2-i_{1 }}\cdots\mathbf{w}_{2-i_{n}}(i_{n}/2),&d_{m}=1\end{cases},\]
so
\[\mathbf{e}^{\prime}:=\varphi(\mathbf{d}^{\prime}) =\begin{cases}\overline{0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2- i_{n}}\mathbf{w}_{2}\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)},&d_{m}=0\\ \overline{0\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n-1}}\mathbf{w}_{1} \mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)},&d_{m}=1\end{cases}\] \[=\begin{cases}e_{1}\cdots e_{m-1}0\overline{\mathbf{d}},&d_{m}=0\\ e_{1}\cdots e_{m-2}0\overline{\mathbf{d}},&d_{m}=1\end{cases}.\]
Recall that if \(d_{m}=0\), then \(\overline{e_{m}}=1\). In this case \(\mathbf{d}^{\prime}\) has exactly one more digit \(1\) than does \(\overline{\mathbf{e}^{\prime}}\). If \(d_{m}=1\), then \(\overline{e_{m-1}e_{m}}=10\). Since \(e_{1}=0\), we see that in this case, too, \(\mathbf{d}^{\prime}\) has exactly one more digit \(1\) than does \(\overline{\mathbf{e}^{\prime}}\). Thus in both cases \(\mathfrak{n}(\mathbf{d}^{\prime})=1\).
We make note here of some computations which will be useful below. Let \(c,\ell\in\mathbb{Z}\) with \(\ell\geq 0\):
\[v((0c)^{\ell}) =c\sum_{j=1}^{\ell}1/\beta^{2j}=\frac{c}{\beta^{2}}\cdot\frac{1- 1/\beta^{2\ell}}{1-1/\beta^{2}}=\frac{c}{\beta}(1-1/\beta^{2\ell}) \tag{20}\] \[v((00c)^{\ell}) =c\sum_{j=1}^{\ell}1/\beta^{3j}=\frac{c}{\beta^{3}}\cdot\frac{1- 1/\beta^{3\ell}}{1-1/\beta^{3}}=\frac{c}{2\beta}(1-1/\beta^{3\ell})\] (21) \[v((0c0)^{\ell}) =\beta v((00c)^{\ell})=\frac{c}{2}(1-1/\beta^{3\ell})\] (22) \[v((000c)^{\ell}) =c\sum_{j=1}^{\ell}1/\beta^{4j}=\frac{c}{\beta^{4}}\frac{1-1/\beta ^{4\ell}}{1-1/\beta^{4}}=\frac{c}{\beta(\beta^{2}+1)}(1-1/\beta^{4\ell})\] (23) \[v((0c00)^{\ell}) =\beta^{2}v((000c)^{\ell})=\frac{c\beta}{\beta^{2}+1}(1-1/\beta^{4 \ell}). \tag{24}\]
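The closed forms above follow from geometric series together with the identities
\[\beta^{2}-1=\beta,\qquad\beta^{3}-1=2\beta,\qquad\beta^{4}-1=(\beta^{2}-1)(\beta^{2}+1)=\beta(\beta^{2}+1),\]
each a consequence of \(\beta^{2}=\beta+1\).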
**Lemma 3.6**.: _If \(\alpha\in I_{\mathbf{d}}\) for some \(\mathbf{d}\in\mathcal{M}\) with \(\mathfrak{n}(\mathbf{d})=1\), then \(\mathfrak{f}_{S}(\alpha)\leq 4/5\). Moreover, equality holds if and only if \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\)._
Proof.: Note that \(\mathfrak{n}(10)=0\), so we may assume \(\mathbf{d}\succ 10\). That \(\mathfrak{f}_{S}(\alpha)=4/5\) for all \(\alpha\in I_{1010}\cup I_{1001}\) was shown in Example 3.4. Thus we may assume that \(\mathbf{d}\succ 1010\). Write
\[\mathbf{d}=d_{1}\cdots d_{m}=1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{n}}(1-i_ {n}/2)=1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{t}\mathbf{Y}_{t}\mathbf{ w}_{i_{n}}(1-i_{n}/2),\]
where each \(\mathbf{X}_{s}\) and \(\mathbf{Y}_{s}\), \(1\leq s\leq t\), consists solely of \(\mathbf{w}_{2i}\)'s and \(\mathbf{w}_{1}\)'s, respectively, and each \(\mathbf{X}_{s},\ \mathbf{Y}_{s}\neq\varepsilon\) except possibly \(\mathbf{Y}_{t}\). Let \(\ell_{2s-1}:=\frac{1}{2}\mathrm{len}(\mathbf{X}_{s})\) and \(\ell_{2s}:=\frac{1}{3}\mathrm{len}(\mathbf{Y}_{s})\) denote the number of blocks \(\mathbf{w}_{i}\) in \(\mathbf{X}_{s}\) and \(\mathbf{Y}_{s}\), respectively, and set \(\ell_{j}:=0\) for \(j>2t\). Analogous to the function \(p=p_{\mathbf{d}}\) defined in §3.2, set \(p_{1}:=1\) and for each \(s\geq 1\), let \(p_{2s}:=p_{2s-1}+2\ell_{2s-1}\) and \(p_{2s+1}:=p_{2s}+3\ell_{2s}\); note, then, that
\[\sigma^{p_{2s-1}}(\mathbf{d})=\mathbf{X}_{s}\mathbf{Y}_{s}\cdots\mathbf{X}_{t }\mathbf{Y}_{t}\mathbf{w}_{i_{n}}(1-i_{n}/2)\quad\text{ and }\quad\sigma^{p_{2s}}(\mathbf{d})= \mathbf{Y}_{s}\mathbf{X}_{s+1}\cdots\mathbf{X}_{t}\mathbf{Y}_{t}\mathbf{w}_{i _{n}}(1-i_{n}/2).\]
Let \(k_{2s-1},k_{2s}\in\{1,\ldots,n\}\) be the indices for which
\[\sigma^{p_{2s-1}}(\mathbf{d})=\mathbf{w}_{i_{k_{2s-1}}}\cdots\mathbf{w}_{i_{n- 1}}\mathbf{w}_{i_{n}}(1-i_{n}/2)\quad\text{ and }\quad\sigma^{p_{2s}}(\mathbf{d})= \mathbf{w}_{i_{k_{2s}}}\cdots\mathbf{w}_{i_{n-1}}\mathbf{w}_{i_{n}}(1-i_{n}/2).\]
Using (20) and (22), we compute
\[v(\Xi(\mathbf{d})) =v(1(02)^{\ell_{1}}(030)^{\ell_{2}}\cdots(02)^{\ell_{2t-1}}(030)^{\ell_{2t}}0201)\] \[=\frac{1}{\beta}+\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s-1}}}v((02)^{\ell_{2s-1}})+\frac{1}{\beta^{p_{2s}}}v((030)^{\ell_{2s}})\right)+\frac{1}{\beta^{m-3}}v(0201)\] \[=\frac{1}{\beta}+\sum_{s=1}^{t}\left(\frac{2}{\beta^{p_{2s-1}+1}}(1-1/\beta^{2\ell_{2s-1}})+\frac{3}{2\beta^{p_{2s}}}(1-1/\beta^{3\ell_{2s}})\right)+\frac{1}{\beta^{m-3}}(2/\beta^{2}+1/\beta^{4}).\]
Moreover, (21) gives
\[v(d_{1}\cdots d_{m-3}) =\frac{1}{\beta}+\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s-1}}}v( \mathbf{X}_{s})+\frac{1}{\beta^{p_{2s}}}v(\mathbf{Y}_{s})\right)\] \[=\frac{1}{\beta}+\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s-1}}}v( \mathbf{X}_{s})+\frac{1}{\beta^{p_{2s}}}v((001)^{\ell_{2s}})\right)\] \[=\frac{1}{\beta}+\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s-1}}}v( \mathbf{X}_{s})+\frac{1}{2\beta^{p_{2s}+1}}(1-1/\beta^{3\ell_{2s}})\right),\]
so equation (19) becomes
\[K_{\mathbf{d}}= -\frac{1}{\beta}+\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})\] \[+\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s-1}+1}}v(\mathbf{X}_{s}) +\frac{1}{2\beta^{p_{2s}+2}}(1-1/\beta^{3\ell_{2s}})\right)+\frac{1}{\beta^{m- 1}}.\]
Then
\[\beta v(\Xi(\mathbf{d}))+5K_{\mathbf{d}}= 1+\sum_{s=1}^{t}\left(\frac{2}{\beta^{p_{2s-1}}}(1-1/\beta^{2\ell_{ 2s-1}})+\frac{3\beta}{2\beta^{p_{2s}}}(1-1/\beta^{3\ell_{2s}})\right)+\frac{1} {\beta^{m-3}}(2/\beta+1/\beta^{3})\] \[-\frac{5}{\beta}+5\left(\sum_{\begin{subarray}{c}1\leq k\leq n-1 \\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})\right)\] \[+5\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s-1}+1}}v(\mathbf{X}_{ s})+\frac{1}{2\beta^{p_{2s}+2}}(1-1/\beta^{3\ell_{2s}})\right)+\frac{5}{\beta^{m- 1}}\] \[= 1-\frac{5}{\beta}+\sum_{s=1}^{t}\frac{1}{\beta^{p_{2s}}}\left(3 \beta/2+5/2\beta^{2}\right)(1-1/\beta^{3\ell_{2s}})+\frac{1}{\beta^{m-3}}(2/ \beta+5/\beta^{2}+1/\beta^{3})\] \[+\sum_{s=1}^{t}\left(\frac{2}{\beta^{p_{2s-1}}}(1-1/\beta^{2\ell _{2s-1}})+\frac{5}{\beta^{p_{2s-1}+1}}v(\mathbf{X}_{s})\right)\] \[+5\left(\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})\right).\]
One easily verifies, using \(\beta^{2}=\beta+1\) and \(\beta^{3}=2\beta+1\), that both \(3\beta/2+5/2\beta^{2}\) and \(2/\beta+5/\beta^{2}+1/\beta^{3}\) equal \(c:=5-\beta\); for instance, \(3\beta/2+5/2\beta^{2}=(3\beta^{3}+5)/2\beta^{2}=(6\beta+8)/(2\beta+2)=(3\beta+4)/(\beta+1)=5-\beta\), since \((5-\beta)(\beta+1)=4\beta+5-\beta^{2}=3\beta+4\). We claim that it suffices to show that
\[\sum_{s=1}^{t}\left(\frac{2}{\beta^{p_{2s-1}}}(1-1/\beta^{2\ell_ {2s-1}})+\frac{5}{\beta^{p_{2s-1}+1}}v(\mathbf{X}_{s})\right) \tag{25}\] \[+5\left(\sum_{\begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}1\leq k\leq n-1\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})\right)\] \[\leq \sum_{s=1}^{t}\frac{c}{\beta^{p_{2s-1}}}(1-1/\beta^{2\ell_{2s-1}}),\]
with equality if and only if \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\). Indeed, suppose the claim holds. Then the computation above becomes
\[\beta v(\Xi(\mathbf{d}))+5K_{\mathbf{d}} \leq 1-\frac{5}{\beta}+c\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s- 1}}}(1-1/\beta^{2\ell_{2s-1}})+\frac{1}{\beta^{p_{2s}}}(1-1/\beta^{3\ell_{2s}} )\right)+\frac{c}{\beta^{m-3}}\] \[=1-\frac{5}{\beta}+c\sum_{s=1}^{t}(1/\beta^{p_{2s-1}}-1/\beta^{p_ {2s}}+1/\beta^{p_{2s}}-1/\beta^{p_{2s+1}})+\frac{c}{\beta^{m-3}}\] \[=1-\frac{5}{\beta}+c(1/\beta-1/\beta^{m-3})+\frac{c}{\beta^{m-3}}\] \[=1-\frac{5}{\beta}+\frac{c}{\beta}\] \[=0\]
with equality if and only if \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\). Rearranging, this inequality is equivalent to \(K_{\mathbf{d}}/\beta v(\Xi(\mathbf{d}))\leq-1/5\). From (18) and the assumption that \(\mathfrak{n}(\mathbf{d})=1\), this gives
\[\mathfrak{f}_{S}(\alpha)=1+K_{\mathbf{d}}/\beta v(\Xi(\mathbf{d}))\leq 4/5\]
with equality if and only if \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\), as desired.
It remains to show the claim from (25). The constant \(c\) defined above may be rewritten as \(c=2+5/(\beta^{2}+1)\). Subtracting \(\sum_{s=1}^{t}(2/\beta^{p_{2s-1}})(1-1/\beta^{2\ell_{2s-1}})\) from both sides, dividing by 5 and noting that \(i_{k}\in\{0,2\}\) only when \(k_{2s-1}\leq k<k_{2s},\ 1\leq s\leq t\), inequality (25) becomes
\[\sum_{s=1}^{t}\left(\frac{1}{\beta^{p_{2s-1}+1}}v(\mathbf{X}_{s})+\sum_{\begin{subarray}{c}k_{2s-1}\leq k<k_{2s}\\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{\begin{subarray}{c}k_{2s-1}\leq k<k_{2s}\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})\right) \tag{26}\] \[\leq \frac{1}{\beta^{2}+1}\sum_{s=1}^{t}\frac{1}{\beta^{p_{2s-1}}}(1-1/\beta^{2\ell_{2s-1}}).\]
Fix \(1\leq s\leq t\), and write
\[\mathbf{X}_{s}:=\mathbf{w}_{2}^{n_{s,1}}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,2}}\mathbf{w}_{0}^{n_{s,3}}(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4}}\cdots\mathbf{w}_{2}^{n_{s,4r_{s}-3}}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4r_{s}-2}}\mathbf{w}_{0}^{n_{s,4r_{s}-1}}(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4r_{s}}}, \tag{27}\]
where the powers \(n_{s,\ell}\geq 0\) are chosen so that \(\sum_{\ell=1}^{4r_{s}}n_{s,\ell}\) is minimal and no three consecutive \(n_{s,\ell}\) are zero except possibly the first or final three \(n_{s,\ell}\). Set \(p_{s,1}:=p_{2s-1}\) and for each \(1\leq j\leq r_{s}\),
\[p_{s,4j-2}:=p_{s,4j-3}+2n_{s,4j-3}, \qquad p_{s,4j-1}:=p_{s,4j-2}+4n_{s,4j-2},\] \[p_{s,4j}:=p_{s,4j-1}+2n_{s,4j-1}, \qquad p_{s,4j+1}:=p_{s,4j}+4n_{s,4j}.\]
Note that with these definitions, \(p_{s,4r_{s}+1}=p_{2s}\). Equations (20)-(24) give
\[\frac{1}{\beta^{p_{2s-1}+1}}v(\mathbf{X}_{s})= \frac{1}{\beta}\sum_{j=1}^{r_{s}}\left(\frac{1}{\beta^{p_{s,4j-3} }}v(\mathbf{w}_{2}^{n_{s,4j-3}})+\frac{1}{\beta^{p_{s,4j-2}}}v((\mathbf{w}_{2 }\mathbf{w}_{0})^{n_{s,4j-2}})\right.\] \[+\frac{1}{\beta^{p_{s,4j-1}}}v(\mathbf{w}_{0}^{n_{s,4j-1}})+\frac {1}{\beta^{p_{s,4j}}}v((\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j}})\right)\] \[= \sum_{j=1}^{r_{s}}\left(\frac{1}{\beta^{p_{s,4j-3}}}\frac{1}{ \beta^{2}}(1-1/\beta^{2n_{s,4j-3}})+\frac{1}{\beta^{p_{s,4j-2}}}\frac{1}{ \beta^{2}+1}(1-1/\beta^{4n_{s,4j-2}})\right.\] \[+\frac{1}{\beta^{p_{s,4j}}}\frac{1}{\beta^{2}(\beta^{2}+1)}(1-1/ \beta^{4n_{s,4j}})\bigg{)}\]
and
\[\sum_{\begin{subarray}{c}k_{2s-1}\leq k<k_{2s}\\ i_{k}=2\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})-\sum_{ \begin{subarray}{c}k_{2s-1}\leq k<k_{2s}\\ i_{k}=0\end{subarray}}v(1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{k}})\] \[= \sum_{j=1}^{r_{s}}\bigg{(}\sum_{\ell=1}^{n_{s,4j-3}}v(1\mathbf{X} _{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s, 1}}\cdots(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j-4}}\mathbf{w}_{2}^{\ell})\] \[-\sum_{\ell=1}^{n_{s,4j-1}}v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots \mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{2} \mathbf{w}_{0})^{n_{s,4j-2}}\mathbf{w}_{0}^{\ell})\] \[+v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s -1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j}})\] \[-v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s -1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}} \mathbf{w}_{0}^{n_{s,4j-1}}\mathbf{w}_{0})\bigg{)}\] \[= \sum_{j=1}^{r_{s}}\bigg{(}\sum_{\ell=1}^{n_{s,4j-3}}v(1\mathbf{X} _{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}} \cdots(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j-4}}\mathbf{w}_{2}^{\ell})\] \[-n_{s,4j-1}v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1} \mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{ s,4j-2}})\] \[+\frac{1}{\beta^{p_{s,4j}}}\frac{1}{\beta(\beta^{2}+1)}(1-1/\beta^{4 n_{s,4j}})\bigg{)}.\]
Thus the left-hand side of (26) equals
\[\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}\bigg{(}\frac{1}{\beta^{p_{s,4j-3}}} \frac{1}{\beta^{2}}(1-1/\beta^{2n_{s,4j-3}})+\frac{1}{\beta^{p_{s,4j-2}}}\frac{1 }{\beta^{2}+1}(1-1/\beta^{4n_{s,4j-2}})+\frac{1}{\beta^{p_{s,4j}}}\frac{1}{ \beta^{2}+1}(1-1/\beta^{4n_{s,4j}})\] \[+\sum_{\ell=1}^{n_{s,4j-3}}v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots \mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{0} \mathbf{w}_{2})^{n_{s,4j-4}}\mathbf{w}_{2}^{\ell})\] \[-n_{s,4j-1}v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1} \mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{2}\mathbf{w}_{0})^{ n_{s,4j-2}})\bigg{)}.\]
Moreover, using the definition of \(p_{s,4j-i}\), we find that each summand on the right-hand side of (26) may be expanded
\[\frac{1}{\beta^{p_{2s-1}}}(1-1/\beta^{2\ell_{2s-1}})= 1/\beta^{p_{2s-1}}-1/\beta^{p_{2s}}\] \[= 1/\beta^{p_{s,1}}-1/\beta^{p_{s,4r_{s+1}}}\] \[= \sum_{j=1}^{r_{s}}\bigg{(}\frac{1}{\beta^{p_{s,4j-3}}}(1-1/\beta^ {2n_{s,4j-3}})+\frac{1}{\beta^{p_{s,4j-2}}}(1-1/\beta^{4n_{s,4j-2}})\] \[+\frac{1}{\beta^{p_{s,4j-1}}}(1-1/\beta^{2n_{s,4j-1}})+\frac{1}{ \beta^{p_{s,4j}}}(1-1/\beta^{4n_{s,4j}})\bigg{)}.\]
Subtracting \(\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}\frac{1}{\beta^{p_{s,4j-i}}}\frac{1}{\beta^{2} +1}(1-1/\beta^{4n_{s,4j-i}}),\;i=0,2\), from both sides, (26) becomes
\[\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}\bigg{(}\frac{1}{\beta^{p_{s,4j-3} }}\frac{1}{\beta^{2}}(1-1/\beta^{2n_{s,4j-3}})+\sum_{\ell=1}^{n_{s,4j-3}}v(1 \mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_ {2}^{n_{s,1}}\cdots(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j-4}}\mathbf{w}_{2}^{ \ell})\] \[-n_{s,4j-1}v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1} \mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{2}\mathbf{w}_{0})^{ n_{s,4j-2}})\bigg{)}\] \[\leq \sum_{s=1}^{t}\sum_{j=1}^{r_{s}}\bigg{(}\frac{1}{\beta^{p_{s,4j-3 }}}\frac{1}{\beta^{2}+1}(1-1/\beta^{2n_{s,4j-3}})+\frac{1}{\beta^{p_{s,4j-1}}} \frac{1}{\beta^{2}+1}(1-1/\beta^{2n_{s,4j-1}})\bigg{)}.\]
Rearranging and using the fact that \(1/\beta^{2}-1/(\beta^{2}+1)=1/(\beta^{2}(\beta^{2}+1))\), the previous inequality is equivalent to
\[\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}\bigg{(}\sum_{\ell=1}^{n_{s,4j-3}} v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s-1} \mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j-4}} \mathbf{w}_{2}^{\ell}) \tag{28}\] \[+\frac{1}{\beta^{p_{s,4j-3}}}\frac{1}{\beta^{2}(\beta^{2}+1)}(1-1 /\beta^{2n_{s,4j-3}})\bigg{)}\] \[\leq \sum_{s=1}^{t}\sum_{j=1}^{r_{s}}\bigg{(}n_{s,4j-1}v(1\mathbf{X}_{1 }\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}} \cdots(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}})+\frac{1}{\beta^{p_{s,4j-1}} }\frac{1}{\beta^{2}+1}(1-1/\beta^{2n_{s,4j-1}})\bigg{)}\,.\]
Consider the summand (with respect to the summation over \(j\)) on the left-hand side of the previous inequality. We will show that this is less than or equal to \(n_{s,4j-3}v(\mathbf{d})\), with equality if and only if \(n_{s,4j-3}=0\). If \(n_{s,4j-3}=0\), both the summand and \(n_{s,4j-3}v(\mathbf{d})\) are zero; assume \(n_{s,4j-3}>0\). We must show
\[\sum_{\ell=1}^{n_{s,4j-3}}(v(\mathbf{d})-v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots \mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{0} \mathbf{w}_{2})^{n_{s,4j-4}}\mathbf{w}_{2}^{\ell}))>\frac{1}{\beta^{p_{s,4j-3}} }\frac{1}{\beta^{2}(\beta^{2}+1)}(1-1/\beta^{2n_{s,4j-3}}). \tag{29}\]
The left-hand side of the previous line equals
\[\sum_{\ell=1}^{n_{s,4j-3}}\bigg{(}\frac{1}{\beta^{p_{s,4j-3}+2\ell}}v(\mathbf{w}_ {2}^{n_{s,4j-3}-\ell}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}}\cdots\mathbf{w}_ {i_{n}}(1-i_{n}/2))\bigg{)}\,.\]
Note that \((\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2) \succ(\mathbf{w}_{0}\mathbf{w}_{2})^{\infty}\): if not, then the former word begins with \((\mathbf{w}_{0}\mathbf{w}_{2})^{n^{\prime}}\mathbf{w}_{0}\mathbf{w}_{i}\) for some \(n^{\prime}\geq 0\) and \(i\in\{0,1\}\). But then
\[\mathbf{w}_{2}^{n_{s,4j-3}}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}}\cdots \mathbf{w}_{i_{n}}(1-i_{n}/2)=\mathbf{w}_{2}^{n_{s,4j-3}}(\mathbf{w}_{0} \mathbf{w}_{2})^{n^{\prime}}\mathbf{w}_{0}\mathbf{w}_{i}=\mathbf{w}_{2}^{n_{s,4j-3}-1}(\mathbf{w}_{0}\mathbf{w}_{2})^{n^{\prime}+1}\mathbf{w}_{i},\]
contradicting the minimality of the sum of powers \(\sum_{\ell=1}^{4r_{s}}n_{s,\ell}\). Thus the left-hand side of (29) is strictly greater than
\[\sum_{\ell=1}^{n_{s,4j-3}}\left(\frac{1}{\beta^{p_{s,4j-3}+2\ell} }v(\mathbf{w}_{2}^{n_{s,4j-3}-\ell})+\frac{1}{\beta^{p_{s,4j-2}}}v((\mathbf{w} _{0}\mathbf{w}_{2})^{\infty})\right)\] \[= \frac{1}{\beta^{p_{s,4j-3}}}\sum_{\ell=1}^{n_{s,4j-3}}\left(\frac {1}{\beta^{2+1}}(1-1/\beta^{2n_{s,4j-3}-2\ell})+\frac{1}{\beta^{2}+1}\frac{1} {\beta^{2n_{s,4j-3}+1}}\right)\] \[= \frac{1}{\beta^{p_{s,4j-3}}}\left(\frac{1}{\beta^{2}}(1-1/\beta^{ 2n_{s,4j-3}})-\frac{n_{s,4j-3}}{\beta^{2n_{s,4j-3}+1}}+\frac{1}{\beta^{2}+1} \frac{n_{s,4j-3}}{\beta^{2n_{s,4j-3}+1}}\right)\] \[= \frac{1}{\beta^{p_{s,4j-3}}}\left(\frac{1}{\beta^{2}}(1-1/\beta^{ 2n_{s,4j-3}})-\frac{\beta^{2}}{\beta^{2}+1}\frac{n_{s,4j-3}}{\beta^{2n_{s,4j-3 }+1}}\right).\]
It suffices to show that the right-hand side of the previous line is greater than or equal to the right-hand side of (29). Multiplying both quantities by \(\beta^{p_{s,4j-3}+2}(\beta^{2}+1)\), this is equivalent to showing
\[(\beta^{2}+1)(1-1/\beta^{2n_{s,4j-3}})-\beta^{3}\frac{n_{s,4j-3}}{\beta^{2n_{ s,4j-3}}}\geq 1-\frac{1}{\beta^{2n_{s,4j-3}}},\]
which simplifies to
\[1-\frac{1}{\beta^{2n_{s,4j-3}}}\geq\beta\frac{n_{s,4j-3}}{\beta^{2n_{s,4j-3}}}.\]
The left- and right-hand sides of the previous line increase and decrease, respectively, as functions of integers \(n_{s,4j-3}>0\). Since the inequality holds for \(n_{s,4j-3}=1\), we conclude that (29) holds. Thus the summand on the left-hand side of (28) is less than or equal to \(n_{s,4j-3}v(\mathbf{d})\), with equality if and only if \(n_{s,4j-3}=0\).
Next, consider the summand on the right-hand side of (28). We shall show that this is greater than or equal to \(n_{s,4j-1}v(\mathbf{d})\) with equality if and only if \(n_{s,4j-1}=0\). Again if \(n_{s,4j-1}=0\), both the summand and \(n_{s,4j-1}v(\mathbf{d})\) equal zero, so assume \(n_{s,4j-1}>0\). The desired inequality is equivalent to
\[n_{s,4j-1}(v(\mathbf{d})-v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1 }\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{2}\mathbf{w}_{0}) ^{n_{s,4j-2}}))<\frac{1}{\beta^{p_{s,4j-1}}}\frac{1}{\beta^{2}+1}(1-1/\beta^{2 n_{s,4j-1}}). \tag{30}\]
The left-hand side of the previous line equals
\[\frac{n_{s,4j-1}}{\beta^{p_{s,4j-1}}}v(\mathbf{w}_{0}^{n_{s,4j-1}}(\mathbf{w}_ {0}\mathbf{w}_{2})^{n_{s,4j}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2))=\frac{n_{s,4 j-1}}{\beta^{p_{s,4j}}}v((\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j}}\cdots \mathbf{w}_{i_{n}}(1-i_{n}/2)).\]
For similar reasons as above, one finds that \((\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2) \prec(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\). It follows that the left-hand side of (30) is strictly less than
\[\frac{n_{s,4j-1}}{\beta^{p_{s,4j}}}v((\mathbf{w}_{2}\mathbf{w}_{0})^{\infty})= \frac{n_{s,4j-1}}{\beta^{p_{s,4j}}}\frac{\beta}{\beta^{2}+1}.\]
Multiplying both sides of (30) by \(\beta^{p_{s,4j}}(\beta^{2}+1)\) and recalling that \(p_{s,4j}-p_{s,4j-1}=2n_{s,4j-1}\), it thus suffices to show that
\[\beta n_{s,4j-1}\leq\beta^{2n_{s,4j-1}}-1,\]
which clearly holds for each \(n_{s,4j-1}\geq 1\). This proves that the summand on the right-hand side of (28) is greater than or equal to \(n_{s,4j-1}v(\mathbf{d})\) with equality if and only if \(n_{s,4j-1}=0\).
Note that (17) may be rewritten as
\[\mathfrak{n}(\mathbf{d}) =(1+\#\{1\leq k\leq n-1\ |\ i_{k}\in\{1,2\}\}+1)-(\#\{1\leq k\leq n -1\ |\ 2-i_{k}\in\{1,2\}\}+1)\] \[=1+\#\{1\leq k\leq n-1\ |\ i_{k}=2\}-\#\{1\leq k\leq n-1\ |\ i_{k}=0\}.\]
Since \(\mathfrak{n}(\mathbf{d})=1\) by assumption, we have
\[\#\{1\leq k\leq n-1\ |\ i_{k}=2\}=\#\{1\leq k\leq n-1\ |\ i_{k}=0\}.\]
Recalling that \(\mathbf{d}=1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{t}\mathbf{Y}_{t}\mathbf{ w}_{i_{n}}(1-i_{n}/2)\), (27) and the fact that each \(\mathbf{Y}_{s}\) consists solely of \(w_{1}\)'s, we find
\[\#\{1\leq k\leq n-1\ |\ i_{k}=2\}=\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}(n_{s,4j-3}+n_ {s,4j-2}+n_{s,4j})\]
and
\[\#\{1\leq k\leq n-1\ |\ i_{k}=0\}=\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}(n_{s,4j-2}+n_ {s,4j-1}+n_{s,4j}),\]
so
\[\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}n_{s,4j-3}=\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}n_{s,4j-1}. \tag{31}\]
Using this and our prior observations regarding the left- and right-hand sides of (28), we have
\[\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}\bigg{(}\sum_{\ell=1}^{n_{s,4j-3} }v(1\mathbf{X}_{1}\mathbf{Y}_{1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{ w}_{2}^{n_{s,1}}\cdots(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j-4}}\mathbf{w}_{2}^{ \ell})\] \[+\frac{1}{\beta^{p_{s,4j-3}}}\frac{1}{\beta^{2}(\beta^{2}+1)}(1-1 /\beta^{2n_{s,4j-3}})\bigg{)}\] \[\leq v(\mathbf{d})\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}n_{s,4j-3}\] \[= v(\mathbf{d})\sum_{s=1}^{t}\sum_{j=1}^{r_{s}}n_{s,4j-1}\] \[\leq \sum_{j=1}^{r_{s}}\bigg{(}n_{s,4j-1}v(1\mathbf{X}_{1}\mathbf{Y}_ {1}\cdots\mathbf{X}_{s-1}\mathbf{Y}_{s-1}\mathbf{w}_{2}^{n_{s,1}}\cdots( \mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}})+\frac{1}{\beta^{p_{s,4j-1}}}\frac{ 1}{\beta^{2}+1}(1-1/\beta^{2n_{s,4j-1}})\bigg{)}\]
with equality throughout if and only if each \(n_{s,4j-1}=n_{s,4j-3}=0\). Thus the inequality in (28)--and hence in (25)--holds. It remains to show that \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\) if and only if each \(n_{s,4j-1}=n_{s,4j-3}=0\).
Suppose that \(\mathbf{d}\succeq 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\). Then \(\mathbf{d}\) begins with \(1(\mathbf{w}_{2}\mathbf{w}_{0})^{n^{\prime}}\mathbf{w}_{2}\mathbf{w}_{i}\) for some \(n^{\prime}\geq 0\) and \(i\in\{1,2\}\). This implies that either \(n_{1,1}\) or \(n_{1,5}\) is positive. For the converse, suppose that some \(n_{s,4j-3}\) or \(n_{s,4j-1}\) is positive. By (31), we can choose some \(n_{s,4j-3}>0\) with \((s,j)\) (lexicographically) minimal. Note that
\[\sigma^{p_{s,4j-3}-1}(\mathbf{d})=d_{i}\mathbf{w}_{2}^{n_{s,4j-3}}(\mathbf{w}_ {2}\mathbf{w}_{0})^{n_{s,4j-2}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)\]
with \(d_{i}\in\{0,1\}\). Suppose \(d_{i}=0\) (the case that \(d_{i}=1\) is similar). Then \(j>1\), and
\[\sigma^{p_{s,4j-7}}(\mathbf{d})=\mathbf{w}_{2}^{n_{s,4j-7}}(\mathbf{w}_{2} \mathbf{w}_{0})^{n_{s,4j-6}}\mathbf{w}_{0}^{n_{s,4j-5}}(\mathbf{w}_{0}\mathbf{ w}_{2})^{n_{s,4j-4}}\mathbf{w}_{2}^{n_{s,4j-3}}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2).\]
Since \(d_{i}=0\), we must have \(n_{s,4j-4}=0\). Moreover, \(n_{s,4j-5}>0\) contradicts the minimality of \(\sum_{\ell=1}^{4r_{s}}n_{s,\ell}\), so \(n_{s,4j-5}=0\). Since no three consecutive \(n_{s,\ell}\)'s can be zero (except possibly the first and final three \(n_{s,\ell}\)), it follows that \(n_{s,4j-6}>0\). Thus
\[\sigma^{p_{s,4j-6}-1}(\mathbf{d})=d_{i^{\prime}}(\mathbf{w}_{2} \mathbf{w}_{0})^{n_{s,4j-6}}\mathbf{w}_{2}^{n_{s,4j-3}}(\mathbf{w}_{2}\mathbf{ w}_{0})^{n_{s,4j-2}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2) \tag{32}\]
for some \(d_{i^{\prime}}\in\{0,1\}\). Suppose \(d_{i^{\prime}}=0\). Then \(j>2\), and
\[\sigma^{p_{s,4j-11}}(\mathbf{d})= \mathbf{w}_{2}^{n_{s,4j-11}}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j- 10}}\mathbf{w}_{0}^{n_{s,4j-9}}(\mathbf{w}_{0}\mathbf{w}_{2})^{n_{s,4j-8}} \mathbf{w}_{2}^{n_{s,4j-7}}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-6}}\mathbf{ w}_{2}^{n_{s,4j-3}}(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-2}}\] \[\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2).\]
Since \(d_{i^{\prime}}=0\), we have \(n_{s,4j-8}=n_{4j-7}=0\). If \(n_{s,4j-9}>0\), the fact that \(n_{s,4j-3}>0\) contradicts the minimality of \(\sum_{\ell=1}^{4r_{s}}n_{s,\ell}\). But \(n_{s,4j-9}=0\) is also a contradiction since this implies three consecutive \(n_{s,\ell}\)'s are zero. Thus \(d_{i^{\prime}}=1\). Now suppose \(\sigma^{p_{s,4j-6}-1}(\mathbf{d})\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\). From (32), we find that \(n_{s,4j-3}=1\) and
\[\sigma^{p_{s,4j-6}-1}(\mathbf{d})=1(\mathbf{w}_{2}\mathbf{w}_{0})^{n_{s,4j-6}} \mathbf{w}_{2}(\mathbf{w}_{0}\mathbf{w}_{2})^{n^{\prime}}\mathbf{w}_{0}\mathbf{ w}_{i^{\prime\prime}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)\]
for some \(n^{\prime}\geq 0\) and \(i^{\prime\prime}\in\{0,1\}\). In any case, this contradicts the minimality of \(\sum_{\ell=1}^{4r_{s}}n_{s,\ell}\). Thus (using the fact that \(\mathbf{d}\in\mathcal{M}\)),
\[\mathbf{d}\succeq\sigma^{p_{s,4j-6}-1}(\mathbf{d})\succeq 1(\mathbf{w}_{2} \mathbf{w}_{0})^{\infty},\]
and we conclude that \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\) if and only if each \(n_{s,4j-1}=n_{s,4j-3}=0\).
Note that for each \(n\geq 1\), the word \(\mathbf{d}^{n}:=1(\mathbf{w}_{2}\mathbf{w}_{0})^{n}001\prec 1(\mathbf{w}_{2} \mathbf{w}_{0})^{\infty}\) satisfies Property \(M\). Moreover, \(v(\mathbf{d}^{n})\) approaches \(v(1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty})=2\beta/(\beta+2)\) from below, and thus \(1/v(\mathbf{d}^{n})\) approaches \((\beta+2)/(2\beta)=1/2+1/\beta\) from above. If \(\mathbf{d}\in\mathcal{M}\) satisfies \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\), then there is some \(n\geq 1\) for which \(\mathbf{d}\prec\mathbf{d}^{n}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\) and \(1/2+1/\beta<1/v(\mathbf{d}^{n})<1/v(\mathbf{d})\). Since \(I_{\mathbf{d}^{n}}\cap I_{\mathbf{d}}=\varnothing\) and \(I_{\mathbf{d}^{n}}\) and \(I_{\mathbf{d}}\) contain \(1/v(\mathbf{d}^{n})\) and \(1/v(\mathbf{d})\), respectively, it follows that \(I_{\mathbf{d}}\subset(1/2+1/\beta,\beta]\). Similarly reasoning shows that if \(\mathbf{d}\succ 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\), then \(I_{\mathbf{d}}\subset(1,1/2+1/\beta)\), and in fact \(1/2+1/\beta\) is a non-matching parameter.
With these observations and the previous lemmas, we are now ready to prove the main result of this section:
**Theorem 3.7**.: _The frequency functions \(\mathfrak{f}_{S},\mathfrak{f}_{T}:[1,\beta]\to[0,1]\) attain their maximums \(\mathfrak{f}_{S}(\alpha)=4/5\) and \(\mathfrak{f}_{T}(\alpha)=3/4\) on the maximal interval \([1/2+1/\beta,1+1/\beta^{2}]\)._
Proof.: By (15), it suffices to show the statement for \(\mathfrak{f}_{S}\). Recall from Example 3.4 that \(\mathfrak{f}_{S}\) equals \(4/5\) on \(I_{1010}\cup I_{1001}=(1+1/\beta^{4},1+1/\beta^{2})\backslash\{1+1/\beta^{3}\}\). Moreover, \(\mathfrak{f}_{S}\) is decreasing on \(I_{10}=(1+1/\beta^{2},\beta]\) since \(\mathfrak{n}(10)=0\). By continuity of \(\mathfrak{f}_{S}\), the statement is proven for \(\alpha\in[1+1/\beta^{4},\beta]\).
We now show that \(\mathfrak{f}_{S}(\alpha)\leq 4/5\) for \(\alpha\in[1,1+1/\beta^{4})\), with equality if \(\alpha\geq 1/2+1/\beta\). Since \(\mathfrak{f}_{S}\) is continuous and is monotone on each matching interval \(I_{\mathbf{d}}\), and since the set of matching parameters \(\cup_{\mathbf{d}\in\mathcal{M}}I_{\mathbf{d}}\) is dense, it suffices to show the desired statements for the endpoints \(\alpha_{\mathbf{d}}^{\pm}\) of matching intervals in \([1,1+1/\beta^{4})\). Notice that each endpoint \(\alpha_{\mathbf{d}}^{+},\alpha_{\mathbf{d}}^{-}\in[1,1+1/\beta^{4})\) is the limit (from above) of some sequence of endpoints of cascade intervals. In particular, if \(\mathbf{d}\in\psi(\mathcal{M}_{U})\), then \(I_{\mathbf{d}}\) is itself a cascade interval and we take constant sequences. Suppose \(\mathbf{d}\in\mathcal{M}_{U}\backslash\psi(\mathcal{M}_{U})\). Since each lower endpoint \(\alpha_{\mathbf{d}}^{-}\) equals the upper endpoint \(\alpha_{\psi(\mathbf{d})}^{+}\) of \(I_{\psi(\mathbf{d})}\) by Proposition 2.23, we can again take the constant sequence. Now consider \(\alpha_{\mathbf{d}}^{+}\). Let \(\varepsilon>0\), and choose some matching parameter \(\alpha^{\prime}\in I_{\mathbf{d}^{\prime}}\) satisfying \(\alpha_{\mathbf{d}}^{+}<\alpha^{\prime}<\alpha_{\mathbf{d}}^{+}+\varepsilon\). Since matching intervals are disjoint, Proposition 2.23 implies that the cascade interval \(I_{\psi(\mathbf{d}^{\prime})}\) lies strictly between \(\alpha_{\mathbf{d}}^{+}\) and \(\alpha^{\prime}\), and thus its endpoints are within a distance of \(\varepsilon\) of \(\alpha_{\mathbf{d}}^{+}\). It follows \(\alpha_{\mathbf{d}}^{+}\) is the limit (from above) of a sequence of endpoints of cascade intervals. Again by continuity of \(\mathfrak{f}_{S}\), it now suffices to show the desired statements for endpoints of cascade intervals. These follow directly from Lemmas 3.5 and 3.6 and the observation above that if \(I_{\mathbf{d}}\subset(1/2+1/\beta,\beta]\), then \(\mathbf{d}\prec 1(\mathbf{w}_{2}\mathbf{w}_{0})^{\infty}\).
Maximality of the interval \([1/2+1/\beta,1+1/\beta^{2}]\) follows from the fact that \(\mathfrak{f}_{S}\) is strictly decreasing on \((1+1/\beta^{2},\beta]\), density of matching parameters in \([1,\beta]\) and Lemmas 3.5 and 3.6.
Theorem 1.2 is now a collection of previous results:
Proof of Theorem 1.2.: This is a direct consequence of Proposition 3.3, Theorem 3.7 and Equations (15) and (18).
## 4. Appendix: proofs of technical lemmas
We include here two technical results, which together with Lemma 2.12 prove Lemma 2.17. Recall that \(\Delta(\mathbf{u})\) denotes the cylinder set of points \(x\in[0,1]\) for which the \(\beta\)-expansion of \(x\) begins with \(\mathbf{u}\).
**Lemma 4.1**.: _Let \(\mathbf{d}=d_{1}\cdots d_{m}\in\mathcal{M}_{U}\) and \(\mathbf{e}:=\varphi(\mathbf{d})=e_{1}\cdots e_{m}\). The \(\beta\)-expansions of \(1/\alpha_{\mathbf{d}}^{-},\ 1/\alpha_{\mathbf{d}}^{+}\) and \(1-1/\alpha_{\mathbf{d}}^{+}\) are given by_
\[(b_{j}(1/\alpha_{\mathbf{d}}^{-}))_{j\geq 1} =\begin{cases}(\mathbf{d}\overline{e_{2}\cdots e_{m-2}}0)^{ \infty},&d_{m}=1\\ (\overline{\mathbf{d}\overline{e_{1}\cdots e_{m-1}}}0)^{\infty},&d_{m}=0\end{cases},\] \[(b_{j}(1/\alpha_{\mathbf{d}}^{+}))_{j\geq 1} =\begin{cases}(d_{1}\cdots d_{m-1}0)^{\infty},&d_{m}=1\\ (d_{1}\cdots d_{m-2}0)^{\infty},&d_{m}=0\end{cases}\text{ and }\] \[(b_{j}(1-1/\alpha_{\mathbf{d}}^{+}))_{j\geq 1} =\begin{cases}\overline{\mathbf{e}}^{\infty},&d_{m}=1\\ 0(\overline{e_{2}\cdots e_{m}})^{\infty},&d_{m}=0\end{cases}.\]
Proof.: We consider only the \(\beta\)-expansion of \(1/\alpha_{\mathbf{d}}^{-}\) for \(d_{m}=1\); the proofs of the other expansions are similar. It suffices to show that \(1/\alpha_{\mathbf{d}}^{-}\in\Delta(\overline{\mathrm{d}e_{2}\cdots e_{m-2}0})\) and \(B^{2m-2}(1/\alpha_{\mathbf{d}}^{-})=1/\alpha_{\mathbf{d}}^{-}\). First, note that
\[v(\overline{\mathrm{d}e_{2}\cdots e_{m-2}0}) =v(\mathbf{d})-(1/\beta^{m})v(e_{2}\cdots e_{m-2})\] \[=v(\mathbf{d})-(1/\beta^{m-1})v(e_{1}\cdots e_{m-2})\] \[=v(\mathbf{d})-(1/\beta^{m-1})(v(\mathbf{e})+1/\beta^{m-1})\] \[=v(\mathbf{d})-(1/\beta^{m-1})(v(\mathbf{d})-1+1/\beta^{m-1})\] \[=(1-1/\beta^{m-1})(v(\mathbf{d})+1/\beta^{m-1}).\]
Using this and Equation (8), \(1/\alpha_{\mathbf{d}}^{-}\in\Delta(\overline{\mathrm{d}e_{2}\cdots e_{m-2}0})\) if and only if
\[(1-1/\beta^{m-1})(v(\mathbf{d})+1/\beta^{m-1})\leq 1/\alpha_{\mathbf{d}}^{-}<(1 -1/\beta^{m-1})v(\mathbf{d})+1/\beta^{m-1}.\]
Since \(d_{m}=1\), the first inequality holds if and only if
\[(1-1/\beta^{m-1})(v(\mathbf{d})+1/\beta^{m-1})\leq\frac{\beta^{m}v(\mathbf{d} )+\beta}{\beta^{m}+\beta},\]
or
\[(\beta^{m}+\beta)(1-1/\beta^{m-1})(v(\mathbf{d})+1/\beta^{m-1})\leq\beta^{m}v (\mathbf{d})+\beta.\]
Factoring \(\beta^{m}\) from the first and multiplying it through the third term, the left-hand side is equal to
\[(1+1/\beta^{m-1})(1-1/\beta^{m-1})(\beta^{m}v(\mathbf{d})+\beta)=(1-1/\beta^{ 2m-2})(\beta^{m}v(\mathbf{d})+\beta),\]
which is less than \(\beta^{m}v(\mathbf{d})+\beta\). The second inequality is true if and only if
\[\frac{\beta^{m}v(\mathbf{d})+\beta}{\beta^{m}+\beta}<(1-1/\beta^{m-1})v( \mathbf{d})+1/\beta^{m-1}.\]
Multiplying both sides by \(\beta^{m}+\beta\), this is equivalent to
\[\beta^{m}v(\mathbf{d})+\beta<(\beta^{m}-1/\beta^{m-2})v(\mathbf{d})+\beta+1/ \beta^{m-2},\]
or \((1/\beta^{m-2})v(\mathbf{d})<1/\beta^{m-2}\). This holds since \(v(\mathbf{d})<v((10)^{\infty})=1\). Thus \(1/\alpha_{\mathbf{d}}^{-}\in\Delta(\overline{\mathrm{d}e_{2}\cdots e_{m-2}0})\). With this and Equation (6),
\[B^{2m-2}(1/\alpha_{\mathbf{d}}^{-}) =\beta^{2m-2}(1/\alpha_{\mathbf{d}}^{-}-v(\overline{\mathrm{d}e_{ 2}\cdots e_{m-2}0}))\] \[=\beta^{2m-2}\left(\frac{\beta^{m}v(\mathbf{d})+\beta}{\beta^{m}+ \beta}-(1-1/\beta^{m-1})(v(\mathbf{d})+1/\beta^{m-1})\right)\] \[=\beta^{2m-2}\left(\frac{\beta^{m}v(\mathbf{d})+\beta-(\beta^{m} -1/\beta^{m-2})(v(\mathbf{d})+1/\beta^{m-1})}{\beta^{m}+\beta}\right)\] \[=\frac{\beta^{m}v(\mathbf{d})+\beta}{\beta^{m}+\beta}\] \[=1/\alpha_{\mathbf{d}}^{-}.\]
**Lemma 4.2**.: _Let \(\mathbf{d}=d_{1}\cdots d_{m}\in\mathcal{M}_{U}\) and \(\mathbf{e}:=\varphi(\mathbf{d})=e_{1}\cdots e_{m}\). If \(d_{m}=1\), then for each \(j>0\),_
\[\sigma^{j}((\overline{\mathrm{d}e_{2}\cdots e_{m-2}0})^{\infty})\preceq( \overline{\mathrm{d}e_{2}\cdots e_{m-2}0})^{\infty}\]
_and_
\[\sigma^{j}(\overline{\mathbf{e}}^{\infty})\preceq(d_{1}\cdots d_{m-1}0)^{ \infty}.\]
_If \(d_{m}=0\), then for each \(j>0\),_
\[\sigma^{j}((\overline{\mathrm{d}e_{1}\cdots e_{m-1}0})^{\infty})\preceq( \overline{\mathrm{d}e_{1}\cdots e_{m-1}0})^{\infty}\]
_and_
\[\sigma^{j}(0(\overline{e_{2}\cdots e_{m}})^{\infty})\preceq(d_{1}\cdots d_{m- 2}0)^{\infty}.\]
Proof.: We prove the statements for \(d_{m}=1\); the other proofs are similar. Write
\[\mathbf{d}=1\mathbf{w}_{i_{1}}\mathbf{w}_{i_{2}}\cdots\mathbf{w}_{i_{n}}(1-i_{n}/2)\]
and
\[\mathbf{e}=\overline{0\mathbf{w}_{2-i_{1}}\mathbf{w}_{2-i_{2}}\cdots\mathbf{w}_ {2-i_{n}}(i_{n}/2)}\]
with each \(i_{k}\in\{0,1,2\}\) and \(i_{n}=0\). Due to periodicity, it suffices to show the first inequality for \(0\leq j<m-2\). Note that \(d_{m}=1\) implies \(\overline{e_{m-1}}=1\). If \(j\geq m\), then
\[\sigma^{j}((\mathbf{d}\overline{e_{2}\cdots e_{m-2}}0)^{\infty})=(\overline{ e_{j-m+2}\cdots e_{m-2}}0\mathbf{d}\overline{e_{2}\cdots e_{j-m+1}})^{ \infty}\prec\overline{e_{j-m+2}\cdots e_{m-1}\overline{e_{m}}}\preceq\mathbf{ d}\prec(\mathbf{d}\overline{e_{2}\cdots e_{m-2}}0)^{\infty}.\]
Now suppose \(0\leq j<m\). It suffices to show that
\[d_{j+1}\cdots d_{m}\overline{e_{2}\cdots e_{m-2}}0d_{1}\cdots d_{j}\preceq \mathbf{d}\overline{e_{2}\cdots e_{m-2}}0.\]
This trivially holds if \(j=0\), so assume \(j>0\). Since \(\sigma^{j}(\mathbf{d})\preceq\mathbf{d}\), we have \(d_{j+1}\cdots d_{m}\preceq d_{1}\cdots d_{m-j}\). If this inequality is strict, we are finished. Suppose equality holds. Then we wish to show
\[\overline{e_{2}\cdots e_{m-2}}0d_{1}\cdots d_{j}\preceq d_{m-j+1}\cdots d_{m} \overline{e_{2}\cdots e_{m-2}}0.\]
Since \(\overline{e_{m-1}}=1\), it suffices to show
\[\overline{e_{2}\cdots e_{m-1}}\preceq d_{m-j+1}\cdots d_{m}\overline{e_{2} \cdots e_{m-j-1}}. \tag{33}\]
If \(j=m-1\), this is trivial, so suppose \(j<m-1\). By assumption, \(d_{j+1}\cdots d_{m}=d_{1}\cdots d_{m-j}\), so \(d_{j+1}=d_{1}=1=d_{m}=d_{m-j}\). Now \(d_{2}=0\) implies \(j\neq m-2\), and similarly \(d_{m-2}=0\) implies \(j\neq m-3\). Hence \(j<m-3\), and \(d_{j+1}=d_{m-j}=1\) imply that \(d_{j+2}\) and \(d_{m-j+1}\) are the beginnings of some blocks \(\mathbf{w}_{i_{p}}\) and \(\mathbf{w}_{i_{\ell}}\), respectively. (Similarly, \(\overline{e_{j+2}}\) and \(\overline{e_{m-j+1}}\) are the beginnings of \(\mathbf{w}_{2-i_{p}}\) and \(\mathbf{w}_{2-i_{\ell}}\), respectively.) Then \(d_{1}\cdots d_{m-j}=d_{j+1}\cdots d_{m}\) may be written as
\[1\mathbf{w}_{i_{1}}\cdots\mathbf{w}_{i_{\ell-1}}=1\mathbf{w}_{i_{p}}\cdots \mathbf{w}_{i_{n-1}}001.\]
In particular, \(i_{\ell-1}=1\), and \(\mathbf{w}_{2-i_{\ell-1}}=\mathbf{w}_{1}\) implies \(\overline{e_{m-j}}=1\).
The desired inequality (33) may be written in terms of blocks:
\[\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{n}}\preceq\mathbf{w}_{i_{\ell}} \cdots\mathbf{w}_{i_{n-1}}\mathbf{w}_{1}\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_ {2-i_{\ell-2}}.\]
Suppose for the sake of contradiction that this inequality does not hold, and let \(1\leq k\leq n\) be minimal such that \(\mathbf{w}_{2-i_{k}}\) differs from the \(k^{\text{th}}\) block on the right-hand side. Then
\[\mathbf{w}_{2-i_{k}}\succ\begin{cases}\mathbf{w}_{i_{\ell+k-1}},&k<n-\ell+1\\ \mathbf{w}_{1},&k=n-\ell+1\\ \mathbf{w}_{2-i_{k-(n-\ell)-1}},&k>n-\ell+1\end{cases},\]
and we consider these three cases separately:
1. If \(k<n-\ell+1\), then \[(2-i_{1},\ldots,2-i_{k-1})=(i_{\ell},\ldots,i_{\ell+k-2})\] and \(2-i_{k}>i_{\ell-k-1}\) imply \[(2-i_{\ell},\ldots,2-i_{\ell+k-2})=(i_{1},\ldots,i_{k-1})\] and \(2-i_{\ell-k-1}>i_{k}\). This gives \[1\mathbf{w}_{2-i_{\ell}}\cdots\mathbf{w}_{2-i_{\ell+k-1}}\succ 1\mathbf{w}_{i_{1}} \cdots\mathbf{w}_{i_{k}}.\] Recall that \(\overline{e_{m-j+1}}\) is the beginning of the block \(\mathbf{w}_{2-i_{\ell}}\), so the previous line together with \(\overline{e_{m-j}}=1\) imply \(\sigma^{m-j-1}(\overline{\mathbf{e}})\succ\mathbf{d}\), a contradiction.
2. If \(k=n-\ell+1\), then \[(2-i_{1},\ldots,2-i_{n-\ell})=(i_{\ell},\ldots,i_{n-1})\] and \(2-i_{n-\ell+1}>1\) imply \[(2-i_{\ell},\ldots,2-i_{n-1})=(i_{1},\ldots,i_{n-\ell})\] and \(i_{n-\ell+1}=0\). Since \(2-i_{n}=2\), this implies \[1\mathbf{w}_{2-i_{\ell}}\cdots\mathbf{w}_{2-i_{n}}\succ 1\mathbf{w}_{i_{1}} \cdots\mathbf{w}_{i_{n-\ell+1}}.\]
As in case (i), this gives the contradiction that \(\sigma^{m-j-1}(\overline{\mathbf{e}})\succ\mathbf{d}\).
* If \(k>n-\ell+1\), then \[(2-i_{1},\ldots,2-i_{n-\ell})=(i_{\ell},\ldots,i_{n-1})\] and \(2-i_{n-\ell+1}=1\) implies \[(2-i_{\ell},\ldots,2-i_{n-1})=(i_{1}\ldots,i_{n-\ell})\] and \(i_{n-\ell+1}=1\). Again since \(2-i_{n}=2\), \[1\mathbf{w}_{2-i_{\ell}}\cdots\mathbf{w}_{2-i_{n}}\succ 1\mathbf{w}_{i_{1}} \cdots\mathbf{w}_{i_{n-\ell+1}},\] and the contradiction of cases (i) and (ii) arises.
This proves for each \(j>0\) that
\[\sigma^{j}((\mathbf{d}\overline{e_{2}\cdots e_{m-2}}0)^{\infty})\preceq( \mathbf{d}\overline{e_{2}\cdots e_{m-2}}0)^{\infty}.\]
It remains to show that
\[\sigma^{j}(\overline{\mathbf{e}}^{\infty})\preceq(d_{1}\cdots d_{m-1}0)^{ \infty},\]
or, equivalently,
\[\overline{e_{j+1}\cdots e_{m}e_{1}\cdots e_{j}}\preceq d_{1}\cdots d_{m-1}0\]
for \(0\leq j<m\). Suppose for the sake of contradiction that this inequality does not hold. If there is some \(k\leq m-j\) for which
\[\overline{e_{j+1}\cdots e_{j+k}}\succ d_{1}\cdots d_{k},\]
then \(\sigma^{j}(\overline{\mathbf{e}})\succ\mathbf{d}\), a contradiction. Thus there is some minimal \(1\leq k\leq j\) for which
\[\overline{e_{j+1}\cdots e_{m}e_{1}\cdots e_{k}}\succ d_{1}\cdots d_{m-j+k}.\]
The previous line may be written in block form
\[1\mathbf{w}_{2-i_{\ell}}\cdots\mathbf{w}_{2-i_{n-1}}\mathbf{w}_{2}\mathbf{w}_ {0}\mathbf{w}_{2-i_{1}}\cdots\mathbf{w}_{2-i_{p}}\succ 1\mathbf{w}_{i_{1}} \cdots\mathbf{w}_{i_{q}}\]
for some \(\ell,p,q\in\{1,\ldots n\}\). In particular,
\[(0,2-i_{1},\ldots,2-i_{p-1})=(i_{q-p},i_{q-p+1},\ldots,i_{q-1})\]
and \(2-i_{p}>i_{q}\) imply
\[(2-i_{q-p},2-i_{q-p+1},\ldots,2-i_{q-1})=(2,i_{1},\ldots,i_{p-1})\]
and \(2-i_{q}>i_{p}\). Since \(\mathbf{w}_{2-i_{q-p}}=01\), there is some \(s\geq 0\) such that
\[\sigma^{s}(\overline{\mathbf{e}})=1\mathbf{w}_{2-i_{q-p+1}}\cdots\mathbf{w}_{ 2-i_{q-1}}\mathbf{w}_{2-i_{q}}\cdots\mathbf{w}_{2-i_{n}}0\succ 1\mathbf{w}_{i_{1}} \cdots\mathbf{w}_{i_{p-1}}\mathbf{w}_{i_{p}}\cdots\mathbf{w}_{i_{n}}1= \mathbf{d},\]
contrary to the assumption that \(\mathbf{d}\in\mathcal{M}\).
|
2303.01718 | Two Problems on Narayana Numbers And Repeated Digit Numbers | In this paper, we find all repdigits which can be expressed as the product of
a Narayana, and a product of two repdigits is Narayana. | G. Abou-Elela, A. Elsonbaty, M. Anwar | 2023-03-03T05:40:35Z | http://arxiv.org/abs/2303.01718v2 | ###### Abstract
###### Abstract
This work aims to solve two problems in Diophantine equation of the Narayana sequence (\(OEISA000930\)). In the first part it's proved that the Narayana number can not be factored as a product of two repdigit numbers for base \(2\leq b\leq 50\), except in two cases. In The second it has been proved that there is a finite number of solutions up to 290 to express the product of two Narayana numbers as base \(b-\) repdigits numbers, \(2\leq b\leq 50\), the proofs of these results use some number-theoretic technique includes Baker's method of linear forms in logarithms height, and some reduction technique.
**Two Problems on Narayana Numbers And Repeated Digit Numbers**
**G. Abou-Elela\({}^{1}\), A. Elsonbaty\({}^{2}\), and M. Anwar\({}^{3}\)**
\({}^{1}\) Department of Mathematics, University of Damietta
Faculty of science, Egypt
\({}^{23}\) Department of Mathematics, University of Ain Shams
Faculty of science, Egypt
e-mails:[email protected]\({}^{1}\),
[email protected]\({}^{2}\),
[email protected], [email protected]\({}^{3}\)
**keywords** : Narayana sequence, linear forms in logarithms, and Davenport's lemma
## 1 Introduction
Let \(\{N_{n}\}_{n\geq 0}\) be The Narayana sequence given by \(N_{0}=0,N_{1}=1,N_{2}=1\) and the recurrence relation
\[N_{n}=N_{n-1}+N_{n-3}\quad for\,all\,n\geq 3 \tag{1.1}\]
The first values of \(N_{k}\) are \(0,1,1,1,2,3,4,6,\cdots\). Narayana cow's sequence is a problem similar to the problem of Fibonacci rabbits as it counts calves produced every four years. This sequence (\(OEISA000930\)) appeared for the first time in the book "Ganita Kaumudi" (1365) by Indian mathematician Narayana Pandita, who gave this sequence his name, and play roles in mathematical developments such as, finding the approximate value of the square roots, investigations into the Diophantine equation \(ax^{2}+1=y^{2}\) (Pell's equation). Narayana cows sequence, also known as the supergolden sequence and the real root corresponding to the solution of the characteristic equation is known as the super golden ratio. In Pascal's triangle, starting from \(n\geq 3\) we find that the sum of its rows with triplicated diagonals is a Narayana sequence, while the sum of the rows with slops diagonals of 45 degrees express the Fibonacci sequence. This sequence plays an important role in cryptography, coding theory, and graph theory.
In this paper we determine all the solutions of the Diophantine equation
\[N_{n}N_{m}=[a,\cdots,a]_{b}=a\,(\frac{b^{l}-1}{b-1}) \tag{1.2}\]
in integers \((n,m,l,a,b)\) with \(3\leq m\leq n,2\leq b\leq 50,1\leq a\leq b-1\) and \(l\geq 2\).
and the solutions of the Diophantine equation
\[N_{k}=a_{1}a_{2}\,(\frac{b^{l_{1}}-1}{b-1})\,(\frac{b^{l_{2}}-1}{b-1}) \tag{1.3}\]
in integers \((k,b,a_{1},a_{2},l_{1},l_{2})\) with \(2\leq l_{1}\leq l_{2}\), \(1\leq a_{1}\leq a_{2}\leq b-1\), \(k\geq 3\), and \(b\geq 2\).
Many authors have studied such a diophantine equation, for example, Luca [1] showed that \(F_{5}=55\) and \(L_{5}=11\) are the largest repdigits in the Fibonacci and Lucas sequences respectively, the researchers in [5] showed that \(F_{10}=55\) and \(L_{6}=18\) it is the largest Fibonacci and Lucas number respectively that can be expressed as a product of two repdigits, the author in [6] studied the sum of three Padovan numbers as repdigits in base 10 and he found them, the researchers in [8] showed that the only Narayana numbers expressible as sums of two repdigits are \(N_{14}=88\) and \(N_{17}=277\).
In the following theorem we consider \(n\geq 3\) because \(N_{1}=N_{2}=N_{3}=1\).
**Theorem 1.1**.: _The only solution to the Diophantine equation (1.3) are_
\[N_{8}=\frac{2^{2}-1}{2-1}\frac{2^{2}-1}{2-1}=[\![11]\!]_{2}[\![11]\!]_{2}\]
\[and\,N_{16}=\frac{2^{2}-1}{2-1}\frac{2^{6}-1}{2-1}=[\![11]\!]_{2}[\![111111]\!] _{2}\]
**Theorem 1.2**.: _Let \(3\leq m\leq n,b\in\{2,3,..,50\}\), \(a\in\{1,..,b-1\}\), and \(l\geq 2\). If \(N_{n}N_{m}\)is a repdigits in base \(b\) then the only solutions are given by_
\[(n,m,l,a,b)\in\left\{\begin{array}{cccc}(5,3,2,1,2),&(6,3,2,1,3),&(9,3,3),& (4,4,2,1,3)\\ (11,9,2,1,3),&((7,3,2,1,5),&(5,4,5),&(6,4,2,1,7)\\ (10,5,2,1,7),&(8,3,2,1,8),&(5,5,2,1,8),&(7,4,2,1,11)\\ (6,5,2,1,11),&(9,3,2,1,12),&(19,6,2,1,13),&(6,6,2,1,15)\\ (8,4,2,1,17),&(7,5,2,1,17),&(10,3,2,1,18),&(7,6,2,1,23)\\ (9,4,2,1,25),&(8,5,2,1,26),&(11,3,2,1,27),&(8,6,2,1,35)\\ (7,7,2,1,35),&(10,4,2,1,37),&(9,5,2,1,38),&(12,3,2,1,40)\\ (15,10,2,1,49)\end{array}\right\}\]
\[\left\{\begin{array}{llll}(6,4,2,2,3),&(9,4,3,2,3),&(7,4,2,2,5),&(8,4,2,3,5)\\ (6,5,2,2,5),&(7,5,2,3,5),&(7,6,2,4,5),&(11,3,2,4,6)\\ (15,3,3,3,6),&(6,6,2,2,7),&(7,6,2,3,7),&(10,7,3,2,7)\\ (10,8,3,3,7),&(8,4,2,2,8),&(7,5,2,2,8),&(8,5,2,3,8)\\ (8,6,2,4,8),&(7,7,2,4,8),&(8,7,2,6,8),&(13,3,2,6,9)\\ (11,9,3,4,9),&(13,12,4,3,9),&(14,3,2,8,10),&(13,3,2,5,11)\\ (13,4,2,10,11),&(11,5,2,7,11),&(7,6,2,2,11),&(8,6,2,3,11)\\ (7,7,2,3,11),&(11,10,3,4,11),&(9,4,2,2,12),&(9,5,2,3,12)\\ (9,6,2,4,12),&(9,7,2,6,12),&(9,8,2,9,12),&(11,3,2,2,13)\\ (11,4,2,4,13),&(11,5,2,6,13),&(11,6,2,8,13),&(11,7,2,12,13)\\ (19,11,4,7,13),&(13,3,2,4,14),&(13,4,2,8,14),&(13,5,2,12,14)\\ (14,4,2,11,15),&(11,6,2,7,15),&(16,9,3,9,16),&(13,5,2,10,17)\\ (8,6,2,2,17),&(7,7,2,2,17),&(8,7,2,3,17),&(11,8,2,14,17)\\ (10,4,2,2,18),&(10,5,2,3,18),&(10,6,2,4,18),&(10,7,2,6,18)\\ (10,8,2,9,18),&(10,9,2,13,18),&(13,3,2,3,19),&(13,4,2,6,19)\\ (13,5,2,9,19),&(13,6,2,12,19),&(13,7,2,18,19),&(16,3,2,9,20)\\ (16,4,2,18,20),&(11,5,2,4,20),&(11,7,2,8,20),&(11,8,2,12,20)\\ (14,3,2,4,21),&(14,4,2,8,21),&(14,5,2,12,21),&(14,6,2,16,21)\\ (13,4,2,5,23),&(14,5,2,11,23),&(13,6,2,10,23),&(11,7,2,7,23)\\ (13,7,2,15,23),&(14,7,2,22,23),&(9,6,2,2,25),&(9,7,2,3,25)\\ (11,9,2,14,25),&(16,3,2,7,26),&(16,4,2,14,26),&(16,5,2,21,26)\\ (8,7,2,2,26),&(8,8,2,3,26),&(13,8,2,20,26),&(11,4,2,2,27)\\ (11,5,2,3,27),&(11,6,2,4,27),&(11,7,2,6,27),&(11,8,2,9,27)\\ (11,9,2,13,27),&(11,10,2,19,27),&(18,3,2,14,28),&(13,3,2,2,29)\\ (13,4,2,4,29),&(13,5,2,6,29),&(13,6,2,8,29),&(13,7,2,12,29)\\ (13,8,2,18,29),&(13,9,2,26,29),&(14,6,2,11,31),&(14,5,2,8,32)\\ (14,7,2,16,32),&(14,8,2,24,32),&(20,19,4,14,33),&(19,3,2,17,34)\\ (13,5,2,5,35),&(16,6,2,21,35),&(13,7,2,10,35),&(11,8,2,7,35)\\ (13,8,2,15,35),&(14,8,2,22,35),&(10,6,2,2,37),&(10,7,2,3,37)\\ (11,10,2,14,37),&(13,10,2,30,37),&(9,7,2,2,38),&(9,8,2,3,38)\\ (13,9,2,20,38),&(13,4,2,3,39),&(13,6,2,6,39),&(13,7,2,9,39)\\ (12,4,2,2,40),&(12,5,2,3,40),&(12,6,2,4,40),&(12,7,2,6,40)\\ (12,8,2,9,40),&(12,9,2,13,40),&(12,10,2,19,40),&(12,11,2,28,40)\\ (16,4,2,9,41),&(11,5,2,2,41),&(18,5,2,29,41),&(16,6,2,18,41)\\ (11,7,2,4,41),&(16,7,2,27,41),&(11,8,2,6,41),&(13,11,2,40,41)\\ (15,3,2,3,42),&(15,4,2,6,42),&(15,5,2,9,42),&(15,6,2,12,42)\\ (15,7,2,18,42),&(15,8,2,27,42),&(15,9,2,39,42),&(14,3,2,2,43)\\ (14,4,2,4,43),&(14,5,2,6,43),&(14,6,2,8,43),&(14,7,2,12,43)\\ (14,8,2,18,43),&(14,9,2,26,43),&(14,10,2,38,43),&(13,5,2,4,44)\\ (13,7,2,8,44),&(13,8,2,12,44),&(20,10,3,8,45),&(13,6,2,5,47)\\ (14,7,2,11,47),&(13,11,2,35,47),&(11,1,2,16,48),&(19,5,2,35,50)\end{array}\right\}\]
## 2 Preliminary
### Narayana sequence
The characteristic equation corresponding to the third-order linear recurrence relation (1.1) is \(x^{3}-x^{2}-1\), this equation has roots \(\alpha\),\(\beta\), and \(\gamma=\bar{\beta}\) where
\[\alpha=\frac{2+r_{1}+r_{2}}{6},\beta=\frac{4-(1+\sqrt{-3})r_{1}-(1-\sqrt{-3})r _{2}}{12}\]
and
\[r_{1}=\sqrt[3]{116-12\sqrt{93}},r_{2}=\sqrt[3]{116+12\sqrt{93}}\]
Furthermore, the Bient formula is
\[N_{n}=a_{1}\alpha^{n}+a_{2}\beta^{n}+a_{3}\gamma^{n}\quad for\,all\,n\geq 0\]
The initial conditions \(N_{0}=0,N_{1}=1\) and \(N_{2}=1\) imply that
\[a_{1}=\frac{\alpha}{(\alpha-\beta)(\alpha-\beta)},a_{2}=\frac{\beta}{(\beta- \gamma)(\beta-\alpha)},a_{3}=\frac{\gamma}{(\gamma-\alpha)(\gamma-\beta)}\]
The above like Bient formula can also be written as
\[N_{n}=c_{\alpha}\alpha^{n+2}+c_{\beta}\beta^{n+2}+c_{\gamma}\gamma^{n+2}\]
where,
\[c_{t}=\frac{1}{t^{3}+2}\quad,t\in\{\alpha,\beta,\gamma\}\]
It's easy to verify the following inequalities approximations
\[\begin{array}{c}1.45<\alpha<1.5\\ \\ 0.82<|\gamma|=|\beta|<0.83\\ \\ 5<c_{\alpha}^{-1}<5.15\\ \\ |c_{\beta}|\simeq 0.4075\\ \\ |\xi(n)|<\frac{1}{2}\quad where\,\xi(n)=c_{\beta}\beta^{n+2}+c_{\gamma}\gamma^{n +2}\end{array} \tag{2.1}\]
By induction over \(n\), it is easy to prove the relation between Narayana and \(\alpha\)
\[\alpha^{n-2}\leq\ N_{n}\leq\alpha^{n-1}\quad for\,all\,n\geq 0 \tag{2.2}\]
We have
\[2^{l-1}\leq b^{l-1}\leq a\frac{b^{l}-1}{b-1}=N_{n}N_{m}\leq\alpha^{n+m-2}\leq \alpha^{2n-2}\leq(1.5)^{2n-2}\]
\[l\leq(2n-2)\frac{\log 1.5}{\log 2}+1<2n-1\]
and,
\[(1.45)^{n-2}<\alpha^{n-2}<N_{n}<N_{n}N_{m}=a\frac{b^{l}-1}{b-1}<b^{l}<(50)^{l}\]
\[n<l\frac{\log 10}{\log 1.45}+2<11\,l+2\]
Similarly, we have
\[2^{l_{1}-1}<b^{l_{1}-1}<\frac{b^{l_{1}}-1}{b-1}<a_{1}a_{2}\frac{(b^{l_{1}}-1)(b^{l _{2}}-1)}{(b-1)^{2}}=N_{k}<\alpha^{k-1}\]
\[l_{1}<(k-1)\frac{\log\alpha}{\log 2}+1<k\]
and
\[\alpha^{k-2}<N_{k}=a_{1}a_{2}\frac{(b^{l_{1}}-1)(b^{l_{2}}-1)}{(b-1)^{2}}<(b^{l _{2}}-1)^{2}<b^{2l_{2}}<50^{2l_{2}}\]
\[k <2l_{2}\frac{\log 50}{\log\alpha}+2 \tag{2.3}\] \[<22l_{2}+2\]
### Linear forms in logarithms of real algebraic number
Let \(\psi\) be an algebraic number of degree \(d\) with minimal polynomial over \(\mathbb{Z}\)
\[f(X)=a_{0}\prod_{i=1}^{d}(X-\psi^{(i)}).\]
where \(a_{0}>0\) is leading coefficient, and \(\psi^{(i)}\)'s are the conjugates of \(\psi\). The logarithmic height of \(\psi\)\([[1],Def.\,2.2.8]\) is defined by
\[h(\psi)=\frac{1}{d}(\log a_{0}+\sum_{i=1}^{d}\log\max\{|\psi^{(i)}|,1\}).\]
and the following properties hold:
\[h(\psi\pm\gamma) \leq h(\psi)+h(\gamma)+\log 2 \tag{2.4}\] \[h(\psi\gamma^{\pm 1}) \leq h(\psi)+h(\gamma)\] \[h(\psi^{s}) =|s|h(\psi)\qquad(s\in\mathbb{Z})\]
**Theorem 2.1** ((Matveev),[3]).: _Let \(\psi_{1},\ldots\psi_{t}\) be positive real algebraic numbers, \(\mathbb{K}\) be a number field of degree D over over \(\mathbb{Q}\), and \(r_{1},\ldots,r_{t}\) integers. Let_
\[\Lambda=\psi_{1}^{r_{1}}\cdots\psi_{t}^{r_{t}}\]
_let \(B\geq\max\{|r_{1}|,\cdots|r_{t}|\}\) and \(A_{j}\geq\max\{Dh(\psi_{j}),|\log\psi_{j}|,0.16|\}\) if \(\Lambda\neq 0\), then \(\log|\Lambda|>-1.4\times 30^{t+3}\times t^{4.5}\times D^{2}(1+\log D)(1+\log B)A_{1} \cdots A_{t}\)._
**Lemma 2.2**.: \([[4],Lemma\,7]\) _If \(m\geq 1\), \(T>(4m^{2})^{m}\) and \(T>\frac{x}{\log^{m}x}\), then \(x<2^{m}T\log^{m}T\)._
This **lemma** will be used to reduce the upper bound for variables, and we will define \(\|X\|=\min\{|X-n|:n\in\mathbb{Z}\}\) be the distance from \(X\) to the nearest integer.
**Lemma 2.3**.: _((Dujella- petho),[1], \(Lemma\,2.3.1\)) Let M be a positive integer such that \(q>6M\), since \(\frac{p}{q}\) is a convergent of the irrational number \(\tau\), let A,B, and \(\mu\) be some real numbers with \(A>0\),\(B>1\) and \(\epsilon=\|\mu q\|-M\|\tau q\|\). if \(\epsilon>0\), then there is no solution to the inequality_
\[0<|u\tau-v+\mu|<AB^{-w}\]
_in positive integers u,v and w with_
\[u\leq M\quad\text{and}\quad w\geq\frac{\log(Aq/\epsilon)}{\log B}\]
**Lemma 2.4**.: _(i) [(Legender),[1],Theorem\(\,1.3.3\)] Let \(\tau\) be an irrational number such that_
\[|\tau-\frac{x}{y}|<\frac{1}{2y^{2}}\]
_then \(\frac{x}{y}\) is a convergent of \(\tau\)._
_(ii) If_ \(y<q_{k+1}\) _then_
\[\frac{1}{(g+2)y^{2}}<|\tau-\frac{x}{y}|\]
\(g=max\{g_{i}\,:j\leq k+1\}\)_._
## 3 Proof of theorem 1.1
### Bounding on \(l_{1}\)
From equation(1.3), we obtain that
\[c_{\alpha}\alpha^{k+2}-\frac{a_{1}a_{2}b^{l_{1}+l_{2}}}{(b-1)^{2}}=-\xi(k)- \frac{a_{1}a_{2}b^{l_{1}}}{(b-1)^{2}}-\frac{a_{1}a_{2}b^{l_{2}}}{(b-1)^{2}}+ \frac{a_{1}a_{2}}{(b-1)^{2}}\]
Taking absolute values in the above equation, using inequalities (2.1),(2.2) and dividing both sides by \(|\frac{a_{1}a_{2}b^{l_{1}+l_{2}}}{(b-1)^{2}}|\), we get
\[|c_{\alpha}\alpha^{k+2}-\frac{a_{1}a_{2}b^{l_{1}+l_{2}}}{(b-1)^{2 }}| <\frac{1}{2}+b^{l_{1}}+b^{l_{2}}+1\] \[<\frac{3}{2}+2b^{l_{2}}\] \[|\frac{c_{\alpha}\alpha^{n+2}(b-1)^{2}}{a_{1}a_{2}b^{l_{1}+l_{2} }}-1| <\frac{3(b-1)^{2}}{2a_{1}a_{2}b^{l_{1}+l_{2}}}+\frac{2(b-1)^{2}}{a_{1}a_{2}b ^{l_{1}}}\] \[<\frac{3(b-1)^{2}}{b^{l_{1}}}+\frac{2(b-1)^{2}}{b^{l_{1}}}\] \[<\frac{3b^{2}}{b^{l_{1}}}+\frac{2b^{2}}{b^{l_{1}}}\] \[<\frac{5}{b^{l_{1}-2}}\]
Put
\[\Lambda_{3}=\frac{c_{\alpha}\alpha^{n+2}(b-1)^{2}}{a_{1}a_{2}b^{l_{1}+l_{2}}}-1\]
we have
\[|\Lambda_{3}|<\frac{5}{b^{l_{1}-2}} \tag{3.1}\]
and \(\log|\Lambda_{3}|<\log 5-(l_{1}-2)\log b\) Now, we apply matveev theorem, where
\[\begin{array}{ccc}\psi_{1}=\alpha&\psi_{2}=b&\psi_{3}=\frac{c_{\alpha}(b-1)^ {2}}{a_{1}a_{2}}\\ r_{1}=(k+2)&r_{2}=-(l_{1}+l_{2})&r_{3}=1\end{array}\]
Similarly we can prove that \(\Lambda_{3}\neq 0\), moreover using properties of logarithmic height (2.4), we obtain
\[h(\psi_{3}) <h(c_{\alpha})+h(\frac{b-1}{a_{1}})+h(\frac{b-1}{a_{2}})\] \[<\frac{\log 31}{3}+2\log(b-1)\] \[<3\log b\]
Thus, we can take \(A_{1}=\log\alpha\),\(A_{2}=3\log b\), \(A_{3}=9\log b\), \(B=22l_{2}+4\) since \(k<22l_{2}+2\) and \(\mathbb{K}=\mathbb{Q}(\alpha)\) thus \(D=3\), and then from theorem (2.1) we get
\[\log\Lambda_{3}>-1.4\cdot 30^{6}\cdot 3^{4.5}\cdot 3^{5}(1+\log 3)(1+\log(22l_{2 }+4))\log\alpha\log^{2}b\]
Now we compare the lower bound for \(\log\Lambda_{3}\) with the upper bound of \(\log\Lambda_{3}\). Since \((1+\log(22l_{2}+4))<8\log(l_{2})\) for all \(l_{2}\geq 2\), a computer search with Mathematica gives us that
\[l_{1}<3\times 10^{14}\log l_{2}\log b \tag{3.2}\]
### Bounding on \(l_{2}\)
Let
\[\frac{N_{k}}{\frac{a_{1}(b^{l_{1}}-1)}{b-1}} =\frac{a_{2}(b^{l_{2}}-1)}{b-1}\] \[\frac{c_{\alpha}\alpha^{k+2}(b-1)}{a_{1}(b^{l_{1}}-1)}-\frac{a_{2 }b^{l_{2}}}{b-1} =\frac{-\xi(k)(b-1)}{a_{1}(b^{l_{1}}-1)}-\frac{a_{2}}{b-1}\]
Taking absolute values in the above equation and dividing both sides by \(|\frac{a_{2}b^{l_{2}}}{b-1}|\), we get
\[|\frac{c_{\alpha}\alpha^{k+2}(b-1)}{a_{1}(b^{l_{1}}-1)}-\frac{a_{ 2}b^{l_{2}}}{b-1}| <\frac{(b-1)}{2a_{1}(b^{l_{1}}-1)}+1\] \[|\frac{\alpha^{k+2}c_{\alpha}b^{-l_{2}}(b-1)^{2}}{a_{1}a_{2}(b^{ l_{1}}-1)}-1| <\frac{(b-1)^{2}}{a_{1}a_{2}b^{l_{2}}(b^{l_{1}}-1)}+\frac{b-1}{a_{ 2}b^{l_{2}}}\] \[<\frac{(b-1)^{2}}{b^{l_{2}}}+\frac{b-1}{b^{l_{2}}}\] \[<\frac{b^{2}}{b^{l_{2}}}+\frac{b}{b^{l_{2}}}\]
\[|\frac{\alpha^{k+2}c_{\alpha}b^{-l_{2}}(b-1)^{2}}{a_{1}a_{2}(b^{l_{1}}-1)}-1)|< \frac{2}{b^{l_{2}-2}} \tag{3.3}\]
Put \(\Lambda_{4}=\frac{\alpha^{k+2}c_{\alpha}b^{-l_{2}}(b-1)^{2}}{a_{1}a_{2}(b^{l_{1} }-1)}\), we have
\[\log|\Lambda_{4}|<\log 2-(l_{2}-2)\log b \tag{3.4}\]
Now, we apply matveev theorem (2.1), where
\[\psi_{1}=\alpha\qquad\psi_{2}=b\quad\psi_{3}=\frac{c_{\alpha}(b-1 )^{2}}{a_{1}a_{2}(b^{l_{1}}-1)}\] \[r_{1}=k+2\quad r_{2}=-l_{2}\qquad\quad r_{3}=1\]
Similarly we can prove that \(|\Lambda_{4}|\neq 0\), moreover using properties of logarithmic height (2.4)
\[h(\psi_{3}) <h(c_{\alpha})+h(\frac{b-1}{a_{1}})+h(\frac{b-1}{a_{2}})+h(b^{l_ {1}}-1)\] \[<\frac{\log 31}{3}+2\log(b-1)+l_{1}\log b\] \[<3\log b+l_{1}\log b\]
thus, we can take \(A_{1}=\log\alpha\),\(A_{2}=3\log b\), \(A_{3}=3(4\log b+l_{1}\log b)\) and \(B=22l_{2}+4\)
\[\log\Lambda_{4}>-1.4\cdot 30^{6}\cdot 3^{4.5}\cdot 3^{4}\log\alpha(1+\log 3)(1+ \log(12l_{2}+2))(4\log b+l_{1}\log b) \tag{3.5}\]
from (3.2),(3.4) and (3.5) we deduce that
\[l_{2}<2\times 10^{28}\log b\log^{2}l_{2}\]
Now we apply lemma (2.2), since \(2\times 10^{28}\log^{2}(l_{2})\log b>(16)^{2}\), we obtain
\[\frac{l_{2}}{\log^{2}l_{2}} <2\times 10^{28}\log b\] \[l_{2} <2^{2}\cdot 2\cdot 10^{28}\log b(\log(2\times 10^{28}\log b))^{2}\] \[<10^{29}\log(b)(66+\log\log b)^{2}\] \[<10^{33}\log^{3}b\]
since \((66+\log\log b)^{2}<95^{2}\log^{2}b\) for every \(b\geq 2\). from (2.3), we find that \(k<2.3\times 10^{34}\log^{3}b\).
### Reduction of The upper bound on \(l_{1}\)
Let \(z_{3}=(n+2)\log\alpha-(l_{1}+l_{2})\log b+\log\frac{(b-1)^{2}c_{\alpha}}{a_{1} a_{2}}\), if \(z_{3}>0\) then \(z_{3}<|e^{z_{3}}-1|\) and \(|z_{3}|<2|e^{z_{3}}-1|\,ifz_{3}<0\),Thus in both side we have, \(|z_{3}|<2|e^{z_{3}}-1|\). By substituting into the equation (3.1), dividing both by \(\log b\), we have
\[|(k+2)\log\alpha-(l_{1}+l_{2})\log b+\log(\frac{(b-1)^{2}c_{\alpha}}{a_{1}a_ {2}})| <\frac{10}{b^{l_{1}-2}}\]
\[|(k+2)\frac{\log\alpha}{\log b}-(l_{1}+l_{2})+\frac{\log(\frac{(b-1)^{2}c_{\alpha }}{a_{1}a_{2}})}{\log b}|<\frac{15}{b^{l_{1}-2}} \tag{3.6}\]
Since \(\frac{1}{\log 2}=1.4427\). Let \(\tau=\frac{\log\alpha}{\log b},\mu=\frac{\log(\frac{(b-1)^{2}c_{\alpha}}{a_{1} a_{2}})}{\log b}\) and \(M=1.3\times 10^{34}\log^{3}b\), at all \(b\in\{2,3,\cdots,50\}\) and \(a_{1},a_{2}\in\{1,\cdots,b-1\}\), a computer search with Mathematica find that \(\varepsilon>0\) for all, so we apply lemma (2.3), let \(A=15\) and \(B=b\), we can say that if the inequality (3.6) has a solution then \(l_{1}-2\leq\max(\frac{\log(\frac{Aq_{k}}{\varepsilon})}{\log B})\leq 120\), hence \(l_{1}\leq 122\).
### Reduction of The upper bound on \(l_{2}\)
Let \(z_{4}=(k+2)\log\alpha-l_{2}\log b+\log\frac{c_{\alpha}(b-1)^{2}}{a_{1}a_{2}(b ^{l_{1}}-1)}\), if \(z_{4}>0\) then \(z_{4}<|e^{z_{4}}-1|\) and \(|z_{4}|<2|e^{z_{4}}-1|\,ifz_{4}<0\), thus in both side we have, \(|z_{4}|<2|e^{z_{4}}-1|\). By substituting into the equation (3.3) and dividing both by \(\log b\), we have
\[|(k+2)\frac{\log\alpha}{\log b}-l_{2}+\frac{\log(\frac{c_{\alpha }(b-1)^{2}}{a_{1}a_{2}(b^{l_{1}}-1)})}{\log b}| <\frac{4}{\log b^{l_{2}-2}}\] \[<\frac{6}{b^{l_{2}-2}}\]
Let \(\tau=\frac{\log\alpha}{\log b},\mu=\frac{\log(\frac{c_{\alpha}(b-1)^{2}}{a_{1 }a_{2}(b^{l_{1}}-1)})}{\log b}\) and \(M=1.3\times 10^{34}\log^{3}b_{1}\) at all \(b\in\{2,3,\cdots,10\}\), \(a_{1},a_{2}\in\{1,\cdots,b-1\}\),and \(l_{1}\in\{1,\cdots,122\}\), a computer search with Mathematica founds that \(\varepsilon>0\) for all, so we apply lemma (2.3), let \(A=6\) and \(B=b\), we can say that if the inequality (3.6) has a solution then \(l_{2}-2\leq\max(\frac{\frac{Aq_{k}}{\varepsilon})}{\log B})\leq 131\), hence \(l_{2}\leq 133\), then \(k<1598\).
## 4 Proof of theorem 1.2
### Bounding on m
From equation (1.2), we obtain that
\[c_{\alpha}^{2}\alpha^{n+m+4}-\frac{ab^{l}}{b-1}=-\xi(m)c_{\alpha}\alpha^{n+2 }-\xi(n)c_{\alpha}\alpha^{m+2}-\xi(n)\xi(m)-\frac{a}{b-1}\]
Taking absolute values in the above equation, using inequalities (2.1) and dividing both sides by \(|c_{\alpha}^{2}\alpha^{n+m+4}|\), one gets
\[\left|c_{\alpha}^{2}\alpha^{n+m+4}-\frac{ab^{l}}{b-1}\right| <\frac{c_{\alpha}\alpha^{n+2}}{2}+\frac{c_{\alpha}\alpha^{m+2}}{2} +\frac{5}{4}\] \[\left|1-\frac{ab^{l}}{c_{\alpha}^{2}\alpha^{n+m+4}(b-1)}\right| <\frac{1}{2c_{\alpha}\alpha^{m+2}}+\frac{1}{2c_{\alpha}\alpha^{n +2}}+\frac{5}{4c_{\alpha}^{2}\alpha^{n+m+4}}\] \[<\frac{1}{c_{\alpha}\alpha^{m+2}}+\frac{5}{4c_{\alpha}^{2}\alpha ^{m+2}}\] \[<\frac{39}{\alpha^{m}}\]
Put
\[\Lambda_{1}:=\frac{ab^{l}}{c_{\alpha}^{2}\alpha^{n+m+4}(b-1)}-1\]
we have
\[|\Lambda_{1}|<\frac{39}{\alpha^{m}}\quad and\,\log|\Lambda_{1}|<\log(39)-m \log(\alpha) \tag{4.1}\]
Now, we apply the Matveev theorem, where
\[\psi_{1}=\alpha \psi_{2}=b \psi_{3}=\frac{a}{c_{\alpha}^{2}(b-1)}\] \[r_{1}=-(n+m+4) r_{2}=l r_{3}=1\]
First, we show that \(\Lambda_{1}\neq 0\). If \(\Lambda_{1}=0\), then \(\frac{ab^{l}}{b-1}=c_{\alpha}^{2}\alpha^{n+m+4}\). Consider the automorphism \(\sigma(c_{\alpha})=c_{\beta}\).Then \(|c_{\beta}^{2}\beta^{n+m+4}|<|c_{\beta}^{2}|<1\), while the right-hand side is greater than \(4\) which is a contradiction, moreover using properties of logarithmic height (2.4), we obtain
\[h(\psi_{1}) =\frac{\log(\alpha)}{3},h(\psi_{2})=\log(b)\] \[h(\psi_{3}) <h(\frac{a}{b-1})+h(c_{\alpha}^{2})\] \[<\log(b-1)+\frac{2\log(31)}{3}\] \[<\log(b)+3.4\log(b)\] \[<4.5\log(b)\]
since the minimal polynomial of \(c_{\alpha}\) is given by \(31x^{3}-31x^{2}+10x-1\). We take \(B=2n+4\), \(A_{1}=\log(\alpha)\), \(A_{2}=3\log(b)\), \(A_{3}=13.5\log(b)\), we take \(\mathbb{K}=\mathbb{Q}(\alpha)\), thus \(D=3\).
Now from theorem (2.1), we get the following
\[\log|\Lambda_{1}|>-1.4\cdot 30^{6}\cdot 3^{4.5}\cdot 3^{3}\cdot 13.5\left(1+ \log(3)\right)\left(1+\log(2n+4)\right)\,\log(\alpha)\log^{2}(b)\]
Now we compare the lower bound for \(\log|\Lambda_{1}|\) with the upper bound of \(\log|\Lambda_{1}|\). Since \(1+log(2n+4)<5log(n)\) for all \(n\geq 3\), a computer search with Mathematica gives us that
\[m<1.7\times 10^{15}\log(n)\log^{2}(b) \tag{4.2}\]
### Bounding on n
Let
\[N_{n} =\frac{a}{N_{m}}\frac{b^{l}-1}{b-1}\] \[c_{\alpha}\alpha^{n+2}-\frac{ab^{l}}{N_{m}(b-1)} =-\xi(n)-\frac{a}{N_{m}(b-1)}\]
Taking absolute values in the above equation, using inequalities (2.1),(2.2) and dividing both sides by \(|c_{\alpha}\alpha^{n+2}|\), we get
\[\Big{|}c_{\alpha}\alpha^{n+2}-\frac{ab^{l}}{N_{m}(b-1)}\Big{|} <|\xi(n)|+|\frac{a}{N_{m}(b-1)}|\] \[<\frac{1}{2}+\frac{1}{\alpha^{m-2}}\] \[\Big{|}1-\frac{ab^{l}}{N_{m}c_{\alpha}\alpha^{n+2}(b-1)}\Big{|} <\frac{1}{2c_{\alpha}\alpha^{n+2}}+\frac{1}{c_{\alpha}\alpha^{n+m}} \tag{4.3}\] \[<\frac{1}{2c_{\alpha}\alpha^{n}}+\frac{1}{c_{\alpha}\alpha^{n}}\] \[<\frac{11}{\alpha^{n}}\]
Put
\[\Lambda_{2}:=\frac{ab^{l}}{N_{m}c_{\alpha}\alpha^{n+2}(b-1)}-1\]
we have
\[|\Lambda_{2}|<\frac{11}{\alpha^{n}} \tag{4.4}\]
and \(\log|\Lambda_{2}|<\log(11)-n\log(\alpha)\). Now, we apply matveev theorem (2.1), where
\[\psi_{1}=\alpha \psi_{1}=b \psi_{1}=\frac{a}{N_{m}c_{\alpha}(b-1)}\] \[r_{1}=-(n+2) r_{2}=l r_{3}=1\]
Similarly we can prove that \(\Lambda_{2}\neq 0\), moreover using properties of logarithmic height (2.4), we obtain
\[h(\psi_{3}) <h(\frac{a}{b-1})+h(c_{\alpha})+h(N_{m})\] \[<\log(b-1)+\frac{\log(31)}{3}+m\log(\alpha)\] \[<\log(b)+1.2\log(b)+m\log(\alpha)\] \[<2.3\log(b)+m\log(\alpha)\]
we take \(B=2n+2\), \(A_{1}=\log(\alpha)\), \(A_{2}=3\log(b)\,,A_{3}=3(2.3\log(b)+m\log(\alpha))\), \(\mathbb{K}=\mathbb{Q}(\alpha)\) thus \(D=3\), from theorem (2.1) we get
\[\log|\Lambda_{2}|>-1.4\cdot 30^{6}\cdot 3^{4}\cdot 3^{4}\,\log(\alpha)\,\log(b) \,(1+\log(3))\,(1+\log(2n+2))\,(2.3\,\log(b)+m\,\log(\alpha)).\]
Now we compare the lower bound for \(\log|\Lambda_{2}|\) with the upper bound of \(\log|\Lambda_{2}|\) and using (4.2), a computer search with Mathematica gives us that
\[n <7.6\times 10^{28}\,\log^{2}n\,\log^{3}b\] \[\frac{n}{\log^{2}(n)} <7.6\times 10^{28}\log^{3}b\]
Now we apply lemma (2.2), since \(7.6\times 10^{28}\log^{3}(b)>(16)^{2}\), we obtain
\[n <2^{2}\cdot 7.6\cdot 10^{28}\log^{3}(b)(\log(7.6\times 10^{28}\log^{ 3}b))^{2}\] \[<3.04\times 10^{29}\log^{3}b(66.6+3\log\log b)^{2} \tag{4.5}\] \[<3.04\times 10^{29}\log^{3}b(96.1\,\log b+3\log b)^{2}\] \[<2.99\times 10^{33}\log^{5}b\]
since \(\,\log\log b<\log b\) for every \(b\geq 2\) and \(\frac{1}{\log 2}\simeq 1.4427\).
### Reduction of The upper bound on m
Let \(z_{1}=l\log(b)-(n+m+4)\log\alpha+\log(\frac{a}{(b-1)c_{\alpha}^{2}})\), if \(z_{1}>0\) then \(z_{1}<|e^{z_{1}}-1|\) and \(|z_{1}|<2|e^{z_{1}}-1|\,if\,z_{1}<0\), thus in both side we have, \(|z_{1}|<2|e^{z_{1}}-1|\). By substituting into the equation (4.1), we have
\[|l\log b-(n+m+4)\log(\alpha)+\log(\frac{a}{(b-1)c_{\alpha}^{2}})|<\frac{78}{ \alpha^{m}}\]
Dividing this inequality by \(|\log\alpha|\), we get
\[|l\frac{\log b}{\log\alpha}-(n+m+4)+\frac{\log(\frac{a}{c_{\alpha}^{2}(b-1)}) }{\log\alpha}|<\frac{210}{\alpha^{m}} \tag{4.6}\]
Let \(\tau=\frac{\log(b)}{\log\alpha},\mu=\frac{\log(\frac{a}{c_{\alpha}^{2}(b-1)}) }{\log\alpha}\) and \(M=5.98\times 10^{33}\log^{5}b\). For all \(b\in\{2,3,\cdots,50\}\) and \(a\in\{1,2,\cdots,b-1\}\), we need to calculate a convergent \(\frac{p_{k}}{q_{k}}\) such that \(q_{k}>6M\), furthermore computing \(\varepsilon=\|\mu q_{k}\|-M\|\tau q_{k}\|\), a computer search with Mathematica find that \(\varepsilon>0\) for all, so we can apply lemma (2.3), let \(A=210\), and \(B=\alpha\), we can say that if the inequality (4.6) has a solution then \(m\leq\max\left(\frac{\log(\frac{Aq_{k}}{\varepsilon})}{\log B}\right)\leq 261\).
### Reduction of The upper bound on n
Let \(z_{2}=l\log b-(n+2)\log\alpha+\log(\frac{a}{N_{m}c_{\alpha}(b-1)})\), substituting into the equation (4.4), we have
\[\Big{|}l\frac{\log b}{\log\alpha}-(n+2)+\frac{\log(\frac{a}{N_{m}c_{\alpha}(b -1)})}{\log\alpha}\Big{|}<\frac{32}{\alpha^{n}} \tag{4.7}\]
Let \(\tau=\frac{\log b}{\log\alpha}\), \(\mu=\frac{\log(\frac{a}{N_{m}c_{\alpha}(b-1)})}{\log\alpha}\) and \(M=5.98\times 10^{33}\log^{5}b\), at all \(b\in\{2,3,\cdots,50\}\), \(a\in\{1,2,\cdots,b-1\}\) and \(m\in\{3,\cdots,261\}\), a computer search with Mathematica find that \(\varepsilon>0\) for all except \((b,a,m)=\{(b,b-1,3)for\,all\,b=2,\cdots,50\}\), in addition \(\text{to}\{(2,1,4),(2,1,6),(3,2,5),(3,2,8),(4,3,6),(6,5,7),(9,8,8),(13,12,9)\)
\(,(19,18,10),(28,27,11),(41,40,12)\}\). We apply lemma (2.3) in case \(\varepsilon>0\), let \(A=32\) and \(B=\alpha\), we can say that if the inequality (4.7) has a solution then \(n\leq\max(\frac{\frac{\log(\frac{Aq_{k}}{\varepsilon})}{\log B}}{\leq 290})\leq 290\), in other cases we apply Lemma (2.4),
\[\Big{|}\frac{\log b}{\log\alpha}-\frac{(n+2)-\frac{\log(\frac{a}{N_{m}c_{\alpha }(b-1)})}{\log\alpha}}{l}\Big{|}<\frac{32}{\alpha^{n}l} \tag{4.8}\]
now assume that \(n\) is so large the right hand side of the inequality (4.8) is smaller than \(\frac{1}{2l^{2}}\) holds if \(\alpha^{n}>64l\), which by Lemma (2.4), implies that the fraction \(\frac{\log b}{\log\alpha}\) is a convergent of \(\frac{(n+2)-\frac{\log(\frac{1}{N_{m}c_{\alpha}})}{\log\alpha}}{l}\), since in all case \(a=b-1\), for each \((b,a,m)\) which have \(\varepsilon<0\), we calculate the continued fraction expantion of \(\tau\) and find \(g=max\{g_{i}\,:j\leq k+1\}\). since
\[\frac{1}{(g+2)l^{2}}<\big{|}\frac{\log b}{\log\alpha}-\frac{(n+2)-\frac{\log( \frac{a}{N_{m}c_{\alpha}(b-1)})}{\log\alpha}}{l}\big{|}<\frac{32}{\alpha^{n}l}\]
\[\alpha^{n} <32(g+2)l\] \[n <\frac{\log(32(g+2)l)}{\log\alpha}\] \[<\frac{\log(32\times 5.98\times 10^{33}\log^{5}b(g+2))}{\log\alpha}\]
we found \(n\leq 239\), therefore \(n\leq 290\) in both cases.
We conclude all solutions \((n,m,l,a,b)\) to the Diophantine equation (1.2) \(3\leq m\leq n,2\leq b\leq 50,1\leq a\leq b-1\) and \(l\geq 2\), reduce to the rang \(3\leq n\leq 264\), with the help of Mathematica, we compute all solution in specified range, we conclude theorem (1.2).
|
2303.16264 | Mid-Infrared Observations of the Giant Planets | The mid-infrared spectral region provides a unique window into the
atmospheric temperature, chemistry, and dynamics of the giant planets. From
more than a century of mid-infrared remote sensing, progressively clearer
pictures of the composition and thermal structure of these atmospheres have
emerged, along with a greater insight into the processes that shape them. Our
knowledge of Jupiter and Saturn has benefitted from their proximity and
relatively warm temperatures, while the details of colder and more distant
Uranus and Neptune are limited, as these planets remain challenging targets. As
the timeline of observations continues to grow, an understanding of the
temporal and seasonal variability of the giant planets is beginning to develop,
with promising new observations on the horizon. | Michael T. Roman | 2023-03-28T19:24:18Z | http://arxiv.org/abs/2303.16264v2 | # Mid-Infrared Observations of the Giant Planets
###### Abstract
The mid-infrared spectral region provides a unique window into the atmospheric temperature, chemistry, and dynamics of the giant planets. From more than a century of mid-infrared remote sensing, progressively clearer pictures of the composition and thermal structure of these atmospheres have emerged, along with a greater insight into the processes that shape them. Our knowledge of Jupiter and Saturn has benefitted from their proximity and relatively warm temperatures, while the details of colder and more distant Uranus and Neptune are limited as these planets remain challenging targets. As the timeline of observations continues to grow, an understanding of the temporal and seasonal variability of the giant planets is beginning to develop with promising new observations on the horizon.
giant planets; atmospheres; dynamics; atmospheres; chemistry Article
## 1 Introduction
The mid-infrared region of the electromagnetic spectrum provides a unique and important window into the atmospheric physics and chemistry of the giant planets. Linking the near- and far-infrared, it spans a range of wavelengths (variously defined), over which the dominant source of planetary radiation transitions from scattered sunlight to intrinsic thermal emission. As the scattered solar component fades with increasing wavelength, the various features and colors that define the planets' appearances in visible and near-infrared images give way to distinct thermal structures shaped by the temperatures and chemistry of these atmospheres. Against this changing backdrop of scattered and emitted radiation, numerous molecules leave their distinct spectral signatures, indicative chemical abundances, kinetic temperatures, and ambient pressures. The observation and analysis of reflected and radiant energy can thus be used to reveal the composition, temperature, and structure of a planetary atmosphere from afar, providing remote measurements of fundamental properties largely inaccessible by other means.
From more than a century of mid-IR observations, a rich picture of the four giant planets' atmospheres has emerged. Now, with the anticipated results from the new JWST promising to revise our knowledge of these planets in the years ahead [1], we use this opportunity to look back and take stock of the field. In this review, we examine remote sensing of the Solar System's giant planets across the mid-infrared. We trace an observational history from its modest beginnings to present-day efforts, highlighting what we have learned along the way and what questions remain for future work.
### 1.1 The Mid-Infrared
Infrared radiation (IR) occupies the region of the electromagnetic spectrum between visible light and radio waves (specifically, microwaves), corresponding to wavelengths from
about 750 nanometers to 1 millimeter. In the modern literature, it is commonly divided into three subdivisions--near-, mid-, and far-infrared (see Figure 1). The precise boundaries of these divisions are generally not agreed upon and differ widely across various disciplines and applications. The International Commission on Illumination (CIE)1, for example, defines the mid-IR as radiation with wavelengths of only 1.4 to 3 microns, while the International Organization for Standardization (ISO)2 adopts a much broader range for the mid-IR, spanning from 3 to 50 \(\upmu\)m. In some engineering literature, the infrared is divided into finer regions, classified as short-wave (1-3 \(\upmu\)m), mid-wave (3-5 \(\upmu\)m), long-wave (8-12 \(\upmu\)m), and very-long-wave (12-30 \(\upmu\)m) infrared (e.g., [2]), with significant variation in the defined demarcations.
Footnote 1: _International Standard CIE S 017:2020 ILV: International Lighting Vocabulary_, 2nd edition
Footnote 2: _ISO 20473:2007, Optics and Photonics—Spectral bands_
In astronomy and planetary science literature, the mid-infrared typically refers to wavelengths between roughly 5 \(\upmu\)m and 20 to 30 \(\upmu\)m [7; 8; 12; 13]. These bounds are a natural consequence of practical constraints, namely astronomical detector technology and the transparency of Earth's atmosphere. This adopted lower limit around 5 \(\upmu\)m roughly coincides with the longest wavelengths detected by most common near-infrared detectors, typically composed of indium antimonide (InSb) or mercury-cadmium-telluride (HgCdTe) (see Figure 1). At longer wavelengths, arsenic- or antimony-doped silicon (Si:As and Si:Sb) Impurity Band Conduction (IBC) detectors are typically employed, sensitive to ranges of \(\sim\)6-27 \(\upmu\)m and \(\sim\)14-38 \(\upmu\)m, respectively, followed by germanium photoconductive detectors and bolometers in the far-infrared
Figure 1: Idealized blackbody emittance of the giant planets compared to the telluric atmospheric transmission and typical detector sensitivities across the infrared. The assumed boundaries of the near-, mid-, and far-infrared regions are indicated. Colored curves show the black body spectral radiant emittance for the effective temperatures of the giant planets, accounting for their distances, scaled and labeled for clarity. Jupiter and Saturn peak in the mid-infrared, while Uranus and Neptune peak in the far-infrared. The atmospheric transmission is indicated by the blue–gray interface varying between 100% (full transmission) and 0% (total attenuation) from the top of the atmosphere down to a surface altitude of 2,640 m (corresponding to the altitude of the Very Large Telescope (VLT) at Cerro Paranal, with a precipitable water vapor (PWV) of 1.66 mm at an air mass of 1.15 [3; 4]). Characteristic ranges for various thermal detectors are shown in purple [5; 6; 7; 8; 9; 10; 11].
[5; 6; 7; 8; 9; 10; 11]. Case in point, the JWST Near Infrared Camera (NIRCam) instrument uses HgCdTe detectors for the 0.6-5 \(\upmu\)m region, while the JWST Mid-Infrared Instrument (MIRI) uses Si:As detectors to measure radiation from 5 to 28 \(\upmu\)m [14; 15; 16]. As discussed in the next section, the so-called atmospheric window of infrared transparency provides a natural upper boundary to the mid-infrared around 30 \(\upmu\)m, beyond which little infrared radiation is transmitted through the atmosphere.
For the purpose of this review, we will adopt the definition of the mid-infrared as radiation between 5 and 30 \(\upmu\)m (or 2,000-333 cm\({}^{-1}\), in terms of wavenumber) and limit our scope to remote sensing within this wavelength range. We will also restrict ourselves to the giant planets within our Solar System, leaving the growing number of extrasolar planet infrared observations to other reviews [17; 18].
### 1.2 Atmospheric Transmission, Emission, and Mid-Infrared Sub-Bands
Gaseous absorption, primarily by telluric water vapor, renders the Earth's atmosphere largely opaque to extraterrestrial infrared radiation at many wavelengths as seen from the ground. Between about 30 \(\upmu\)m and several hundred microns, the atmosphere is nearly continuously opaque3, marking the adopted cutoff between the mid- and far-infrared (see Figure 1). Owing to this absorption, the far-infrared (or sub-millimeter) spectral region is only accessible from extremely high-altitude, airborne, and space observatories [22].
Footnote 3: The atmospheric transparency begins to increase once again approaching the millimeter region, which has been used to sense deeper into the atmospheres of the giant planets than that which can be accessed by visible and infrared observations [19; 20; 21].
Between 5 and 30 \(\upmu\)m, the atmospheric transmission is more variable and frequency dependent, with H\({}_{2}\)O, CO\({}_{2}\), O\({}_{3}\), CH\({}_{4}\), and nitrous oxides contributing to the opacity [23; 24; 25] (see Figure 2). Strong absorption by CO\({}_{2}\) between 14 and 17 \(\upmu\)m effectively blocks the atmospheric window near its center, and thus the mid-infrared is typically divided into two subregions known as the N and Q bands in photometric systems. The precise ranges of these bands are not universally standardized, but the N band is typically recognized as ranging from roughly 8 to 14 \(\upmu\)m, while the Q band extends between 17 and 25-27 \(\upmu\)m [7; 8]. These bands are often divided further into various sub-bands for filtered imaging (e.g., Q1, Q2, Q3, etc.), naturally demarcated by the numerous absorption lines [7; 26].
Additionally, corresponding to a narrow window of atmospheric transparency between 4.6 and 5.0 \(\upmu\)m, the M band straddles the rough boundary between the near- and mid-infrared. It has been grouped with the mid-infrared in at least some literature (e.g., [29]), although it is more commonly considered as a near-infrared band [7].
The gases in Earth's atmosphere do not only absorb radiation--they also emit, with an emission spectrum characteristic of the atmospheric temperature and composition. Given the Earth's effective temperature of 255 K, the atmosphere's black body thermal emission peaks near 12 \(\upmu\)m (see Figure 2). Likewise, the telescope itself inescapably emits thermal radiation corresponding to the observatory's ambient temperature (typically 280-290 K at the VLT, for example [30; 31]) leading to an additional source of thermal radiation that also peaks in the N-band [31]. This combined telluric emission easily overwhelms the faint celestial emission from the colder, distant atmospheres of the outer planets. One solution to this problem is to actively cool the instrument and to place the telescope above as much of the Earth's atmosphere as possible, ideally well into space. However, when space is out of reach, observations from the ground are still possible over much of the mid-IR owing to specialized techniques developed by observers over the past century.
The standard approach is to attempt to remove the thermal contribution of the sky and telescope by a process known as chopping and nodding [32]. Chopping entails oscillating the telescope's secondary mirror at a frequency of several hertz, cycling on and off target in order
to isolate and subtract the sky's thermal contribution from the total signal. Likewise, nodding attempts to remove the residual, non-uniform emission from the telescope by alternating the telescope's pointing every few minutes. By this approach, measurements of Uranus' 13-\(\upmu\)m emission, for example, can be made from the ground despite being roughly 100,000 times fainter than the combined sky and telescope emission [33]. However, even this approach cannot overcome the atmosphere's considerable infrared opacity beyond the atmospheric window, and significant portions of the infrared spectrum (e.g., \(\sim\)5.5-8 \(\upmu\)m, 13.5-17 \(\upmu\)m, and 25-30 \(\upmu\)m) remain inaccessible from the ground.
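The arithmetic behind chopping and nodding is a simple double difference. Below is a minimal sketch in Python (NumPy); the frame geometry, count levels, and the function name `chop_nod_reduce` are invented for illustration and are not taken from any instrument's actual pipeline:

```python
import numpy as np

def chop_nod_reduce(a1, b1, a2, b2):
    """ABBA-style double difference for mid-IR imaging.

    a1, b1 : chop pair at nod position 1 (source in beam A)
    a2, b2 : chop pair at nod position 2 (source moved to beam B)
    The fast chop difference removes the bright, rapidly varying sky;
    differencing the two nod positions removes the slowly varying
    residual emission pattern of the telescope itself.
    """
    return 0.5 * ((a1 - b1) - (a2 - b2))

# Toy frames: a faint source sitting on a background ~2,000x brighter.
rng = np.random.default_rng(0)
sky = 1.0e5                                    # sky + telescope counts
src = np.zeros((64, 64))
src[32, 32] = 50.0                             # the astronomical signal
noise = lambda: rng.normal(0.0, 10.0, (64, 64))
a1, b1 = sky + src + noise(), sky + noise()    # nod 1: source in beam A
a2, b2 = sky + noise(), sky + src + noise()    # nod 2: source in beam B
print(chop_nod_reduce(a1, b1, a2, b2)[32, 32])  # ~50: background removed
```

The key point is that each subtraction targets a different nuisance term: the fast chop tracks the rapidly fluctuating sky, while the slower nod cancels the quasi-static telescope pattern.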
Figure 2: The atmospheric transmission (top) and emission (bottom) in the mid-IR, for conditions at Cerro Paranal, as described in Figure 1. Transmission is indicated by the blue–gray interface varying between 100% (full transmission) and 0% (total attenuation) from the top of the atmosphere down to the surface [3,4]. Emission from the atmosphere is indicated by the red shaded curve, assuming annual average temperatures, 1.5 mm PWV, and an airmass of 1.16. Additional emission from the telescope is shown for two different assumed values of emissivity, spanning typical values found in the literature (\(\epsilon\)= 0.07–0.17) [27,28,31], and typical ambient temperature of 280 K. The total telluric thermal radiance is orders of magnitude greater than that received from the giant planets.
### 1.3 Why We Observe in the Mid-Infrared
While observations of scattered sunlight at visible wavelengths define our most familiar views of the giant planets, they do not reveal a complete picture of the important processes that shape these atmospheres. A complementary understanding of the atmospheric environment, within and above the clouds, can be achieved with infrared observations. Although scattered sunlight from aerosols can contribute to mid-IR radiances (particularly at shorter wavelengths), the mid-IR is dominated by intrinsic emission from the atmosphere, indicative of temperature and composition (see Figure 3).
With effective temperatures4 of less than 125 K, the Solar System's giant planets primarily emit energy in the infrared region of the electromagnetic spectrum. The idealized spectral black body emission of Jupiter and Saturn peaks in the mid-infrared, while emission from the colder atmospheres of Uranus and Neptune peaks at the longer wavelengths of the far-infrared (see Figure 1). In either case, considerable energy is radiated in the mid-infrared, and this thermal emission is relatively more accessible to observers on Earth's surface than that radiating in the far-infrared. Understanding the temperature structure and energy budget of these giant planets, therefore, requires measurements of mid-infrared radiances. This idealized picture of the mid-infrared emission is, however, complicated--and greatly enriched--by the presence of radiatively active molecules, which profoundly alter the emission spectrum.
Footnote 4: In astronomy, the effective temperature relates the observed emission from an astronomical source to that of a perfect black body of known temperature. The term and application date back to at least the 19th century, where, for example, it was applied to estimate the surface temperature of the Sun [34, 35]. For giant planets with thick atmospheres, it generally corresponds to emission from the altitudes at which the atmospheric gases become opaque to infrared radiation—typically around 100 mbar.
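To make the numbers concrete, the short sketch below locates the peak of each planet's idealized black body emission via Wien's displacement law; the effective temperatures are indicative values quoted in the literature, not definitive measurements:

```python
WIEN_B_UM_K = 2897.8  # Wien displacement constant [um K]

# Indicative effective temperatures [K], as quoted in the literature.
T_EFF = {"Jupiter": 124.4, "Saturn": 95.0, "Uranus": 59.1, "Neptune": 59.3}

for planet, t in T_EFF.items():
    print(f"{planet:8s}  T_eff = {t:5.1f} K  ->  peak ~ {WIEN_B_UM_K / t:4.1f} um")
# Jupiter (~23 um) and Saturn (~31 um) peak in/near the mid-IR window;
# Uranus and Neptune (~49 um) peak in the far-IR, as described above.
```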
The mid-infrared is home to rotational-vibrational transitions of numerous molecules found in the giant planet atmospheres, including CH\({}_{4}\), C\({}_{2}\)H\({}_{6}\), C\({}_{2}\)H\({}_{2}\), NH\({}_{3}\), PH\({}_{3}\), H\({}_{2}\)O, C\({}_{2}\)H\({}_{4}\), CH\({}_{3}\), GeH\({}_{4}\), AsH\({}_{3}\), C\({}_{6}\)H\({}_{6}\), CO\({}_{2}\), and more [36, 37, 38, 39]. In spectroscopic observations, these state transitions show up as emission or absorption features, depending on the vertical temperature and chemical structure within the atmosphere. The intensity of spectral lines is dependent on both the abundance of the emitting or absorbing molecule and its ambient temperature5. If the ambient temperature is known, the molecular abundance can be inferred, typically by comparison of the observations with simulations from theoretical radiative transfer models (e.g., [40]). Alternatively, if the molecular abundances are known, the observed spectrum can be used to constrain the temperature. The greater the spectroscopic resolution of the observations, the better the vertical resolution of the inferred temperatures or abundances. Imaging essentially provides an integrated radiance over a finite passband and, therefore, yields poorer vertical resolution (effectively vertically averaging), but it typically has the advantage of greater angular (spatial) resolution and radiometric sensitivity.
Footnote 5: This is under the assumption of local thermodynamic equilibrium, which may be assumed generally valid at pressures corresponding to the tropospheres and lower-to-mid stratospheres. The ambient pressure of the molecule’s environment also shapes the spectroscopic signature through pressure broadening.
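In its simplest non-scattering form, the forward model behind such comparisons integrates the Planck function over optical depth (the Schwarzschild equation). A minimal sketch, assuming a plane-parallel nadir view; the temperature and opacity grids are invented, and the function names are mine:

```python
import numpy as np

H, C_CM, KB = 6.62607015e-34, 2.99792458e10, 1.380649e-23  # SI h, k; c in cm/s

def planck_wavenumber(nu_cm, T):
    """Planck radiance per unit wavenumber, here in W cm^-2 sr^-1 / cm^-1."""
    return 2 * H * C_CM**2 * nu_cm**3 / np.expm1(H * C_CM * nu_cm / (KB * T))

def emergent_radiance(nu_cm, temps, taus):
    """Nadir radiance from a non-scattering atmosphere:
    I = integral of B(T(tau)) * exp(-tau) dtau, discretized over layers.
    `temps` and `taus` run from the top of the atmosphere downward."""
    t_mid = 0.5 * (temps[:-1] + temps[1:])
    tau_mid = 0.5 * (taus[:-1] + taus[1:])
    return np.sum(planck_wavenumber(nu_cm, t_mid) * np.exp(-tau_mid) * np.diff(taus))

# Invented profile: temperature rising from 110 K to 160 K with depth.
taus = np.logspace(-3, 1, 60)                 # optical depth, top to bottom
temps = np.linspace(110.0, 160.0, 60)         # an (invented) T(tau) profile
print(emergent_radiance(600.0, temps, taus))  # radiance at 600 cm^-1 (~16.7 um)
```

The \(e^{-\tau}\) weighting is why the emitted radiance is dominated by layers near \(\tau\approx 1\), and why sampling gases of differing opacity at higher spectral resolution translates into better vertical resolution.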
The detection and measurement of specific molecules can provide unique insight into processes active in giant planets' atmospheres. Some species are expected as a result of a solar composition atmosphere in thermodynamic chemical equilibrium (CH\({}_{4}\), for example [41, 42]), but are nonetheless useful as indicators of temperatures, vertical structure, and circulation [43, 44, 45, 46]. Others are unexpected given the ambient temperatures and bulk chemistry, and they require specific mechanisms to explain their abundances (for example, CO\({}_{2}\) [47] and H\({}_{2}\)O [48] in Uranus' atmosphere, implying external, meteoric sources).
N-band (8-14 \(\upmu\)m) spectroscopy and imaging have been used in numerous investigations to infer temperatures, chemistry, and aerosol abundances in the troposphere and stratosphere of Jupiter (e.g., [49, 50, 51, 52, 53, 54]) and Saturn (e.g., [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]). For Uranus and Neptune, the N band has
been used to measure stratospheric emission associated with hydrocarbons (e.g., [33; 66; 67; 68; 69; 70; 71; 72; 73; 74]), but interpretations have been limited by larger uncertainties in both temperatures and chemical abundances.
The dominant components of giant planets' atmospheres are hydrogen and helium, the collision of which produces continuum absorption dependent on the pressure and temperature. Since the abundances of hydrogen and helium are homogeneous and relatively well constrained by the overall atmospheric density, this collision-induced absorption (CIA) provides a powerful, unambiguous indicator of the atmospheric thermal structure. Q-band observations (17-25 \(\upmu\)m) are dominated by this absorption, and have thus successfully been used to infer atmospheric temperatures in the upper tropospheres of all the Solar System giant planets (e.g., [33; 50; 61; 65; 72; 74; 75; 76; 77; 78; 79]). Hydrogen _emission_ can also be found in the Q band, and this can additionally serve as a remote sensing thermometer of the stratosphere, as discussed in Section 3.1.
Figure 3: Contributions to observed mid-infrared emission from the giant planet atmospheres. Vertical temperature profiles for Jupiter, Saturn, Uranus, and Neptune are shown for pressures ranging from 10 bar to 1 microbar. While scattered sunlight from aerosols weakly contributes at shorter wavelengths, the mid-IR is dominated by intrinsic thermal emission from the atmosphere. The mid-IR emission originates from heights above the cloud layers, within the upper troposphere and lower stratosphere. Stratospheric emission is primarily associated with various stratospheric hydrocarbons that result from methane photochemistry.
Observations at roughly 5 \(\upmu\)m have notably been used to study Jupiter's deep atmosphere [80; 81; 82; 83; 84; 85; 86; 87; 88], producing striking, high-contrast images of Jupiter's clouds silhouetted against the underlying thermal emission as shown in Figure 4. (See [89] for a review of 5-\(\upmu\)m imaging of Jupiter, and see [90] for a review of Jupiter's deep clouds). Saturn shows less contrast at 5 \(\upmu\)m owing to thicker, scattering hazes, but such observations have helped constrain vertical cloud structure and deeper chemistry [91; 92; 93; 94; 95; 96]. As a consequence of their colder temperatures and weakly scattered sunlight, the Ice Giants have smaller radiances at 5 \(\upmu\)m, and as a result, have largely been unexplored at this wavelength [69; 76; 97]. JWST promises to provide the first detailed observations of the Ice Giants in this spectral region in the years ahead. It will be one of many observational breakthroughs that JWST promises in the mid-infrared, as it observes the giant planets with unprecedented precision and sensitivity.
## 2 A Historical Overview: Observing the Giant Planets in the Mid-Infrared
Infrared characterization of the giant planets developed in parallel with the evolution of broader infrared astronomy in general. It is marked by advances in theory, technology, and techniques that sparked new discoveries, inevitably prompting new theories, technologies, and techniques. Unlike galactic and most stellar astronomy, however, planetary astronomy has the advantage of relative proximity to its targets, including very close encounters via robotic spacecraft. Robotic missions to the giant planets have afforded leaps in knowledge in recent decades, building upon and complementing a long history of ground-based observing. Beginning with basic measurements of effective planetary temperatures, mid-infrared investigations grew to provide critical insight into the chemistry, structure, and dynamics of the giant planets.
Repeatedly over this observational history, Jupiter, by virtue of its superior size, proximity, and brightness, was naturally investigated first and most thoroughly. Successful investigations were then typically extended to Saturn shortly thereafter. Finally, Uranus and Neptune, owing
Figure 4: Images of Jupiter at roughly 5 \(\upmu\)m. Brighter regions indicate strong thermal radiation emerging from the atmosphere below the clouds, while darker patches reveal opaque clouds, silhouetted by underlying thermal emission. The image on the left is one of the earliest examples of 5-\(\upmu\)m imaging, made with the Hale 200-in (5-m) telescope at Palomar Observatory in 1973 [83]. The 4.7-\(\upmu\)m image on the right is from the Near-InfraRed Imager (NIRI) instrument [98] at Gemini North in Hawai’i, composed of multiple images captured in 2017 [99]. The images reveal the dramatic improvement in the imaging quality, as well as changes in cloud structure over the past half-century.
to their great distances and cold temperatures, were investigated only when feasible, consistently lagging many years behind the Gas Giants in thermal and chemical characterization.
### 2.1 Beyond the Visible: Measuring Heat from the Giant Planets
In the closing year of the 18th century, William Herschel demonstrated that radiant heating from the Sun extended beyond the red light of the visible spectrum [100], arguably marking the birth of infrared astronomy. Over the following century, quantitative investigations of this "invisible thermometrical spectrum" and the infrared properties of materials developed alongside innovations in optics and instrumentation (e.g., see early reviews by [101]). As theory and tools developed, astronomers pushed their observations further into the uncharted infrared spectrum while aiming their instruments at increasingly fainter celestial targets.
Beginning in the late 1850s, the first successful measurements of the "non-luminous" radiation from the Moon's surface were made using telescopes equipped with sensitive early thermopiles, which converted observed radiative energy into electrical energy that was then read as needle deflection on a galvanometer. Increasingly sensitive radiometers were subsequently developed, including the Langley bolometer6 in 1878 [104], a technology that has found continued use in modern submillimeter instruments [105] (e.g., Herschel-PACS [106]). Lacking spectrometers with dispersive prisms and gratings tuned to the infrared [107; 108; 101], early observers simply used glass filters, transparent to visible light but opaque to thermal radiation, to remove and isolate the thermal component from the total observed radiation. This filtering approach provided ratios of relative band radiances, leading to the first (somewhat disputed) estimates for the extreme diurnal range of lunar surface temperatures [109; 110; 111; 112; 113].
Footnote 6: Giving rise to an enduring limerick, dubiously attributed to Langley’s student: “Langley devised a Bolometer. / It’s really a sort of Thermometer. / It’ll detect the heat / Of a Polar Bear’s feet / At a distance of Half-a Kilometer.” [102; 103]
By the early 20th century, improved radiometers and observing techniques were combined with larger reflecting telescopes to provide the first quantitative estimates of thermal emission from the planets. In pioneering work by Coblentz and Lampland [114; 115; 116; 117; 118] and Pettit and Nicholson [119; 120], observations of Venus, Mars, Jupiter, Saturn, and Uranus were made between 1914 and 1924 using sensitive new radiometers and a series of filters7 in order to separate observed radiances into five discrete spectral bands ranging from 0.3 \(\upmu\)m to 15 \(\upmu\)m8, extending progressively further into the mid-infrared (see Figure 5). The combination of filtered observations thus provided the first rough spectra of the giant planets. Analysis of these spectral data revealed Jupiter and Saturn to have temperatures (at the effective emission layers) of 120-140 K and 125-130 K, respectively, while Uranus was colder yet, with an upper limit of 100 K [121; 122]--not far from modern estimates9 of the planetary effective temperatures. These measured temperatures indicated the giant planets were cold--not much warmer than expected for equilibrium with the solar heating--and therefore contributed to evidence that the low density of the outer planets could only be explained by a bulk composition rich in hydrogen [124].
Over the following decades, improvements in technology and technique continued. Advances in mid-infrared bandpass filters and gratings (i.e., those that transmit only in the mid-infrared) enabled improved calibration of stars and planets by allowing for the direct comparison with known blackbody cavities at the telescope [125]. Errors due to the drift in detector response and changing sky radiance were minimized by shifting the sensor on and off target at high frequency [125; 126; 127]--an approach that evolved into the chopping and nodding technique still used today to remove the thermal signal of the sky and telescope [32; 128]. By the 1960s, photometric systems utilizing mercury-doped germanium detectors cooled by liquid hydrogen allowed for increased sensitivity in the mid-infrared spectral region [128].
Utilizing the new detectors and techniques, observations in the early 1960s provided the first truly spatially resolved photometry of the thermal radiation emitted by a giant planet. Beginning in 1962, radiances across the disk of Jupiter were measured at 8-14 \(\upmu\)m using the Palomar Observatory Hale 200-inch telescope--the world's largest telescope at the time. These spatially resolved data revealed thermal limb-darkening indicative of temperatures increasing with depth; temperature contrast (\(\sim\)0.5 K) between the warmer darker belts and the cooler brighter zones [129]; and that the Great Red Spot (GRS) was 1.5-2.0 K cooler than the surrounding disk (see Figure 6).
Figure 5: Photographs of the Coblentz-Lampland radiometer, used to make groundbreaking measurements of thermal radiation from stars and the giant planets [115]. Top left: Evacuated glass tube containing the thermocouple—a wire of bismuth that converts temperature differences to electric voltage via the thermoelectric effect. The thermocouple is kept in an evacuated tube, with the vacuum maintained by the presence of reactive calcium metal (serving as a getter). Bottom left: The thermocouple is placed into the radiometer, which was fastened to the photographic plate holder of the telescope. The thermocouple was placed in the optical path between the target and eyepiece, while filters of different passbands were selectively rotated in and out of view. By these means, coarse spectra could be inferred. Right: The 42-inch Lampland telescope of the Lowell Observatory, on which the radiometer was mounted for much of Coblentz’s planetary work, inside its dome, ca. 1909 (_Image credit: Slipher, E.C., ”The 42-inch Lampland Telescope inside of its dome,” Lowell Observatory Archives, [https://collectionslowellobservatory.omek.net/items/show/1047_](https://collectionslowellobservatory.omek.net/items/show/1047_), accessed on 22 December 2022).
Soon after, the first observations of Saturn and Uranus at 10 \(\upmu\)m [130] and 17-25 \(\upmu\)m [131, 132] were made and flux calibrated by comparison to recently defined photometric standard stars. These observations yielded a 20-\(\upmu\)m brightness temperature of 95 \(\pm\) 3 K for Saturn, roughly consistent with modern values, and 55 \(\pm\) 3 K for Uranus, which established "the current lower limit to the brightness temperature of a celestial object which can be measured in the infrared" [132]. The opacity of Earth's atmosphere limited the infrared spectrum that could be obtained from the ground, leading observers to seek greater heights.
Observations from airborne observatories began in the 1960s, with rockets [133, 134], balloons [135, 136], and jets [137, 138] rising above Earth's moist lower atmosphere. A 12-inch telescope flown on a modified Lear jet (NASA 701) in 1968 captured thermal radiances from Jupiter and Saturn using a series of broad filters with bandpasses sampling the spectrum from 1.5 \(\upmu\)m to 350 \(\upmu\)m. Analysis of these brightness temperatures showed that both Jupiter and Saturn radiate roughly twice as much energy as they receive [137, 138]10.
Footnote 10: The energy balances—ratios of emitted to received radiation—for Jupiter and Saturn have been revised down to 1.7 and 1.8, respectively, following Voyager measurements [139]; however, following Cassini, the balance for Jupiter was raised to 2.1 [140], while Saturn was shown to be seasonally variable [141].
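The energy balance itself is a one-line calculation once the effective temperature, Bond albedo, and heliocentric distance are known. A back-of-the-envelope sketch, using rounded, Cassini-era literature values for Jupiter (these inputs are indicative, not definitive):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
S_1AU = 1361.0           # solar constant at 1 AU [W m^-2]

def energy_balance(t_eff_K, bond_albedo, dist_au):
    """Ratio of emitted thermal power to absorbed sunlight, per unit area."""
    absorbed = S_1AU / dist_au**2 * (1.0 - bond_albedo) / 4.0
    emitted = SIGMA * t_eff_K**4
    return emitted / absorbed

# Indicative values for Jupiter: T_eff ~ 125 K, Bond albedo ~ 0.50, 5.2 AU.
print(round(energy_balance(125.0, 0.50, 5.2), 2))
# -> ~2.2 with these rounded inputs, of order the ~2.1 quoted above.
```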
Neptune's thermal emission was finally measured years later, when, in 1972, observations were made from the newly constructed high-altitude observatory on Maunakea. At over 4,200 m above sea level, the observatory's altitude sits above a majority of the Earth's attenuating water vapor, permitting observations further into the mid-infrared. Both Uranus and Neptune were observed between 17 and 28 \(\upmu\)m using a liquid-helium-cooled bolometer mounted on a 2.24-m telescope [142, 143]. Surprisingly, it was discovered that Neptune had a brightness temperature of 57.2 \(\pm\) 1.6 K at 24 \(\upmu\)m--warmer than that of Uranus (54.7 \(\pm\) 1.6 K) despite Neptune's greater distance from the Sun [143, 144, 145]. Combined with observations in the visible, far-infrared, and millimeter wavelengths, this led to the conclusion that Neptune radiates
Figure 6: Among the earliest maps of Jupiter’s mid-infrared radiances, reproduced from Murray et al. [129]. Left: Contours show brightness temperatures derived from 8 to 14 \(\upmu\)m observations, made over five nights in mid-December 1963 using the Hale 200-inch (5.1 m) Telescope of the Palomar Observatory. The contours indicate modest limb-darkening and early hints of possible zonal thermal structure, with the visible belt-zone structure superimposed (oriented south pole upwards). Right: Perpendicular lines represent scans passing through the Great Red Spot (GRS). Corresponding brightness temperature curves (as numbered just to the right and above) show a depression in temperature at the location of the GRS.
excess heat--\(2.4^{+1.3}_{-0.9}\) times as much power as it absorbs [146]--similar to Jupiter and Saturn. Uranus was evidently the outlier, as the only giant planet apparently lacking an internal heat source.
Observations continued to improve over the following decades, refining these initial temperature measurements with effective temperatures constrained at longer and longer wavelengths, including far-infrared [147; 148], sub-millimeter [149], millimeter [150; 151; 152], and microwave [153; 154] wavelengths. Spatially resolving the temperature structure on the Ice Giants had to wait for spacecraft encounters and larger telescopes in the following decades. Meanwhile, the spectral resolution was quickly improving from the ground, allowing for the detection of discrete spectral signatures [12; 155].
### 2.2 A New Window into the Giant Planets' Atmospheric Composition
In the 1970s, the focus of mid-infrared planetary studies arguably shifted from temperatures to chemistry. Until then, the atmospheric composition had been investigated primarily in the visible and near-infrared for the better part of a century [41; 156; 157; 158; 159; 160; 161; 162], but such observations had only succeeded in spectroscopically identifying molecular hydrogen (H\({}_{2}\)) and methane (CH\({}_{4}\)) in the giant planets, plus ammonia (NH\({}_{3}\)) in Jupiter and Saturn. Based on assumptions of solar composition in chemical and adiabatic equilibrium, the newly constrained atmospheric temperatures of the planets were used to predict theoretical abundances of several hundred volatile compounds throughout Jupiter's atmosphere [42; 163], while photochemical models were predicting disequilibrium of stratospheric hydrocarbons such as ethane (C\({}_{2}\)H\({}_{6}\)) and acetylene (C\({}_{2}\)H\({}_{2}\)) due to photolytic destruction of CH\({}_{4}\) [164; 165]. The mid-infrared provided a promising new window to potentially detect these molecular signatures via fundamental rovibrational and pure rotational transitions.
In 1973, excess radiance at 11-14 \(\upmu\)m--as seen in moderate resolution (R \(\sim\) 50-66) spectra of Jupiter [80] from the ground at \(\sim\)2,500-meter (8,200-ft) altitude--was correctly identified as the first evidence of stratospheric C\({}_{2}\)H\({}_{6}\) and C\({}_{2}\)H\({}_{2}\) on Jupiter, enhanced by an atmospheric temperature inversion [166; 167], confirming photochemical model predictions [164; 165]. This 12-\(\upmu\)m C\({}_{2}\)H\({}_{6}\) enhancement was also seen in the spectrum of Saturn the following year [168]. In light of these discoveries, previous observations of Uranus' exceptionally weak 12-13-\(\upmu\)m emission were insightfully reinterpreted as evidence that Uranus was not necessarily colder, but potentially contained less stratospheric ethane than Neptune [144]. We now know colder temperatures and lower ethane abundances _both_ contribute to Uranus' relatively weak 12-13-\(\upmu\)m emission compared to Neptune [39].
Following from theory and techniques applied in the analysis of terrestrial satellite data [169; 170], spectral inversion techniques were at this time being developed for the giant planets in order to infer vertical temperature profiles and chemical abundances [171; 172]. In particular, Taylor [171] showed that measurements of the \(\nu_{4}\) branch of CH\({}_{4}\) (at \(\sim\)7.74 \(\upmu\)m) could be inverted to provide temperature profiles for relatively warm Jupiter and possibly Saturn. For colder Uranus and Neptune, collision-induced rotational S(0) absorption by hydrogen at 25-40 \(\upmu\)m could be used to infer temperature profiles. Indeed, measurements of S(0) and S(1) collision-induced H\({}_{2}\) absorption were successfully used to retrieve upper tropospheric temperature structure in the giant planets from Voyager-IRIS spectra decades later [173].
An instrumental leap forward came with the advent of high-resolution Fourier Transform Spectrometers (FTS) in the late 1960s [174; 175], which allowed for greater spectral resolution (R\(\approx\)500) at longer wavelengths. With the promise of further discoveries already evident in modest-resolution high-altitude observations [80], high-resolution mid- and far-infrared spectroscopy rapidly emerged, opening a window to new molecules and greater constraints on atmospheric composition and vertical temperature structure.
Spectroscopy in the decade that followed yielded the first detections of CH\({}_{3}\)D [176], \({}^{15}\)NH\({}_{3}\) [177], H\({}_{2}\)O [178], PH\({}_{3}\) [179; 180], GeH\({}_{4}\) [84], and CO [181; 182] in the atmosphere of Jupiter. Given that PH\({}_{3}\), GeH\({}_{4}\), and CO are not thermodynamically stable at the low temperatures and pressures at which they were detected, their presence suggested strong vertical mixing from below producing tropospheric disequilibrium chemistry [183; 184]. Similarly, CH\({}_{3}\)D [185] and PH\({}_{3}\) [186; 187] were detected in Saturn's atmosphere, along with conclusive evidence of stratospheric C\({}_{2}\)H\({}_{6}\) [188] and tentative detection of C\({}_{2}\)H\({}_{4}\) [189]. NH\({}_{3}\) was also found [190], but at a factor of at least 20 less than on Jupiter, consistent with Saturn's colder temperature and deeper condensation levels. Likewise, disequilibrium GeH\({}_{4}\) on Saturn was not detected until a decade later [191]. With even greater distances and colder temperatures, the chemistry (and temperature structure) of Uranus and Neptune remained almost unconstrained in the mid-infrared until their encounter with Voyager 2.
### 2.3 Remote Sensing Up Close: Missions to the Giant Planets
Beginning in the 1970s, robotic spacecraft missions to the giant planets permitted infrared remote sensing of the giant planets at relatively close proximity without attenuation from the Earth's atmosphere. Infrared radiometers on Pioneer 10 and Pioneer 11 flew by Jupiter in 1973 and 1974, respectively [192], equipped with broadband filters (11- and 26-\(\upmu\)m-wide) centered at roughly 20 \(\upmu\)m and 40 \(\upmu\)m, respectively. Though broadly filtered in wavelength, the spatially resolved measurements yielded new, stronger constraints on the energy balance [193; 194] and thermal structure of Jupiter [49]. Similarly, Pioneer 11 observed Saturn in 1979, providing similar refinements of Saturn's thermal structure and energy balance [43; 195], before continuing out towards interstellar space. The first to encounter Jupiter and Saturn, the Pioneer missions were envisioned as precursors to a more ambitious Mariner program mission to the giant planets, later renamed the Voyager Program.
Launched in 1977, Voyager 1 and Voyager 2 carried the Infrared Interferometer Spectrometer and Radiometer (IRIS) experiment--arguably the most fruitful infrared instrumentation in the history of solar system exploration. A combination of three instruments, IRIS included a Michelson interferometer that operated in the infrared from 2.5 \(\upmu\)m to 55 \(\upmu\)m (180 to 2400 cm\({}^{-1}\)) with a spectral resolution of R\(\sim\)42-558 (4.3 cm\({}^{-1}\)), in contrast to the Pioneer radiometer's filters.
Voyager 1 reached Jupiter in March 1979, followed four months later by Voyager 2. Initial findings from these observations included refined estimates of the effective temperature and energy balance [196]; improved measurements of meridional thermal structure and cold anomaly of the Great Red Spot (GRS) [197]; confirmation of excess thermal emission near Jupiter's north magnetic pole [198]; new constraints on the ammonia cloud density and particle sizes [199]; new constraints on the chemical abundances [197], including that of helium [200]; and the first detection of several new hydrocarbons, including C\({}_{2}\)H\({}_{4}\), C\({}_{3}\)H\({}_{4}\), and C\({}_{6}\)H\({}_{6}\) [198]. Similarly, IRIS placed new constraints on the temperature structure and chemistry of Saturn during the Voyager 1 and Voyager 2 encounters in 1980 and 1981, respectively [201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211; 212; 213; 214; 215; 216; 217; 218; 219; 220; 221; 222; 223; 224; 225]. Following the Saturn encounters, Voyager 1 began its extended mission on course to depart the solar system, while Voyager 2 continued onward towards the Ice Giants.
The subsequent Voyager 2 flybys of Uranus in 1986 and Neptune in 1989 marked watershed moments in the exploration of the outer planets. With unprecedented spatial resolution and phase-angle coverage, Voyager substantially improved constraints on the Bond albedos, effective temperatures, thermal structure, and energy balances [206; 207; 208; 139] of both planets, confirming that Uranus was indeed anomalous in its lack of interior heat. In particular, the spatial resolution allowed the upper-tropospheric temperature structure of both planets to be mapped for the first time, revealing relative cold anomalies (2-4 K) at mid-latitudes compared to the warmer low and high latitudes [209]. This latitudinal structure was interpreted as evidence of mid-latitude upwelling and resulting adiabatic cooling as part of a meridional circulation cell, compensated by downwelling at the equator and poles [209; 210; 211]. New constraints were also placed on the helium abundances of both planets [207; 208], although stratospheric hydrocarbons remained poorly constrained due to insufficient instrument sensitivity at wavelengths less than 25 \(\upmu\)m. Nonetheless, the Voyager 2 flybys of the Ice Giants remain the only close encounters with these distant worlds and the definitive account of their temperature structure.
Notably, the IRIS observations also allowed for the first measurement of ortho-para hydrogen disequilibrium in the outer planets. Pressure-induced H\({}_{2}\) absorption at \(\sim\)17 \(\upmu\)m and \(\sim\)27 \(\upmu\)m results from transitions in ortho-H\({}_{2}\) and para-H\({}_{2}\) energy levels, respectively, and the ratio of these absorption features is theoretically dependent on temperature [212; 213]. Conrath and Gierasch found ortho-para fractions were not in equilibrium with the retrieved temperatures on Jupiter, particularly at the equator, implying upwelling from warmer depths [214]. Combined with implied zonal wind shear inferred from thermal wind relations, these observations provided powerful new insight into the atmospheric circulation on the giant planets [72; 77; 173; 201; 215].
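The reason the ortho-to-para ratio works as a tracer is that its equilibrium value is a known function of temperature, set by the H\({}_{2}\) rotational ladder. A minimal sketch, assuming a rigid-rotor level structure and a standard literature value for the rotational constant:

```python
import numpy as np

B0 = 59.322    # H2 rotational constant [cm^-1], a standard literature value
HC_K = 1.4388  # second radiation constant hc/k [cm K]

def equilibrium_para_fraction(T, n_levels=20):
    """Equilibrium para-H2 fraction at temperature T [K].

    Rotational levels E_J = B0*J*(J+1); even J is para (nuclear-spin
    weight 1), odd J is ortho (weight 3). At high T the fraction tends
    to 1/4 ('normal' hydrogen); it rises toward 1 as T falls
    (e.g., ~0.5 near 77 K).
    """
    J = np.arange(n_levels)
    spin = np.where(J % 2 == 0, 1, 3)
    pop = spin * (2 * J + 1) * np.exp(-B0 * J * (J + 1) * HC_K / T)
    return pop[::2].sum() / pop.sum()   # even-J share of total population

for T in (60, 100, 160, 300):
    print(T, round(float(equilibrium_para_fraction(T)), 3))
# -> roughly 0.66, 0.39, 0.28, 0.25; measured para fractions that differ
#    from these equilibrium values betray recent vertical transport.
```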
Following the success of Voyager, the Galileo orbiter examined the Jupiter system over the course of 35 orbits between 1995 and 2003 [216; 217; 218; 219]. On-board instruments included the Near-Infrared Mapping Spectrometer (NIMS) [220], which observed from 0.7 to 5.2 \(\upmu\)m, and the Photopolarimeter-Radiometer (PPR) experiment [221], which observed in five mid-infrared spectral bands between 15 and 100 \(\upmu\)m. The NIMS spectra, combined with contemporaneous visible imaging, found evidence of deep water clouds [222] and showed that most, _but notably not all_, bright clouds blocking thermal emission extended vertically to the upper troposphere [223; 224; 225]. The PPR was used to derive the 200-700-mbar temperature field of the Great Red Spot (GRS) using four discrete mid-infrared filters centered on 15, 22, 25, and 37 \(\upmu\)m. These filtered data showed that the GRS was roughly 3 K colder than regions to its east and west, consistent with Voyager and previous investigations [226].
While Galileo was still in orbit around Jupiter, the next great flagship mission to the outer planets was already en route to Saturn. The Cassini-Huygens spacecraft launched in 1997, beginning its two-decade-long journey of exploration [227; 228]. It observed Jupiter over a period of roughly six months, reaching its closest approach in December 2000 at just under 10 million kilometers [229], before entering orbit around Saturn in July 2004.
The Cassini spacecraft was equipped with two state-of-the-art instruments sensitive to the infrared: the Cassini Visual and Infrared Mapping Spectrometer (VIMS) [230] and the Composite Infrared Spectrometer (CIRS) [231; 232]. VIMS was an improved successor to Galileo-NIMS [220] (even inheriting some of its mechanical and optical parts from the original NIMS engineering model [230]). As an imaging spectrometer, it produced spectra for each pixel (or _spaxel_) in an image. It was composed of a visible and infrared channel, allowing for measurements from the ultraviolet to the edge of the mid-infrared (0.3-5.1 \(\upmu\)m). By simultaneously sensing both near-infrared scattering and thermal emission, VIMS allowed for new constraints on Saturn's cloud opacity and composition [94; 233; 234; 235] (see Figure 7). For observations at longer wavelengths, Cassini's CIRS instrument was used [231; 232]. Unlike Galileo's infrared instrument (PPR), Cassini's CIRS was a proper spectrometer. Following the FTS principles used since the 1960s, CIRS was composed of a mid-infrared Michelson interferometer and a far-infrared polarizing interferometer that could together provide spectra from 7.1 to 1000 \(\upmu\)m at a spectral resolution that could be set between 0.5 and 15.5 cm\({}^{-1}\).
With its unprecedented spectral coverage, CIRS observations of Jupiter provided new constraints on temperature structure [236], energy balance [237], cloud structure and composition [87; 238; 239], and chemical abundances, including that of NH\({}_{3}\) [240], PH\({}_{3}\) [241; 242], C\({}_{2}\)H\({}_{2}\) [243] and C\({}_{2}\)H\({}_{6}\) [243], the D/H ratio [244], halides [245], and trace hydrocarbons [246; 247]. Then, from its unrivaled vantage point in orbit around Saturn for more than 13 years, CIRS
revolutionized our understanding of Saturn's seasonally variant chemistry and thermal structure [78; 231; 248; 249; 250; 251; 252; 253]. It placed new and improved constraints on numerous molecular and isotopic abundances [45; 60; 242; 254; 255; 256; 257; 258; 259; 260; 261; 262]. The CIRS observations of Saturn remain the definitive measurements of the planet at mid-infrared wavelengths, and largely define our current knowledge of Saturn's temperature and chemistry (see [263] for a comprehensive review).
Figure 7: Images of Saturn from Cassini-VIMS. Top: False-color mosaic of Saturn from February 2006 showing thermal infrared radiation at 5.02-\(\upmu\)m (in red) and scattered sunlight at 1.07 \(\upmu\)m and 2.71 \(\upmu\)m (in blue and green, respectively). Discrete clouds appear silhouetted against the glow of Saturn’s thermal emission at 5-\(\upmu\)m, while the rings cast a shadow upon Saturn’s northern hemisphere. Bottom: The last images from VIMS, captured on 14 September, 2017, as the spacecraft made its final descent towards Saturn. Thermal emission at 5 \(\upmu\)m appears brighter where the cloud opacity is less. The dotted ellipse marks the approximate location where the Cassini spacecraft soon thereafter entered into the atmosphere, concluding the mission. _Image credits: NASA/JPL-Caltech/University of Arizona_
### 2.4 From High Above the Atmosphere: Observations from Space Telescopes
While robotic spacecraft missions were venturing far into the outer Solar System, new discoveries were being made relatively closer to home with a series of space-borne telescopes. Though modest in size compared to ever-larger ground-based telescopes, these versatile observatories were unencumbered by telluric absorption, possessing a sensitivity only possible in the coldness of space.
The Infrared Space Observatory (ISO) was the first such space observatory to make great contributions in mid-infrared (and far-infrared) observations of the giant planets. Operated from 1995 to 1998, it was equipped with the Short Wave Spectrometer (SWS)--a scanning spectrometer sensing from 2.35 to 45.4 \(\upmu\)m with grating resolutions between 930 and 2450 (\(\lambda/\Delta\lambda\)) and a higher resolution Fabry-Perot mode (20,600-31,000) [264; 265]. The combination of high spectral resolution and coverage led to the new detection of several molecules on all four giant planets [266; 267], although Uranus and Neptune proved too faint to be observed below 7 \(\upmu\)m. Discoveries included the detection of water vapor in Saturn's troposphere at 5 \(\upmu\)m [266]; detection of new hydrocarbons (e.g., CH\({}_{3}\)C\({}_{2}\)H and C\({}_{4}\)H\({}_{2}\)) in Saturn's stratosphere [268]; detection of stratospheric CO\({}_{2}\)\(v_{2}\) bands on Saturn [268], Jupiter [269] and Neptune [48]; and the first detection of methyl (CH\({}_{3}\))--a molecule diagnostic of the height to which methane is mixed--in the stratospheres of Saturn [267] and Neptune [270]. Numerous discoveries were also made at longer wavelengths with the Long Wavelength Spectrometer (LWS). See Encrenaz et al. [271] for an excellent summary.
ISO was followed by the Spitzer Space Telescope, launched in 2003 [272]. Sensing from 5.2 to 38 \(\upmu\)m with low (R\(\sim\)60-130) and moderate (R\(\sim\)600) resolution spectroscopy, the Spitzer-Infrared Spectrograph (IRS) [273] observed Neptune on four occasions between 2004 and 2006 [274; 275], and Uranus in 2004 [47] and 2007 [71; 76], near the time of the planet's equinox. With a primary mirror of 0.85 m, Spitzer, like ISO, was not able to spatially resolve the Ice Giants' disks, but the observations nonetheless led to strong new constraints on the planets' disk-averaged temperature structure [76; 275; 276] and chemistry [71]. The observations yielded the first detections of C\({}_{2}\)H\({}_{6}\) and possibly CH\({}_{3}\) on Uranus and the first detections of methylacetylene (C\({}_{3}\)H\({}_{4}\)) and diacetylene (C\({}_{4}\)H\({}_{2}\)) in both Ice Giants [47; 274].
Finally, it is worth noting that the new JWST promises to far surpass these previous mid-infrared space observatories and provide the definitive mid-infrared spectra of the giant planets. The Mid-Infrared Instrument (MIRI) [16] is capable of providing spatially resolved (integral field unit) spectra from 5 to 28 \(\upmu\)m with resolving powers from 1300 to 3700 (\(\lambda/\Delta\lambda\)). The telescope successfully launched on 25 December 2021 and is expected to be operational for 20 years. All four giant planets will be observed in the first two years following launch. With superior sensitivity and spatial resolution, the results are anticipated to greatly advance our understanding of Uranus and Neptune, in particular.
### 2.5 Matured Mid-Infrared Observing from the Ground
Back on the ground, improvements in detectors, telescopes, and observing techniques advanced ground-based observations to a quality rivaling spacecraft observations (e.g., see Figures 4 and 8). The early contour maps of Jovian brightness temperatures from Palomar [129] gave way to raster-scanned maps from the NASA Infrared Telescope Facility (IRTF) in the 1980s and 1990s [50; 277], followed by the first modern 2-D array detectors in the 1990s11. By the mid-2000s, numerous mid-infrared instruments were in operation on 8-meter class telescopes, including: the Long Wavelength Spectrometer (LWS) [280] at Keck; Michelle [281] at Gemini North; the VLT Imager and Spectrometer for Mid-Infrared (VISIR) [282] at the Very Large Telescope (VLT); the Thermal-Region Camera Spectrograph (T-ReCS) at Gemini South [283]; and the Cooled Mid-Infrared Camera and Spectrometer (COMICS) [284] at Subaru. Typically, planetary observations with these instruments have applied narrow-band filters covering spectral ranges between 8 and 13 \(\upmu\)m (the N-band) and 17 to 25 \(\upmu\)m (the Q-band), from which chemistry and/or temperatures were retrieved [33, 51, 53, 65, 74, 77]. Additionally, such observations were frequently used to complement contemporaneous spacecraft observations, providing greater spatial or temporal coverage than possible from orbit [46, 62, 78, 285].
In terms of spectroscopy, a notable workhorse of ground-based remote sensing at mid-infrared wavelengths over the past two decades is the Texas Echelon Cross Echelle Spectrograph (TEXES) [286]. Capable of spectral resolving powers of 15,000 to 100,000 (\(\lambda/\Delta\lambda\)) in windows between 5 and 25 \(\upmu\)m, TEXES has been used to great effect on IRTF and Gemini North to map chemistry and temperatures in Jupiter [287, 288, 289, 290, 291, 292], Saturn [293, 294, 295, 296], and to a lesser extent Uranus [297, 298] and Neptune [70, 74], with the exceptionally high spectral resolution needed to resolve fine lines. The resulting quality of retrieved maps of temperature, composition, and aerosols has been noted to even surpass previous spacecraft results for Jupiter [52].
Of the aforementioned mid-IR instruments, only VLT-VISIR and TEXES remain in operation as of 2023. Given the significant and unique information provided by mid-infrared ground-based observations, it can only be hoped that these continue to serve the community at least until the next generation of instruments is developed.
Looking ahead, promising future mid-infrared instruments include a mid-infrared imager and spectrometer called MIMIZUKU (Mid-Infrared Multi-field Imager for gaZing at the UnKnown Universe) [299], developed for the planned 6.5-m telescope of the University of Tokyo Atacama Observatory (TAO), currently under construction in the Chilean Atacama at a remarkable 5640-m altitude [300]. MIMIZUKU will cover a wavelength range of 2 to 38 \(\upmu\)m with a spectral resolution of \(\lambda/\Delta\lambda\sim\)60-230 and diffraction-limited (wavelength-dependent) angular resolution of 0.077-1.47 arcseconds. This spatial resolution is exceptional by current far-infrared standards, although it will not surpass the current leading resolution of the larger VLT across much of the mid-IR (e.g., TAO-MIMIZUKU's 0.7\({}^{\prime\prime}\) diffraction-limited resolution versus VLT-VISIR's 0.55\({}^{\prime\prime}\) resolution at 18 \(\upmu\)m). The larger disks of Jupiter and Saturn will, therefore, be particularly well suited for MIMIZUKU, but all the Solar System's giant planets will benefit from its exceptionally broad spectral range, innovative technical design [301], and long-term monitoring capabilities in the years ahead. MIMIZUKU has already seen its first light, having been successfully tested on the Subaru Telescope in 2018 [302].
Looking even further ahead, the first-generation instruments of the European Southern Observatory's planned 39.3-m Extremely Large Telescope (ELT) will include the Mid-infrared ELT
Figure 8: Improvements in mid-IR imaging, as illustrated by an early image of Saturn acquired with the IRTF-BOLO1 instrument in 1984 (left) compared to a recent image from the VLT-VISIR instrument in 2019 [65].
Imager and Spectrograph (METIS) [303]. METIS promises to provide diffraction-limited imaging and medium resolution slit-spectroscopy from 3 to 13 \(\upmu\)m (covering the M and N bands), as well as high resolution (R\(\sim\)100,000) integral field spectroscopy (IFU) from 2.9 to 5.3 \(\upmu\)m [304]. N-band imaging will be capable of an amazing 68-mas (milli-arcsecond) angular resolution over a 13.5\({}^{\prime\prime}\)\(\times\) 13.5\({}^{\prime\prime}\) field of view (FoV). The high spatial resolution and narrow FoV will make the instrument ideally suited for observing the small disks of Uranus and Neptune, while mosaicking or regional targeting will be required for Jupiter and Saturn. Likewise, the even narrower FoV of the M-band IFU (0.58\({}^{\prime\prime}\)\(\times\) 0.93\({}^{\prime\prime}\)) will be optimal for analyzing small-scale, 5-\(\upmu\)m atmospheric features with unprecedented resolution from the ground. With METIS' first light expected in 2028 [304], the complementary capabilities of MIMIZUKU, VISIR, and METIS promise exciting advances in mid-infrared observations from the ground over the next decade.
## 3 What We Have Learned
From more than a century of mid-infrared remote sensing, a picture of the general atmospheric thermal structure and chemistry of the giant planets has emerged. For Jupiter and Saturn, the picture can appear quite intricate, with complex structure, unexplained variability, and puzzling correlations across different heights and hemispheres. By comparison, our pictures of Uranus and Neptune in 2023 are little more than rough sketches, lacking details but nonetheless challenging our understanding of temporal variation in the outer solar system.
Figures 9 and 10 compare the observed mid-infrared spectra of the giant planets derived from ISO-SWS [266] and Cassini-CIRS [141; 237] for Jupiter and Saturn, and Spitzer-IRS [74; 76; 275] for Uranus and Neptune. Figure 11 compares ground-based images in three key mid-infrared windows.
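Figure 10 re-expresses the spectra as brightness temperature--the temperature of a black body that would produce the measured radiance at each wavenumber. A minimal sketch of that standard inversion (the unit convention and helper names here are my assumptions):

```python
import numpy as np

H, C_CM, KB = 6.62607015e-34, 2.99792458e10, 1.380649e-23  # SI h, k; c in cm/s

def planck(nu_cm, T):
    """Planck radiance per unit wavenumber [W cm^-2 sr^-1 / cm^-1]."""
    return 2 * H * C_CM**2 * nu_cm**3 / np.expm1(H * C_CM * nu_cm / (KB * T))

def brightness_temperature(radiance, nu_cm):
    """Invert the Planck function at wavenumber nu_cm [cm^-1]."""
    return H * C_CM * nu_cm / (KB * np.log1p(2 * H * C_CM**2 * nu_cm**3 / radiance))

# Round trip: a 120 K black body at 600 cm^-1 should read back as 120 K.
print(brightness_temperature(planck(600.0, 120.0), 600.0))  # -> ~120.0
```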
### 3.1 Chemistry and Temperature from Mid-IR Spectra
#### 3.1.1 5-6 \(\upmu\)m
From 5 to 6 \(\upmu\)m, scattered light and thermal emission contribute to the spectrum, modified by gaseous absorption. On Jupiter and Saturn, NH\({}_{3}\) and H\({}_{2}\)O are the primary absorbers [266; 268; 306]. Measurements of NH\({}_{3}\) have been used to provide insights into the accretion stage of the planets' formation histories. Analyses of the nitrogen ratios (at 5 to 6 \(\upmu\)m and \(\sim\)10-11 \(\upmu\)m) indicate identical values of these isotopic ratios for both Jupiter and Saturn, suggesting a similar history of primordial N\({}_{2}\) accretion during the formation of each planet [245; 287]. Likewise, the water abundance is important because oxygen is potentially telling of the carbon-to-oxygen (C/O) ratio, which is seen as diagnostic of the planet's formation history in the solar nebula [307; 308; 309]. The quest for Jupiter's and Saturn's deep water abundances has been a challenge since the mid-IR cannot sense well below the H\({}_{2}\)O condensation level on either planet [310]. The Galileo probe (in situ) and Juno (microwave radiometer) have aimed to resolve this value for Jupiter, but uncertainties remain due to the inhomogeneous nature of Jupiter's atmosphere. A proper discussion is beyond the scope of this review, but see [88; 89].
On Uranus and Neptune, emission in this region of the spectrum was too weak to be observed by ISO-SWS, and even the Spitzer-IRS spectra are in doubt [76; 275; 276]. The high opacity of Earth's atmosphere, particularly around 6 \(\upmu\)m, makes these observations impractical from the ground. Observations with JWST-MIRI should provide the first comprehensive examination of this spectral region.
Figure 9: Observed mid-infrared spectra of giant planets in the N- and Q-bands (top and bottom panels, respectively). The spectra of Jupiter (red) and Saturn (orange) are from ISO-SWS [266] and Cassini-CIRS [141; 237], while Uranus (green) and Neptune (blue) are disk-averaged radiances from Spitzer-IRS [74; 76; 275; 276]. The rough uncertainty of the spectra (most evident for Uranus) is suggested by the faint transparent envelopes. Select emission features are indicated, and the wavelengths at which different gases broadly contribute to spectra are indicated by the labeled horizontal lines (purple). The atmospheric transmission is indicated by the blue–gray interface varying between 100% (full transmission) and 0% (total attenuation) from the top of the atmosphere down to a surface, as in Figure 2.
#### 3.1.2 6-15 \(\upmu\)m
From 6 to 15 \(\upmu\)m, the spectra are shaped by numerous strong emission and absorption features against a backdrop of the hydrogen-helium continuum emission from around the tropopause (roughly 100 mbar). On Jupiter and Saturn, absorption is produced by NH\({}_{3}\), PH\({}_{3}\), and H\({}_{2}\)O, while CH\({}_{3}\)D (at \(\sim\)9 \(\upmu\)m) and deeper CH\({}_{4}\) absorption is found in the spectra of all four giant planets.
PH\({}_{3}\) is a disequilibrium species in the cold upper troposphere of Jupiter and Saturn, and its presence indicates vigorous vertical mixing on time scales less than that of chemical conversion [186; 187; 311]. It has yet to be detected on Uranus and Neptune [312]. Its spatial distribution reveals latitudinal variation in mixing, as discussed in Section 3.2.
Combined measurements of CH\({}_{4}\) and CH\({}_{3}\)D have been used to estimate the D/H ratio of the planets, providing powerful clues as to their formation history in the solar nebula [313]. From theory, Jupiter and Saturn are expected to have D/H ratios consistent with the solar nebula, from which they derived most of their mass; Uranus and Neptune, however, should have higher D/H ratios if they formed from proportionately larger, deuterium-rich icy cores. Measurements have shown that D/H ratios on Uranus and Neptune are indeed a factor of a few larger than those of Jupiter and Saturn [69; 71; 244; 275; 276; 314; 315; 316].
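The bookkeeping behind such a methane-based D/H estimate is a simple statistical correction. A minimal sketch, assuming the usual factor of four for the equivalent hydrogen sites in CH\({}_{4}\) and an illustrative fractionation factor; the input ratio below is invented for illustration:

```python
def d_to_h_from_methane(ch3d_over_ch4, fractionation=1.0):
    """D/H inferred from a measured CH3D/CH4 abundance ratio.

    The factor of 4 counts the equivalent hydrogen sites in CH4 that a
    single D atom could occupy; `fractionation` is an (assumed) chemical
    enrichment of D in methane relative to H2, often taken near unity
    for the giant planets.
    """
    return ch3d_over_ch4 / (4.0 * fractionation)

# Illustrative only: a CH3D/CH4 ratio of 2e-4 would imply D/H ~ 5e-5,
# a factor of a few above Jupiter-like values, as described above for
# the Ice Giants.
print(d_to_h_from_methane(2e-4))
```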
Nearly all the emission lines between 6 and 15 \(\upmu\)m are from stratospheric hydrocarbons, primarily CH\({}_{4}\) (peaking at 7.7 \(\upmu\)m), C\({}_{2}\)H\({}_{6}\) (\(\sim\)12 \(\upmu\)m), and C\({}_{2}\)H\({}_{2}\) (13 to 15 \(\upmu\)m). C\({}_{2}\)H\({}_{6}\), C\({}_{2}\)H\({}_{2}\), and other minor hydrocarbons (including methyl radicals (CH\({}_{3}\)), ethylene (C\({}_{2}\)H\({}_{4}\)), methylacetylene (CH\({}_{3}\)C\({}_{2}\)H), and diacetylene (C\({}_{4}\)H\({}_{2}\))) are the result of photochemistry in the stratospheres of the giant planets [39]. Methane from the troposphere is mixed up into the stratosphere, where it is then broken down by ultraviolet radiation, prompting a chain of chemical reactions that result in a melange of new hydrocarbons [37; 39; 164; 165; 317; 318]. Estimates of the abundances of these hydrocarbons have been used to infer vertical mixing within the atmospheres and constrain seasonal-chemical models of their formation and destruction, e.g., [318]. Emission from CH\({}_{4}\) has been used to infer stratospheric temperatures on Jupiter and Saturn since it is considered uniformly well mixed in the warm atmospheres of the Gas Giants [171; 248; 293], whereas it cannot necessarily be used as a thermometer on Uranus and Neptune, given that colder temperatures are expected to condense methane and alter its distribution [70]. However, hydrogen is well mixed in all these atmospheres, and the H\({}_{2}\) S(0), S(1), S(2), S(3), and S(4) quadrupole emissions contribute to the observed radiances, to varying degrees, at roughly 28, 17, 12, 9.7, and 8 \(\upmu\)m, respectively. The S(2) and S(3) lines are weakly emitted from pressures near 1 \(\upmu\)bar, and though they are detected in the Spitzer-IRS observations of Uranus [276], they are generally lost in the forest of ethane lines on the other planets. The H\({}_{2}\) S(1) and H\({}_{2}\) S(0) lines, observed at longer wavelengths, are most easily measured and have proven the most useful for evaluating temperatures and ortho-para fractions, as discussed below.

Figure 10: As in Figure 9, mid-infrared spectra of the giant planets, but now expressed in brightness temperature versus spectroscopic wavenumber.
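For reference, the brightness temperature plotted in Figure 10 is simply the temperature at which a blackbody would reproduce the measured radiance \(I_{\tilde{\nu}}\) at wavenumber \(\tilde{\nu}\); inverting the Planck function gives

\[T_{B}(\tilde{\nu}) \;=\; \frac{hc\tilde{\nu}}{k_{B}}\left[\ln\!\left(1+\frac{2hc^{2}\tilde{\nu}^{3}}{I_{\tilde{\nu}}}\right)\right]^{-1},\]

so that, at wavenumbers of low opacity, \(T_{B}\) approximates the physical temperature of the emitting level.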
The relatively intense spectra of Jupiter and Saturn at wavelengths beyond \(\sim\)9 \(\upmu\)m are telling of their relatively warmer upper-tropospheric temperatures, as inferred from the earliest observations of these planets [114]. This can be seen in typical temperature profiles derived from spectra (see Figure 3). Neptune, however, appears relatively bright at 7-8 \(\upmu\)m--comparable to Saturn and indicative of Neptune's surprisingly warm and methane-rich stratosphere. The large methane abundance is generally interpreted as evidence that Neptune has particularly strong vertical mixing, while Uranus is particularly stagnant [39; 144; 318]. The stratospheric methane mole fraction ((1.15 \(\pm\) 0.10) \(\times\) 10\({}^{-3}\)[69; 319]) is greater than the expected value limited by the colder temperatures of the underlying tropopause (i.e., the cold-trap minimum) [209; 320; 321]. Moist convection has been discussed as a possible explanation for the stratospheric methane enhancement [292; 309; 322]. Alternatively, another possible avenue for transferring methane from the troposphere to the stratosphere, despite the cold trap, was suggested following discoveries from thermal imaging. Ground-based images show the south pole of Neptune to be warmer at the tropopause and lower stratosphere than elsewhere on the planet [323]. Orton et al. [323] proposed that methane could potentially be seeping up from the troposphere at the warm pole before spreading to lower latitudes, avoiding cold-trapping. However, evidence of meridional transport or strong stratospheric methane gradients has yet to be found [70; 74]. Furthermore, the excess methane and potential associated hydrocarbon hazes are still not enough to explain the high stratospheric temperatures of Neptune, which exceed those expected from radiative heating models [211; 324; 325; 326; 327]. Additional modeling is necessary to explain these observations.
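To see why the tropopause acts as a cold trap (a textbook thermodynamic argument, not a derivation from the works cited here), note that the methane mole fraction carried into the stratosphere is limited by saturation at the tropopause temperature \(T_{\mathrm{tp}}\) and pressure \(p_{\mathrm{tp}}\):

\[x_{\mathrm{CH_{4}}} \;\lesssim\; \frac{e_{s}(T_{\mathrm{tp}})}{p_{\mathrm{tp}}}, \qquad e_{s}(T) \;=\; e_{s}(T_{0})\,\exp\!\left[-\frac{L}{R_{v}}\left(\frac{1}{T}-\frac{1}{T_{0}}\right)\right],\]

where \(e_{s}\) is the saturation vapor pressure of methane given by the Clausius-Clapeyron relation, \(L\) the latent heat of sublimation, and \(R_{v}\) the specific gas constant of methane. A stratospheric abundance exceeding this limit therefore points to transport that bypasses the coldest levels.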
The comparison of the planets' spectra at 12-14 \(\upmu\)m also reveals a striking difference between Uranus and the other giant planets. Uranus appears anomalously faint, with a conspicuous absence of C\({}_{2}\)H\({}_{6}\) emission. Modeling of the stratospheric photochemistry has suggested that this is a consequence of Uranus' apparently weak vertical mixing, which results in meager lower-stratospheric methane abundances (1.6 \(\times\) 10\({}^{-5}\)) and a lower-altitude homopause (7 \(\times\) 10\({}^{-5}\) bars). This limits methane and hydrocarbon photochemistry to relatively higher pressures, where the dominant hydrocarbon reactions and loss rates differ. With less CH\({}_{4}\) in the stratosphere, C\({}_{2}\)H\({}_{6}\) is also less shielded and more easily photolyzed. This results in relatively lower ethane abundances (1.3 \(\times\) 10\({}^{-7}\) at 0.2 mbar) [71] compared to Jupiter (2.08 \(\times\) 10\({}^{-5}\)[328]), Saturn (9 \(\times\) 10\({}^{-6}\)[251]), and Neptune (8.5 \(\times\) 10\({}^{-7}\)[69]).
#### 3.1.3 15-30 \(\upmu\)m
Finally, from 15 to 30 \(\upmu\)m, the spectrum is dominated by the hydrogen-helium continuum emission from the upper troposphere and lower stratosphere. At these wavelengths, the differences in radiances between the planets clearly express the relative temperatures around the tropopause (\(\sim\)40-200 mbar) (see Figure 3). Uranus, with its apparent weak internal flux and
vertical mixing of solar-absorbing methane, is overall coldest at these pressures, despite being nearer to the Sun than Neptune. Several small emission features can also be seen, including CO\({}_{2}\) on both Jupiter [48; 329] and Saturn [268] at 14.98 \(\upmu\)m; CH\({}_{3}\)C\({}_{2}\)H (methylacetylene) and C\({}_{4}\)H\({}_{2}\) (diacetylene) at 15.80 and 15.92 \(\upmu\)m, respectively, on all giant planets [71; 76; 268; 275; 276; 330]; likewise CH\({}_{3}\) has been detected at 16.5 \(\upmu\)m, although only tentatively for Uranus [71; 267; 276]. Retrieved CH\({}_{3}\) abundances on Jupiter and Saturn have been shown to be inconsistent with values predicted from theoretical eddy diffusivities and CH\({}_{3}\) recombination rates [267]. Subsequent analysis of TEXES spectra also revealed a 3\(\times\) greater abundance of CH\({}_{3}\) in Jupiter's polar regions [292] than predicted by photochemical models [331]. These inconsistencies suggest either additional sources of CH\({}_{3}\) production or uncertainties in the chemical reaction rates [292], and the topic remains an area of active research.
Standing out among the emission features are the H\({}_{2}\) S(1) and H\({}_{2}\) S(0) hydrogen quadrupoles, observed at roughly 17 and 28 \(\upmu\)m. These lines are unambiguously sensitive to the lower stratospheric temperatures, within a larger continuum that is sensitive to the _ortho_ and _para_ fractions [211; 212; 213]. Retrievals exploiting the H\({}_{2}\) S(1) quadrupole have been particularly important for the Ice Giants, where methane emission cannot be used as an unambiguous proxy for stratospheric temperature owing to its potentially variable distribution. Several studies have used the H\({}_{2}\) S(1) line to determine lower stratospheric temperatures and, combined with the H\({}_{2}\) and He continuum emission, derive vertical temperature profiles [70; 74; 76; 276]. Notably, the H\({}_{2}\) S(1) line has also been used to confirm that Neptune's enhanced polar stratospheric emission and its changes in time are due primarily to variations in temperatures, as discussed in the next sections [74].
### Structure and Dynamics from Spatially Resolved Mid-IR Spectra and Imaging
As current exoplanetary investigations demonstrate, a wealth of atmospheric information can be inferred from an unresolved target [332; 333]. However, constraining many of the processes shaping a three-dimensional atmosphere--often in unanticipated ways--requires observations that characterize the spatial structure.
The Solar System planets vary significantly in observed structure at mid-infrared wavelengths, as can be seen in the representative examples of mid-infrared images shown in Figure 11. Filtered images are shown in three typical mid-infrared passbands for each planet. Each of these filters senses radiation in a different wavelength range and is thus associated with different molecular transitions and pressure levels in the atmosphere. The Q-band is represented by the images with filtered bandpasses around 18-19 \(\upmu\)m; these sense thermal emission from the upper troposphere to the lower stratosphere that results from the collision-induced hydrogen-helium continuum. The 12-13-\(\upmu\)m filters are centered on wavelengths dominated by ethane and/or acetylene emission lines originating from the stratospheres. The 7.9-\(\upmu\)m filters sense emission from stratospheric methane.
In general, the measured mid-infrared radiances depend on the abundance of the emitting gas as well as its temperature. In all cases, hydrogen and helium are assumed to be uniformly well mixed throughout the atmosphere below the homopause, and so the observed spatial structure can be explained by spatially varying temperatures. On Jupiter and Saturn, methane is likewise considered well mixed, and thus the 7.9-\(\upmu\)m observations are again indicative of temperature structure [53; 54], but at lower pressures. However, on Uranus and Neptune, it is cold enough for methane to condense in the troposphere, and therefore methane cannot necessarily be assumed to be uniformly well mixed [70; 74]. Similarly, stratospheric ethane and acetylene are disequilibrium species, with sources and sinks dependent on photochemistry and temperatures, and so these hydrocarbons are not expected to be uniformly well mixed in pressure or latitude on any of the planets. For these potentially variable gases, the cause of the observed structure is inherently ambiguous, and interpretation of the radiances requires
independent knowledge of the temperatures or assumptions regarding the gaseous distributions. Hence, temperatures derived from thermal observations, particularly from imaging and low-resolution spectra, are inherently subject to large degeneracies with chemical composition (and sometimes cloud opacity), resulting in potentially large uncertainties.
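This degeneracy can be stated schematically with the standard weighting-function form of the outgoing nadir radiance from radiative-transfer theory,

\[I_{\nu} \;=\; \int B_{\nu}\big(T(p)\big)\,\frac{\partial \mathcal{T}_{\nu}(p)}{\partial \ln p}\; d\ln p,\]

where \(B_{\nu}\) is the Planck function and \(\mathcal{T}_{\nu}(p)\) the transmission from pressure \(p\) to space. Because the opacity, and hence \(\mathcal{T}_{\nu}\), scales with the absorber abundance, a change in measured radiance can be produced either by \(T(p)\) or by the gaseous distribution, and the two can only be separated with additional constraints.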
#### 3.2.1 Spatial Structure of Jupiter and Saturn
Jupiter and Saturn show distinct zonal banding across the mid-infrared, indicative of a complex temperature structure associated with belt-zone dynamics [49; 197; 334; 335]. Temperature structures retrieved from spatially resolved spectra are shown in Figure 12. Regions that appear brighter in thermal infrared emission (see Figures 4 and 11) are warmer with thinner clouds, whereas darker areas are colder with thicker clouds. These regional temperature differences have been interpreted as evidence of adiabatic warming and cooling associated with sinking and rising currents of gas, respectively [44; 173; 211; 215; 334]. However, it has been argued that the temperature anomalies can be sustained dynamically given cyclonic/anticyclonic zonal shear and the strong vertical stability of the tropopause [335].
Figure 11: Mid-infrared images of the giant planets from ground-based observatories in three different wavelength regions, each primarily sensitive to different molecules and pressures: stratospheric methane (centered at \(\sim\)7.9 \(\upmu\)m); stratospheric ethane (\(\sim\)12.2 \(\upmu\)m, relevant to Jupiter, Saturn, and Neptune) and acetylene (\(\sim\)13 \(\upmu\)m, relevant to Uranus); and tropospheric hydrogen (\(\sim\)17–19 \(\upmu\)m). Images have been rotated so that north is up in all cases. Note that Uranus appears remarkably different in structure in its stratospheric emission compared to the other planets. Furthermore, note that the Uranus images are of starkly poorer quality owing to Uranus' weaker emission. Images of Uranus at 7.9 \(\upmu\)m do not exist in the literature, given poorer telluric transmission and Uranus' particularly weak emission at these wavelengths. Images are from the following sources: Jupiter from IRTF-MIRSI in 2010 [305]; Saturn from VLT-VISIR in 2016 [65]; Uranus from VLT-VISIR in 2018 [33]; Neptune from VLT-VISIR, averaged from images dating between 2008 and 2018 [74].
In this interpretation, pressure differences between cyclonic and anticyclonic shear regions lead to temperature differences, given constraints on the column thickness imposed by the static stability of the tropopause. However, upwelling and downwelling may still be necessary to explain evidence of chemical disequilibrium, including that of ortho-para hydrogen, which suggests equatorial upwelling on Jupiter and Saturn [46; 52; 214; 336; 337].
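The adiabatic warming and cooling invoked above can be made concrete with the potential temperature (a standard result quoted here for context): a parcel displaced dry-adiabatically conserves

\[\theta \;=\; T\left(\frac{p_{0}}{p}\right)^{R/c_{p}},\]

so gas subsiding to higher pressure warms as \(T=\theta\,(p/p_{0})^{R/c_{p}}\) increases, while rising gas cools correspondingly; here \(p_{0}\) is a reference pressure, \(R\) the specific gas constant, and \(c_{p}\) the specific heat at constant pressure.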
The meridional temperature gradients imply vertical wind shear through the geostrophic thermal-wind balance, and the regions of maximum gradients appear well correlated with the latitudes of localized peaks in the zonal winds (i.e., zonal jets) detected by cloud tracking [338; 339; 340; 341; 342]. The vertical motions and shears implied by the temperature field must also be balanced by meridional winds, and Cassini-CIRS observations show evidence of this meridional transport in chemical tracers (e.g., C\({}_{2}\)H\({}_{2}\), C\({}_{2}\)H\({}_{6}\), C\({}_{3}\)H\({}_{8}\)) on Saturn [60; 254; 293] and Jupiter [343; 344]. Distributions of ammonia [240] on Jupiter and phosphine [242] on both Jupiter and Saturn also show signs of dynamical motions, with maximum abundances in the cool equatorial zone and reduced abundances in the adjacent warm belts. This is consistent with the picture suggested by the temperature field, with strong uplift in the equatorial zone and descent in the neighboring belts at the top of the troposphere. As these results demonstrate, the mid-infrared measurements provide an independent diagnostic of the winds and dynamics, beyond what visual imaging of aerosol scattering alone can provide.
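The balance invoked here is, up to sign conventions, the standard thermal-wind relation on pressure surfaces,

\[f\,\frac{\partial u}{\partial \ln p} \;=\; \frac{R}{a}\,\frac{\partial T}{\partial \varphi},\]

where \(u\) is the zonal wind, \(f\) the Coriolis parameter, \(R\) the specific gas constant, \(a\) the planetary radius, and \(\varphi\) latitude. Meridional temperature gradients retrieved from mid-infrared spectra thus translate directly into vertical shear of the zonal winds.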
This full picture of the gas giant circulations becomes more complicated when one also considers the distribution of storms, deep ammonia, and microwave radiances--all of which potentially point towards deeper, vertically coincident, but directionally opposite circulation cells ("stacked" circulation cells) on Jupiter [345; 346; 347; 348; 349]. A discussion of this circulation is beyond the scope of this review, but see Fletcher et al., [334] for a comprehensive review.
Saturn also displays enhanced emission at its poles, which measure 4-7 K warmer than the surrounding latitudes [78; 350]. As can be seen from the consistency across the three filtered images in Figure 11, the feature extends from the upper troposphere into the stratosphere. The enhanced emission implies downwelling and adiabatic warming, consistent with the local reduction in phosphine [242]. Observations over time have shown that this is a seasonally varying feature, as discussed in Section 3.3.2.
#### 3.2.2 Uranus and Neptune
In the case of Uranus and Neptune, the thermal structures appear, at first glance, less complex. On both planets, the equators and poles appear relatively more radiant than do the mid-latitudes in the Q-band images (18-19 \(\upmu\)m) [33, 72, 74, 77, 323, 354, 355, 356]. This is consistent with the mid-latitudes at tropopause pressures (40-200 mbar) being colder (by roughly 3-6 K) than the equator and poles [33, 72, 74, 77, 210, 211, 352]. The stratospheres of the Ice Giants, however, appear significantly different in structure, both compared to each other and to their tropospheres.
Neptune possesses signs of faint banding at 7.9 \(\upmu\)m and strong limb-brightening at 12 \(\upmu\)m, but only slightly enhanced equatorial brightening [74, 292]. The limb brightening can be explained by temperature and ethane profiles that increase with height at the range of pressures sensed [72, 74, 318], in contrast to the decreasing profile. However, the banding, if truly present, appears somewhat more complicated than the temperature structure below. With some squinting, one may even argue that 7.9 \(\upmu\)m images of Neptune appear vaguely more similar to those of Saturn, with its strong polar vortex and banding, only degraded by poorer spatial resolution. With slightly weaker radiances at mid-latitudes compared to the equator and pole, it is possible that we are simply seeing an extension of the upper tropospheric circulation imprinted upon a more complex stratospheric temperature and/or chemical structure, but this cannot be conclusively determined with existing data [73, 74].
Figure 12: Contours depicting retrieved temperatures versus latitude and pressure for each of the giant planets, reproduced from Fletcher et al. [334]. Colors suggest the transition from warmer (redder) to colder (bluer) temperatures. Temperature data for Jupiter are from the Cassini-CIRS Jupiter flyby in 2000 [242]. Data for Saturn are from Cassini-CIRS while in orbit around the planet, dating between 2006 and 2010 [351]. Temperature data for Uranus [77, 352] and Neptune [72] are from the Voyager 2 flybys in 1986 and 1989, respectively. The vertical lines to the right of each plot indicate the pressures at which temperatures are constrained by the observations; outside these pressure ranges, temperatures simply relax to an assumed starting profile [334]. Vertical dotted and dashed lines indicate the positions of prograde and retrograde zonal jets, respectively (from [338, 339, 353]). Zonal winds and temperatures are in geostrophic balance.
Observations of Neptune's hydrogen quadrupole emission (17.03-\(\upmu\)m H\({}_{2}\) S(1)) suggest that Neptune's stratospheric emission structure is primarily owing to latitudinal gradients in its temperature field [74]. Assuming that the atmospheric composition is uniform with latitude, retrievals of atmospheric temperatures reveal a strong meridional gradient, with a 30 K difference between the cool mid-latitudes and the warm polar vortex at 0.5 mbar in 2020 (see Figure 13). As discussed in Section 3.3.4, this temperature structure appears variable in time.
Finally, and most peculiar of all, Uranus' stratosphere appears completely different from that of every other giant planet. Uranus' lower stratosphere is very cold and relatively dry [39; 71; 76; 38], and as such, no methane-sensing images (7.9 \(\upmu\)m) currently exist (see Figure 11). However, a few images at 13 \(\upmu\)m, sensitive to stratospheric C\({}_{2}\)H\({}_{2}\), do exist, and they show excess radiance at high latitudes in the northern and southern hemispheres [33; 298; 357] (see Figure 14). From existing data, it cannot be determined whether these greater high-latitude radiances are due to warmer temperatures or an enhancement in C\({}_{2}\)H\({}_{2}\) (see Figure 13). Additionally, the peak latitude of this radiance cannot be strongly constrained given the low signal-to-noise ratio (SNR) of the data. It is tentatively placed at 40\({}^{\circ}\) latitude, but the radiance may remain constant poleward of this value, depending on the amount of limb-brightening present [33]. The determination of the distribution is significant. A peak at 40\({}^{\circ}\) would coincide with the latitudes of temperature minima and assumed maximum upwelling in the upper troposphere, implying a dynamical connection from below. This could take the form of a vertically coincident but opposite circulation cell or, in contrast, an extension of the existing upper-tropospheric circulation simply supplying excess hydrocarbons to the local stratosphere. However, a uniform distribution north of 40\({}^{\circ}\) would require a completely different explanation. The latter would imply either a separate and somewhat independent circulation, or simply that a completely different mechanism (e.g., annual radiative heating, photochemistry, or breaking waves) is shaping the stratospheric radiance [33]. In any case, the lack of data is limiting our ability to understand the stratospheric dynamics and/or chemistry of Uranus. Fortunately, JWST should soon provide the data necessary to make considerable advances in our understanding of Uranus' stratosphere.

Figure 13: Retrieved stratospheric properties from ground-based images of the Ice Giants. Left: Neptune's temperature structure, retrieved from 2020 VLT-VISIR imaging data [74]. Temperatures are indicated by the colored contours at 2 K intervals. The heights constrained by the data are suggested by the vertical curves on the right, with contributions peaking near 100 and 0.5 mbar. A warm polar vortex is evident at south polar (planetocentric) latitudes. Right: Meridional gradients in C\({}_{2}\)H\({}_{2}\) (top) and temperature (bottom), consistent with Uranus' observed stratospheric radiances (see Figure 14). Current data cannot differentiate between the two potential extreme solutions, given the ambiguous nature of the stratospheric emission [33].
### Temporal Variability
The atmospheres of the giant planets exhibit significant variation at visible and near-infrared wavelengths, where we observe sunlight scattered and/or absorbed by gases, clouds, and hazes [342; 358; 359; 360; 361; 362; 363; 364; 365; 366; 367; 368; 369; 370; 371; 372] (see Simon et al. [373] for a review). Corresponding variations in atmospheric temperatures and chemistry may naturally be expected. With decades of mid-infrared observations now available, investigations of temporal variability at thermal wavelengths have revealed intriguing findings in recent years.
In general, many potential sources of temporal variability exist in planetary atmospheres, acting over a wide range of timescales [374]. We can divide these sources into two basic groups, categorized as either internal or external mechanisms. Internal mechanisms include meteorological phenomena and generally stochastic processes within the atmosphere that are poorly understood in the giant planets, whereas external mechanisms act upon the atmosphere and may be considered more deterministic12. The latter category includes solar energy incident upon the atmosphere, the effects of which can be assessed with seasonal models [37; 211; 375].

Footnote 12: Impactors may be considered a notably stochastic exception.

Figure 14: Uranus' stratosphere at 13 μm, as seen from VLT-VISIR in 2009 (**left**) and 2018 (**right**). Differences in the geometry of the observations are illustrated in the bottom panels. The cause and precise spatial distribution of the enhanced radiance at high latitudes is unclear [33].
For planets with significant axial tilts13, the daily mean insolation (per unit area) varies seasonally across the disk, with the greatest variation at higher latitudes. The period of this cyclic variation is determined by the tropical orbital period of the planet14. With a 98\({}^{\circ}\) axial tilt and 84-year orbit, Uranus arguably serves as the most extreme example of variable seasonal forcing, with much of the planet experiencing decades of uninterrupted summer daylight and winter darkness [206; 318]. Although solar fluxes are weak in the outer Solar System, modeling suggests that seasonal variations in temperature and chemistry are likely [37; 318] and, in the case of Saturn, well documented by observations [379] (see Section 3.3.2).
Footnote 13: The giant planet axial tilts are 3.12\({}^{\circ}\), 26.73\({}^{\circ}\), 97.77\({}^{\circ}\), and 28.33\({}^{\circ}\), for Jupiter, Saturn, Uranus, and Neptune, respectively, where the axial tilt is defined as the angle between the direction of the positive pole and the normal to the orbital plane. Note that this differs from the definition adopted by the International Astronomical Union (IAU), which defines Uranus’ north pole as the one that lies on the north side of the Solar System’s invariable plane, thus placing Uranus’s tilt at 82.23\({}^{\circ}\)[377; 378].
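To make the seasonal forcing explicit (the classical daily-mean insolation formula, included here for orientation): the diurnally averaged flux per unit area at latitude \(\varphi\) is

\[\bar{Q} \;=\; \frac{S_{0}}{\pi d^{2}}\left(h_{0}\sin\varphi\,\sin\delta \;+\; \cos\varphi\,\cos\delta\,\sin h_{0}\right), \qquad \cos h_{0} = -\tan\varphi\,\tan\delta,\]

where \(S_{0}\) is the solar constant at 1 au, \(d\) the heliocentric distance in au, \(\delta\) the solar declination (bounded by the axial tilt), and \(h_{0}\) the hour angle at sunrise and sunset (in radians). For \(|\varphi| > 90^{\circ}-|\delta|\) a latitude is in continuous daylight or darkness, which on Uranus applies to most of the planet near the solstices.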
In addition to these larger changes in seasonal forcing, the Sun is intrinsically variable over the course of a roughly 11-year solar cycle. While the total solar irradiance differs by little more than 0.1% over a typical solar cycle [380], variation in far-ultraviolet (e.g., 121.57 nm Lyman-\(\alpha\) irradiance) can exceed 40%. Such high-energy photons are the main drivers behind methane photochemistry, and so modulation in the UV flux can potentially produce observable variation in photochemistry if the reaction timescales are sufficiently short [37].
The expected extent of the seasonal variation will depend on the change in solar forcing and the capacity of the atmosphere to respond to that change. Characteristic timescales for the atmospheric responses can be calculated from radiative and chemical models, and by comparing these timescales to the orbital periods, the potential for seasonal changes can be assessed. Figure 15 illustrates the results of two separate studies, in which radiative time constants were calculated by perturbing the temperature profile and calculating the resultant change in cooling rates [211; 327]. While significant differences exist between the results (likely owing to the use of updated gaseous absorption coefficients [381] and more rigorous radiative-transfer modeling by Li et al. [327]), both analyses suggest that Uranus is an outlier, with radiative time constants far longer than the orbital/seasonal timescales, as discussed in Section 3.3.3. However, variation in the stratospheric temperatures of the other giant planets seems likely according to the more recent analysis, and, indeed, this potential appears consistent with observed variability.
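The order of magnitude of these time constants can be appreciated from a common scaling estimate (a back-of-the-envelope form, not the full calculation of either cited study),

\[\tau_{\mathrm{rad}} \;\sim\; \frac{c_{p}\,p}{g}\,\frac{1}{4\sigma T^{3}},\]

i.e., the heat capacity of the atmospheric column above pressure \(p\) divided by the rate at which its thermal emission responds to a temperature perturbation, with \(c_{p}\) the specific heat, \(g\) the gravitational acceleration, and \(\sigma\) the Stefan-Boltzmann constant. The temperature enters as \(T^{-3}\), which is one reason frigid Uranus responds so slowly.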
Analyses of the characteristic timescales of chemical reactions [36; 37; 318] and dynamical transport [173; 211; 336; 212] have similarly been explored to assess the potential for variability. Uranus again appears relatively sluggish compared to the other planets, with expectations for less seasonal photochemical variation [318] and exceedingly long dynamical time constants (estimated at 700 years compared to \(\leq\)200 for the others) [173]. Chemical variation has been detected in the atmospheres of Jupiter and Saturn [63; 243; 285; 290; 382; 383], but Uranus and Neptune remain poorly constrained given their long seasonal timescales. Likewise, dynamical timescales for all the planets remain highly theoretical and uncertain owing to the obvious challenges of observationally constraining such parameters.
#### 3.3.1 Jupiter Variability
Multi-wavelength imaging of Jupiter over the past nearly 40 years has revealed surprisingly complex variability in Jupiter's atmosphere (e.g., [305]). Data have revealed gradual changes in low-latitude temperatures, with little seasonal or short-term variation [384]. Emission at 5 \(\upmu\)m has been used to reveal significant variability in the cloud opacity [54, 289, 385], while stratospheric temperatures appear more variable and complicated on shorter timescales [50, 386].
Observations of Jupiter's stratospheric temperatures via methane at 7.9 \(\upmu\)m have been used to infer variability between 1980 and 2011 [387] (see Figure 16). This investigation revealed significantly different periods of oscillation (the quasi-quadrennial oscillation), with a 5.7-year period between 1980 and 1990 and a 3.9-year period between 1996 and 2006. Planetary-scale disturbances in 1992 and 2007 disrupted the predicted quasi-quadrennial oscillation pattern, suggesting that these oscillations are related to vertically propagating waves generated by meteorological sources below [387].
Figure 15: Theoretical radiative time constants for the giant planets. Plots show these characteristic timescales for each planet over a range of pressures, as derived in two separate studies--dashed lines are from Conrath et al. [211], while solid lines are from Li et al. [327]. The left plot presents the radiative time constant in years, with the orbital periods of each planet indicated by the vertical dotted lines (labeled "J" for Jupiter, "S" for Saturn, etc.). The right plot expresses the values in terms of a ratio parameter (of a form akin to the resonance behavior of an underdamped harmonic oscillator [211]), for which values of order unity or smaller indicate the potential for stronger seasonal responses. The approximate pressures sensed by the N and Q bands are suggested at the far right. Note that Uranus has the longest radiative time constants throughout the stratosphere.
Similar studies have revealed surprising apparent correlations (and anti-correlations) between different altitudes and locations. Equatorial temperature variations in the upper troposphere appear anti-correlated with those at higher altitudes, in a manner that suggests stratospheric dynamics may also influence the upper-tropospheric temperatures below. Intriguingly, anti-correlations in temperatures have been detected for conjugate latitudes in opposite hemispheres [305; 387; 389].
Figure 16: Sequences of mid-infrared images of Jupiter at 5 μm (top) and 7.9 μm (bottom) showing changes over time, adapted from Antuñano et al. [385] and Antuñano et al. [387]. Variation at 5 μm suggests changes in tropospheric temperature and cloud opacity, with large temporal variability mainly at the equatorial and tropical latitudes and less temporal variability at mid-latitudes [385]. Dashed blue and green lines mark 16\({}^{\circ}\) N and 10\({}^{\circ}\) S planetocentric latitudes, respectively. Emission at 7.9 μm senses stratospheric temperatures (via methane emission), revealing roughly periodic variation associated with the quasi-quadrennial oscillation [387]. 5-μm images are from various instruments on the IRTF, including BOLO-1 (1984) [384], NSFCam (1999, 2001) [388], NSFCam2 (2006) [51], and SpeX (2010, 2013). 7.9-μm images are from IRTF-MIRLIN (1996, 1997) and IRTF-MIRSI (2008, 2011).
Though the sources of such oscillations are not definitively known, some are thought to be associated with stratospheric winds and temperature oscillations, analogous to Earth's quasi-biennial oscillation [50; 236; 390]. Theories suggest wave or eddy-driven meridional winds likely play an important role in modulating the temperatures and winds in the upper troposphere and stratosphere on seasonal and shorter timescales [391], and analyses of the thermal variability could potentially be used to estimate variation in the mechanical forcing [236].
#### 3.3.2 Saturn Variability
Unlike Jupiter, Saturn has a significant axial tilt (26.73\({}^{\circ}\)), as illustrated by the changing views from Earth seen in Figure 17. The resulting seasonal variation in sunlight over a Saturnian year (29.4 Earth years) dominates Saturn's temporal variability in the mid-infrared. The Cassini-Huygens mission orbited Saturn for 13 years--enough to gain unprecedented detail of how the planet changed over the course of nearly two seasons. Cassini-CIRS observed Saturn's northern mid-latitude stratosphere warming by 6-10 K as this region emerged from ring-shadow in spring, while the southern mid-latitudes cooled by 4-6 K [249] (see Figure 18). The tropospheric temperatures also changed, but to a lesser degree, consistent with theoretical expectations of larger thermal inertia and longer radiative time constants. The fall and winter hemispheres also saw significant depletion in acetylene, consistent with seasonal photochemical modeling [37; 393].
As part of Saturn's seasonal cycle, its polar stratosphere sees the development of a warm circumpolar vortex that peaks in the summer and dissipates in the winter. Cassini-CIRS observed the dissipation of Saturn's southern polar vortex in southern mid-autumn (2012) [78; 379], followed by the eventual formation of the northern polar vortex in late northern spring (2015) [78]. The northern feature was associated with warmer temperatures poleward of \(\sim\)75\({}^{\circ}\) planetographic latitude. Interestingly, this feature exhibited a hexagonal boundary, echoing the hexagonal Rossby wave made visible in the clouds far below. This suggests a dynamical link between the features separated by 300 km in height [78]. A comprehensive review of Saturn's seasonal changes during the Cassini era can be found in Fletcher et al. [63].
A recent multi-decadal study of ground-based mid-infrared imaging similarly found seasonal temperature changes of \(\sim\)30 K in the stratosphere and \(\sim\)10 K in the upper troposphere, consistent with Cassini observations and predictions from radiative climate models [65]. The most recent observations from VLT-VISIR show that warming is continuing at the northern summer polar stratosphere. However, comparison of \(\sim\)7.9-\(\upmu\)m imaging revealed evidence of inter-annual variations at equatorial latitudes. Variations on these timescales are inconsistent with the strictly semi-annual 15-year equatorial stratospheric oscillation [250; 394], suggesting the oscillation's period is either intrinsically variable and/or subject to disruption by storms or other meteorological phenomena.
Aside from seasonal phenomena, mid-infrared observations have also notably detected warm stratospheric features associated with an immense northern-hemisphere storm that appeared in December 2010. The storm was observed to produce enormous changes in stratospheric temperatures and chemistry, warming the localized region by 80 K compared with its surroundings at 2 mbar [285]. The stratospheric warm "beacons" eventually evolved into a stratospheric anticyclonic vortex in 2011 [62; 285] (see Figure 17). Cassini-CIRS observations were compared with chemical models to explain the mid-infrared changes, and it was found that elevated temperatures alone could not explain the enhanced thermal emission from ethane and acetylene. Downwelling winds, transporting hydrocarbons to higher pressures, were also needed to reproduce the CIRS observations.
Figure 17: Saturn images showing changes between 2004 and 2012, adapted from [65]. Images sense stratospheric temperatures via methane emission at 7.8 μm (**left**) and tropospheric temperatures via collision-induced hydrogen at 17.6 μm (**right**). Images are from Keck-LWS (2004) [392], Subaru-COMICS (2007), and VLT-VISIR (2008-2012). Note the prominent warm spot associated with a remarkable storm in the northern hemisphere in 2011 [62; 395].
#### 3.3.3 Uranus Variability
Reviewing all temporal variability detected in the mid-infrared on Uranus is unfortunately a very brief exercise. There is simply very little to compare given the limited amount of mid-infrared data that exists. Furthermore, what does exist appears largely invariant over the short history of these observations relative to the lengthy 21-year seasons on Uranus.
One might reasonably expect seasonality on Uranus to be interesting given its extreme axial tilt of 98\({}^{\circ}\), which forces nearly all latitudes into extended periods of total daylight and darkness [318]. However, its sluggish vertical mixing, low stratospheric methane abundances, and cold temperatures result in a great thermal inertia, theoretically leading to small seasonal changes and large phase lags [211; 327; 352]. The atmospheric temperatures are thus expected to remain close to the annual-mean radiative equilibrium values, even though the seasonal amplitude of the radiative forcing is large [211]. There is some discrepancy in the literature over the length of the theoretical radiative time constants as a function of height for the outer planet atmospheres (see Section 3.3 and Figure 15). Conrath et al. [211] calculated values of over 130 years in the upper troposphere and stratosphere, but Li et al. [327] found them to be significantly shorter--ranging from roughly 10 to 70 years at pressures of 400 to 70 mbar. The latter would suggest the potential for variability, and observations could potentially confirm or refute these theoretical expectations.

Figure 18: Retrieved temperatures of Saturn's stratosphere at 2 mbar versus latitude over the entire Cassini mission, adapted from Fletcher et al. [78]. The years and heliocentric longitudes--indicating the seasonal phase, with 270\({}^{\circ}\) and 360\({}^{\circ}\) marking the northern winter solstice and spring equinox, respectively--are indicated by the color bar.
When Voyager-IRIS produced the first temperature maps of Uranus near the time of southern summer, there was little difference between the summer and winter hemispheric temperatures at the tropopause. The summer pole was no warmer than the winter pole at the tropopause and only marginally warmer in the lower stratosphere [77]. This indicated that seasonal variation in the upper troposphere was indeed very small. Subsequent comparisons between Voyager-era temperatures and ground-based imaging acquired 20 and 32 years later revealed no significant changes in the upper-tropospheric (70-400 mbar) temperatures, even more than a full season later [33; 77]. Significant temperature changes have yet to be found.
In the stratosphere, observations are even more limited. Only nine years separate the existing images sensitive to stratospheric emission, and they appear invariant within the considerable uncertainties [33] (see Figures 13 and 14). There have been some hints of possible variation in ground-based images [33] and Spitzer-IRS observations, averaged over different sub-observer longitudes, but these have been interpreted as possible evidence of longitudinal variation, rather than temporal variability [276]. A lack of temporal variability in the stratosphere would be consistent with the expected long stratospheric radiative time constants [211; 327] (see Figure 15). However, additional mid-infrared observations, repeated frequently over the coming decade, will be needed to determine whether significant temperature or chemical changes actually occur on Uranus.
#### 3.3.4 Neptune Variability
Despite its supremely long seasonal timescales (165-year orbit) and great distance from the Sun, Neptune exhibits remarkable variability at mid-infrared wavelengths. As with Uranus, Neptune's temperatures were first mapped by Voyager-IRIS, and comparisons with subsequent ground-based imaging have shown that the upper-tropospheric temperatures are largely invariant in time within uncertainties [72; 74]. The possible exception is at the south pole, which shows possible variability in the troposphere with no obvious pattern [72; 74], though subject to large uncertainties. However, unlike Uranus, Neptune's stratosphere clearly exhibits considerable variability.
A recent analysis of all mid-infrared observations of Neptune existing prior to 2020 has revealed an overall decline in mid-infrared radiances since reliable imaging began in 2003 [74] (see Figure 19). Combined with spectral data sensitive to atmospheric temperatures via the \(\sim\)17.03-\(\mu\)m H\({}_{2}\) S(1) quadrupole emission, these observations indicated that Neptune's disk-integrated temperatures dropped by roughly 8 K in the lower stratosphere [74]. These changes are unexpected, since radiative-seasonal models predicted that temperatures should rise in Neptune's southern hemisphere in early summer [70; 211].
While global temperatures dropped, images sensitive to 12-\(\upmu\)m emission from stratospheric ethane showed a dramatic surge in radiance from Neptune's south pole between 2018 and 2020, again attributed to a rise in temperatures (\(\sim\)13 K) inferred from nearly contemporaneous H\({}_{2}\) S(1) spectra [74]. The warming of this circumpolar vortex was accompanied by a drop in temperatures at nearly all other latitudes (see Figure 20). Radiative and chemical models have predicted a gradual brightening of the south pole following the southern summer solstice in 2005 [70; 211], but such rapid change is unexpected.
The cause of these stratospheric temperature changes is currently unknown. Roman et al. [74] speculated that it may be related to seasonal changes in chemistry [318], which alters the cooling rates, but explanations involving solar cycle variations, stratospheric oscillations, and meteorological activity cannot be discounted. With such dramatic and unexpected changes in recent years, regular observations over the next decade will be crucial for understanding the nature and trends shaping the stratospheric variability of Neptune.
Figure 19: A sequence of mid-infrared images showing the variation of Neptune at roughly 12 \(\upmu\)m in different years, along with disk geometry and a Hubble Space Telescope (HST) visible image for comparison. The mid-infrared images were taken from VLT-VISIR (2006, 2009, 2018) and Subaru-COMICS (2020) [74]. The sequence shows a global decline in radiances accompanied by dramatic warming at Neptune's south pole between 2018 and 2020. The HST image was taken in 2020, three weeks after the Subaru-COMICS image. (HST image credit: NASA, ESA, STScI, M.H. Wong (University of California, Berkeley), and L.A. Sromovsky and P.M. Fry (University of Wisconsin-Madison).)
Figure 20: Neptune's temperatures versus planetocentric latitude from ground-based images dating from different years, adapted from Roman et al. [74]. Shaded envelopes indicate uncertainties. Temperatures are shown at 0.5 mbar (**left**) and 100 mbar (**right**), corresponding to peaks in the contribution from stratospheric ethane (12.2 \(\upmu\)m) and tropospheric hydrogen CIA (\(\sim\)18–25 \(\upmu\)m). The stratospheric temperatures vary in time, with brightening at the pole in recent years. Tropospheric temperatures are largely invariant, except at the south pole. Data are from Keck-LWS (2003), Gemini-N-Michelle (2005), VLT-VISIR (2006–2018), and Subaru-COMICS (2020).
## 4 Conclusions
From more than a century of remote sensing at mid-IR wavelengths, a remarkably detailed picture of the temperature structure, chemistry, and dynamics of the giant planets has emerged. Many questions and challenges remain, particularly regarding how and why the planets change over time.
Much of the knowledge written in this review will soon be rewritten. The upcoming Solar System observations of the giant planets by JWST-MIRI have the potential to greatly surpass existing observations and revise our knowledge of the atmospheres of the giant planets, particularly regarding the Ice Giants [1; 318]. Nonetheless, this brief look into the history and results of mid-infrared remote sensing can hopefully continue to provide insight and inspiration, if simply by considering how far the field has come.
During the preparation of this manuscript, the author was supported by a European Research Council Consolidator Grant, under the European Union's Horizon 2020 research and innovation program, grant number 723890.
I wish to thank Leigh Fletcher and Imke de Pater for offering the opportunity, support, and patience necessary for completing this review. I also wish to acknowledge Arrate Antuñano for her readiness to assist with her expertise on the Gas Giant atmospheres.
In memory of Peter Jay Gierasch (1940-2023), an insightful scientist and generous advisor, whose many enduring contributions have uniquely shaped the field of planetary atmospheres, as demonstrated throughout this review.
No new data were created or analyzed in this study. Data sharing is not applicable to this article. The data presented in this study are available from the original sources, as referenced.
The author declares no conflict of interest.
|
2308.02163 | BlockChain I/O: Enabling Cross-Chain Commerce | Blockchain technology enables secure token
marketplaces, and recent advances in this field provide other desirable
properties such as efficiency, privacy, and price stability. However, these
properties do not always generalize to a setting across multiple independent
blockchains. Despite the growing number of existing blockchain platforms, there
is a lack of an overarching framework whose components provide all of the
necessary properties for practical cross-chain commerce. We present BlockChain
I/O to provide such a framework. BlockChain I/O introduces entities called
cross-chain services to relay information between different blockchains. The
proposed design ensures that cross-chain services cannot violate transaction
safety, and they are furthermore disincentivized from other types of
misbehavior through an audit system. BlockChain I/O uses native stablecoins to
mitigate price fluctuations, and a decentralized ID system to allow users to
prove aspects of their identity without violating privacy. After presenting the
core architecture of BlockChain I/O, we demonstrate how to use it to implement
a cross-chain marketplace and discuss how its desirable properties continue to
hold in the end-to-end system. Finally, we use experimental evaluations to
demonstrate BlockChain I/O's practical performance. | Anwitaman Datta, Daniël Reijsbergen, Jingchi Zhang, Suman Majumder | 2023-08-04T06:51:50Z | http://arxiv.org/abs/2308.02163v3 | # BlockChain I/O: Enabling Cross-Chain Commerce
###### Abstract.
By enabling users to safely transfer digital tokens without trusted intermediaries, blockchains have fueled the rise of _Decentralized Finance_ (DeFi). However, the current DeFi ecosystem consists of multiple _independent_ blockchains, and cross-chain token trading is a challenge because the desirable properties of individual blockchains do not always generalize to a multi-chain setting. Recently, advances have been made in the generalization of these properties, but there is still a lack of an overarching framework that provides all of the properties required for a practical cross-chain commerce platform: transaction atomicity, privacy-preserving digital identities, stablecoin support, and general applicability.
In this paper, we present BlockChain I/O to provide such a framework. BlockChain I/O uses entities called _cross-chain services_ to relay information between different chains. Cross-chain services cannot violate transaction atomicity, and are disincentivized from other types of misbehavior - i.e., causing delays or misrepresenting information - through an audit system. BlockChain I/O uses stablecoins to mitigate price fluctuations, and a digital ID system to allow users to prove aspects of their identity without violating privacy. After presenting the core architecture of BlockChain I/O, we demonstrate how to use it to implement a cross-chain marketplace. Finally, we use an experimental evaluation to demonstrate BlockChain I/O's practical performance.
blockchains, interoperability, decentralized finance
Entities called _cross-chain services_ relay relevant information, e.g., bids and token prices, between nodes. Once the outline of a deal has become clear, e.g., after an auction has terminated, they send a tentative exchange of tokens to the involved blockchains, and the users lock their tokens in escrow. Users then _vote_ to commit if the exchange of tokens is agreeable, and abort otherwise. As such, BlockChain I/O facilitates _cross-chain deals_ (Kumar et al., 2017), which have proven security guarantees, i.e., safety and liveness. If cross-chain services misbehave, e.g., if they go offline while processing a deal, or if they misrepresent the outcome of an auction, then this is provable to nodes who view the relevant blockchains. In BlockChain I/O, we utilize a separate class of nodes called _auditors_ to detect this misbehavior and relay it to a governance layer - the resulting reputation damage provides an incentive for cross-chain services to behave honestly.
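To illustrate the all-or-nothing voting step, the following is a minimal Python sketch; the names `Deal` and `Escrow` are hypothetical and do not correspond to BlockChain I/O's actual contracts. Escrowed tokens are released only if every party votes to commit; any abort, or a missing vote at the deadline, refunds all escrows.

```python
# Minimal sketch (hypothetical names, not BlockChain I/O's actual API) of the
# all-or-nothing commit vote behind a cross-chain deal.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Escrow:
    owner: str
    token: str
    amount: int
    released_to: Optional[str] = None  # set when the deal settles

@dataclass
class Deal:
    parties: list
    escrows: list
    votes: dict = field(default_factory=dict)

    def vote(self, party, commit):
        assert party in self.parties, "only deal parties may vote"
        self.votes[party] = commit

    def settle(self):
        # Safety: a missing vote at the deadline counts as an abort.
        if all(self.votes.get(p, False) for p in self.parties):
            for e in self.escrows:
                e.released_to = "counterparty"  # placeholder for the swap
            return "COMMITTED"
        for e in self.escrows:
            e.released_to = e.owner  # refund
        return "ABORTED"

deal = Deal(parties=["alice", "bob"],
            escrows=[Escrow("alice", "ETH", 1), Escrow("bob", "ALGO", 500)])
deal.vote("alice", True)
deal.vote("bob", True)
print(deal.settle())  # -> COMMITTED
```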
**Decentralized Identities.** A second desirable property for cross-chain commerce is the ability to assure users that the entities with whom they interact have certain attributes, e.g., that they are real people or licensed companies, that they have a demonstrable track record in their field, or that they have the correct age or country of residence. Challenges include the multitude of different ledgers that provide this information (e.g., by countries or corporations) and ensuring user privacy through pseudonymity. In BlockChain I/O, we leverage Hyperledger AnonCreds (Kumar et al., 2017) for this purpose.
**Native Stablecoin Support**. When executing cross-chain transactions, there are several practical bottlenecks that involve economic rather than technical considerations, e.g., i) the tokens involved in deals may lose value, ii) parties may abandon trades leading to opportunity costs for others, and iii) users may be reluctant to receive payments in less popular, volatile currencies, even when it is technically possible. As such, a third desirable property is support for a currency which can be naturally used across chains, and which provides stability against price volatility or catastrophic failure of smaller blockchains and associated cryptocurrencies. In this paper, we explore how a cross-chain stablecoin based on recent work, CroCoDai(Talal et al., 2017), can be implemented for cross-chain commerce and integrated with our core interoperability module, PieChain.
**Generality.** Although there are already many existing blockchain platforms, they often impose bottlenecks in the form of high cost, low performance, etc. As such, it can occasionally be advantageous to deploy a new, purpose-specific ledger. The only requirements for integrating a new blockchain in BlockChain I/O is for the governance nodes and at least one cross-chain service to run a (light) client for the new blockchain, and for BlockChain I/O's core contracts to be translated to the new blockchain's virtual machine code. If blockchains share the same virtual machine design, e.g., the Ethereum Virtual Machine (EVM), then the smart contracts can be re-used - furthermore, the construction of the commit vote in PieChain is more efficient if the blockchains share a built-in signature verification scheme.
Finally, to validate the implementation of our ideas and the viability of a versatile platform for cross-chain commerce, we implement a decentralized marketplace that allows users to create and bid on token listings, both as a proof-of-concept and to drive our performance benchmark experiments.
In summary, our contributions are as follows.
* We present BlockChain I/O, a framework for cross-chain commerce that has the desirable properties of atomic cross-chain transactions, decentralized identities that preserve privacy, native stablecoin support, and generality as discussed above.
* We present an implementation of a decentralized marketplace that is built using BlockChain I/O, and provide the results of benchmark experiments to show that it has practical performance.
The rest of this paper is organized as follows. Section 2 presents the background and related work. Section 3 discusses two use cases for cross-chain commerce that require BlockChain I/O's properties. Section 4 presents the core architecture of BlockChain I/O, including the components and system requirements, while Section 5 presents the digital ID component. Section 6 presents a decentralized marketplace built on top of BlockChain I/O. Section 7 presents the experimental results and Section 8 concludes the paper.
## 2. Background & Related Work
In this section, we first discuss existing cross-chain mechanisms and related work on digital identities. Next, we discuss related work on e-commerce and reputation systems, which we need for the online marketplace of Section 6.
### Blockchains & Cross-chain systems
Blockchains are decentralized, append-only ledgers that can be used to track the ownership of digital tokens. Each individual blockchain is an ordered list of _transactions_ that are grouped into blocks, and each block contains a reference to a previous block, forming a chain. A transaction may represent a change of tokens, or a call to a _smart contract_, which is a software program on the blockchain. Smart contracts can be used to implement the logic of commerce-related concepts such as auctions and listings. By executing the full list of transactions in the blockchain, a node obtains the global _state_, i.e., the current ownership of all digital tokens and the internal states of all smart contracts. Blockchain transactions are atomic by design, i.e., if a single transaction consists of multiple steps that modify the global state, then either all of the steps are committed or all are aborted. However, atomicity cannot be guaranteed by default in _cross-chain systems_ where transactions may involve steps on different blockchains.
Figure 1. The BlockChain I/O stack for a versatile cross-chain platform with an overlying open marketplace: This paper primarily focuses on the modules highlighted in blue.
Existing cross-chain solutions can be typically categorized as sidechains, relays, notary schemes, or ledgers of ledgers (Brandt et al., 2016). A _sidechain_ is a blockchain that interacts with another (typically primary) blockchain as an extension, aiming to improve its scalability or interoperability. Major examples of sidechains in the context of Bitcoin include RSK (Krishnan et al., 2017) and the Liquid Network (Tran et al., 2017). A _notary scheme_ is a system where an entity initiates a transaction on one blockchain in response to a specific event occurring on another blockchain - one example is the PieChain framework that we use in the core architecture of Section 4. Similarly, a _relay_ refers to a mechanism where a designated entity keeps track of events or transactions on one blockchain and then relays this information to another blockchain. One of the most popular relay solutions, BTC Relay,3 was released in 2016 by the Ethereum Foundation. It is a bridge between the Bitcoin blockchain and Ethereum smart contracts. Since Bitcoin block headers are stored in an Ethereum smart contract, BTC Relay is able to securely verify Bitcoin transactions without any intermediaries. Finally, a _'ledger of ledgers'_ system is one where a central blockchain is connected to multiple other blockchains (known as sidechains or parachains). These blockchains collectively form an interconnected ecosystem. Polkadot (Polkadot, 2017) exemplifies such a system. It relies on an underlying relay chain for security, which can be used by other parachains (parallel chains), which could in principle run distinct protocols, inducing in effect a logical star topology with the relay chain in the center. Thus Polkadot achieves not only sharding to improve scalability, but also provides support for heterogeneity, and given that all the parachains rely on the same relay chain, the parachains can natively interoperate. However, this does not immediately help in solving the larger problem of facilitating arbitrary existing blockchain pairs which do not share Polkadot's relay chain, to interoperate among themselves. Similar to Polkadot, Cosmos (Krishnan et al., 2017) also uses a central 'hub' which ensures governance at a global level, while supporting parallel chains called 'zones'.
Footnote 3: [http://btcrelay.org/](http://btcrelay.org/)
The idea of disentangling a blockchain's security (in effect, its consensus mechanism) from its operating environment is extended in 1DLT (Brandt et al., 2016), which uses a 'Consensus as a Service' (CaaS) abstraction. In contrast to Polkadot's unique relay chain, 1DLT harnesses multiple public blockchains such as Algorand (Brandt et al., 2016) and Hedera (Hedera, 2016) for the underlying consensus-dependent security guarantees, deploying EVM-based blockchains that enjoy the versatile programmability of EVMs while evading Ethereum's high cost and low throughput. Though the current 1DLT (Brandt et al., 2016) implementation only supports EVMs, the CaaS abstraction can in principle be extended to support other environments, similar to Polkadot. While the CaaS module in 1DLT itself relies on multi-chain interaction, neither CaaS nor 1DLT addresses the general problem of interoperation among multiple blockchains. However, such on-demand deployment of DLTs can be handy to spin up a new ledger for any purpose and readily integrate it with BlockChain I/O's open marketplace.
### Digital Identities
A _digital identity_ connects an individual's _attributes_, e.g., her name, age, location, or reputation, to a digital presence such as an online account or public key. Different methods exist for storing and sharing digital identities - for example, they can be provided by a corporation (e.g., Google), by the individual herself, or by a public ledger. Digital identity systems that are decentralized - i.e., not maintained by a single entity - are also called _Self-Sovereign Identity_ (SSI) systems (Gilman et al., 2017). Individual SSI data entries that contain attribute information are called _Decentralized Identifiers_ (DIDs), and a database that contains DIDs is commonly called a _Verifiable Data Registry_ (VDR). One prominent example of an SSI is the Sovrin Network (Sovrin et al., 2017), which uses a public blockchain as a VDR. However, a major challenge in SSI systems is establishing trust between identity issuers and validators (Gilman et al., 2017): the validator must decide whether a DID and its attribute information come from a trusted source. In Hyperledger AnonCreds (Hedera et al., 2017), this challenge is addressed by assigning the creation and verification of digital identities to a variety of user types, particularly _holders_ who have digital attribute information, _issuers_ who issue DIDs, and _verifiers_ who verify DIDs. AnonCreds specifies a set of protocols for zero-knowledge proofs and schema definitions that allow any consortium of users to run an SSI system on a permissioned blockchain, e.g., Hyperledger Indy, where the consortium members have write access and arbitrary users can have read access. AnonCreds is inherently decentralized - subject to acceptance of participants within an ecosystem, arbitrary entities may participate as issuers, in the creation of VDRs, or the creation of DIDs given a VDR. SSI schemes provide privacy through the use of zero-knowledge cryptography to prove attributes from a DID, and accountability because issuers sign the DIDs so that issuers of incorrect DIDs suffer reputation damage.
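For intuition, the toy sketch below mimics the holder/issuer/verifier roles; real AnonCreds credentials rely on zero-knowledge proofs and issuer-specific signing keys, which we replace here with plain hashing and an HMAC purely for illustration.
```
# Toy illustration of the holder/issuer/verifier roles. Real AnonCreds uses
# zero-knowledge proofs; the hashing/HMAC here is a stand-in only.
import hashlib, hmac

ISSUER_KEY = b"issuer-secret"   # stands in for the issuer's signing key

def issue_credential(attributes):
    # Issuer commits to the holder's attributes and signs the commitment.
    digest = hashlib.sha256(repr(sorted(attributes.items())).encode()).hexdigest()
    signature = hmac.new(ISSUER_KEY, digest.encode(), "sha256").hexdigest()
    return {"digest": digest, "signature": signature}

def verify_credential(attributes, credential):
    # Verifier recomputes the commitment and checks the issuer's signature.
    digest = hashlib.sha256(repr(sorted(attributes.items())).encode()).hexdigest()
    expected = hmac.new(ISSUER_KEY, digest.encode(), "sha256").hexdigest()
    return digest == credential["digest"] and hmac.compare_digest(
        expected, credential["signature"])

cred = issue_credential({"name": "Alice", "age": 30})
assert verify_credential({"name": "Alice", "age": 30}, cred)
assert not verify_credential({"name": "Mallory", "age": 30}, cred)
```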
### E-Commerce
E-commerce, i.e., the electronic sale of goods or services, grew rapidly after the emergence of the World Wide Web and digital payments in the 1990s. A prominent example of e-commerce is an _online marketplace_ in which a website is maintained by a dedicated entity (e.g., eBay or Amazon), and a multitude of independent vendors create listings that allow customers to browse and bid on items. Depending on the marketplace, vendors can set a fixed price for each item (e.g., Amazon), or buyers can bid for the items through an _e-auction_ mechanism (e.g., eBay). There is a variety of auction mechanisms, i.e., open-bid increasing-price (English) auctions, open-bid decreasing-price (Dutch) auctions, sealed-bid first-price auctions, and sealed-bid second-price (Vickrey) auctions (Gilman et al., 2017). Recent advances in multi-party computation and zero-knowledge cryptography have enabled e-auction approaches that are both privacy-preserving and verifiable (Brandt et al., 2016; Brandt et al., 2016), thus enabling e-auctions on public blockchains (Gilman et al., 2017).
### Reputation Systems
One critical factor that determines the success of e-commerce platforms is _trust_(Brandt et al., 2016). Vendors can establish trust through repeated interactions with buyers, generating (if successful) positive feedback. In an online marketplace, vendor reputation metrics can be computed automatically from feedback and displayed alongside
listings. For example, on eBay, the percentage of positive feedback is displayed on each vendor's account page. Privacy is an important aspect of reputation systems: if user identities are known, then users may avoid giving negative feedback out of fear of retaliation - however, if users are fully anonymous, then this may allow vendors to inflate their reputation (or damage their competitors') through dummy accounts - a so-called _Sybil attack_.
In (Kafka et al., 2017), two main categories of privacy-preserving reputation systems are identified: i) those in which user identities are hidden and feedback is visible, and ii) those in which user identities are visible and feedback is hidden. Recent advances in reputation systems include a blockchain-based e-commerce platform in which buy orders are pooled and sellers compete to fill the order (Kafka et al., 2017): this raises the cost of Sybil attacks as fake buyer(s) who collude with sellers to boost the seller's rating risk being obligated to purchase a real item if an honest seller wins the auction. Finally, Beaver (Beaver, 2018) is a decentralized anonymous marketplace in which the cost of a Sybil attack can be made explicit.
## 3. Decentralized E-commerce: Use Cases
In this section, we discuss two use cases for a decentralized e-commerce platform and discuss why the components discussed in the introduction provide non-trivial solutions to the challenges.
### Scalping-Resistant Ticket Sales
Ticket scalping refers to the practice where tickets are bought by third parties for the sole purpose of re-selling them at a higher price. Although ticket scalping may increase the efficiency of the sales process (Blek et al., 2016; Li et al., 2017), it is generally regarded as unfair by customers who observe tickets that were previously affordable being sold at prices that are (far) beyond their budget, and by vendors whose potential profits are seized by another entity (Blek et al., 2016). Ticket scalping is non-trivial to avoid in a decentralized marketplace because the entities who make purchases are pseudonymous. For example, a scalper can trivially create a multitude of different accounts to circumvent restrictions on the number of tickets bought per user, and it is impossible by design to determine whether the customer who uses the ticket paid the original price or a higher price at an external marketplace.
To address ticket scalping, we use the existence of dedicated blockchains that contain identity information to link ticket purchases to digital identities. These blockchains are typically different from the ones on which the ticket is sold and/or the payment is made, so this is necessarily a cross-chain challenge. In particular, as part of the function call that initiates the ticket purchase, the customer must also submit a DID. When the ticket is shown at the event, the customer reveals their name and/or any other associated information using an ID card, to match the same as linked in the DID, thwarting large-scale systematic ticket scalping.
### Sybil-Resistant Reputations
As discussed in Section 2.4, reputation systems that allow users to rate their interactions with vendors may enhance their trust in the marketplace. One challenge in a reputation system is that a vendor may create Sybil accounts to boost its reputation or hurt its competitors' through dishonest feedback. Centralized systems can link each customer or vendor account to a credit card, making large-scale Sybil attacks impractical. However, in a fully decentralized anonymous marketplace, such an approach is impossible by design.
To address the challenge of Sybil attacks, we use DIDs to provide a defense mechanism analogous to that of a centralized marketplace. In particular, vendors who list an item can include a reputation metric signed by a cross-chain service, such that feedback is included in the metric only if it was issued by a customer who meets certain attributes, such as inclusion on a ledger maintained by trusted (e.g., government) organizations. Although this does not protect against Sybil attacks completely (e.g., a vendor could still ask family members or friends to give favorable feedback), it emulates the level of protection of centralized systems.
## 4. The Blockchain I/O Framework
In this section, we describe BlockChain I/O's core architecture and its main components. In Section 5, we furthermore discuss how these core modules integrate with a decentralized digital identity system. Figure 2 visualizes the different entities and their interactions, and the different types of smart contracts on each chain.
### System Components
_Blockchains (BCs)_. In our setting, there are multiple independent blockchains, and each of them supports its own set of tokens, including the native token (e.g., Ethereum's ETH token) and user-created tokens (e.g., Ethereum's ERC-20 tokens and NFTs). We assume that all blockchains support smart contracts and use the account model for native token balances.4 Independent blockchains are not designed to interoperate inherently, i.e., a smart contract on one blockchain cannot use information from another blockchain without a cross-chain communication and coordination framework.
Footnote 4: For blockchains that do not support smart contracts and/or are UTXO-based, e.g., Bitcoin, we assume that a _wrapped_ version of its native token exists on a chain that meets our criteria (Blek et al., 2016).
_Cross-Chain Services (CC-SVCs)_. In BlockChain I/O, we use the PieChain framework for communication between the underlying blockchains. In PieChain, information is relayed between blockchains by CC-SVCs, which are entities that use full nodes or light clients to detect _events_ - i.e., interactions with BlockChain I/O smart contracts. Detected events are written to an event log - in PieChain, Apache Kafka is used for this purpose. Users and CC-SVCs can subscribe to events and hence track the interactions with BlockChain I/O contracts across the supported blockchains.
Figure 2. Interaction between the main components of BlockChain I/O's core architecture.
If CC-SVCs or the event log are compromised, then users are not at risk of having tokens stolen or frozen permanently. Although CC-SVCs can delay the conclusion of cross-chain deals, or misrepresent digital identities, they are disincentivized from doing so by _auditors_ (as discussed below). As the same is true for the messaging service, we choose a performance-oriented solution - i.e., Kafka - instead of a security-oriented one such as a private blockchain. The rarity of misbehavior in Ethereum's block proposal market (Kafka, 2017) empirically supports the assumption that relay services are sufficiently disincentivized by reputation damage in practice.
_Stablecoins_. Stablecoins are tokens whose value is pegged to a real-world asset, e.g., the US dollar. The use of stablecoins for cross-chain deals minimizes the influence of token price fluctuations on users' valuation of the involved assets. For example, if a user were to bid on an auction item using bitcoins, then a sudden change of the bitcoin price could cause the user to reconsider whether winning would lead to an acceptable outcome and abort. Although such a change of heart cannot be ruled out entirely as other offline circumstances may change (e.g., the user's valuation of the item), the use of stablecoins mitigates one prominent source of uncertainty about the value of the involved tokens.
To enable stablecoins in BlockChain I/O, we use the design of CroCoDai (CroCoDai, 2018), which relies on a portfolio of cryptoassets from multiple chains, optimized to reduce volatility: customers who need stablecoins can buy them locally or deposit collateral tokens on supported chains. Collateral tokens are stored in dedicated smart contracts called _vaults_, and can be reclaimed at a later time by returning the stablecoins plus some interest. If price changes or interest cause the ratio of the collateral's value to the amount of created stablecoins to become too low, then the collateral can be _liquidated_ through an auction. Price information about collateral tokens is provided to the vaults by price oracles (e.g., Chainlink or Uniswap contracts).
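A minimal sketch of the vault accounting described above; the 150% liquidation ratio, the prices, and all names are illustrative assumptions rather than CroCoDai's actual parameters.
```
# Illustrative vault logic: collateral backs minted stablecoins and is
# liquidated when its value-to-debt ratio falls below a threshold.
LIQUIDATION_RATIO = 1.5   # hypothetical system parameter set by governance

class Vault:
    def __init__(self, collateral_amount, debt):
        self.collateral_amount = collateral_amount  # e.g., ETH locked
        self.debt = debt                            # stablecoins minted

    def is_undercollateralized(self, oracle_price):
        # oracle_price: collateral token price reported by a price oracle
        return self.collateral_amount * oracle_price < self.debt * LIQUIDATION_RATIO

vault = Vault(collateral_amount=1.0, debt=1000.0)   # 1 ETH backing 1000 coins
print(vault.is_undercollateralized(oracle_price=1800.0))  # False: 1800 >= 1500
print(vault.is_undercollateralized(oracle_price=1400.0))  # True: liquidate
```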
Stablecoins can be transferred between chains if approved by the governance layer. The governance layer also decides on changes in the system-level parameters, e.g., the interest rate or liquidation ratio. To receive input from the governance layer, supported blockchains have _relay_ contracts that validate messages from the governance layer, e.g., by validating (group) signatures or a zero-knowledge proof-of-state.
_Digital Identities_. The pseudonyms of each user can be linked to a set of _attributes_ that represent important information about the user's identity, e.g., her name, location, or age. We assume that this information itself is stored (typically in encrypted form) on blockchains, e.g., in an _ID contract_ or using a dedicated data type. In BlockChain I/O, vendors can indicate during the specification of a cross-chain deal which attributes of potential customers must be submitted and what conditions must be met - e.g., to ensure that customers submit (a hash of) their full name to prevent ticket scalping - or they can provide aggregated feedback from users who meet certain attributes to prevent Sybil attacks. To obtain a DID, which must be specific to a request to avoid replay attacks, a holder requests it from an issuer - next, it is validated by a verifier who signs the DID. The DID's signature can then be verified by the contract that uses it. As part of any cross-chain deal, the creator must specify which ID ledgers are trusted, and which CC-SVCs are trusted to act as verifiers. We discuss the design and integration of DIDs in BlockChain I/O in more detail in Section 5.
_Governance Chain_. This is a special blockchain (which can be an existing blockchain) on which nodes who monitor the underlying blockchains are present. The main role of governance chain nodes is to vote on changes to system-level parameters, to approve cross-chain stablecoin transfers, and to verify claims of misbehavior by CC-SVCs.
_Auditors_. Misbehavior/abuse by CC-SVCs is detected by BlockChain I/O users called _auditors_. Since the CC-SVCs' actions are entirely restricted to blockchain actions, misbehavior is provable to entities who have a view of the different blockchains. We identify three main types of misbehavior by CC-SVCs: i) not concluding a cross-chain deal faithfully (e.g., not awarding an auction's item to the highest bidder), ii) causing a cross-chain deal to abort by failing to forward messages, and iii) misrepresenting the reputation or attributes of customers or vendors. Upon detecting misbehavior by a CC-SVC, an auditor can submit a claim of misbehavior to a smart contract on the governance chain. If the claim is invalid, then the auditor loses a small deposit, but if it is valid, the CC-SVC suffers a reputation penalty, which hampers the prospects of the CC-SVC being used in the future.
_Smart Contracts_. As depicted in Figure 2, each blockchain contains five main types of smart contracts: the coin and vault contracts for the stablecoin, the relay contract to receive input from the governance layer, and possibly an IDs contract to store DID information. Finally, a blockchain would also have one or more _app_ contracts to implement applications on top of BlockChain I/O - in Section 6, we give an example of such an application, namely a decentralized marketplace.
### System Requirements
A platform for cross-chain commerce should satisfy the following requirements.
1. **Atomicity**. Users who send tokens as part of a deal also receive all of the tokens agreed in the same deal.
2. **Pseudonymity**. The only information about users that is revealed by the protocol is a link to a persistent pseudonym, and whether the pseudonym meets the attributes required for the trade.
3. **Stablecoin support**. Tokens that customers need to participate in trades are pegged to real-world currencies, to ensure that their exposure to price fluctuations is not higher than in a centralized marketplace.
4. **General Applicability**. The system should be able to support real-world use cases and all existing blockchains through the addition of smart contracts.
Ledgers of ledgers such as Polkadot or Cosmos do not satisfy (4) because they only support their own sidechains or parachains, and can therefore not be integrated with existing blockchains through
the addition of smart contracts. Hyperledger AnonCreds, CroCoDai, and PieChain respectively satisfy (2), (3), and (4), but not the others.
BlockChain I/O satisfies (1) by using the framework of cross-chain deals (Krishnan et al., 2017), which ensures _safety_ and _liveness_ - user tokens cannot be stolen or lost - and therefore the atomicity property defined above. To support (2), we present BlockChain I/O's integration with DID support in Section 5. BlockChain I/O satisfies (3) by supporting stablecoins through its integration with CroCoDai. To demonstrate that BlockChain I/O satisfies (4), we discuss in Section 6 a decentralized marketplace that enables the use cases presented in Section 3. In Section 7, we empirically evaluate a proof-of-concept implementation of this marketplace that incorporates two blockchains that are EVM-compliant (private Ethereum and Quorum) and one that is not (Hyperledger Fabric), and find that its computation times and gas fees are reasonable.
## 5. Identities in BlockChain I/O
### Entities
We leverage the zero-knowledge-proof-based (Krishnan et al., 2017) framework of Hyperledger AnonCreds (Krishnan et al., 2017) for DIDs that represent attribute information. The logical entities involved in the creation and verification of DIDs are as follows.
_Holder_: Any entity which is trying to establish its credentials can be a holder - e.g., in the cross-chain marketplace, this would typically be a vendor or bidder.
_Issuer_: Issuers are entities who are responsible for generating valid schema and credential definitions for holders. In cross-chain applications, they may be any pre-specified trusted organization. Issuers themselves are identified using so-called _issuer identifiers_, which are a form of DIDs. Accommodating arbitrary entities as issuers makes the approach decentralized - in effect, issuers have an analogous role to that of certificate authorities in the Web PKI, and DIDs and issuer identifiers can be seen as the analogues of end-entity and intermediate certificates.
_Verifier_: The verifier communicates with the VDR for each action and with the _Registration Smart Contract_ (RSC) module of PieChain, where the verifier is notified through an event-listening action. The verifier validates the schema identity, credential definition identity, and the DID of the respective holder, and forwards an ECDSA (Elliptic Curve Digital Signature Algorithm)-based signed hashed transaction that indicates validation of the DID, along with VC information, to the RSC for acceptance or rejection of the respective holder.
_Verifiable data registry_ (VDR): Hyperledger AnonCreds is used as a VDR to verify and store the DIDs, schema information, credential definition identity, revocation list for future reference and validation of VCs.
_Registration Smart Contract_ (RSC): This module realizes the integration of AnonCreds with BlockChain I/O and is used to communicate with the credential holders or verifiers. It validates the DID, 160-bit hashed address information and VC of the holder during registration and validates the signed hashed information regarding the holder received from the verifier. Figure 3 depicts the interactions, which, in effect, also require cross-chain communication given that AnonCreds itself is built using the Hyperledger framework.
### DID Creation
For the generation of pseudonym DIDs, the issuer first has to determine the schema and credential definition for the holder (which can be reused for many users/holders when suitable). To generate credentials, the issuer also has to be verified using its issuer identifier. In this regard, both the issuer and verifier first forward their requests to the system pool to obtain their identifiers, verification keys, and roles as Trust Anchor or Trustee respectively. After obtaining a suitable role (Trust Anchor), the issuer requests the AnonCreds VDR to generate the schema of the holder based on some secret credentials provided by the holder using a predefined reusable template. As a response, a schema ID is returned by the AnonCreds VDR. Using this schema ID, the issuer requests the credential definition alongside other information like tag values and type and revocation information for the schema. Next, a DID is generated internally by the verifier. After getting credential definition information from AnonCreds, the issuer forwards it to the holder. Later on, the holder requests verification of the DID against the credential definition. Based on validation of such information, the verifier either forwards the DID and credential definition - in a 32-bit encrypted format called the _canonicalization_ format - to the holder if the schema is valid (after getting confirmation from AnonCreds), or it rejects the request.
### DID Verification
A conversion mechanism is maintained by the verifier to translate information from the canonicalization format to ECDSA-based signed information while communicating with PieChain via the RSC. So, after obtaining the DID and credential definition from the holder as a request to interact with other elements of BlockChain I/O and the cross-chain marketplace, it is first forwarded to the RSC along with a 160-bit hashed address of the holder. It is then further validated by the RSC (in turn communicating with the validator and the VDR) against the information maintained by the AnonCreds schema and VCs. Only if the holder is valid are the respective credentials forwarded along with the DID to the RSC to grant the holder access for further communication with PieChain; otherwise, the request is rejected.
Figure 3. PieChain and AnonCreds integration
## 6. Decentralized Marketplace
In this section, we present a decentralized marketplace built on top of the BlockChain I/O infrastructure described in Sections 4 and 5. We discuss both the core design of the marketplace, as well as the smart contract implementation. We then explain how to implement the two use cases from Section 3, and how auditors detect abuse by CC-SVCs.
### Overview
_Users._ The marketplace has the following types of users.
* _Bidders_ who hold digital (crypto) tokens and who are interested in purchasing listed tokens.
* _Vendors_ who want to sell tokens, and who seek to exchange them for (other) digital tokens.
In addition, the core user types of BlockChain I/O, i.e., CC-SVCs, auditors, and governance token holders, also participate in the marketplace.
_Listings._ A listing represents an intent to sell \(n\) equivalent tokens - these can be same-sized batches of cryptocurrencies, or NFTs that represent identical goods. Each listing belongs to one of the following types (see also Section 2.3): 1) fixed-price listings, 2) open-bid increasing-price auctions, 3) closed-bid first-price auctions, 4) open-bid decreasing-price auctions, and 5) closed-bid second-price auctions. Listings of type 1, 2, and 4 are resolved through three phases: bidding, conclusion, and feedback, whereas listings of type 3 and 5 are resolved through four phases: bidding, revealing, conclusion, and feedback. The actions in the four phases are as follows:
1. _Bidding._ Bidders submit their intent to purchase (fixed-price), their bid (open-bid auction), or their bid's hash (closed-bid auction), and transfer either the full bid or a minimum amount (the abort penalty) for escrow.
2. _Revealing._ Users reveal their bids, and transfer the remaining value of their bid (i.e., their full bid minus the abort penalty) for escrow.
3. _Conclusion._ A final transfer of assets is proposed, after which the parties who transfer tokens vote to commit if agreeable. Upon commitment or abortion, the tokens are transferred or returned to the intended users. If an exchange is aborted because a user neglects to commit or reveal, then this user loses the abort penalty.
4. _Feedback._ If the token represents physical items whose quality cannot be determined unambiguously, customers may provide feedback on the vendor - e.g., to indicate their opinion of the speed of delivery, whether the item matched the description, etc.
_Events._ The following types of events are recorded by the CC-SVCs: (1) _AuctionCreationEvent_, emitted after a new listing has been created, (2) _BiddingAuctionEvent_, emitted after a new bid has been created, and (3) _AuctionEndingEvent_, emitted after a listing has been concluded.
### Smart Contract Specification
Listings are processed through a single _market_ smart contract, which is a type of _app_ contract as depicted in Figure 2. In the following, we first describe the smart contract's core functions, and then discuss how they are integrated with DID support for our use cases in Section 6.3.
_createListing._ Takes as input a start timer, a reveal timer, a conclusion timer, a feedback timer, CC-SVC addresses, a listing type, and an initial/fixed price. If successful, creates an entry for the listing using a hashmap in the _market_ contract.
_bidFixed._ Takes as input a listing ID. If the listing has the 'fixed' type, the current time is between the start and conclusion timers, and the sender has enough stablecoins in her wallet, then a bid is recorded, and an amount of stablecoins equal to the listing's fixed price is transferred to the _market_ contract for escrow.
_bidOpen._ Takes as input a listing ID and bid value. If the listing has the 'open' type, the current time is between the start and conclusion timers, the bid value either exceeds the previous highest bid and the starting price (increasing-price auction) or is the first bid (decreasing-price auction), and the sender has enough stablecoins in her wallet, then a bid is recorded, and an amount of stablecoins equal to the bid value is transferred to the _market_ contract for escrow.
_bidSealed._ Takes as input a listing ID and bid hash. If the listing has the 'sealed' type, the current time is between the start and reveal timers, and the sender has enough stablecoins in her wallet to pay the abort fee, then a tentative bid is recorded, and an amount of stablecoins equal to the abort fee is transferred to the _market_ contract for escrow.
_revealBid._ Takes as input a listing ID and bid value. If the listing has the 'sealed' type, the current time is between the reveal and conclusion timers, the hash of the value equals the hash stored through _bidSealed_, and the sender has enough stablecoins in her wallet to pay the bid value minus the already escrowed abort fee, then a bid is recorded, and an amount of stablecoins equal to the bid value is transferred to the _market_ contract for escrow (the commit-reveal check is sketched after these function descriptions).
_concludeListing._ Takes as input a listing ID and a number of winners. If sent by the vendor, and if the current time is between the conclusion and feedback timers, then the auction is concluded. If the auction has not been concluded before the feedback timer, then it is aborted and all tokens in escrow are returned.
_feedback._ Takes as input a listing ID and a bid ID. If the auction has been concluded and the current time is between the conclusion and feedback timers, then the user's feedback is recorded in a hashmap in the contract.
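The pair _bidSealed_/_revealBid_ implements a standard commit-reveal scheme; the sketch below shows only the hash check, omitting escrow transfers, timers, and access control (all names are illustrative, not the contract's actual storage layout).
```
# Commit-reveal logic behind bidSealed/revealBid (simplified).
import hashlib

sealed_bids = {}   # bidder -> committed hash
revealed = {}      # bidder -> revealed bid value

def bid_sealed(bidder, bid_hash):
    sealed_bids[bidder] = bid_hash           # abort fee would be escrowed here

def reveal_bid(bidder, value, salt):
    # The reveal succeeds only if it matches the earlier commitment.
    digest = hashlib.sha256(f"{value}:{salt}".encode()).hexdigest()
    if sealed_bids.get(bidder) != digest:
        raise ValueError("reveal does not match sealed bid")
    revealed[bidder] = value                 # remaining value escrowed here

commitment = hashlib.sha256(b"250:nonce42").hexdigest()
bid_sealed("carol", commitment)
reveal_bid("carol", 250, "nonce42")
assert revealed["carol"] == 250
```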
### Use Cases
The two use cases of Section 3 can be implemented by combining DIDs with the smart contract discussed in Section 6.2 in the following way. For the first use case (i.e., mitigating ticket scalping), each bidder who calls the _bidFixed, bidOpen, or bidSealed_ function also includes a DID that contains a hash of personal identifying information, e.g., the bidder's full name. When redeeming the ticket, the ticket owner can then reveal that it was indeed her who issued the winning bid. For the second use case (i.e., a reputation system that is resilient against Sybil attacks), a vendor includes an aggregate feedback score (signed by a CC-SVC who acts as a verifier) when creating a new auction.
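For the first use case, the identifying information bound to a bid can be as simple as a salted hash that the customer later opens at the venue; a toy sketch with illustrative names follows.
```
# Ticket use case: bind a bid to a hash of personal identifying information,
# then verify the revealed identity at the venue. All names are illustrative.
import hashlib

tickets = {}   # listing_id -> identity hash bound to the winning bid

def bid_with_did(listing_id, name, salt):
    tickets[listing_id] = hashlib.sha256(f"{name}|{salt}".encode()).hexdigest()

def redeem(listing_id, shown_name, salt):
    # At the event, the customer reveals the preimage from an ID card.
    digest = hashlib.sha256(f"{shown_name}|{salt}".encode()).hexdigest()
    return digest == tickets[listing_id]

bid_with_did("concert-42", "Alice Tan", salt="x9f")
print(redeem("concert-42", "Alice Tan", "x9f"))   # True: original buyer
print(redeem("concert-42", "Scalper", "x9f"))     # False: resold ticket
```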
### Audits
In the marketplace, CC-SVCs are relied on for faithfully concluding an auction through the _concludeListing_ function, and for signing DIDs for personal identifying information and for the feedback aggregates. If they misbehave in any of these roles, then this is detectable by auditors and governance token holders. In particular, if they misrepresent an auction outcome, then a higher bid must exist on a chain than the one that was declared the winner. An auditor can send a proof of the existence of this bid to the governance chain, upon which a reputation penalty can be administered to the CC-SVC. Similarly, if a CC-SVC has misrepresented a DID or reputation aggregate, then this can be demonstrated to the governance chain. This creates necessary checks and balances to ensure that end-users can identify and isolate misbehaving CC-SVCs, thus creating a mechanism to satisfy the implicit trust assumptions in PieChain's design.
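For the first type of misbehavior, the auditor's core check reduces to comparing the declared winner against the highest bid visible across all chains; the sketch below uses illustrative data structures.
```
# Auditor check: the declared winner must hold the highest bid across chains.
def find_misbehavior(bids_per_chain, declared_winner):
    # bids_per_chain: {chain: [(bidder, value), ...]}; returns evidence or None
    all_bids = [b for bids in bids_per_chain.values() for b in bids]
    highest = max(all_bids, key=lambda b: b[1])
    if highest[0] != declared_winner:
        return highest   # proof: a higher bid than the declared winner's exists
    return None

bids = {"ethereum": [("alice", 120)], "quorum": [("bob", 150)]}
print(find_misbehavior(bids, declared_winner="alice"))  # ('bob', 150): report
print(find_misbehavior(bids, declared_winner="bob"))    # None: faithful outcome
```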
## 7. Experiments
In this section, we present our experimental results, demonstrating that the marketplace built using BlockChain I/O (as discussed in Section 6) has practical performance. In particular, we evaluate the time and gas costs of the various steps. We focus on the elements that are common between all use cases - i.e., creating and processing a listing on multiple blockchains - for an open-bid increasing auction. The performance of the other listing types is similar (e.g., the time and gas costs of the reveal phase in a closed-bid auction resemble those in the bidding phase of an open-bid auction).
### Experimental Set-Up
We use an iMac with an i9-10900k CPU to simulate an experimental environment with local test networks for Ethereum, Quorum, and Fabric. For the stablecoin, we use a modified version of the Dai stablecoin for coin transfers based on the coin and relay contracts from (Dai et al., 2018). In our experiments, the different agent types all run on the same machine, so our results exclude time costs due to network latency.
### Experimental Steps
We consider the following steps in the processing of a listing.
**Create Listing.** (1) The auctioneer calls the _addAsset_ function of an **Asset** contract that is deployed on the Fabric network, which is detected by the CC-SVC. (2) The CC-SVC deploys **Auction** contracts on the Ethereum and Quorum platforms, and publishes an _AuctionCreationEvent_ to Kafka. We create a new contract for each asset auction to obtain a worst-case estimate for the cost of creating a new listing.
**Issue Bid.** (3) A bidder calls _bidOpen_ on one of the deployed **Auction** contracts to issue a bid and submit stablecoins for escrow. (4) The CC-SVC detects a bid and successful coin transfer, and publishes a _BiddingAuctionEvent_ on Kafka.
**Conclude Listing.** (5) After the end of the conclusion timer, the auctioneer may conclude the listing by invoking a _closeAuction_ call to the **Asset** contract on Fabric. Alternatively, this action can be automatically triggered when the auction reaches its pre-determined conclusion time. (6) Upon detection by the CC-SVC, it sends the listing's outcome to the **Auction** contract on each chain (i.e., whether the highest bid on that chain has won or not). This changes the state of these contracts to _ending_ - which means that they await further action from the user - and logs this activity as an _AuctionEndingEvent_. (The gas cost has a minor dependence on whether the winner is on the chain or not - the table depicts the costs for the chain with the winner.)
**Commit/Abort.** (8) The winning bidder either commits or aborts the auction result, which is detected and published on Kafka by the CC-SVC. (9) The CC-SVC forwards the winner's response to the **Asset** contract on Fabric, then either returns or collects the coins transferred by the user in the previous stage. (10) When the related event **AuctionResponse** has been posted by one CC-SVC and received by Kafka, the CC-SVC transfers the asset from the auctioneer to the winner.
**Feedback.** (11) If the auction result has been committed, then the winner can eventually submit her feedback about the purchased asset to the **Auction** contract on her chain.
### Experimental Results
An overview of the costs of steps 1-11 is displayed in Table 1. For each step, the first three data columns indicate which entities perform an action and what this entails (e.g., a function call), and the last three data columns indicate the time costs (_italic_) and the gas costs for the coin blockchains (**bold**). Quorum's time costs are typically lower than Ethereum's because it has a higher block frequency by default. We observe reasonable time costs: each step takes fewer than 5 seconds, whereas an auction would typically run for more than a day. Furthermore, a bid costs \(\approx\)130000 gas, which at current (early August 2023) gas prices (\(\approx\)16 GWei) would cost $3.75 USD on Ethereum's main chain, but typically (much) less on other EVM-compliant chains (enabling bids on less-congested chains is a core motivation for BlockChain I/O).
Table 1. Time costs in seconds (_italic_) and gas costs (**bold**) of the various stages of processing a listing.
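The quoted bid fee follows from the usual gas arithmetic, reproduced below; the ETH/USD rate of about $1,800 is an assumption back-solved from the $3.75 figure rather than a value stated in the text.
```
# Fee of a bid: gas_used * gas_price (in ETH) * ETH/USD exchange rate.
gas_used = 130_000
gas_price_gwei = 16
eth_usd = 1_800                    # assumed spot price implied by the $3.75 figure

fee_eth = gas_used * gas_price_gwei * 1e-9   # 1 gwei = 1e-9 ETH
print(round(fee_eth * eth_usd, 2))           # -> 3.74, matching ~$3.75 USD
```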
## 8. Conclusions
We have presented BlockChain I/O, a framework for cross-chain commerce. It satisfies the essential properties of transaction atomicity, pseudonymity, stablecoin support, and general applicability. We have demonstrated its versatility by creating a decentralized marketplace built on top of BlockChain I/O, hosting a variety of application use cases. We validated our proof-of-concept implementation for functional correctness and benchmarked its overheads to demonstrate the practicality of the BlockChain I/O framework.
## Acknowledgement
This work was supported by Ministry of Education (MOE) Singapore's Tier 2 Grant Award No. MOE-T2EP20120-0003.
|
2310.17042 | StochGradAdam: Accelerating Neural Networks Training with Stochastic
Gradient Sampling | In the rapidly advancing domain of deep learning optimization, this paper
unveils the StochGradAdam optimizer, a novel adaptation of the well-regarded
Adam algorithm. Central to StochGradAdam is its gradient sampling technique.
This method not only ensures stable convergence but also leverages the
advantages of selective gradient consideration, fostering robust training by
potentially mitigating the effects of noisy or outlier data and enhancing the
exploration of the loss landscape for more dependable convergence. In both
image classification and segmentation tasks, StochGradAdam has demonstrated
superior performance compared to the traditional Adam optimizer. By judiciously
sampling a subset of gradients at each iteration, the optimizer is optimized
for managing intricate models. The paper provides a comprehensive exploration
of StochGradAdam's methodology, from its mathematical foundations to bias
correction strategies, heralding a promising advancement in deep learning
training techniques. | Juyoung Yun | 2023-10-25T22:45:31Z | http://arxiv.org/abs/2310.17042v2 | # StochGradAdam: Accelerating Neural Networks Training with Stochastic Gradient Sampling
###### Abstract
In the rapidly advancing domain of deep learning optimization, this paper unveils the StochGradAdam optimizer, a novel adaptation of the well-regarded Adam algorithm. Central to StochGradAdam is its gradient sampling technique. This method not only ensures stable convergence but also leverages the advantages of selective gradient consideration, fostering robust training by potentially mitigating the effects of noisy or outlier data and enhancing the exploration of the loss landscape for more dependable convergence. In both image classification and segmentation tasks, StochGradAdam has demonstrated superior performance compared to the traditional Adam optimizer. By judiciously sampling a subset of gradients at each iteration, the optimizer is optimized for managing intricate models. The paper provides a comprehensive exploration of StochGradAdam's methodology, from its mathematical foundations to bias correction strategies, heralding a promising advancement in deep learning training techniques.
## 1 Introduction
Deep learning, with its ability to model complex relationships and process vast amounts of data, has revolutionized various fields from computer vision to natural language processing [11; 20]. The heart of deep learning lies in the optimization algorithms that tune model parameters to minimize loss and increase accuracy [28]. The choice of an optimizer can significantly influence a model's convergence speed, final performance, and overall stability [3]. In this rapidly evolving arena, we continuously strive for more efficient and powerful optimization techniques.
In our pursuit of advancing optimization methodologies, our focus extends beyond merely the architectural intricacies of models. Instead, we place significant emphasis on the mechanisms governing weight updates during the training phase. Renowned optimization algorithms such as Adam [17], RMSProp [36], and Adagrad [9] have traditionally been the cornerstones that have ushered numerous models to achieve exemplary performance. Yet, the evolving landscape of deep learning raises a pertinent question: Is there potential to further enhance these optimization strategies, especially when applied to expansive and intricate neural architectures?
To address this query, we introduce StochGradAdam, our novel optimizer that incorporates gradient sampling as a pivotal technique to enhance the accuracy of models. This gradient sampling method not only assists in optimizing the training process but also contributes significantly to the improvement in the model's generalization on unseen data. In our empirical evaluation, we subject Convolutional Neural Networks (CNNs) such as ResNet [14], VGG [32], and MobileNetV2 [29], as well as the Vision Transformer (ViT) [8], to a comprehensive array of tests.
Our examination extends beyond just test accuracy and loss. We delve deeper, investigating the entropy of the class predictions. Entropy, in this context, measures the uncertainty in the model's
predictions across classes [7]. A model with lower entropy exudes confidence in its predictions, while one with higher entropy reflects greater uncertainty. By harnessing the strength of StochGradAdam with gradient sampling, we have observed a decrease in entropy, indicating a more confident prediction by the models. Through studying entropy, we strive to gain a more nuanced understanding of the model's behavior, shedding light on aspects that might remain obscured when solely relying on traditional metrics [38].
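Prediction entropy as used here is the Shannon entropy of the softmax outputs, averaged over samples; a minimal NumPy sketch (illustrative, not the authors' evaluation code) follows.
```
# Mean entropy of class predictions: low entropy = confident predictions.
import numpy as np

def mean_prediction_entropy(probs, eps=1e-12):
    # probs: (num_samples, num_classes) softmax outputs, rows sum to 1
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

confident = np.array([[0.98, 0.01, 0.01]])
uncertain = np.array([[0.34, 0.33, 0.33]])
print(mean_prediction_entropy(confident))  # ~0.11 nats
print(mean_prediction_entropy(uncertain))  # ~1.10 nats (near log 3)
```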
```
1:Stepsize (Learning rate) \(\alpha\)
2:Decay rates \(\beta_{1},\beta_{2}\in[0,1)\) for the moment estimates
3:Stochastic objective function \(f(\theta)\) with parameters \(\theta\)
4:Initial parameter vector \(\theta_{0}\)
5:Sampling rate \(s\); decay multiplier \(\text{decay}\)
6:Initialize \(m\) and \(v\) as zero tensors \(\triangleright\) Moment vectors
7:while\(\theta\) not converged do
8:Get gradient \(g\) with respect to the current parameters \(\theta\)
9:Generate a random mask \(mask\) with values drawn uniformly from \([0,1]\)
10:\(grad\_mask\gets mask<s\)\(\triangleright\) Mask gradient based on sampling rate
11:\(grad\_sampled\leftarrow\) where\((grad\_mask,g,0)\)
12:\(\beta_{1\_t}\leftarrow\beta_{1}\times\text{decay}\)
13:\(m_{t}\leftarrow\beta_{1\_t}m+(1-\beta_{1\_t})grad\_sampled\)
14:\(v_{t}\leftarrow\beta_{2}v+(1-\beta_{2})grad\_sampled^{2}\)
15:\(m_{corr\_t}\leftarrow\frac{m_{t}}{1-\beta_{1\_t}^{t+1}}\)
16:\(v_{corr\_t}\leftarrow\frac{v_{t}}{1-\beta_{2\_t}^{t+1}}\)
17:\(\theta\leftarrow\theta-\alpha\frac{m_{corr\_t}}{\sqrt{v_{corr\_t}}+\epsilon}\)
18: Update \(m\) with \(m_{t}\) and \(v\) with \(v_{t}\)
19:endwhile
20:return\(\theta\)\(\triangleright\) Updated parameters
```
**Algorithm 1** StochGradAdam, a modified version of the Adam optimizer with random gradient sampling. See Section 3 for a detailed explanation of our proposed optimizer's algorithm.
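For concreteness, the following NumPy sketch re-implements one update step of Algorithm 1. It is an illustrative re-implementation rather than the authors' released code: it treats the decay multiplier as a constant hyperparameter and applies bias correction with \(\beta_{2}\), matching Section 3.4.
```
# One StochGradAdam update step (Algorithm 1), sketched in NumPy.
import numpy as np

def stochgradadam_step(theta, grad, m, v, t, alpha=0.01,
                       beta1=0.9, beta2=0.999, decay=1.0, s=0.8, eps=1e-8):
    # Gradient sampling: keep each component with probability s, zero the rest.
    mask = np.random.uniform(size=grad.shape) < s
    grad_sampled = np.where(mask, grad, 0.0)

    beta1_t = beta1 * decay
    m = beta1_t * m + (1 - beta1_t) * grad_sampled          # first moment
    v = beta2 * v + (1 - beta2) * grad_sampled**2           # second moment

    m_corr = m / (1 - beta1_t ** (t + 1))                   # bias correction
    v_corr = v / (1 - beta2 ** (t + 1))

    theta = theta - alpha * m_corr / (np.sqrt(v_corr) + eps)
    return theta, m, v

theta = np.zeros(4); m = np.zeros(4); v = np.zeros(4)
for t in range(3):
    grad = 2 * theta - 1.0            # gradient of sum((x - 0.5)^2)
    theta, m, v = stochgradadam_step(theta, grad, m, v, t)
print(theta)                           # moves toward 0.5 on sampled coordinates
```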
Following these findings, we further elucidate the contribution of StochGradAdam. This optimizer results from rigorous research and embodies state-of-the-art algorithmic principles. Preliminary results suggest that StochGradAdam exhibits performance on par with, if not superior to, existing optimization methods. A detailed explanation of the algorithm and its update rule is provided in section 3.
## 2 Related Works
The exploration of gradient-based optimization techniques has been a cornerstone in deep learning research. Over the years, this has given rise to various methodologies that aim to harness the power of gradients in more effective ways. Among them, gradient sampling and techniques with similar goals have piqued the interest of researchers, leading us on a more exhaustive dive into the landscape.
Stochastic Gradient Descent (SGD) serves as the foundation for many gradient-based methods. It updates the model's parameters using only a subset of the entire dataset. This inherent introduction of noise is balanced well with computational efficiency and commendable convergence properties [3].
Delving deeper into gradient behavior, sparse gradient techniques emerged from the observation that many gradient components, though present, contribute little to model updates. Such techniques, therefore, prioritize gradients that exceed certain thresholds, aiming for updates that are sparser yet potentially more informative [39].
Adaptive Sampling techniques, an evolution in gradient sampling, tailor the subset of gradients under consideration by observing the gradients' historical behavior. These techniques operate under the hypothesis that gradients showing significant fluctuations over time might be more pivotal for efficient optimization [42]. Gradient Sampling Methods[6] for Nonsmooth Optimization offer a distinct approach, especially beneficial for tackling nonsmooth problems. They rely on the concept of
randomly sampling gradients, a strategy particularly beneficial when the gradient might be challenging to compute or might not exist at all.
The RSO (random search optimization) technique by Tripathi and Singh[37] offers a unique perspective on optimization. Instead of relying on direct gradient computations, RSO introduces perturbations to weights and evaluates their impact on the loss function. This approach becomes particularly beneficial in situations where computing the gradient is either intricate or entirely infeasible. The essence of RSO underscores the notion that for certain problems, venturing into the vicinity of randomly initialized networks without the need for exact gradient computations can be sufficient for optimization. Contrastingly, the StochGradAdam method amalgamates the principles of Gradient Sampling with the renowned Adam optimizer. While both RSO and Gradient Sampling diverge from traditional gradient-based methods, StochGradAdam's approach is distinct. It capitalizes on gradient information, even if it's sampled, to guide the optimization process. This gradient-centric nature of StochGradAdam allows it to potentially provide more precise weight adjustments. The key differentiation between RSO and StochGradAdam lies in their treatment of gradients: while RSO bypasses them in favor of random perturbations, StochGradAdam harnesses sampled gradients to inform its optimization steps.
In this rich tapestry of gradient-based methodologies, our StochGradAdam emerges with distinction. Instead of merely adopting the usual random sampling approach, our method adapts to the training phase's nuances and the inherent gradient variances, balancing exploration and exploitation. Furthermore, its standout feature lies in the harmonious melding of gradient sampling with momentum intricacies. This symbiosis magnifies the collective strengths of both strategies. Moreover, our technique adeptly navigates the terrain of sparse gradients, ensuring precision with each update. The vast landscape of gradient manipulation methods might seem saturated at first glance. However, the introduction and success of StochGradAdam reiterate that there's always room for innovative, impactful strategies in gradient-based optimizations.
## 3 Methodology: StochGradAdam Optimizer
The StochGradAdam optimizer is an extension of the Adam optimizer [17], incorporating selective gradient sampling to bolster optimization efficacy. Its principal update rule is:
\[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\alpha\frac{m_{\text{corr}_{t}}}{\sqrt{v_{ \text{corr}_{t}}}+\epsilon}, \tag{1}\]
where \(\alpha\) symbolizes the learning rate, \(m_{\text{corr}_{t}}\) is the bias-corrected moving average of the gradients, and \(v_{\text{corr}_{t}}\) is the bias-corrected moving average of the squared gradients. The following sections elaborate on the inner workings of this formula.
### Preliminaries
Let a trainable parameter be denoted as \(\mathbf{w}\in\mathbb{R}^{d}\), where \(\mathbb{R}^{d}\) is the space of all possible parameters. The optimizer maintains two state variables for each such parameter:
\[m(\mathbf{w}) :\text{Moving average of the gradients with respect to }\mathbf{w}.\] \[v(\mathbf{w}) :\text{Moving average of the squared gradients with respect to }\mathbf{w}.\]
Furthermore, the hyperparameters are:
\[\beta_{1},\beta_{2} \in(0,1):\text{Exponential decay rates}.\] \[\text{decay}\in\mathbb{R}^{+}:\text{Decay multiplier for }\beta_{1}.\] \[s\in(0,1):\text{Probability for gradient sampling}.\] \[\epsilon\in\mathbb{R}^{+}:\text{Constant ensuring numerical stability}.\]
### Gradient Sampling
Gradient sampling, in the context of optimization, is a technique where a subset of the gradients is randomly selected during the optimization process. This method not only promotes more robust training by sifting through gradient components, potentially reducing the influence of noisy or outlier
data, but also enhances the exploration of the loss landscape, leading to more reliable convergence. Additionally, our approach to gradient sampling is designed to adaptively choose the sampling rate based on training dynamics, ensuring that more relevant gradients are considered during pivotal training phases.
#### 3.2.1 Stochastic Mask Generation
Given a gradient \(\mathbf{g}\), the objective is to determine whether each component of this gradient should be considered in the update. To this end, a stochastic mask \(\Omega\) is introduced. Each component of \(\Omega\) is independently derived by drawing from the uniform distribution \(\mathcal{U}(0,1)\):
\[\Omega_{i}=\begin{cases}1&\text{if }\mathcal{U}(0,1)<s,\\ 0&\text{otherwise},\end{cases} \tag{2}\]
for \(i=1,2,\ldots,d\), where \(d\) represents the dimensionality of \(\mathbf{g}\). Here, \(\mathcal{U}(0,1)\) denotes a uniform random variable over the interval [0,1], and \(s\) is a predefined threshold dictating the average portion of gradients to be sampled.
#### 3.2.2 Computing the Sampled Gradient
With the stochastic mask in hand, the next objective is to compute the sampled gradient, denoted by \(\phi\). This is accomplished by executing an element-wise multiplication between \(\mathbf{g}\) and \(\Omega\):
\[\phi_{i}=\Omega_{i}\times g_{i}, \tag{3}\]
for \(i=1,2,\ldots,d\). Thus, we get:
\[\phi=\Omega\odot\mathbf{g}, \tag{4}\]
where \(\odot\) signifies element-wise multiplication, ensuring only the components of the gradient flagged by \(\Omega\) influence the sampled gradient.
The underlying idea of gradient sampling is rooted in the belief that not all gradient components are equally informative. By stochastically selecting a subset, one can potentially accelerate the optimization process without sacrificing much in terms of convergence properties. Moreover, this also introduces a form of noise, which can, in some cases, assist in escaping local minima or saddle points in the loss landscape.
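One property implicit in Eqs. (2)-(4) is that the sampled gradient is an unbiased estimator of \(s\,\mathbf{g}\), i.e., \(\mathbb{E}[\phi]=s\,\mathbf{g}\); the short check below (illustrative code, not from the paper) verifies both the sampling fraction and this scaling empirically.
```
# Empirical check that the stochastic mask keeps ~s of the gradient
# components and that E[phi] = s * g (sampling scales the mean gradient by s).
import numpy as np

rng = np.random.default_rng(0)
g = np.full(100_000, 2.0)       # a constant gradient for easy averaging
s = 0.7

omega = (rng.uniform(size=g.shape) < s).astype(float)   # Eq. (2)
phi = omega * g                                         # Eq. (4)

print(omega.mean())   # ~0.7: fraction of sampled components
print(phi.mean())     # ~1.4 = s * g_i: the sampled gradient's expectation
```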
### State Updates
The StochGradAdam optimizer maintains two state variables, \(m\) and \(v\), representing the moving averages of the gradients and their squared values, respectively. Their iterative updates are influenced by the gradient information and specific hyperparameters.
#### 3.3.1 Moving Average of Gradients: \(m\)
The moving average of the gradients, \(m\), is updated through an exponential decay mechanism. At each iteration, a part of the previous moving average merges with the current sampled gradient:
\[m_{t}=\beta_{1}^{t}m+(1-\beta_{1}^{t})\phi, \tag{5}\]
Here, \(\beta_{1}\) signifies the exponential decay rate for the moving average of the gradients [17]. The term \(\beta_{1}^{t}\) showcases the adjusted decay rate at the \(t^{th}\) iteration, defined as:
\[\beta_{1}^{t}=\beta_{1}\times\text{decay}. \tag{6}\]
The function of \(\beta_{1}\) is to balance the memory of past gradients. A value nearing 1 places more emphasis on preceding gradients, yielding a smoother moving average. Conversely, a value nearing 0 focuses on the recent gradients, making the updates more adaptive [28].
#### 3.3.2 Moving Average of Squared Gradients: \(v\)
Similarly, \(v\) captures the moving average of the squared gradients. It's updated as:
\[v_{t}=\beta_{2}^{t}v+(1-\beta_{2}^{t})\phi\odot\phi, \tag{7}\]
Here, \(\beta_{2}\) denotes the exponential decay rate for the moving average of squared gradients [17]. Analogous to \(\beta_{1}\) but for squared values, \(\beta_{2}^{t}\) is the adjusted decay rate at the \(t^{th}\) iteration, defined as:
\[\beta_{2}^{t}=\beta_{2}\times\text{decay}. \tag{8}\]
The element-wise multiplication \(\odot\) ensures that each gradient component's squared value is computed individually [2].
### Bias Correction
Given the nature of moving averages, especially when initialized with zeros, the early estimates of \(m\) and \(v\) can be significantly biased towards zero. To address this, bias correction is employed to adjust these moving averages [17].
#### 3.4.1 Correcting the Bias in \(m\)
The bias-corrected value of \(m\) at the \(t^{th}\) iteration is:
\[m_{\text{corr}_{t}}=\frac{m_{t}}{1-\beta_{1}^{t}}, \tag{9}\]
Here, the term \(1-\beta_{1}^{t}\) serves as a corrective factor to counteract the initial bias [28].
#### 3.4.2 Correcting the Bias in \(v\)
Similarly, for \(v\):
\[v_{\text{corr}_{t}}=\frac{v_{t}}{1-\beta_{2}^{t}}, \tag{10}\]
This correction ensures that the state variables \(m\) and \(v\) provide unbiased estimates of the first and second moments of the gradients, respectively [17].
### Parameter Update
StochGradAdam optimizes model parameters by adapting to both the historical gradient and the statistical properties of the current gradient [17]. The update rule for model parameter \(\mathbf{w}_{t}\) at iteration \(t\) is:
\[\mathbf{w}_{t+1}=\mathbf{w}_{t}-\alpha\frac{m_{\text{corr}_{t}}}{\sqrt{v_{ \text{corr}_{t}}}+\epsilon}, \tag{11}\]
The update can be viewed as an adaptive gradient descent step. By normalizing the gradient using its estimated mean and variance, StochGradAdam effectively scales parameter updates based on their historical and current behavior [2]. StochGradAdam synergizes the principles of stochastic gradient sampling with the Adam optimizer's robustness.
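For illustration, a single update step combining Eqs. (2)-(11) can be sketched as follows. This is a simplified NumPy sketch, not the reference implementation: the default hyperparameter values are assumptions, and the adjusted rates \(\beta_{1}^{t}\) and \(\beta_{2}^{t}\) follow Eqs. (6) and (8) literally with a fixed decay factor.

```python
import numpy as np

def stochgradadam_step(w, g, state, lr=0.01, beta1=0.9, beta2=0.999,
                       s=0.8, decay=1.0, eps=1e-7, rng=None):
    """One StochGradAdam parameter update following Eqs. (2)-(11)."""
    rng = rng or np.random.default_rng()
    b1t, b2t = beta1 * decay, beta2 * decay                 # Eqs. (6) and (8)
    phi = (rng.uniform(size=g.shape) < s) * g               # sampled gradient, Eqs. (2)-(4)
    state["m"] = b1t * state["m"] + (1 - b1t) * phi         # Eq. (5)
    state["v"] = b2t * state["v"] + (1 - b2t) * phi * phi   # Eq. (7)
    m_corr = state["m"] / (1 - b1t)                         # bias correction, Eq. (9)
    v_corr = state["v"] / (1 - b2t)                         # bias correction, Eq. (10)
    return w - lr * m_corr / (np.sqrt(v_corr) + eps)        # Eq. (11)
```

Here `state` is a dictionary holding the arrays `m` and `v`, initialized to zeros of the same shape as `w`.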
## 4 Experimental Results
In the subsequent section, we delve into the empirical evaluation of the methodologies discussed thus far. The primary objective of our experiments is to validate the theoretical assertions and gauge the performance of our proposed techniques in real-world scenarios. Through a series of meticulously designed experiments, we compare our methods against established benchmarks, thereby offering a comprehensive assessment of their efficiency, robustness, and scalability. Each experiment is crafted to answer specific questions, shedding light on the strengths and potential limitations of our approach. By coupling rigorous experimental design with diverse datasets and tasks, we aspire to provide readers with a holistic understanding of the practical implications of our contributions.
### Image Classification
#### 4.1.1 Cifar-10
In image classification, various deep learning architectures are often compared to identify the best model for a particular task. This section focuses on the CIFAR-10 dataset[18], a widely recognized benchmark in the field, to compare the performance of several neural network architectures and optimizers. We train each architecture at a learning rate of 0.01 and implement the models using TensorFlow.
Figure 1 provides a detailed visualization of the test accuracy for various deep learning architectures when trained using different optimizers. Notably, our newly introduced method is depicted in red, distinguishing its performance from the rest.
ResNet-56: Our optimizer manifests a swift uptick in test accuracy during the initial epochs, outpacing other methodologies. Although slight deviations become apparent towards the culmination of training, the overall trend suggests competitive prowess.
ResNet-110: With this architecture in play, our optimizer emulates the trajectory observed in ResNet-56. An impressive ascent during the incipient phases is evident, and despite minor oscillations in later epochs, the optimizer's performance remains robust.
ResNet-156: This architecture reveals an expeditious surge in test accuracy using our optimizer during the early training, subsequently aligning with trends showcased by other optimizer methodologies.
MobileNetV2: Our optimizer commences with a formidable performance, either paralleling or slightly eclipsing the Adam optimizer across epochs. Despite some intermittent fluctuations, the trend underscores its commendable adaptability to the MobileNetV2 architecture.
ViT-8: Employed on the Vision Transformer model, our optimizer portrays steady advancement in test accuracy, showcasing a consistent and relatively stable performance. The original ViT model was optimized using the Adam method. Notably, when trained with RMSProp at a learning rate of 0.01, the ViT model stagnated, registering a mere 10% test accuracy - reminiscent of random decision-making.
VGG-16: In contrast to the trends observed with other architectures, our optimizer, like the other methods, does not reach a high test accuracy on VGG-16. The results closely mirror each other across the optimizer spectrum, leaving our method's performance indistinguishable from the rest. The discernible dip in VGG-16's performance relative to other architectures is hypothesized to stem from gradient diminution; this decrease in gradient magnitude can adversely affect gradient sampling.
Figure 1: Comparison of test accuracy over 300 epochs on CIFAR-10 dataset for various neural network architectures: ResNet-56, ResNet-110, ResNet-156, MobileNetV2, ViT-8, and VGG-16. Three different optimizers - RMSprop (green), Adam (blue), and StochGradAdam (red) - were used to train each model. The graphs showcase how each architecture and optimizer combination performs over the course of training.
It's surmised that the unique characteristics of VGG and ViT, particularly in the context of gradient dynamics, might not be optimally suited for our new optimizer. The diminishing gradients observed might be adversely influencing gradient sampling. A more nuanced exploration of this phenomenon and its implications will be delved into within the "Limitation" section of our research.
In summary, our optimizer showcases promising test accuracy across an array of architectures, especially in the early training phases. Its rapid convergence, coupled with consistent performance, marks it as a strong contender for image classification tasks, specifically on the CIFAR-10 dataset.
### Segmentation
Following our detailed discussion on classification, we ventured into another pivotal domain of deep learning - segmentation. For our experiments, we adopted the Unet-2 architecture[26] integrated with MobileNetV2 [29], ensuring a balance between computational efficiency and performance. Our optimizer of choice for these experiments was StochGradAdam, running at a learning rate of 0.001.
We sourced our dataset from the renowned oxford_iiit_pet [25], which provides a robust set of images for segmentation tasks. The results of our experiments can be observed in Figure 2. A close inspection of the visualizations reveals the strength of our StochGradAdam optimizer. The "Ours Prediction" column demonstrates a more precise and coherent segmentation when compared to other widely-used optimizers like Adam and RMSProp. The boundaries are sharper, and the segmentation masks align more accurately with the true masks, accentuating the prowess of StochGradAdam in driving better model performance.
To conclude, StochGradAdam not only showed promise in the realm of classification but also established its potential in segmentation tasks. Our findings are corroborated by the comparative visual analysis, setting a new benchmark for future optimizers.
Figure 2: Comparative visualization of segmentation results on the oxford_iiit_pet dataset using the Unet-2 architecture integrated with MobileNetV2 across different optimizers, including StochGradAdam, Adam, and RMSProp
## 5 Analysis: Uncertainty Reduction in Prediction Probabilities
Having observed the performance of various optimizers in our experimental results, one might wonder about the intricacies beyond mere accuracy metrics. The efficacy of an optimizer is not solely gauged by its ability to minimize the loss function but also by its influence on the uncertainty of the model's predictions. In the realm of deep learning, where models make probabilistic predictions across multiple classes, it's crucial to delve into how different optimizers shape these probabilities. In this section, we explore the role of optimizers in determining the entropy of prediction probabilities and discuss its implications for uncertainty in predictions.
### Entropy: A Measure of Uncertainty
For a discrete probability distribution \(P=(p_{1},p_{2},...,p_{n})\), entropy, denoted as \(H(P)\), provides a measure of the uncertainty or randomness of the distribution [31]:
\[H(P)=-\sum_{i=1}^{n}p_{i}\log(p_{i}) \tag{12}\]
A distribution that is entirely certain (one probability is 1 and the others are 0) will have an entropy of 0. On the contrary, a uniform distribution, characterized by maximum uncertainty, will possess the highest entropy [7]. To ensure the interpretability and comparability of entropy values, especially when working with distributions over different numbers of outcomes, we resort to normalized entropy [27]:
\[H_{\text{normalized}}(P)=\frac{H(P)}{\log(n)} \tag{13}\]
With this normalization, entropy values are confined between 0 (absolute certainty) and 1 (absolute uncertainty).
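In code, Eqs. (12)-(13) reduce to the following small function (a sketch; the small \(\epsilon\) guarding against \(\log 0\) is our addition):

```python
import numpy as np

def normalized_entropy(p, eps=1e-12):
    """Normalized Shannon entropy of a probability vector p.

    Returns 0 for a one-hot (fully certain) distribution and
    1 for the uniform (maximally uncertain) distribution.
    """
    p = np.asarray(p, dtype=float)
    h = -np.sum(p * np.log(p + eps))   # Eq. (12)
    return h / np.log(len(p))          # Eq. (13)
```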
### The Role of Optimizers
An optimizer's main objective is to update model parameters to minimize the associated loss [11]. As this optimization unfolds, the model starts achieving a better fit to the data, leading to predictions marked by heightened confidence [2]. From an entropy perspective, this translates to:
\[H(P_{\text{initial}})>H(P_{\text{optimized}}) \tag{14}\]
Here, \(P_{\text{initial}}\) stands for the prediction probabilities before the optimization and \(P_{\text{optimized}}\) denotes them post-optimization. The entropy reduction is symbolic of the declining uncertainty in predictions [22].
Different optimizers traverse the parameter space distinctively [28]. While some might directly target the global minima, others might explore the space more expansively. These varied approaches can influence both the rate and magnitude of entropy reduction [17]. To quantitatively gauge the prowess of each optimizer:
\[\Delta H_{O}=H(P_{\text{initial}})-H(P_{\text{final}}) \tag{15}\]
A larger value of \(\Delta H_{O}\) indicates greater proficiency of the optimizer in diminishing prediction uncertainty [40].
### Comparative Visualization: Histograms
For a more intuitive understanding of the influence of different optimizers on prediction uncertainty, we can employ histograms that depict the distribution of normalized entropies subsequent to optimization. This analysis was conducted utilizing TensorFlow, with experiments performed on the ResNet-56[14] architecture applied to the CIFAR-10 dataset[18], all while maintaining a learning rate of 0.01.
Figure 3 showcases contrasting characteristics between RMSProp, Adam, and StochGradAdam (Ours) in their approach to prediction uncertainty throughout the epochs. Initially, each optimizer displays a wide distribution in normalized entropy, reflecting a mixture of prediction confidences.
However, by the 100th epoch, a distinguishing feature of StochGradAdam becomes apparent. Its histogram is markedly skewed towards the lower normalized entropy values, implying that a significant portion of its predictions are made with high confidence. RMSProp and Adam, on the other hand, also depict enhancements, but they do not attain the same degree of prediction certainty as rapidly as StochGradAdam.
As training advances to epochs 200 and 300, StochGradAdam persistently upholds its superior performance, ensuring that its predictions remain substantially certain. Although RMSProp and Adam make strides, their histograms still display a more distributed range of entropy values, suggesting some residual uncertainties in their predictions.
StochGradAdam demonstrates exceptional proficiency in swiftly minimizing prediction uncertainty. This allows for models trained with it to reach confident predictions at a faster rate compared to those trained with RMSProp and Adam. This effectiveness potentially makes StochGradAdam a more favorable option for situations that require rapid convergence to assured predictions.
### Comparative Visualization: PCA
In the pursuit of comprehending the nuances of various optimizers, visual illustrations provide crucial perspectives. This becomes particularly enlightening when evaluating their efficacy on benchmarked datasets and architectures, exemplified by the performance of ResNet-56 on CIFAR-10, trained over 300 epochs with a learning rate of 0.01. Dimensionality reduction, facilitated through the renowned technique of Principal Component Analysis (PCA), serves as a pivotal tool to unveil underlying patterns within data structures [16]. Figure 4 offers a side-by-side visualization comparison, portraying the uncertainty landscape of neural networks based on the assessment of 10,000 test samples post-training.
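A rough sketch of how such a visualization can be produced is given below; the paper does not specify exactly which features are projected, so we assume here that PCA is applied to the softmax probability vectors of the test predictions:

```python
import numpy as np
from sklearn.decomposition import PCA

def entropy_pca(probs):
    """Project prediction probabilities to 2D and colour by uncertainty.

    probs: (N, C) array of softmax outputs over C classes.
    Returns 2D PCA coordinates and the normalized entropy per prediction.
    """
    eps = 1e-12
    h = -np.sum(probs * np.log(probs + eps), axis=1) / np.log(probs.shape[1])
    coords = PCA(n_components=2).fit_transform(probs)
    return coords, h  # e.g. plt.scatter(coords[:, 0], coords[:, 1], c=h)
```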
At the outset, around the 10th epoch, all methods, including RMSProp, Adam, and Ours, display a scatter of data points spread across the triangle's breadth. This suggests that the optimizers are still in the initial stages, exploring the feature space to determine the direction of steepest descent. RMSProp, at this stage, has a denser cluster at the vertex of the triangle with sparse data points radiating outward, indicating that a certain segment of data points has found an optimization path while the majority are still in search [33]. Adam's distribution leans towards the left, hinting at early tendencies to converge to specific regions in the feature space. In contrast, our method shows an evident concentration of data points towards the triangle's base, implying that it has already started discerning the optimal path more effectively than the other two.
Figure 3: Comparison of the distribution of normalized entropy across different optimizers (RMSProp, Adam, and StochGradAdam) at various training epochs (10, 100, 200, and 300). The histograms depict the frequency of a specific range of normalized entropy values, illustrating how the uncertainty in predictions evolves as training progresses.
By the 200th epoch, the differences between the optimizers become more prominent. RMSProp maintains a widespread distribution, indicating progression but still some distance to optimal convergence. Adam displays a tighter congregation of data points around the center, reflecting its ability to reduce prediction uncertainties but not uniformly across all data points [41]. Our method, however, presents a dense clustering at the triangle's lower section, denoting higher prediction confidence. This suggests that our optimizer is not only adept at reducing prediction uncertainties but also exhibits superior dense clustering ability, which is indicative of its robustness and consistency [33].
By the 300th epoch, our method underscores its superiority with data points compactly clustered, indicating minimal prediction uncertainty and reinforcing the idea that dense clustering often translates to empirical success in real-world applications [41]. RMSProp and Adam, although showcasing progression, do not match the level of clustering and confidence exhibited by our method, emphasizing our method's superior performance in swiftly navigating the optimization landscape.
Figure 4: PCA visualization of data processed with different optimizers at distinct training epochs. Each plot captures the distribution of data points in the reduced dimensional space, with color gradients representing normalized entropy.
The visualizations across epochs not only illuminate the progression of each optimizer but also distinctly highlight our method's edge, especially in the latter stages, where it efficiently reduces prediction uncertainty and hints at potentially faster convergence.
### Inference
The extent to which an optimizer can curtail prediction uncertainty carries profound implications:
* **Robustness:** Diminished uncertainty often signals a model's robustness [12]. A model that consistently yields confident predictions across diverse datasets is likely more resilient against adversarial attacks [35] and noisy data [40]. This robustness is especially crucial in real-world applications where input data can be unpredictable [15].
* **Calibration:** Beyond just producing accurate predictions, a well-calibrated model ensures its predicted probabilities closely mirror the actual likelihoods [13]. This is pivotal in probabilistic forecasting and risk assessment scenarios [10]. When a model's predicted confidence levels align with observed outcomes, users and downstream systems can trust and act upon its predictions with more assurance [19].
* **Decision Making:** In applications from medical diagnostics to financial forecasting, the degree of certainty in a prediction often holds as much weight as the prediction itself [21]. For instance, in medical settings, high certainty in a negative diagnosis can potentially prevent unnecessary treatments, leading to better patient outcomes and cost savings [23].
* **Efficient Resource Allocation:** In large-scale applications, models providing certainty in their predictions allow for better resource allocation [5]. For instance, in automated systems, tasks based on high-certainty predictions can be expedited, while those with low-certainty predictions can be flagged for human review [1].
* **Feedback Loop:** Optimizers reducing prediction uncertainty can also aid in creating a constructive feedback loop during training [4]. As the model becomes more certain of its predictions, the feedback it provides for subsequent training iterations is more reliable, leading to a virtuous cycle of consistent improvement [34].
While the predominant evaluation criterion for optimizers has traditionally been their speed and efficiency in reducing the loss function, their capability to mitigate prediction uncertainties is equally vital [30]. Recognizing this can guide researchers and practitioners in making informed choices, ensuring the models they deploy are not only accurate but also reliably confident in their predictions [24].
## 6 Discussion
In our exploration of our optimizer, we've delved deep into its intricacies, nuances, and potential advantages in the realm of neural architectures. The results garnered from various architectures illuminate not just the merits of our approach but also the subtleties of how different neural architectures respond to gradient manipulations. As with any methodological advance, while the advantages are manifold, it is crucial to be cognizant of the boundaries and constraints. Before presenting our conclusive thoughts on the methodology, it is pertinent to discuss the limitations observed during our study. The understanding of these constraints not only provides clarity about the method's scope but also lays the groundwork for potential future improvements.
### Limitations
Our approach to gradient sampling has demonstrated effectiveness in neural architectures like ResNet and MobileNet. These architectures employ residual connections or other mechanisms that help alleviate the vanishing gradient problem, preserving gradient flow throughout the layers. However, deeper architectures, such as VGG, without these mitigating features, have posed challenges. This limitation is likely rooted in the vanishing gradient problem prevalent in deep architectures without such protective mechanisms. We explain the reason below:
#### 6.1.1 Deep Gradient Vanishing
Considering a deep architecture, the error gradient at a given layer \(l\) can be approximated by the recursive relation:
\[\delta^{(l)}=(W^{(l)})^{T}\delta^{(l+1)}\circ f^{\prime(l)}(z^{(l)}) \tag{16}\]
where \(\delta^{(l)}\) is the error gradient for layer \(l\), \(W^{(l)}\) is the weight matrix for layer \(l\), \(f^{\prime(l)}\) is the derivative of the activation function evaluated at \(z^{(l)}\), and \(\circ\) denotes element-wise multiplication.
When layers are deep, and \(|f^{\prime(l)}(z^{(l)})|<1\) for several layers, the product of these derivatives becomes exponentially small, leading to the gradient \(\delta^{(l)}\) becoming negligible for the initial layers.
#### 6.1.2 Quantitative Analysis of Gradient Decay
If \(|f^{\prime(l)}(z^{(l)})|\leq\beta\) with \(0<\beta<1\) for all \(l\) (and assuming the weight matrices do not amplify the signal), then for \(L\) layers:
\[|\delta^{(1)}|\leq\beta^{L}|\delta^{(L)}| \tag{17}\]
If \(L\) is large and \(\beta\) is slightly less than 1, the gradient at the first layer \(|\delta^{(1)}|\) can be vanishingly small compared to the gradient at the last layer \(|\delta^{(L)}|\). For instance, \(\beta=0.9\) over \(L=50\) layers already gives \(\beta^{L}\approx 5\times 10^{-3}\).
#### 6.1.3 Consequences for Gradient Sampling
Our gradient sampling strategy is contingent upon capturing and updating using the most informative gradient components. In the face of gradient vanishing, the magnitudes in earlier layers are dwarfed, reducing their informativeness. When we stochastically sample from a distribution where most gradients have negligible magnitude, the variance of the sampled gradients increases. This increase in variance, in tandem with already minute gradients, hampers the optimization's directionality, leading to inefficient weight updates.
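For reference, the first two moments of a sampled gradient component follow directly from the Bernoulli mask of Eq. (2); this standard computation is added here for clarity:

\[\mathbb{E}[\phi_{i}]=s\,g_{i},\qquad\operatorname{Var}(\phi_{i})=s(1-s)\,g_{i}^{2},\]

so each component of \(\phi\) is a noisy estimate of \(s\,g_{i}\) with standard deviation \(\sqrt{s(1-s)}\,|g_{i}|\). When the \(g_{i}\) are already vanishingly small, this sampling noise is superimposed on an update signal that is itself close to zero.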
While our gradient sampling technique offers promising results in architectures equipped with mechanisms to counteract the vanishing gradient issue, it might not be universally applicable across all deep learning models. Especially for architectures like VGG, which lack built-in gradient preservation mechanisms, more research and adaptations are required to fully leverage the potential of our approach.
### Future Work
There is a pressing need for further research to address the gradient issues observed in certain deep architectures when using the StochGradAdam optimizer. Exploring solutions to mitigate the vanishing gradient problem, especially in architectures without inherent gradient preservation mechanisms, will be crucial. This will not only enhance the optimizer's applicability across a broader range of architectures but also ensure consistent and efficient training outcomes.
## 7 Conclusion
In the realm of deep learning optimization, the introduction of the StochGradAdam optimizer marks a significant stride forward. Central to its design is the innovative gradient sampling technique, which not only ensures stable convergence but also potentially mitigates the effects of noisy or outlier data. This approach fosters robust training and enhances the exploration of the loss landscape, leading to more dependable convergence.
Throughout the empirical evaluations, StochGradAdam consistently demonstrated superior performance in various tasks, from image classification to segmentation. Especially noteworthy is its ability to reduce prediction uncertainty, a facet that goes beyond mere accuracy metrics. This reduction in uncertainty is indicative of the model's robustness and its potential resilience against adversarial attacks and noisy data.
However, like all methodologies, StochGradAdam has its limitations. While it excels in architectures like ResNet and MobileNet, challenges arise in deeper architectures like VGG, which lack certain mitigating features. This limitation is believed to be rooted in the vanishing gradient problem, prevalent in deep architectures without protective mechanisms.
Nevertheless, the successes of StochGradAdam underscore the potential for further innovation in gradient-based optimizations. Its rapid convergence, adaptability across diverse architectures, and ability to reduce prediction uncertainty set a new benchmark for future optimizers in deep learning.
|
2305.13092 | Improved Compositional Generalization by Generating Demonstrations for
Meta-Learning | Meta-learning and few-shot prompting are viable methods to induce certain
types of compositional behaviour. However, these methods can be very sensitive
to the choice of support examples used. Choosing good supports from the
training data for a given test query is already a difficult problem, but in
some cases solving this may not even be enough. We consider a grounded language
learning problem (gSCAN) where good support examples for certain test splits
might not even exist in the training data, or would be infeasible to search
for. We design an agent which instead generates possible supports which are
relevant to the test query and current state of the world, then uses these
supports via meta-learning to solve the test query. We show substantially
improved performance on a previously unsolved compositional behaviour split
without a loss of performance on other splits. Further experiments show that in
this case, searching for relevant demonstrations even with an oracle function
is not sufficient to attain good performance when using meta-learning. | Sam Spilsbury, Alexander Ilin | 2023-05-22T14:58:54Z | http://arxiv.org/abs/2305.13092v1 | # Improved Compositional Generalization by Generating Demonstrations for Meta-Learning
###### Abstract
Meta-learning and few-shot prompting are viable methods to induce certain types of compositional behaviour. However, these methods can be very sensitive to the choice of support examples used. Choosing good supports from the training data for a given test query is already a difficult problem, but in some cases solving this may not even be enough. We consider a grounded language learning problem (gSCAN) where good support examples for certain test splits might not even exist in the training data, or would be infeasible to search for. We design an agent which instead generates possible supports which are relevant to the test query and current state of the world, then uses these supports via meta-learning to solve the test query. We show substantially improved performance on a previously unsolved compositional behaviour split without a loss of performance on other splits. Further experiments show that in this case, searching for relevant demonstrations even with an oracle function is not sufficient to attain good performance when using meta-learning.
## 1 Introduction
We want autonomous agents to have the same compositional understanding of language that humans do (Chomsky, 1957; Tenenbaum, 2018). Without this understanding, the sample complexity required to train them for a wide range of compositions of instructions would be very high (Sodhani et al., 2021; Jang et al., 2021). Naturally, such compositional generalization has received interest from both the language and reinforcement learning communities. "Compositional Generalization" can be divided into several different sub-skills, for example being able to reason about object properties compositionally (Chaplot et al., 2018; Qiu et al., 2021), composing sub-instructions into a sequence (Logeswaran et al., 2022; Min et al., 2022) or generating novel outputs according to novel inputs made up of familiar components (Lake and Baroni, 2018).
A long line of work and many different datasets show that Deep Learning approaches do not always achieve such compositional generalization, especially in the case of novel output sequences. Some solutions to make up for this deficiency include modular architectures, data augmentation, and sparsity. A recent line of work concerns in-context learning. Instead of just providing a query and asking for the target directly, a few examples of query-target pairs, called _supports_, are provided along with the query. In the compositional generalization case, we cannot provide out-of-distribution examples showing the expected behaviour exactly, but as long as the examples are _relevant_ in that they cover the correct elements of the problem space, then compositional generalization is possible. This immediately raises the follow-up question of how such relevant examples should be generated for each query. Most of the prior work in this area takes one of four approaches: searching for near-neighbours to the query input (Pasupat et al., 2021); searching for solutions to subproblems (assuming that the subproblems are known) (Yang et al., 2022), searching for near-neighbours of the initial predicted output (Zemlyanskiy et al., 2022) and chain-of-thought prompting (Wei et al., 2022).
We suggest that in the Grounded Language Learning case, these approaches might not be sufficient to make compositional generalization by in-context learning work. In Grounded Language Learning, the outputs are conditional not only on the _query_, but also on the _state_ of the world. Searching for nearby examples in the input space thus becomes problematic. Using the query alone means that it is unlikely that _state-relevant_ examples will be retrieved. The complexity of the state space is so large that there might not even be other examples in the same state and finding _similar_ states is challenging because small changes in the state can result in
large changes to the target sequence. For example, a change to the position of the target object in an object reaching task, where all other objects stay in the same position, results in a large change to the target sequence, but a large change in the position of other objects results in little-to-no change. Searching for nearby examples in the output space is more promising, but this approach relies on the assumptions explained above. We show in this work that on a well-known Grounded Language Learning benchmark (gSCAN), it is difficult to come up with a purely retrieval-based strategy that works well.
Instead, we suggest another way to approach the problem, which is to generate the supports. We call our method DemoGen. It first generates near neighbours of the query as support inputs, ranks them by their applicability to the current state, then _generates_ the corresponding support outputs conditioned on the current state. The generation and ranking processes are trained using models with access only to the training data. The support inputs and outputs generated by our method are typically congruent with the underlying environment rules. It is possible to generate an out-of-distribution support input, or a support that might not be relevant to the query at hand, or even a support with an incorrect demonstration, but we show that in practice, this does not matter all that much as long as all the relevant supports are generated. Through our experiments, we show that our method is able to unlock compositional generalization on a challenging split of gSCAN, without sacrificing significant amounts of performance in other cases.
## 2 Related work
### Compositional Generalization and Grounded Language Learning
The capability of Deep Learning to perform compositional generalization has been studied extensively. Early experiments showed the challenge of doing so on both RNNs Lake and Baroni (2018) and Transformers Hupkes et al. (2020) and many datasets have been created to demonstrate the problem, both with synthetic and "realistic" natural language data Lake and Baroni (2018); Bastings et al. (2018); Kim and Linzen (2020); Keysers et al. (2020); Li et al. (2021); Yin et al. (2021); Finegan-Dollak et al. (2018). As more datasets become available, so do approaches to handle the compositional generalization problem. Most approaches generally fall into some combination of data augmentation Andreas (2020); Li and McClelland (2022); Chen et al. (2022); Qiu et al. (2022); Akyurek et al. (2021), neural module networks Andreas et al. (2016); Buch et al. (2021); D'Amario et al. (2021); Andreas et al. (2016); Ruis and Lake (2022) and meta-learning Lake (2019); Conklin et al. (2021), discussed in more detail in the next section.
Compositional generalization is also a highly relevant problem in the field of autonomous agents and robotics as well. In that field, the analogy to the compositional production of language is compositional use of _skills_. In robotics there is typically a richer observation space and it has been shown that some level of compositional generalization is possible when it comes to manipulating unseen objects or objects in novel ways Jang et al. (2021); Goyal et al. (2021); Hill et al. (2020); Garg et al. (2022), but the success rates are still below a level that could be considered reliable.
Language grounded agents (often referred to as "Grounded Language Learning" agents) are a natural fit to study this problem, because it is easy to test compositional generalization scenarios by varying the input utterance composition and checking if a corresponding composition of skills is executed by the agent. Many such language grounding environments exist, such as BabyAI Chevalier-Boisvert et al. (2019), ALFRED Shridhar et al. (2020), VizDoom Chaplot et al. (2018) and SILG Zhong et al. (2021). The most relevant environment for studying compositional generalization in Grounded Language Learning is gSCAN Ruis et al. (2020)1, which has a single training data set and 8 out-of-distribution test splits covering various compositional generalization scenarios. Extensions to gSCAN such as ReaSCAN Wu et al. (2021) and Relational Splits (gSCAN-RS) Qiu et al. (2021) test further scenarios particularly in compositional goal identification.
Footnote 1: MIT License github.com/LauraRuis/groundedSCAN
gSCAN is a Minigrid-based environment where an agent receives an instruction with a target object, a verb to apply to that object and an adverb which affects both navigation and the verb. About 360,000 demonstrations of navigating to various objects and performing some task on them with various adverbs are provided as a training set. A _success_ happens when the agent performs the expected sequence of actions exactly. The input and action vocabularies are small and the instructions are constructed using a simple grammar. Typically the instructions follow the form "[verb] a [size] [color] [object] [adverb]", where [size], [color] and [adverb] are sometimes omitted. The task is designed such that a simple deep learning approach like a recurrent neural network could solve it, at least on the training and in-distribution validation sets. More challenging are the eight out-of-distribution test splits, which fall into two categories. The first category, splits B, C, E, F, requires a compositional understanding of the input, for example identifying a "red square" as a goal in split C, or a size-3 object being "small" in relation to other objects in split E. The ReaSCAN and gSCAN-RS extensions have similar requirements in their test splits. The second category, splits D, G, H and I, requires entirely new outputs to be produced at test time. Split D requires navigating to an object that is south-west of the agent, which in practice requires the production of \(\mathtt{LTURN(3)}\)2. Split H requires composing the verb "pull" with the adverb "while spinning", which requires the production of the novel fragment \(\mathtt{LTURN(4)}\)\(\mathtt{PULL}\).
Footnote 2: In this work, where an action or subsequence is repeated \(n\) times, we use the notation (ACT1 ACT2) (\(n\))
Various approaches to gSCAN including graph networks Gao et al. (2020), linguistic-assisted attention Kuo et al. (2021), symbolic reasoning Nye et al. (2021), auxiliary tasks Jiang and Bansal (2021), modular networks Heinze-Deml and Bouchacourt (2020); Ruis and Lake (2022) and data augmentation Setzler et al. (2022); Ruis and Lake (2022) have been proposed. These approaches tend to make some trade-off between performance and generalizability. Transformers have been shown to work well on the first category of splits Qiu et al. (2021) as well as on ReaSCAN and gSCAN-RS Sikarwar et al. (2022), but there is no general approach which works well on the second category. In this work, we aim to show that a meta-learning approach along with a support generation strategy that does not assume too much about the problem is a feasible general approach, at least for problems like the one in Split H.
### In-context and Meta-learning for Compositional Generalization
Meta-learning and in-context learning are promising approaches for compositional generalization in sequence generation tasks. In this paradigm, a few _support inputs_ and corresponding _support outputs_ for a given _query_ sequence are provided and the task is to predict the correct _target_ sequence Lake et al. (2019); Conklin et al. (2021). This has been popularized by the notion of in-context learning in large language models, where a few examples of the input-output pairs as well as a query are given as part of a _prompt_, then the target is predicted autoregressively Brown et al. (2020); Min et al. (2022). In-context learning has also been shown to enable compositional generalization in sequence tasks Lake et al. (2019); Chen et al. (2022); Logeswaran et al. (2020). Modern architectures for in-context learning generally rely on fine-tuning a pre-trained sequence-to-sequence Transformer like T5 Raffel et al. (2020), where both the query and supports are placed in the input sequence and the output sequence is predicted autoregressively. An earlier architecture with a similar idea is Meta Sequence-to-Sequence Learning Lake (2019), referred to in this work as **meta-seq2seq**. In particular, **meta-seq2seq** has been shown to solve compositional generalization tasks on synthetic datasets like SCAN.
### Retrieval Methods for In-Context Learning
In-context learning methods are sensitive to the choice of support sets used. Mitchell et al. (2021) found that selecting supports that were not relevant to the task at hand degraded performance when using **meta-seq2seq** with SCAN. Qiu et al. (2022) also found that retrieving examples that were close in the output space using an oracle function improved meta-learning performance for compositional generalization splits in SMCalFlow-CS. As we show in our experiments, a poor support set selection strategy not only impacts performance on compositional generalization tasks, but also on _in-distribution_ tasks as well, especially for architectures like **meta-seq2seq** where the supports are critical to solving any task instance. In other words, an in-context learning approach with a poorly chosen procedure for selecting supports may be worse on all tasks compared to when no meta-learning is used at all.
Different approaches have been proposed for finding good examples. One approach is to try to "tune" the supports directly, either with a gradient based method Lester et al. (2021); Shin et al. (2020) or by
reinforcement learning (Deng et al., 2022). Such methods are theoretically attractive, but are difficult optimization problems to solve in absence of the test data that we want to tune the prompts for. Other methods try to pick good examples from the training data, for example by using a similarity index (Pasupat et al., 2021), or with a metric that takes into account diversity and local structure coverage (Levy et al., 2022). Zemlyanskiy et al. (2022) generates a possible output candidate for the query input, then searches the training data for similar outputs, but this depends on a good initial generation of the output, in the sense that it should be close in the output space to useful supports. Retrieval based approaches all have the same drawback on a task like gSCAN however, which is that the optimal supports for some test splits simply don't exist in the training data. We provide some analysis of a case where this happens in Section 3.2.
Closer to this work are generative approaches. One approach applied to compositional semantic parsing problems is to decompose the query into sub-problems and generate supports for the sub-problems (assuming that it is possible to generate partial solutions) (Yang et al., 2022). Another emerging approach is chain-of-thought (Wei et al., 2022; Kojima et al., 2022) and least-to-most-prompting (Zhou et al., 2022; Drozdov et al., 2022; Anonymous, 2023). These approaches can get very impressive results on ungrounded compositional generalization benchmarks, but they have their own requirements. These requirements can include knowledge of the structure that the inputs follow, fine-tuning, prompting with examples of how to generate relevant examples, or knowledge that is embedded within the weights of a large language model. The existing work on compositional semantic parsing with large language models also does not consider the grounded setting, where inputs are multimodal and therefore may be difficult to fit within the context window of a large language model or pose scaling challenges at inference time.
## 3 Method
In this section, we describe an implementation of our proposed method. The method is designed to work with datasets like gSCAN where there is both an instruction and a state in the input, but it can be adjusted to work with stateless datasets where only an instruction is used.
### Meta-learning
Our meta-learning architecture is an extension of the **meta-seq2seq** model (Lake, 2019) for the case of grounded action generation (see Fig. 1). For a given episode with the initial state \(S\) and instruction \(I^{Q}\), the model is trained to generate a sequence of actions \(A^{Q}=a_{1}^{Q},...,a_{m}^{Q}\) using a set of _support inputs_\(I_{1},...,I_{n}\) and the corresponding _support outputs_\(A_{1},...,A_{n}\). Compared to the original **meta-seq2seq** which used recurrent neural networks as sequence-to-vector encoders, our implementation is based on transformers.
The inputs \(I_{1},...,I_{n}\) are encoded into vectors using a transformer encoder-decoder (T): the encoder encodes the state \(S\) and the decoder processes the support input. The vector representation is taken from the position of a special [CLS] token added to the decoder sequence. \(I^{Q}\) and \(S\) are encoded using the same transformer T, except that the decoded \(I^{Q}\) sequence is taken as the output as opposed to the decoded [CLS] token. The encoded query inputs \(I^{Q}\) are processed with an attention block which uses encoded support inputs and outputs as keys and values, respectively. The output of the attention block is a sequence of the same length as \(I^{Q}\). A Transformer Decoder (TD) implements an autoregressive model of the query output sequence \(A^{Q}=a_{1}^{Q},...,a_{m}^{Q}\) using the output of the attention block as context.
Figure 1: Our approach is a modified version of **meta-seq2seq** (Lake, 2019). A transformer decoder (TD) is trained to produce a sequence of actions \(a_{1}^{Q},...,a_{m}^{Q}\) given a query instruction \(I^{Q}\). The context consists of demonstrations \((I_{k},A_{k})\) produced by our generative model. We use a transformer encoder-decoder (T) to encode instructions and state \(S\) and a transformer encoder (TE) to encode actions. The transformers that process instructions (pink blocks) receive state \(S\) as the input of the encoder.
Similar to **meta-seq2seq**, the symbol-index mapping (from words or actions to numbers used to look up embeddings) is permuted differently in each training step. The same permutations are applied to \(I_{1},...,I_{n}\) and \(I^{Q}\), and to \(A_{1},...,A_{n}\) and \(A^{Q}\). Without the symbol-index permutations, the model can ignore the support inputs and outputs and instead predict the query outputs from the query inputs directly. The permutations make the training task impossible to solve without reference to the support inputs and actions. The effect of permutations are shown in Appendix I.
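The attention block at the core of this architecture can be sketched as follows; single-head scaled dot-product attention is assumed here, as the paper does not fix the head count for this block:

```python
import torch
import torch.nn.functional as F

def support_attention(query_enc, support_keys, support_values):
    """Attend from query instruction tokens over the encoded supports.

    query_enc      : (B, Lq, D) encoded tokens of the query instruction
    support_keys   : (B, n, D)  one vector per encoded support input
    support_values : (B, n, D)  one vector per encoded support output
    Returns a (B, Lq, D) sequence used as context by the decoder TD.
    """
    scores = torch.einsum("bqd,bnd->bqn", query_enc, support_keys)
    weights = F.softmax(scores / query_enc.size(-1) ** 0.5, dim=-1)
    return torch.einsum("bqn,bnd->bqd", weights, support_values)
```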
### Support Set Generation
Choosing the support inputs \(I_{1},...,I_{n}\) and outputs \(A_{1},...,A_{n}\) for the meta-learning model is not a trivial problem. In this work, we propose to generate the support sets using generative models trained on the training data.
We generate the support inputs by the use of a Masked Language Model (MLM). The masked language model is trained to estimate \(p(w_{i\in\mathcal{M}}|w_{j\not\in\mathcal{M}})\) - the probability distribution over a dictionary of tokens for tokens in a masked set \(\mathcal{M}\) in the input sequence given their surrounding unmasked context. The MLM is trained on a balanced dataset of all the instructions in the training data to ensure that query inputs occuring less often have a reasonable chance of being sampled. To generate support inputs, some percentage of the tokens (including padding tokens) in the query \(I^{Q}\) (in this work, 20%) are randomly masked and then replacement tokens are sampled from \(p(w_{i\in\mathcal{M}}|w_{j\not\in\mathcal{M}})\). This process is repeated \(k\geq n\) times, to form \(I_{1},...,I_{k}\). We deduplicate the samples and remove \(I^{Q}\). By generating demonstrations in this way, we hope to sample support inputs that are both related to the query, not in the training distribution and also still potentially solvable. Support outputs \(A_{1},...,A_{n}\) are generated by using a pre-trained ViLBERT model on the gSCAN training set, using each support input and its corresponding state as the model input. Examples of the generated instructions are shown in Fig. 2 and also in Appendix H.
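A minimal sketch of this sampling loop is given below. The `mlm` interface and `mask_id` are assumptions: we take the model to map a `(1, L)` batch of token ids to `(1, L, vocab)` logits, with the mask token represented by `mask_id`:

```python
import torch

@torch.no_grad()
def generate_support_inputs(mlm, query_ids, mask_id, mask_frac=0.2, k=128):
    """Sample k candidate support instructions around the query I^Q."""
    samples = set()
    for _ in range(k):
        ids = query_ids.clone()
        pos = torch.rand(ids.shape) < mask_frac        # mask ~20% of tokens
        if pos.any():
            ids[pos] = mask_id
            logits = mlm(ids.unsqueeze(0))[0]          # (L, vocab) logits
            probs = torch.softmax(logits[pos], dim=-1) # masked positions only
            ids[pos] = torch.multinomial(probs, 1).squeeze(-1)
        samples.add(tuple(ids.tolist()))
    samples.discard(tuple(query_ids.tolist()))         # deduplicate, drop I^Q
    return samples
```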
Generating both the support inputs and outputs has a few interesting advantages. The first is that we can generate examples that might not exist in the training data. This is important on gSCAN, because the correct output sequence for a given instruction depends on the state. Even if we identify a helpful support instruction, we might not be able to find support outputs corresponding to that input in the same state. Assuming that the output generation model generalizes in-distribution, we can generate the corresponding support outputs for that input. The second is that relevant support _inputs_ might not exist in the training data. Consider for example gSCAN Split B. The task is to do something with a red square. The term "red square" never appears in the query at training time, so we cannot sample it as a support from the training dataset at test time. However, if our process for generating support inputs admits "red square", then, assuming that it is within the capability of our output generation model (as has been shown by Qiu et al., 2021; Sikarwar et al., 2022), we can generate its corresponding support outputs. By exploiting this generalization, we can generate useful supports on many of the gSCAN splits, even if the inputs in those splits are not seen in the training data. All we have to do is generate those inputs, then use the existing model to generate their probable outputs.
Figure 2: Generating instructions for use with meta-seq2seq in gSCAN. The Instruction Generator takes as input the current state and \(I^{Q}\) and produces similar instructions \(I_{1},...I_{10}\) which are likely to occur in the same state according to the state-instruction distribution in the training data. An encoder-decoder Transformer trained on the training data takes each generated instruction and generates the corresponding actions in that state. Some instructions are more helpful than others. Instructions in green, \(I_{1},...,I_{5}\), show both the correct object in \(I^{Q}\) and also either one of the verb or adverb. Instructions in yellow, \(I_{6},...,I_{7}\), show the correct object, an irrelevant verb and adverb combination. Instructions in red, \(I_{8},...,I_{10}\), show a different object to the target one. We believe that as long as the instructions and actions in green are included in the support set, a sufficiently powerful model will be able to use them and ignore the other supports.
One challenge with generating the supports is that our support generator might come up with support inputs that are either not relevant or not solvable in the current state. We show in the experiments that the presence of irrelevant supports is not a particularly large problem as long as the other useful supports are also present in the support set. As for unsolvable supports, we propose to also _filter_ the supports by the use of a _scoring model_. The choice of the scoring model depends on the problem at hand, but it should estimate the probability that a generated support is in-distribution, conditioned on any relevant context. Support inputs with a high score are likely to also be solvable and should be preferred over inputs that seem unusual or do not match the current context and therefore receive a low score. Since \(k\geq n\), when choosing the top-\(n\) instructions as supports, supports with a low score will be filtered out.
To rank instructions, we train a CLIP-like model (Radford et al., 2021) with instructions and their corresponding states in the training data, using the InfoNCE loss. Each instruction and state is encoded using a Transformer Encoder and the output of a [CLS] token at the end of each sequence is taken as the representation. The outer product of a batch of instructions and state encodings is computed, then cross-entropy loss is computed along the diagonal. Essentially we train a model which predicts whether a given instruction occurs in a state or not. Generated instructions are ranked by computing the dot product between the query state and the instruction representation according to this model, in descending order. The top \(n\) instructions are chosen as the support inputs \(I_{1},...,I_{n}\) and their corresponding generated outputs \(A_{1},...,A_{n}\) are used as support outputs.
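A sketch of the contrastive loss and the ranking step follows; the batched matrix form is our restatement of computing the outer product and taking cross-entropy along the diagonal, and no temperature scaling is assumed:

```python
import torch
import torch.nn.functional as F

def infonce_loss(instr_repr, state_repr):
    """Contrastive loss: matching (instruction, state) pairs on the diagonal.

    instr_repr, state_repr: (B, D) [CLS] representations from the encoders.
    """
    logits = instr_repr @ state_repr.t()       # (B, B) outer product of the batch
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

def rank_candidates(instr_reprs, state_repr, n=8):
    """Keep the n generated instructions that best match the query state."""
    scores = instr_reprs @ state_repr          # (k,) dot products
    return torch.topk(scores, k=min(n, scores.numel())).indices
```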
## 4 Experiments
To validate our hypothesis on the importance of generating demonstrations for meta-learning in language-grounding settings, we measure the performance of the modified **meta-seq2seq** model on the gSCAN test splits (Ruis et al., 2020). In all experiments, we generate up to 8 supports for each example in the training and test splits using the methods described below, then train **meta-seq2seq** on the training set augmented with the generated supports, then evaluate on the test splits augmented with the generated supports. Examples of each are given in Appendix H. The **meta-seq2seq** uses eight layers and eight heads per Transformer component, and an embedding dimension of 128. Additional hyperparameters are described in Table 7 in the Appendix. For all experiments, training was run for 100,000 iterations and models are evaluated by taking the best checkpoint on the in-distribution Split-A validation loss, then evaluating on all the other splits.
**DemoGen (ours).** The generation strategy as described in Section 3.2. 128 instructions are sampled from the MLM, deduplicated, then ranked by taking the inner product of the instruction/state representations produced by the CLIP model.
**GandR.** The same **meta-seq2seq** model is used, but 8 support states, inputs and outputs per query are sampled from the training data using the Generate-and-Retrieve strategy (Zemlyanskiy et al., 2022). In this method a vector similarity index of input and target pairs is built, where the input-output pairs are encoded using TF-IDF. The baseline transformer model makes an initial (possibly wrong) prediction for the query input, then the query input and prediction are encoded as a vector and used to find other similar query-output pairs using the index, which become the support inputs and outputs used for meta-learning. In the original work, a tunable \(\alpha\) value is used to trade off between the importance of the input and target components of the vector search; in our implementation we keep \(\alpha\) fixed by using a single index and concatenating the vectors together. Note that in this method we do not also search for similar states, though the identity of the target object and also its distance to the agent will likely be similar as we select on the basis of input and output similarity. There is also nothing to ensure that a diversity of different instructions is sampled: only the near neighbours are sampled, even if they all correspond to a single instruction.
**Expert Heuristic.** An expert with access to a simulator generates all valid input and output pairs for a given state and selects the best ones according to the following heuristic. We select instructions which 1) go to the same object, 2) show the target verb in combination with other adverbs, 3) show the target adverb in combination with other verbs. Note that the generated supports might contain trajectories from the test set, which means that the expert uses extra knowledge not available to the learning agent. See Appendix G for more details.
**Expert Random.** The same expert is used but the support inputs are selected randomly, without the use of the heuristic described above. Thus, instructions can be about any object in the same state, not just the target one. For example, if the query instruction is "walk to a red circle while spinning", but the state includes the objects "blue square" and "yellow cylinder", the oracle function might generate instructions like "push a red circle", "pull a blue square while spinning", "walk to a yellow cylinder while zigzagging".
**Expert Other States.** We generate instructions as in the Expert Heuristic approach but the outputs are shown for states which are different to the query state. Such states are extracted from the training data. The sampled states are also included in the supports and used by **meta-seq2seq**. If the training data does not contain a state with the same instruction as the one generated by the expert, that instruction is not included in the support set.
### Analysis of Generated Instructions
We analyze some properties of the generated support sets under different generation conditions for Split H in Table 1 (similar analysis for other splits can be found in Appendix D). In retrieval-based methods, the distance between the agent and the target object is often different in the query versus the supports (4). Retrieval based methods tend to generate fewer demonstrations showing the same exact same target object (5). The target object might vary because the instruction can be under-specified (for example, "walk to a square", where the only square in the query state is a red square, but it would be perfectly valid to fetch an example where the target was a blue square). Retrieval methods do not always have both (8) the correct verb (6) and adverb (7) in the retrieved supports. This happens on GandR because the adverb can quite significantly change the outputs, such that supports with the same verb (but without the adverb) are not selected. In even fewer cases will there be at least one demonstration each of both the correct verb and adverb on a trajectory covering the same path as the one in the query (9). Our method on the other hand is able to to generate demonstrations which do have these properties.
One important question for our method is the quality of the generated supports. Ideally they should comprise _valid_ support inputs (e.g., tasks that are actually solvable in a state) and the generated support outputs should be _correct_ enough to facilitate meta-learning. We investigated this on supports generated by our method and reported the results in Table 2. On average, about 77% of generated support inputs are valid. A support output is _correct_ if it matches what an oracle generator would have generated for the corresponding instruction and state. 50% of the support pairs were both correct
\begin{table}
\begin{tabular}{r l r r r} \hline \hline & & **DemoGen** & **GandR** & **Retrieval** \\ \hline
1 & Desc. Obj. & 0.441 & 0.773 & 1.000 \\
2 & Agent Pos. & 1.000 & 0.077 & 0.033 \\
3 & Tgt. Pos. & 0.476 & 0.083 & 0.033 \\
4 & Same Diff. & 0.476 & 0.039 & 0.016 \\
5 & Tgt. Obj. & 0.476 & 0.172 & 0.192 \\
6 & Verb \& (5) & 1.000 & 0.294 & 0.435 \\
7 & Advb \& (5) & 1.000 & 0.478 & 0.333 \\
8 & (6) \& (7) & 1.000 & 0.002 & 0.187 \\
9 & (4) \& (8) & 1.000 & 0.000 & 0.000 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Analysis of generated data, Split H. Not shown is **Expert Random**, which generates instructions about the same target object about 15% of the time.
\begin{table}
\begin{tabular}{r r r} \hline \hline & **Valid** & **Correct \& Valid** \\ \hline A & 0.75 & 0.61 \\ B & 0.73 & 0.65 \\ C & 0.75 & 0.63 \\ D & 0.75 & 0.14 \\ E & 0.78 & 0.55 \\ F & 0.81 & 0.65 \\ G & 0.74 & 0.30 \\ H & 0.81 & 0.48 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Our method, correctly generated support inputs by split according to an oracle function
and valid. The number is clearly lower on splits where a Transformer is not able to solve the task well. For example on Split H, there may be "pull an [object] while spinning" in the generated support inputs, where [object] is not the target object.
### Performance on gSCAN
We first measure the performance of our approach and compare with two baselines: 1) a Transformer-based ViLBERT Qiu et al. (2021) and 2) a **meta-seq2seq** model using GandR demonstrations. The baseline ViLBERT is an 8-layer, 8-head encoder-decoder transformer with causal masking in the decoder outputs. In contrast to Qiu et al. (2021), there are no convolutional layers to process the state and the cells in the state are encoded by concatenating component embeddings instead of one-hot encoding. The results are shown in Table 3.
ViLBERT can perform very well on the in-distribution Split A, and as expected, performance on splits B, C, E and F is also very good. Performance on Split H is not strong however. In comparison, DemoGen performs quite well on Split H, at a success rate of 82% compared to 22%. Performance on the other splits is still very good, with a relative drop of about 4 points on the in-distribution Split A, and comparable performance on the other splits. While GandR seemed to be a promising approach, performance was not very high. We suspect the reason for this is that our implementation of GandR selects supports that are high in output similarity to the initially generated output, but are not very diverse and also may not contain information required to solve the task, especially on test splits where there are no examples of the instruction in the training data. More comparisons to prior work on gSCAN are in Appendix C.
In Table 4, we analyze the importance of the strategy used to select the support sets by evaluating the performance of three hand-written oracle functions on **meta-seq2seq**. **Heuristic** gets very high scores, since it samples only the instructions and actions known a priori to be relevant to the query instruction. However, without care in sampling the supports, performance drops significantly on all splits, including the in-distribution ones. For example, when sampling random possible instructions that are likely not relevant to the query task (because they concern a different object), performance on all splits is very bad. Performance is even worse when sampling demonstrations from different states. In some splits, it is not even possible to sample from the training data as there is no example of an instruction concerning the same object as in the query. For those splits where sampling from the training data is possible, even though the support instructions are the ones known to be relevant a priori, the difference in the support outputs versus the task target creates difficulties for the model.
## 5 Conclusion
In this work we examined a case where it was necessary to generate support sets for meta-learning in a grounded language learning problem. We proposed a method for doing so based on sampling from a masked language model and solving the generated support inputs using a transformer trained on the training data. Our method performs well on
\begin{table}
\begin{tabular}{c c c c} \hline & **Heuristic** & **Random** & **Other States** \\ \hline A & \(0.97\pm 0.0\) & \(0.18\pm 0.04\) & \(0.59\pm 0.06\) \\ B & \(0.98\pm 0.0\) & \(0.02\pm 0.01\) & \(0.0\pm 0.0\) \\ C & \(0.98\pm 0.0\) & \(0.12\pm 0.02\) & \(0.03\pm 0.01\) \\ D & \(0.15\pm 0.06\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\ E & \(0.98\pm 0.0\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\ F & \(0.98\pm 0.0\) & \(0.27\pm 0.03\) & \(0.67\pm 0.05\) \\ G & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\ H & \(0.75\pm 0.03\) & \(0.02\pm 0.01\) & \(0.13\pm 0.01\) \\ \hline \end{tabular}
\end{table}
Table 4: Different types of oracle behaviour. Numbers are success rates \(\pm\) standard deviation with the same measurement methodology as Table 3
\begin{table}
\begin{tabular}{c|c|c c} \hline & **ViLBERT** & **DemoGen** & **GandR** \\ \hline A & \(\mathbf{1.0\pm 0.0}\) & \(0.96\pm 0.01\) & \(0.4\pm 0.03\) \\ B & \(0.81\pm 0.32\) & \(\mathbf{0.97\pm 0.01}\) & \(0.36\pm 0.03\) \\ C & \(\mathbf{0.96\pm 0.08}\) & \(0.97\pm 0.01\) & \(0.44\pm 0.03\) \\ D & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\ E & \(\mathbf{0.98\pm 0.04}\) & \(0.98\pm 0.0\) & \(0.43\pm 0.02\) \\ F & \(\mathbf{1.0\pm 0.0}\) & \(0.98\pm 0.01\) & \(0.52\pm 0.03\) \\ G & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) & \(0.0\pm 0.0\) \\ H & \(0.22\pm 0.03\) & \(\mathbf{0.82\pm 0.02}\) & \(0.02\pm 0.0\) \\ \hline \end{tabular}
\end{table}
Table 3: Success rates for different splits (A–H). Numbers are \(\pm\) standard deviation over 10 seeds, measured after 100,000 steps. DemoGen (ours) and GandR both use Meta-seq2seq as the model architecture, with meta-learning from supports. Best results bolded. Split B performance on our implementation of ViLBERT had high variability compared to Qiu et al. (2021), so we do not claim that our approach necessarily improves performance on that split.
many of the gSCAN splits, including the challenging Split H. We analyze the nature of the generated supports and show that they contain useful information and are typically valid and correct.
## 6 Limitations
In this section, we discuss the limitations of our work.
First, on the dataset and evaluation. gSCAN is a synthetic dataset used to test the compositional generalization capabilities of sequence models in the Grounded Language Learning domain. Similarly to SCAN, the instructions in gSCAN comprise only a few words and follow a specific template. As noted by other follow-up works in the semantic parsing literature, a more realistic dataset would be one where human utterances had been collected as inputs, then translated into the target domain by task annotators (see, e.g., COGS Kim and Linzen (2020), GeoQuery Finegan-Dollak et al. (2018), CoGnition Yin et al. (2022)). Since not all the splits on gSCAN are solved, and our focus was on generating meta-learning supports, we decided to stick with the simpler gSCAN problem. However, a natural extension to this work would be to extend gSCAN itself such that the tasks were described in human-generated language (similar to ALFRED Shridhar et al. (2021); Yao et al. (2022)).
The second limitation of this work is that supports need to be generated at test time for the test set. In this work, we pre-generated the supports for the test set, though a real-time application of this work on unseen examples would need to run the generation process, which could make inference time much longer.
Third, the MLM we use to generate demonstrations in Section 3.2 assumes that the meaning of a sentence can be changed by replacing a few tokens. In real language, you may have to replace entire spans. A good line of future work would be replacing the MLM with a sequence-to-sequence generative model capable of sampling diverse and related instructions. While this is an important limitation, we believe that it does not undermine our core contribution, which is about the necessity of generating supports and how that might be done in the case where the supports you need to generate are not in the training data.
The **meta-seq2seq** method may be difficult to scale to large vocabulary sizes, because of the permutations of symbol/index mappings used during each training step. One possible approach to handle this problem would be to compress the vocabulary range for the current example to a smaller number of tokens. For example, if there are 10,000 possible tokens, but the example (including the query and all of the supports for that example) only cover 100 tokens, then each symbol can be given an index up to 100. Another approach to handling the problem is moving the complexity to the size of the context window, by providing the entire string of support inputs, states and outputs. A model such as T5 could then be used to do the in-context learning.
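As a rough sketch of the vocabulary-compression idea (the function name and data layout below are our own, purely illustrative, not an existing API):

```python
def compress_example_vocab(token_ids):
    """Map the distinct token ids used by one example (its query plus all of
    its supports) onto a small contiguous index range, so the symbol/index
    permutations only need to cover tokens the example actually uses."""
    distinct = sorted(set(token_ids))
    local = {tok: i for i, tok in enumerate(distinct)}
    return [local[tok] for tok in token_ids], local

# e.g. compress_example_vocab([9041, 57, 9041, 812]) -> ([2, 0, 2, 1], {...})
```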
## 7 Ethics
Since this work covers the foundational issue of compositional generalization in Grounded Language Learning and is applied to an entirely synthetic dataset, we do not anticipate any special ethical concerns or risks associated with this work.
## Acknowledgements
We would like to acknowledge the anonymous reviewers of this paper in its various submissions, as well as our colleagues Nicola Dainese and Ananth Mahadevan for their valuable feedback on prior versions of this work. Computational resources were generously provided by the Aalto Science-IT project and CSC - IT Center for Science, Finland. We also acknowledge the support within the Academy of Finland Flagship programme: Finnish Center for Artificial Intelligence (FCAI).
|
2302.07040 | Optimal Hadamard gate count for Clifford$+T$ synthesis of Pauli
rotations sequences | The Clifford$+T$ gate set is commonly used to perform universal quantum
computation. In such setup the $T$ gate is typically much more expensive to
implement in a fault-tolerant way than Clifford gates. To improve the
feasibility of fault-tolerant quantum computing it is then crucial to minimize
the number of $T$ gates. Many algorithms, yielding effective results, have been
designed to address this problem. It has been demonstrated that performing a
pre-processing step consisting of reducing the number of Hadamard gates in the
circuit can help to exploit the full potential of these algorithms and thereby
lead to a substantial $T$-count reduction. Moreover, minimizing the number of
Hadamard gates also restrains the number of additional qubits and operations
resulting from the gadgetization of Hadamard gates, a procedure used by some
compilers to further reduce the number of $T$ gates. In this work we tackle the
Hadamard gate reduction problem, and propose an algorithm for synthesizing a
sequence of $\pi/4$ Pauli rotations with a minimal number of Hadamard gates.
Based on this result, we present an algorithm which optimally minimizes the
number of Hadamard gates lying between the first and the last $T$ gate of the
circuit. | Vivien Vandaele, Simon Martiel, Simon Perdrix, Christophe Vuillot | 2023-02-14T13:44:11Z | http://arxiv.org/abs/2302.07040v3 | # Optimal Hadamard gate count for Clifford+\(T\) synthesis of Pauli rotations sequences
###### Abstract
The Clifford+\(T\) gate set is commonly used to perform universal quantum computation. In such setup the \(T\) gate is typically much more expensive to implement in a fault-tolerant way than Clifford gates. To improve the feasibility of fault-tolerant quantum computing it is then crucial to minimize the number of \(T\) gates. Many algorithms, yielding effective results, have been designed to address this problem. It has been demonstrated that performing a pre-processing step consisting of reducing the number of Hadamard gates in the circuit can help to exploit the full potential of these algorithms and thereby lead to a substantial \(T\)-count reduction. Moreover, minimizing the number of Hadamard gates also restrains the number of additional qubits and operations resulting from the gadgetization of Hadamard gates, a procedure used by some compilers to further reduce the number of \(T\) gates. In this work we tackle the Hadamard gate reduction problem, and propose an algorithm for synthesizing a sequence of \(\pi/4\) Pauli rotations with a minimal number of Hadamard gates. Based on this result, we present an algorithm which optimally minimizes the number of Hadamard gates lying between the first and the last \(T\) gate of the circuit.
## 1 Introduction
Fault-tolerant quantum computing enables reliable and large-scale quantum computation at the cost of an important resource overhead when compared to an error-free model. Much work has been put into quantum circuit optimization in order to reduce this additional cost and make fault-tolerant quantum computing more practical and scalable. In particular, numerous algorithms have been designed to minimize the number of \(T\) gates in a quantum circuit [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. This focus on \(T\)-count minimization is primarily due to the sizable amount of resources, in terms of time and number of qubits, generally required by fault-tolerance protocols, such as magic state distillation [13], to implement the \(T\) gate. In contrast, Clifford operations can typically be implemented at little expense in most common quantum error correcting codes via transversal operations, code deformation [14] or lattice surgery [15]. In such context, and considering the fact that the Clifford+\(T\) gate set is approximatively universal, the \(T\)-count stands out as a key metric to minimize in order to make fault-tolerant quantum computing more efficient. Moreover, minimizing the \(T\)-count is also crucial in the field of quantum circuits simulation as many simulators have a runtime that scales exponentially with respect to the number of \(T\) gates [16, 17, 18, 19, 20].
The problem of finding the optimal number of \(T\) gates in a \(\{\text{CNOT},S,T\}\) circuit composed of \(n\) qubits has been well formalized by demonstrating its equivalence with the problem of finding a maximum likelihood decoder for the punctured Reed-Muller code of order
\(n-4\) and length \(2^{n}-1\)[4], which is tantamount to the third order symmetric tensor rank decomposition problem [21]. In order to make use of this formalism in Clifford\(+T\) circuits it is necessary to circumvent the Hadamard gates in some way; this can be achieved by applying one of the two following strategies. The first method consists of extracting \(\{\text{CNOT},S,T\}\) subcircuits and interposing them with layers of Hadamard gates [1]. Then an independent and Hadamard-free instance of the \(T\)-count minimization problem can be formulated for each \(\{\text{CNOT},S,T\}\) subcircuit extracted. The second strategy involves a measurement-based gadget which can substitute a Hadamard gate. This Hadamard gadgetization procedure requires the following additional resources for each Hadamard gate gadgetized: an ancilla qubit, a CZ gate and a measurement [22].
The number of \(T\) gates in a circuit containing \(h\) Hadamard gates can be upper bounded by \(\mathcal{O}(n^{2}h)\) or \(\mathcal{O}((n+h)^{2})\) in the case where all Hadamard gates are gadgetized [4]. Hence, each Hadamard gate that must be circumvented, regardless of the strategy applied, for a lack of a good Hadamard gate optimization procedure is potentially the cause of missed opportunities for further \(T\) gate reduction. Therefore, a preliminary procedure consisting in reducing the number of Hadamard gates can result in an important \(T\)-count reduction, as demonstrated in Reference [3]. It has been shown that circumventing all Hadamard gates using the Hadamard gadgetization procedure is the strategy that leads to the best reduction in the number of \(T\) gates [6]. However, the main drawback of this method is the use of one additional qubit for each Hadamard gate gadgetized. This is obviously an inconvenience if the number of qubits at disposal is limited, but it can also be detrimental to the optimization process in two ways. Firstly, as suggested in Reference [10], it may become more difficult to find opportunities to reduce the \(T\)-count as the ratio between the number of qubits and the number of \(T\) gates increases. In addition, the runtime of a \(T\)-count optimizer can drastically increase as the number of qubits grows. For all these reasons it is important to minimize the number of auxiliary qubits needed, which further motivates investigations into a pre-processing step optimizing the number of Hadamard gates in the initial circuit.
We can mainly distinguish two strategies for the optimization of quantum circuits. The first one is referred to as pattern matching and involves detecting patterns of gates within the circuit and then substituting them by an equivalent, but nonetheless different, sequence of gates. A series of transformations is therefore applied to the circuit, but its semantics is preserved at each step of the process. This method has already been applied to the optimization of Hadamard gates by using rewriting rules that preserve or reduce the number of Hadamard gates within the circuit [3, 10]. The second method is circuit re-synthesis, which consists in extracting some parts of the circuit, representing them by higher-level constructs and performing their synthesis to obtain an equivalent circuit. This method has not yet been considered for the optimization of Hadamard gates, despite displaying excellent performance on other optimization problems such as \(T\) gate reduction [6, 8].
In the case of circuit re-synthesis, a commonly used fact is that the operation performed by a given Clifford\(+T\) circuit can be represented by a sequence of \(\pi/4\) Pauli rotations followed by a final Clifford operator [2]. A strategy for optimizing the number of Hadamard gates could then consist of synthesizing this sequence of \(\pi/4\) Pauli rotations using as few Hadamard gates as possible. In Section 3, we present an algorithm that solves this problem optimally. With the Hadamard gadgetization approach, a Hadamard gate needs to be gadgetized only if it comes after and precedes a \(T\) gate in the circuit, we say that such Hadamard gates are internal Hadamard gates. This leads to a more specific Hadamard gate reduction problem consisting in reducing the number of internal Hadamard gates within the circuit. We tackle this problem in Section 4 by proposing an algorithm that synthesizes a sequence of Pauli rotations with a minimal number of internal Hadamard gates. Section 5 presents alternative versions of our algorithms with lower complexities. Benchmarks are then given in Section 6 to evaluate the performances and scalability of our algorithms on a library of reversible logic circuits and on large-scale quantum circuits. Our
algorithms are not restricted to the Clifford\(+T\) gate set and can be executed on any circuit composed of \(\{X,\text{CNOT},S,H,R_{Z}\}\) gates.
## 2 Preliminaries
The four Pauli matrices are defined as follows:
\[I=\begin{pmatrix}1&0\\ 0&1\end{pmatrix},\quad X=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad Y=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\quad Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}.\]
Two Pauli matrices commute if they are equal or if one of them is the identity matrix \(I\), otherwise they anticommute. All tensor products of \(n\) Pauli matrices, together with an overall phase of \(\pm 1\) or \(\pm i\), generate the Pauli group \(\mathcal{P}_{n}\). We define the subset \(\mathcal{P}_{n}^{*}\subset\mathcal{P}_{n}\) as the set of Pauli operators which have an overall phase of \(\pm 1\). We will use \(P_{i}\) to denote the \(i\)th Pauli matrix of a Pauli operator \(P\), for instance if \(P=Z\otimes X\) then \(P_{1}=Z\) and \(P_{2}=X\). We say that a Pauli operator \(P\) is diagonal if and only if \(P_{i}\in\{I,Z\}\) for all \(i\). Two Pauli operators \(P\) and \(P^{\prime}\) commute if there is an even number of indices \(i\) such that \(P_{i}\) anticommutes with \(P_{i}^{\prime}\), otherwise they anticommute. Given a Pauli operator \(P\in\mathcal{P}_{n}^{*}\) and an angle \(\theta\in\mathbb{R}\), a Pauli rotation \(R_{P}(\theta)\) is defined as follows:
\[R_{P}(\theta)=\exp(-i\theta P/2)=\cos(\theta/2)I-i\sin(\theta/2)P.\]
For example the \(T\) gate is defined as a \(\pi/4\) Pauli \(Z\) rotation:
\[T=R_{Z}(\pi/4)\]
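As a quick numerical illustration (a minimal numpy sketch, not from the paper), one can build \(R_{P}(\theta)\) directly from the definition and check that \(R_{Z}(\pi/4)\) matches the \(T\) gate up to the global phase \(e^{-i\pi/8}\):

```python
import numpy as np
from functools import reduce

PAULI = {"I": np.eye(2), "X": np.array([[0, 1], [1, 0]]),
         "Y": np.array([[0, -1j], [1j, 0]]), "Z": np.diag([1.0, -1.0])}

def rotation(label, theta):
    """R_P(theta) = cos(theta/2) I - i sin(theta/2) P, with P given as e.g. "ZX"."""
    P = reduce(np.kron, (PAULI[c] for c in label))
    return np.cos(theta / 2) * np.eye(len(P)) - 1j * np.sin(theta / 2) * P

T = np.diag([1, np.exp(1j * np.pi / 4)])
assert np.allclose(rotation("Z", np.pi / 4), np.exp(-1j * np.pi / 8) * T)
```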
Clifford gates can also be represented in terms of Pauli rotations, we will mostly make use of the CNOT, \(S\) and \(H\) gates defined as follows:
\[\text{CNOT} =R_{ZX}(\pi/2)R_{ZI}(-\pi/2)R_{IX}(-\pi/2),\] \[S =R_{Z}(\pi/2),\] \[H =R_{Z}(\pi/2)R_{X}(\pi/2)R_{Z}(\pi/2).\]
The Clifford group \(\mathcal{C}_{n}\) is defined as the set of unitaries stabilizing \(\mathcal{P}_{n}\):
\[\mathcal{C}_{n}=\{U\mid U^{\dagger}PU\in\mathcal{P}_{n},\,\forall P\in \mathcal{P}_{n}\}.\]
and is generated by the \(\{\text{CNOT},S,H\}\) gate set. Note that for each pair of Pauli operators \(P,P^{\prime}\in\mathcal{P}_{n}\setminus\{I^{\otimes n}\}\) there exists a Clifford operator \(U\in\mathcal{C}_{n}\) such that \(P^{\prime}=U^{\dagger}PU\). Unless indicated otherwise, the term Clifford circuit will refer to a circuit exclusively composed of gates from the set \(\{X,\text{CNOT},S,H\}\), the use of other Clifford gate set is discussed at the end of Section 3.2.
A Pauli operator \(P\in\mathcal{P}_{n}^{*}\) can be encoded using \(2n+1\) bits: \(2n\) bits for the \(n\) Pauli matrices and \(1\) bit for the sign [23]. In the following we will encode a Pauli operator \(P\in\mathcal{P}_{n}^{*}\) with \(2n\) bits and neglect its sign as it has no impact on the formulation of our problem; we will use the term Pauli product to designate a Pauli operator deprived of its sign. Let \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) be a block matrix of size \(2n\times m\) representing a sequence of \(m\) Pauli products acting on \(n\) qubits such that \(\mathcal{Z}\) is the submatrix of \(\mathcal{S}\) formed by its first \(n\) rows and \(\mathcal{X}\) is the submatrix of \(\mathcal{S}\) formed by its last \(n\) rows. The value \((\mathcal{Z}_{i,j},\mathcal{X}_{i,j})\) represents the \(i\)th component of the \(j\)th Pauli product encoded by \(\mathcal{S}\), such that the values \((0,0),(0,1),(1,1)\) and \((1,0)\) correspond to the Pauli matrices \(I,X,Y\) and \(Z\) respectively. We use the notation \(\mathcal{S}_{:,i}\) to refer to the column \(i\) of the matrix \(\mathcal{S}\), and we will denote by
\(P(\mathcal{S}_{:,i})\) the Pauli product encoded by \(\mathcal{S}_{:,i}\), and we will say that \(\mathcal{S}_{:,i}\) is diagonal if and only if \(P(\mathcal{S}_{:,i})\) is diagonal and that \(\mathcal{S}_{:,i}\) and \(\mathcal{S}_{:,j}\) commute (or anticommute) if their associated Pauli products \(P(\mathcal{S}_{:,i})\) and \(P(\mathcal{S}_{:,j})\) commute (or anticommute). Throughout the document we use zero-based indexing for vectors and matrices, and the initial element is termed the zeroth element; for instance, the zeroth column of \(\mathcal{S}\) is \(\mathcal{S}_{:,0}\). If all Pauli products encoded by \(\mathcal{S}\) are conjugated by a Clifford gate \(U\in\{\mathrm{CNOT},S,H\}\), then \(\mathcal{S}\) can be updated to encode the Pauli products \(U^{\dagger}P(\mathcal{S}_{:,i})U\), for all \(i\), via the operations depicted in Figure 1. These operations are analogous to the operations performed in the tableau representation [23]. We will say that \(\tilde{\mathcal{S}}=U^{\dagger}\mathcal{S}U\) if and only if \(P(\tilde{\mathcal{S}}_{:,i})=\pm U^{\dagger}P(\mathcal{S}_{:,i})U\) for all \(i\) and for some Clifford operator \(U\).
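Figure 1 itself is not reproduced in this text, but the operations it depicts are the standard sign-free tableau conjugation rules. As a minimal sketch (our own function names, assuming \(\mathcal{Z}\) and \(\mathcal{X}\) are stored as \(n\times m\) 0/1 numpy arrays), conjugating every encoded Pauli product by a gate amounts to a few row updates:

```python
import numpy as np

# Column j of [Z; X] encodes the j-th Pauli product; rows are qubits.
def apply_cnot(Z, X, c, t):   # CNOT with control c and target t
    X[t] ^= X[c]              # conjugation maps X_c to X_c X_t
    Z[c] ^= Z[t]              # and Z_t to Z_c Z_t

def apply_s(Z, X, q):         # phase gate S on qubit q
    Z[q] ^= X[q]              # X <-> Y on qubit q (sign neglected)

def apply_h(Z, X, q):         # Hadamard on qubit q
    Z[q], X[q] = X[q].copy(), Z[q].copy()   # X <-> Z (Y fixed, sign neglected)
```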
Any Clifford\(+R_{Z}\) circuit can be represented up to a global phase by a sequence of Pauli rotations followed by a final Clifford operator [2]. The synthesis of a Pauli rotation is then a key procedure for constructing an equivalent Clifford\(+R_{Z}\) circuit from this representation. Let \(U\) be a Clifford operator such that
\[U^{\dagger}R_{P}(\theta)U=R_{U^{\dagger}PU}(\theta)=R_{Z_{i}}(\theta) \tag{1}\]
for some qubit \(i\) and some Pauli operator \(P\in\mathcal{P}_{n}^{*}\). Then the synthesis of the Pauli rotation \(R_{P}(\theta)\) can be performed by implementing \(U\), \(U^{\dagger}\) and inserting a \(R_{Z}(\theta)\) gate in between on qubit \(i\). If \(P\) is diagonal then the Clifford operator \(U\) satisfying Equation 1 can be implemented using only CNOT and \(X\) gates. Otherwise, if \(P\) is not diagonal, at least one Hadamard gate is required to implement \(U\) over the \(\{X,\mathrm{CNOT},S,H\}\) gate set such that
\[U^{\dagger}R_{P}(\theta)U=R_{P^{\prime}}(\theta) \tag{2}\]
where \(P^{\prime}\) is diagonal. Note that the gate set considered is not minimal as the \(X\) gate can be generated from the \(S\) and \(H\) gates. As our cost model is the number of Hadamard gates we include the \(X\) gate so that no \(H\) gates are required to implement it. The \(X\) gate finds its purpose in the case where
\[U^{\dagger}R_{P}(\theta)U=R_{U^{\dagger}PU}(\theta)=R_{Z_{i}}(-\theta) \tag{3}\]
by allowing the implementation of the \(R_{Z}(-\theta)\) gate using the \(R_{Z}(\theta)\) gate via the equality
\[R_{Z}(-\theta)=XR_{Z}(\theta)X. \tag{4}\]
Nonetheless, in the case where \(\theta=\pi/4\), the minimal \(\{\text{CNOT},S,H\}\) gate set can be used since the negative sign can be compensated by inserting three \(S\) gates as
\[UR_{Z_{i}}(7\pi/4)U^{\dagger}=UR_{Z_{i}}(-\pi/4)U^{\dagger}=R_{UZ_{i}U^{\dagger} }(-\pi/4)=R_{P}(\pi/4). \tag{5}\]
The synthesis of a sequence of Pauli rotations using the Clifford+\(R_{Z}\) gate set implies the construction of a diagonalization network, derived from the notion of parity network established in Reference [24] and which is defined as follows.
**Definition 1** (Diagonalization network).: _A Clifford circuit \(C\) is a diagonalization network for a sequence \(\mathcal{S}\) of \(m\) Pauli products if and only if there exists \(m\) non-negative integers \(\alpha_{0}\leq\ldots\leq\alpha_{m-1}\) such that \(U_{i}^{\dagger}P(\mathcal{S}_{:,i})U_{i}\) is diagonal, where \(U_{i}\) is the Clifford operator implemented by the first \(\alpha_{i}\) gates of \(C\)._
A sequence of \(m\) Pauli rotations can be represented by a triple \((\mathcal{S},\mathbf{b},\mathbf{\theta})\), where \(\mathcal{S}\) encodes a sequence of \(m\) Pauli products, \(\mathbf{b}\in\{-1,1\}^{m}\) and \(\mathbf{\theta}\in\mathbb{R}^{m}\) such that \(b_{i}\) and \(\theta_{i}\) correspond to the sign and angle associated with the Pauli product \(P(\mathcal{S}_{:,i})\). Let \(C\) be a diagonalization network for \(\mathcal{S}\), then the sequence of Pauli rotations represented by \((\mathcal{S},\mathbf{b},\mathbf{\theta})\) can be easily implemented from \(C\) up to a final Clifford circuit by inserting \(m\)\(\{X,\text{CNOT},R_{Z}\}\) subcircuits into \(C\). Indeed, as stated previously, if a Pauli product \(P\) is diagonal then the Clifford operator \(V\) satisfying
\[V^{\dagger}R_{P}(\theta)V=R_{V^{\dagger}PV}(\theta)=R_{Z_{j}}(\theta) \tag{6}\]
for some qubit \(j\), can be implemented using only CNOT and \(X\) gates. And because \(C\) is a diagonalization network for \(\mathcal{S}\) then by definition there exists \(m\) non-negative integers \(\alpha_{0}\leq\ldots\leq\alpha_{m-1}\) such that \(U_{i}^{\dagger}P(\mathcal{S}_{:,i})U_{i}\) is diagonal, where \(U_{i}\) is the Clifford operator implemented by the first \(\alpha_{i}\) gates of \(C\). It follows that inserting, for all \(i\) and just after the \(\alpha_{i}\)th gate of \(C\), a \(\{\text{CNOT},X\}\) implementation of the Clifford operators \(V_{i}\) and \(V_{i}^{\dagger}\) with the \(R_{Z_{j}}(b_{i}\theta_{i})\) gate in between, such that \(V_{i}\) satisfies
\[V_{i}^{\dagger}R_{U_{i}^{\dagger}P(\mathcal{S}_{:,i})U_{i}}(b_{i}\theta_{i})V_{i}=R_{Z_{j}}(b_{i}\theta_{i}) \tag{7}\]
for some qubit \(j\), will result in an implementation of the sequence of Pauli rotations defined by \((\mathcal{S},\mathbf{b},\mathbf{\theta})\) up to a final Clifford circuit.
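For instance, the Clifford operator \(V\) of Equation 6 can be taken to be a CNOT fan-in on the \(Z\)-support of the diagonal product. A sketch reusing the helpers above (the choice of \(j\) is arbitrary; in the full procedure \(V_{i}^{\dagger}\) is applied right after the \(R_{Z}\) gate, undoing the effect on later products):

```python
def reduce_diagonal_column(Z, X, i):
    """CNOT fan-in mapping the diagonal product P(S[:, i]) onto a single Z_j.
    Returns the gate list and the qubit j carrying the remaining Z."""
    support = np.flatnonzero(Z[:, i])   # qubits where column i carries a Z
    j = support[0]
    gates = []
    for k in support[1:]:
        apply_cnot(Z, X, k, j)          # Z_k Z_j -> Z_j under conjugation
        gates.append(("CNOT", k, j))
    return gates, j
```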
The circuit obtained by this procedure obviously contains the same number of Hadamard gates as \(C\) as no additional Hadamard gate was inserted. Thus, synthesizing a sequence of Pauli rotations represented by \((\mathcal{S},\mathbf{b},\mathbf{\theta})\) with a minimal number of Hadamard gates up to a final Clifford operator is equivalent to the problem of constructing a diagonalization network for \(\mathcal{S}\) using a minimal number of Hadamard gates. This approach can easily be extended to take into account the final Clifford operator, as explained in Section 3.3. We define \(h(C)\) as being the number of Hadamard gates in a Clifford circuit \(C\), and we extend the notation for a sequence of Pauli products \(\mathcal{S}\) such that \(h(\mathcal{S})=\min\{h(C)\mid C\) is a diagonalization network for \(\mathcal{S}\}\). The problem of synthesizing a sequence of Pauli rotations ignoring the final Clifford operator with a minimal number of Hadamard gates can then be defined as follows.
**Problem 1** (H-Opt).: _Given a sequence \(\mathcal{S}\) of Pauli products, find a Clifford circuit \(C\) that is a diagonalization network for \(\mathcal{S}\) and such that \(h(C)=h(\mathcal{S})\)._
In Clifford+\(T\) circuits, the Hadamard gadgetization procedure aims to transform the circuit in order to obtain an Hadamard-free subcircuit containing all the \(T\) gates. Hence, a Hadamard gate does not need to be gadgetized if there is no \(T\) gate preceding it. To take this particularity into consideration we define the following problem relating to the synthesis of a sequence of Pauli rotations up to a final Clifford circuit with a minimal number of internal Hadamard gates.
**Problem 2** (Internal-H-Opt).: _Given a sequence \(\mathcal{S}\) of Pauli products, find a Clifford circuit \(C=C_{1}::C_{2}\), i.e. \(C\) is the circuit resulting from the concatenation of \(C_{1}\) and \(C_{2}\), such that \(h(C_{2})\) is minimized and \(C_{2}\) is a diagonalization network for \(\tilde{\mathcal{S}}=U^{\dagger}\mathcal{S}U\) where \(U\) is the Clifford operator associated with \(C_{1}\)._
In Section 3.1 we propose a diagonalization network synthesis algorithm to solve the H-Opt problem. We prove its optimality in Section 3.2, and it is then employed in Section 4 to design an algorithm solving the Internal-H-Opt problem.
## 3 Hadamard gates minimization
### Diagonalization network synthesis algorithm
We first describe a simple procedure, of fundamental importance in our diagonalization network synthesis algorithm, to construct a Clifford operator \(U\) such that \(U^{\dagger}PU\) is diagonal, where \(P\) is a non-diagonal Pauli product. Let \(i\) be such that \(P_{i}\in\{X,Y\}\), which necessarily exists as \(P\) is non-diagonal. If there exists \(j\neq i\) such that \(P_{j}\in\{X,Y\}\), then, based on the operation depicted in Figure 1(a), we can deduce that the Pauli product \(P^{\prime}\) resulting from the conjugation of \(P\) by the \(\text{CNOT}_{i,j}\) gate satisfies \(P^{\prime}_{i}\in\{X,Y\}\), \(P^{\prime}_{j}\in\{I,Z\}\) and \(P^{\prime}_{k}=P_{k}\) for all \(k\not\in\{i,j\}\). More generally, if \(P^{\prime}=U^{\dagger}PU\) where \(U\) is the Clifford operator associated with the fan-out formed by the gates \(\{\text{CNOT}_{i,j}\mid P_{j}\in\{X,Y\},\forall j\neq i\}\), then \(P^{\prime}_{j}\) is diagonal for all \(j\neq i\). To complete the diagonalization of \(P^{\prime}\) we then just have to make \(P^{\prime}_{i}\) diagonal while preserving this property. If \(P^{\prime}_{i}=Y\) then conjugating \(P^{\prime}\) by a \(S\) gate on qubit \(i\) maps \(P^{\prime}_{i}\) to \(X\). And in the case where \(P^{\prime}_{i}=X\)
then conjugating \(P^{\prime}\) by a \(H\) gate on qubit \(i\) maps \(P^{\prime}_{i}\) to \(Z\), and our diagonalization procedure is complete as the \(S_{i}\) and \(H_{i}\) operations do not affect \(P^{\prime}_{j}\) where \(j\neq i\).
Consider the diagonalization network synthesis algorithm whose pseudo-code is given in Algorithm 1 and which takes a sequence \(\mathcal{S}\) of \(m\) Pauli products as input. The algorithm constructs a Clifford circuit \(C\) iteratively by processing the Pauli products constituting \(\mathcal{S}\) in order. When a Pauli product \(P=P(\mathcal{S}_{:,i})\) is being processed, if \(U^{\dagger}PU\), where \(U\in\mathcal{C}_{n}\) is the Clifford operator implemented by \(C\), is diagonal then the algorithm moves on to the next Pauli product. Otherwise, if \(U^{\dagger}PU\) is not diagonal, a sequence of gates, constructed using the procedure described above, is appended to \(C\) so that the updated Pauli product \(U^{\dagger}PU\) is diagonal. Thus, Algorithm 1 outputs a Clifford circuit that is a diagonalization network for \(\mathcal{S}\).
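A compact sketch of this loop, reusing the update helpers above (`circ` records the gates of the diagonalization network; signs are ignored throughout, consistent with the encoding):

```python
def algorithm1(Z, X):
    """Sketch of Algorithm 1: processes the columns of [Z; X] in order and
    appends CNOT/S/H gates so that each column is diagonal when reached.
    Z and X are modified in place."""
    n, m = Z.shape
    circ = []
    for i in range(m):
        rows = np.flatnonzero(X[:, i])   # qubits carrying an X or Y component
        if rows.size == 0:
            continue                     # column already diagonal
        k = rows[0]                      # pivot qubit
        for j in rows[1:]:               # CNOT fan-out clears X parts off-pivot
            apply_cnot(Z, X, k, j); circ.append(("CNOT", k, j))
        if Z[k, i]:                      # pivot component is Y
            apply_s(Z, X, k); circ.append(("S", k))
        apply_h(Z, X, k); circ.append(("H", k))   # pivot X -> Z
    return circ
```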
**Complexity analysis.** At each iteration the algorithm carries out at most \(\mathcal{O}(n)\) row operations on \(\mathcal{S}\) where \(n\) is the number of qubits, \(m\) iterations are performed and \(\mathcal{S}\) has \(m\) columns, therefore the complexity of Algorithm 1 is \(\mathcal{O}(nm^{2})\).
In the typical case where \(n<m\), a faster version of Algorithm 1 can be implemented using the tableau representation [23]. Let \(\mathcal{T}\) be a tableau initialized at the beginning of the algorithm. Instead of updating \(\mathcal{S}\) for each Clifford gate appended to the circuit \(C\), we can use \(\mathcal{T}\) to keep track of the Clifford operator \(U\) implemented by \(C\). For each Clifford gate appended to \(C\), \(\mathcal{T}\) can be updated in \(\mathcal{O}(n)\)[23]. Then, the algorithm proceeds in the same way as Algorithm 1 by sequentially diagonalizing the Pauli products represented by \(\mathcal{S}\). However, the \(i\)th Pauli product to be diagonalized is not \(P(\mathcal{S}_{:,i})\) but \(U^{\dagger}P(\mathcal{S}_{:,i})U\), which can be computed in \(\mathcal{O}(n^{2})\) using the tableau \(\mathcal{T}\). This operation must be performed \(\mathcal{O}(m)\) times and \(\mathcal{T}\) must be updated \(\mathcal{O}(nm)\) times as the number of gates in the final Clifford circuit is \(\mathcal{O}(nm)\), therefore the overall time complexity of this algorithm is \(\mathcal{O}(n^{2}m)\). More details on this approach are given in Section 5, where this algorithm is adapted to take a Clifford\(+R_{Z}\) circuit as input instead of a sequence of Pauli products.
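A sketch of this bookkeeping (our own class; signs neglected): the rows below store the bit vectors of \(U^{\dagger}Z_{q}U\) and \(U^{\dagger}X_{q}U\), each gate update touches \(\mathcal{O}(n)\) bits, and conjugating one Pauli product is a GF(2) matrix-vector product in \(\mathcal{O}(n^{2})\):

```python
class PauliTracker:
    """Tracks the images of Zq and Xq under conjugation by the Clifford U
    implemented by the gates appended so far, as 2n-bit z|x vectors."""
    def __init__(self, n):
        self.n = n
        self.imgZ = np.zeros((n, 2 * n), dtype=np.uint8)
        self.imgX = np.zeros((n, 2 * n), dtype=np.uint8)
        for q in range(n):
            self.imgZ[q, q] = 1       # initially U = I: image of Zq is Zq
            self.imgX[q, n + q] = 1   # and image of Xq is Xq

    def append_cnot(self, c, t):
        for img in (self.imgZ, self.imgX):
            img[:, self.n + t] ^= img[:, self.n + c]   # x_t ^= x_c
            img[:, c] ^= img[:, t]                     # z_c ^= z_t

    def append_s(self, q):
        for img in (self.imgZ, self.imgX):
            img[:, q] ^= img[:, self.n + q]            # z_q ^= x_q

    def append_h(self, q):
        for img in (self.imgZ, self.imgX):
            img[:, [q, self.n + q]] = img[:, [self.n + q, q]]  # swap z_q, x_q

    def conjugate(self, z, x):
        """Bits of U†PU for the product P with z/x bit vectors, in O(n^2)."""
        v = (z @ self.imgZ + x @ self.imgX) % 2
        return v[:self.n], v[self.n:]
```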
**Hadamard gate count.** In order to evaluate \(h(C)\), where \(C\) is the output circuit of Algorithm 1, we will rely on the following definition.
**Definition 2** (Commutativity matrix).: _Let \(\mathcal{S}\) be a sequence of \(m\) Pauli products. The commutativity matrix \(A^{(\mathcal{S})}\) associated with \(\mathcal{S}\) is a strictly upper triangular Boolean matrix of size \(m\times m\) such that for all \(i<j\):_
\[A^{(\mathcal{S})}_{i,j}=\begin{cases}0&\text{ if }\mathcal{S}_{:,i}\text{ commutes with }\mathcal{S}_{:,j},\\ 1&\text{ if }\mathcal{S}_{:,i}\text{ anticommutes with }\mathcal{S}_{:,j}. \end{cases}\]
For convenience we will drop the superscript \((\mathcal{S})\) from \(A\) when it is clear from the context that \(A\) is associated with \(\mathcal{S}\). The commutativity matrix \(A^{(\mathcal{S})}\) can also be seen as the adjacency matrix of a directed acyclic graph, which has already been studied and linked to the \(T\)-depth optimization problem [8]. In this work, we further reinforce the interest in this structure by establishing a relation between the H-Opt and Internal-H-Opt problems and the rank of \(A^{(\mathcal{S})}\). Note that if \(\tilde{\mathcal{S}}=U^{\dagger}\mathcal{S}U\), where \(U\) is some Clifford operator, then \(A^{(\tilde{\mathcal{S}})}=A^{(\mathcal{S})}\) because if two Pauli products \(P\) and \(P^{\prime}\) are commuting (or anticommuting) then \(U^{\dagger}PU\) and \(U^{\dagger}P^{\prime}U\) are commuting (or anticommuting).
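Concretely, \(A^{(\mathcal{S})}_{i,j}\) is the symplectic inner product of columns \(i\) and \(j\) modulo 2; a short sketch consistent with the encoding above:

```python
def commutativity_matrix(Z, X):
    """Strictly upper-triangular A with A[i, j] = 1 iff the i-th and j-th
    encoded Pauli products anticommute."""
    A = (Z.T.astype(int) @ X.astype(int) + X.T.astype(int) @ Z.astype(int)) % 2
    return np.triu(A, k=1)
```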
The number of Hadamard gates in the circuit produced by Algorithm 1 can be characterized via the following theorem.
**Theorem 1**.: _Let \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) be a sequence of \(m\) Pauli products, \(A\) be its commutativity matrix and \(C\) be the Clifford circuit returned by Algorithm 1 when \(\mathcal{S}\) is given as input. Then \(h(C)=\operatorname{rank}(M)\) where \(M=\begin{bmatrix}\mathcal{X}\\ A\end{bmatrix}\)._
Proof.: Let \(\mathcal{S}^{(i)}=\begin{bmatrix}\mathcal{Z}^{(i)}\\ \mathcal{X}^{(i)}\end{bmatrix}\) be the sequence of Pauli products given as input to the \(i\)th recursive call of Algorithm 1 with \(\mathcal{S}^{(0)}=\mathcal{S}\), and let \(M^{(i)}=\begin{bmatrix}\mathcal{X}^{(i)}\\ A^{(i)}\end{bmatrix}\) where \(A^{(i)}\) is the commutativity matrix associated with \(\mathcal{S}^{(i)}\). We first start by analyzing how \(M^{(i)}\) evolves to \(M^{(i+1)}\) when \(\mathcal{S}^{(i)}_{:,0}\) is diagonal. In such case, we can obtain \(M^{(i+1)}\) from \(M^{(i)}\) by removing the first column of \(M^{(i)}\) and the first row of its submatrix \(A^{(i)}\). Let \(P=P(\mathcal{S}^{(i)}_{:,0})\), then, because \(P\) is diagonal, the following equation holds:
\[\bigoplus_{k\in K}\mathcal{X}^{(i)}_{k}=\bigoplus_{k\in K}M^{(i)}_{k}=A^{(i)}_ {0} \tag{8}\]
where \(K=\{k\mid\mathcal{Z}^{(i)}_{k,0}=1\}\). Indeed, as \(P\) is diagonal we necessarily have \(P_{k}=Z\) for some \(k\), and in the case where \(P_{j}=I\) for all \(j\neq k\) we have \(\mathcal{X}^{(i)}_{k,j}=1\) if and only if \(\mathcal{S}^{(i)}_{:,0}\) anticommutes with \(\mathcal{S}^{(i)}_{:,j}\), and so \(\mathcal{X}^{(i)}_{k}=A^{(i)}_{0}\). In a more general case, if there exists \(j\neq k\) satisfying \(P_{j}=Z\) then we can apply a CNOT\({}_{j,k}\) gate for all such \(j\) in order to fall back on our previous case, which implies Equation 8. Consequently, removing the first row of the submatrix \(A^{(i)}\) will not change the rank of \(M^{(i)}\). Moreover, due to the fact that \(\mathcal{S}^{(i)}_{:,0}\) is diagonal, the first column of \(M^{(i)}\) is equal to the null vector. Therefore we have \(\text{rank}(M^{(i+1)})=\text{rank}(M^{(i)})\) when \(\mathcal{S}^{(i)}_{:,0}\) is diagonal.
In the case where \(\mathcal{S}^{(i)}_{:,0}\) is not diagonal, Algorithm 1 will apply a sequence of CNOT and \(S\) gates followed by a single \(H\) gate. Let \(\tilde{\mathcal{S}}^{(i)}=\begin{bmatrix}\tilde{\mathcal{Z}}^{(i)}\\ \tilde{\mathcal{X}}^{(i)}\end{bmatrix}\) be the sequence of Pauli products obtained by conjugating all Pauli products of \(\mathcal{S}^{(i)}\) by this sequence of CNOT and \(S\) gates, and let \(\tilde{M}^{(i)}=\begin{bmatrix}\tilde{\mathcal{X}}^{(i)}\\ A^{(i)}\end{bmatrix}\). Note that we have \(\text{rank}(\tilde{M}^{(i)})=\text{rank}(M^{(i)})\) as applying an \(S\) or CNOT operation on \(\mathcal{S}^{(i)}\) does not change the rank of \(\mathcal{X}^{(i)}\). Let \(j\) be the qubit on which the Hadamard gate is applied; we must have \(\tilde{M}^{(i)}_{j,0}=1\) and \(\tilde{M}^{(i)}_{k,0}=0\) for all \(k\neq j\), which implies that \(\tilde{M}^{(i)}_{j}\) is independent from all the other rows of \(\tilde{M}^{(i)}\). Let \(\hat{M}^{(i)}=\begin{bmatrix}\hat{\mathcal{X}}^{(i)}\\ A^{(i)}\end{bmatrix}\) where \(\hat{\mathcal{S}}^{(i)}=\begin{bmatrix}\hat{\mathcal{Z}}^{(i)}\\ \hat{\mathcal{X}}^{(i)}\end{bmatrix}\) is obtained by conjugating all Pauli products of \(\tilde{\mathcal{S}}^{(i)}\) by a Hadamard gate on qubit \(j\), and notice that \(\hat{M}^{(i)}_{k}=\tilde{M}^{(i)}_{k}\) for all \(k\neq j\). Analogously to Equation 8, since \(\hat{\mathcal{S}}^{(i)}_{:,0}\) is diagonal, the following equation holds:

\[\bigoplus_{k\in\hat{K}}\hat{\mathcal{X}}^{(i)}_{k}=\bigoplus_{k\in\hat{K}}\hat{M}^{(i)}_{k}=A^{(i)}_{0} \tag{9}\]

where \(\hat{K}=\{k\mid\hat{\mathcal{Z}}^{(i)}_{k,0}=1\}\). Furthermore, as \(j\in\hat{K}\), \(\hat{M}^{(i)}_{j}\) can be expressed as follows:

\[\hat{M}^{(i)}_{j}=\bigoplus_{k\in\hat{K}\setminus\{j\}}\hat{M}^{(i)}_{k}\oplus A^{(i)}_{0}=\bigoplus_{k\in\tilde{K}}\tilde{M}^{(i)}_{k}\oplus A^{(i)}_{0} \tag{10}\]

where \(\tilde{K}=\{k\mid\tilde{\mathcal{Z}}^{(i)}_{k,0}=1\}=\hat{K}\setminus\{j\}\). It follows that \(\hat{M}^{(i)}_{j}\) is a linear combination of the rows of \(\tilde{M}^{(i)}\) whereas \(\tilde{M}^{(i)}_{j}\) is an independent row, and so \(\text{rank}(\hat{M}^{(i)})=\text{rank}(\tilde{M}^{(i)})-1\). After the Hadamard gate has been applied we end up in the same case as when \(\mathcal{S}^{(i)}_{:,0}\) is diagonal, therefore we have \(\text{rank}(M^{(i+1)})=\text{rank}(\hat{M}^{(i)})=\text{rank}(M^{(i)})-1\).
We demonstrated that \(\text{rank}(M^{(i+1)})=\text{rank}(M^{(i)})\) when no Hadamard gate is applied at the \(i\)th recursive call, and that \(\text{rank}(M^{(i+1)})=\text{rank}(M^{(i)})-1\) if one Hadamard gate is applied. Thus,
the number of Hadamard gates in the Clifford circuit \(C\) is equal to \(\operatorname{rank}(M)-\operatorname{rank}(M^{(m)})\) where \(m\) is the number of Pauli products in \(\mathcal{S}\). The sequence of Pauli products \(\mathcal{S}^{(m)}\) is empty, hence \(\operatorname{rank}(M^{(m)})=0\) and \(h(C)=\operatorname{rank}(M)\).
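Theorem 1 lends itself to a quick numerical sanity check with the sketches above; `gf2_rank` below is a plain Gaussian elimination over GF(2), and the random instance is purely illustrative:

```python
def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2)."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]        # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                    # eliminate column col
        rank += 1
    return rank

rng = np.random.default_rng(0)
Z = rng.integers(0, 2, size=(4, 6), dtype=np.uint8)   # random 4-qubit,
X = rng.integers(0, 2, size=(4, 6), dtype=np.uint8)   # 6-product instance
M = np.vstack([X, commutativity_matrix(Z, X)])
circ = algorithm1(Z, X)                               # mutates Z and X
assert sum(g[0] == "H" for g in circ) == gf2_rank(M)  # h(C) = rank(M)
```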
### Optimality
In this section we demonstrate the optimality of Algorithm 1 by proving the following theorem.
**Theorem 2**.: _Let \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) be a sequence of \(m\) Pauli products, \(A\) be its commutativity matrix and \(C\) be a Clifford circuit that optimally solves the H-Opt problem for \(\mathcal{S}\). Then \(h(C)=\operatorname{rank}(M)\) where \(M=\begin{bmatrix}\mathcal{X}\\ A\end{bmatrix}\)._
Our proof of Theorem 2 rests on the following proposition, which puts an upper bound on the number of Hadamard gates required to simultaneously diagonalize a set of mutually commuting Pauli products.
**Proposition 1**.: _Let \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) be a sequence of \(m\) mutually commuting Pauli products of size \(n\) and let \(U\in\mathcal{C}_{n}\) be a Clifford operator such that \(U^{\dagger}P(\mathcal{S}_{:,i})U\) is diagonal for all \(i\). Then \(h(C)\geq\operatorname{rank}(\mathcal{X})\), where \(C\) is a Clifford circuit implementing \(U\)._
Proof.: Let \(\mathcal{S}^{(i)}\) be the state of \(\mathcal{S}\) resulting from conjugating all its Pauli products by the Clifford operator implemented by the first \(i\) gates of \(C\). If the \((i+1)\)th gate of \(C\) is a CNOT or \(S\) gate, then \(\operatorname{rank}(\mathcal{X}^{(i+1)})=\operatorname{rank}(\mathcal{X}^{(i)})\). Else, if the \((i+1)\)th gate of \(C\) is a Hadamard gate, then \(\mathcal{X}^{(i+1)}\) and \(\mathcal{X}^{(i)}\) have at least \(n-1\) rows in common and \(1\geq|\operatorname{rank}(\mathcal{X}^{(i)})-\operatorname{rank}(\mathcal{X}^ {(i+1)})|\geq 0\). Therefore, the number of Hadamard gates in \(C\) is at least \(|\operatorname{rank}(\mathcal{X})-\operatorname{rank}(\mathcal{X}^{(k)})|\), where \(k\) is the number of gates in \(C\). The circuit \(C\) performs a simultaneous diagonalization of the Pauli products constituting \(\mathcal{S}\), which implies that \(\operatorname{rank}(\mathcal{X}^{(k)})=0\), hence \(h(C)\geq|\operatorname{rank}(\mathcal{X})-\operatorname{rank}(\mathcal{X}^{( k)})|=\operatorname{rank}(\mathcal{X})\).
In the following we use \(\mathcal{S}_{:,:j}\) to denote the submatrix formed by the first \(j\) columns of \(\mathcal{S}\). Theorem 1 implies an upper bound on the number of Hadamard gates required to solve the H-Opt problem. There always exists a Clifford circuit \(C\) that is a diagonalization network for a sequence of Pauli products \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) such that \(\operatorname{rank}(M)\geq h(C)\) where \(M=\begin{bmatrix}\mathcal{X}\\ A\end{bmatrix}\) and \(A\) is the commutativity matrix associated with \(\mathcal{S}\). In order to prove Theorem 2 it remains to show that if \(C\) is a diagonalization network for \(\mathcal{S}\) then \(h(C)\geq\operatorname{rank}(M)\). To do so, we will show that we can derive a Clifford circuit \(C^{\prime}\) from \(C\) such that \(C^{\prime}\) is satisfying \(h(C^{\prime})=h(C)\) and is a solution to a specific instance \(\mathcal{S}^{\prime}\) of the simultaneous diagonalization problem, where \(\mathcal{S}^{\prime}=\begin{bmatrix}\mathbf{0}\\ M^{\prime}\end{bmatrix}\) is a sequence of mutually commuting Pauli products satisfying \(\operatorname{rank}(M^{\prime})\geq\operatorname{rank}(M)\). By Proposition 1 we then would have \(h(C)=h(C^{\prime})\geq\operatorname{rank}(M^{\prime})\geq\operatorname{rank}(M)\). We first give a construction for \(M^{\prime}\) and prove that \(\operatorname{rank}(M^{\prime})\geq\operatorname{rank}(M)\) via the following proposition.
**Proposition 2**.: _Let \(C\) be a diagonalization network for a sequence \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) of \(m\) Pauli products of size \(n\), and let \(C^{(i)}\) be the subcircuit of \(C\) truncated after its \(ith\) Hadamard gate. And let the
matrices \(\mathcal{S}^{(i)}=\begin{bmatrix}\mathcal{Z}^{(i)}\\ \mathcal{X}^{(i)}\end{bmatrix}\) be such that \(\mathcal{S}^{(0)}=\mathcal{S}\) and in the case where \(i>0\):_
\[\mathcal{S}^{(i)}_{:,j}=\mathbf{0}\quad\text{if }C^{(i-1)}\text{ is a diagonalization network for }\mathcal{S}_{:,:j},\] \[P(\mathcal{S}^{(i)}_{:,j})=\pm U^{\dagger}_{(i)}P(\mathcal{S}_{:,j})U_{(i)}\quad\text{otherwise},\]
_where \(U_{(i)}\in\mathcal{C}_{n}\) is the Clifford operator associated with \(C^{(i)}\). Consider the matrices \(M=\begin{bmatrix}\mathcal{X}\\ A\end{bmatrix}\) and \(M^{\prime}=\begin{bmatrix}\mathcal{X}\\ A^{\prime}\end{bmatrix}\) where \(A\) is the commutativity matrix associated with \(\mathcal{S}\), and \(A^{\prime}\) is a matrix composed of \(h(C)\) rows such that \(A^{\prime}_{i-1}=\mathcal{X}^{(i)}_{j}\) where \(j\) is the qubit on which the \(i\)th Hadamard gate of \(C\) is applied. Then we have \(\operatorname{rank}(M^{\prime})\geq\operatorname{rank}(M)\)._
Proof.: We will prove Proposition 2 by showing that the rows of \(M\) are in the span of the set formed by the rows of the \(\mathcal{X}^{(i)}\) matrices, which are themselves in the row space of the matrix \(M^{\prime}\). We first show that the rows of \(\mathcal{X}^{(i)}\) are in the span of the set formed by the first \(n+i\) rows of \(M^{\prime}\). For \(i=0\), this assertion is obviously true as we have \(\mathcal{X}_{j}=M^{\prime}_{j}\) for all \(0\leq j<n\). Let \(\tilde{\mathcal{S}}^{(i)}=\begin{bmatrix}\tilde{\mathcal{Z}}^{(i)}\\ \tilde{\mathcal{X}}^{(i)}\end{bmatrix}=U^{\dagger}\mathcal{S}^{(i)}U\), where \(U\) is the Clifford operator implemented by the \(\{\text{CNOT},S\}\) subcircuit comprised between the \(i\)th and \((i+1)\)th Hadamard gate of \(C\). Note that performing a CNOT or \(S\) operation on \(\mathcal{S}^{(i)}\) doesn't change the row space of \(\mathcal{X}^{(i)}\), therefore the rows of \(\tilde{\mathcal{X}}^{(i)}\) are in the row space of \(\mathcal{X}^{(i)}\). Let \(\boldsymbol{\alpha}\) be a vector of size \(m\) such that \(\alpha_{j}\) is the smallest non-negative integer for which \(C^{(\alpha_{j})}\) is a diagonalization network for \(\mathcal{S}_{:,j}\) and let \(k\) be the qubit on which the \((i+1)\)th Hadamard gate of \(C\) is applied. We can then distinguish \(3\) cases for the value of \(\alpha_{j}\), where \(0\leq j<m\):
* \(\alpha_{j}<i\), then \(\mathcal{S}^{(i+1)}_{:,j}=\tilde{\mathcal{S}}^{(i)}_{:,j}=\mathbf{0}\) for all such \(j\), because if \(C^{(i-1)}\) is a diagonalization network for \(\mathcal{S}_{:,:j}\) then \(C^{(i)}\) is also a diagonalization network for \(\mathcal{S}_{:,:j}\);
* \(\alpha_{j}=i\), then \(\mathcal{S}^{(i+1)}_{:,j}=\mathbf{0}\) and \(\tilde{\mathcal{S}}^{(i)}_{:,j}\) is diagonal, therefore \(\mathcal{X}^{(i+1)}_{:,j}=\tilde{\mathcal{X}}^{(i)}_{:,j}=\mathbf{0}\);
* \(\alpha_{j}>i\) then \(\tilde{\mathcal{S}}^{(i)}_{:,j}\) encodes the same Pauli product as \(\mathcal{S}^{(i+1)}_{:,j}\) up to a Hadamard operation on qubit \(k\), therefore \(\mathcal{X}^{(i+1)}_{\ell,j}=\tilde{\mathcal{X}}^{(i)}_{\ell,j}\) for all \(\ell\neq k\).
To sum up we have \(\mathcal{X}^{(i+1)}_{j}=\tilde{\mathcal{X}}^{(i)}_{j}\) for all \(j\neq k\) and by definition we have \(\mathcal{X}^{(i+1)}_{k}=M^{\prime}_{n+i+1}\). It follows that the rows of \(\mathcal{X}^{(i+1)}\) are in the span of the set formed by the first \(n+i+1\) rows of \(M^{\prime}\). We now show that, for all \(j\), \(A_{j}\) is in the row space of \(\mathcal{X}^{(\alpha_{j})}\). Since \(\mathcal{S}^{(\alpha_{j})}_{:,j}\) is diagonal, similarly to Equation 8, the following holds:
\[\bigoplus_{k\in K}\mathcal{X}^{(\alpha_{j})}_{k,\ell}=\begin{cases}0&\text{if } \mathcal{S}_{:,\ell}\text{ commutes with }\mathcal{S}_{:,j}\text{ or }\ell\leq j,\\ 1&\text{if }\mathcal{S}_{:,\ell}\text{ anticommutes with }\mathcal{S}_{:,j},\forall\ell>j, \end{cases} \tag{11}\]
where \(K=\{k\mid\mathcal{Z}^{(\alpha_{j})}_{k,j}=1\}\). This sum satisfies the same properties as the row \(j\) of the commutativity matrix \(A\) associated with \(\mathcal{S}\), therefore we have:
\[A_{j}=\bigoplus_{k\in K}\mathcal{X}^{(\alpha_{j})}_{k}. \tag{12}\]
Thus, for all \(j\), \(A_{j}\) is in the row space of the matrix \(\mathcal{X}^{(\alpha_{j})}\), whose rows are themselves in the row space of \(M^{\prime}\). Consequently, \(M_{j}\) is in the row space of \(M^{\prime}\) for all \(j\) and \(\operatorname{rank}(M^{\prime})\geq\operatorname{rank}(M)\).
The proof of Theorem 2 can now be formulated based on Proposition 1 and 2.
Proof of Theorem 2.: Consider a Clifford circuit \(C^{\prime}\) containing the same number of Hadamard gates as \(C\), acting over \(n+h(C)\) qubits and constructed by the following process:
1. Start with \(C^{\prime}\) as a copy of \(C\) with \(h(C)\) additional qubits.
2. Remove all \(S\) gates from \(C^{\prime}\).
3. After the \(i\)th Hadamard gate of \(C^{\prime}\), insert a SWAP gate operating over the qubits \(n+i-1\) and \(j\) where \(j\) is the qubit on which the \(i\)th Hadamard gate is applied.
An example of this process is provided in Figure 2. The SWAP\({}_{i,j}\) gate can be implemented using CNOT gates:
\[\text{SWAP}_{i,j}=\text{CNOT}_{i,j}\text{CNOT}_{j,i}\text{CNOT}_{i,j}\]
This operation can be performed on \(\mathcal{S}\) by swapping the rows \(\mathcal{Z}_{i}\) and \(\mathcal{Z}_{j}\) as well as the rows \(\mathcal{X}_{i}\) and \(\mathcal{X}_{j}\). Let \(M^{\prime}\) be defined as in Proposition 2, let \(\mathcal{S}^{\prime}=\begin{bmatrix}\mathbf{0}\\ M^{\prime}\end{bmatrix}\) be a sequence of \(m\) mutually commuting Pauli products of size \(n+h(C)\) and let \(U^{\prime}\) be the Clifford operator implemented by \(C^{\prime}\); we will show that \(U^{\prime\dagger}P(\mathcal{S}^{\prime}_{:,i})U^{\prime}\) is diagonal for all \(i\). We reuse \(C^{(i)}\) and \(\mathcal{S}^{(i)}=\begin{bmatrix}\mathcal{Z}^{(i)}\\ \mathcal{X}^{(i)}\end{bmatrix}\) as defined in Proposition 2, and we define \(\mathcal{S}^{\prime(i)}=\begin{bmatrix}\mathcal{Z}^{\prime(i)}\\ \mathcal{X}^{\prime(i)}\end{bmatrix}\) and \(C^{\prime(i)}\) analogously where \(C^{\prime(i)}\) is the subcircuit resulting from truncating \(C^{\prime}\) after its \(i\)th inserted SWAP gate and \(C^{\prime(0)}\) is the empty circuit.
We now prove by induction that for all \(0\leq i\leq h(C)\), \(0\leq j<n\) and \(n\leq k<n+i\) we have \(\mathcal{X}^{\prime(i)}_{j}=\mathcal{X}^{(i)}_{j}\), \(\mathcal{Z}^{\prime(i)}_{j}=\mathbf{0}\) and \(\mathcal{X}^{\prime(i)}_{k}=\mathbf{0}\). For \(i=0\) and \(0\leq j<n\), the equalities \(\mathcal{X}^{\prime}_{j}=\mathcal{X}_{j}\) and \(\mathcal{Z}^{\prime(i)}_{j}=\mathbf{0}\)
Figure 2: Construction example of \(C^{\prime}\) and \(\mathcal{S}^{\prime}\) for the proof of Theorem 2.
are satisfied by definition. Let \(0\leq i<h(C)\) and \(\alpha_{i}\) be the qubit on which the \((i+1)\)th Hadamard gate of \(C\) is applied. The matrix \(\mathcal{S}^{(i+1)}\) can be obtained from \(\mathcal{S}^{(i)}\) by performing a sequence of \(\{\text{CNOT},S\}\) operations and a Hadamard operation on qubit \(\alpha_{i}\). Similarly, the matrix \(\mathcal{S}^{\prime(i+1)}\) can be obtained from \(\mathcal{S}^{\prime(i)}\) by performing the same sequence of CNOT operations, a Hadamard operation on qubit \(\alpha_{i}\) and a SWAP operation acting on the qubits \(\alpha_{i}\) and \(n+i\). In both cases, the rows \(\mathcal{X}^{(i)}_{j}\) and \(\mathcal{X}^{\prime(i)}_{j}\), where \(0\leq j<n,j\neq\alpha_{i}\), are only affected by the CNOT operations, and so if \(\mathcal{X}^{\prime(i)}_{j}=\mathcal{X}^{(i)}_{j}\) for all \(0\leq j<n\), then \(\mathcal{X}^{\prime(i+1)}_{j}=\mathcal{X}^{(i+1)}_{j}\) for all \(0\leq j<n,j\neq\alpha_{i}\). Notice that the only gate in \(C^{\prime}\) acting on the qubit \(n+i\) is the SWAP gate operating on the qubits \(\alpha_{i}\) and \(n+i\) and recall that by definition \(\mathcal{X}^{\prime}_{n+i}=\mathcal{X}^{(i+1)}_{\alpha_{i}}\); then, because this SWAP gate is the last gate of the circuit \(C^{\prime(i+1)}\), we have \(\mathcal{X}^{\prime(i+1)}_{\alpha_{i}}=\mathcal{X}^{\prime}_{n+i}=\mathcal{X} ^{(i+1)}_{\alpha_{i}}\). Therefore, for all \(0\leq j<n\), if \(\mathcal{X}^{\prime(i)}_{j}=\mathcal{X}^{(i)}_{j}\) then \(\mathcal{X}^{\prime(i+1)}_{j}=\mathcal{X}^{(i+1)}_{j}\).
If \(\mathcal{Z}^{\prime(i)}_{j}=\mathbf{0}\) for all \(0\leq j<n\), then applying a sequence of CNOT operations on \(\mathcal{S}^{\prime(i)}\) acting on the first \(n\) qubits will not alter the matrix \(\mathcal{Z}^{\prime(i)}\). Thus, if \(\mathcal{Z}^{\prime(i)}_{j}=\mathbf{0}\) for all \(0\leq j<n\), then \(\mathcal{Z}^{\prime(i+1)}_{j}=\mathcal{Z}^{\prime(i)}_{j}=\mathbf{0}\) for all \(0\leq j<n,j\neq\alpha_{i}\). Furthermore, if \(\mathcal{Z}^{\prime(i)}_{\alpha_{i}}=\mathbf{0}\), then applying a Hadamard operation on \(\mathcal{S}^{\prime(i)}\) acting on qubit \(\alpha_{i}\) after this sequence of CNOT operations would yield \(\mathcal{X}^{\prime(i)}_{\alpha_{i}}=\mathbf{0}\). This Hadamard operation is followed by a SWAP operation between the qubits \(\alpha_{i}\) and \(k=n+i\) which would induce that \(\mathcal{X}^{\prime(i+1)}_{k}=\mathbf{0}\) and \(\mathcal{Z}^{\prime(i+1)}_{\alpha_{i}}=\mathbf{0}\) because \(\mathcal{Z}^{\prime}_{k}=\mathbf{0}\). Thus, for all \(0\leq j<n\), if \(\mathcal{Z}^{\prime(i)}_{j}=\mathbf{0}\) then \(\mathcal{Z}^{\prime(i+1)}_{j}=\mathbf{0}\). In addition, for all \(k\) such that \(n\leq k<n+i\), the circuit \(C^{\prime(i+1)}\) doesn't contain any gate operating on the qubit \(k\) other than those included in \(C^{\prime(i)}\); therefore if \(\mathcal{X}^{\prime(i)}_{k}=\mathbf{0}\) for all \(n\leq k<n+i\) then \(\mathcal{X}^{\prime(i+1)}_{k}=\mathbf{0}\) for all \(n\leq k<n+i+1\).
Let \(i=h(C)\), by combining the facts that \(\mathcal{X}^{\prime(i)}_{j}=\mathcal{X}^{(i)}_{j}=\mathbf{0}\) for all \(0\leq j<n\) and \(\mathcal{X}^{\prime(i)}_{n+j}=\mathbf{0}\) for all \(0\leq j<i\), we can deduce that \(\mathcal{X}^{\prime(i)}\) is the null matrix which imply that \(U^{\prime\dagger}P(\mathcal{S}^{\prime}_{,j})U^{\prime}\) is diagonal for all \(j\) where \(U^{\prime}\) is the Clifford operator implemented by \(C^{\prime}\). By Proposition 1 we have \(h(C)=h(C^{\prime})\geq\text{rank}(M^{\prime})\), and by Proposition 2 we have \(\text{rank}(M^{\prime})\geq\text{rank}(M)\) which entails \(h(C)\geq\text{rank}(M)\). This lower bound is satisfied by Algorithm 1 as stated by Theorem 1, this implies that Algorithm 1 is optimal and concludes the proof of Theorem 2.
**Pauli rotations ordering.** Algorithm 1 solves the H-Opt problem for a fixed sequence of Pauli rotations. However, if two adjacent Pauli rotations are commuting then their order could be inverted, leading to another sequence of Pauli rotations representing the same operator. We show that changing the ordering in this way doesn't affect the minimal number of Hadamard gates required to implement the diagonalization network associated with the sequence of Pauli rotations. Let \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) be a sequence of Pauli products, let \(i\) be such that \(\mathcal{S}_{:,i}\) commutes with \(\mathcal{S}_{:,i+1}\) and let \(\mathcal{S}^{\prime}=\begin{bmatrix}\mathcal{Z}^{\prime}\\ \mathcal{X}^{\prime}\end{bmatrix}\) be a sequence of Pauli products obtained by swapping the columns \(i\) and \(i+1\) of \(\mathcal{S}\). Let \(M=\begin{bmatrix}\mathcal{X}\\ A\end{bmatrix}\) and \(M^{\prime}=\begin{bmatrix}\mathcal{X}^{\prime}\\ A^{\prime}\end{bmatrix}\) where \(A\) and \(A^{\prime}\) are the commutativity matrices of \(\mathcal{S}\) and \(\mathcal{S}^{\prime}\) respectively. Since \(\mathcal{S}_{:,i}\) commutes with \(\mathcal{S}_{:,i+1}\) we have \(A_{i,i+1}=A^{\prime}_{i,i+1}=0\), and so \(M_{:,i}=M^{\prime}_{:,i+1}\) and \(M_{:,i+1}=M^{\prime}_{:,i}\). The matrix \(M^{\prime}\) can be obtained from \(M\) by swapping its columns \(i\) and \(i+1\), which entails \(\text{rank}(M)=\text{rank}(M^{\prime})\). Thus, inverting the order of two adjacent and commuting Pauli rotations doesn't change the minimal number of Hadamard gates required to implement the diagonalization network associated with the sequence of Pauli rotations.
**Other gate sets.** One could consider the problem over other Clifford gate sets, which raises
the question of whether these gate sets could perform better than the \(\{X,\text{CNOT},S,H\}\) gate set considered. In order to achieve a number of Hadamard gates inferior to \(\text{rank}(M)\), where \(M\) is defined as in Theorem 2, the gate set considered needs to have at least one gate, other than the Hadamard gate, such that its decomposition over the \(\{X,\text{CNOT},S,H\}\) gate set necessarily involves at least one Hadamard gate. In other words, the number of Hadamard gates is at least \(\text{rank}(M)\) for any gate set in which the Hadamard gate is the only gate \(U\) for which there exists a non-diagonal Pauli operator \(P\) such that \(U^{\dagger}PU\) is diagonal.
### Extension to Clifford\(+R_{z}\) circuit re-synthesis
Any Clifford\(+R_{Z}\) circuit can be characterized by a sequence of Pauli rotations followed by a final Clifford operator \(C_{f}\)[2]. We demonstrated that Algorithm 1 solves the H-Opt problem optimally, and so it can be used to synthesize a sequence of Pauli rotations up to a final Clifford operator \(C_{f^{\prime}}\) with a minimal number of Hadamard gates. The synthesis of the full Clifford\(+R_{Z}\) circuit can then be performed by coupling Algorithm 1 with a procedure to synthesize the Clifford operator \(C_{f}\cdot C_{f^{\prime}}\). We will demonstrate that this procedure can in fact also be performed by Algorithm 1 with a minimal number of Hadamard gates.
A Clifford operator \(U\in\mathcal{C}_{n}\) can be represented by a tableau encoding \(2n\) Pauli operators such that \(n\) of them are mutually commuting Pauli operators called stabilizer generators and the other half are also mutually commuting Pauli operators referred to as destabilizer generators. If the stabilizer generators are all diagonalized, then the Clifford operator can be synthesized using only \(\{X,S,\text{CNOT}\}\) gates [23]. Thus, synthesizing a Clifford operator with a minimal number of Hadamard gates amounts to finding a Clifford circuit \(C\) containing a minimal number of Hadamard gates and such that \(U^{\dagger}PU\) is diagonal for all \(P\) in the stabilizer generators, where \(U\) is the Clifford operator associated with \(C\). We will demonstrate via the following proposition that a Clifford circuit satisfying these properties is produced by Algorithm 1 when the sequence of Pauli products \(\mathcal{S}\) given as input encodes the stabilizer generators in any order.
**Proposition 3**.: _Let \(\mathcal{S}\) be a sequence of \(m\) mutually commuting Pauli products and \(C\) be the Clifford circuit returned by Algorithm 1 when \(\mathcal{S}\) is given as input. Then \(U^{\dagger}P(\mathcal{S}_{:,j})U\) is diagonal for all \(j\), where \(U\) is the Clifford operator associated with \(C\)._
Proof.: Let \(P\) and \(P^{\prime}\) be commuting Pauli operators such that \(P\) is diagonal and \(P^{\prime}\) is not diagonal. If there exists \(k\) such that \(P^{\prime}_{k}=X\) and \(P^{\prime}_{\ell}\in\{I,Z\}\) for all \(\ell\neq k\), then \(P_{k}=I\) because \(P\) commutes with \(P^{\prime}\) and \(P\) is diagonal. Therefore conjugating \(P\) and \(P^{\prime}\) with a Hadamard gate on qubit \(k\) will result in both operators being diagonalized. Let \(C^{(i)}\) be the subcircuit of \(C\) truncated before its \(i\)th Hadamard gate, with \(C^{(0)}\) defined as the empty circuit, and let \(U_{(i)}\) be the Clifford operator associated with \(C^{(i)}\). Due to the construction process of \(C\), for each subcircuit \(C^{(i)}\) where \(i>0\) there exists \(j\) such that \(P^{\prime}=U^{\dagger}_{(i)}\mathcal{S}_{:,j}U_{(i)}\) satisfies \(P^{\prime}_{k}=X\) and \(P^{\prime}_{\ell}\in\{I,Z\}\) for all \(\ell\neq k\), where \(k\) is the qubit on which the \(i\)th Hadamard gate of \(C\) is applied. Hence, for all \(i<h(C)\) and for all \(j\), if \(U^{\dagger}_{(i)}P(\mathcal{S}_{:,j})U_{(i)}\) is diagonal, then \(U^{\dagger}_{(i+1)}P(\mathcal{S}_{:,j})U_{(i+1)}\) is also diagonal. The circuit \(C\) is a diagonalization network for \(\mathcal{S}\), which implies that for all \(j\) there exists \(U_{(i)}\) such that \(U^{\dagger}_{(i)}P(\mathcal{S}_{:,j})U_{(i)}\) is diagonal, and so \(U^{\dagger}P(\mathcal{S}_{:,j})U\) is diagonal for all \(j\).
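The first step of this proof can be checked directly on dense matrices. The toy operators below are our own choice (not from the paper): \(P=I\otimes Z\) is diagonal, \(P^{\prime}=X\otimes Z\) is not, the two commute, and a single Hadamard gate on qubit 0 diagonalizes both.

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

P  = np.kron(I, Z)   # diagonal, with P_0 = I where P'_0 = X
Pp = np.kron(X, Z)   # non-diagonal, P'_0 = X and P'_1 = Z
assert np.allclose(P @ Pp, Pp @ P)                 # the two operators commute

U = np.kron(H, I)    # Hadamard on qubit 0
for op in (U.conj().T @ P @ U, U.conj().T @ Pp @ U):
    assert np.allclose(op, np.diag(np.diag(op)))   # both are now diagonal
```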
Based on Proposition 3, we can now show that Algorithm 1 can be used to synthesize a sequence of Pauli rotations followed by a final Clifford operator with a minimal number of Hadamard gates. Let \(\mathcal{S}\) be a sequence of Pauli products associated with the sequence of Pauli rotations we are aiming to implement, let \(\mathcal{S}^{\prime}\) be a sequence of Pauli products encoding the stabilizer generators of the final Clifford operator, and let \(\tilde{\mathcal{S}}=\begin{bmatrix}\mathcal{S}&\mathcal{S}^{\prime}\end{bmatrix}\). Any \(\{X,\text{CNOT},S,H,R_{Z}\}\) circuit implementing this sequence of Pauli rotations followed by the final Clifford operator is necessarily a diagonalization network for \(\tilde{\mathcal{S}}\). The circuit \(C\) returned by Algorithm 1 when \(\tilde{\mathcal{S}}\) is given as input satisfies this condition with a minimal number of Hadamard gates. Moreover, as indicated by Proposition 3, \(C\) simultaneously diagonalizes the sequence of Pauli products encoded by \(\mathcal{S}^{\prime}\). Thus, the synthesis of the sequence of Pauli rotations and the final Clifford operator can be completed with a minimal number of Hadamard gates by inserting \(\{X,\text{CNOT},S,R_{Z}\}\) subcircuits into \(C\).
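As a sketch, the coupling described above reduces to a single matrix concatenation; `algorithm1` below is a stand-in for the Algorithm 1 of this paper, not a real library call.

```python
import numpy as np

def algorithm1(S):
    """Stand-in for Algorithm 1 (minimal-Hadamard diagonalization network)."""
    raise NotImplementedError

def resynthesize(S, S_final):
    """S encodes the Pauli rotations; S_final encodes the stabilizer
    generators of the final Clifford operator. Both are 2n-row F2 matrices."""
    # Any circuit implementing the rotations followed by the final Clifford
    # is a diagonalization network for [S S_final], so Algorithm 1 is optimal.
    return algorithm1(np.hstack([S, S_final]) % 2)
```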
## 4 Internal Hadamard gates minimization
In this section, we tackle the problem of minimizing the number of internal Hadamard gates, which corresponds to the number of Hadamard gates occurring between the first and the last non-Clifford \(R_{Z}\) gate of the circuit. We first give an algorithm in Section 4.1 that performs the synthesis of a diagonalization network while minimizing the number of internal Hadamard gates. We then prove its optimality in Section 4.2.
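In code, the quantity being minimized could be counted as follows (a sketch with our own gate-name convention for a flat gate list):

```python
def internal_h_count(circuit):
    """Number of Hadamard gates strictly between the first and the last
    non-Clifford rz gate of a circuit given as a list of gate names."""
    rz_positions = [i for i, g in enumerate(circuit) if g == "rz"]
    if not rz_positions:
        return 0
    first, last = rz_positions[0], rz_positions[-1]
    return sum(1 for g in circuit[first:last] if g == "h")

# The outer Hadamard gates do not count; only the one between the two rz does.
assert internal_h_count(["h", "rz", "h", "cnot", "rz", "h"]) == 1
```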
### Algorithm
Solving the Internal-H-Opt problem for a sequence \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) of Pauli products consists in finding a Clifford operator \(U\) such that \(\text{rank}(\tilde{M})\) is minimal, where \(\tilde{M}=\begin{bmatrix}\tilde{\mathcal{X}}\\ A\end{bmatrix}\), \(\tilde{\mathcal{S}}=\begin{bmatrix}\tilde{\mathcal{Z}}\\ \tilde{\mathcal{X}}\end{bmatrix}=U^{\dagger}\mathcal{S}U\) and \(A\) is the commutativity matrix associated with \(\mathcal{S}\). The inequality \(\text{rank}(A)\leq\text{rank}(\tilde{M})\leq\text{rank}(M)\leq\text{rank}(A)+n\), where \(M=\begin{bmatrix}\mathcal{X}\\ A\end{bmatrix}\) and \(n\) is the number of qubits, implies that the circuit produced by Algorithm 1 contains at most \(n\) additional internal Hadamard gates when compared to an optimal solution. To go beyond this approximation and obtain an optimal solution, it is necessary to find a sequence of Clifford operations which, when applied to \(\mathcal{S}\), transforms \(\mathcal{X}\) into \(\tilde{\mathcal{X}}\). As discussed in Section 3.3, implementing a Clifford operator can be done in two parts: finding a circuit that simultaneously diagonalizes the stabilizer generators of the Clifford operator and finishing the implementation with a \(\{X,S,\text{CNOT}\}\) circuit. The \(\{X,S,\text{CNOT}\}\) circuit can be disregarded as the associated operations have no impact on the rank of \(\tilde{M}\). Hence, solving the Internal-H-Opt problem for a sequence \(\mathcal{S}\) of Pauli products consists in finding a set of mutually commuting Pauli products, encoded in a matrix \(\mathcal{S}^{\prime}\), that are simultaneously diagonalized by a Clifford operator \(U\) and such that \(\text{rank}(\tilde{M})\) is minimal, where \(\tilde{M}=\begin{bmatrix}\tilde{\mathcal{X}}\\ A\end{bmatrix}\) and \(\tilde{\mathcal{S}}\) is the sequence of Pauli products resulting from conjugating all the Pauli products of \(\mathcal{S}\) by \(U\). As stated by Proposition 3, a circuit that simultaneously diagonalizes the Pauli products of \(\mathcal{S}^{\prime}\) is produced by Algorithm 1 when \(\mathcal{S}^{\prime}\) is given as input. Thus, if \(\begin{bmatrix}\mathcal{S}^{\prime}&\mathcal{S}\end{bmatrix}\) is given as input to Algorithm 1, then the constructed circuit is a diagonalization network for \(\mathcal{S}\) containing a minimal number of internal Hadamard gates. An example of the execution of Algorithm 2 is given in Figure 3.
We propose an algorithm, whose pseudo-code is given in Algorithm 2, to solve the Internal-H-Opt problem optimally by finding the Pauli products constituting \(\mathcal{S}^{\prime}\). Let \(J_{m}\) be an exchange matrix of size \(m\times m\) defined as follows:
\[(J_{m})_{i,j}=\begin{cases}1&\text{if }i+j=m-1,\\ 0&\text{otherwise.}\end{cases} \tag{13}\]
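A quick check of this definition (toy values of our own): right-multiplying by the exchange matrix reverses the column order.

```python
import numpy as np

m = 4
J = np.fliplr(np.eye(m, dtype=np.uint8))   # (J_m)_{i,j} = 1 iff i + j = m - 1
S = np.arange(2 * m, dtype=np.uint8).reshape(2, m)  # stand-in for a Pauli matrix
assert np.array_equal(S @ J, S[:, ::-1])   # the columns of S, in reverse order
```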
As such, the Pauli products encoded by the columns of the matrix \(\mathcal{S}J_{m}\) are the same as the ones encoded by \(\mathcal{S}\) but in reverse order. The algorithm starts by performing a call to Algorithm 1 to obtain a Clifford circuit \(C\) that is a diagonalization network for the sequence of Pauli products encoded in \(\mathcal{S}J_{m}\). Then, a set of stabilizer generators associated with the inverse of \(C\) is encoded in the columns of \(\mathcal{S}^{\prime}\) and a second and final call to Algorithm 1 is performed where \(\big{[}\mathcal{S}^{\prime}\quad\mathcal{S}\big{]}\) is given as input. We prove that the resulting circuit gives an optimal solution to the Internal-H-Opt problem in the next subsection. When one uses Algorithm 2 to perform the re-synthesis of a circuit, as explained in Section 3.3, the stabilizer generators associated with the final Clifford operator of the input circuit can be appended to the final call to Algorithm 1 to obtain a full re-synthesis of the circuit containing both a minimal number of Hadamard gates and internal Hadamard gates.
Note that a set of stabilizer generators associated with the inverse of the Clifford circuit \(C\) can be computed in \(\mathcal{O}(n^{2}m)\) using the tableau representation as \(C\) is composed of \(\mathcal{O}(nm)\) gates and a tableau can be updated in \(\mathcal{O}(n)\) operations when a Clifford gate is applied. The complexity of the algorithm then resides in the two calls made to Algorithm 1. The first call has a complexity of \(\mathcal{O}(n^{2}m)\) as \(\mathcal{S}J_{m}\) is composed of \(m\) Pauli products. For the second call, \(n+m\) Pauli products are given as input because a Clifford operator acting on \(n\) qubits has \(n\) stabilizer generators. This induces a complexity of \(\mathcal{O}(n^{2}(n+m))=\mathcal{O}(n^{3}+n^{2}m)\), which corresponds to \(\mathcal{O}(n^{2}m)\) in the typical case where \(n\leq m\). Thus, the overall complexity of Algorithm 2 matches the complexity of Algorithm 1.
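Putting the pieces together, Algorithm 2 can be sketched as follows; `algorithm1` and the tableau-based stabilizer extraction are stand-ins for the procedures described above, not real library calls.

```python
import numpy as np

def algorithm1(S):
    """Stand-in for Algorithm 1: returns a diagonalization network for S."""
    raise NotImplementedError

def stabilizers_of_inverse(C, n):
    """Stand-in: stabilizer generators of C^{-1}, via tableau simulation."""
    raise NotImplementedError

def algorithm2(S, n):
    m = S.shape[1]
    J = np.fliplr(np.eye(m, dtype=np.uint8))
    C = algorithm1(S @ J % 2)                   # first call, columns reversed
    S_prime = stabilizers_of_inverse(C, n)      # a 2n x n matrix over F2
    return algorithm1(np.hstack([S_prime, S]))  # second and final call
```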
### Optimality
This subsection is dedicated to the proof of the following theorem, which states the optimality of Algorithm 2.
**Theorem 3**.: _Let \(\mathcal{S}\) be a sequence of \(m\) Pauli products, \(A\) be its commutativity matrix and let \(C\) be the Clifford circuit returned by Algorithm 2 when \(\mathcal{S}\) is given as input. Then \(C\) optimally solves the Internal-H-Opt problem with \(\operatorname{rank}(A)\) internal Hadamard gates._
We first show that the optimal number of internal Hadamard gates is equal to \(\operatorname{rank}(A)\). Our proof rests on the following proposition.
**Proposition 4**.: _Let \(\mathcal{S}\) be a sequence of \(m\) Pauli products, \(A\) be its commutativity matrix and let \(\mathbf{y},\mathbf{y}^{\prime}\) be such that \(A\mathbf{y}=\mathbf{0}\) and \(A\mathbf{y}^{\prime}=\mathbf{0}\). Then the Pauli products encoded by \(\mathcal{S}\mathbf{y}\) and \(\mathcal{S}\mathbf{y}^{\prime}\) are commuting._
Proof.: Notice that the Pauli product \(\mathcal{S}_{:,i}\) commutes with \(\mathcal{S}_{:,j}\) if and only if \((A\oplus A^{T})_{i,j}=(A\oplus A^{T})_{j,i}=0\). Then \(\mathcal{S}\mathbf{y}^{\prime}\) commutes with \(\mathcal{S}_{:,i}\) if and only if \(v_{i}=0\), where \(\mathbf{v}=(A\oplus A^{T})\mathbf{y}^{\prime}\). And \(\mathcal{S}\mathbf{y}^{\prime}\) commutes with \(\mathcal{S}\mathbf{y}\) if and only if \(\mathbf{y}^{T}\mathbf{v}=\mathbf{y}^{T}(A\oplus A^{T})\mathbf{y}^{\prime}=0\). As \(A\mathbf{y}=\mathbf{0}\) and \(A\mathbf{y}^{\prime}=\mathbf{0}\), we can show that
\(\mathbf{y}^{T}(A\oplus A^{T})\mathbf{y}^{\prime}=\mathbf{y}^{T}A\mathbf{y}^{\prime}\oplus\mathbf{y}^{T }A^{T}\mathbf{y}^{\prime}=\mathbf{y}^{T}A\mathbf{y}^{\prime}\oplus(A\mathbf{y})^{T}\mathbf{y}^{ \prime}=0\), which implies that \(\mathcal{S}\mathbf{y}\) commutes with \(\mathcal{S}\mathbf{y}^{\prime}\).
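The commutation test used implicitly throughout this proof can be made explicit; the encoding below (\([z\mid x]\) vectors over \(\mathbb{F}_{2}\)) and the examples are our own.

```python
import numpy as np

def commute(p, q, n):
    """Pauli products p, q as length-2n F2 vectors [z | x]: they commute
    iff the symplectic form z_p . x_q + x_p . z_q vanishes mod 2."""
    zp, xp = p[:n], p[n:]
    zq, xq = q[:n], q[n:]
    return (int(zp @ xq) + int(xp @ zq)) % 2 == 0

ZZ = np.array([1, 1, 0, 0])    # Z (x) Z
XX = np.array([0, 0, 1, 1])    # X (x) X
XZ = np.array([0, 1, 1, 0])    # X (x) Z
assert commute(ZZ, XX, 2)      # Z(x)Z and X(x)X commute
assert not commute(ZZ, XZ, 2)  # Z(x)Z and X(x)Z anticommute
```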
Based on Proposition 4 we can show that the optimal number of internal Hadamard gates is equal to \(\mathrm{rank}(A)\). Let \(\mathcal{S}^{\prime}\) be a sequence of Pauli products such that the columns of \(\mathcal{S}^{\prime}\) form a spanning set of \(\{\mathcal{S}\mathbf{y}\mid A\mathbf{y}=\mathbf{0},\mathbf{y}\in\mathbb{F}_{2}^{m}\}\). It follows that for all \(\mathbf{y}\) satisfying \(A\mathbf{y}=\mathbf{0}\) there exists a vector \(\mathbf{y}^{\prime}\) such that \(\mathcal{S}\mathbf{y}=\mathcal{S}^{\prime}\mathbf{y}^{\prime}\). Moreover, Proposition 4 entails that all the Pauli products of \(\mathcal{S}^{\prime}\) are mutually commuting. Therefore, if the Pauli products encoded in \(\mathcal{S}^{\prime}\) were all diagonal, then, for all \(\mathbf{y}\) satisfying \(A\mathbf{y}=\mathbf{0}\), the Pauli product \(\mathcal{S}\mathbf{y}\) would be diagonal, i.e. \(\mathcal{X}\mathbf{y}=\mathbf{0}\). Let \(C^{\prime}\) be the circuit resulting from the execution of Algorithm 1 when \(\mathcal{S}^{\prime}\) is given as input and let \(\tilde{\mathcal{S}}\) be the sequence of Pauli products where, for all \(i\), the Pauli product encoded by \(\tilde{\mathcal{S}}_{:,i}\) is equal to the Pauli product encoded by \(\mathcal{S}_{:,i}\) conjugated by the Clifford operator associated with \(C^{\prime}\). Let \(\tilde{M}=\begin{bmatrix}\tilde{\mathcal{X}}\\ A\end{bmatrix}\). For all \(\mathbf{y}\) satisfying \(A\mathbf{y}=\mathbf{0}\) we have \(\tilde{\mathcal{X}}\mathbf{y}=\mathbf{0}\) because \(C^{\prime}\) performs a simultaneous diagonalization of the Pauli products of \(\mathcal{S}^{\prime}\), as stated by Proposition 3. Consequently we have \(\tilde{M}\mathbf{y}=\mathbf{0}\) for all \(\mathbf{y}\in\mathrm{nullspace}(A)\) and so \(\mathrm{rank}(\tilde{M})=\mathrm{rank}(A)\). Then we can use Algorithm 1 to produce a Clifford circuit \(\tilde{C}\) that is a diagonalization network for \(\tilde{\mathcal{S}}\) and such that \(h(\tilde{C})=\mathrm{rank}(\tilde{M})=\mathrm{rank}(A)\). It follows that the Clifford circuit \(C^{\prime}::\tilde{C}\) is a diagonalization network for \(\mathcal{S}\) containing \(h(\tilde{C})=\mathrm{rank}(A)\) internal Hadamard gates.
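For concreteness, such a sequence \(\mathcal{S}^{\prime}\) can be built directly from a basis of \(\mathrm{nullspace}(A)\) over \(\mathbb{F}_{2}\); the sketch below (our own helpers) is the naive \(\mathcal{O}(m^{3})\) route, which the next paragraph explains how Algorithm 2 avoids.

```python
import numpy as np

def gf2_nullspace_basis(A):
    """Basis of {y : Ay = 0 over F2} by Gaussian elimination, O(m^3)."""
    R = A.copy().astype(np.uint8) % 2
    rows, cols = R.shape
    pivots, rank = [], 0
    for c in range(cols):
        piv = next((i for i in range(rank, rows) if R[i, c]), None)
        if piv is None:
            continue
        R[[rank, piv]] = R[[piv, rank]]
        for i in range(rows):
            if i != rank and R[i, c]:
                R[i] ^= R[rank]
        pivots.append(c)
        rank += 1
    basis = []
    for c in range(cols):
        if c in pivots:
            continue
        y = np.zeros(cols, dtype=np.uint8)
        y[c] = 1                      # one free variable per basis vector
        for row, p in enumerate(pivots):
            y[p] = R[row, c]          # back-substitution over F2
        basis.append(y)
    return basis

def naive_S_prime(S, A):
    """Spanning set {S y | A y = 0}, one column per nullspace basis vector."""
    return [S @ y % 2 for y in gf2_nullspace_basis(A)]
```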
Figure 3: Example of an execution of Algorithm 2. For a sequence of Pauli products \(\mathcal{S}\) (c), the first call to Algorithm 1 will produce a circuit (a) with associated Pauli products \(\mathcal{S}^{\prime}\) (d). The algorithm will then output the circuit produced by Algorithm 1 when \(\begin{bmatrix}\mathcal{S}^{\prime}&\mathcal{S}\end{bmatrix}\) is given as input (b).

To solve the Internal-H-Opt problem optimally it is then essential to find a spanning set of \(\{\mathcal{S}\mathbf{y}\mid A\mathbf{y}=\mathbf{0},\mathbf{y}\in\mathbb{F}_{2}^{m}\}\), which we encode in the columns of \(\mathcal{S}^{\prime}\). Constructing such a spanning set naively by finding all \(\mathbf{y}\in\mathbb{F}_{2}^{m}\) satisfying \(A\mathbf{y}=\mathbf{0}\) would imply a complexity of \(\mathcal{O}(m^{3})\) using a Gaussian elimination procedure, which is more computationally expensive than minimizing the number of Hadamard gates via Algorithm 1 in the case where \(n<m\). Fortunately, we can actually rely on Algorithm 1 to compute \(\mathcal{S}^{\prime}\) with a complexity of \(\mathcal{O}(n^{2}m)\), as is done in Algorithm 2. Indeed, if Algorithm 1 is used to construct a diagonalization network \(C\) for the sequence of Pauli products \(\mathcal{S}J_{m}\), then the stabilizer generators of the Clifford operator implemented by \(C\) form a spanning set of \(\{\mathcal{S}\boldsymbol{y}\mid A\boldsymbol{y}=\boldsymbol{0},\boldsymbol{y}\in\mathbb{F}_{2}^{m}\}\). We demonstrate this statement via the following proposition.
**Proposition 5**.: _Let \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) be a sequence of \(m\) Pauli products, \(J_{m}\) be an exchange matrix of size \(m\times m\) and let \(U\) be the Clifford operator associated with the Clifford circuit \(C\) produced by Algorithm 1 when \(\mathcal{S}J_{m}\) is given as input. Let \(\tilde{\mathcal{S}}\) be the sequence of Pauli products obtained by conjugating all the Pauli products of \(\mathcal{S}\) by \(U\), then \(\tilde{\mathcal{X}}J_{m}\boldsymbol{y}=\boldsymbol{0}\) for all \(\boldsymbol{y}\) satisfying \(\boldsymbol{y}^{T}A^{(\mathcal{S}J_{m})}=\boldsymbol{0}\), where \(A^{(\mathcal{S}J_{m})}\) is the commutativity matrix associated with \(\mathcal{S}J_{m}\)._
Proof.: Let \(C^{(i)}\) be the circuit obtained after the \(i\)th recursive call to Algorithm 1 when \(\mathcal{S}J_{m}\) is given as input; as such, \(C^{(i)}\) is a diagonalization network for the first \(i+1\) columns of \(\mathcal{S}J_{m}\). And let \(\mathcal{S}^{(i)}=\begin{bmatrix}\mathcal{Z}^{(i)}\\ \mathcal{X}^{(i)}\end{bmatrix}\) be the sequence of Pauli products resulting from conjugating \(\mathcal{S}\) by the Clifford operator associated with the circuit \(C^{(i)}\). We define \(\boldsymbol{y}^{(i)}\in\mathbb{F}_{2}^{m}\) as follows:
\[y_{j}^{(i)}=\begin{cases}y_{j}&\text{if }j\leq i,\\ 0&\text{otherwise},\end{cases} \tag{14}\]
where \(\boldsymbol{y}\in\mathbb{F}_{2}^{m}\) satisfies \(\boldsymbol{y}^{T}A^{(\mathcal{S}J_{m})}=\boldsymbol{0}\).
In the case where \(i=0\), the equality \(\mathcal{X}^{(0)}J_{m}\boldsymbol{y}^{(0)}=\boldsymbol{0}\) is satisfied because the Pauli product encoded by the first column of \(\mathcal{S}^{(0)}J_{m}\) is diagonal and \(y_{j}^{(0)}=0\) for all \(j>0\). More generally, the Pauli product encoded by the \(i\)th column of \(\mathcal{S}^{(i)}J_{m}\) is diagonal, and so the following holds:
\[\bigoplus_{k\in K}\mathcal{X}_{k}^{(i)}J_{m}=A_{i}^{(\mathcal{S}J_{m})}\oplus A _{:,i}^{(\mathcal{S}J_{m})} \tag{15}\]
where \(K=\{k\mid\mathcal{Z}_{k,m-i-1}^{(i)}=1\}\). Here the \(i\)th column of \(A^{(\mathcal{S}J_{m})}\) must be added to the \(i\)th row of \(A^{(\mathcal{S}J_{m})}\) to form the vector describing how the \(i\)th Pauli rotation commutes or anticommutes with the other Pauli rotations of the sequence. In this sense, it is a generalization of Equation 8 to the other rows of \(A^{(\mathcal{S}J_{m})}\). Equation 15 entails
\[\left[\bigoplus_{k\in K}\mathcal{X}_{k}^{(i)}J_{m}\right]^{T}\boldsymbol{y}^{ (i)}=\left[A_{i}^{(\mathcal{S}J_{m})}\oplus A_{:,i}^{(\mathcal{S}J_{m})} \right]^{T}\boldsymbol{y}^{(i)} \tag{16}\]
Moreover, we have \(\left[A_{i}^{(\mathcal{S}J_{m})}\right]^{T}\boldsymbol{y}^{(i)}=0\) because \(A_{i,j}^{(\mathcal{S}J_{m})}=0\) for all \(j\leq i\) and \(y_{j}^{(i)}=0\) for all \(j>i\). And we also have \(\left[A_{:,i}^{(\mathcal{S}J_{m})}\right]^{T}\boldsymbol{y}^{(i)}=0\) because \(\left[A_{:,i}^{(\mathcal{S}J_{m})}\right]^{T}\boldsymbol{y}^{(i)}=\left[A_{:,i} ^{(\mathcal{S}J_{m})}\right]^{T}\boldsymbol{y}\) as \(A_{j,i}^{(\mathcal{S}J_{m})}=0\) for all \(j>i\) and \(\left[A_{:,i}^{(\mathcal{S}J_{m})}\right]^{T}\boldsymbol{y}=\boldsymbol{y}^{T}A _{:,i}^{(\mathcal{S}J_{m})}=0\) by definition. Thus, we proved that the following holds:
\[\left[\bigoplus_{k\in K}\mathcal{X}_{k}^{(i)}J_{m}\right]^{T}\boldsymbol{y}^{ (i)}=0 \tag{17}\]
where \(K=\{k\mid\mathcal{Z}^{(i)}_{k,m-i-1}=1\}\).
Assume that \(\mathcal{X}^{(i)}J_{m}\mathbf{y}^{(i)}=\mathbf{0}\); we can then distinguish two cases for the \((i+1)\)th iteration of Algorithm 1. In the case where the \((i+1)\)th Pauli product of \(\mathcal{S}^{(i)}J_{m}\) is diagonal, the circuit \(C^{(i+1)}\) can be obtained from \(C^{(i)}\) by appending a \(\{\text{CNOT},S\}\) circuit to it. If the Pauli product encoded by \(\mathcal{S}^{(i)}J_{m}\mathbf{y}^{(i)}\) is diagonal, as we assumed, then the Pauli product encoded by the vector \(\mathcal{S}^{(i+1)}J_{m}\mathbf{y}^{(i)}\) is also diagonal, as no Hadamard gate was appended to \(C^{(i)}\) to derive \(C^{(i+1)}\) from it. In addition, the \((i+1)\)th Pauli product of \(\mathcal{S}^{(i+1)}J_{m}\) is also diagonal, which implies that the Pauli product encoded by the vector \(\mathcal{X}^{(i+1)}J_{m}\mathbf{y}^{(i+1)}\) is diagonal and so \(\mathcal{X}^{(i+1)}J_{m}\mathbf{y}^{(i+1)}=\mathbf{0}\). Therefore, in such a case where the \((i+1)\)th Pauli product of \(\mathcal{S}^{(i)}J_{m}\) is diagonal, the equality \(\mathcal{X}^{(i)}J_{m}\mathbf{y}^{(i)}=\mathbf{0}\) implies that \(\mathcal{X}^{(i+1)}J_{m}\mathbf{y}^{(i+1)}=\mathbf{0}\).
In the case where the \((i+1)\)th Pauli product of \(\mathcal{S}^{(i)}J_{m}\) is not diagonal, the circuit \(C^{(i+1)}\) can be constructed from \(C^{(i)}\) by appending a \(\{\text{CNOT},S\}\) circuit to it and a final Hadamard gate on some qubit \(j\). Let \(\hat{C}^{(i+1)}\) be the circuit resulting from appending this \(\{\text{CNOT},S\}\) circuit to \(C^{(i)}\), i.e. \(\hat{C}^{(i+1)}\) corresponds to the circuit \(C^{(i+1)}\) whose last gate, which is a Hadamard gate, has been removed. Let \(\hat{\mathcal{S}}^{(i+1)}\) be the sequence of Pauli products obtained by conjugating all the Pauli products of \(\mathcal{S}\) by the Clifford operator associated with \(\hat{C}^{(i+1)}\). Using the same reasoning as before, if the Pauli product encoded by \(\mathcal{S}^{(i)}J_{m}\mathbf{y}^{(i)}\) is diagonal then the Pauli product encoded by the vector \(\hat{\mathcal{S}}^{(i+1)}J_{m}\mathbf{y}^{(i)}\) is also diagonal as no Hadamard gate was appended to \(C^{(i)}\) to derive \(\hat{C}^{(i+1)}\) from it, and so we have \(\hat{\mathcal{X}}^{(i+1)}J_{m}\mathbf{y}^{(i)}=\mathbf{0}\).
The circuit \(C^{(i+1)}\) can be obtained from \(\hat{C}^{(i+1)}\) by appending a Hadamard gate to it on some qubit \(j\). Therefore, \(\mathcal{X}^{(i+1)}_{k}=\hat{\mathcal{X}}^{(i+1)}_{k}\) for all \(k\neq j\), and so
\[\left[\mathcal{X}^{(i+1)}_{k}J_{m}\right]^{T}\mathbf{y}^{(i)}=\left[\hat{\mathcal{ X}}^{(i+1)}_{k}J_{m}\right]^{T}\mathbf{y}^{(i)}=0 \tag{18}\]
for all \(k\neq j\). The \((i+1)\)th Pauli product of \(\mathcal{S}^{(i+1)}J_{m}\) is diagonal which means that the \((i+1)\)th column of \(\mathcal{X}^{(i+1)}J_{m}\) is equal to \(\mathbf{0}\), and so the equality holds as well for \(\mathbf{y}^{(i+1)}\):
\[\left[\mathcal{X}^{(i+1)}_{k}J_{m}\right]^{T}\mathbf{y}^{(i+1)}=\left[\mathcal{X }^{(i+1)}_{k}J_{m}\right]^{T}\mathbf{y}^{(i)}=0 \tag{19}\]
for all \(k\neq j\). Notice that \(j\in K\), where \(K=\{k\mid\mathcal{Z}^{(i+1)}_{k,m-i-1}=1\}\); then from Equation 17 we can infer that

\[\left[\mathcal{X}^{(i+1)}_{j}J_{m}\right]^{T}\mathbf{y}^{(i+1)}\oplus\left[\bigoplus_{k\in\hat{K}}\mathcal{X}^{(i+1)}_{k}J_{m}\right]^{T}\mathbf{y}^{(i+1)}=0 \tag{20}\]
where \(\hat{K}=K\setminus\{j\}\). From Equation 19 we can deduce that the second term of Equation 20 is equal to \(0\), therefore we have
\[\left[\mathcal{X}^{(i+1)}_{j}J_{m}\right]^{T}\mathbf{y}^{(i+1)}=0 \tag{21}\]
which, when combined with Equation 19, entails \(\mathcal{X}^{(i+1)}J_{m}\mathbf{y}^{(i+1)}=\mathbf{0}\) and concludes the proof of Proposition 5.
We can now demonstrate Theorem 3 on the basis of Proposition 5.
Proof of Theorem 3.: Let \(\mathcal{S}^{\prime}\) be as defined in Algorithm 2 and let \(C\) be the circuit produced by Algorithm 2 when \(\mathcal{S}=\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) is given as input. As \(C\) is a diagonalization network for \(\begin{bmatrix}\mathcal{S}^{\prime}&\mathcal{S}\end{bmatrix}\), it can be split into two subcircuits such that \(C=C_{1}::C_{2}\), where \(C_{1}\) and \(C_{2}\) are diagonalization networks for \(\mathcal{S}^{\prime}\) and \(\tilde{\mathcal{S}}\) respectively, with \(\tilde{\mathcal{S}}=\begin{bmatrix}\tilde{\mathcal{Z}}\\ \tilde{\mathcal{X}}\end{bmatrix}=U^{\dagger}\mathcal{S}U\) where \(U\) is the Clifford operator associated with \(C_{1}\). The number of internal Hadamard gates in \(C\) is therefore equal to the number of Hadamard gates in \(C_{2}\); proving Theorem 3 can then be done by proving that \(h(C_{2})=\mathrm{rank}(\tilde{M})=\mathrm{rank}(A^{(\mathcal{S})})\) where \(\tilde{M}=\begin{bmatrix}\tilde{\mathcal{X}}\\ A^{(\mathcal{S})}\end{bmatrix}\).
The Pauli products encoded in the matrix \(\mathcal{S}J_{m}\) are the same as in \(\mathcal{S}\) but in reverse order. Consequently we have \(A^{(\mathcal{S})}_{i,j}=A^{(\mathcal{S}J_{m})}_{m-j-1,m-i-1}\), therefore by reversing the order of the rows and columns of \(A^{(\mathcal{S}J_{m})}\) and transposing it to obtain a strictly upper triangular matrix we get the matrix \(A^{(\mathcal{S})}\):
\[\begin{bmatrix}J_{m}A^{(\mathcal{S}J_{m})}J_{m}\end{bmatrix}^{T}=A^{(\mathcal{ S})} \tag{22}\]
From this we can deduce that
\[\begin{split} A^{(\mathcal{S})}\boldsymbol{y}&= \boldsymbol{0}\\ \Rightarrow&\begin{bmatrix}J_{m}A^{(\mathcal{S}J_{m})}J_{m} \end{bmatrix}^{T}\boldsymbol{y}&=\boldsymbol{0}\\ \Rightarrow&\boldsymbol{y}^{T}J_{m}A^{(\mathcal{S}J_{m})}J_{m} &=\boldsymbol{0}\\ \Rightarrow&\boldsymbol{\overline{y}}^{T}A^{(\mathcal{S}J_{m})} &=\boldsymbol{0}\end{split} \tag{23}\]
where \(\boldsymbol{\overline{y}}=J_{m}\boldsymbol{y}\). And based on Proposition 5 we have
\[\begin{split}\tilde{\mathcal{X}}J_{m}\boldsymbol{\overline{y}}& =\boldsymbol{0}\\ \Rightarrow&\tilde{\mathcal{X}}\boldsymbol{y}&= \boldsymbol{0}\end{split} \tag{24}\]
Thus, for all \(\boldsymbol{y}\in\mathrm{nullspace}(A^{(\mathcal{S})})\) we have \(\tilde{\mathcal{X}}\boldsymbol{y}=\boldsymbol{0}\) and therefore \(\tilde{M}\boldsymbol{y}=\boldsymbol{0}\), which implies that \(h(C_{2})=\mathrm{rank}(\tilde{M})=\mathrm{rank}(A^{(\mathcal{S})})\) and concludes the proof of Theorem 3.
## 5 Improving the complexity
Algorithms 1 and 2 take a sequence of Pauli products \(\mathcal{S}\) as input and output a diagonalization network for \(\mathcal{S}\). In order to use these algorithms to minimize the number of Hadamard gates, or internal Hadamard gates, in a circuit \(C\) it is then required to first extract from \(C\) the sequence of Pauli products \(\mathcal{S}\) for which the diagonalization network must be constructed. This procedure can be done with a complexity of \(\mathcal{O}(nM)\) by using a tableau, where \(n\) is the number of qubits and \(M\) is the number of gates in \(C\). In this section, we will see how we can merge the extraction of the sequence of Pauli products \(\mathcal{S}\) with our algorithms to obtain the desired re-synthesis of \(C\) with a complexity of \(\mathcal{O}(nM+n^{2}h)\) instead of \(\mathcal{O}(nM+n^{2}m)\), where \(m\) is the number of Pauli products in \(\mathcal{S}\) and \(h\leq m\) is the minimal number of Hadamard gates required to construct a diagonalization network for \(\mathcal{S}\). We first explain our notations related to the tableau representation, commonly used to represent a Clifford operator. In Subsection 5.1 we present an algorithm which performs the re-synthesis of a sequence of Pauli rotations implemented by a given circuit, up to a final Clifford operator and with a minimal number of Hadamard gates. Finally, in Subsection 5.2, we present an algorithm which produces a circuit that is a re-synthesis of a given circuit and which implements the same sequence of Pauli rotations but with a minimal number of Hadamard gates and internal Hadamard gates.
```
Input: A Clifford+\(R_{Z}\) circuit \(C\) and a tableau \(\mathcal{T}=\begin{bmatrix}\mathbf{s}^{T}\\ \mathcal{Z}\\ \mathcal{X}\end{bmatrix}\).
Output: A circuit \(C_{out}\) and a tableau \(\mathcal{T}_{out}\), such that \(C_{out}\) is a re-synthesis of \(C\) and implements the same sequence of Pauli rotations as \(C\) up to an initial and final Clifford operator represented by \(\mathcal{T}\) and \(\mathcal{T}_{out}^{-1}\) respectively.

procedure HOpt(\(C\), \(\mathcal{T}\))
    \(C_{out}\leftarrow\) new empty circuit
    foreach gate \(G\in C\) do
        if \(G\) is Clifford then
            Prepend \(G^{\dagger}\) to \(\mathcal{T}\)
        end if
        if \(G\) is a non-Clifford \(R_{Z_{k}}(\theta)\) gate then
            if \(\exists i\) such that \(\mathcal{X}_{i,k}=1\) then
                foreach \(j\in\{j\mid\mathcal{X}_{j,k}=1\}\setminus\{i\}\) do
                    \(C_{out}\gets C_{out}::\text{CNOT}_{i,j}\)
                    Append \(\text{CNOT}_{i,j}\) to \(\mathcal{T}\)
                end foreach
                if \(\mathcal{Z}_{i,k}=1\) then
                    \(C_{out}\gets C_{out}::S_{i}\)
                    Append \(S_{i}\) to \(\mathcal{T}\)
                end if
                \(C_{out}\gets C_{out}::H_{i}\)
                Append \(H_{i}\) to \(\mathcal{T}\)
            end if
            \(i\leftarrow\) any value satisfying \(\mathcal{Z}_{i,k}=1\)
            \(\tilde{C}\leftarrow\) new empty circuit
            foreach \(j\in\{j\mid\mathcal{Z}_{j,k}=1\}\setminus\{i\}\) do
                \(\tilde{C}\leftarrow\tilde{C}::\text{CNOT}_{j,i}\)
            end foreach
            if \(s_{k}=1\) then
                \(\tilde{C}\leftarrow\tilde{C}::X_{i}\)
            end if
            \(C_{out}\gets C_{out}::\tilde{C}::R_{Z_{i}}(\theta)::\tilde{C}^{-1}\)
        end if
    end foreach
    return \((C_{out},\mathcal{T})\)
```
**Algorithm 3** H-Opt
**The tableau representation.** A tableau encodes \(2n\) generators which can be represented by \(2n\) independent Pauli products along with a phase for each one of these Pauli products. We can thus reuse our method of encoding for a sequence of Pauli products \(\mathcal{S}\) and represent a tableau by a block matrix \(\mathcal{T}=\begin{bmatrix}\boldsymbol{s}^{T}\\ \mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) of size \((2n+1)\times 2n\) where \(n\) is the number of qubits. The first row of \(\mathcal{T}\) corresponds to a vector \(\boldsymbol{s}\in\{0,1\}^{2n}\) which encodes the phases of the generators, the subsequent \(n\) rows of \(\mathcal{T}\) form the submatrix \(\mathcal{Z}\) and the last \(n\) rows of \(\mathcal{T}\) form the submatrix \(\mathcal{X}\). The \(j\)th column of \(\mathcal{T}\) then encodes the \(j\)th generator: \(s_{j}\) encodes its phase, which corresponds to \((-1)^{s_{j}}\), and \((\mathcal{Z}_{i,j},\mathcal{X}_{i,j})\) encodes its \(i\)th Pauli matrix, such that the values \((0,0),(0,1),(1,1)\) and \((1,0)\) correspond to the Pauli matrices \(I,X,Y\) and \(Z\) respectively. The first \(n\) columns of \(\mathcal{T}\) encode the stabilizer generators, whereas the last \(n\) columns of \(\mathcal{T}\) encode the destabilizer generators. The identity tableau \(\mathcal{T}\) associated with an empty circuit is such that the matrix \(\begin{bmatrix}\mathcal{Z}\\ \mathcal{X}\end{bmatrix}\) forms the identity matrix and \(\boldsymbol{s}=\boldsymbol{0}\); in other words, the \(i\)th stabilizer generator of \(\mathcal{T}\) is \(Z_{i}\) and the \(i\)th destabilizer generator of \(\mathcal{T}\) is \(X_{i}\). The inverse tableau of \(\mathcal{T}\), denoted by \(\mathcal{T}^{-1}\), is the tableau associated with the Clifford operator \(U^{\dagger}\) where \(U\) is the Clifford operator associated with \(\mathcal{T}\). Analogously, the inverse of a circuit \(C\), denoted \(C^{-1}\), is the circuit obtained from \(C\) by replacing every gate \(G\) by \(G^{\dagger}\) and by reversing the order of its gates. Let \(\mathcal{S}\) be a sequence of Pauli products; if \(\tilde{\mathcal{S}}=U^{\dagger}\mathcal{S}U\) then we will equivalently say that \(\tilde{\mathcal{S}}=\mathcal{T}^{-1}\mathcal{S}\mathcal{T}\) where \(\mathcal{T}\) is the tableau associated with the Clifford operator \(U\).
Let \(C\) be a Clifford circuit such that its associated Clifford operator is represented by a tableau \(\mathcal{T}\). If a Clifford gate from the set \(\{\text{CNOT},S,H\}\) is appended to \(C\), then the generators of \(\mathcal{T}\) can be updated accordingly with \(\mathcal{O}(n)\) operations, where \(n\) is the number of qubits. The operations to perform on the Pauli products encoded by \(\mathcal{T}\) are the same as the ones depicted in Figure 1, and similar operations can be performed to update the phases associated with the Pauli products in \(\mathcal{O}(n)\)[23]. Also, if a Clifford gate from the set \(\{\text{CNOT},S,H\}\) is prepended to \(C\), then \(\mathcal{T}\) can also be updated with \(\mathcal{O}(n)\) operations [25]. When \(\mathcal{T}\) is updated in such a manner we will say that we append, or prepend, a gate to \(\mathcal{T}\). As explained in Section 3.3, a Clifford operator, represented by a tableau \(\mathcal{T}\) and acting on \(n\) qubits, can be implemented over the \(\{X,\text{CNOT},S,H\}\) gate set with \(\mathcal{O}(n^{3})\) operations and with a minimal number of Hadamard gates by first diagonalizing its stabilizer generators using Algorithm 1, and then finishing its synthesis using only \(\{X,\text{CNOT},S\}\) gates. We use the term CliffordSynthesis to denote this procedure in our algorithms.
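A minimal sketch of this tableau bookkeeping, restricted to the binary part (the phase vector \(\mathbf{s}\) and its update rules, which follow [23], are omitted; since Figure 1 is not reproduced here, the update rules below are our own transcription of the standard conjugation rules):

```python
import numpy as np

class Tableau:
    """Binary part of a tableau: column j of [Z; X] encodes generator j."""
    def __init__(self, n):
        self.n = n
        self.Z = np.eye(n, 2 * n, dtype=np.uint8)       # stabilizers Z_0..Z_{n-1}
        self.X = np.eye(n, 2 * n, k=n, dtype=np.uint8)  # destabilizers X_0..X_{n-1}

    def append_h(self, q):        # H on qubit q: X <-> Z, O(n) per gate
        self.Z[q], self.X[q] = self.X[q].copy(), self.Z[q].copy()

    def append_s(self, q):        # S on qubit q: X -> Y
        self.Z[q] ^= self.X[q]

    def append_cnot(self, c, t):  # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
        self.X[t] ^= self.X[c]
        self.Z[c] ^= self.Z[t]
```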
### H-Opt algorithm
Consider the algorithm whose pseudo-code is given in Algorithm 3 and which takes a circuit \(C\) and a tableau \(\mathcal{T}_{in}\) as input, and let \(\mathcal{S}\) be the sequence of Pauli products associated with the sequence of Pauli rotations implemented by \(C\). This algorithm outputs a circuit \(C_{out}\) and a tableau \(\mathcal{T}_{out}\) such that \(C_{out}\) is a re-synthesis of \(C\) and implements the same sequence of Pauli rotations as \(C\) up to an initial and final Clifford operator represented by \(\mathcal{T}_{in}\) and \(\mathcal{T}_{out}^{-1}\) respectively.
Algorithm 3 is composed of a loop iterating over the gates of \(C\) which contains two distinct cases: either the current gate \(G\) is a Clifford gate or it is not. If \(G\) is a Clifford gate then \(G^{\dagger}\) is prepended to \(\mathcal{T}\). If \(G\) is a non-Clifford \(R_{Z_{i}}(\theta)\) gate then we must compute the Pauli rotation that should be appended to \(C_{out}\). To do so we can first compute which Pauli rotation is actually being implemented by \(C\) by pulling all the Clifford gates preceding \(G\) through the Pauli rotation \(R_{Z_{i}}(\theta)\). The Pauli rotation obtained is then \(UR_{Z_{i}}(\theta)U^{\dagger}\) where \(U\) is the Clifford operator associated with the Clifford circuit composed of all the Clifford gates preceding \(G\). Then, to be appended to \(C_{out}\), the Pauli rotation must also be propagated through the initial tableau \(\mathcal{T}_{in}\); we will denote by \(V\) the Clifford operator associated with \(\mathcal{T}_{in}\). Finally, the Pauli rotation must be propagated through all the Clifford gates that are in \(C_{out}\) so far; we denote by \(W\) the associated Clifford operator. The Pauli rotation to append to the circuit \(C_{out}\) is therefore \(W^{\dagger}V^{\dagger}UR_{Z_{i}}(\theta)U^{\dagger}VW\). We can notice that the Clifford operator \(U^{\dagger}VW\) is in fact associated with the tableau \(\mathcal{T}\). Indeed, \(\mathcal{T}\) is initially equal to \(\mathcal{T}_{in}\), the inverses of the Clifford gates preceding \(G\) in \(C\) have been prepended to \(\mathcal{T}\) and the Clifford gates that are in \(C_{out}\) so far have been appended to \(\mathcal{T}\). The Pauli operator \(P\) satisfying \(R_{P}(\theta)=W^{\dagger}V^{\dagger}UR_{Z_{i}}(\theta)U^{\dagger}VW\) is therefore the \(i\)th stabilizer generator of \(\mathcal{T}\), which is encoded by the \(i\)th column of \(\mathcal{T}\).
The Pauli rotation \(R_{P}(\theta)\) can then be implemented by first performing the synthesis of a Clifford operator \(U\) such that \(U^{\dagger}R_{P}(\theta)U\) is diagonal, and then by performing the synthesis of a Clifford operator \(V\) satisfying \(V^{\dagger}U^{\dagger}R_{P}(\theta)UV=R_{Z_{i}}(\theta)\), for some qubit \(i\). The Clifford operator \(V\) can be synthesized using only \(\{X,\text{CNOT}\}\) gates as \(U^{\dagger}R_{P}(\theta)U\) is diagonal; this is done in Algorithm 3 by constructing the circuit \(\tilde{C}\). Once the operators \(U\) and \(V\) have been implemented, the gate \(R_{Z_{i}}(\theta)\) can be appended to the circuit. The operator \(V^{\dagger}\) does not necessarily need to be implemented, but it is actually implemented in Algorithm 3 to avoid additional operations that would be required to update the tableau \(\mathcal{T}\). We should not treat the operator \(U^{\dagger}\) the same way as it would increase the number of Hadamard gates in the circuit; \(U^{\dagger}\) is therefore not implemented in Algorithm 3 and the tableau \(\mathcal{T}\) is updated accordingly by appending the gates realizing the implementation of \(U\) to it. Note that the method utilized to implement \(U\) is the same as the one in Algorithm 1, which uses exactly one Hadamard gate when \(P\) is not diagonal. It follows from the results in Section 3 that Algorithm 3 can be used to solve the H-Opt problem for \(\tilde{\mathcal{S}}=\mathcal{T}_{in}^{-1}\mathcal{ST}_{in}\). More concretely, by removing all the non-Clifford \(R_{Z}\) gates from the circuit produced by Algorithm 3 we obtain a diagonalization network which solves the H-Opt problem for \(\tilde{\mathcal{S}}\).
Let \(C^{\prime}\) and \(C^{\prime}_{out}\) be the Clifford circuits obtained by removing all the non-Clifford \(R_{Z}\) gates from \(C\) and \(C_{out}\) respectively, and let \(C_{\mathcal{T}_{in}}\) be a Clifford circuit whose Clifford operator is associated with the tableau \(\mathcal{T}_{in}\). At the end of Algorithm 3, the tableau \(\mathcal{T}\) is associated with the Clifford operator implemented by the circuit \(C_{\mathcal{T}}=C^{\prime-1}::C_{\mathcal{T}_{in}}::C^{\prime}_{out}\). As \(C_{out}\) implements the sequence of Pauli rotations associated with \(\tilde{\mathcal{S}}=\mathcal{T}_{in}^{-1}\mathcal{ST}_{in}\) up to a final Clifford operator implemented by \(C^{\prime-1}_{out}\), it follows that \(C_{f}=C_{\mathcal{T}_{in}}::C_{out}::C^{-1}_{\mathcal{T}}\) is a re-synthesis of \(C\) which implements the same sequence of Pauli rotations as \(C\). If the input tableau \(\mathcal{T}_{in}\) is the identity tableau, or can be implemented with no Hadamard gates, and if \(C_{\mathcal{T}}\) is implemented with a minimal number of Hadamard gates using the procedure described in Section 3.3, then \(C_{f}\) is a re-synthesis of \(C\) which implements the same sequence of Pauli rotations with a minimal number of Hadamard gates.
**Complexity analysis.** The main loop of Algorithm 3 performs \(M\) iterations, where \(M\) is the number of gates in the input circuit. At each iteration, if the current gate is a Clifford gate then it is prepended to \(\mathcal{T}\), which is done in \(\mathcal{O}(n)\) operations, where \(n\) is the number of qubits in the input circuit. If the current gate is a non-Clifford \(R_{Z_{k}}(\theta)\) gate then the algorithm appends \(\mathcal{O}(n)\) gates to \(C_{out}\). In the case where the \(k\)th stabilizer generator of \(\mathcal{T}\) is not diagonal, a subset of these gates is appended to \(\mathcal{T}\), which takes \(\mathcal{O}(n)\) operations for each gate. This happens exactly \(h\) times, where \(h\) is the number of Hadamard gates in the output circuit \(C_{out}\), which implies a cost of \(\mathcal{O}(n^{2}h)\) operations. Thus, the overall complexity of Algorithm 3 is \(\mathcal{O}(nM+n^{2}h)\).
### Internal-H-Opt algorithm
Algorithm 4 is based on the procedure explained in Section 4 and utilized by Algorithm 2 to synthesize a diagonalization network for a sequence of Pauli products with a minimal number of internal Hadamard gates. It takes a Clifford\(+R_{Z}\) circuit \(C\) as input and outputs a circuit which is a re-synthesis of \(C\) and which implements the same sequence of Pauli rotations as \(C\) with a minimal number of Hadamard gates and internal Hadamard gates.
As explained in Section 4, in order to solve the Internal-H-Opt problem for a sequence of \(m\) Pauli products \(\mathcal{S}\) it is necessary to find a Clifford operator \(U\) that minimizes \(\text{rank}(\tilde{M})\) where \(\tilde{M}=\begin{bmatrix}\tilde{\mathcal{X}}\\ A^{(\mathcal{S})}\end{bmatrix}\), \(A^{(\mathcal{S})}\) is the commutativity matrix associated with \(\mathcal{S}\) and \(\tilde{\mathcal{S}}=\begin{bmatrix}\tilde{\mathcal{Z}}\\ \tilde{\mathcal{X}}\end{bmatrix}=U^{\dagger}\mathcal{S}U\). We proved that the Clifford operator associated with the circuit produced by Algorithm 1 when \(\mathcal{S}J_{m}\), where \(J_{m}\) is an exchange matrix of size \(m\times m\), is given as input satisfies this property. Let \(\mathcal{S}\) be a sequence of \(m\) Pauli products associated with the sequence of Pauli rotations implemented by a Clifford\(+R_{Z}\) circuit \(C\); then the Clifford operator \(U\) described above can be computed by the HOpt procedure described in Algorithm 3. To do so, the circuit \(C^{-1}\) and the tableau \(\mathcal{T}\) must be given as input to the HOpt procedure, such that \(\mathcal{T}\) is the tableau associated with the Clifford operator implemented by the circuit \(C^{\prime-1}\), where \(C^{\prime}\) is the Clifford circuit obtained by removing all the non-Clifford \(R_{Z}\) gates of \(C\). The circuit \(C^{-1}\) is provided so that the Pauli rotations are processed in reverse order by the HOpt procedure. The tableau \(\mathcal{T}\) must be provided because the circuit \(C^{-1}\) does not necessarily implement the same sequence of Pauli rotations as \(C\); however, the circuit \(C^{\prime}::C^{-1}\) does implement the same sequence of Pauli rotations as the circuit \(C\). We can be convinced of this fact by noticing that the Clifford operator formed by all the Clifford gates preceding a non-Clifford gate in \(C\) is the same as the Clifford operator formed by all the Clifford gates preceding the corresponding non-Clifford gate in \(C^{\prime}::C^{-1}\). Then, as shown in Section 5.1, when the HOpt procedure is executed with \(C^{-1}\) and \(\mathcal{T}\) as parameters, it will produce a circuit \(\tilde{C}\) and a tableau \(\tilde{\mathcal{T}}\) associated with the Clifford operator implemented by the circuit \(C^{\prime}::C^{\prime-1}::\tilde{C}^{\prime}\), which is equivalent to the circuit \(\tilde{C}^{\prime}\), where \(\tilde{C}^{\prime}\) is the Clifford circuit obtained by removing all the non-Clifford \(R_{Z}\) gates from \(\tilde{C}\). The circuit \(\tilde{C}^{\prime}\) then solves the H-Opt problem for \(\mathcal{S}J_{m}\), and is an implementation of the Clifford operator associated with the tableau \(\tilde{\mathcal{T}}\). From the results of Section 4, it follows that if \(\tilde{\mathcal{S}}=\begin{bmatrix}\tilde{\mathcal{Z}}\\ \tilde{\mathcal{X}}\end{bmatrix}=\tilde{\mathcal{T}}^{-1}\mathcal{S}\tilde{\mathcal{T}}\) then \(\text{rank}(\tilde{M})=\text{rank}(A^{(\mathcal{S})})\) where \(\tilde{M}=\begin{bmatrix}\tilde{\mathcal{X}}\\ A^{(\mathcal{S})}\end{bmatrix}\).
Algorithm 4 then performs the synthesis of the Clifford operator associated with \(\tilde{\mathcal{T}}\) with a minimal number of Hadamard gates; the Clifford circuit \(C_{\tilde{\mathcal{T}}}\) obtained will be the initial Clifford circuit of the circuit produced by Algorithm 4. The algorithm then calls the HOpt procedure a second time, with \(C\) and \(\tilde{\mathcal{T}}\) given as parameters, in order to implement the sequence of Pauli rotations associated with \(\tilde{\mathcal{S}}\) with a minimal number of internal Hadamard gates and up to a final Clifford circuit. The tableau \(\tilde{\mathcal{T}}\) must be given as input so that the sequence of Pauli rotations implemented is the one associated with the sequence of Pauli products \(\tilde{\mathcal{S}}\) and not \(\mathcal{S}\). The procedure HOpt will then produce a circuit \(C_{out}\) and a tableau \(\mathcal{T}_{f}\) such that \(C^{\prime}_{out}\) solves the H-Opt problem for \(\tilde{\mathcal{S}}\) and \(\mathcal{T}_{f}\) is associated with the Clifford operator implemented by \(C_{f}=C^{\prime-1}::C_{\tilde{\mathcal{T}}}::C^{\prime}_{out}\), where \(C^{\prime}\) and \(C^{\prime}_{out}\) are the circuits obtained by removing all the non-Clifford \(R_{Z}\) gates from \(C\) and \(C_{out}\) respectively. We can then deduce that \(C_{\tilde{\mathcal{T}}}::C_{out}::C^{-1}_{f}\) implements the same sequence of Pauli rotations as \(C\) and that the Clifford operator formed by all the Clifford gates of this circuit is the same as the Clifford operator formed by all the Clifford gates of \(C\). Thus, the circuit produced by Algorithm 4 is a re-synthesis of \(C\) and it implements the same sequence of Pauli rotations with a minimal number of Hadamard gates and internal Hadamard gates.
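In pseudo-Python, the overall flow of Algorithm 4 reads as follows; every helper (`HOpt`, `CliffordSynthesis`, `tableau_of`, `inverse`, `is_clifford`) is a stand-in for a procedure described in this section, not a real library call.

```python
def HOpt(C, T):            raise NotImplementedError  # Algorithm 3
def CliffordSynthesis(T):  raise NotImplementedError  # Section 3.3 procedure
def tableau_of(C):         raise NotImplementedError  # tableau of a Clifford circuit
def inverse(C):            raise NotImplementedError  # reverse order, invert gates
def is_clifford(g):        raise NotImplementedError  # gate-type predicate

def algorithm4(C):
    C_cliff = [g for g in C if is_clifford(g)]   # C': the Clifford part of C
    T = tableau_of(inverse(C_cliff))             # tableau of C'^{-1}
    _, T_tilde = HOpt(inverse(C), T)             # rotations in reverse order
    C_T = CliffordSynthesis(T_tilde)             # initial Clifford circuit
    C_out, T_f = HOpt(C, T_tilde)                # minimal internal H-count
    C_f = CliffordSynthesis(T_f)                 # final Clifford correction
    return C_T + C_out + inverse(C_f)
```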
**Complexity analysis.** Let \(h\) be the number of Hadamard gates within the circuit produced by Algorithm 4, and let \(n\) be the number of qubits in \(C\). The algorithm performs two calls to the HOpt procedure, for \(C^{-1}\) and \(C\) respectively, which both contain \(M\) gates. The first call, with \(C^{-1}\) given as input, will produce a circuit which contains \(\tilde{h}\) Hadamard gates, with \(\tilde{h}\leq h\). The second call, with \(C\) given as input, will produce a circuit which contains a number of Hadamard gates equal to the number of internal Hadamard gates in the circuit produced by Algorithm 4, and which is therefore less than or equal to \(h\). Hence, these two calls to Algorithm 3 have a cost of \(\mathcal{O}(nM+n^{2}h)\) operations. The procedure CliffordSynthesis is also called two times, which induces a cost of \(\mathcal{O}(n^{3})\) operations. Thus, the overall complexity of Algorithm 4 is \(\mathcal{O}(nM+n^{2}h+n^{3})\), which corresponds to \(\mathcal{O}(nM+n^{2}h)\) in the typical case where \(h>n\).
Note that the two calls to the CliffordSynthesis procedure can be avoided if the objective is to minimize the number of internal Hadamard gates in the circuit and not the total number of Hadamard gates. Indeed, the first call to the HOpt procedure will produce a circuit \(\tilde{C}\) and a tableau \(\mathcal{T}\) such that \(\mathcal{T}\) is associated with the Clifford operator implemented by \(\tilde{C}^{\prime}\), where \(\tilde{C}^{\prime}\) is obtained by removing all the non-Clifford \(R_{Z}\) gates from \(\tilde{C}\). Performing the synthesis of \(\mathcal{T}\) will therefore produce a circuit that is equivalent to \(\tilde{C}^{\prime}\). Consequently, instead of calling the procedure CliffordSynthesis, an equivalent circuit could be obtained by constructing \(\tilde{C}^{\prime}\), which can be done with \(\mathcal{O}(nM)\) operations as \(\tilde{C}\) contains \(\mathcal{O}(nM)\) gates. Of course, the drawbacks of this method are that \(\tilde{C}^{\prime}\) may not contain an optimal number of Hadamard gates and that the worst-case complexity would be greater than \(\mathcal{O}(n^{3})\) in the case where \(M>n^{2}\). The second call to CliffordSynthesis can be avoided in a similar manner. Indeed, \(\mathcal{T}_{f}\) is associated with the Clifford operator implemented by \(C_{f}=C^{\prime-1}::C_{\tilde{\mathcal{T}}}::C^{\prime}_{out}\), where \(C^{\prime}\) and \(C^{\prime}_{out}\) are the circuits obtained by removing all the non-Clifford \(R_{Z}\) gates from \(C\) and \(C_{out}\) respectively. The circuit \(C_{f}\) can then be constructed in \(\mathcal{O}(nM)\) as the circuits \(C^{\prime-1}\), \(C_{\tilde{\mathcal{T}}}\) and \(C^{\prime}_{out}\) all contain \(\mathcal{O}(nM)\) gates. Thus, we can design an algorithm which produces a circuit \(\hat{C}\) with a complexity of \(\mathcal{O}(nM+n^{2}h)\), even in the case where \(h<n\), and such that \(\hat{C}\) is a re-synthesis of a Clifford\(+R_{Z}\) circuit \(C\) and implements the same sequence of Pauli rotations as \(C\) but with a minimal number of internal Hadamard gates.
## 6 Benchmarks
We compare the performance of Algorithm 4, the InternalHOpt procedure, to the moveH procedure presented in Reference [10], which has a complexity of \(\mathcal{O}(M^{2})\) where \(M\) is the number of gates in the input circuit.
\begin{table}
\begin{tabular}{l r r r r r r r r r r} \hline \hline & \multicolumn{4}{c}{InternalHOpt} & \multicolumn{4}{c}{TMerge [8] + InternalHOpt} & \multicolumn{4}{c}{moveH [10]} \\ \cline{2-11} Circuit & \(H\)-count & \(T\)-count & Time (s) & \(H\)-count & \(T\)-count & Time (s) & \(H\)-count & \(T\)-count & Time (s) \\ \hline \(\text{Tof}_{3}\) & 2 & 21 & 0.00 & 2 & 15 & 0.00 & 2 & 15 & 0.00 \\ \(\text{Tof}_{4}\) & 4 & 35 & 0.00 & 4 & 23 & 0.00 & 4 & 23 & 0.00 \\ \(\text{Tof}_{5}\) & 6 & 49 & 0.00 & 6 & 31 & 0.00 & 6 & 31 & 0.00 \\ \(\text{Tof}_{10}\) & 16 & 119 & 0.00 & 16 & 71 & 0.00 & 16 & 71 & 0.00 \\ \(\text{Barenco}\ \text{Tof}_{3}\) & 3 & 28 & 0.00 & 3 & 16 & 0.00 & 3 & 16 & 0.00 \\ \(\text{Barenco}\ \text{Tof}_{4}\) & 7 & 56 & 0.00 & 7 & 28 & 0.00 & 7 & 28 & 0.00 \\ \(\text{Barenco}\ \text{Tof}_{5}\) & 11 & 84 & 0.00 & 11 & 40 & 0.00 & 11 & 40 & 0.00 \\ \(\text{Barenco}\ \text{Tof}_{10}\) & 31 & 224 & 0.00 & 31 & 100 & 0.01 & 31 & 100 & 0.00 \\ \(\text{Mod}5_{4}\) & 0 & 28 & 0.00 & 0 & 8 & 0.00 & 0 & 8 & 0.00 \\ \(\text{VBE}\ \text{Adder}_{3}\) & 4 & 70 & 0.00 & 4 & 24 & 0.00 & 4 & 24 & 0.00 \\ CSLA MUX\({}_{3}\) & 6 & 70 & 0.00 & 6 & 62 & 0.00 & 6 & 62 & 0.00 \\ CSUM MUX\({}_{9}\) & 12 & 196 & 0.00 & 12 & 84 & 0.01 & 12 & 84 & 0.00 \\ QCLA Com\({}_{7}\) & 18 & 203 & 0.00 & 18 & 95 & 0.01 & 18 & 95 & 0.00 \\ QCLA Mod\({}_{7}\) & 58 & 413 & 0.00 & 58 & 237 & 0.02 & 58 & 237 & 0.00 \\ QCLA Adder\({}_{10}\) & 25 & 238 & 0.00 & 25 & 162 & 0.01 & 25 & 162 & 0.00 \\ \(\text{Adders}\) & 41 & 399 & 0.00 & 37 & 173 & 0.02 & 41 & 215 & 0.01 \\ \(\text{Mod}\ \text{Adder}_{1024}\) & 304 & 1995 & 0.00 & 304 & 1011 & 0.12 & 304 & 1011 & 0.06 \\ \(\text{RC}\ \text{Adder}_{6}\) & 10 & 77 & 0.00 & 10 & 47 & 0.00 & 10 & 47 & 0.00 \\ \(\text{Mod}\ \text{Red}_{21}\) & 17 & 119 & 0.00 & 17 & 73 & 0.00 & 17 & 73 & 0.00 \\ \(\text{Mod}\ \text{Mult}_{55}\) & 3 & 49 & 0.00 & 3 & 35 & 0.00 & 3 & 35 & 0.00 \\ \(\text{GF}(2^{4})\) Mult & 0 & 112 & 0.00 & 0 & 68 & 0.00 & 0 & 68 & 0.00 \\ \(\text{GF}(2^{5})\) Mult & 0 & 175 & 0.00 & 0 & 115 & 0.01 & 0 & 115 & 0.00 \\ \(\text{GF}(2^{6})\) Mult & 0 & 252 & 0.00 & 0 & 150 & 0.01 & 0 & 150 & 0.00 \\ \(\text{GF}(2^{7})\) Mult & 0 & 343 & 0.00 & 0 & 217 & 0.02 & 0 & 217 & 0.01 \\ \(\text{GF}(2^{8})\) Mult & 0 & 448 & 0.00 & 0 & 264 & 0.04 & 0 & 264 & 0.02 \\ \(\text{GF}(2^{9})\) Mult & 0 & 567 & 0.00 & 0 & 351 & 0.05 & 0 & 351 & 0.03 \\ \(\text{GF}(2^{10})\) Mult & 0 & 700 & 0.00 & 0 & 410 & 0.07 & 0 & 410 & 0.04 \\ \(\text{GF}(2^{16})\) Mult & 0 & 1792 & 0.01 & 0 & 1040 & 0.43 & 0 & 1040 & 0.14 \\ \(\text{GF}(2^{32})\) Mult & 0 & 7168 & 0.05 & 0 & 4128 & 7.19 & 0 & 4128 & 0.98 \\ \(\text{GF}(2^{64})\) Mult & 0 & 28672 & 0.19 & 0 & 16448 & 125.07 & 0 & 16448 & 7.46 \\ \(\text{GF}(2^{128})\) Mult & 0 & 114688 & 1.20 & 0 & 65664 & 2294.64 & 0 & 65664 & 60.47 \\ \(\text{GF}(2^{256})\) Mult & 0 & 458752 & 8.22 & 0 & 262400 & 41474.34 & 0 & 262400 & 2922.20 \\ \(\text{GF}(2^{512})\) Mult & 0 & 1835008 & 53.85 & - & - & - & - & 0 & 1049088 & 59186.15 \\ \(\text{Adder}_{1024}\) & 2044 & 14322 & 3.57 & 2044 & 8184 & 31.08 & 2046 & 8184 & 6.12 \\ \(\text{Adder}_{2048}\) & 4092 & 28658 & 18.98 & 4092 & 16376 & 179.07 & 4094 & 16376 & 25.69 \\ \(\text{Adder}_{4096}\) & 8188 & 57330 & 90.46 & 8188 & 32760 & 1182.67 & 8190 & 32760 & 131.11 \\ \(\text{DEFAULT}\) & 11936 & 62720 & 13.72 & 11936 & 39744 & 39.33 & 12030 & 39744 & 1602.60 \\ \(\text{Shor}_{4}\) & 9780 & 68320 & 0.21 & 5010 & 17052 & 5.91 & 9829 & 22514 & 77.52 \\ \(\text{Shor}_{8}\) & 69759 & 489741 & 1.74 & 35585 & 121341 & 
158.91 & 69759 & 163827 & 6895.79 \\ \(\text{Shor}_{16}\) & 537630 & 3755115 & 15.80 & 312274 & 1042881 & 2821.94 & - & - & - \\ \(\text{Shor}_{32}\) & 4173389 & 29622691 & 172.98 & 387103 & 1303156 & 24150.54 & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of different methods for the optimization of the number of internal Hadamard gates. The \(H\)-count corresponds to the number of internal Hadamard gates. A dash indicates that the execution couldn’t be carried out in less than a day.
Note that the moveH procedure does not include the \(T\)-gate reduction method of Reference [10] based on spider nest identities, which is normally performed once the number of Hadamard gates has been reduced. The moveH procedure applies a sequence of rewriting rules on the circuit with the aim of reducing the number of internal Hadamard gates. During this process the number of \(T\) gates may also be reduced, which modifies the sequence of Pauli rotations implemented by the circuit. This can then lead to a better reduction in the number of internal Hadamard gates than the one obtained when only the InternalHOpt procedure is performed. This is why, in order to better exploit the InternalHOpt procedure, it can be helpful to first execute an algorithm which can reduce the number of \(T\) gates in the circuit quickly and efficiently. The \(T\)-count reduction algorithms that are closest to these requirements are the ones provided in Reference [8] and in Reference [7]; these two algorithms have in fact been proven to be equivalent [26]. The method used in these algorithms consists in merging the Pauli rotations in the sequence that are equivalent and that are not separated by another Pauli rotation with which they anticommute. We implemented the algorithm provided in Reference [8] such that it does not increase the number of gates in the circuit, so as not to increase the execution time of the InternalHOpt procedure. This procedure, which we refer to as TMerge, has a complexity of \(\mathcal{O}(nM+nm^{2})\) where \(n\) is the number of qubits, \(M\) is the number of gates in the input circuit and \(m\) is the number of Pauli rotations. If the \(T\)-gate reduction rules used in the moveH subroutine only consist in merging two adjacent \(R_{Z}\) gates together, then we can infer that the number of \(T\) gates in the circuit after the moveH procedure has been performed is always higher than or equal to the number of \(T\) gates in the circuit after the TMerge procedure has been performed; this is corroborated by the results of our benchmarks.
We evaluate the different methods on a set of commonly used circuits which were obtained from Reference [27] and Reference [28]. We extended the set of circuits over which the benchmarks are performed by adding larger quantum circuits to better test the scalability of the different approaches on various types of circuits. We added large adder circuits performing an addition over two registers of size 1024, 2048 and 4096 qubits; the implementation of these circuits is based on Reference [29]. We also added a circuit, given in Reference [30], that is an implementation of the block cipher DEFAULT. Finally, we added quantum circuits implementing the modular exponentiation part of Shor's algorithm for number factoring over 4, 8, 16 and 32 bits.
The TMerge and InternalHOpt procedures were implemented in the Rust programming language, while the moveH procedure was extracted from the implementation realized in Haskell by the authors of the method [31]. Our implementation of the InternalHOpt procedure used for the benchmarks is publicly available [32], along with the circuits used in the benchmarks that have a reasonable size. The operations performed by the InternalHOpt algorithm mostly consist in bitwise operations between vectors in order to update the tableau. Thus, our algorithm can greatly benefit from SIMD (Single Instruction, Multiple Data) instructions, which enable the simultaneous execution of some of these bitwise operations. This has, for example, been used in the CHP stabilizer circuit simulator [23]. We also exploit this concept in our implementation of the InternalHOpt procedure by using 256-bit wide Advanced Vector Extensions (AVX).
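The gain can be illustrated in a few lines (a Python sketch of the same idea; the Rust implementation does the analogous thing with AVX registers):

```python
import numpy as np

n = 1024
row_a = np.random.randint(0, 2, n).astype(np.uint8)
row_b = np.random.randint(0, 2, n).astype(np.uint8)

# Packing F2 rows into machine words lets one XOR update many tableau
# entries at once instead of looping bit by bit.
packed = np.packbits(row_a) ^ np.packbits(row_b)   # n bits -> n/8 bytes
assert np.array_equal(np.unpackbits(packed), row_a ^ row_b)
```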
**Benchmarks analysis.** The results of our benchmarks are presented in Table 1. We can notice that the InternalHOpt procedure outperforms the moveH procedure in terms of execution time on some circuits of large size. For instance, the Shor\({}_{32}\) circuit was optimized in 173 seconds by the InternalHOpt procedure while the two other methods did not succeed in optimizing the circuit in less than a day. However, the InternalHOpt procedure alone does not always achieve the best results in the number of internal Hadamard gates. For the set of circuits and methods considered, the method achieving the best results in terms of internal Hadamard gates is the TMerge+InternalHOpt approach. Indeed, the TMerge+InternalHOpt approach always leads to a number of internal Hadamard gates that is lower than or equal to the numbers obtained by the moveH procedure. This fact also holds for the number of \(T\) gates. However, for some circuits, the performances of the moveH procedure and the TMerge+InternalHOpt approach are similar with respect to the \(H\)-count and \(T\)-count metrics, but the execution time of the moveH procedure is much lower. This is notably the case for the adder circuits of large size. These adder circuits have a low depth and a high number of qubits, which is far from the ideal case for the TMerge+InternalHOpt approach since the complexity of both procedures is dependent on the number of qubits. On the contrary, the moveH procedure is not affected by the number of qubits as it has a complexity of \(\mathcal{O}(M^{2})\) where \(M\) is the number of gates within the circuit. This explains why the moveH procedure is competitive for these adder circuits and has an execution time that is close to the one of the InternalHOpt procedure.
Another series of circuits for which the moveH procedure is much faster than the TMerge+InternalHOpt approach is the "\(\mathrm{GF}(2^{n})\) Mult" circuits. This behaviour can be explained by analyzing the structure of the "\(\mathrm{GF}(2^{n})\) Mult" circuits and the design of the TMerge algorithm. The "\(\mathrm{GF}(2^{n})\) Mult" circuits all implement a sequence of Pauli rotations that are mutually commuting, which is why no internal Hadamard gate is required for these circuits. In the worst case, for every pair of Pauli rotations, the TMerge procedure will check whether the two Pauli rotations commute or not. This routine, which seems unnecessary in the case where we know that the Pauli rotations are all mutually commuting, is particularly expensive for the "\(\mathrm{GF}(2^{n})\) Mult" circuits for which \(n\) is high, since the number of Pauli rotations increases drastically with respect to \(n\).
**Outlook.** Our primary motivation for optimizing the number of internal Hadamard gates is to foster the minimization of \(T\) gates. Conversely, our benchmarks show that optimizing the number of \(T\) gates leads to better minimization of the number of internal Hadamard gates. This interdependence between the \(T\)-count and \(H\)-count minimization problems could lead us to think that a second round of \(T\)-count optimization followed by an \(H\)-count optimization could lead to a lower number of internal Hadamard gates. Our investigations on this second round of optimization have not been fruitful, as we did not succeed in reducing the number of internal Hadamard gates below the numbers obtained by the TMerge+InternalHOpt approach. It seems that once the TMerge procedure has been performed, it becomes difficult to modify the underlying sequence of Pauli rotations in such a way that it enables further reduction in the number of internal Hadamard gates. Our conclusion here is only based on some of our tests; more investigations with a wide variety of \(T\)-count optimizers should be performed to know whether or not this second round of optimization could lead to an improvement in the number of internal Hadamard gates.
Two lines of investigation on how to perform the optimization of internal Hadamard gates more efficiently can be drawn from these benchmarks. Firstly, the TMerge procedure is outperformed, with respect to execution time, by the moveH procedure on some circuits such as the "\(\mathrm{GF}(2^{n})\) Mult" circuits: can the complexity of the TMerge procedure be improved so that it is more competitive on these circuits? Secondly, is it possible to design an algorithm similar to the moveH procedure, so that it has approximately the same execution time, but which systematically obtains the same number of \(T\) gates as the TMerge procedure and which optimally minimizes the number of internal Hadamard gates in the resulting sequence of Pauli rotations, as done by the InternalHOpt procedure?
## 7 Conclusion
We presented an algorithm to realize the synthesis of a sequence of Pauli rotations over the \(\{X,\mathrm{CNOT},S,H,R_{Z}\}\) gate set using a minimal number of Hadamard gates and with a time com |
2308.12370 | AdVerb: Visually Guided Audio Dereverberation | We present AdVerb, a novel audio-visual dereverberation framework that uses
visual cues in addition to the reverberant sound to estimate clean audio.
Although audio-only dereverberation is a well-studied problem, our approach
incorporates the complementary visual modality to perform audio
dereverberation. Given an image of the environment where the reverberated sound
signal has been recorded, AdVerb employs a novel geometry-aware cross-modal
transformer architecture that captures scene geometry and audio-visual
cross-modal relationship to generate a complex ideal ratio mask, which, when
applied to the reverberant audio predicts the clean sound. The effectiveness of
our method is demonstrated through extensive quantitative and qualitative
evaluations. Our approach significantly outperforms traditional audio-only and
audio-visual baselines on three downstream tasks: speech enhancement, speech
recognition, and speaker verification, with relative improvements in the range
of 18% - 82% on the LibriSpeech test-clean set. We also achieve highly
satisfactory RT60 error scores on the AVSpeech dataset. | Sanjoy Chowdhury, Sreyan Ghosh, Subhrajyoti Dasgupta, Anton Ratnarajah, Utkarsh Tyagi, Dinesh Manocha | 2023-08-23T18:20:59Z | http://arxiv.org/abs/2308.12370v1 | # AdVerb: Visually Guided Audio Dereverberation
###### Abstract
We present AdVerb, a novel audio-visual dereverberation framework that uses visual cues in addition to the reverberant sound to estimate clean audio. Although audio-only dereverberation is a well-studied problem, our approach incorporates the complementary visual modality to perform audio dereverberation. Given an image of the environment where the reverberated sound signal has been recorded, AdVerb employs a novel geometry-aware cross-modal transformer architecture that captures scene geometry and audio-visual cross-modal relationship to generate a complex ideal ratio mask, which, when applied to the reverberant audio predicts the clean sound. The effectiveness of our method is demonstrated through extensive quantitative and qualitative evaluations. Our approach significantly outperforms traditional audio-only and audio-visual baselines on three downstream tasks: speech enhancement, speech recognition, and speaker verification, with relative improvements in the range of 18% - 82% on the LibriSpeech test-clean set. We also achieve highly satisfactory RT60 error scores on the AVSpeech dataset.
## 1 Introduction
Reverberation occurs when an audio signal reflects from multiple surfaces and objects in the environment to alter the dry sound thereby degrading its quality. Far-field speech recorded at a considerable distance from the speaker is significantly degraded by the strong reverberation effects caused by the environment. The amount of reverberation is highly correlated to the geometry of the surroundings and the materials present in the vicinity [9, 11]. For instance, the auditory experience changes drastically when listening to a pleasant symphony in a large empty hallway vs. a relatively small furnished living room (Fig. 2). Recent studies have shown that the reverberation effects can be estimated from a single image of the environment with reasonable accuracy [69, 46, 36]. Removal of reverberation in recorded speech signals is highly desirable and would help improve the performance of several other auxiliary downstream tasks like automatic speech recognition (ASR), speaker verification (SV), source separation (SP), speech enhancement (SE), etc., which are widely used in several day-to-day tools.
Audio-only dereverberation is a well-studied problem with various systems achieving encouraging results [53, 34, 99, 91, 90]. In contrast, using the visual stream as an additional cue to solve this task is a particularly understudied problem. We attribute the lack of research in this space to the scarcity of datasets. Most open-source datasets, both real and synthetic, contain only room impulse responses (RIRs) with no information about their source of origin [77, 83]. Note that obtaining such RIRs can be challenging as doing so requires access to the physical environment, thereby limiting their applicability. However, in real-world settings, reverberant audio is naturally accompanied by a visual stream; video conferencing, augmented reality (AR),
Figure 1: We present AdVerb, a novel audio-visual dereverberation framework that leverages visual cues of the environment to estimate clean audio from reverberant audio. E.g., given a reverberant sound produced in a large hall, our model attempts to remove the reverb effect to predict the anechoic or clean audio.
and web video indexing are some examples.
Recently, audio-visual speech enhancement methods [95, 79, 12, 48] have shown significant improvements over the audio-only speech enhancement approaches. These tasks benefit from the presence of sound-producing objects in the visual scene, which allows the model to effectively utilize these strong stimuli for accomplishing the task. Many of these approaches track the lip movements of the speaker to separate the noise from the voice components in degraded speech which builds on the assumption that a speaker is always close to and facing the camera. These assumptions might not always hold in our case as the scope of the problem under consideration (mid/far-field) makes it difficult to obtain such cues. Thus, in a real-world setting, the available cue for audio-visual dereverberation is a panoramic view of the environment with or without a speaker in the field of view. Effectively utilizing visual cues in order to perform audio-visual dereverberation would require the model to understand the room's implicit geometric and material properties, which poses its own challenges.
**Our Contributions:** We propose AdVerb, comprising a modified conformer block [24] with specially designed positional encoding to learn audio-visual dereverberation. The network takes corrupted audio and the corresponding visual image1 of the surrounding environment (from where the RIR is obtained) as input to perform this task (Fig. 1). Our approach employs a _novel geometry-aware module_ with cross-modal attention between the audio and visual modalities to generate a _complex ideal ratio mask_, which is applied to the reverberant spectrogram to obtain the estimated clean spectrogram. This conformer block consists of a _modified (Shifted) Window Block_[44] and _Panoptic Blocks_ to combine local and global geometry relations. We discuss key motivations behind our approach in Section 4. To learn audio-visual dereverberation, AdVerb solves two objectives, _Spectrogram Prediction Loss_ and _Acoustic Token Matching Loss_, which makes the output audio retain phonetic and prosodic properties. To summarize, our main contributions are as follows:
Footnote 1: We use panoramic images to train; inference can be done on both panoramic and non-panoramic images.
**(1)** We propose AdVerb, _a novel cross-modal framework_ for dereverberating audio by exploiting complementary low-level visual cues and specially designed relative position embedding.
**(2)** To this end, AdVerb employs _a novel geometry-aware conformer network_ to capture 3D spatial semantic information to equip the network with salient vision cues through (Shifted) Window Blocks and Panoptic Blocks.
**(3)** Our architecture involves the prediction of _complex ideal ratio mask_ and simultaneous optimization of two objective functions to estimate the dereverbed speech.
**(4)** On objective evaluation, our approach significantly outperforms traditional audio-only and audio-visual [12] baselines with relative improvements in the range of 18% - 82% on three downstream tasks: speech enhancement, speech recognition, and speaker verification, when evaluated on the LibriSpeech test-clean set at all difficulty levels. It also achieves highly satisfactory RT60 error scores on the AVSpeech dataset.
**(5)** User study analysis reveals our method outperforms prior approaches on perceptual audio quality assessment.
## 2 Related Works
**Audio Dereverberation:** In communication and speech processing applications, reverberation can reduce intelligibility and weaken a dry audio signal [53, 34, 99, 91, 90]. Lately, there has been a paradigm shift from using the traditional signal processing-based methods to neural networks and, subsequently, deep learning-based methods for dereverberation. Kinoshita _et al._[35] presents a deep neural network to estimate the power spectrum of the target sig
Figure 2: Reverberation is a function of the speaker’s relative position and the surrounding environment. The visual signals present critical details that determine the nature of the distortion.
nal for weighted prediction error. Extending this, Wang _et al._[87] deploy a CNN-based model to separate the real and imaginary parts of clean speech. Typically, there are two prominent ways of training such models: through supervised learning [92, 45] or through adversarial networks (GANs) [73, 75]. Audio reverberation in nature is heavily influenced by room acoustics [43]. We find studies in the literature that try to capture room-specific information for finer modeling of acoustic environments [72, 23]. Another line of work [80, 41] attempts to extract visual features of target lip movements. Work from Chen _et al._[12] is most similar in spirit to our proposed approach. These studies motivate us to pursue audio-visual dereverberation by leveraging room-aware geometric cues. Our framework exploits panoramic image features and is applicable even for out-of-view speaker cases.
**Room Impulse Response and Geometry Awareness:** For a given environment, the amount of reverberation in the speech signal is mathematically described using a function known as room impulse response (RIR). RIR generators are used to simulate large-scale speech training data [63, 64]. While [28, 71, 78] engage dedicated in-room amenities to estimate this function, another line of research [5, 50, 9, 81, 62] choose to produce RIRs synthetically. These works [37, 69] estimate RIRs from an RGB and depth image. One downside of these approaches is that they require access to paired image and impulse response data. In contrast, some prior methods [32, 33, 51] for generating RIR operate by using images taken at arbitrary distances from the point of audio capture.
Video streams, by nature, capture the natural association between visual and audio modalities. Wang _et al._[85] propose a geometry-aware approach for room layout estimation by horizon depth, which is only effective in the horizontal direction. Hu _et al._[31] and Eder _et al._[16] introduce gradient of depth and plane aware loss, respectively for improved depth estimation of panoramic images. These works inspire us to leverage room geometry to model this problem.
**Audio-Visual Learning:** Cross-modal learning powered by large-scale video datasets has been pushing boundaries in applications like audio-visual sound separation [98, 97, 22, 19, 93], audio-visual speech enhancement [1, 2, 27, 94], active speaker detection [3, 4, 82, 67], talking head generation [86, 13, 60], embodied AI for audio-visual navigation [7, 10, 47, 96], etc. In addition, many recent works have utilized paired audio-visual data for representation learning. Owens _et al._[56] learned visual representations for materials from impact sounds. Another line of work learns features, scene structure, and geometric properties [14, 57, 20] respectively from audio. However, our approach to estimating the geometric cues for audio-visual dereverberation is complementary to these methods.
## 3 Problem Formulation
We propose a novel framework that takes reverberant speech \(\mathcal{A}_{r}\) and the corresponding environment panoramic image \(\mathcal{V}_{r}\) as input and outputs estimated clean audio \(\mathcal{A}_{e}\). Both \(\mathcal{V}_{r}\) and \(\mathcal{A}_{r}\) are captured from the listener position focusing on the environment surrounding the speaker (considers far, mid, and near field examples). The reverberation effects can be described using a transfer function known as room impulse response \(\mathcal{R}(t)\). \(\mathcal{A}_{r}\) can be obtained by convolving clean speech \(\mathcal{A}_{s}\) with \(\mathcal{R}(t)\) (Equation 1) [54]. Here, \(\mathcal{R}\) depends on the listener and speaker positions, room geometry, and acoustic material characteristics.
\[\mathcal{A}_{r}(t)=\mathcal{A}_{s}(t)*\mathcal{R}(t) \tag{1}\]
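As an illustration, Equation 1 amounts to a one-dimensional discrete convolution; a minimal sketch follows (truncating to the input length is our convention, not specified by the paper):

```python
# A sketch of Eq. (1): reverberant speech is the clean signal convolved with
# the room impulse response. Truncation to the input length is our convention.
import numpy as np

def reverberate(clean: np.ndarray, rir: np.ndarray) -> np.ndarray:
    return np.convolve(clean, rir)[: len(clean)]
```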
## 4 Our Approach: AdVerb
Fig. 3 depicts a pictorial representation of AdVerb, our proposed audio-visual dereverberation model. Our primary objective is to learn the inverse function given a reverberant audio signal by exploiting the audio and visual cues. Elaborations on the individual components are as follows:
### Feature Encoder
**Vision Encoder:** To encode geometric layout-specific visual features \(\mathcal{E}_{\mathcal{V}}(\cdot)\), we use HorizonNet [76], which is based on a ResNet-50 [26] backbone. HorizonNet takes a panoramic image of the surroundings as input \(\mathcal{V}\), with dimensions 512 \(\times\) 1024 \(\times\) 3. The output is a set of 2D feature maps at 4 different scales. For each feature map, the height is down-sampled and the width \(\mathcal{N}\) is up-sampled to obtain 1D spatial-property-infused feature sequences in \(\mathbb{R}^{\mathcal{D}/4}\); all the feature maps are then concatenated to obtain features in \(\mathbb{R}^{\mathcal{D}}\), where \(\mathcal{D}\) is 1024 in our case.
**Audio Encoder:** For audio features, we apply the Short-Time Fourier Transform (STFT) \(\mathcal{E}_{\mathcal{A}}(\cdot)\) to the reverberant 1D audio \(\mathcal{A}\) to obtain a 2D spectrogram \(\mathcal{A}(t,f)\), where \(t\) and \(f\) index time and frequency, respectively. In contrast to prior work, which learns a convolution network for this transformation [8], we employ the STFT with the motivation of using complex masks for learning dereverberation. We calculate the STFT with a window of size 400 samples (25 milliseconds), a hop length of 160 samples (10 milliseconds), and a 512-point FFT. All audio is sampled at 16 kHz.
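A minimal sketch of this audio encoder, assuming librosa for the STFT (any STFT implementation with the stated parameters would do):

```python
# A sketch of the audio encoder E_A with the parameters stated above:
# 400-sample (25 ms) window, 160-sample (10 ms) hop, 512-point FFT, 16 kHz.
import numpy as np
import librosa

def audio_encoder(waveform: np.ndarray) -> np.ndarray:
    """Map a 1D waveform to its complex 2D spectrogram A(t, f)."""
    spec = librosa.stft(
        waveform,
        n_fft=512,       # 512-point FFT -> 257 frequency bins
        win_length=400,  # 25 ms at 16 kHz
        hop_length=160,  # 10 ms at 16 kHz
    )
    return spec.T        # shape (num_frames, 257), complex-valued
```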
### Complex Ideal Ratio Masks
**Intuition Behind Masks:** We hypothesize that learning to generate clean anechoic speech in an end-to-end fashion might not be effective owing to the nature of the task. Traditionally, the input audio learns to align to the visual cues, which proves to be effective for the visual acoustic matching [8] task. Similarly, synthesizing speech directly has also seen huge success in audio-visual speech enhancement, and
separation [25, 94], where the visual cues have a high correlation with the contents of the speech, e.g., lip movements. However, the dereverberation algorithm tries to learn an inverse function, making the task intrinsically challenging. From Fig. 2 it is evident that the same speech content incurs heavy reverberation artifacts when the speaker is far away in a reverberant environment, while the corruption of the speech signal is not significant when the speaker is closer in a relatively less reverberant environment. Thus, we hypothesize that such visual cues can instead be used to learn a mask that, when applied to the reverberated speech, suppresses reverberation effects. STFT mask prediction has seen success in the past in a variety of tasks, including source separation [98, 22], speech enhancement [89], etc.
**Complex Ideal Ratio Mask Construction:** A complex ideal ratio mask (cIRM) [88] is an extension of the conventional ideal ratio mask to process the real and imaginary components of an audio signal separately. The product of cIRM and reverberant speech results in estimated clean speech. It is calculated in the time-frequency (T-F) domain, and thus learning to generate cIRM enhances both the magnitude and phase of reverberant speech, improving overall perceptual speech quality. Given the STFT of reverberant speech, \(\mathcal{A}_{r}(t,f)\), and the cIRM, \(\mathcal{C}(t,f)\), clean speech, \(\mathcal{A}_{s}(t,f)\), is computed as follows:
\[\mathcal{A}_{s}(t,f)=\mathcal{C}(t,f)*\mathcal{A}_{r}(t,f) \tag{2}\]
where \(t\) and \(f\) index time and frequency, respectively. Since the STFT is complex, \(*\) indicates complex multiplication. \(\mathcal{C}(t,f)\) is computed by dividing the STFT of direct speech by the STFT of reverberant speech:
\[\mathcal{A}_{s}^{r}(t,f)+j\mathcal{A}_{s}^{i}(t,f)=\mathcal{C}(t,f)*\left(\mathcal{A}_{r}^{r}(t,f)+j\mathcal{A}_{r}^{i}(t,f)\right) \tag{3}\]
\[\mathcal{C}(t,f)=\frac{\mathcal{A}_{s}^{r}(t,f)+j\mathcal{A}_{s}^{i}(t,f)}{\mathcal{A}_{r}^{r}(t,f)+j\mathcal{A}_{r}^{i}(t,f)}*\frac{\mathcal{A}_{r}^{r}(t,f)-j\mathcal{A}_{r}^{i}(t,f)}{\mathcal{A}_{r}^{r}(t,f)-j\mathcal{A}_{r}^{i}(t,f)} \tag{4}\] \[=\frac{\mathcal{A}_{s}^{r}(t,f)\mathcal{A}_{r}^{r}(t,f)+\mathcal{A}_{s}^{i}(t,f)\mathcal{A}_{r}^{i}(t,f)}{\mathcal{A}_{r}^{r2}(t,f)+\mathcal{A}_{r}^{i2}(t,f)}+j\frac{\mathcal{A}_{s}^{i}(t,f)\mathcal{A}_{r}^{r}(t,f)-\mathcal{A}_{s}^{r}(t,f)\mathcal{A}_{r}^{i}(t,f)}{\mathcal{A}_{r}^{r2}(t,f)+\mathcal{A}_{r}^{i2}(t,f)}\]
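In code, Eq. (4) reduces to a complex division; a minimal sketch follows (the small \(\epsilon\) stabilizer is our addition, not part of the paper's equations):

```python
# A sketch of constructing the cIRM of Eq. (4) from paired clean/reverberant
# STFTs and applying it as in Eq. (2). The eps term is our numerical guard.
import numpy as np

def compute_cirm(A_s: np.ndarray, A_r: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # A_s * conj(A_r) / |A_r|^2 is exactly Eq. (4) in rectangular form
    return A_s * np.conj(A_r) / (np.abs(A_r) ** 2 + eps)

def apply_cirm(C: np.ndarray, A_r: np.ndarray) -> np.ndarray:
    # Eq. (2): estimated clean STFT via elementwise complex multiplication
    return C * A_r
```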
### Cross-Modal Geometry-Aware Conformer
**Overview:** In this module, we aim to learn cross-modal attention between audio and visual features, which enables incorporating fine-grained interactions between them in a geometry-aware fashion. The visual and audio feature maps obtained from corresponding encoders are used as inputs here. The sequence of features from each time step represents a part of the input stream. These sequences are then passed to the conformer-based [24] cross-modal encoder. For the audio stream, we obtain a complex ideal ratio mask by employing a complex-valued self-attention block. This is then fed into the geometry-aware cross-modal self-attention (GCA) block for audio-visual modeling. We specially design a relative position embedding to encode position-specific information. Finally, the learned representations are passed through a complex-valued decoder to
Figure 3: Overview of **AdVerb**. AdVerb estimates clean source audio from a reverberant speech signal leveraging two primary components: (1) The visual stream processing path comprises a HorizonNet-based backbone \(\mathcal{E}_{\mathcal{V}}(\cdot)\) to obtain 1D feature sequences, which are subsequently passed to the cross-modal geometry-aware attention subnetwork. (2) The audio processing module applies the STFT \(\mathcal{E}_{\mathcal{A}}(\cdot)\) to get 2D spectrograms, which are fed to the cross-modal encoder. The cross-attention subnetwork, powered by geometry-aware (Shifted) Window Blocks, Panoptic Blocks, and Relative Position Embedding, generates a complex ideal ratio mask.
generate the predicted cIRM. We next describe these components in detail.
**Complex Self-Attention:** The self-attention mechanism [84] transforms a sequence into a set of vectors, where each vector is computed as the weighted sum of all other vectors. Here the weights are determined by a learnable function based on the similarities between the input and output. The primary difference between conventional and complex self-attention (CSA) is that the latter operates on complex-valued representations and calculates self-attention separately on the real and imaginary parts. We use CSA instead of vanilla SA layers because of the nature of our input spectrogram. For our implementation, we use Complex-Valued Time-Frequency SA (CTSA) proposed in [39], which improves over CSA by accurately modeling inter-dependencies between real and imaginary components of the encoded audio features.
**Geometry-Aware Cross-Modal Encoder:** A wealth of studies [8, 14] establish that direct concatenation of cross-modal features [21, 55] might lead to suboptimal performance. A key observation here is that such techniques are not suitable in our case, as our application demands more robust reasoning about how different regions of the 3D space contribute to the acoustics differently. For instance, if the sound originates from inside a highly absorptive chamber, less reverberation will be noticeable. In contrast, in the case of a reflective surface, an extended reverberation effect will persist. Hence, it is imperative to attend to image patches to study how they contribute to the overall acoustics.
Inspired by the Swin Transformer [44], our novel GCA module exploits window partitioning for robust spatial modeling ability. However, we observe that using window partitioning alone limits the conception of a holistic representation of the visual scene. As a result, we equip our Transformer module with (Shifted) Window Blocks and Panoptic Blocks to combine the local and global geometry relations efficiently. Each loop contains four consecutive blocks: a Window Block, a Panoptic Block, a Shifted Window Block, and another Panoptic Block. As shown in Fig. 4, the individual blocks follow the Transformer [84] architecture, with modifications before and after the multi-head attention layer. Note that neither the dimension of the sequence nor the corresponding positions of tokens are altered in any block.
In the Window Block, we use a patchwise partition on the input feature sequence to obtain \(\frac{\mathcal{N}}{\mathcal{N}_{w}}\) window feature sequences in \(\mathbb{R}^{\mathcal{N}_{w}\times\mathcal{D}}\), where \(\mathcal{N}_{w}\) is the window length and is set to 16 in our case. The window partition captures local geometry relations and reduces the computation required to calculate self-attention. Subsequently, window features are combined after the multi-head attention, as depicted in Fig. 4A.
Inspired by [44], we deploy the Shifted Window Block, which connects adjacent windows to facilitate the exchange of information between nearby patches. Here a fold and unfold operation is performed by a fraction of \(\frac{\mathcal{N}_{w}}{2}\) to retain the original positions of the feature sequence even after merging; refer to Fig. 4B. Finally, the Panoptic Block follows the native Transformer [84] encoder to enhance holistic geometry-aware relations of the visual scene (Fig. 4C).
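The bookkeeping performed by these blocks can be summarized as follows; a minimal sketch in PyTorch (shapes follow the text; the attention internals are omitted):

```python
# A sketch of the 1D (shifted) window bookkeeping described above, for a
# feature sequence of shape (N, D) with window length N_w = 16 (N divisible
# by N_w is assumed). Attention itself is omitted.
import torch

N_W = 16

def window_partition(x: torch.Tensor) -> torch.Tensor:
    """(N, D) -> (N / N_w, N_w, D): local windows for the Window Block."""
    n, d = x.shape
    return x.reshape(n // N_W, N_W, d)

def window_merge(windows: torch.Tensor) -> torch.Tensor:
    """Inverse of window_partition, restoring the original token order."""
    return windows.reshape(-1, windows.shape[-1])

def fold(x: torch.Tensor) -> torch.Tensor:
    """Shift by N_w / 2 so adjacent windows exchange information."""
    return torch.roll(x, shifts=-N_W // 2, dims=0)

def unfold(x: torch.Tensor) -> torch.Tensor:
    """Undo the shift, returning every token to its original position."""
    return torch.roll(x, shifts=N_W // 2, dims=0)
```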
To model the natural association between visual and audio streams by ensuring cross-modal information flow, we employ the conformer variant [24] of encoder blocks, which adjoins a convolution layer inside the block for modeling local interactions of audio features. Building on this, we insert one cross-modal attention layer \(\xi_{cm}\) after the first feed-forward layer, described as follows:
\[\xi_{cm}\left(\mathcal{A}_{i},\mathcal{V}_{i}\right)=\mathrm{softmax}\left( \frac{\mathcal{A}_{i}^{Q}\mathcal{V}_{i}^{K^{T}}}{\sqrt{\mathcal{S}}}\right) \mathcal{V}_{i}^{V}. \tag{5}\]
where superscripts \(K\), \(Q\), and \(V\) indicate Key, Query, and Value, respectively. Here, we compute the attention scores between the visual (\(\mathcal{V}_{i}\)) and the audio (\(\mathcal{A}_{i}\)) sequences by a dot-product, followed by scaling by \(\frac{1}{\sqrt{\mathcal{S}}}\) and softmax normalization; the result is then used to weight \(\mathcal{V}_{i}\). The key observation here is that cross-modal attention thus designed enables the model to attend to spatial regions in the visual stream and comprehend their acoustic nature.
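A single-head, unbatched sketch of Eq. (5) follows (the paper uses a multi-head conformer layer; this is our simplified illustration):

```python
# A sketch of the cross-modal attention of Eq. (5): audio features supply the
# queries; visual features supply the keys and values. Single-head, unbatched.
import torch
import torch.nn.functional as F

def cross_modal_attention(A: torch.Tensor, V: torch.Tensor,
                          W_q: torch.Tensor, W_k: torch.Tensor,
                          W_v: torch.Tensor) -> torch.Tensor:
    """A: (T_a, D) audio sequence; V: (N, D) visual sequence; W_*: (D, D)."""
    q, k, v = A @ W_q, V @ W_k, V @ W_v
    attn = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v  # each audio token re-expressed in terms of visual cues
```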
Figure 4: Overview of the Geometry-Aware Cross-Modal Attention block. Window and Panoptic Relative Position Embedding (RPE) are fused into Cross-modal Self-Attention (CSA) blocks. In Window Block A, windows are partitioned and merged before and after CSA. In B, sequence features are folded and unfolded before and after CSA, respectively. C integrates another RPE into CSA.
**Position Embedding:** The conventional attention module is found to be insensitive to the positions of the tokens, producing suboptimal results. To address this, we introduce a specially designed relative position embedding (RPE) [61] to strengthen its spatial identification ability. We denote the input sequence of multi-head cross-modal self-attention as \(\mathcal{X}=\left\{x_{i}\right\}_{i=1}^{\mathcal{Z}}\), where \(\mathcal{Z}\) is the sequence length and \(x_{i}\in\mathbb{R}^{\mathcal{D}}\). A bias matrix \(\mathcal{B}\in\mathbb{R}^{\mathcal{Z}\times\mathcal{Z}}\) is added to the scaled query-key product [84]:
\[\alpha_{ij}=\frac{1}{\sqrt{\mathcal{D}}}\left(x_{i}\mathcal{W}^{ Q}\right)\left(x_{j}\mathcal{W}^{K}\right)^{T}+\mathcal{B}_{ij}, \tag{6}\] \[\text{Attention }(\mathcal{X})=\mathrm{Softmax}(\alpha)\left( \mathcal{X}\mathcal{W}^{V}\right),\]
where \(\mathcal{W}^{Q},\mathcal{W}^{K},\mathcal{W}^{V}\in\mathbb{R}^{\mathcal{D} \times\mathcal{D}}\) are learnable projection matrices and each bias \(\mathcal{B}_{ij}\) comes from a learnable scalar table. In the (Shifted) Window Block, \(\mathcal{Z}=\mathcal{N}_{w}\). We denote the learnable scalar table as \(\left\{b_{k}\right\}_{k=-\mathcal{N}_{w}+1}^{\mathcal{N}_{w}-1}\), and \(\mathcal{B}_{ij}\) corresponds to \(b_{j-i}\). This Patch RPE is fed into multi-head attention.
For the Panoptic Block, we consider \(\mathcal{Z}=\mathcal{N}\). Here we propose a symmetric representation based only on distance and denote the learnable scalar table as \(\left\{b_{k}\right\}_{k=0}^{n}\), where \(n=\frac{\mathcal{N}}{2}\). When \(|j-i|\leq\frac{\mathcal{N}}{2}\), \(\mathcal{B}_{ij}\) corresponds to \(b_{|j-i|}\); otherwise \(\mathcal{B}_{ij}\) corresponds to \(b_{\mathcal{N}-|j-i|}\).
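A sketch of building these two bias matrices from their learnable scalar tables (our illustration of the indexing rules above):

```python
# A sketch of the two relative position biases described above. For a window
# of length N_w the table holds 2*N_w - 1 scalars indexed by j - i; for the
# Panoptic Block the table is symmetric in |j - i|, wrapped at N / 2.
import torch

def window_rpe_bias(table: torch.Tensor, n_w: int) -> torch.Tensor:
    """table: (2*N_w - 1,) learnable scalars; returns an (N_w, N_w) bias B."""
    idx = torch.arange(n_w)
    rel = idx[None, :] - idx[:, None] + (n_w - 1)  # map j - i into [0, 2*N_w - 2]
    return table[rel]

def panoptic_rpe_bias(table: torch.Tensor, n: int) -> torch.Tensor:
    """table: (N // 2 + 1,) scalars; B_ij = b_{|j-i|} or b_{N-|j-i|}."""
    idx = torch.arange(n)
    dist = (idx[None, :] - idx[:, None]).abs()
    dist = torch.minimum(dist, n - dist)           # wrap distances beyond N / 2
    return table[dist]
```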
### Complex Mask Decoder
The complex mask decoder takes input from the conformer and generates a complex ideal ratio mask \(\mathcal{C}\). The decoder comprises a complex-valued ReLU activation function followed by a complex-valued convolutional layer, a self-attention module, a dense block, and finally a normalization layer.
### Vocoder
After generating the complex ideal ratio mask \(\mathcal{C}\), we obtain the output spectrogram \(\mathcal{G}\) by performing the complex multiplication operation between \(\mathcal{C}\) and the reverberant spectrogram (Eq. 2). Next, we use a pre-trained vocoder \(\chi\)[73] to convert the spectral representation of the audio signal to the waveform. We perform this step specifically to calculate the SSL-based HuBERT Loss, which we describe later.
### Model Optimization
**Spectrogram Prediction Loss**: The first of the two objective functions we use for model optimization is the Spectrogram Prediction Loss (SP). Learning to reconstruct the clean spectrogram is a common optimization methodology used in speech enhancement and dereverberation [10, 40]. It computes the \(L_{2}\) norm between the spectrogram predicted by our network \(\Theta\) and the ground truth clean spectrogram \(\mathcal{A}_{s}\). It is defined as:
\[\mathcal{L}_{SP}=\mathbb{E}_{(\mathcal{A}_{s},\mathcal{V})\sim\mathcal{U}} \left\|\phi(\Theta(\mathcal{A}_{r},\mathcal{V}))-\phi(\mathcal{A}_{s})) \right\|_{2}, \tag{7}\]
where \(\mathcal{A}_{r}\) is the reverberant audio and \(\mathcal{V}\) is the corresponding panoramic image in some distribution \(\mathcal{U}\). \(\phi\) is the function that transforms the speech waveform to the corresponding spectrogram representation.
**Acoustic Token Matching Loss**: Inspired by the recent success of self-supervised speech representation learning [49], we introduce Acoustic Token Matching Loss (ATM). The traditional MSE loss ignores the inherent speech characteristics, like phonetic and prosodic properties, that are essential for learning and reconstructing speech information [29]. Speech representations learned with SSL effectively encode such characteristics in their latent representations [59]. Thus, we propose a simple yet effective method to enforce the output speech from AdVerb to encode such information by solving the Acoustic Token Matching Loss. To calculate ATM loss, we first generate latent representations \(\tilde{\mathcal{H}}\in\mathbb{R}^{\mathcal{J}\times d}\) from the clean waveform \(\mathcal{A}_{s}\) with a pre-trained HuBERT [30] model \(\mathrm{e}(\cdot)\), where \(d\) is the HuBERT embedding dimension and \(\mathcal{J}\) is the sequence length. Next, we cluster these latent representations using _K-means_ to generate a sequence of pseudo-labels \(\mathcal{P}=\left\{p_{t}\right\}_{t=1}^{\mathcal{J}}\). These pseudo-labels are representative of the latent space in our speech input. Finally, we predict these pseudo-labels from latent representations of \(\mathcal{A}_{e}\) (estimated output audio from AdVerb) obtained after passing it through HuBERT. The ATM Loss function can be expressed as follows:
\[\mathcal{L}_{ATM}(\mathrm{e};\mathcal{A}_{s},\mathcal{A}_{e})=\sum_{t\in \mathcal{J}}\log p_{f}\left(p_{t}\mid\tilde{\mathcal{H}},t\right) \tag{8}\]
where \(p_{f}\) is the distribution over the target indices at each timestep \(t\). Finally, we optimize our model with a total loss \(\mathcal{L}\) as follows:
\[\mathcal{L}=\lambda\mathcal{L}_{SP}+\mu\mathcal{L}_{ATM} \tag{9}\]
where \(\lambda,\mu\in\mathbb{R}\) are hyper-parameters to balance the contribution of each loss component.
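A sketch of this combined objective (our illustration: `hubert` and `kmeans` stand in for the pre-trained HuBERT model and the fitted K-means quantizer, and scoring pseudo-labels by cross-entropy against centroid similarities is one concrete reading of Eq. 8):

```python
# A sketch of the total objective of Eq. (9). `hubert`, `kmeans`, and
# `kmeans.centroids` are placeholders, not a specific library API.
import torch
import torch.nn.functional as F

LAMBDA, MU = 1.0, 1.0  # loss-balancing hyper-parameters (assumed values)

def total_loss(pred_spec, clean_spec, pred_wave, clean_wave, hubert, kmeans):
    # L_SP (Eq. 7): L2 distance between predicted and clean spectrograms
    l_sp = torch.norm(pred_spec - clean_spec, p=2)

    # L_ATM (Eq. 8): predict K-means pseudo-labels of the clean speech's
    # HuBERT features from the estimated speech's HuBERT features
    with torch.no_grad():
        pseudo = kmeans(hubert(clean_wave))            # (T,) cluster ids
    logits = hubert(pred_wave) @ kmeans.centroids.T    # (T, K) similarities
    l_atm = F.cross_entropy(logits, pseudo)

    return LAMBDA * l_sp + MU * l_atm
```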
## 5 Experiments and Results
For a fair assessment, we evaluate our model through speech dereverberation on three downstream tasks: speech enhancement (SE), automatic speech recognition (ASR), and speaker verification (SV). The environments are taken from Matterport3D [6], with speech samples from the LibriSpeech dataset [58].
### Dataset
**SoundSpaces-Speech Dataset:** We use the SoundSpaces-Speech dataset proposed in [12] for our experiments. It comes with paired anechoic and reverberant audio with
camera views from 82 Matterport3D [6] environments, paired with speech clips from LibriSpeech [58]. SoundSpaces [9] provides precomputed RIRs \(\mathcal{R}(t)\), which are convolved with the speech waveforms to obtain reverberant signals \(\mathcal{A}_{r}(t)\), for a total of 49,430/2,700/2,600 train/validation/test samples, respectively.
**Acoustic AVSpeech Web Videos:** Web videos offer natural supervision between visuals and acoustics in abundance. To be consistent with prior work, we use the collection from [8], which is a subset of the AVSpeech[17] dataset. The clip durations range between 3-10 seconds with a visible human subject in each video frame. To evaluate our model on real-world data in addition to synthetic data, we use these 3K samples only for testing purposes.
**Evaluation Tasks And Metrics:** We follow the standard practice of reporting Perceptual Evaluation of Speech Quality (PESQ) [66], Word Error Rate (WER), and Equal Error Rate (EER) to compare our method with the baselines for the three tasks. Following [12], we employ the pre-trained models from the SpeechBrain [65] for ASR and SV tasks. These models were evaluated on the LibriSpeech test-clean set. SV evaluation was done on a set of 80K randomly sampled utterance pairs from the test-clean set.
### Baselines
**WPE**[52]: A statistical method that estimates an inverse system for late reverberation. It deploys variance normalization to improve dereverberation results with relatively short observations.
**MetricGan+**[18]: We use the implementation by [65] for benchmarking. As presented by the authors, it can be used to optimize different metrics. We optimize PESQ to report values from the best model for individual downstream tasks.
**SkipConvGAN**[40]: A recent model where the generator network estimates a complex time-frequency mask and the discriminator aids in driving the generator to restore the lost formant structure. The model achieves SOTA results on the dereverberation task.
**HiFi-GAN**[38]: A GAN-based high-fidelity speech synthesis system that shows satisfactory results on speech dereverberation. It models periodic patterns of audio to enhance sample quality.
**VIDA**[12]: An end-to-end vision backed speech dereverberation framework. It combines RGB-D image information to estimate clean speech.
### Results
**Evaluation Setup on LibriSpeech:** We compare model performance on three speech tasks: Speech Enhancement (SE), Automatic Speech Recognition (ASR), and Speaker Verification (SV). To evaluate our trained models, we use the dereverberated version of the test-clean split of the LibriSpeech dataset. Similar to [12], for SR and SV, we either use pre-trained models from SpeechBrain [65] or fine-tune a model using the dereverberated LibriSpeech train-clean-360 split.
**Quantitative Analysis on LibriSpeech:** Table 1 compares the performance of AdVerb with the baselines. Experimental results show AdVerb outperforms all audio-only base
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline \multirow{2}{*}{**Method**} & **Speech Enhancement (SE)\({}^{\dagger}\)** & \multicolumn{2}{c}{**Speech Recognition (SR)\({}^{\dagger}\)**} & \multicolumn{2}{c|}{**Speaker Verification (SV)\({}^{\dagger}\)**} & **RTE\(\downarrow\)\({}^{*}\)** \\ & **PESQ \(\uparrow\)** & **WER \(\downarrow\)** & **WER-FT \(\downarrow\)** & **EER \(\downarrow\)** & **EER-FT \(\downarrow\)** & **(in sec)** \\ \hline Anechoic (Upper bound) & 4.72 & 2.89 & 2.33 & 1.53 & 1.57 & - \\ \hline Reverberant & 1.49 & 8.20 & 4.44 & 4.51 & 4.88 & 0.382 \\ MetricGAN+ [18]\({}^{\ddagger}\) & 2.45 (+64\%) & 7.48 (+9\%) & 4.86 (-9\%) & 4.67 (-4\%) & 2.85 (+42\%) & 0.187 \\ HiFi-GAN [38]\({}^{\ddagger}\) & 1.83 (+23\%) & 9.31 (-14\%) & 5.59 (-26\%) & 4.32 (+4\%) & 2.49 (+49\%) & 0.196 \\ WPE [52]\({}^{\ddagger}\) & 1.63 (+9\%) & 8.43 (-3\%) & 4.30 (+3\%) & 5.90 (-31\%) & 4.11 (+16\%) & 0.173 \\ SkipConvGAN [40]\({}^{\ddagger}\) & 2.10 (+41\%) & 7.22 (+12\%) & 4.17 (+6\%) & 4.86 (-8\%) & 3.98 (+18\%) & 0.119 \\ VIDA [12] & 2.37 (+59\%) & 4.44 (+46\%) & 3.66 (+18\%) & 3.97 (+12\%) & 2.40 (+51\%) & 0.155 \\ \hline AdVerb w/o Image & 2.31 (+55\%) & 3.92 (+52\%) & 3.41 (+23\%) & 3.67 (+19\%) & 2.19 (+55\%) & 0.119 \\ AdVerb w/ Random Image & 2.54 (+70\%) & 4.12 (+50\%) & 3.62 (+18\%) & 3.76 (+17\%) & 2.26 (+54\%) & 0.110 \\ AdVerb w/o ATM Loss & 2.89 (+94\%) & 4.67 (+43\%) & 3.66 (+18\%) & 3.17 (+30\%) & 2.07 (+58\%) & 0.117 \\ AdVerb w/o Complex SA & 2.91 (+95\%) & 3.63 (+56\%) & 2.98 (+33\%) & 3.21 (+29\%) & 2.10 (+57\%) & 0.117 \\ AdVerb w/o Geometry-Aware Block & 2.30 (+54\%) & 4.01 (+51\%) & 3.12 (+30\%) & 3.68 (+18\%) & 2.12 (+57\%) & 0.113 \\ AdVerb w/o RPE & 2.79 (+87\%) & 3.54 (+57\%) & 3.01 (+32\%) & 3.17 (+30\%) & 2.11 (+57\%) & 0.107 \\ AdVerb w/o Window Block & 2.81 (+89\%) & 3.61 (+56\%) & 2.99 (+33\%) & 3.14 (+30\%) & 2.12 (+57\%) & 0.108 \\ AdVerb w/o Panoptic Block & 2.92 (+96\%) & 3.59 (+56\%) & 2.92 (+34\%) & 3.29 (+27\%) & 2.01 (+59\%) & 0.107 \\ \hline
**AdVerb (ours)** & **2.96 (+98\%)** & **3.54 (+57\%)** & **2.91 (+34\%)** & **3.11 (+31\%)** & **1.98 (+59\%)** & **0.101** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of AdVerb with various baselines on multiple spoken language processing tasks based on the LibriSpeech test-clean set (marked with \(\dagger\)) and on sim-to-real transfer based on the AVSpeech dataset (marked with *). “Anechoic (Upper bound)” refers to clean speech, while “Reverberant” refers to clean speech convolved with RIR. WER-FT and EER-FT denote evaluations when the SR and SV models are finetuned with the audio-enhanced data. Numbers in parentheses denote the relative improvement compared to Reverberant. Methods marked with \(\ddagger\) are audio-only.
lines by a significant margin on all three tasks. We achieve relative improvements of 41%, 51%, and 36% over the best audio-only baseline, SkipConvGAN, on SE, SR, and SV, respectively. AdVerb also outperforms VIDA by 25%, 20%, and 22% on SE, SR, and SV, respectively, which shows the superiority of AdVerb in audio-visual dereverberation tasks.
**Quantitative Analysis on AVSpeech:** To examine the robustness of our proposed approach in real-world settings, we evaluate our model on the in-the-wild AVSpeech audio-visual dataset collected from YouTube [17]. The AVSpeech dataset has non-panoramic images; therefore the field-of-view is limited in the test dataset, and the performance of our network trained on panoramic images is not optimal. In the absence of the ground truth clean speech, we use the average reverberation time (RT) of the dereverberated speech signal for evaluation. RT is the time taken for the sound pressure in the RIR to decay by 60 decibels. We can estimate RT from the reverberant speech signal [8]. According to Equation 1, for clean speech the RIR reduces to an impulse (\(\delta(t)\)) and RT \(\approx 0\). The dereverberated speech with the least amount of reverberation will have the lowest RT. Therefore, the reverberation time error (RTE) is the average RT of the dereverberated test speech samples. From Table 1, we can see that AdVerb reports the lowest RTE.
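For reference, when the RIR itself is available, the RT60 defined above can be computed by Schroeder backward integration; a minimal sketch follows (blind estimation from reverberant speech, as required for the RTE metric here, is more involved):

```python
# A sketch of RT60 from a known RIR via Schroeder backward integration:
# fit the energy-decay curve between -5 dB and -35 dB, extrapolate to -60 dB.
import numpy as np

def rt60_from_rir(rir: np.ndarray, sr: int = 16_000) -> float:
    energy = np.cumsum(rir[::-1] ** 2)[::-1]               # Schroeder curve
    edc_db = 10 * np.log10(energy / energy[0] + 1e-12)     # decay in dB
    i5, i35 = np.argmax(edc_db <= -5), np.argmax(edc_db <= -35)
    slope = (edc_db[i35] - edc_db[i5]) / ((i35 - i5) / sr)  # dB per second
    return -60.0 / slope
```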
**Ablation Study:** To show the importance of the individual components in AdVerb, we perform an extensive ablation study shown in Table 1. Note that AdVerb sees the steepest fall in performance across tasks when trained and evaluated w/o images, i.e., in an audio-only setup. In this setup, our GCA block is replaced with a simple uni-modal self-attention block. There is also a considerable drop in performance across all tasks w/o the geometry-aware module, thus underlining the importance of this block. In this setup, our GCA block is replaced with a simple cross-modal self-attention block with queries as audio cues and keys and values as visual cues. We carry out further ablations to study the contributions of the individual components of the cross-modal geometry-aware attention block. Interestingly, the drop in performance when removing the individual elements is much smaller than when removing the entire GCA block. This underlines that these components combine to have a telling impact on the overall setup. Finally, we show that the ATM loss improves AdVerb's SR performance by a significant margin. Refer to the supplementary for further ablations.
**More Comparison Against Audio-only Methods:** Table 2 demonstrates the performance of our model against SOTA audio-only methods. AdVerb outperforms existing audio-only methods and sets new benchmarks.
**Results on Noisy Dataset**: To evaluate the robustness of the proposed AdVerb model to outdoor unwanted noise, we add ambient sounds from urban environments to the LibriSpeech test-clean dataset using the MUSAN dataset [70]. Following [10], we maintain an SNR of 20 for our mixture. Table 3 compares the performance of AdVerb on three downstream speech-based tasks. All experiments were done for the non-fine-tuned version of our experimental setup, where a pre-trained model was used from SpeechBrain. Though we see a drop in performance compared to the noise-free dataset, AdVerb outperforms all our baselines and maintains similar margins compared to the original noise-free dataset.
**Ablation on Noisy Dataset**: Table 4 illustrates the results of the ablation study on the noisy LibriSpeech dataset. The noise addition process is the same as before.
**Analysis On Visual Features:** To underline the importance of the visual cues, we show the activations of the network using Grad-CAM[68] in Fig. 5. Note that the network attends to the sides of the hallway or empty regions with al
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & **SE** & **SR** & **SV** \\
**Method** & **PESQ \(\uparrow\)** & **WER \(\downarrow\)** & **EER \(\downarrow\)** \\ \hline w/o Image & 2.03 & 4.68 & 3.81 \\ w/o ATML & 2.28 & 5.10 & 3.87 \\ w/o Geom. aware & 2.29 & 4.99 & 3.64 \\ w/o Window block & 2.34 & 4.43 & 3.43 \\ w/o Panoptic block & 2.39 & 4.37 & 3.51 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation on LibriSpeech noisy data. AdVerb performs considerably well on noisy data with the individual modules contributing to the overall gain.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & **SE** & **SR** & **SV** \\
**Method** & **PESQ \(\uparrow\)** & **WER \(\downarrow\)** & **EER \(\downarrow\)** \\ \hline Anechoic (Upper bound) & 4.72 & 2.89 & 1.53 \\ \hline Reverberant & 1.57 & 11.45 & 4.76 \\ MetricGAN+ & 2.29 & 8.92 & 4.89 \\ HiFi-GAN & 1.95 & 10.55 & 4.73 \\ WPE & 1.88 & 9.10 & 5.11 \\ SkipConvGAN & 2.06 & 7.28 & 4.94 \\ VIDA & 2.14 & 4.97 & 4.01 \\ \hline
**AdVerb (Ours)** & **2.52** & **4.20** & **3.46** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Result comparison of AdVerb with baseline methods on noise-added dataset splits for 3 speech tasks.
Table 2: Comparison of AdVerb with more audio-only approaches. AdVerb results in a relative gain of **14%-56%**. Percentages in brackets represent improvement over reverberant audio.
most or no sound absorbers, which lead to longer reverberation effects. Fig. 6 demonstrates some cases where our model attends to spurious regions.
### User Study For Subjective Evaluation
In addition to objective metric evaluation, we perform a subjective human listening study on a synthetic (generated using SoundSpaces) and an in-the-wild (AVSpeech) dataset over Amazon MTurk. We believe this can be a good measure to understand how realistic and aesthetically pleasing the output produced by our model is. Moreover, through this, we try to understand other aural artifacts not captured in an objective measure like PESQ. In our study, a total of 89 participants were presented with 8 sets of samples containing the reverberant speech, clean speech (not present for AVSpeech), and estimated dereverberant speech. Table 5 demonstrates that users find samples generated by our method better than the three other baselines VIDA [12], WPE [52] and SkipConvGAN [40], in both cases.
## 6 Conclusions and Future Works
In this paper, we present a novel audio-visual dereverberation framework. To this end, we introduce the GCA module with a specially designed position embedding scheme to capture the local and global spatial relations of the 3D environment. The experimental analysis demonstrates how modeling the visual information efficiently can lead to improved performance of such a system. We believe our work will encourage further research in this space. One limitation of our approach is that the efficacy of the method drops for non-panoramic images. Future work can aim towards finding more sophisticated ways of modeling the acoustic property of the environment and combining cross-modal information. Although our framework achieves highly satisfactory results at all difficulty levels on both simulated and real-world samples, we notice the performance of our model can be improved for situations with extreme reverb effects, and far away subjects. A potential use case of our work can be to leverage the properties of target visual scenes to provide immersive experiences to users in AR/VR applications. This work can also find applications in the audio/speech simulation domain.
Figure 5: Grad-CAM visualization of activated regions. Our model attends to regions that cause heavy reverberation effects.
Figure 6: Some failure cases. The \(\checkmark\) denotes regions with correct activation, while \(\times\) marks spurious detections.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Baseline** & **SoundSpaces (in \%)** & **AVSpeech (in \%)** \\
**Method** & **(A\% / B\% / C\%)** & **(A\% / B\% / C\%)** \\ \hline Clean Speech & **61.3** / 8.1 / 30.6 & – / – / – \\ VIDA[12] & 16.5 / 6.5 / **77.0** & 13.5 / 0.0 / **86.5** \\ WPE[52] & 8.8 / 3.5 / **87.7** & 3.7 / 7.4 / **88.9** \\ SCGAN[40] & 9.2 / 0.0 / **90.8** & 0.0 / 8.0 / **92.0** \\ \hline \hline \end{tabular}
\end{table}
Table 5: User study results. A% of participants find the baseline audio samples better, B% have no preference, and C% prefer AdVerb. |
2307.13833 | Computational Design of Anisotropic Stealthy Hyperuniform Composites
with Engineered Directional Scattering Properties | Disordered hyperuniform materials are an emerging class of exotic amorphous
states of matter that endow them with singular physical properties. Here, we
generalize the Fourier-space based numerical construction procedure for
designing {\it isotropic} disordered hyperuniform two-phase heterogeneous
materials (i.e., composites) developed by Chen and Torquato [Acta Mater. {\bf
142}, 152 (2018)] to {\it anisotropic} microstructures by explicitly
incorporating the {\it vector-dependent} spectral density function ${\tilde
\chi}_{_V}({\bf k})$ of {\it arbitrary form} that is realizable. We demonstrate
the utility of the procedure by generating a wide spectrum of {\it anisotropic}
stealthy hyperuniform (SHU) microstructures with ${\tilde \chi}_{_V}({\bf k}) =
0$ for ${\bf k} \in \Omega$. We show how different exclusion-region shapes with
various discrete symmetries and varying size affect the resulting statistically
anisotropic microstructures as a function of the exclusion-region size and phase volume fraction. We
find that, among other properties, the directional hyperuniform behaviors
imposed by the shape asymmetry (or anisotropy) of certain exclusion regions
give rise to distinct anisotropic structures and degree of uniformity in the
distribution of the phases on intermediate and large length scales along
different directions. Moreover, while the anisotropic exclusion regions impose
strong constraints on the {\it global} symmetry of the resulting media, they
can still possess almost isotropic {\it local} structures. Our construction
algorithm enables one to control the statistical anisotropy of composite
microstructures which is crucial to engineering directional optical, transport
and mechanical properties of two-phase composite media. | Wenlong Shi, David Keeney, Duyu Chen, Yang Jiao, Salvatore Torquato | 2023-07-25T22:05:14Z | http://arxiv.org/abs/2307.13833v1 | Computational Design of Anisotropic Stealthy Hyperuniform Composites with Engineered Directional Scattering Properties
###### Abstract
Disordered hyperuniform materials are an emerging class of exotic amorphous states of matter that are endowed with singular physical properties, including large isotropic photonic band gaps, superior resistance to fracture, and nearly optimal electrical and thermal transport properties, to name but a few. Here, we generalize the Fourier-space based numerical construction procedure for designing and generating digital realizations of _isotropic_ disordered hyperuniform two-phase heterogeneous materials (i.e., composites) developed by Chen and Torquato [Acta Mater. **142**, 152 (2018)] to _anisotropic_ microstructures with targeted spectral densities. Our generalized construction procedure explicitly incorporates the _vector-dependent_ spectral density function \(\tilde{\chi}_{V}(\mathbf{k})\) of _arbitrary form_ that is realizable. We demonstrate the utility of the procedure by generating a wide spectrum of _anisotropic_ stealthy hyperuniform (SHU) microstructures with \(\tilde{\chi}_{{}_{V}}(\mathbf{k})=0\) for \(\mathbf{k}\in\Omega\), i.e., complete suppression of scattering in an "exclusion" region \(\Omega\) around the origin in the Fourier space. We show how different exclusion-region shapes with various discrete symmetries, including circular-disk, elliptical-disk, square, rectangular, butterfly-shaped and lemniscate-shaped regions of varying size, affect the resulting statistically anisotropic microstructures as a function of the exclusion-region size and phase volume fraction. The latter two cases of \(\Omega\) lead to directionally hyperuniform composites, which are stealthy hyperuniform only along certain directions, and are non-hyperuniform along others. We find that, while the circular-disk exclusion regions give rise to isotropic hyperuniform composite microstructures, the directional hyperuniform behaviors imposed by the shape asymmetry (or anisotropy) of certain exclusion regions give rise to distinct anisotropic structures and degree of uniformity in the distribution of the phases on intermediate and large length scales along different directions. Moreover, while the anisotropic exclusion regions impose strong constraints on the _global_ symmetry of the resulting media, they can still possess almost isotropic _local_ structures. Both the isotropic and anisotropic hyperuniform microstructures associated with the elliptical-disk, square and rectangular \(\Omega\) possess phase-inversion symmetry over a certain range of volume fractions and a percolation threshold \(\phi_{c}\approx 0.5\). On the other hand, the directionally hyperuniform microstructures associated with the butterfly-shaped and lemniscate-shaped \(\Omega\) do not possess phase-inversion symmetry and percolate along certain directions at much lower volume fractions. We also apply our general procedure to construct stealthy non-hyperuniform systems. Our construction algorithm enables one to control the statistical anisotropy of composite microstructures via the shape, size and symmetries of \(\Omega\), which is crucial to engineering directional optical, transport and mechanical properties of two-phase composite media.
+
Footnote †: correspondence sent to: [email protected]
+
Footnote †: correspondence sent to: [email protected]
## I Introduction
Disordered hyperuniform materials are exotic states of matter [1; 2] that lie between a perfect crystal and a liquid. These systems are similar to liquids or glasses in that they are statistically isotropic and generally possess no Bragg peaks, and yet they completely suppress large-scale normalized density fluctuations like crystals and in this sense possess a hidden long-range order [1; 2; 3]. A hyperuniform many-particle system, disordered or ordered, is one in which the static structure factor \(S(\mathbf{k})\) vanishes in the infinite-wavelength (or zero-wavenumber) limit, i.e., \(\lim_{|\mathbf{k}|\to 0}S(\mathbf{k})=0\), where \(\mathbf{k}\) is the wavevector [1]. Here \(S(\mathbf{k})\) is defined as \(S(\mathbf{k})\equiv 1+\rho\tilde{h}(\mathbf{k})\), where \(\tilde{h}(\mathbf{k})\) is the Fourier transform of the total correlation function \(h(\mathbf{r})=g_{2}(\mathbf{r})-1\), \(g_{2}(\mathbf{r})\) is the pair correlation function, and \(\rho\) is the number density of the system. Note that this definition implies that the forward scattering contribution to the diffraction pattern is omitted. Equivalently, a hyperuniform point pattern is one in which the local number variance \(\sigma_{N}^{2}(R)\) associated with a spherical observation window of radius \(R\) grows more slowly than the window volume in the large-\(R\) limit [1; 2].
The concept of hyperuniformity was subsequently generalized by Torquato and co-workers to two-phase het
erogeneous materials [3] and random scalar, vector and tensor fields [4]. For two-phase heterogeneous materials (i.e., composites), the focus of this work, the quantity of interest is the spectral density function \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\), which is the Fourier transform of the auto-covariance function \(\chi_{{}_{V}}(\mathbf{r})=S_{2}(\mathbf{r})-\phi^{2}\), where \(S_{2}(\mathbf{r})\) and \(\phi\) are respectively the two-point correlation function and volume fraction for the phase of interest in the composite (see Sec. II for detailed discussions) [3; 5; 6]. A hyperuniform heterogeneous two-phase material possesses a vanishing spectral density function in the zero-wavenumber limit, i.e., \(\lim_{|\mathbf{k}|\to 0}\tilde{\chi}_{{}_{V}}(\mathbf{k})=0\). The spectral density \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) is trivially proportional to the scattering intensity [5], which indicates that the scattering of a disordered hyperuniform composite is completely suppressed in the infinite-wavelength limit. Equivalently, a hyperuniform heterogeneous medium, disordered or not, possesses a local volume fraction variance \(\sigma_{{}_{V}}^{2}(R)\) that decreases _more rapidly_ than \(R^{-d}\) for large \(R\), i.e., \(\lim_{R\to\infty}\sigma_{{}_{V}}^{2}(R)\cdot R^{d}=0\), where \(R\) is the radius of spherical observation windows used to compute \(\sigma_{{}_{V}}^{2}(R)\). This behavior is to be contrasted with that of typical disordered two-phase media for which the variance decays as \(R^{-d}\).
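For digitized (pixelized) media, the spectral density entering these definitions can be estimated directly from the phase indicator function. A minimal sketch follows (our illustration; normalization conventions vary):

```python
# A sketch of estimating the spectral density of a digitized 2D two-phase
# medium: subtract the volume fraction from the phase indicator and take the
# squared modulus of its FFT (per-pixel normalization chosen here).
import numpy as np

def spectral_density(indicator: np.ndarray) -> np.ndarray:
    """indicator: binary (0/1) pixel array of the phase of interest."""
    phi = indicator.mean()                       # phase volume fraction
    fluct = indicator - phi                      # removes the k = 0 forward peak
    spec = np.abs(np.fft.fft2(fluct)) ** 2 / indicator.size
    return np.fft.fftshift(spec)                 # place k = 0 at the center

# Hyperuniformity: spec -> 0 as |k| -> 0; stealthiness: spec = 0 for all
# wavevectors k inside an exclusion region Omega around the origin.
```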
A variety of exotic correlated disordered systems, which can be in both equilibrium and non-equilibrium settings, and come in both quantum-mechanical and classical varieties, are known to be hyperuniform. Examples include the density fluctuations in early universe [7], disordered jammed packing of hard particles [8; 9; 10; 11], certain exotic classical ground states of many-particle systems [12; 13; 14; 15; 16; 17; 18; 19], jammed colloidal systems [20; 21; 22; 23], driven non-equilibrium systems [24; 25; 26; 27; 28], certain quantum ground states [29; 30], avian photoreceptor patterns [31], organization of adapted immune systems [32], amorphous silicon [33; 34], a wide class of disordered cellular materials [35], dynamic random organizing systems [36; 37; 38; 39; 40], electron density distributions [41; 42], vortex distribution in superconductors [43; 44], certain medium/high-entropy alloys [45], disordered 2D materials [46; 47; 48; 49], amorphous carbon nano-tubes [51], and certain metallic glasses [52]. The readers are referred to the review article by Torquato [2] for further details about disordered hyperuniform states of matter.
The unique characteristics of disordered hyperuniform materials, i.e., the combination of both disordered liquid-like structures on small-scales and crystal-like hidden order on large scales, endow them with many superior physical properties, including wave propagation characteristics [53; 54; 55; 56; 57; 58; 59; 60; 61], diffusion and electrical properties [62; 63; 64], mechanical properties [65; 66] as well as optimal multifunctional characteristics [67; 68; 69]. A particularly important discovery was the demonstration that disordered "stealthy" hyperuniform materials possess large, complete and isotropic photonic band gaps, blocking all directions and polarizations [53; 54], which had been thought not to be possible. Such exotic disordered photonic band gap materials enable waveguide geometries that have advantages over their periodic counterparts [70], which have also important ramifications for electronic and phononic device applications [71]. Subsequently, it was shown that disordered stealthy hyperuniform materials can be made fully transparent to electromagnetic waves [72; 73; 74]. These discoveries set a new paradigm for engineered disorder in photonic metamaterials [75; 76] and optical applications [77; 78; 79]. Designer disordered hyperuniform materials have also been successfully fabricated or synthesized using different techniques [80; 81]. A recent review article [82] describes engineered metamaterials for photonic applications with an emphasis on disordered hyperuniform materials.
An inverse problem of great importance is the generation of realizations of heterogeneous two-phase materials with a prescribed set of statistical descriptors, which is usually referred to as the material _construction_ problem [5; 83; 84]. To this end, a variety of numerical construction methods have been developed, including Gaussian random field method [85; 86], phase recovery method [87; 88; 89], multi-point statistics [90; 91; 92; 93], and machine-learning based methods [94; 95; 96; 97; 98; 99] to name but a few.
One of the most widely used construction methods is a procedure devised by Yeong and Torquato, who formulated the construction as an energy minimization problem and solved it using stochastic optimization [83; 84]. In particular, an energy is defined as the sum of the squared differences between a prescribed set of statistical microstructural descriptors and the corresponding descriptors computed from a trial microstructure. The simulated annealing method [100] is subsequently employed to evolve the trial microstructure in order to minimize the energy. The statistical descriptors, such as various correlation functions [5], can be obtained either by sampling from microstructure images or from theoretical considerations. In the former case, the problem is typically referred to as reconstruction. In the latter instance, the target functions need to satisfy the necessary _realizability_ conditions in order to achieve a successful construction [101]. The Yeong-Torquato procedure has been employed to incorporate a variety of statistical descriptors [102; 103; 104; 105; 106] and applied to a wide spectrum of heterogeneous two-phase material systems [107; 108; 109; 110; 111; 112; 113; 114; 115; 116; 117].

Figure 1: An example of a directionally hyperuniform system [4]. Left panel: a directionally hyperuniform scattering pattern in which the exclusion region \(\Omega\) is a lemniscate shape around the origin where the scattering intensity is exactly zero (darkest shade). This “stealthy” pattern clearly shows that hyperuniformity depends on the direction in which the origin \(\mathbf{k}=\mathbf{0}\) is approached. Right panel: a statistically anisotropic point configuration associated with the scattering pattern.
Very recently, Chen and Torquato [118] further generalized the Yeong-Torquato procedure to generate microstructures of _isotropic_ disordered hyperuniform heterogeneous two-phase materials from realizable _angular-averaged_ spectral density functions \(\tilde{\chi}_{{}_{V}}(|\mathbf{k}|)\), including both stealthy and non-stealthy hyperuniform ones. Importantly, this allowed them to design disordered stealthy hyperuniform dispersions that possess nearly optimal effective conductivity, which are also transparent to electromagnetic radiation for certain wavelengths and can be readily fabricated experimentally using 3D printing and lithographic technologies.
Hyperuniformity has recently been generalized to treat structurally anisotropic systems [4], which necessitates the introduction of the concept of "directional hyperuniformity", i.e., hyperuniformity along certain directions in Fourier space and non-hyperuniformity along other directions. Figure 1 shows a directionally (anisotropic) hyperuniform scattering pattern (left panel) possessing a lemniscate-shaped region around the origin in which the scattering intensity is exactly zero (darkest shade), which was originally studied in Ref. [4]. In this example, hyperuniformity clearly depends on the direction in which the origin \(\mathbf{k=0}\) is approached. Specifically, the pattern is stealthy hyperuniform along the horizontal direction (i.e., \(S(\mathbf{k})=0\) for \(k_{x}<K\) and \(k_{y}=0\), see definition below), and non-hyperuniform along the vertical direction [2]. The right panel shows a statistically anisotropic point configuration that corresponds to the scattering pattern [4]. It is seen that the points are arranged in "wavy" chains along the horizontal direction, and no such "wavy" patterns are observed along the vertical direction. Other examples of directionally hyperuniform point patterns include exotic ground states of directional pair potentials whose Fourier representations are non-zero on compact sets around the origin and zero everywhere else [119]. Such anisotropic hyperuniform systems have important implications for the design of waveguides with engineered direction-dependent performance [120]. To the best of our knowledge, no systematic investigation of directionally hyperuniform heterogeneous two-phase materials has been carried out.
Here, we generalize the Fourier-space based numerical construction procedure developed by Chen and Torquato [118] to generate digital realizations of _anisotropic_ disordered hyperuniform heterogeneous two-phase material microstructures with designed spectral densities. Controlling the statistical anisotropy of composite microstructures via the shape, size and discrete symmetries of \(\Omega\) is crucial to engineering directional optical, transport and mechanical properties of the two-phase media for a wide spectrum of applications [120; 55], including achieving exotic anisotropic dispersion relations for electromagnetic and acoustic wave propagation [121; 122]. Our generalized construction procedure explicitly incorporates the vector-dependent spectral density function \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\). We demonstrate the utility of the procedure by applying it to render a wide spectrum of anisotropic _stealthy hyperuniform_ (SHU) composites [123], which possess a spectral density
\[\tilde{\chi}_{{}_{V}}(\mathbf{k})=0,\quad\text{for}\quad\mathbf{k}\in\Omega, \tag{1}\]
where \(\Omega\) is an "exclusion" region [124] around the origin in \(\mathbf{k}\)-space in which scattering is completely suppressed, excluding forward scattering at \(\mathbf{k=0}\). Thus, SHU composites anomalously suppress local volume fraction fluctuations from intermediate to infinite wavelengths. We systematically investigate the effects of the shape anisotropy and size of the exclusion region \(\Omega\) with certain discrete symmetries on the resulting composite microstructure by constructing corresponding statistically anisotropic realizations as a function of phase volume fraction. We begin by investigating SHU media with circular-disk regions and then consider anisotropic shapes, namely, elliptical-disk, square, and rectangular regions. Moreover, we study directionally hyperuniform composites associated with butterfly-shaped and lemniscate-shaped \(\Omega\) regions, which are stealthy and hyperuniform only along certain directions and non-hyperuniform along others. These selected representative exclusion-region shapes are schematically illustrated in Fig. 2.

Figure 2: Illustration of the Fourier-space exclusion regions \(\Omega\) investigated in this work. From left to right: (a) circular-disk, (b) elliptical-disk, (c) square, (d) rectangular, (e) butterfly-shaped and (f) lemniscate-shaped regions.
We find that while circular-disk exclusion regions give rise to isotropic hyperuniform structures on both global and local scales, the shape asymmetry (or anisotropy) of certain exclusion regions can produce distinct anisotropic structures and degrees of uniformity in the distribution of the phases on intermediate and large length scales along different directions, leading to directionally hyperuniform behaviors. Moreover, while the anisotropic exclusion regions impose strong constraints on the _global_ symmetry of the resulting media, the media can still possess almost isotropic _local_ structures. Both the isotropic and anisotropic hyperuniform microstructures associated with the elliptical-disk, square and rectangular \(\Omega\) possess phase-inversion symmetry over a certain range of volume fractions and a percolation threshold \(\phi_{c}\approx 0.5\). On the other hand, the directionally hyperuniform microstructures associated with the butterfly-shaped and lemniscate-shaped \(\Omega\) do not possess phase-inversion symmetry and percolate along certain directions at much lower volume fractions.
It is noteworthy that our present results, together with theoretical predictions for effective properties that depend on the spectral density, enable the inverse design of heterogeneous two-phase materials with desirable properties. Such predictive formulations include not only nonlocal theories for the effective dynamic dielectric constant [125; 73] that accurately account for multiple scattering but also the spreadability for time-dependent diffusive transport behaviors [63; 126]. Such theories allow one to achieve desirable anisotropic composite properties by tuning the microstructure to have a targeted spectral density \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) [127; 128; 129]. Once an optimized \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) is obtained, our procedure enables one to render realizations of the designed microstructure, which can subsequently be fabricated experimentally using, e.g., 3D printing techniques.
The rest of the paper is organized as follows: In Sec. II, we provide definitions of the correlation functions, the spectral density and hyperuniformity in heterogeneous two-phase material systems. In Sec. III, we discuss the numerical construction procedure in detail, including its mathematical formulation as a constrained optimization problem and its solution via the stochastic simulated annealing method. In Sec. IV, we present constructions of realizations of a variety of disordered SHU composites with prescribed exclusion-region shapes, sizes and symmetries as a function of phase volume fraction. In Sec. V, we provide concluding remarks and an outlook on future work. The effects of large exclusion regions and the application of our general method to non-hyperuniform stealthy composites, along with other supporting information and results, are presented in the Appendix.
## II Definitions
### Correlation Functions
Consider a two-phase random heterogeneous material (i.e., a medium or composite), which is a sub-domain of \(d\)-dimensional Euclidean space, i.e., \(\mathcal{V}\subseteq\mathbb{R}^{d}\), of volume \(V\leq+\infty\), composed of two regions \(\mathcal{V}=\mathcal{V}_{1}\cup\mathcal{V}_{2}\): \(\mathcal{V}_{1}\) is the phase 1 region of volume \(V_{1}\) and volume fraction \(\phi_{1}=V_{1}/V\), and \(\mathcal{V}_{2}\) is the phase 2 region of volume \(V_{2}\) and volume fraction \(\phi_{2}=V_{2}/V\). In the infinite-volume limit \(V\rightarrow\infty\), \(V_{i}\) (\(i=1,2\)) also increases proportionally such that the ratio \(V_{i}/V\) (i.e., the volume fraction \(\phi_{i}\)) tends to a well-defined constant. The statistical properties of each phase \(i\) of the system are specified by the countably infinite set of \(n\)_-point correlation functions_ \(S_{n}^{(i)}\), which are defined by [5]:
\[S_{n}^{(i)}(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})=\left\langle\prod_{j=1}^{n}I^{(i)}(\mathbf{x}_{j})\right\rangle, \tag{2}\]
where \(I^{(i)}(\mathbf{x})\) is the indicator function for phase \(i\), i.e.,
\[I^{(i)}(\mathbf{x})=\begin{cases}1,&\text{if }\mathbf{x}\in\mathcal{V}_{i}\\ 0,&\text{otherwise}.\end{cases} \tag{3}\]
The function \(S_{n}^{(i)}(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\) can also be interpreted as the probability of randomly throwing down \(n\) points at positions \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) and having all of the points fall into the same phase \(i\). Henceforth, we consider statistically homogeneous media, i.e., media with no preferred origin, for which \(S_{n}^{(i)}\) only depends on the relative displacements between the points [5]. The one-point function is then simply the volume fraction of phase \(i\), \(\phi_{i}\), which is a constant, i.e.,
\[S_{1}^{(i)}(\mathbf{x}_{1})=\phi_{i}, \tag{4}\]
and the two-point function \(S_{2}^{(i)}(\mathbf{r})\) depends only on the relative displacement \(\mathbf{r}=\mathbf{x}_{2}-\mathbf{x}_{1}\). For media without long-range order, \(S_{2}(\mathbf{r})\) possesses the following asymptotic behavior:
\[\lim_{|\mathbf{r}|\rightarrow\infty}S_{2}(\mathbf{r})=\phi^{2}, \tag{5}\]
where we henceforth drop indicating phase \(i\) and simply refer to the phase of interest.
Upon subtracting the long-range behavior from \(S_{2}\), one obtains the _autocovariance function_, i.e.,
\[\chi_{{}_{V}}(\mathbf{r})=S_{2}(\mathbf{r})-\phi^{2} \tag{6}\]
which is generally an \(L^{2}\)-function. The associated spectral density function \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) is given by
\[\tilde{\chi}_{{}_{V}}(\mathbf{k})=\int_{\mathbb{R}^{d}}\chi_{{}_{V}}(\mathbf{r })e^{-i\mathbf{k}\cdot\mathbf{r}}d\mathbf{r}, \tag{7}\]
which is the Fourier transform of \(\chi_{{}_{V}}(\mathbf{r})\) and is obtainable from scattering intensity measurements [131].
The autocovariance function obeys the bounds [101]:
\[-\text{min}\{(1-\phi)^{2},\phi^{2}\}\leq\chi_{{}_{V}}(\mathbf{r})\leq(1-\phi)\phi, \tag{8}\]
where \(\phi\) is the volume fraction of the reference phase. We remark that it is an open problem to identify additional necessary and sufficient conditions that the autocovariance function must satisfy in order to correspond to a binary stochastic process.
### Hyperuniform Random Heterogeneous Two-Phase Materials
For a two-phase heterogeneous material, the quantity of interest is the local volume fraction variance \(\sigma_{{}_{V}}^{2}(R)\), which was first introduced in Ref. [130]:
\[\sigma_{{}_{V}}^{2}(R)=\frac{1}{v_{1}(R)}\int_{\mathbb{R}^{d}}\chi_{{}_{V}}(\mathbf{r})\alpha_{2}(r;R)d\mathbf{r}, \tag{9}\]
where \(\alpha_{2}(r;R)\) is the scaled intersection volume, i.e., the intersection volume of two spherical windows of radius \(R\) whose centers are separated by a distance \(r\), divided by the volume \(v_{1}(R)\) of the window, i.e.,
\[v_{1}(R)=\frac{\pi^{d/2}R^{d}}{\Gamma(1+d/2)}. \tag{10}\]
A hyperuniform random medium is one whose \(\sigma_{{}_{V}}^{2}(R)\) decays more rapidly than \(R^{-d}\) for large \(R\), i.e.,
\[\lim_{R\rightarrow\infty}\sigma_{{}_{V}}^{2}(R)\cdot R^{d}=0. \tag{11}\]
This behavior is to be contrasted with those of typical disordered two-phase media for which the variance decays as \(R^{-d}\), i.e., as the inverse of the window volume \(v_{1}(R)\).
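To make the variance criterion concrete, the following minimal Python sketch estimates \(\sigma_{{}_{V}}^{2}(R)\) for a periodic two-dimensional digital realization by Monte Carlo sampling of circular windows. The function name and the assumption of a square 0/1 array `img` are ours for illustration, not part of any standard library.

```python
import numpy as np

def local_volume_fraction_variance(img, R, n_windows=2000, rng=None):
    """Monte Carlo estimate of sigma_V^2(R) for a periodic 2D binary image.

    A circular window of radius R (in pixels) is dropped at n_windows
    random centers; the variance of the phase volume fraction measured
    inside the window is returned.
    """
    rng = np.random.default_rng(rng)
    L = img.shape[0]
    # Offsets of all pixels inside a circular window of radius R
    dx, dy = np.mgrid[-int(R):int(R) + 1, -int(R):int(R) + 1]
    inside = dx**2 + dy**2 <= R**2
    dx, dy = dx[inside], dy[inside]
    fractions = np.empty(n_windows)
    for i in range(n_windows):
        cx, cy = rng.integers(0, L, size=2)
        # Periodic boundary conditions via modular indexing
        fractions[i] = img[(cx + dx) % L, (cy + dy) % L].mean()
    return fractions.var()
```

Hyperuniformity of a realization can then be probed by checking whether \(R^{2}\sigma_{{}_{V}}^{2}(R)\) decays toward zero as \(R\) grows (for \(d=2\)).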
The hyperuniform condition is equivalently given by
\[\lim_{|\mathbf{k}|\to 0}\tilde{\chi}_{{}_{V}}(\mathbf{k})=0, \tag{12}\]
which implies that the direct-space autocovariance function \(\chi_{{}_{V}}(\mathbf{r})\) exhibits both positive and negative correlations such that its volume integral over all space is exactly zero [123], i.e.,
\[\int_{\mathbb{R}^{d}}\chi_{{}_{V}}(\mathbf{r})d\mathbf{r}=0, \tag{13}\]
which is a direct-space sum rule for hyperuniformity.
For hyperuniform two-phase media whose spectral density goes to zero as a power-law scaling as \(|\mathbf{k}|\) tends to zero [4], i.e.,
\[\tilde{\chi}_{{}_{V}}(\mathbf{k})\sim|\mathbf{k}|^{\alpha} \tag{14}\]
there are three different scaling regimes (classes) that describe the associated large-\(R\) behaviors of the local volume fraction variance:
\[\sigma_{{}_{V}}^{2}(R)\sim\begin{cases}R^{-(d+1)},&\alpha>1\quad\text{(Class I)}\\ R^{-(d+1)}\ln R,\quad\alpha=1\quad\text{(Class II)}\\ R^{-(d+\alpha)},\quad 0<\alpha<1\quad\text{(Class III)}.\end{cases} \tag{15}\]
Classes I and III are the strongest and weakest forms of hyperuniformity, respectively. Class I media include all crystal structures [1], many quasicrystal structures [133] and exotic disordered media [3; 118]. Examples of Class II systems include some quasicrystal structures [133], perfect glasses [134], and maximally random jammed packings [135; 136; 8; 9; 10]. Examples of Class III systems include classical disordered ground states [137], random organization models [24], perfect glasses [134], and perturbed lattices [138]; see Ref. [2] for a more comprehensive list of systems that fall into the three hyperuniformity classes. SHU media, the focus of this study, are also of class I. Known examples of such media are periodic packings of spheres as well as unusual disordered sphere packings derived from stealthy point patterns for which \(\Omega\) is a spherical/circular-disk region [123].
## III Methods
### Mathematical Formulation
In the _construction_ problem, one aims to find a digital realization (e.g., represented by a binary-valued matrix with entries equal to 0 or 1) associated with a prescribed set of statistical descriptors. Here, we focus on the construction of a disordered SHU two-phase medium with a prescribed spectral density \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\), if realizable. We consider a digitized medium in a square domain of side length \(L\) in \(\mathbb{R}^{2}\) with periodic boundary conditions. We note that our construction formulation and procedure can be readily generalized to three-dimensional systems. In this case, the indicator function \(I(\mathbf{r})\) also takes a discrete form, i.e., \(I(\mathbf{r})\) is only defined on a discrete set of \(\mathbf{r}=n_{1}\mathbf{e}_{1}+n_{2}\mathbf{e}_{2}\), where \(\mathbf{e}_{i}\) are unit vectors along the orthogonal directions and \(n_{1},n_{2}=0,1,\ldots,N-1\) with \(N\) being the system size or "resolution" (i.e., the number of pixels along each dimension). The Fourier-space wavevectors for the system also take discrete values, i.e., \(\mathbf{k}=(2\pi/L)(n_{1}\mathbf{e}_{1}+n_{2}\mathbf{e}_{2})\). The corresponding spectral density of the digital realization is given by
\[\tilde{\chi}_{{}_{V}}(\mathbf{k})=\frac{1}{L^{2}}\tilde{m}^{2}(\mathbf{k})| \tilde{J}(\mathbf{k})|^{2} \tag{16}\]
where \(\tilde{J}(\mathbf{k})\) is the _generalized collective coordinate_[2; 118] defined as
\[\tilde{J}(\mathbf{k})=\sum_{\mathbf{r}}\exp(i\mathbf{k}\cdot\mathbf{r})J(\mathbf{r}), \tag{17}\]
where the sum is over all pixel centers \(\mathbf{r}\), and
\[J(\mathbf{r})=I(\mathbf{r})-\phi. \tag{18}\]
The quantity \(\tilde{m}(\mathbf{k})\) is the Fourier transform of the shape function (or indicator function) \(m(\mathbf{r})\) of a square pixel which is given by
\[\tilde{m}(\mathbf{k})=\begin{cases}\frac{\sin(k_{x}/2)}{k_{x}/2}\frac{\sin(k_{y}/2)}{k_{y}/2},&k_{x}\neq 0,k_{y}\neq 0\\ \frac{\sin(k_{x}/2)}{k_{x}/2},&k_{x}\neq 0,k_{y}=0\\ \frac{\sin(k_{y}/2)}{k_{y}/2},&k_{x}=0,k_{y}\neq 0\\ 1,&k_{x}=0,k_{y}=0,\end{cases} \tag{19}\]
where \(k_{x}\) and \(k_{y}\) are the components of the discrete wavevector \(\mathbf{k}\).
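As an illustration of Eqs. (16)-(19), the following hedged Python sketch computes the spectral density of a periodic digital realization via a fast Fourier transform. Here `img` is assumed to be a square 0/1 array, and the identity \(\sin(k/2)/(k/2)=\mathrm{sinc}(k/2\pi)\) (with NumPy's normalized `sinc`) is used for the pixel shape factor.

```python
import numpy as np

def spectral_density(img):
    """Spectral density chi_V(k) of a periodic 2D digital realization,
    following Eqs. (16)-(19): chi_V(k) = m^2(k) |J(k)|^2 / L^2,
    with J(r) = I(r) - phi and m(k) the square-pixel shape factor.
    """
    L = img.shape[0]
    phi = img.mean()
    J = img - phi
    # Generalized collective coordinates J(k) on the discrete k-grid
    Jk = np.fft.fft2(J)
    # Discrete wavevector components k = (2*pi/L) * n
    k = 2.0 * np.pi * np.fft.fftfreq(L)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Pixel shape factor: sin(k/2)/(k/2) = np.sinc(k / (2*pi)), Eq. (19)
    m = np.sinc(kx / (2.0 * np.pi)) * np.sinc(ky / (2.0 * np.pi))
    return (m**2) * np.abs(Jk)**2 / L**2
```

Since only \(|\tilde{J}(\mathbf{k})|^{2}\) enters Eq. (16), the result is insensitive to the sign convention of the transform, and `np.fft.fft2` can be used directly.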
Here we are interested in disordered SHU systems, which are characterized by \(\tilde{\chi}_{{}_{V}}(\mathbf{k})=0\) for wavevectors in the exclusion region, i.e., \(\mathbf{k}\in\Omega\). This directly imposes a set of constraints on the discrete indicator function \(I(\mathbf{r})\) through Eqs. (16) to (18), i.e.,
\[\tilde{m}^{2}(\mathbf{k})\Big{|}\sum_{\mathbf{r}}\exp(i\mathbf{k}\cdot\mathbf{r})[I(\mathbf{r})-\phi]\Big{|}^{2}=0 \tag{20}\]
for \(\mathbf{k}\in\Omega\). We note that Eq. (20) represents a set of \(N_{\Omega}\) nonlinear equations in \(I(\mathbf{r})\) (and equivalently in \(J(\mathbf{r})\)), where \(N_{\Omega}\) is the number of _independent_ \(\mathbf{k}\) points in \(\Omega\). Due to the symmetry \(\tilde{\chi}_{{}_{V}}(\mathbf{k})=\tilde{\chi}_{{}_{V}}(-\mathbf{k})\) of the spectral density function [118], only half of the \(\mathbf{k}\) points in \(\Omega\) are independent and thus \(N_{\Omega}\sim\frac{1}{2}\mathrm{Vol}(\Omega)\). For a digital realization of linear size \(L\), the total number of pixels is \(N=L^{2}\). The number of unknowns in Eq. (20), i.e., the value of each pixel, is also \(N\). We are interested in the cases \(N_{\Omega}<N\), i.e., where the number of constraints (equations) is much smaller than the number of unknowns. Therefore, Eq. (20) does not have a unique solution, and we employ a stochastic optimization method to solve it iteratively.
As with stealthy point configurations [13; 2], it is also convenient to introduce a ratio \(\chi\) between the number of constraints (equations) and the number of unknowns (degrees of freedom):
\[\chi=N_{\Omega}/(N-2) \tag{21}\]
which is the fraction of constrained degrees of freedom in the system. The 2 degrees of freedom associated with the trivial overall translations of the entire system are subtracted in Eq. (21). In the case of point configurations, it has been shown that increasing \(\chi\) leads to an increased degree of order in SHU many-particle systems [139; 2]. Increasing \(\chi\) is expected to require the suppression of local volume fraction fluctuations on a broader range of length scales, which leads to microstructures with very fine and uniformly dispersed phase morphologies.
### Simulated Annealing Procedure
We employ the simulated annealing procedure [100], which has been widely used in composite construction problems [83; 84; 101; 102; 103; 104; 110], to solve Eq. (20). In particular, the construction problem is formulated as an "energy" minimization problem, with the energy functional \(E\) defined as follows:
\[E=\sum_{\mathbf{k}\in\Omega_{I}}\left[\tilde{\chi}_{{}_{V}}(\mathbf{k})- \tilde{\chi}_{{}_{V}}^{0}(\mathbf{k})\right]^{2}, \tag{22}\]
where \(\tilde{\chi}_{{}_{V}}^{0}(\mathbf{k})\) is the target spectral density function, \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) is the corresponding function associated with a trial microstructure, and \(\Omega_{I}\) is the set of _independent_ \(\mathbf{k}\) points (due to the symmetry of \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\)), which is defined as
\[\Omega_{I}=\{\mathbf{k}\in\Omega:\ k_{x}>0\ \text{or}\ (k_{x}=0\ \text{and}\ k_{y}>0)\}. \tag{23}\]
For stealthy systems, we have \(\tilde{\chi}_{{}_{V}}^{0}(\mathbf{k})=0\) for \(\mathbf{k}\in\Omega\). Thus, Eq. (22) simply reduces to
\[E=\sum_{\mathbf{k}\in\Omega_{I}}\left[\tilde{\chi}_{{}_{V}}(\mathbf{k})\right] ^{2}. \tag{24}\]
We note that our procedure can be employed to generate realizations with arbitrary \(\tilde{\chi}_{{}_{V}}^{0}(\mathbf{k})\), which allows us to engineer anisotropic scattering properties for these composites.
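For concreteness, a minimal sketch of Eqs. (23) and (24) is given below, assuming the exclusion region is supplied as a boolean mask on the discrete \(\mathbf{k}\)-grid (constructed as in Sec. IV below); the function names are ours.

```python
import numpy as np

def independent_mask(mask):
    """Restrict an exclusion-region mask to the independent half Omega_I of
    Eq. (23): k_x > 0, or k_x = 0 and k_y > 0; the remaining half follows
    from the symmetry of the spectral density under k -> -k."""
    L = mask.shape[0]
    n = np.fft.fftfreq(L) * L                  # integer indices n, so k = (2*pi/L)*n
    nx, ny = np.meshgrid(n, n, indexing="ij")
    return mask & ((nx > 0) | ((nx == 0) & (ny > 0)))

def stealthy_energy(chi_k, mask_I):
    """Energy of Eq. (24): sum of the squared spectral density over Omega_I."""
    return np.sum(chi_k[mask_I] ** 2)
```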
The simulated annealing procedure is then employed to solve the aforementioned minimization problem. Specifically, one starts from an initial trial configuration (i.e., the old realization), which contains a fixed number of pixels of each phase consistent with the volume fraction of that phase, with energy \(E_{old}\). Two randomly selected pixels associated with different phases are then exchanged to generate a new trial microstructure. The relevant correlation functions are sampled from the new trial configuration and the associated energy \(E_{new}\) is evaluated, which determines whether the new trial configuration is accepted or not via the probability [100]:
\[p_{acc}=\min\{\exp(-\Delta E/T),1\}, \tag{25}\]
where \(\Delta E=E_{new}-E_{old}\) is the energy difference between the new and old trial configurations and \(T\) is a virtual temperature that is chosen to be initially high and slowly decreased according to a cooling schedule. An appropriate cooling schedule reduces the chances that the system gets stuck in a shallow local energy minimum. In practice, a geometric schedule \(T(n)=\gamma^{n}T_{0}\) is usually employed, where \(T_{0}\) is the initial temperature, \(n\) is the cooling stage and \(\gamma\in(0,1)\) is the cooling factor (\(\gamma=0.99\) is used here). The simulation is terminated when \(E\) is smaller than a prescribed tolerance (e.g., \(10^{-6}\) in this case).
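The following Python skeleton sketches this annealing loop under stated assumptions: `energy` is any callable returning \(E\) of Eq. (24) for a binary image (e.g., built from the spectral-density routine above), pixel swaps preserve the prescribed volume fraction, and the stage lengths and initial temperature are illustrative choices rather than values from this work.

```python
import numpy as np

def anneal(img, energy, t0=1.0, gamma=0.99, moves_per_stage=10_000,
           n_stages=500, tol=1e-6, rng=None):
    """Simulated-annealing skeleton for the construction problem:
    phase-preserving pixel swaps accepted with the Metropolis
    probability of Eq. (25) under the cooling schedule T(n) = gamma**n * t0."""
    rng = np.random.default_rng(rng)
    ones = np.argwhere(img == 1)       # coordinates of phase-1 pixels
    zeros = np.argwhere(img == 0)      # coordinates of phase-2 pixels
    e_old = energy(img)
    for stage in range(n_stages):
        T = t0 * gamma ** stage
        for _ in range(moves_per_stage):
            i, j = rng.integers(len(ones)), rng.integers(len(zeros))
            p1, p0 = tuple(ones[i]), tuple(zeros[j])
            img[p1], img[p0] = 0, 1                    # trial swap
            e_new = energy(img)
            dE = e_new - e_old
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                ones[i], zeros[j] = zeros[j].copy(), ones[i].copy()
                e_old = e_new                          # accept the move
            else:
                img[p1], img[p0] = 1, 0                # reject: undo the swap
        if e_old < tol:
            break                                      # converged
    return img
```

Recomputing the energy from scratch after every swap costs a full transform per move; the incremental update described in the next paragraph reduces this to \(O(N_{\Omega})\) operations per move.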
Generally, a large number of trial configurations (\(\sim 10^{7}\)) need to be searched to generate a successful construction. Therefore, highly efficient methods [118] are used that enable one to rapidly obtain the spectral density function of a new configuration by updating the corresponding function associated with the old configuration, instead of completely re-computing the function from scratch. In particular, the generalized collective coordinate \(\tilde{J}(\mathbf{k})\) is tracked during the construction process. At the beginning of the simulation, \(\tilde{J}(\mathbf{k})\) of the initial configuration is computed from scratch and the values for all \(\mathbf{k}\) are stored. After each pixel exchange, since only a single pixel of the phase of interest changes its position from \(\mathbf{r}_{old}\) to \(\mathbf{r}_{new}\), we can obtain the updated \(\tilde{J}(\mathbf{k})\) values by only explicitly computing the contributions from this changed pixel, i.e.,

\[\tilde{J}(\mathbf{k})\leftarrow\tilde{J}(\mathbf{k})+\delta\tilde{J}_{new}(\mathbf{k})-\delta\tilde{J}_{old}(\mathbf{k}), \tag{26}\]

where

\[\delta\tilde{J}_{new}(\mathbf{k})=\exp(i\mathbf{k}\cdot\mathbf{r}_{new}),\quad\delta\tilde{J}_{old}(\mathbf{k})=\exp(i\mathbf{k}\cdot\mathbf{r}_{old}). \tag{27}\]

Once the updated \(\tilde{J}(\mathbf{k})\) is obtained, the updated \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) can be immediately computed using Eq. (16).
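A sketch of this \(O(N_{\Omega})\) update, restricted to the constrained wavevectors, might look as follows; the array names are ours, and `k_omega` is assumed to hold the wavevectors of \(\Omega_{I}\) as an \((N_{\Omega},2)\) array.

```python
import numpy as np

def update_Jk(Jk, k_omega, r_old, r_new):
    """Incremental update of the collective coordinates after a single
    pixel of the phase of interest moves from r_old to r_new,
    following Eqs. (26) and (27)."""
    # delta-contributions of the moved pixel, Eq. (27)
    d_new = np.exp(1j * (k_omega @ np.asarray(r_new, dtype=float)))
    d_old = np.exp(1j * (k_omega @ np.asarray(r_old, dtype=float)))
    return Jk + d_new - d_old      # Eq. (26)
```

The updated spectral density on \(\Omega_{I}\) then follows from Eq. (16) as \(\tilde{m}^{2}(\mathbf{k})|\tilde{J}(\mathbf{k})|^{2}/L^{2}\), at a cost proportional to \(N_{\Omega}\) rather than to the full system size.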
To enhance the convergence of the construction, we also employ a surface optimization technique [103]: towards the end of the construction process, instead of randomly selecting pixels throughout the system with equal probability, pixels that are isolated or on the surface of connected regions of the phase of interest are assigned larger probabilities of being selected than those in the interior of the phase regions. Interestingly, as we will show in Sec. IV, even with surface optimization the construction renders realizations containing dispersed pixels or small clusters of pixels for large \(\chi\) values.
In our subsequent constructions, we mainly consider realizations in a square domain with \(L=300\) pixels and periodic boundary conditions. We have also investigated smaller and larger system sizes, including \(L=150\) and \(L=600\) pixels, to verify that the resolution of \(L=300\) pixels does not bias the construction results.
## IV Results
In this section, we present constructions of anisotropic SHU composite microstructures across volume fractions associated with exclusion regions of prescribed shape, size and symmetry. Specifically, we consider circular-disk, elliptical-disk, square, rectangular, butterfly-shaped and lemniscate-shaped exclusion regions. The size of \(\Omega\) for a given exclusion-region shape is controlled by a characteristic length scale \(\ell\) of the shape (e.g., the radius for circles, the semi-axis length for ellipses, the edge lengths for squares and rectangles), which will be specified in detail below. We note that the parameter \(\chi\), which characterizes the ratio of the number of constraints (i.e., the number of independent wavevectors \(\mathbf{k}\) in \(\Omega\)) to the total number of degrees of freedom, can be estimated as \(\chi\approx Vol(\Omega(\ell))/L^{2}\) for a given \(\ell\), and will also be specified for each case below.
### Isotropic Media with Circular-Disk Exclusion Regions
To validate the construction procedure based on vector spectral densities \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\), we first generate realizations of SHU composites possessing zero spectral density with a circular-disk exclusion region \(\Omega(\ell)\) with continuous rotational symmetry and varying radius \(\ell\) (see Fig. 2a), i.e.,
\[\Omega(\ell)=\{\mathbf{k}=\frac{2\pi}{L}\mathbf{n}:\mathbf{n}\in\mathbb{Z}^{2},(\frac{k_{x}}{\ell})^{2}+(\frac{k_{y}}{\ell})^{2}\leq 1\}. \tag{28}\]
This corresponds to the isotropic stealthy system investigated in Ref. [118] characterized by angularly-averaged spectral density \(\tilde{\chi}_{{}_{V}}(k)=0\) for \(k\leq\ell\). The size of \(\Omega\) is chosen to be \(\ell=5,10,15,20,25\), corresponding to \(\chi\approx 8.7\times 10^{-4},3.5\times 10^{-3},7.9\times 10^{-3},1.4\times 10^{-2},2.2\times 10^{-2}\), respectively. The full spectrum of phase volume fractions from \(0\) to \(1\) was investigated and composite microstructures with \(\phi=0.1,0.3,0.5,0.7,0.9\) are shown here, which are representative of a wide spectrum of morphologies of the system.
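A hedged sketch of how such a discrete exclusion-region mask and the \(\chi\) estimate quoted above can be set up is shown below; the helper name is ours, and the other shapes of Fig. 2 follow by replacing the inequality of Eq. (28).

```python
import numpy as np

def circular_mask(L, ell):
    """Boolean mask of the circular-disk exclusion region of Eq. (28) on the
    discrete k-grid, with ell measured in units of 2*pi/L; the forward-
    scattering point k = 0 is excluded."""
    n = np.fft.fftfreq(L) * L                  # integer indices n, k = (2*pi/L)*n
    nx, ny = np.meshgrid(n, n, indexing="ij")
    mask = nx**2 + ny**2 <= ell**2
    mask[0, 0] = False                         # exclude k = 0
    return mask

L = 300
for ell in (5, 10, 15, 20, 25):
    # chi estimated as Vol(Omega)/L^2, as in the estimate quoted above;
    # the printed values are close to those listed in the text
    chi = circular_mask(L, ell).sum() / L**2
    print(ell, f"{chi:.1e}")
```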
The generated microstructures are shown in Fig. 3, along with the representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) associated with \(\phi=0.5\) for different \(\ell\) (left column). Figure 4 shows the corresponding angle-averaged spectral density function \(\tilde{\chi}_{{}_{V}}(k)\) (\(k=|\mathbf{k}|\)), which exhibits a "peak" immediately beyond the exclusion region whose value diminishes as the exclusion-region size increases. We note that similar features are observed in \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) associated with the same region size for all phase volume fractions, and for all of the different \(\Omega\) regions considered here. Therefore, we only show the representative cases with \(\phi=0.5\).
The realizations from left to right correspond to increasing \(\phi\), and from bottom to top to increasing \(\ell\). Taking the realization with \(\phi=0.1\) and \(\ell=5\) as a starting point, it can be seen that the phase of interest forms statistically isotropic particles of similar sizes, which overall possess an isotropic distribution. These features are expected for a circular-disk \(\Omega(\ell)\) and are consistent with the observations reported in Ref. [118]. As \(\phi\) increases while keeping \(\ell\) the same, the size of the particles increases, and the "particles" merge into ligaments which eventually form a connected morphology.
Interestingly, the microstructures appear to possess the so-called _phase-inversion symmetry_ [1], i.e., the microstructure associated with a volume fraction \(\phi\) is statistically equivalent to that with a volume fraction \(1-\phi\) with the two phases exchanged. This can be clearly seen by visually comparing the pairs of microstructures with \(\phi=0.1\) and \(0.9\), and \(\phi=0.3\) and \(0.7\) shown in Fig. 3. Detailed analysis of the rescaled autocovariance functions, reported in Sec. IV.G and Appendix A below, indicates that the constructed microstructures with volume fractions \(\phi\) approximately in the range [\(0.4\), \(0.6\)] possess a high degree of phase-inversion symmetry (i.e., the corresponding microstructures possess virtually identical rescaled autocovariance functions, see Sec. IV.G).
As a consequence of the phase inversion symmetry, the resulting _isotropic_ composite microstructures possess a percolation threshold \(\phi_{c}=0.5\)[1], at which a system-spanning cluster of the phase of interest (i.e., the "blue" phase) first emerges.
In addition, we observe an interesting morphology evolution when increasing the size \(\ell\) of the circular-disk exclusion region while keeping \(\phi\) fixed. Using the realizations with \(\phi=0.1\) as examples, it can be clearly seen that as \(\ell\) increases the particles become finer, i.e., they get smaller in size while retaining a statistically isotropic morphology (except for very small particles, where the intrinsic anisotropy of the square grid is manifested) and statistically isotropic distributions. Similar trends are also observed for other volume fractions, e.g., for the connected morphology, the width of the ligaments gets smaller as \(\ell\) increases. We note that microstructures at the same \(\phi\) and different \(\ell\) are not related by a simple scaling, i.e., they are not self-similar to one another.

Figure 3: Isotropic hyperuniform composites with circular-disk exclusion regions with varying radius \(\ell\) and phase volume fraction \(\phi\). The left most column shows representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) (associated with \(\phi=0.5\)) for each radius \(\ell\): from bottom to top \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)), corresponding to \(\chi\approx 8.7\times 10^{-4},3.5\times 10^{-3},7.9\times 10^{-3},1.4\times 10^{-2},2.2\times 10^{-2}\), respectively. The phase volume fractions for the realizations from left to right are \(\phi=0.1,0.3,0.5,0.7,0.9\), respectively. The linear size of the system is \(L=300\) pixels.

Figure 4: Representative angle-averaged \(\tilde{\chi}_{{}_{V}}(k)\) (where \(k=|\mathbf{k}|\)) associated with the constructed SHU composites with \(\phi=0.5\) and a circular-disk \(\Omega\) region in the Fourier space for different radius \(\ell\): from left to right \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)).
These behaviors can be understood from the formulation of the construction problem. As discussed in Sec. III.A, the construction is formulated as a constrained optimization problem, in which the variables are the discrete values (1 or 0) of individual pixels and the constraints are imposed through the zero-valued spectral density \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) at certain wavevectors \(\mathbf{k}\). The number of constraints is determined by the number of \(\mathbf{k}\) points in the circular-disk exclusion region. When the region size \(\ell\) is small, the pixels are less constrained and can organize freely to form clustering morphologies. On the other hand, for large \(\ell\) the degrees of freedom of each pixel are activated in order to satisfy the large number of constraints. In this case, the pixels behave like non-overlapping particles moving on the construction grid, and thus rarely form clusters. In addition, increasing \(\ell\) (i.e., the size of \(\Omega\)) in Fourier space requires the resulting composites to be very "uniform" on smaller and smaller scales in real space. This forces the realizations to possess a finer morphology composed of individual pixels instead of clusters of pixels.

Figure 5: Anisotropic hyperuniform composites with elliptical-disk exclusion regions with varying size \(\ell\) (i.e., the length of the long semi-axis) and volume fraction \(\phi\). The left most column shows representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) (associated with \(\phi=0.5\)) for each long semi-axis \(\ell\): from bottom to top \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)), corresponding to \(\chi\approx 2.9\times 10^{-4},1.2\times 10^{-3},2.6\times 10^{-3},4.7\times 10^{-3},7.3\times 10^{-3}\), respectively. The phase volume fractions for the realizations from left to right are \(\phi=0.1,0.2,0.3,0.4,0.5\), respectively. The linear size of the system is \(L=300\) pixels.

Figure 6: Representative averaged \(\tilde{\chi}_{{}_{V}}(k)\) (where \(k=|\mathbf{k}|\)) along two orthogonal directions (horizontal and vertical) in the Fourier space associated with the constructed SHU composites with \(\phi=0.5\) and an elliptical exclusion region for different long semi-axis \(\ell\): from left to right \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)).
To further verify this, we consider very large exclusion regions (i.e., large \(\chi\) values) and present the construction results in Appendix C. It can be seen in Fig. 21 that as \(\chi\) increases, individual pixels in the realizations become mutually separated as if they were particles with repulsive interactions, and the overall distribution resembles that of a SHU point configuration [2], albeit on a square grid. Although the realizations associated with large \(\Omega\) (i.e., large \(\ell\)) exhibit interesting particle-like behaviors, in the subsequent discussions we will focus on cases with relatively small \(\ell\), corresponding to the heterogeneous two-phase material regime.
In summary, we have found that circular-disk exclusion regions result in isotropic hyperuniform structures on both global and local scales. The isotropic composite microstructures with \(\phi\) approximately in the range [0.4, 0.6] possess phase-inversion symmetry. We speculate that the emergence of phase-inversion symmetry is due to the fact that the stealthy spectral density function effectively imposes constraints on the microstructures beyond the exclusion region, especially for microstructures with \(\phi\) in the vicinity of 0.5. Increasing the phase volume fraction leads to enhanced connectedness of the phase of interest, which percolates at \(\phi_{c}=0.5\). Moreover, increasing the size of the exclusion region results in a finer phase morphology, which is required to suppress local volume fraction fluctuations over the broader range of length scales associated with the larger exclusion region.
### Anisotropic Media with Elliptical-Disk Exclusion Regions
We now consider the generation of statistically anisotropic media associated with an elliptical-disk exclusion region \(\Omega\), which is an affine transformation of a circular disk. Hence, it breaks the continuous rotational symmetry of a circle and possesses two-fold rotational symmetry. Specifically, we consider an elliptic exclusion region with an aspect ratio of \(1/3\) and a long semi-axis with length \(\ell\) (see Fig. 2b), defined as
\[\Omega(\ell)=\{\mathbf{k}=\frac{2\pi}{L}\mathbf{n}:\mathbf{n}\in\mathbb{Z}^{2},(\frac{k_{x}}{\ell/3})^{2}+(\frac{k_{y}}{\ell})^{2}\leq 1\}. \tag{29}\]
The representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) with varying \(\ell\) (left column) and the associated realizations are shown in Fig. 5. Figure 6 shows the corresponding averaged spectral density function \(\tilde{\chi}_{{}_{V}}(k)\) (\(k=|\mathbf{k}|\)) along the horizontal and vertical directions.
We find that, similar to the circular-disk \(\Omega\) cases, increasing \(\phi\) leads to increased connectivity and degree of clustering in the constructed microstructures for all \(\ell\) values. The microstructures with \(\phi\) approximately in the range [0.4, 0.6] also possess a high degree of phase-inversion symmetry (see Sec. IV.G for details), and percolation of the "blue" phase along the horizontal direction is observed at \(\phi_{c}\approx 0.5\). Interestingly, the anisotropy in \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) does not result in elongated "particles" at low \(\phi\) on a local scale, as one might speculate, but rather leads to significantly anisotropic distributions of the particles on a global scale. For example, it can be clearly seen that the particle phase in the realizations with \(\phi=0.1\) forms "necklace"-like chains in the horizontal direction. The particles in the chains eventually connect to one another as \(\phi\) increases to form connected bands. In addition, increasing \(\ell\) leads to finer morphologies, with reduced particle size at lower \(\phi\) and reduced ligament width at higher \(\phi\), while the anisotropy effects along the horizontal direction persist for all \(\phi\) and \(\ell\) values.
### Anisotropic Media with Square Exclusion Regions
Next, we consider the effects of a square exclusion region with four-fold rotational symmetry and edge length \(\ell\) (see Fig. 2c), which is defined as
\[\Omega(\ell)=\{\mathbf{k}=\frac{2\pi}{L}\mathbf{n}:\mathbf{n}\in\mathbb{Z}^{2},|k_{x}|\leq\ell/2,|k_{y}|\leq\ell/2\}. \tag{30}\]
The representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) with varying \(\ell\) (left column) and the associated realizations are shown in Fig. 7. Figure 8 shows the corresponding averaged spectral density function \(\tilde{\chi}_{{}_{V}}(k)\) (\(k=|\mathbf{k}|\)) along the horizontal and diagonal directions.
We find that again the anisotropy of the exclusion region in the spectral density is not manifested in the local morphology of the phases, i.e., the particles formed by the pixels are still statistically isotropic _locally_ and no square particles are observed. On the other hand, the _global_ distribution of the entire particulate microstructure exhibits a high degree of four-fold rotational symmetry. For example, at low \(\phi\), the particles are essentially arranged on a distorted square lattice (which is most apparent for the case with \(\ell=5\) and \(\phi=0.3\)). At high \(\phi\), the connected phase morphology is composed of perpendicularly arranged ligaments, as clearly illustrated in the case with \(\ell=10\) and \(\phi=0.5\). Similar effects of increasing volume fraction (i.e., increasing phase connectivity) and increasing \(\ell\) (i.e., leading to finer morphologies) are also observed. In particular, the composite microstructures with \(\phi\) approximately in the range [0.4, 0.6] are found to
possess phase-inversion symmetry and the "blue" phase percolates along both orthogonal directions at \(\phi_{c}\approx 0.5\) (see Sec. IV.G for details).
### Anisotropic Media with Rectangular Exclusion Regions
Figure 8: Representative averaged \(\tilde{\chi}_{{}_{V}}(k)\) (where \(k=|\mathbf{k}|\)) along two directions (horizontal and diagonal) in the Fourier space associated with the constructed SHU composites with \(\phi=0.5\) and a square exclusion region for different edge length \(\ell\): from left to right \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)).

Figure 7: Anisotropic hyperuniform composites with square exclusion regions with varying size \(\ell\) (i.e., edge length) and volume fraction \(\phi\). The left most column shows representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) (associated with \(\phi=0.5\)) for each edge length \(\ell\): from bottom to top \(\ell=10,20,30,40,50\) (in units of \(2\pi/L\)), corresponding to \(\chi\approx 1.1\times 10^{-3},4.4\times 10^{-3},9.9\times 10^{-3},1.8\times 10^{-2},2.8\times 10^{-2}\), respectively. The phase volume fractions for the realizations from left to right are \(\phi=0.1,0.2,0.3,0.4,0.5\), respectively. The linear size of the system is \(L=300\) pixels.

We also investigate \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) with a rectangular exclusion region possessing two-fold rotational symmetry, an aspect ratio of \(1/3\) and a long-edge length \(\ell\) (see Fig. 2d), which is defined as
\[\Omega(\ell)=\{\mathbf{k}=\frac{2\pi}{L}\mathbf{n}:\mathbf{n}\in\mathbb{Z}^{2},|k_{x}|\leq\ell/6,|k_{y}|\leq\ell/2\}. \tag{31}\]
The representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) with varying \(\ell\) (left column) and the associated realizations are shown in Fig. 9. Figure 10 shows the corresponding averaged spectral density function \(\tilde{\chi}_{{}_{V}}(k)\) (\(k=|\mathbf{k}|\)) along the horizontal and vertical directions.
We find that, as expected, the shape asymmetry of the exclusion region is also manifested in the morphology of the realizations. Specifically, the elongation effect seems to dominate the morphology of the realizations, leading to chain-like arrangements of "particles" at low \(\phi\) and stripes at high \(\phi\). Increasing the volume fraction again leads to increasing phase connectivity, resulting in directional percolation along the horizontal direction at \(\phi_{c}\approx 0.5\), similar to that observed in the microstructures associated with the elliptical-disk exclusion regions. The anisotropic microstructures are found to possess phase-inversion symmetry for \(\phi\) approximately in the range [0.4, 0.6] (see Sec. IV.G for details). Finer morphologies due to increasing \(\ell\) are also observed.

Figure 10: Representative averaged \(\tilde{\chi}_{{}_{V}}(k)\) (where \(k=|\mathbf{k}|\)) along two orthogonal directions (horizontal and vertical) in the Fourier space associated with the constructed SHU composites with \(\phi=0.5\) and a rectangular exclusion region for different long-edge length \(\ell\): from left to right \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)).

Figure 9: Anisotropic hyperuniform composites with rectangular exclusion regions with varying size \(\ell\) (i.e., long-edge length) and volume fraction \(\phi\). The left most column shows representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) (associated with \(\phi=0.5\)) for each long-edge length \(\ell\): from bottom to top \(\ell=10,20,30,40,50\) (in units of \(2\pi/L\)), corresponding to \(\chi\approx 3.7\times 10^{-4},1.5\times 10^{-3},3.3\times 10^{-3},5.9\times 10^{-3},9.3\times 10^{-3}\), respectively. The phase volume fractions for the realizations from left to right are \(\phi=0.1,0.2,0.3,0.4,0.5\), respectively. The linear size of the system is \(L=300\) pixels.
### Directionally Hyperuniform Media with Butterfly-Shaped Exclusion Regions
In the previous cases, the composite systems are stealthy hyperuniform along all directions, although the characteristic length scales on which scattering is completely suppressed [corresponding to \(\tilde{\chi}_{{}_{V}}(\mathbf{k})=0\)] differ along different directions. In this section, we investigate systems that are directionally hyperuniform, i.e., for which the complete suppression of scattering only occurs along certain directions in Fourier space. In particular, we consider two representative spectral density functions, one with a butterfly-shaped exclusion region and one with a lemniscate-shaped exclusion region.
The butterfly-shaped \(\Omega\) is defined as the region in the 2nd and 4th quadrants enclosed by the concave superdisk curve [140; 141] (see Fig. 2e), which possesses two-fold rotational symmetry, i.e.,
\[\Omega(\ell)=\{\mathbf{k}=\frac{2\pi}{L}\mathbf{n}:\mathbf{n}\in\mathbb{Z}^{2},|\frac{k_{x}}{\ell}|^{p}+|\frac{k_{y}}{\ell}|^{p}\leq 1,k_{x}\cdot k_{y}\leq 0\}, \tag{32}\]
where \(p=1.5\) in this study. The associated composites thus only show hyperuniform behaviors when \(\mathbf{k}\) approaches the origin in the 2nd and 4th quadrants. Figure 11 shows the realizations associated with the butterfly-shaped \(\Omega\), as well as representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) for different \(\ell\) values. The corresponding averaged spectral density functions \(\tilde{\chi}_{{}_{V}}(k)\) (\(k=|\mathbf{k}|\)) along the two diagonal directions are shown in Fig. 12.
We find, consistent with the previous examples, increasing connectivity with increasing \(\phi\) and finer morphologies with increasing \(\ell\). However, unlike the previous cases, where the constructed anisotropic media are hyperuniform along all directions, these directionally hyperuniform microstructures no longer possess phase-inversion symmetry, as can be clearly seen in Fig. 11 and from the associated autocovariance functions in Appendix A. Increasing \(\phi\) leads to percolation of the "blue" phase along the 45-degree diagonal direction at \(\phi_{c}\approx 0.45\), which is lower than that for the microstructures possessing phase-inversion symmetry.
The realizations for all \(\ell\) and \(\phi\) values show strong anisotropy along the diagonal directions of the construction domain, i.e., chain-like arrangements of particles at low \(\phi\) and stripe-like bands at high \(\phi\), which is consistent with the anisotropy directions of \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\). Specifically, the systems are hyperuniform along the directions of these chain-like or stripe-like structures, and non-hyperuniform along the perpendicular directions. This can be intuitively understood by imagining moving observation windows along the chains/stripes. Once the window size is larger than the average distance between a pair of chains/stripes, moving the window along the chain/stripe direction does not result in large volume fraction fluctuations. On the other hand, moving the window along the directions perpendicular to the chains/stripes is expected to lead to large fluctuations, as the window alternately contains the white or dark phases.
### Directionally Hyperuniform Media with Lemniscate-Shaped Exclusion Regions
The lemniscate-shaped \(\Omega\) (see Fig. 2f), which possesses two-fold rotational symmetry, is defined by [142]
\[\Omega(\ell)=\{\mathbf{k}=\frac{2\pi}{L}\mathbf{n}:\mathbf{n}\in\mathbb{Z}^{2},\rho^{2}\leq\ell^{2}\cos(2\theta)\}, \tag{33}\]
where \(\rho\) and \(\theta\) are the Fourier-space polar coordinates, i.e., \(\rho^{2}=k_{x}^{2}+k_{y}^{2}\) and \(\theta=\tan^{-1}(k_{x}/k_{y})\), and \(\ell\) is the length scale parameter. Similar to the butterfly-shaped case, the associated composites only exhibit hyperuniform behaviors along certain directions in Fourier space, i.e., those enclosed in the lemniscate exclusion region.
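Analogously to the circular-disk mask sketched in Sec. IV.A, the lemniscate region can be rasterized onto the discrete \(\mathbf{k}\)-grid as follows; this is a hedged sketch with a helper name of our own, where `np.arctan2(nx, ny)` realizes \(\theta=\tan^{-1}(k_{x}/k_{y})\) as written in Eq. (33).

```python
import numpy as np

def lemniscate_mask(L, ell):
    """Boolean mask of the lemniscate-shaped exclusion region of Eq. (33)
    on the discrete k-grid, with ell in units of 2*pi/L."""
    n = np.fft.fftfreq(L) * L
    nx, ny = np.meshgrid(n, n, indexing="ij")
    rho2 = nx**2 + ny**2
    theta = np.arctan2(nx, ny)                 # theta = arctan(k_x / k_y)
    mask = rho2 <= ell**2 * np.cos(2.0 * theta)
    mask[0, 0] = False                         # exclude forward scattering at k = 0
    return mask
```

Note that the constraint is automatically void wherever \(\cos(2\theta)<0\), which produces the two-lobed shape of Fig. 2f.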
Figure 13 shows the realizations associated with the lemniscate-shaped \(\Omega\), as well as representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) for different \(\ell\) values. The corresponding averaged spectral density functions \(\tilde{\chi}_{{}_{V}}(k)\) (\(k=|\mathbf{k}|\)) along the horizontal and vertical directions are shown in Fig. 14. Similar to the media associated with the butterfly-shaped exclusion regions, the resulting directionally hyperuniform microstructures do not possess phase-inversion symmetry. Percolation is found to occur at a much lower volume fraction \(\phi_{c}\approx 0.4\) along the anisotropy (i.e., horizontal) direction, compared to that for the microstructures possessing phase-inversion symmetry. It can be seen that the realizations contain chain-like (for low \(\phi\)) and stripe-like (for high \(\phi\)) structures along the horizontal direction, leading to strong anisotropy in this direction. Overall, these structural elements are arranged in a wavy manner along the horizontal direction. A similar pattern was also observed in directionally hyperuniform point configurations with a lemniscate-shaped structure factor [2]; see Fig. 1. The emergence of the wavy structures might result from the unique direction-dependent scattering behavior imposed by the lemniscate pattern in \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\).
In summary, we have found that the directional hyperuniformity imposed by the butterfly-shaped and lemniscate-shaped exclusion regions is realized by the formation of anisotropic chain-like or stripe-like structures in the constructed media. These two distinct exclusion-region shapes lead to two distinct sets of
anisotropic directions in the constructed composite microstructures (i.e., diagonal directions in the case of the butterfly-shaped \(\Omega\) and horizontal directions in the case of the lemniscate-shaped \(\Omega\)). Along these directions, the anisotropic chain/stripe structures possess a much more uniform distribution of the phases, giving rise to hyperuniformity. On the other hand, large gaps and voids are present between the chains/stripes, which result in large local volume fraction fluctuations along the directions perpendicular to the chains/stripes, leading to non-hyperuniform behaviors along those directions.
Figure 11: Directionally hyperuniform composites with butterfly-shaped exclusion regions with varying size \(\ell\) and volume fraction \(\phi\). The left most column shows representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) (associated with \(\phi=0.5\)) for each \(\ell\): from bottom to top \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)), corresponding to \(\chi\approx 2.7\times 10^{-4},8.9\times 10^{-4},1.9\times 10^{-3},3.2\times 10^{-3},4.8\times 10^{-3}\), respectively. The phase volume fractions for the realizations from left to right are \(\phi=0.1,0.2,0.3,0.4,0.5\), respectively. The linear size of the system is \(L=300\) pixels.
Figure 12: Representative averaged \(\tilde{\chi}_{{}_{V}}(k)\) (where \(k=|\mathbf{k}|\)) along two orthogonal diagonal directions in the Fourier space associated with the constructed SHU composites with \(\phi=0.5\) and a butterfly-shaped exclusion region for different size \(\ell\): from left to right \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)).
### Phase Inversion Symmetry and Percolation Threshold
In this section, we provide a detailed analysis of the phase-inversion symmetry of the constructed composite microstructures. A random medium possesses phase-inversion symmetry if the morphology of phase 1 at volume fraction \(\phi_{1}\) is statistically identical to that of phase 2 in the system where the volume fraction is \(\phi_{2}=1-\phi_{1}\), and hence [1]
\[S_{2}^{(1)}(r;\phi_{1},\phi_{2})=S_{2}^{(2)}(r;\phi_{2},\phi_{1}). \tag{34}\]
Figure 14: Representative averaged \(\tilde{\chi}_{{}_{V}}(k)\) (where \(k=|\mathbf{k}|\)) along two orthogonal directions (horizontal and vertical) in the Fourier space associated with the constructed SHU composites with \(\phi=0.5\) and a lemniscate-shaped exclusion region for different size \(\ell\): from left to right \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)).
Figure 13: Directionally hyperuniform composites with lemniscate-shaped exclusion regions with varying size \(\ell\) and volume fraction \(\phi\). The left most column shows representative \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) (associated with \(\phi=0.5\)) for each \(\ell\): from bottom to top \(\ell=5,10,15,20,25\) (in units of \(2\pi/L\)), corresponding to \(\chi\approx 5.6\times 10^{-4},2.2\times 10^{-3},5.0\times 10^{-3},8.8\times 10^{-3},1.4\times 10^{-2}\), respectively. The phase volume fractions for the realizations from left to right are \(\phi=0.1,0.2,0.3,0.4,0.5\), respectively. The linear size of the system is \(L=300\) pixels.
Equivalently, the corresponding microstructures with phase-inversion symmetry should possess an identical rescaled autocovariance function
\[f(r)=\frac{S_{2}^{(1)}(r)-\phi_{1}^{2}}{\phi_{1}(1-\phi_{1})}=\frac{S_{2}^{(2)}(r )-\phi_{2}^{2}}{\phi_{2}(1-\phi_{2})}. \tag{35}\]
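A minimal sketch of how \(f(r)\) can be extracted from a periodic digital realization, using the Wiener-Khinchin relation for the autocovariance and a simple radial binning, is given below; the function and variable names are ours.

```python
import numpy as np

def rescaled_autocovariance(img, n_bins=60):
    """Radially averaged rescaled autocovariance f(r) of Eq. (35)
    for a periodic 2D binary image."""
    L = img.shape[0]
    phi = img.mean()
    J = img - phi
    # Wiener-Khinchin: the autocovariance is the inverse FFT of the power spectrum
    chi_r = np.fft.ifft2(np.abs(np.fft.fft2(J))**2).real / L**2
    f = chi_r / (phi * (1.0 - phi))            # rescaling of Eq. (35); f(0) = 1
    # Radial binning over periodic pixel separations
    n = np.fft.fftfreq(L) * L
    rx, ry = np.meshgrid(n, n, indexing="ij")
    r = np.hypot(rx, ry)
    bins = np.linspace(0.0, r.max() / np.sqrt(2.0), n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    prof = np.bincount(idx, weights=f.ravel(), minlength=n_bins + 1)
    cnts = np.bincount(idx, minlength=n_bins + 1)
    centers = 0.5 * (bins[1:] + bins[:-1])
    return centers, (prof / np.maximum(cnts, 1))[:n_bins]
```

Phase-inversion symmetry can then be checked by comparing the curves returned for `img` and for its complement `1 - img`.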
Figure 15 shows a comparison of the rescaled autocovariance functions \(f(r)\) for the isotropic microstructures associated with the circular-disk exclusion region (\(\ell=5\)) and varying volume fractions \(\phi\) of the "blue" phase, i.e., phase "1" (see Fig. 3). It can be seen that as the phase volume fraction increases towards 0.5, the corresponding \(f(r)\) functions match one another better. For \(\phi=\phi_{1}\geq 0.4\), the corresponding \(f(r)\) derived from the two phases are virtually identical to one another, indicating that the corresponding microstructures possess a high degree of phase-inversion symmetry. We subsequently analyzed the rescaled autocovariance functions for the microstructures associated with the elliptical-disk, square and rectangular exclusion regions (see Appendix A) and find that all of these anisotropic hyperuniform microstructures possess phase-inversion symmetry for \(\phi\) approximately in the range [0.4, 0.6].
It is well known that a statistically isotropic medium with phase-inversion symmetry must possess a percolation threshold \(\phi_{c}=0.5\) [1]. Indeed, we observe that the isotropic microstructures associated with the circular-disk exclusions all percolate at \(\phi_{c}=0.5\). Interestingly, our results suggest that this statement can possibly be generalized to anisotropic hyperuniform microstructures possessing phase-inversion symmetry as well. In particular, for the microstructures associated with the elliptical-disk and rectangular exclusion regions, we observe that percolation first occurs along the anisotropic horizontal direction. For the microstructures associated with the square exclusion regions, percolation occurs simultaneously along both orthogonal directions.
On the other hand, the directionally hyperuniform microstructures associated with the butterfly-shaped and lemniscate-shaped \(\Omega\) do not possess phase-inversion symmetry. This can be seen by visually comparing the corresponding microstructures and by quantitatively comparing the associated \(f(r)\) functions (see Appendix A). We speculate that this is because the very asymmetric spectral density functions \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) associated with these \(\Omega\) regions can only be realized by distinctly different topologies and morphologies of the phases at different \(\phi\), which breaks the phase-inversion symmetry. As a consequence, the percolation thresholds for these microstructures (along certain directions) are significantly lower than those for microstructures with phase-inversion symmetry, i.e., \(\phi_{c}\approx 0.45\) for the butterfly-shaped \(\Omega\) and \(\phi_{c}\approx 0.4\) for the lemniscate-shaped \(\Omega\). We note that the percolation threshold values reported here are only estimates based on the constructed microstructures. A more detailed study is required in order to precisely determine the threshold values [143].
## V Conclusions and discussion
In this work, we devised a Fourier-space based numerical construction procedure that explicitly incorporates the vector-dependent spectral density function \(\tilde{\chi}_{{}_{V}}(\mathbf{k})\) to design and generate anisotropic microstructures with targeted directional scattering properties. We mainly focused on anisotropic SHU composites, which possess a spectral density \(\tilde{\chi}_{{}_{V}}(\mathbf{k})=0\) in an exclusion region \(\Omega\) around the origin in Fourier space [see Eq. (1)]. We systematically investigated differently shaped exclusion regions \(\Omega\) with various discrete symmetries, including circular-disk, elliptical-disk, square, rectangular, butterfly-shaped and lemniscate-shaped exclusion regions, with the latter two instances leading to directionally hyperuniform composites. Our study allows us to understand how different discrete symmetries of \(\Omega\) affect the directional scattering properties of the resulting composite, which is crucial to the engineering of novel disordered photonic and phononic two-phase media, as elaborated below.

Figure 15: Rescaled autocovariance functions \(f(r)\) for isotropic stealthy hyperuniform microstructures associated with the circular-disk exclusion region with \(\ell=5\). The microstructures possess phase-inversion symmetry for \(\phi\) approximately in the range [0.4, 0.6].
We have found that the circular-disk \(\Omega\) regions give rise to isotropic hyperuniform structures on both global and local scales, and the resulting microstructures possess phase-inversion symmetry for volume fractions \(\phi\) approximately in the range [0.4, 0.6]. Increasing \(\phi\) leads to enhanced phase connectedness and subsequent percolation at \(\phi_{c}=0.5\). The anisotropic microstructures associated with the elliptical-disk, square and rectangular exclusion regions also possess phase-inversion symmetry and directional percolation at \(\phi_{c}\approx 0.5\). On the other hand, the directionally hyperuniform microstructures associated with the butterfly-shaped and lemniscate-shaped \(\Omega\) do not possess phase-inversion symmetry and percolate along certain directions at much lower volume fractions (\(\phi_{c}\approx 0.45\) for the butterfly-shaped \(\Omega\) and \(\phi_{c}\approx 0.4\) for the lemniscate-shaped \(\Omega\)). Increasing the size of \(\Omega\) for all of the shapes considered here results in finer structures, which are required to suppress local volume fraction fluctuations over the broader range of length scales associated with the larger exclusion region.
The directional hyperuniform behavior imposed by the shape asymmetry of the exclusion regions is clearly manifested in the anisotropic phase morphology of the constructed media. For example, in the case of lemniscate-shaped exclusion regions, the resulting media are hyperuniform along the direction associated with the chain-like structures, which possess a much more uniform distribution of the phases along this direction on the global scale. On the other hand, the media are non-hyperuniform along the direction perpendicular to these chain-like structures, along which large gaps and voids are present and the distribution of the phases is much less uniform, both of which can lead to large local volume fraction fluctuations that destroy hyperuniformity. The exclusion regions possessing only two-fold rotational symmetry, such as the elliptical-disk and rectangular regions, impose different length scales (along the two orthogonal directions) over which the systems are required to be stealthy hyperuniform, which is also achieved by the anisotropic chain-like or stripe-like structures in the associated composite microstructures.
In addition, while the anisotropic exclusion region imposes strong constraints on the _global_ symmetry of the resulting microstructure, the composite can still have almost isotropic _local_ morphology. For example, in the case of square exclusion regions, the constructed microstructures contain almost isotropic "particles" (local clusters of pixels), arranged on distorted "square lattices" to realize the global four-fold rotational symmetry imposed by the exclusion region. We also found that the anisotropic phase morphologies appear to be sensitive to the overall asymmetry of \(\Omega\) rather than to its detailed shape for regions possessing two-fold rotational symmetry. This can be seen by comparing the microstructures associated with the elliptical-disk and rectangular exclusion regions: both possess similar chain-like or stripe-like structures along the horizontal direction.
Figure 16: Rescaled autocovariance functions \(f(r)\) along the horizontal direction for anisotropic stealthy hyperuniform microstructures associated with the elliptical-disk exclusion region with \(\ell=5\).
Figure 17: Rescaled autocovariance functions \(f(r)\) along the two orthogonal directions for anisotropic stealthy hyperuniform microstructures associated with the square exclusion region with \(\ell=5\).
Although all of the illustrative construction examples we considered are hyperuniform, the general construction procedure can be readily employed to generate microstructures of composites with an arbitrary anisotropic spectral density function \(\tilde{\chi}_{{}_{\rm V}}(\mathbf{k})\). Indeed, in Appendix C, we demonstrate this utility by generating stealthy non-hyperuniform composites, which exhibit a multi-scale structure in order to achieve the prescribed scattering behavior across scales. In addition, our procedure can be straightforwardly generalized to three dimensions. The resulting digitized microstructures can be experimentally fabricated using 3D printing techniques [144; 145].
While our focus in the present work was to engineer anisotropic scattering properties directly encoded in the spectral density function, our general construction procedure is a key initial step in the computational design of disordered hyperuniform composites with other targeted physical properties, such as photonic, elastic, and transport characteristics. Such designs could be achieved, for example, by leveraging recently developed predictive formulations, including nonlocal theories for the effective dynamic elastic moduli and dielectric constant [68; 73; 125] and the spreadability for time-dependent diffusive transport behaviors [126; 63; 146]. Such theories rigorously connect effective properties of the composites to their spectral density \(\tilde{\chi}_{{}_{\rm V}}(\mathbf{k})\), allowing us to achieve desirable composite properties by tuning a targeted spectral density. The associated microstructure can then be obtained using our construction procedure. In future work, we will explore this framework to design disordered hyperuniform composites with targeted electromagnetic and transport properties.
###### Acknowledgements.
This work was supported by the Army Research Office under Cooperative Agreement Number W911NF-22-2-0103.
Figure 18: Rescaled autocovariance functions \(f(r)\) along the horizontal direction for anisotropic stealthy hyperuniform microstructures associated with the rectangular exclusion region with \(\ell=5\).
Figure 19: Rescaled autocovariance functions \(f(r)\) along the 45-degree diagonal direction for directionally hyperuniform microstructures associated with the butterfly-shaped exclusion region with \(\ell=5\).
## Appendix A Auto-covariance Functions for Anisotropic Composite Microstructures
In this section, we present results on the rescaled autocovariance functions (see Eq. (35)) for the anisotropic composite microstructures. In particular, Figs. 16, 17 and 18 respectively show \(f(r)\) for the anisotropic hyperuniform microstructures associated with the elliptical-disk, square and rectangular exclusion regions. It can be seen that the rescaled autocovariance functions for the microstructures with \(\phi\in[0.4,0.6]\) are virtually identical, indicating these microstructures possess phase-inversion symmetry (see Sec. IV.G). On the other hand, Figs. 19 and 20 respectively show \(f(r)\) for the directionally hyperuniform microstructures associated with the butterfly-shaped and lemniscate-shaped exclusion regions. It can be seen that the rescaled autocovariance functions for these microstructures are distinctly different for all corresponding \(\phi\) values, indicating that they do not possess phase-inversion symmetry.
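For reference, the comparisons shown in these figures can be reproduced with a few lines of standard FFT code. The sketch below is illustrative (it assumes a periodic binary pixel array `I` with entries 0/1 and is not the implementation used in this work): it computes the autocovariance via the Wiener-Khinchin relation and rescales by \(\chi_{{}_{V}}(0)=\phi(1-\phi)\); phase-inversion symmetry is then assessed by comparing \(f(r)\) of the microstructure constructed at volume fraction \(\phi\) with that of the one constructed at \(1-\phi\).
```
import numpy as np

def rescaled_autocovariance(I):
    # Rescaled autocovariance f(r) of a periodic binary pixel field I (entries 0/1):
    # chi_V(r) from the power spectrum of J = I - phi (Wiener-Khinchin), divided by
    # chi_V(0) = phi * (1 - phi), which holds exactly for a two-phase (0/1) field.
    phi = I.mean()
    J = I - phi
    power = np.abs(np.fft.fft2(J)) ** 2          # ~ spectral density (up to a volume factor)
    chi = np.fft.ifft2(power).real / I.size      # periodic autocovariance chi_V(r)
    return chi / (phi * (1.0 - phi))             # f at r = 0 equals 1 by construction

def radial_average(f):
    # Angular average of f(r) over periodic distances (for the isotropic case).
    nn = f.shape[0]
    y, x = np.indices(f.shape)
    r = np.hypot(np.minimum(x, nn - x), np.minimum(y, nn - y)).astype(int)
    return np.bincount(r.ravel(), weights=f.ravel()) / np.bincount(r.ravel())

# Phase-inversion symmetry test, as in Figs. 15-20: compare the profile obtained
# from the microstructure constructed at volume fraction phi (hypothetical array
# I_phi) with the profile from the one constructed at 1 - phi (I_one_minus_phi):
#   np.max(np.abs(radial_average(rescaled_autocovariance(I_phi))
#                 - radial_average(rescaled_autocovariance(I_one_minus_phi))))
```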
## Appendix B Effects of Increasing \(\Omega(\ell)\)
We have shown that increasing \(\ell\) (i.e., the size of the \(\Omega\) region) results in finer morphologies and enhances the dispersion of individual pixels in the constructed composite microstructures. As discussed in Sec. IV.A, this is due to the increasing number of constrained \(\mathbf{k}\) vectors, which requires suppression of local volume fraction fluctuations over a broader range of length scales. Here, we further show that the system indeed behaves like "particles" on a lattice as \(\ell\) (or equivalently \(\chi\)) increases.
Previous studies have shown that for a point configuration in a two-dimensional Euclidean space, increasing \(\chi\) leads to increasing order in the distribution of the points, and a perfect arrangement of the points on the triangular lattice can be achieved for \(\chi\geq 0.5\)[2]. Here the composite microstructures are composed of pixels arranged on a square grid (lattice). Therefore, we investigate the evolution of microstructures associated with the _square_ exclusion regions with increasing size, whose symmetry is compatible with the underlying square lattice. In particular, we consider a system with \(\phi=1/9\), \(L=72\) pixels and \(N=576\), which possesses an ordered microstructure with the individual "blue" pixels arranged on a perfect square lattice with a lattice constant \(a=3\) pixels. The smaller system size allows fast convergence of the optimization algorithm.
Figure 20: Rescaled autocovariance functions \(f(r)\) along the horizontal direction for directionally hyperuniform microstructures associated with the lemniscate-shaped exclusion region with \(\ell=5\).
Figure 21: Constructed microstructures with \(\phi=1/9\) associated with the square exclusion regions and increasing \(\chi\). The linear size of the system is \(L=72\) pixels.
Figure 21 shows the constructed microstructures associated with the square \(\Omega\) and increasing \(\ell\), and thus increasing \(\chi\). It can be seen that for low \(\chi\) values, the system is in the "random-medium" regime, and the resulting microstructures contain clusters of blue phase pixels. As \(\chi\) increases, the blue pixels behave more like "particles" with an increasing degree of repulsion, and the resulting microstructures contain distributions of individual pixels with increasing local order. An almost perfect square-lattice packing of the blue pixels is obtained at \(\chi=0.45\). These results are consistent with SHU point configurations associated with increasing \(\chi\) values [2].
## Appendix C Construction of Stealthy Non-hyperuniform Systems
To demonstrate the utility of our general construction procedure, we employ it to render realizations of stealthy but non-hyperuniform composite systems. Without loss of generality, we consider a class of spectral density functions satisfying \(\tilde{\chi}_{{}_{\mathrm{V}}}(\mathbf{k})=0\) for \(K_{1}\leq|\mathbf{k}|\leq K_{2}\), where \(0<K_{1}<K_{2}\), i.e., the \(\Omega\) region corresponds to a circular ring with inner radius \(K_{1}\) and outer radius \(K_{2}\). Here we use \(L=150\) and fix \(K_{2}=10\), and we vary \(K_{1}\) by choosing \(K_{1}=2,4,6,8\). The realizations and representative \(\tilde{\chi}_{{}_{\mathrm{V}}}(\mathbf{k})\) are shown in Fig. 22. It can be seen that the realizations contain "particles" which are grouped into clusters. As \(K_{1}\) increases, the clusters get denser and their size decreases. Thus, the systems can be considered to possess two characteristic length scales, i.e., the particle size corresponding to \(K_{2}\) and the cluster size corresponding to \(K_{1}\). These structural elements (e.g., particles and particle clusters) are arranged in a way such that scattering on multiple length scales within these bounds is completely suppressed. In future work, we will explore additional designs for stealthy non-hyperuniform composite systems, with a focus on engineering anisotropic scattering behaviors.
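A minimal sketch of how such a ring-shaped exclusion region enters the construction is given below (illustrative Python; it assumes a periodic binary pixel field on an \(L\times L\) grid with wave vectors measured in integer units, and it is not the optimization code used in this work). It builds the mask for \(\Omega\) and evaluates the stealthiness energy, i.e., the sum of the spectral density over \(\Omega\), which a construction algorithm drives to zero at fixed volume fraction:
```
import numpy as np

L_px, K1, K2 = 150, 4.0, 10.0            # grid size and ring radii (illustrative units)

k = np.fft.fftfreq(L_px) * L_px          # integer wave numbers on the periodic grid
KX, KY = np.meshgrid(k, k, indexing="ij")
Kmag = np.hypot(KX, KY)
omega = (Kmag >= K1) & (Kmag <= K2)      # ring-shaped exclusion region Omega

def stealth_energy(I):
    # Sum of the spectral density over Omega; it vanishes iff I is stealthy w.r.t. Omega.
    # The spectral density is taken proportional to |FFT(I - phi)|^2 / (number of pixels),
    # i.e., the power spectrum of the fluctuating part of the phase indicator function.
    J = I - I.mean()
    chi_k = np.abs(np.fft.fft2(J)) ** 2 / I.size
    return chi_k[omega].sum()

rng = np.random.default_rng(1)
I = (rng.random((L_px, L_px)) < 0.3).astype(float)   # random initial field at phi = 0.3
print(stealth_energy(I))   # positive for a random field; an optimizer (e.g., simulated
                           # annealing over pixel swaps at fixed phi) drives this to ~0
```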
|
2305.05828 | Convergence of a Normal Map-based Prox-SGD Method under the KL
Inequality | In this paper, we present a novel stochastic normal map-based algorithm
($\mathsf{norM}\text{-}\mathsf{SGD}$) for nonconvex composite-type optimization
problems and discuss its convergence properties. Using a time window-based
strategy, we first analyze the global convergence behavior of
$\mathsf{norM}\text{-}\mathsf{SGD}$ and it is shown that every accumulation
point of the generated sequence of iterates $\{\boldsymbol{x}^k\}_k$
corresponds to a stationary point almost surely and in an expectation sense.
The obtained results hold under standard assumptions and extend the more
limited convergence guarantees of the basic proximal stochastic gradient
method. In addition, based on the well-known Kurdyka-{\L}ojasiewicz (KL)
analysis framework, we provide novel point-wise convergence results for the
iterates $\{\boldsymbol{x}^k\}_k$ and derive convergence rates that depend on
the underlying KL exponent $\boldsymbol{\theta}$ and the step size dynamics
$\{\alpha_k\}_k$. Specifically, for the popular step size scheme
$\alpha_k=\mathcal{O}(1/k^\gamma)$, $\gamma \in (\frac23,1]$, (almost sure)
rates of the form $\|\boldsymbol{x}^k-\boldsymbol{x}^*\| = \mathcal{O}(1/k^p)$,
$p \in (0,\frac12)$, can be established. The obtained rates are faster than
related and existing convergence rates for $\mathsf{SGD}$ and improve on the
non-asymptotic complexity bounds for $\mathsf{norM}\text{-}\mathsf{SGD}$. | Andre Milzarek, Junwen Qiu | 2023-05-10T01:12:11Z | http://arxiv.org/abs/2305.05828v1 | # Convergence of a Normal Map-Based Proximal Stochastic Gradient Method under the KL Inequality+
###### Abstract
In this paper, we present a novel stochastic normal map-based algorithm (norM-SGD) for nonconvex composite-type optimization problems and discuss its convergence properties. Using a time window-based strategy, we first analyze the global convergence behavior of \(\mathsf{norM}\)-SGD and it is shown that every accumulation point of the generated sequence of iterates \(\{\mathbf{x}^{k}\}_{k}\) corresponds to a stationary point almost surely and in an expectation sense. The obtained results hold under standard assumptions and extend the more limited convergence guarantees of the basic proximal stochastic gradient method. In addition, based on the well-known Kurdyka-Lojasiewicz (KL) analysis framework, we provide novel point-wise convergence results for the iterates \(\{\mathbf{x}^{k}\}_{k}\) and derive convergence rates that depend on the underlying KL exponent \(\mathbf{\theta}\) and the step size dynamics \(\{\alpha_{k}\}_{k}\). Specifically, for the popular step size scheme \(\alpha_{k}=\mathcal{O}(1/k^{\gamma})\), \(\gamma\in(\frac{2}{3},1]\), (almost sure) rates of the form \(\|\mathbf{x}^{k}-\mathbf{x}^{*}\|=\mathcal{O}(1/k^{p})\), \(p\in(0,\frac{1}{2})\), can be established. The obtained rates are faster than related and existing convergence rates for SGD and improve on the non-asymptotic complexity bounds for \(\mathsf{norM}\)-SGD.
**Keywords:** Stochastic optimization \(\cdot\) Normal map \(\cdot\) Asymptotic convergence \(\cdot\) Kurdyka-Lojasiewicz inequality
## 1 Introduction
In this work, we propose and investigate a novel normal map-based variant of the proximal stochastic gradient method for the composite-type problem
\[\min_{x\in\mathbb{R}^{d}}\ \psi(x):=f(x)+\varphi(x), \tag{1}\]
where \(\varphi:\mathbb{R}^{d}\to(-\infty,\infty]\) is a convex, lower semicontinuous, and proper mapping and \(f:\mathbb{R}^{d}\to\mathbb{R}\) is continuously differentiable on an open set containing the effective domain \(\operatorname{dom}(\varphi):=\{x\in\mathbb{R}^{d}:\varphi(x)<\infty\}\).
The composite model (1) has gained increasing attention during the recent decades and is used frequently in large-scale applications and stochastic optimization tasks, including machine learning [13, 21, 84, 53], statistical learning and sparse regression [42, 85], image and signal processing [27, 24], stochastic programming [70, 87], and many other areas. The (potentially nonsmooth) function \(\varphi\) typically acts as a regularization that allows to promote specific structural properties, such as sparsity, group sparsity, low rank, etc., while the smooth (not necessarily convex) part \(f\) often corresponds to a data-driven learning model. Prominent choices of \(f\) comprise _expected_ or _empirical risk_ terms of the form
\[f(x):=\mathbb{E}[F(x,\mathbf{\zeta})]=\int_{\Omega}F(x,\mathbf{\zeta}(\omega))\, \mathrm{d}\mathbb{P}(\omega)\quad\text{or}\quad f(x):=\frac{1}{N}\sum_{i=1}^{ N}f_{i}(x), \tag{2}\]
where \((\Omega,\mathcal{F},\mathbb{P})\) is a given probability space, \(\mathbf{\zeta}:\Omega\to\Xi\) is a random variable, \(\Xi\) is a measure space, and \(F:\mathbb{R}^{d}\times\Xi\to\mathbb{R}\) and the mappings \(f_{i}:\mathbb{R}^{d}\to\mathbb{R}\), \(i=1,\dots,N\), represent suitable loss models. Since a full evaluation of the function and gradient values \(f(x)\) and \(\nabla f(x)\) can be prohibitively expensive or is not even possible, sampling schemes or stochastic approximation techniques are employed in practice to generate more tractable function and gradient information of the risk functions (2). This fundamental mechanism -- pioneered by Robbins and Monro in their seminal work [80] -- is the basis of the stochastic gradient descent method (SGD) and many other successful stochastic algorithms.
Stochastic proximal gradient methods, [31, 96, 71, 39, 2, 65, 83], extend the basic stochastic approximation principles used in SGD to the composite problem (1). More precisely, the update scheme of the basic stochastic proximal gradient method (prox-SGD) for (1) is defined via
\[x^{k+1}=\operatorname{prox}_{\alpha_{k}\varphi}(x^{k}-\alpha_{k}g^{k}), \tag{3}\]
where \(g^{k}\approx\nabla f(x^{k})\) denotes a stochastic approximation of the gradient \(\nabla f(x^{k})\) at \(x^{k}\), \(\{\alpha_{k}\}_{k}\subset\mathbb{R}_{++}\) is a given (diminishing) step size sequence, and the function \(\operatorname{prox}_{\lambda\varphi}:\mathbb{R}^{d}\to\mathbb{R}^{d}\), \(\operatorname{prox}_{\lambda\varphi}(x):=\operatorname{argmin}_{y\in\mathbb{ R}^{d}}\varphi(y)+\frac{1}{2\lambda}\|x-y\|^{2}\), refers to the well-known proximity operator of \(\varphi\), [96, 71, 39]. Here, we are interested in a variant of the stochastic proximal gradient method that relies on the so-called _normal map_\(F^{\lambda}_{\operatorname{nor}}:\mathbb{R}^{d}\to\mathbb{R}^{d}\):
\[F^{\lambda}_{\operatorname{nor}}(z):=\nabla f(\operatorname{prox}_{\lambda \varphi}(z))+\nabla\operatorname{env}_{\lambda\varphi}(z):=\nabla f( \operatorname{prox}_{\lambda\varphi}(z))+\frac{1}{\lambda}(z-\operatorname{ prox}_{\lambda\varphi}(z)),\quad\lambda>0. \tag{4}\]
The core steps of this normal map-based approach (norM-SGD) are given by:
\[g^{k}\approx\nabla f(x^{k}),\quad z^{k+1}=z^{k}-\alpha_{k}(g^{k}+\nabla \operatorname{env}_{\lambda\varphi}(z^{k})),\quad\text{and}\quad x^{k+1}= \operatorname{prox}_{\lambda\varphi}(z^{k+1}). \tag{5}\]
The full method is summarized in Algorithm 1. In the following, we provide further motivation and we discuss the proposed normal-based approach in more detail.
```
0: Choose an initial point \(z^{0}\in\mathbb{R}^{d}\), \(\lambda>0\), step sizes \(\{\alpha_{k}\}_{k}\subset\mathbb{R}_{++}\), and set \(x^{0}=\operatorname{prox}_{\lambda\varphi}(z^{0})\);
1:for\(k=1,2,\dots\)do
2: Generate a gradient estimate \(g^{k}\approx\nabla f(x^{k})\) and perform the updates \[z^{k+1}=z^{k}-\alpha_{k}(g^{k}+\nabla\operatorname{env}_{\lambda\varphi}(z^{k }))\quad\text{and}\quad x^{k+1}=\operatorname{prox}_{\lambda\varphi}(z^{k+1});\]
3:endfor
```
**Algorithm 1** A Normal Map-Based Stochastic Proximal Gradient Method (norM-SGD)
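To make the scheme concrete, the following is a minimal, self-contained Python sketch of Algorithm 1 for an \(\ell_{1}\)-regularized least-squares instance; the problem data, mini-batch oracle, regularization weight, and step size schedule are illustrative choices and are not taken from the analysis below.
```
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 200, 50, 0.1                     # sample size, dimension, l1 weight (illustrative)
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)

# psi(x) = f(x) + phi(x) with f(x) = (1/2n)||Ax - b||^2 and phi(x) = mu * ||x||_1.
def prox_l1(z, t):                          # prox_{t*phi}: soft-thresholding at level t*mu
    return np.sign(z) * np.maximum(np.abs(z) - t * mu, 0.0)

def stoch_grad(x, batch=10):                # unbiased mini-batch estimator of grad f(x)
    idx = rng.integers(0, n, size=batch)
    Ai = A[idx]
    return Ai.T @ (Ai @ x - b[idx]) / batch

lam = 0.2                                   # proximal parameter lambda of the normal map
z = np.zeros(d)
x = prox_l1(z, lam)                         # x^0 = prox_{lam*phi}(z^0)
for k in range(20000):
    alpha = 1.0 / (k + 10)                  # diminishing steps: sum = inf, sum of squares < inf
    g = stoch_grad(x)
    grad_env = (z - x) / lam                # nabla env_{lam*phi}(z) = (z - prox(z)) / lam
    z = z - alpha * (g + grad_env)          # normal-map step in the z-variable
    x = prox_l1(z, lam)                     # x^{k+1} = prox_{lam*phi}(z^{k+1})

# stationarity measure: natural residual ||x - prox_{lam*phi}(x - lam * grad f(x))||
full_grad = A.T @ (A @ x - b) / n
print(np.linalg.norm(x - prox_l1(x - lam * full_grad, lam)))   # small; -> 0 as k -> infinity
```
Note that, per iteration, the sketch performs exactly one stochastic gradient evaluation and one proximal evaluation, i.e., the same computational footprint as prox-SGD.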
### Background and Motivation
The normal map \(F^{\lambda}_{\operatorname{nor}}\) was initially introduced by Robinson in [81] and has been primarily utilized in the context of classical variational inequalities and generalized equations, see, e.g., [34]. The normal map is closely related to the associated first-order optimality conditions of problem (1),
\[0\in\nabla f(x)+\partial\varphi(x), \tag{6}\]
where \(\partial\varphi\) is the standard subdifferential of the convex function \(\varphi\). Specifically, solutions \(\bar{z}\in\mathbb{R}^{d}\) of the nonsmooth equation \(F^{\lambda}_{\operatorname{nor}}(z)=0\) correspond to stationary points of (1) and setting \(\bar{x}:=\operatorname{prox}_{\lambda\varphi}(\bar{z})\), it holds that \(0\in\nabla f(\bar{x})+\partial\varphi(\bar{x})\). (See Subsection 2.3 and [74, Section 2]). The optimality condition (6) can also be equivalently expressed via the _natural residual_ mapping:
\[x\in\operatorname{crit}(\psi)\quad\Longleftrightarrow\quad F^{\lambda}_{ \operatorname{nat}}(x):=x-\operatorname{prox}_{\lambda\varphi}(x-\lambda \nabla f(x))=0,\quad\lambda>0, \tag{7}\]
where \(\operatorname{crit}(\psi):=\{x\in\mathbb{R}^{d}:0\in\nabla f(x)+\partial \varphi(x)\}\) denotes the set of all stationary points of problem (1). The natural residual \(F^{\lambda}_{\operatorname{nat}}\) serves as a fundamental stationarity measure and is a basic tool in the design of numerical algorithms for composite-type problems of the form (1). Compared to \(F^{\lambda}_{\operatorname{nat}}\), the normal map \(F^{\lambda}_{\operatorname{nor}}\) exchanges the order of evaluating the proximity operator \(\operatorname{prox}_{\lambda\varphi}\) and the gradient \(\nabla f\). This simple but distinctive feature can have certain advantages:
* Since the range of the proximity operator \(\operatorname{prox}_{\lambda\varphi}\) is a subset of the effective domain \(\operatorname{dom}(\varphi)\), the normal map remains well-defined if the mappings \(f\) and \(\nabla f\) are only defined on \(\operatorname{dom}(\varphi)\).
* If \(g\approx\nabla f(x)\) is an unbiased estimator of \(\nabla f(x)\) satisfying \(\mathbb{E}[g]=\nabla f(x)\), then the update (5) defines an unbiased normal map-based step for the variable \(z\): \[\mathbb{E}[z-\alpha(g+\nabla\operatorname{env}_{\lambda\varphi}(z))]=z-\alpha F ^{\lambda}_{\operatorname{nor}}(z),\quad\alpha>0.\] A similar property does not hold for prox-SGD, i.e., we generally have \(\mathbb{E}[x-\operatorname{prox}_{\lambda\varphi}(x-\lambda g)]\neq F^{\lambda} _{\operatorname{nat}}(x)\).
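The second point can be illustrated numerically. The following one-dimensional sketch (with \(\varphi=|\cdot|\) and \(\lambda=1\), so that \(\operatorname{prox}_{\lambda\varphi}\) is soft-thresholding; all numerical values are illustrative) confirms that averaging the normal map-based step recovers \(z-\alpha F^{\lambda}_{\operatorname{nor}}(z)\) exactly, while the averaged prox-SGD step is biased away from the natural residual:
```
import numpy as np

# One-dimensional illustration with phi = |.| and lam = 1 (prox = soft-thresholding).
soft = lambda u, t=1.0: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

rng = np.random.default_rng(0)
lam, alpha, z = 1.0, 0.1, 0.7
x = soft(z)                                # x = prox_{lam*phi}(z) = 0 here
grad = 0.3                                 # stands in for the true gradient nabla f(x)
g = grad + rng.standard_normal(500_000)    # unbiased estimates: E[g] = grad

# The normal-map update is affine in g, so its mean equals z - alpha * F_nor(z):
F_nor = grad + (z - x) / lam
print(np.mean(z - alpha * (g + (z - x) / lam)), z - alpha * F_nor)   # ~ identical

# For prox-SGD, expectation and the nonlinear prox do not commute, so the mean
# of x - prox(x - lam * g) differs from F_nat(x) = x - prox(x - lam * grad):
print(np.mean(x - soft(x - lam * g)), x - soft(x - lam * grad))      # ~ 0.098 vs 0.0
```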
In this paper, we study asymptotic convergence of Algorithm 1 in a general nonconvex setting with a particular focus on the convergence properties of the associated stochastic process of iterates \(\{\mathbf{x}^{k}\}_{k}\). Our analysis is motivated by the following core questions:
* (Q.1) _Does the stationarity measure_ \(\|F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{k})\|\) _converge to zero as_ \(k\to\infty\) _(in expectation or in an almost sure sense), i.e., do accumulation points of_ \(\{\mathbf{x}^{k}\}_{k}\) _correspond to stationary points of (1)?_
* (Q.2) _Can almost sure convergence of the stochastic process_ \(\{\mathbf{x}^{k}\}_{k}\) _be ensured even when_ \(f\) _is nonconvex?_
* (Q.3) _Which (almost sure) rate of convergence for_ \(\{\mathbf{x}^{k}\}_{k}\) _can be expected and guaranteed?_
While these questions are well-understood in the convex and strongly convex case, general results for stochastic and nonconvex composite-type problems seem to be far more limited. Surprisingly and to the best of our knowledge, it is not known whether the iterates generated by prox-SGD satisfy the properties mentioned in (Q.1)-(Q.3). In fact, it is not even fully known if the basic conditions in (Q.1) hold for prox-SGD under the common set of assumptions
* (P.1) Regularity: \(f\) _is bounded from below and Lipschitz smooth,_
* (P.2) Stochastic oracles: \(\mathbf{g}^{k}\) _is an unbiased estimator of_ \(\nabla f(\mathbf{x}^{k})\) _for all_ \(k\) _with_ \(\sup_{k}\mathbb{E}[\|\mathbf{g}^{k}-\nabla f(\mathbf{x}^{k})\|^{2}]<\infty\),
* (P.3) Step sizes: \(\sum_{k=0}^{\infty}\alpha_{k}=\infty\) _and_ \(\sum_{k=0}^{\infty}\alpha_{k}^{2}<\infty\)_,_
see, e.g., [44, 57] -- we also refer to [65, 28] and Subsection 1.2 for further comparison. We will see that the normal map-based perspective allows to overcome these limitations and the properties in (Q.1) can be shown to hold for Algorithm 1 in the aforementioned standard setting (P.1)-(P.3). In addition, we address the more challenging questions (Q.2)-(Q.3) and provide affirmative and general insights that are based on a novel combination of time window techniques, approximate descent conditions, and the Kurdyka-Lojasiewicz (KL) framework.
### Related Work
_Related Advances in_ SGD. The analysis of SGD and stochastic approximation methods has a long and rich history. Initiated by the highly influential work by Robbins and Monro, [80], a plethora of stochastic approximation techniques and extensions of SGD-type algorithms have been investigated for different problem formulations and under various basic assumptions in the recent decades. As both prox-SGD and norM-SGD reduce to the pure stochastic gradient descent method when \(\varphi\equiv 0\), we first give a non-exhaustive overview of related directions and work on SGD.
The so-called ODE method plays a central role in early studies of SGD and stochastic approximations schemes, see, e.g., Ljung [60], Kushner and Clark [51], Benveniste et al. [10], Benaim [8], Kushner and Yin [50], and Borkar [20]. It allows to show that the path of iterates generated by SGD is an asymptotic pseudo-trajectory of the associated gradient flow and thus, SGD trajectories eventually approach solutions of the ordinary differential equation
\[\dot{x}(t)=-\nabla f(x(t)),\quad x(0)=x^{0}, \tag{8}\]
almost surely. Typically, applicability of the ODE technique is based on certain conditions on the noise, almost sure boundedness of the iterates, and diminishing step size strategies. More specifically, the assumptions (P.2), (P.3), and
* (P.4) Boundedness: \(\sup_{k}\|\mathbf{x}^{k}\|<\infty\) _almost surely_,
represent classical core requirements in the analysis of SGD, see, e.g., [51, Section 2], [8, Section 4], or [50, Section 5]. Many extensions of the base components (P.2)-(P.4) are possible including, e.g., biased gradient estimates, correlated noise, or weaker conditions on the step sizes, [51, 8, 50]. The fundamental connection between SGD trajectories and the gradient flow (8) allows to apply results from dynamical system theory to specify the limiting behavior of \(\{\mathbf{x}^{k}\}_{k}\). In [7, 8], Benaim showed that the limit sets of the trajectories of \(\{\mathbf{x}^{k}\}_{k}\) are nonempty compact connected sets that are internally chain-recurrent for the flow induced by \(-\nabla f\). More strictly, if \(x^{*}\) is an asymptotically stable solution to (8) and if \(\{\mathbf{x}^{k}\}_{k}\) enters a compact subset of the domain of attraction of \(x^{*}\) infinitely often and almost surely, then it holds that \(\mathbf{x}^{k}\to x^{*}\) almost surely, [60, 51, 10, 7, 50]. Many variants of these convergence results are available. For instance, Ljung [61] showed that \(\{\mathbf{x}^{k}\}_{k}\) "converges" to the set \(\mathrm{crit}(f):=\{x\in\mathbb{R}^{d}:\nabla f(x)=0\}\) almost surely, if \(\mathrm{crit}(f)\) consists of finitely many isolated components or if a Morse-Sard-type condition is satisfied, see also [8, Section 6]. Recently, Mertikopoulos et al. [66] established a similar result without this restriction but under a different set of base conditions. Ljung et al. [62] and Duflo [33] show almost sure convergence of \(\{\mathbf{x}^{k}\}_{k}\) to a unique limit point if there exists \(x^{*}\in\mathrm{crit}(f)\) such that \(\langle\nabla f(x)-\nabla f(x^{*}),x-x^{*}\rangle>0\) for all \(x\in\mathbb{R}^{d}\backslash\{x^{*}\}\).
Following this classical line of research, the "rate of convergence" of a stochastic approximation algorithm typically refers to the asymptotic behavior of the normalized errors \(\mathbf{w}^{k}:=(\mathbf{x}^{k}-x^{*})/\sqrt{\alpha_{k}}\) where \(x^{*}\) is the limit point of the process \(\{\mathbf{x}^{k}\}_{k}\). Specifically, under local strong convexity and additional probabilistic assumptions, the sequence \(\{\mathbf{w}^{k}\}_{k}\)
can be shown to converge weakly to a stationary Gauss-Markov process which yields convergence in distribution to some normally distributed random variable. Detailed discussions of these classical results and more references can be found in [52, 10, 62, 33, 50]. More related to our line of analysis and to question (Q.3), Pelletier [78, Theorem 2] established the following rate for SGD
\[\|\mathbf{x}^{k}-x^{*}\|=\mathcal{O}\Big(\alpha_{k}^{\frac{1}{2}}\log\Big(\sum\nolimits_{i=1}^{k}\alpha_{i}\Big)^{\frac{1}{2}+\varepsilon}\Big)\quad\text{almost surely on}\quad\{\omega\in\Omega:\mathbf{x}^{k}(\omega)\to x^{*}\},\quad\text{for arbitrary $\varepsilon>0$},\]
if the Hessian \(\nabla^{2}f(x^{*})\) is assumed to be positive definite at the limit point \(x^{*}\). In addition, a slightly stronger law of the iterated logarithm can be derived under a higher order moment condition on \(\{\mathbf{g}^{k}\}_{k}\), [78, Theorem 1].
Complexity bounds offer a different and non-asymptotic perspective on the convergence behavior of SGD and have been central in machine learning and recent related research [69, 38, 86, 21, 54, 46]. In contrast to the aforementioned classical studies on SGD, complexity results often utilize the (global) Lipschitz condition (P.1) instead of the almost sure boundedness assumption (P.4). In the nonconvex setting and under the basic conditions (P.1)-(P.3), typical expected iteration complexities for SGD are then given by
\[\min_{i=0,\dots,k-1}\mathbb{E}[\|\nabla f(\mathbf{x}^{i})\|^{2}]\leq\mathcal{O}(k ^{-\frac{1}{2}})\quad\text{or}\quad\mathbb{E}[\|\nabla f(\mathbf{x}^{k_{\circ}})\| ^{2}]\leq\mathcal{O}(k^{-\frac{1}{2}}), \tag{9}\]
where \(k\) denotes the total number of iterations and \(k_{\circ}\) is an index sampled uniformly at random from \(\{0,\dots,k-1\}\), see, e.g., [38]. It is also possible to further extend the elementary complexity bounds in (9) to full asymptotic convergence results of the form "\(\mathbb{E}[\|\nabla f(\mathbf{x}^{k})\|]\to 0\)" and "\(\|\nabla f(\mathbf{x}^{k})\|\to 0\) almost surely" which allows to address question (Q.1) through an amenable optimization lens, [12, 21, 54, 73, 76, 59, 57].
_Advances in_ prox-SGD. Surprisingly and different from SGD, the convergence properties of the proximal stochastic gradient method are generally less understood -- especially when \(f\) is nonconvex. In [28], Davis and Drusvyatskiy present one of the first complexity results for prox-SGD in the nonconvex setting using the Moreau envelope \(\mathrm{env}_{\theta\psi}\) as a smooth merit function for the problem (1). In particular, under the assumptions (P.1)-(P.2) and for general nonconvex \(f\), Davis and Drusvyatskiy establish the complexity bound
\[\min_{i=0,\dots,k-1}\mathbb{E}[\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{i})\|^{2} ]\leq\mathcal{O}(\sum_{i=0}^{k-1}\alpha_{i}^{2}\,/\sum_{i=0}^{k-1}\alpha_{i}). \tag{10}\]
Earlier studies of prox-SGD for nonconvex \(f\) appeared in [39] where convergence of prox-SGD is shown if the variance \(\sigma_{k}^{2}=\mathbb{E}[\|\mathbf{g}^{k}-\nabla f(\mathbf{x}^{k})\|^{2}]\to 0\) vanishes as \(k\to\infty\). This can be ensured by choosing a sequence of progressively increasing mini-batches or via suitable variance reduction techniques, [96, 71, 44]. As mentioned, it is not fully known whether prox-SGD satisfies the asymptotic convergence guarantees formulated in (Q.1) under the base conditions (P.1)-(P.3). Following the classical ODE-based analysis of SGD, Majewski et al. utilize a differential inclusion approach to study convergence of prox-SGD, [65]. The authors introduce additional compact constraints to ensure boundedness of the iterates \(\{\mathbf{x}^{k}\}_{k}\) and applicability of the differential inclusion techniques. The analyses in [32, 29] establish asymptotic convergence for a broader class of subgradient-type and model-based proximal algorithms which includes prox-SGD as a special case. Both works apply differential inclusion-based mechanisms and require a priori (almost) sure boundedness of \(\{\mathbf{x}^{k}\}_{k}\) and a density / Sard-type condition in order to show that accumulation points of \(\{\mathbf{x}^{k}\}_{k}\) correspond to stationary points. More recently, in [57], Li and Milzarek derive general asymptotic convergence results for prox-SGD of the form
\[\mathbb{E}[\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{k})\|]\to 0\quad\text{and} \quad\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{k})\|\to 0\ \ \text{almost surely},\]
under the additional assumption that \(\varphi\) is globally Lipschitz continuous. The analysis in [57] allows to remove the boundedness conditions required in [32, 65, 29]. However, it is not known whether the Lipschitz assumption on \(\varphi\) can be further relaxed or dropped. The easier convex and strongly convex cases have been investigated, e.g., in [39, 2, 83, 77]. In particular, if \(f\) is convex and \(\psi\) is strongly convex, then convergence in expectation -- \(\mathbb{E}[\|\mathbf{x}^{k}-\mathbf{x}^{*}\|^{2}]=\mathcal{O}(k^{-1})\) -- can be guaranteed for proper choices of the step sizes \(\{\alpha_{k}\}_{k}\), [83, 77].
_The Kurdyka-Lojasiewicz framework in stochastic optimization._ The KL property [63, 64, 49] provides a geometric and qualitative relationship between a function and its (sub)gradient information and has been utilized extensively in the past decades to study the limiting behavior of optimization algorithms and of (sub)gradient flows in dynamical systems, see, e.g., [88, 1, 14, 3, 4, 17, 5, 18]. Part of the success of KL-based analysis techniques can be traced back to the fact that the KL property is naturally satisfied in many nonconvex, real-world applications, cf. [18]. Moreover, in a series of works, [3, 4, 5, 18], Attouch and Bolte et al. have developed a comprehensive KL analysis framework that serves as a general blueprint simplifying the study and verification of the asymptotic behavior and strong limit-point convergence of descent-type methods for nonsmooth nonconvex optimization. Hence, in the deterministic case, KL-based techniques have allowed to address the questions (Q.2)-(Q.3) (surely) avoiding classical assumptions such as local strong convexity or isolatedness of accumulation points that can be restrictive in the nonconvex setting. Applicability of the basic KL framework to a specific algorithm typically relies on an inherent sufficient descent condition. Such a descent guarantee
then allows to prove first global convergence results which -- under the KL inequality -- can be strengthened to full convergence and a finite length property of the iterates. In addition, convergence rates can be derived that depend on the underlying KL exponent, cf. [3, 4, 5, 18]. Extensions to other Lyapunov- or merit function-based analyses and slightly broader frameworks are possible and have been investigated in, e.g., [72, 36, 55, 56].
In contrast to the deterministic setting, applications of the KL-based analysis to stochastic optimization methods are generally rare. In fact, applicability of the KL techniques is significantly hindered by the missing sufficient descent of stochastic algorithms and by the more intricate dynamics of the stochastic errors and utilized step sizes. To the best of our knowledge, Tadic's work [91, 92] provides one of the first comprehensive KL-based analyses for SGD. In particular, under the KL inequality and standard assumptions, the SGD-iterates \(\{\mathbf{x}^{k}\}_{k}\) are shown to converge with the rates:
\[|f(\mathbf{x}^{k})-f(\mathbf{x}^{*})|=\mathcal{O}(k^{-p+\varepsilon}),\quad\|\mathbf{x}^{ k}-\mathbf{x}^{*}\|=\mathcal{O}(k^{-q+\varepsilon}),\quad\text{for arbitrary }\varepsilon>0,\quad k\to\infty,\]
almost surely on \(\mathcal{X}:=\{\omega:\sup_{k}\|\mathbf{x}^{k}(\omega)\|<\infty\}\). Here, the step sizes \(\{\alpha_{k}\}_{k}\) are required to satisfy \(\alpha_{k}=\alpha/(k+\beta)^{\gamma}\), \(\alpha,\beta>0\), \(\gamma\in(\frac{3}{4},1)\) and the rate coefficients \(p\in(0,1]\), \(q\in(0,\frac{1}{2}]\) depend on \(\gamma\) and the KL exponent \(\mathbf{\theta}\) of \(f\) at \(\mathbf{x}^{*}\). In the special case \(\theta=\mathbf{\theta}(\omega)=\frac{1}{2}\) and \(\gamma\to 1\), it further holds that \(p\to 1\), \(q\to\frac{1}{2}\). We refer to [91, 92] and Remark 3.8 for a more general exposition and discussion. Tadic's work has fundamental connections to many recent almost sure (KL-based) convergence analyses [66, 95, 30, 59, 47]. However, to our surprise, the results in [92] have mostly remained unrecognized or unknown in the optimization and machine learning communities. In [9], Benaim applies KL-based techniques to the underlying dynamical system (8). Shadowing properties are then utilized to transfer the obtained convergence results back to the SGD trajectories. A more recent and related convergence analysis of SGD under the KL inequality is also presented in [30]. Li et al., [58], derive convergence and convergence rates for random reshuffling (RR) under the KL inequality. RR is a without-replacement variant of SGD for empirical risk minimization. Similar to SGD, it does not directly satisfy a sufficient descent condition. However, the stochastic gradient errors in RR can be controlled in a deterministic fashion using an epoch-based and more straightforward analysis. Similar strategies are not applicable to SGD-type noise necessitating significantly more careful and different techniques [92, 9, 30]. We note that convergence of SGD has also been established under global KL inequalities and global Polyak-Lojasiewicz (PL)-type conditions, see, e.g., [45, 37, 95, 35]. Such global KL assumptions are common in stochastic nonconvex optimization but pose strict geometric requirements on the objective function that typically are not easy to verify. Finally, in [25], Chouzenoux et al. have proposed a novel KL framework for stochastic methods. The authors apply this stochastic framework to establish asymptotic convergence of prox-SGD. However, applicability of the techniques in [25] requires the step sizes \(\{\alpha_{k}\}_{k}\) to be bounded away from zero (\(\inf_{k}\alpha_{k}>0\)), the variance \(\sigma_{k}=(\mathbb{E}[\|\mathbf{g}^{k}-\nabla f(\mathbf{x}^{k})\|^{2}])^{1/2}\) needs to be summable, \(\varphi\) is assumed to be Lipschitz smooth, and the set of stationary points \(\operatorname{crit}(\psi)\) needs to be contained in a compact set. These stringent assumptions are perpendicular to what we want to achieve here and the overall setting in [25] is closer to the deterministic KL framework studied by Frankel et al. [36].
### Contributions
In the following, we summarize the core contributions of this work.
* We propose a novel normal map-based proximal stochastic gradient algorithm, \(\mathsf{norM}\mathsf{-SGD}\), for composite optimization problems (1). Both \(\mathsf{norM}\mathsf{-SGD}\) and \(\mathsf{prox}\mathsf{-SGD}\) have similar computational complexities: each iteration of \(\mathsf{norM}\mathsf{-SGD}\) only involves one stochastic gradient and one proximal evaluation.
* We show that the time window-based technique utilized by Tadic in [92] can be applied to a suitable merit function for \(\mathsf{norM}\mathsf{-SGD}\) and problem (1) -- yielding approximate descent over time windows. This allows to establish asymptotic global convergence guarantees of the form \[\|F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{k})\|\to 0\quad\text{and}\quad\psi(\mathbf{x}^{k})\to\mathbf{\psi}^{*}\quad\text{almost surely on some event }\mathcal{E},\] (11) under standard stochastic assumptions. Here, the event \(\mathcal{E}\) is connected to trajectories of \(\{\mathbf{x}^{k}\}_{k}\) along which \(\nabla f\) is Lipschitz continuous (with potentially trajectory-dependent Lipschitz constants). If \(f\) is Lipschitz smooth, then \(\mathcal{E}\) and the results in (11) are shown to occur almost surely. In addition, convergence in expectation \[\mathbb{E}[\|F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{k})\|^{2}]\to 0\quad\text{and}\quad\mathbb{E}[\psi(\mathbf{x}^{k})]\to\rho^{*}\] (12) and complexity bounds -- matching the estimates in (10) -- can be established under a weaker expected smoothness-type condition on the variance terms \(\mathbb{E}[\|\mathbf{g}^{k}-\nabla f(\mathbf{x}^{k})\|^{2}]\), cf. [46, 40]. The convergence results in (11) ensure that every accumulation point of \(\{\mathbf{x}^{k}\}_{k}\) is a stationary point of (1) for almost every sample \(\omega\in\mathcal{E}\). As mentioned, it is not known whether accumulation points generated by prox-SGD are always stationary points unless additional conditions such as boundedness of \(\{\mathbf{x}^{k}\}_{k}\) or Lipschitz continuity of \(\varphi\) are introduced, see [65, 32, 29, 57]. To the best of our knowledge, \(\mathsf{norM}\mathsf{-SGD}\) seems to be one of the first stochastic proximal algorithms allowing to simultaneously achieve complexity bounds and convergence in the standard setting (P.1)-(P.3).
* Under mild conditions on the step sizes \(\{\alpha_{k}\}_{k}\) and based on the KL property, we prove that the iterates \(\{\mathbf{x}^{k}\}_{k}\) converge to a \(\operatorname{crit}(\psi)\)-valued random vector \(\mathbf{x}^{*}\) almost surely on \(\mathcal{X}=\{\omega:\sup_{k}\|\mathbf{x}^{k}(\omega)\|<\infty\}\). We further establish convergence rates for \(\{|\psi(\mathbf{x}^{k})-\psi(\mathbf{x}^{*})|\}_{k}\), \(\{\|F_{\text{nat}}^{\lambda}(\mathbf{x}^{k})\|^{2}\}_{k}\), and \(\{\mathbf{x}^{k}\}_{k}\) that depend on the KL exponent \(\mathbf{\theta}\) of \(\psi\) at \(\mathbf{x}^{*}\) and the step size dynamics (see Theorem 3.5 and Corollary 3.6). In contrast to [92], our analysis follows the classical KL framework [3, 4, 5, 18] more closely, which has several advantages:
* Compared to [92], the global convergence results (11)-(12) can be verified without requiring additional KL assumptions.
* Our analysis allows to exploit the bounded variance condition (P.2) in a more direct way. This ultimately leads to _faster_ convergence rates and _stronger_ convergence guarantees for \(\{\mathbf{x}^{k}\}_{k}\). In particular, in the popular case \(\alpha_{k}=\mathcal{O}(1/k^{\gamma})\), convergence of \(\{\mathbf{x}^{k}\}_{k}\) can be ensured as long as \(\gamma\in(\frac{2}{3},1]\). The obtained rates further allow to beat the non-asymptotic complexity bounds (10) and can match the strongly convex setting if \(\mathbf{\theta}\in[0,\frac{1}{2}]\) and \(\gamma=1\). These results and improved rates also seem to be new for SGD, [92, 30]. To the best of our knowledge, the above results seem to be the first ones of this form for a stochastic proximal algorithm applied to the nonsmooth nonconvex composite-type problem (1).
* Finally, we present preliminary numerical experiments on nonconvex binary classification and sparse deep learning. Our numerical results indicate that \(\mathsf{norM}\)-SGD and \(\mathsf{prox}\)-SGD have a similar practical behavior.
### Organization
The rest of this paper is organized as follows. In Section 2, we first introduce basic assumptions, stochastic tools, and the time window techniques. In Subsection 2.5, we establish global convergence and iteration complexity bounds for \(\mathsf{norM}\)-SGD. In Section 3, we prove strong convergence of \(\mathsf{norM}\)-SGD and derive the convergence rates under the Kurdyka-Lojasiewicz inequality. We conclude in Section 4 with several preliminary numerical experiments on nonconvex binary classification and sparse deep learning.
### Notation
By \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|:=\|\cdot\|_{2}\) we denote the standard Euclidean inner product and norm. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space. We will use bold letters to describe random variables \(\mathbf{x}:\Omega\to\mathbb{R}^{d}\), while lowercase letters are typically reserved for realizations of a random variable, \(x=\mathbf{x}(\omega)\), or deterministic parameters. An event \(\mathcal{E}\in\mathcal{F}\) is said to occur almost surely on the event \(\mathcal{G}\in\mathcal{F}\) if there exists \(\mathcal{W}\in\mathcal{F}\) with \(\mathbb{P}(\mathcal{W})=1\) such that \(\mathcal{E}\cap\mathcal{G}\cap\mathcal{W}=\mathcal{G}\cap\mathcal{W}\), i.e., \(\mathbb{P}(\mathcal{E}\cap\mathcal{G})=\mathbb{P}(\mathcal{G})\). We use the abbreviations "a.e." and "a.s." for "almost everywhere" and "almost surely", respectively. The space \(L^{p}(\Omega):=L^{p}(\Omega,\mathbb{P})\), \(p\in[1,\infty]\), denotes the standard \(L^{p}\) space on \(\Omega\). For a random variable \(\mathbf{x}\in L^{1}(\Omega)\) and a sub-\(\sigma\)-algebra \(\mathcal{H}\subseteq\mathcal{F}\), the conditional expectation of \(\mathbf{x}\) given \(\mathcal{H}\) is denoted by \(\mathbb{E}[\mathbf{x}\mid\mathcal{H}]\). The Moreau envelope \(\operatorname{env}_{\lambda\varphi}\) -- associated with the function \(\varphi:\mathbb{R}^{d}\to(-\infty,\infty]\) -- is given by \(\operatorname{env}_{\lambda\varphi}(x):=\min_{y\in\mathbb{R}^{d}}\varphi(y)+\frac{1}{2\lambda}\left\|x-y\right\|^{2}\). Both \(\operatorname{prox}_{\lambda\varphi}\) and \(\operatorname{env}_{\lambda\varphi}\) enjoy numerous favorable properties. In particular, the mapping \(\operatorname{prox}_{\lambda\varphi}\) is a firmly nonexpansive (hence globally Lipschitz continuous) operator and the Moreau envelope \(\operatorname{env}_{\lambda\varphi}\) is real-valued, convex, and continuously differentiable with gradient \(\nabla\operatorname{env}_{\lambda\varphi}(x)=(x-\operatorname{prox}_{\lambda\varphi}(x))/\lambda\). Additional information can be found in [68, 82, 6].
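The stated gradient formula for the Moreau envelope can be verified numerically; the following sketch (illustrative, with \(\varphi=\|\cdot\|_{1}\), so that \(\operatorname{prox}_{\lambda\varphi}\) is componentwise soft-thresholding at level \(\lambda\)) compares \((x-\operatorname{prox}_{\lambda\varphi}(x))/\lambda\) with a central finite-difference approximation of \(\nabla\operatorname{env}_{\lambda\varphi}(x)\):
```
import numpy as np

# Check nabla env_{lam*phi}(x) = (x - prox_{lam*phi}(x)) / lam for phi = ||.||_1.
lam = 0.5
prox = lambda x: np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)        # soft-thresholding
env = lambda x: np.abs(prox(x)).sum() + np.sum((x - prox(x)) ** 2) / (2 * lam)

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
analytic = (x - prox(x)) / lam
h, numeric = 1e-6, np.zeros_like(x)
for i in range(x.size):
    e = np.zeros_like(x); e[i] = h
    numeric[i] = (env(x + e) - env(x - e)) / (2 * h)                  # central differences
print(np.max(np.abs(numeric - analytic)))                             # ~ 1e-10: formulas agree
```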
## 2 Global Convergence Analysis
We first study global convergence properties of the normal map-based proximal stochastic gradient descent method.
### Basic Assumptions
Throughout this work, we assume that there is a sufficiently rich filtered probability space \((\Omega,\mathcal{F},\{\mathcal{F}_{k}\}_{k},\mathbb{P})\) that allows to model and describe the stochastic components of Algorithm 1 in a unified way. In particular, each approximation \(g^{k}\) is understood as a realization of an \(\mathcal{F}_{k+1}\)-measurable random vector \(\mathbf{g}^{k}:\Omega\to\mathbb{R}^{d}\). For simplicity, we will also assume that the initial point \(x^{0}\in\mathbb{R}^{d}\) is fixed and not random. This readily implies that the stochastic process \(\{\mathbf{x}^{k}\}_{k}\) -- generated by Algorithm 1 -- is adapted to the filtration \(\{\mathcal{F}_{k}\}_{k}\). Let us further define the stochastic error terms \(\mathbf{e}^{k}:\Omega\to\mathbb{R}^{d}\), \(\mathbf{e}^{k}:=\mathbf{g}^{k}-\nabla f(\mathbf{x}^{k})\), \(k\in\mathbb{N}\), and the trajectory-based Lipschitz modulus \(\mathbf{L}:\Omega\to[-\infty,\infty]\),
\[\mathbf{L}(\omega):=\sup_{\bar{x}\in\text{cl}(\operatorname{conv}(\{\mathbf{x}^{k}( \omega)\}_{k}))}\operatorname{lip}\nabla f(\bar{x})\quad\text{where}\quad \operatorname{lip}\nabla f(\bar{x}):=\limsup_{x,x^{\prime}\to\bar{x},\,x\neq x ^{\prime}}\frac{\|\nabla f(x)-\nabla f(x^{\prime})\|}{\|x-x^{\prime}\|}. \tag{13}\]
It can be shown that the mapping \(\mathbf{L}\) is measurable. (We provide a more detailed verification of the measurability of \(\mathbf{L}\) in Appendix A.1.) In the following, we will primarily consider samples \(\omega\in\Omega\) belonging to the (measurable) event \(\mathcal{L}:=\{\omega\in\Omega:\mathbf{L}(\omega)<\infty\}\). Clearly, if \(\nabla f\) is Lipschitz continuous on \(\mathbb{R}^{d}\), then the event \(\mathcal{L}\) occurs surely.
We now introduce our core assumptions on \(\psi\), the stochastic terms \(\{\mathbf{g}^{k}\}_{k}\), \(\{\mathbf{e}^{k}\}_{k}\), and the step sizes \(\{\alpha_{k}\}_{k}\).
**Assumption 2.1**.: _We consider the following conditions:_
* (A.1) _The function_ \(\psi\) _is bounded from below on_ \(\mathrm{dom}(\varphi)\)_, i.e., there is_ \(\bar{\psi}\in\mathbb{R}\) _such that_ \(\psi(x)\geq\bar{\psi}\) _for all_ \(x\in\mathrm{dom}(\varphi)\)_._
* (A.2) _Each_ \(\mathbf{g}^{k}\) _defines an unbiased estimator of_ \(\nabla f(\mathbf{x}^{k})\)_, i.e., we have_ \(\mathbb{E}[\mathbf{g}^{k}\mid\mathcal{F}_{k}]=\nabla f(\mathbf{x}^{k})\) _a.s., and there exists a sequence_ \(\{\sigma_{k}\}_{k}\subseteq\mathbb{R}_{+}\) _such that_ \[\mathbb{E}[\|\mathbf{e}^{k}\|^{2}]\leq\sigma_{k}^{2}\quad\forall\;k\in\mathbb{N}.\]
* (A.3) _The step sizes_ \(\{\alpha_{k}\}_{k}\) _satisfy_ \(\sum_{k=0}^{\infty}\alpha_{k}=\infty\) _and_ \(\alpha_{k}\to 0\) _as_ \(k\to\infty\)_._
* (A.4) _There exists a positive sequence_ \(\{\beta_{k}\}_{k}\subseteq\mathbb{R}_{++}\) _such that_ \(\{\beta_{k}\}_{k}\) _is non-decreasing for all_ \(k\) _sufficiently large and we have_ \(\sum_{k=0}^{\infty}\alpha_{k}^{2}\beta_{k}^{2}\sigma_{k}^{2}<\infty\)_._
We will also work with the following conditions that are specialized to the case when \(\nabla f\) is Lipschitz continuous.
**Assumption 2.2**.: _We assume:_
* (B.1) _The gradient mapping_ \(\nabla f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) _is Lipschitz continuous on_ \(\mathbb{R}^{d}\) _with modulus_ \(\mathsf{L}>0\)_._
* (B.2) _Each_ \(\mathbf{g}^{k}\) _defines an unbiased estimator of_ \(\nabla f(\mathbf{x}^{k})\)_, i.e., we have_ \(\mathbb{E}[\mathbf{g}^{k}\mid\mathcal{F}_{k}]=\nabla f(\mathbf{x}^{k})\) _a.s., and there exist a constant_ \(\mathsf{A}\geq 0\) _and a sequence_ \(\{\sigma_{k}\}_{k}\subseteq\mathbb{R}_{+}\) _such that_ \[\mathbb{E}[\|\mathbf{e}^{k}\|^{2}\mid\mathcal{F}_{k}]\leq\sigma_{k}^{2}+\mathsf{A}[\psi(\mathbf{x}^{k})-\bar{\psi}]\quad\text{a.s. and for all $k\in\mathbb{N}$}.\] (14)
The conditions formulated in Assumption 2.1 and Assumption 2.2 are fairly standard in the analysis of stochastic optimization methods. Assumptions (A.2) and (B.2) are martingale error-type conditions for the stochastic approximations \(\{\mathbf{g}^{k}\}_{k}\). The second moment bound (14) can be seen as a nonsmooth analogue of the so-called _expected smoothness_ or _expected residual_ assumption
\[\exists\;\mathsf{B},\mathsf{C}\geq 0,\;\{\sigma_{k}\}_{k}\subseteq\mathbb{R}_ {+}:\quad\mathbb{E}[\|\mathbf{e}^{k}\|^{2}\mid\mathcal{F}_{k}]\leq\sigma_{k}^{2}+ \mathsf{B}\|\nabla f(\mathbf{x}^{k})\|^{2}+\mathsf{C}[f(\mathbf{x}^{k})-\bar{f}]\quad \text{a.s. and for all $k$}, \tag{15}\]
which has been recently introduced in [46, 40] for smooth stochastic optimization. Here, \(\bar{f}\) is assumed to be a lower bound of \(f\) on \(\mathbb{R}^{d}\). Remarkably, this property holds for a large class of nonconvex problems and sampling strategies, e.g., when \(f\) is an empirical risk function with Lipschitz gradient(s), see [46, Proposition 2 and 3]. If both \(f\) and \(\varphi\) are bounded from below and \(\nabla f\) is Lipschitz continuous, then (14) follows readily from the expected smoothness condition (15). Assumptions (A.3) and (A.4) summarize our requirements for the step sizes. Notice that the parameters \(\beta_{k}\) in (A.4) can be simply set to \(\beta_{k}=1\) for all \(k\). Hence, the conditions (A.3) and (A.4) do not entail any loss of generality compared to the standard setting \(\alpha_{k}\to 0\), \(\sum_{k=0}^{\infty}\alpha_{k}=\infty\), and \(\sum_{k=0}^{\infty}\alpha_{k}^{2}\sigma_{k}^{2}<\infty\). As we will see in Section 3, the specific choice of \(\{\beta_{k}\}_{k}\) will become more important when establishing convergence of \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) and when deriving corresponding rates of convergence.
### Stochastic and Deterministic Tools
We now introduce several stochastic concepts and techniques that will be the backbone of our analysis. In order to control the stochastic behavior of the error terms \(\mathbf{e}^{k}\) and to establish almost sure convergence of the sequence of iterates \(\{\mathbf{x}^{k}\}_{k}\) generated by Algorithm 1, we will utilize several results from martingale theory.
**Definition 2.3** (Martingale).: _Let \((\Omega,\mathcal{U},\mathbb{P})\) be a probability space and \(\{\mathcal{U}_{k}\}_{k}\) a family of increasing sub-\(\sigma\)-fields of \(\mathcal{U}\). A random process \(\{\mathbf{w}^{k}\}_{k}\) defined on this probability space is said to be a martingale with respect to the family \(\{\mathcal{U}_{k}\}_{k}\) if each \(\mathbf{w}^{k}\) is integrable and \(\mathcal{U}_{k}\)-measurable and we have \(\mathbb{E}[\mathbf{w}^{k+1}\mid\mathcal{U}_{k}]=\mathbf{w}^{k}\) a.s. for all \(k\)._
Next, we state a standard convergence theorem for vector-valued martingales, see, e.g., [90, Theorem 5.2.22] and [22, Theorem 5.14].
**Theorem 2.4** (Martingale Convergence Theorem).: _Let \(\{\mathbf{w}^{k}\}_{k}\) be a given vector-valued martingale with respect to some filtration \(\{\mathcal{U}_{k}\}_{k}\). If \(\sup_{k}\mathbb{E}[\|\mathbf{w}^{k}\|]<\infty\), then \(\{\mathbf{w}^{k}\}_{k}\) converges a.s. to an integrable random vector \(\mathbf{w}\)._
We will also require the well-known Burkholder-Davis-Gundy inequality [23, 90] in our analysis.
**Theorem 2.5** (Burkholder-Davis-Gundy Inequality).: _Let \(\{\mathbf{w}^{k}\}_{k}\) be a given vector-valued martingale with an associated filtration \(\{\mathcal{U}_{k}\}_{k}\) and \(\mathbf{w}^{0}=0\). Then, for all \(p\in(1,\infty)\), there exists \(C_{p}>0\) such that_
\[\mathbb{E}\left[\sup_{k\geq 0}\|\mathbf{w}^{k}\|^{p}\right]\leq C_{p}\cdot \mathbb{E}\left[\left(\sum_{k=1}^{\infty}\|\mathbf{w}^{k}-\mathbf{w}^{k-1}\|^{2}\right)^{ \frac{p}{2}}\right].\]
Notice that the constant \(C_{p}\) in Theorem 2.5 is universal and depends neither on the martingale \(\{\mathbf{w}^{k}\}_{k}\) nor on the dimension of the underlying space. In particular, it holds that \(C_{2}\leq 4\), [23, 90].
Next, we introduce Gronwall's inequality [41, 20, 92]. Here, we state a discrete variant of Gronwall's inequality, which can be found in, e.g., [20, Appendix B, Lemma 8].
**Proposition 2.6** (Gronwall's Inequality).: _Let \(\{a_{k}\}_{k}\subseteq\mathbb{R}_{++}\) and \(\{y_{k}\}_{k}\subseteq\mathbb{R}_{+}\) be given sequences. Suppose that we have \(y_{t+1}\leq p+q\sum_{k=0}^{t}a_{k}y_{k}\) for all \(t\) and some \(p,q\geq 0\). Then, it holds that \(y_{t+1}\leq p\cdot\exp(q\sum_{k=0}^{t}a_{k})\) for all \(t\geq 0\)._
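As a quick sanity check (not part of the original argument), the discrete Gronwall bound can be tested on randomly generated sequences that satisfy the hypothesis, e.g.:
```
import numpy as np

# If y_{t+1} <= p + q * sum_{k=0}^{t} a_k y_k for all t (and y_0 <= p, the empty-sum
# instance of the hypothesis), then y_{t+1} <= p * exp(q * sum_{k=0}^{t} a_k).
rng = np.random.default_rng(0)
p, q = 2.0, 0.3
a = rng.random(50) * 0.2                       # positive sequence {a_k}
y = [p * rng.random()]                         # y_0 <= p
for t in range(49):
    bound = p + q * np.dot(a[: t + 1], y)      # the assumed recursive bound
    y.append(bound * rng.random())             # any y_{t+1} below this bound
y = np.array(y)
envelope = p * np.exp(q * np.cumsum(a))        # Gronwall envelope for y_1, y_2, ...
print(np.all(y[1:] <= envelope[:-1] + 1e-12))  # True
```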
### Preparatory Lemmas
In this subsection, we present several basic lemmas that will allow us to study the descent behavior of Algorithm 1. We first elucidate the strong connection between the normal map and the natural residual -- as outlined in the introduction.
**Lemma 2.7**.: _Let \(\lambda>0\) be given. If \(x\in\mathbb{R}^{d}\) is a stationary point of (1) with \(F^{\lambda}_{\mathrm{nat}}\left(x\right)=0\), then \(z:=x-\lambda\nabla f(x)\) is a zero of the normal map \(F^{\lambda}_{\mathrm{nor}}\). Conversely, if \(z\) is a zero of the normal map \(F^{\lambda}_{\mathrm{nor}}\), then it holds that \(x:=\mathrm{prox}_{\lambda\varphi}(z)\in\mathrm{crit}(\psi)\)._
Proof.: The derivation is identical to the proof of [74, Lemma 2.1] and is included for completeness. Let \(x\) be a stationary point with \(F^{\lambda}_{\mathrm{nat}}(x)=0\). Setting \(z=x-\lambda\nabla f(x)\), it follows \(F^{\lambda}_{\mathrm{nor}}(z)=\nabla f(\mathrm{prox}_{\lambda\varphi}(x- \lambda\nabla f(x)))+\lambda^{-1}F^{\lambda}_{\mathrm{nat}}(x)-\nabla f(x)=0\). Conversely, let \(z\) be a zero of \(F^{\lambda}_{\mathrm{nor}}\). Then, \(x=\mathrm{prox}_{\lambda\varphi}(z)\) satisfies \(F^{\lambda}_{\mathrm{nat}}(x)=\mathrm{prox}_{\lambda\varphi}(z)-\mathrm{prox}_ {\lambda\varphi}(z-\lambda F^{\lambda}_{\mathrm{nor}}(z))=0\), i.e., \(x\) is a stationary point of (1).
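The correspondence of Lemma 2.7 is easy to observe numerically. The sketch below (with illustrative data; the deterministic proximal gradient iteration is only used to produce an approximately stationary point) solves an \(\ell_{1}\)-regularized least-squares problem and evaluates both residuals:
```
import numpy as np

# Illustration of Lemma 2.7 for psi(x) = (1/2n)||Ax - b||^2 + mu * ||x||_1.
rng = np.random.default_rng(0)
n, d, mu, lam = 30, 10, 0.05, 0.1
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
grad = lambda x: A.T @ (A @ x - b) / n
prox = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t * mu, 0.0)  # prox_{t*phi}

x = np.zeros(d)
for _ in range(20000):                       # proximal gradient iteration to stationarity
    x = prox(x - lam * grad(x), lam)

F_nat = x - prox(x - lam * grad(x), lam)     # natural residual at the (approximate) limit
z = x - lam * grad(x)                        # the associated candidate zero of F_nor
F_nor = grad(prox(z, lam)) + (z - prox(z, lam)) / lam
print(np.linalg.norm(F_nat), np.linalg.norm(F_nor))   # both ~ 0 (up to solver tolerance)
```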
We now return to the analysis of Algorithm 1. For \(m,n\in\mathbb{N}\) with \(m<n\), and according to the update rule of \(\mathsf{norM}\)-SGD, we can decompose the iterate \(\mathbf{z}^{n}\) as follows:
\[\mathbf{z}^{n} =\mathbf{z}^{n-1}-\alpha_{n-1}(\mathbf{g}^{n-1}+\nabla\operatorname{env}_{\lambda\varphi}(\mathbf{z}^{n-1}))=\mathbf{z}^{n-1}-\alpha_{n-1}F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{n-1})-\alpha_{n-1}(\mathbf{g}^{n-1}-\nabla f(\mathbf{x}^{n-1}))\] \[=\mathbf{z}^{m}-\sum\nolimits_{i=m}^{n-1}\alpha_{i}\cdot F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{m})-\sum\nolimits_{i=m}^{n-1}\alpha_{i}[F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{i})-F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{m})]-\sum\nolimits_{i=m}^{n-1}\alpha_{i}[\mathbf{g}^{i}-\nabla f(\mathbf{x}^{i})]\] \[=:\mathbf{z}^{m}-\tau_{m,n}F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{m})+\mathbf{e}^{m,n}, \tag{16}\]
where \(\tau_{m,n}:=\sum_{i=m}^{n-1}\alpha_{i}\). This representation will play a key role in our derivations.
In the following, we provide a standard martingale-type estimate for the aggregated noise terms \(\{\mathbf{e}^{k}\}_{k}\). The proof of Lemma 2.8 relies on a routine application of the Burkholder-Davis-Gundy inequality and is deferred to Appendix A.2. We refer to [8, Proposition 4.2] and [92, Lemma 6.1] for related results.
**Lemma 2.8**.: _Let \(\{\mathbf{x}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-SGD and let \(\{n_{k}\}_{k}\subseteq\mathbb{N}\) be an arbitrary subsequence. Suppose that the conditions (A.2)-(A.4) are satisfied and let us further define_
\[\mathbf{r}_{k}:=\max_{n_{k}<j\leq n_{k+1}}\left\|\sum\nolimits_{i=n_{k}}^{j-1} \alpha_{i}\mathbf{e}^{i}\right\|.\]
_Then, it holds that \(\sum_{k=0}^{\infty}\beta_{n_{k}}^{2}\mathbf{r}_{k}^{2}<\infty\) almost surely._
Next, we derive upper bounds for \(\mathbf{d}_{m,n}:=\max_{m<i\leq n}\left\|\mathbf{x}^{i}-\mathbf{x}^{m}\right\|\) and \(\|\mathbf{e}^{m,n}\|\).
**Lemma 2.9**.: _Let \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-SGD. Then, for all \(0\leq m<n\), we have_
\[\mathbf{d}_{m,n} \leq\max_{m<i\leq n}\left\|\mathbf{z}^{i}-\mathbf{z}^{m}\right\|\leq(1+ \tau_{m,n}\bar{\mathbf{\tau}}_{m,n})[\tau_{m,n}\left\|F^{\lambda}_{\mathrm{nor}}( \mathbf{z}^{m})\right\|+\mathbf{s}_{m,n}],\quad\text{and}\] \[\|\mathbf{e}^{m,n}\| \leq\bar{\mathbf{\tau}}_{m,n}\tau_{m,n}^{2}\left\|F^{\lambda}_{ \mathrm{nor}}(\mathbf{z}^{m})\right\|+(1+\bar{\mathbf{\tau}}_{m,n}\tau_{m,n})\,\mathbf{s}_ {m,n},\]
_where \(\tau_{m,n}=\sum_{i=m}^{n-1}\alpha_{i}\), \(\bar{\mathbf{\tau}}_{m,n}:=(\mathbf{L}+\frac{2}{\lambda})\exp((\mathbf{L}+\frac{2}{\lambda} )\tau_{m,n})\), and \(\mathbf{s}_{m,n}:=\max_{m<j\leq n}\|\sum_{i=m}^{j-1}\alpha_{i}\mathbf{e}^{i}\|\)._
Proof.: Clearly, the estimate is satisfied for all \(\omega\notin\mathcal{L}\) and hence, we only need to consider samples \(\omega\) with \(\omega\in\mathcal{L}\). Let \(\omega\in\mathcal{L}\) be arbitrary and let us set \(\tau\equiv\tau_{m,n}\), \(d\equiv d_{m,n}=\mathbf{d}_{m,n}(\omega)\), \(e\equiv e^{m,n}=\mathbf{e}^{m,n}(\omega)\), \(s\equiv s_{m,n}=\mathbf{s}_{m,n}(\omega)\), \(x^{k}=\mathbf{x}^{k}(\omega)\), \(z^{k}=\mathbf{z}^{k}(\omega)\), \(L=\mathbf{L}(\omega)<\infty\), etc. By [82, Theorem 9.2], we have
\[\|\nabla f(u)-\nabla f(v)\|\leq L\|u-v\|\quad\forall\;u,v\in\mathrm{cl}(\mathrm{ conv}(\{x^{k}\}_{k})). \tag{17}\]
Furthermore, using the definition of \(e\) and the triangle inequality, we obtain
\[\|e\|\leq\sum\nolimits_{i=m}^{n-1}\alpha_{i}\|F^{\lambda}_{\mathrm{nor}}(z^{i})-F ^{\lambda}_{\mathrm{nor}}(z^{m})\|+\left\|\sum\nolimits_{i=m}^{n-1}\alpha_{i}e ^{i}\right\|\leq\sum\nolimits_{i=m}^{n-1}\alpha_{i}\|F^{\lambda}_{\mathrm{nor}}(z^ {i})-F^{\lambda}_{\mathrm{nor}}(z^{m})\|+s. \tag{18}\]
By (17) and due to the nonexpansiveness of the proximity operator, it holds that
\[\|F^{\lambda}_{\text{nor}}(z^{i})-F^{\lambda}_{\text{nor}}(z^{m})\|\leq(L+\lambda ^{-1})\|x^{i}-x^{m}\|+\lambda^{-1}\|z^{i}-z^{m}\|\leq(L+2\lambda^{-1})\|z^{i}-z^ {m}\|. \tag{19}\]
Applying (16) and (18) for \(n\equiv i\), (\(m<i\leq n\)), we can bound the term \(\|z^{i}-z^{m}\|\) as follows
\[\|z^{i}-z^{m}\|\leq\tau_{m,i}\|F^{\lambda}_{\text{nor}}(z^{m})\|+\|e^{m,i}\| \leq\sum\nolimits_{j=m}^{i-1}\alpha_{j}\|F^{\lambda}_{\text{nor}}(z^{j})-F^{ \lambda}_{\text{nor}}(z^{m})\|+\tau\|F^{\lambda}_{\text{nor}}(z^{m})\|+s. \tag{20}\]
Thus, combining the previous estimates, we can infer
\[\|F^{\lambda}_{\text{nor}}(z^{i})-F^{\lambda}_{\text{nor}}(z^{m})\|\leq(L+2 \lambda^{-1})\left[\sum\nolimits_{j=m}^{i-1}\alpha_{j}\|F^{\lambda}_{\text{ nor}}(z^{j})-F^{\lambda}_{\text{nor}}(z^{m})\|+\tau\|F^{\lambda}_{\text{ nor}}(z^{m})\|+s\right].\]
We now apply Gronwall's inequality (Proposition 2.6) upon setting:
\[p :=(L+2\lambda^{-1})[\tau\|F^{\lambda}_{\text{nor}}(z^{m})\|+s], \quad q:=L+2\lambda^{-1},\quad a_{k}:=\alpha_{m+k},\] \[y_{k} :=\|F^{\lambda}_{\text{nor}}(z^{m+k})-F^{\lambda}_{\text{nor}}(z^{ m})\|,\quad\text{and}\quad t:=i-m-1.\]
This establishes the following upper bound for \(\|F^{\lambda}_{\text{nor}}(z^{i})-F^{\lambda}_{\text{nor}}(z^{m})\|\) for all \(i\in(m,n]\cap\mathbb{N}\):
\[\|F^{\lambda}_{\text{nor}}(z^{i})-F^{\lambda}_{\text{nor}}(z^{m})\|\leq(L+2 \lambda^{-1})\exp((L+2\lambda^{-1})\tau)[\tau\|F^{\lambda}_{\text{nor}}(z^{m}) \|+s]. \tag{21}\]
Using this estimate in (18), we obtain the upper bound for \(\|e^{m,n}\|\). Finally, utilizing the nonexpansiveness of the proximity operator, and invoking estimates (20) and (21), we have
\[d\leq\max_{m<i\leq n}\|z^{i}-z^{m}\|\leq(1+\tau(L+2\lambda^{-1})\exp(\tau(L+2 \lambda^{-1})))\cdot[\tau\|F^{\lambda}_{\text{nor}}(z^{m})\|+s],\]
which completes the proof.
We continue with a descent-type estimate for the objective function \(\psi\) and a technical upper bound for \(\|F^{\lambda}_{\text{nor}}(\mathbf{z}^{n})\|\). These bounds will be utilized in the following subsection to establish approximate descent of a suitable merit function for \(\text{norM-SGD}\). The corresponding proofs of Lemma 2.10 and Lemma 2.11 are presented in Appendix B.
**Lemma 2.10**.: _Let \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) be generated by \(\text{norM-SGD}\). Then, for all \(0\leq m<n\), it holds that_
\[\psi(\mathbf{x}^{n})-\psi(\mathbf{x}^{m})\leq\left(\frac{\mathbf{L}}{2}-\frac{1}{\lambda} \right)\left\|\mathbf{x}^{n}-\mathbf{x}^{m}\right\|^{2}+\langle F^{\lambda}_{\text{ nor}}(\mathbf{z}^{m}),\mathbf{x}^{n}-\mathbf{x}^{m}\rangle+\frac{1}{\lambda}\langle\mathbf{z}^{n}- \mathbf{z}^{m},\mathbf{x}^{n}-\mathbf{x}^{m}\rangle. \tag{22}\]
**Lemma 2.11**.: _Let \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) be generated by \(\text{norM-SGD}\). Then, for all \(0\leq m<n\), we have_

\[\|F^{\lambda}_{\text{nor}}(\mathbf{z}^{n})\|^{2} \leq\left(1-\frac{\tau_{m,n}}{\lambda}\right)^{2}\|F^{\lambda}_{\text{nor}}(\mathbf{z}^{m})\|^{2}+\frac{(1+\lambda\mathbf{L})^{2}}{\lambda^{2}}\left\|\mathbf{x}^{n}-\mathbf{x}^{m}\right\|^{2}+\frac{1}{\lambda^{2}}\|\mathbf{e}^{m,n}\|^{2}\] \[\qquad\qquad+\frac{2}{\lambda}\left(1-\frac{\tau_{m,n}}{\lambda}\right)\langle F^{\lambda}_{\text{nor}}(\mathbf{z}^{m}),\lambda\left(\nabla f(\mathbf{x}^{n})-\nabla f(\mathbf{x}^{m})\right)-(\mathbf{x}^{n}-\mathbf{x}^{m})+\mathbf{e}^{m,n}\rangle\] \[\qquad\qquad+\frac{2}{\lambda}\langle\nabla f(\mathbf{x}^{n})-\nabla f(\mathbf{x}^{m}),\mathbf{e}^{m,n}\rangle-\frac{2}{\lambda^{2}}\langle\mathbf{e}^{m,n},\mathbf{x}^{n}-\mathbf{x}^{m}\rangle.\]
| Notation | Description |
| --- | --- |
| \(F^{\lambda}_{\mathrm{nor}}\) | normal map \(F^{\lambda}_{\mathrm{nor}}(z)=\nabla f(\mathrm{prox}_{\lambda\varphi}(z))+\frac{1}{\lambda}(z-\mathrm{prox}_{\lambda\varphi}(z))\) |
| \(H_{\mathbf{\xi}},\mathbf{\xi},\xi\) | merit function \(H_{\mathbf{\xi}}(z)=\psi(\mathrm{prox}_{\lambda\varphi}(z))+\frac{\mathbf{\xi}\lambda}{2}\|F^{\lambda}_{\mathrm{nor}}(z)\|^{2}\); the parameters \(\mathbf{\xi}\) and \(\xi\) are typically chosen as \(\mathbf{\xi}=1/(2+2\lambda^{2}\mathbf{L}^{2})\) or \(\xi=1/(2+2\lambda^{2}\mathsf{L}^{2})\) |
| \(\{\beta_{k}\}_{k},T,\bar{\mathbf{T}},\{m_{k}\}_{k}\) | growth rate / rate parameters, time windows, associated time indices |
| \(\{\mathbf{e}^{k}\}_{k}\) | stochastic approximation errors \(\mathbf{e}^{k}=\mathbf{g}^{k}-\nabla f(\mathbf{x}^{k})\) |
| \(\mathbf{d}_{m,n}\), \(\mathbf{e}^{m,n}\) | \(\mathbf{d}_{m,n}=\max_{m<j\leq n}\|\mathbf{x}^{j}-\mathbf{x}^{m}\|\), \(\mathbf{e}^{m,n}=-\sum_{i=m}^{n-1}\alpha_{i}[F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{i})-F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{m})+\mathbf{e}^{i}]\) |
| \(\mathbf{s}_{m,n}\), \(\{\mathbf{s}_{k}\}_{k}\) | \(\mathbf{s}_{m,n}=\max_{m<j\leq n}\|\sum_{i=m}^{j-1}\alpha_{i}\mathbf{e}^{i}\|\) and \(\mathbf{s}_{k}=\mathbf{s}_{m_{k},m_{k+1}}\) |
| \(\tau_{m,n}\), \(\bar{\mathbf{\tau}}_{m,n}\) | \(\tau_{m,n}=\sum_{i=m}^{n-1}\alpha_{i}\) and \(\bar{\mathbf{\tau}}_{m,n}=(\mathbf{L}+\frac{2}{\lambda})\exp((\mathbf{L}+\frac{2}{\lambda})\tau_{m,n})\) |
| \(\bar{\tau}\), \(\bar{\lambda}\) | \(\bar{\tau}=\bar{\mathbf{\tau}}_{m_{k},m_{k+1}}(\omega)\), \(\bar{\lambda}=(\mathbf{L}(\omega)+\frac{2}{\lambda})\exp(\mathbf{L}(\omega)\lambda+2)\) or \(\bar{\lambda}=(\mathsf{L}+\frac{2}{\lambda})\exp(\mathsf{L}\lambda+2)\) |

Table 1: List of Variables, Parameters, and Functions.
### A Merit Function for \(\mathsf{norM}\)-\(\mathsf{SGD}\), Time Windows, and Approximate Descent
With Lemma 2.10 and Lemma 2.11 at hand, we are ready to establish an approximate descent property for the stochastic process \(\{\mathbf{z}^{k}\}_{k}\). Our derivations and steps will rely on a stochastic merit function for problem (1) which combines the objective function \(\psi\) and the normal map and which has been introduced recently in [74].
**Definition 2.12** (Merit Function).: _Let \(\mathbf{\xi}:\Omega\to\mathbb{R}_{+}\) be a non-negative random variable and let \(\lambda>0\) be given. We define the merit function \(H_{\mathbf{\xi}}:\mathbb{R}^{d}\to\mathbb{R}\) as follows:_
\[H_{\mathbf{\xi}}(z):=\psi(\mathrm{prox}_{\lambda\varphi}(z))+\frac{\mathbf{\xi}\lambda} {2}\|F^{\lambda}_{\mathrm{nor}}(z)\|^{2}.\]
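As a concrete illustration of this definition (our sketch, with hypothetical data and parameters), both \(F^{\lambda}_{\mathrm{nor}}\) and \(H_{\mathbf{\xi}}\) are directly computable once the proximity operator is available; here for \(f(x)=\frac{1}{2}\|Ax-b\|^{2}\) and \(\varphi=\mu\|\cdot\|_{1}\):

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
mu, lam, xi = 0.1, 0.5, 0.25  # hypothetical problem parameters

def prox_l1(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def F_nor(z):
    # F_nor(z) = grad f(prox(z)) + (z - prox(z))/lam
    x = prox_l1(z, lam * mu)
    return A.T @ (A @ x - b) + (z - x) / lam

def H(z):
    # H_xi(z) = psi(prox(z)) + (xi*lam/2)*||F_nor(z)||^2
    x = prox_l1(z, lam * mu)
    psi = 0.5 * np.linalg.norm(A @ x - b) ** 2 + mu * np.abs(x).sum()
    return psi + 0.5 * xi * lam * np.linalg.norm(F_nor(z)) ** 2

print(H(np.zeros(5)))
```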
Inspired by [92] and classical stochastic approximation literature [60, 10, 50, 20], we will study descent properties of the merit function \(H_{\mathbf{\xi}}\) using the natural time scales \(\tau_{k,n}=\sum_{i=k}^{n-1}\alpha_{i}\) for \(k<n\), and \(\tau_{k,k}=0\). Specifically, similar to [92], let us define
\[\varpi:\mathbb{N}\times\mathbb{R}_{+}\to\mathbb{N},\quad\varpi(k,T):=\max\{k+ 1,\sup\{n\geq k:\tau_{k,n}\leq T\}\}.\]
Here, \(T\in\mathbb{R}_{+}\) is referred to as a _time window_ and the associated _time indices_\(\{m_{k}\}_{k}\) are defined recursively via \(m_{0}=0\) and \(m_{k+1}:=\varpi(m_{k},T)\) for all \(k\in\mathbb{N}\). Let us assume that the step size condition (A.3) is satisfied. Thanks to \(\sum_{k=0}^{\infty}\alpha_{k}=\infty\), the sequence \(\{m_{k}\}_{k}\) is then well-defined and we have \(m_{k}<\infty\) for all \(k\). Next, we summarize several key observations for the time window \(T\) and its associated sequences \(\{m_{k}\}_{k}\) and \(\{\tau_{m_{k},m_{k+1}}\}_{k}\):
(T.1) Due to \(\lim_{k\to\infty}\alpha_{k}=0\), there is \(K\in\mathbb{N}\) such that \(\alpha_{k}\leq T\) for all \(k\geq K\). This implies \(\varpi(k,T)=\sup\{n\geq k:\tau_{k,n}\leq T\}\) for all \(k\geq K\) and using the definition of \(\varpi\), it then holds that \(\tau_{m_{k},m_{k+1}}=\sum_{i=m_{k}}^{m_{k+1}-1}\alpha_{i}\leq T\) for all \(k\) with \(m_{k}\geq K\).

(T.2) In addition, for all \(\delta\in(0,1)\) there exists \(K^{\prime}\geq K\) such that \(\alpha_{k}\leq(1-\delta)T\) for all \(k\geq K^{\prime}\). Hence, by the optimality of \(m_{k+1}\) and \(\varpi(m_{k},T)\), we can establish a lower bound for \(\tau_{m_{k},m_{k+1}}\): \[\tau_{m_{k},m_{k+1}}\geq T-\alpha_{m_{k+1}}\geq\delta T,\] for all \(k\) and \(m_{k}\) satisfying \(m_{k}\geq K^{\prime}\).
Consequently, defining \(K_{\delta}:=\inf\{k\in\mathbb{N}:m_{k}\geq K^{\prime}\}\), it follows:
\[\delta T\leq\tau_{m_{k},m_{k+1}}\leq T,\quad\forall\;k\geq K_{\delta}. \tag{23}\]
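The following small helper (ours; a finite list of step sizes is assumed and all names are placeholders) makes the recursive construction \(m_{0}=0\), \(m_{k+1}=\varpi(m_{k},T)\) explicit:

```python
def time_indices(alphas, T):
    """Compute m_0 = 0, m_{k+1} = varpi(m_k, T) for a finite list of step sizes,
    where varpi(k, T) = max{k+1, sup{n >= k : sum_{i=k}^{n-1} alpha_i <= T}}."""
    m, n = [0], 0
    while n < len(alphas):
        acc, j = 0.0, n
        while j < len(alphas) and acc + alphas[j] <= T:
            acc += alphas[j]
            j += 1
        n = max(n + 1, j)   # enforce strict progress, as in the definition of varpi
        m.append(n)
    return m

alphas = [1.0 / (k + 1) ** 0.75 for k in range(200)]
print(time_indices(alphas, T=1.0)[:8])  # first few windows with tau_{m_k, m_{k+1}} <= 1
```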
Next, based on the time window \(T\) and the indices \(\{m_{k}\}_{k}\), we introduce the aggregated error term \(\mathbf{s}_{k}:=\mathbf{s}_{m_{k},m_{k+1}}=\max_{m_{k}<j\leq m_{k+1}}\|\sum_{i=m_{k}} ^{j-1}\alpha_{i}\mathbf{e}^{i}\|\) and the event
\[\mathcal{T}:=\left\{\omega:\sum_{k=0}^{\infty}\beta_{m_{k}}^{2}\mathbf{s}_{k}( \omega)^{2}<\infty\right\}.\]
Events of this form will play a central role in our analysis as they allow us to control the error terms \(\{\mathbf{s}_{k}\}_{k}\). Moreover, under assumptions (A.2)-(A.4) and applying Lemma 2.8, the event \(\mathcal{T}\) occurs with probability \(1\). As we will see in Lemma 2.13, the specific choice of the time window \(T\) will ultimately depend on the stochastic Lipschitz constant \(\mathbf{L}\). Since \(T\) affects the time indices \(\{m_{k}\}_{k}\) and the event \(\mathcal{T}\), we will work with a master event \(\mathcal{U}\) that allows us to decouple the choice of the sample \(\omega\) from the time window \(T\). Let \(\{m_{k}[T]\}_{k}\) denote the time indices associated with the time window \(T\). We use the extended notations \(\mathbf{s}_{k}[T]=\mathbf{s}_{m_{k}[T],m_{k+1}[T]}\), \(\mathcal{T}[T]:=\{\omega:\sum_{k=0}^{\infty}\beta_{m_{k}[T]}^{2}\mathbf{s}_{k}[T](\omega)^{2}<\infty\}\), etc. to indicate the explicit dependence of the terms \(\mathbf{s}_{k}\), \(\mathcal{T}\), etc. on \(T\). We then define the master events \(\mathcal{M}\) and \(\mathcal{U}\) via
\[\mathcal{U}:=\bigcap_{T\in\mathbb{Q}_{++}}\mathcal{T}[T]\quad\text{and}\quad \mathcal{M}:=\mathcal{L}\cap\mathcal{U}. \tag{24}\]
As the intersection of countably many events occurring with probability \(1\) still has probability \(1\), we can infer \(\mathbb{P}(\mathcal{U})=1\) and \(\mathbb{P}(\mathcal{M})=\mathbb{P}(\mathcal{L})\) by Lemma 2.8 (under (A.2)-(A.4)). In the following, we will often drop the notational term "\([T]\)" in our derivations whenever the explicit dependence on \(T\) (and \(\{m_{k}[T]\}_{k}\)) is clear.
We are now in a position to state a first descent-type property for \(\mathsf{norM}\)-\(\mathsf{SGD}\).
**Lemma 2.13** (Approximate Descent Property).: _Let \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-\(\mathsf{SGD}\) and suppose (A.3) and (A.4) hold. Define \(\mathbf{\xi}(\omega):=1/(2+2\lambda^{2}\mathbf{L}(\omega)^{2})\) for \(\omega\in\mathcal{L}\) and set \(\mathbf{\xi}(\omega)=0\) for \(\omega\notin\mathcal{L}\). For all \(\omega\in\mathcal{L}\), there then exists \(\mathbf{\bar{T}}(\omega)>0\) such that for every time window \(T\in(0,\mathbf{\bar{T}}(\omega)]\) with associated time indices \(\{m_{k}\}_{k}\equiv\{m_{k}[T]\}_{k}\) there is \(K_{0}\equiv K_{0}[T]\in\mathbb{N}\) such that_
\[H_{\mathbf{\xi}(\omega)}(\mathbf{z}^{m_{k+1}}(\omega))-H_{\mathbf{\xi}(\omega)}(\mathbf{z}^{m_{k} }(\omega))\leq-\frac{\mathbf{\xi}(\omega)T}{5}\|F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{m _{k}}(\omega))\|^{2}+\frac{5}{T}\mathbf{s}_{k}(\omega)^{2}\quad\forall\;k\geq K_{0}. \tag{25}\]
Proof.: We define the random variable \(\bar{\mathbf{T}}\) point-wise on \(\mathcal{L}\) via
\[\bar{\mathbf{T}}(\omega):=\sup\big{\{}t\in\mathbb{R}_{+}:t\leq\min\{ \frac{4\lambda}{5},1\},\;\frac{1}{2t}\geq\mathbf{\xi}(\omega)(4\mathbf{L}(\omega)- \lambda^{-1})+\mathbf{L}(\omega), \tag{26}\] \[\mathbf{L}(\omega)\bar{\mathbf{\lambda}}(\omega)^{2}t^{2}+5\bar{\mathbf{ \lambda}}(\omega)^{2}t\leq\frac{8\xi(\omega)}{25\lambda},\;\text{and}\;\left[ \frac{5}{t}+\mathbf{L}(\omega)\right](1+\bar{\mathbf{\lambda}}(\omega)t)^{2}\leq\frac{1 0}{t}\big{\}},\]
where \(\bar{\mathbf{\lambda}}:=(\mathbf{L}+2\lambda^{-1})\exp(\mathbf{L}\lambda+2)\). Notice that the conditions defining \(\bar{\mathbf{T}}(\omega)\) then also hold for all \(T\in(0,\bar{\mathbf{T}}(\omega)]\). Let \(\omega\in\mathcal{L}\) and \(T\in(0,\bar{\mathbf{T}}(\omega)]\) be arbitrary and define the associated time indices \(\{m_{k}\}_{k}\equiv\{m_{k}[T]\}_{k}\) for \(T\). Let us consider the corresponding realizations \(\xi=\mathbf{\xi}(\omega)\), \(L=\mathbf{L}(\omega)\), \(x^{k}=\mathbf{x}^{k}(\omega)\), etc. To simplify the notation, we will also set \(e\equiv e^{m_{k},m_{k+1}}=\mathbf{e}^{m_{k},m_{k+1}}(\omega)\), \(\tau\equiv\tau_{m_{k},m_{k+1}}\), and \(\bar{\tau}\equiv\bar{\tau}_{m_{k},m_{k+1}}=\bar{\mathbf{\tau}}_{m_{k},m_{k+1}}(\omega)\). As discussed in the previous paragraphs, there exists \(K_{0}:=K_{\delta}\) such that the statements in (T.1), (T.2), and (23) hold with \(\delta=\frac{4}{5}\) for all \(k\geq K_{0}\). Setting \(n\equiv m_{k+1}\) and \(m\equiv m_{k}\) for some fixed \(k\geq K_{0}\), combining Lemma 2.10 and Lemma 2.11 and following the proof of [74, Lemma 4.3], we obtain
\[H_{\xi}(z^{n})-H_{\xi}(z^{m})\] \[\quad\leq\psi(x^{n})-\psi(x^{m})+\frac{\xi\lambda}{2}[\|F^{\lambda}_{\text{nor}}(z^{n})\|^{2}-\|F^{\lambda}_{\text{nor}}(z^{m})\|^{2}]\] \[\quad\leq-\xi\tau\left(1-\frac{\tau}{2\lambda}\right)\|F^{\lambda}_{\text{nor}}(z^{m})\|^{2}+\langle F^{\lambda}_{\text{nor}}(z^{m})+\lambda^{-1}(z^{n}-z^{m})-\xi(1-\tau\lambda^{-1})F^{\lambda}_{\text{nor}}(z^{m})-\xi\lambda^{-1}e,x^{n}-x^{m}\rangle\] \[\qquad\quad+\left(\frac{L^{2}\xi\lambda}{2}+L\left(\xi+\frac{1}{2}\right)-\frac{2-\xi}{2\lambda}\right)\|x^{n}-x^{m}\|^{2}+\xi\left(1-\frac{\tau}{\lambda}\right)\langle F^{\lambda}_{\text{nor}}(z^{m}),\lambda[\nabla f(x^{n})-\nabla f(x^{m})]+e\rangle\] \[\qquad\quad+\xi\langle\nabla f(x^{n})-\nabla f(x^{m}),e\rangle+\frac{\xi}{2\lambda}\|e\|^{2}. \tag{27}\]
Furthermore, based on the definition of \(F^{\lambda}_{\text{nor}}\), \(e\), and \(\tau\), and using (16), it follows
\[F^{\lambda}_{\text{nor}}(z^{m})+\lambda^{-1}(z^{n}-z^{m})-\xi(1- \tau\lambda^{-1})F^{\lambda}_{\text{nor}}(z^{m})-\xi\lambda^{-1}e\] \[\qquad\quad=(1-\xi)F^{\lambda}_{\text{nor}}(z^{m})+\lambda^{-1}(z ^{n}-z^{m})-\xi\lambda^{-1}(e-\tau F^{\lambda}_{\text{nor}}(z^{m}))\] \[\qquad\quad=(1-\xi)\,F^{\lambda}_{\text{nor}}(z^{m})+(1-\xi) \lambda^{-1}(z^{n}-z^{m})=(1-\xi)\left(\frac{1}{\lambda}-\frac{1}{\tau}\right) (z^{n}-z^{m})+(1-\xi)\tau^{-1}e. \tag{28}\]
Thus, using the firm nonexpansiveness of the proximity operator, i.e., \(\|x^{n}-x^{m}\|^{2}=\|\text{prox}_{\lambda\varphi}(z^{n})-\text{prox}_{\lambda \varphi}(z^{m})\|^{2}\leq\langle z^{n}-z^{m},x^{n}-x^{m}\rangle\) (see, e.g., [68, 6]), \(\xi\in(0,1)\), and \(\tau\in(0,T]\subset(0,\lambda)\), it holds that
\[H_{\xi}(z^{n})-H_{\xi}(z^{m})\] \[\quad\leq-\xi\tau\left(1-\frac{\tau}{2\lambda}\right)\|F^{\lambda} _{\text{nor}}(z^{m})\|^{2}+\frac{1-\xi}{\tau}\langle e,x^{n}-x^{m}\rangle+ \left[\left(L+\frac{L^{2}\lambda}{2}-\frac{1}{2\lambda}\right)\xi+\frac{L}{2}- \frac{1-\xi}{\tau}\right]\|x^{n}-x^{m}\|^{2}\] \[\qquad\quad+\xi\left(1-\frac{\tau}{\lambda}\right)\langle F^{ \lambda}_{\text{nor}}(z^{m}),\lambda[\nabla f(x^{n})-\nabla f(x^{m})]+e \rangle+\xi\langle\nabla f(x^{n})-\nabla f(x^{m}),e\rangle+\frac{\xi}{2 \lambda}\left\|e\right\|^{2}. \tag{29}\]
Applying the Cauchy-Schwarz inequality, Young's inequality and the Lipschitz condition (17), we have
\[\langle F^{\lambda}_{\text{nor}}(z^{m}),\lambda[\nabla f(x^{n})-\nabla f(x^{m})] \rangle\leq\frac{\tau}{2}\|F^{\lambda}_{\text{nor}}(z^{m})\|^{2}+\frac{L^{2} \lambda^{2}}{2\tau}\|x^{n}-x^{m}\|^{2}\]
and \(\langle\nabla f(x^{n})-\nabla f(x^{m}),e\rangle\leq L\|x^{n}-x^{m}\|^{2}+\frac{L}{4}\|e\|^{2}\). Similarly, it holds that \(\langle F^{\lambda}_{\text{nor}}(z^{m}),e\rangle\leq\frac{\tau}{4}\|F^{\lambda}_{\text{nor}}(z^{m})\|^{2}+\frac{1}{\tau}\|e\|^{2}\) and \(\langle e,x^{n}-x^{m}\rangle\leq\frac{1}{2}\|x^{n}-x^{m}\|^{2}+\frac{1}{2}\|e\|^{2}\). Plugging those estimates into formula (29) and using \(\xi=\frac{1}{2}(1+L^{2}\lambda^{2})^{-1}\leq 1\), \(\tau\geq\delta T\equiv\frac{4}{5}T\), \(\tau\leq T\), and (26), we obtain
\[H_{\xi}(z^{n})-H_{\xi}(z^{m})\leq-\frac{\xi\tau}{4}\left(1+ \frac{\tau}{\lambda}\right)\|F^{\lambda}_{\text{nor}}(z^{m})\|^{2}+\left[ \frac{1+\xi}{2\tau}-\frac{\xi}{2\lambda}+\frac{L\xi}{4}\right]\|e\|^{2}\] \[\qquad\quad+\frac{1}{2}\left[\xi\left(4L-\frac{1}{\lambda} \right)+L-\frac{1-\xi(1+L^{2}\lambda^{2})}{\tau}\right]\|x^{n}-x^{m}\|^{2}\] \[\quad\leq-\frac{\xi T}{5}\left(1+\frac{4T}{5\lambda}\right)\|F^{ \lambda}_{\text{nor}}(z^{m})\|^{2}+\frac{1}{4}\left[\frac{5}{T}+L\right]\|e\|^ {2}.\]
Due to \(\tau\leq T<\lambda\), it follows \(\bar{\tau}\leq\bar{\lambda}\) and applying Lemma 2.9, we have \(\|e\|^{2}\leq 2\bar{\lambda}^{2}T^{4}\|F_{\mathrm{nor}}^{\lambda}(z^{m})\|^{2}+2(1+\bar{\lambda}T)^{2}s_{k}^{2}\). Combining these estimates and invoking the third and fourth inequality in (26), we conclude
\[H_{\xi}(z^{n})-H_{\xi}(z^{m}) \leq\left[-\frac{\xi T}{5}+T^{2}\left(\frac{L\bar{\lambda}^{2}T^{ 2}}{2}+\frac{5\bar{\lambda}^{2}T}{2}-\frac{4\xi}{25\lambda}\right)\right]\|F_{ \mathrm{nor}}^{\lambda}(z^{m})\|^{2}+\left[\frac{5}{2T}+\frac{L}{2}\right](1+ \bar{\lambda}T)^{2}s_{k}^{2}\] \[\leq-\frac{\xi T}{5}\|F_{\mathrm{nor}}^{\lambda}(z^{m})\|^{2}+ \frac{5}{T}s_{k}^{2},\]
which establishes the approximate descent property.
**Remark 2.14**.: _The time window-based derivation in Lemma 2.13 offers certain advantages compared to the conventional analysis. Specifically, in the adaptive special case "\(T\equiv\alpha_{k}\)", the time indices \(\{m_{k}\}_{k}\) satisfy \(m_{k}=k\) and (25) reduces to a classical descent estimate \(H_{\xi}(\mathbf{z}^{k+1})-H_{\xi}(\mathbf{z}^{k})\leq-\frac{\xi\alpha_{k}}{5}\|F_{\mathrm{nor}}^{\lambda}(\mathbf{z}^{k})\|^{2}+\frac{5}{\alpha_{k}}s_{k}^{2}\). However, as the order of the error term \(s_{k}^{2}/\alpha_{k}\) is \(\mathcal{O}(\alpha_{k})\), this type of condition does not allow us to establish convergence. Similar observations have been made in earlier studies of \(\mathsf{prox}\)-\(\mathsf{SGD}\) -- initially necessitating the usage of variance reduction strategies [39, 44]. By contrast, the time window technique and normal map-based approach allows us to aggregate the errors and step sizes, yielding the more controllable term \(\mathbf{s}_{k}^{2}/T\). As mentioned, this aggregated error will be summable on \(\mathcal{M}\)._
### Convergence Analysis
We now prove global convergence of Algorithm 1 in two different settings. In Subsection 2.5.1, we follow a trajectory-based approach under Assumption 2.1 and show almost sure convergence of \(\{\|F_{\mathrm{nor}}^{\lambda}(\mathbf{z}^{k})\|\}_{k}\) and \(\{\psi(\mathbf{x}^{k})\}_{k}\) on the event \(\mathcal{L}\). In Subsection 2.5.2, we provide extensions of our results under the expected smoothness condition (B.2) and Lipschitz continuity of the gradient mapping \(\nabla f\). In this case, almost sure convergence, convergence in expectation, and complexity results can be established.
#### 2.5.1 Trajectory-based Convergence
**Theorem 2.15**.: _Let the stochastic processes \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-\(\mathsf{SGD}\) and assume that the conditions (A.1)-(A.4) hold. Then, there exists some random variable \(\mathbf{\psi}^{*}:\Omega\to\mathbb{R}\) such that_
\[F_{\mathrm{nor}}^{\lambda}(\mathbf{z}^{k})\to 0,\quad F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{k})\to 0,\quad\text{and}\quad\psi(\mathbf{x}^{k})\to\mathbf{\psi}^{*}\quad\text{as}\quad k\to\infty,\quad\text{almost surely on the event }\mathcal{L}.\]
Proof.: We fix an arbitrary sample \(\omega\in\mathcal{M}\) and consider the realizations \(\xi=\mathbf{\xi}(\omega)\), \(L=\mathbf{L}(\omega)\), \(x^{k}=\mathbf{x}^{k}(\omega)\), etc. By Lemma 2.13, there are \(T\in\mathbb{Q}\cap(0,\bar{\mathbf{T}}(\omega)]\) (with associated time indices \(\{m_{k}\}_{k}\equiv\{m_{k}[T]\}_{k}\)) and \(K_{0}\in\mathbb{N}\) such that
\[H_{\xi}(z^{m_{k+1}})-H_{\xi}(z^{m_{k}})\leq-\frac{\xi T}{5}\|F_{ \mathrm{nor}}^{\lambda}(z^{m_{k}})\|^{2}+\frac{5}{T}s_{k}^{2}\quad\forall\;k \geq K_{0}. \tag{30}\]
Let us define \(u_{k}:=\frac{5}{T}\sum_{i=k}^{\infty}s_{i}^{2}\). By (A.4), there exists \(K_{\beta}\in\mathbb{N}\) such that \(\{\beta_{k}\}_{k}\) is non-decreasing for all \(k\geq K_{\beta}\) and hence, we have \(\beta_{k}\geq\beta_{K_{\beta}}\) for all \(k\geq K_{\beta}\). This implies \(\beta_{k}^{-1}\leq\bar{\beta}\) for all \(k\) where \(\bar{\beta}:=\max_{j=0,\ldots,K_{\beta}}\beta_{j}^{-1}\). Thus, using \(\omega\in\mathcal{M}\subseteq\mathcal{U}\subseteq\mathcal{T}[T]=\{\omega^{\prime}:\sum_{k=0}^{\infty}\beta_{m_{k}[T]}^{2}\mathbf{s}_{k}[T](\omega^{\prime})^{2}<\infty\}\), we obtain
\[u_{0}=\frac{5}{T}\!\sum\nolimits_{i=0}^{\infty}\!s_{i}^{2}\leq \frac{5\bar{\beta}^{2}}{T}\!\sum\nolimits_{i=0}^{\infty}\!\beta_{m_{i}}^{2}s_{i }^{2}<\infty.\]
Condition (30) further implies
\[H_{\xi}(z^{m_{k+1}})+u_{k+1}\leq H_{\xi}(z^{m_{k}})+u_{k}-\frac{ \xi T}{5}\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k}})\|^{2}\quad\forall\;k\geq K_{0}. \tag{31}\]
Hence, the sequence \(\{H_{\xi}(z^{m_{k}})+u_{k}\}_{k}\) is non-increasing and bounded from below (which follows from (A.1)). Due to \(\lim_{k\to\infty}u_{k}=0\), we can then infer \(\lim_{k\to\infty}H_{\xi}(z^{m_{k}})=\lim_{k\to\infty}(H_{\xi}(z^{m_{k}})+u_{k})=\psi^{*}\) for some \(\psi^{*}\in\mathbb{R}\). Summing the estimate (31) for \(k\geq K_{0}\) and using the convergence of \(\{H_{\xi}(z^{m_{k}})+u_{k}\}_{k}\), we can conclude \(\sum_{k=0}^{\infty}\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k}})\|^{2}<\infty\) and \(\lim_{k\to\infty}\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k}})\|=0\). Following the derivation of (21) in Lemma 2.9, we further have
\[\|F_{\mathrm{nor}}^{\lambda}(z^{i})\|\leq\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k}}) \|+\bar{\tau}_{m_{k},m_{k+1}}[\tau_{m_{k},m_{k+1}}\|F_{\mathrm{nor}}^{\lambda}(z^ {m_{k}})\|+s_{k}]\leq(1+\bar{\lambda}\lambda)\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k} })\|+\bar{\lambda}s_{k} \tag{32}\]
for all \(i=m_{k}+1,\ldots,m_{k+1}\) where we used \(T\leq\bar{\mathbf{T}}(\omega)\leq\lambda\) and the definition of \(\bar{\lambda}=\bar{\mathbf{\lambda}}(\omega)\) (cf. (26) and Table 1). Applying \(u_{0}<\infty\), we can infer \(s_{k}\to 0\) and hence, it follows
\[\lim_{k\to\infty}\|F_{\mathrm{nor}}^{\lambda}(z^{k})\|\leq\lim_{k \to\infty}\max_{m_{k}<i\leq m_{k+1}}\|F_{\mathrm{nor}}^{\lambda}(z^{i})\|\leq(1+ \bar{\lambda}\lambda)\lim_{k\to\infty}\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k}})\|+ \bar{\lambda}\lim_{k\to\infty}s_{k}=0, \tag{33}\]
which implies \(\lim_{k\to\infty}\|F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{k})\|=0\) almost surely on \(\mathcal{L}\) as \(\mathbb{P}(\mathcal{M})=\mathbb{P}(\mathcal{L})\). We now continue with the convergence of the sequence \(\{\psi(x^{k})\}_{k}\). Invoking Lemma 2.10, Lemma 2.9, and \((a+b)^{2}\leq 2a^{2}+2b^{2}\), we have for any pair of integers \(m,n\in\mathbb{N}\), \(0\leq m<n\):
\[\psi(x^{n})-\psi(x^{m})\leq\left(\frac{L}{2}+\frac{1}{2\lambda} \right)\|z^{n}-z^{m}\|^{2}+\frac{\lambda}{2}\|F^{\lambda}_{\mathrm{nor}}(z^{m} )\|^{2}\] \[\qquad\qquad\leq(\tfrac{\lambda}{2}+(L+\lambda^{-1})(1+\bar{\tau} _{m,n}\tau_{m,n})^{2}\tau_{m,n}^{2})\|F^{\lambda}_{\mathrm{nor}}(z^{m})\|^{2}+( L+\lambda^{-1})(1+\bar{\tau}_{m,n}\tau_{m,n})^{2}s_{m,n}^{2},\]
where we applied the nonexpansiveness of the proximity operator and Young's inequality in the first estimate. Hence, choosing \(m=m_{k}\), \(n=i\in(m_{k},m_{k+1})\cap\mathbb{N}\) and utilizing \(\tau_{m_{k},i}\leq\tau_{m_{k},m_{k+1}}\leq T\leq\lambda\), \(\bar{\tau}_{m_{k},i}\leq\bar{\tau}_{m_{k},m_{k+1}}\leq\bar{\lambda}\), and \(s_{m_{k},i}\leq s_{m_{k},m_{k+1}}=s_{k}\), it then follows
\[\psi(x^{i})\leq\psi(x^{m_{k}})+[\tfrac{1}{2}+(L\lambda+1)(1+\bar{ \lambda}\lambda)^{2}]\lambda\cdot\|F^{\lambda}_{\mathrm{nor}}(z^{m_{k}})\|^{2 }+(L+\lambda^{-1})(1+\bar{\lambda}\lambda)^{2}s_{k}^{2}. \tag{34}\]
Similarly, setting \(m=i\in(m_{k},m_{k+1})\cap\mathbb{N}\), \(n=m_{k+1}\), we have \(\tau_{i,m_{k+1}}\leq\tau_{m_{k},m_{k+1}}\leq\lambda\), \(\bar{\tau}_{i,m_{k+1}}\leq\bar{\lambda}\), and
\[s_{i,m_{k+1}}=\max_{i<j\leq m_{k+1}}\left\|\sum\nolimits_{\ell=i}^{j-1} \alpha_{\ell}e^{\ell}\right\|=\max_{i<j\leq m_{k+1}}\left\|\sum\nolimits_{\ell =m_{k}}^{j-1}\alpha_{\ell}e^{\ell}-\sum\nolimits_{\ell=m_{k}}^{i-1}\alpha_{ \ell}e^{\ell}\right\|\leq 2s_{k},\]
which yields
\[\psi(x^{m_{k+1}})\leq\psi(x^{i})+[\tfrac{1}{2}+(L\lambda+1)(1+\bar{\lambda} \lambda)^{2}]\lambda\cdot\max_{m_{k}\leq i\leq m_{k+1}}\|F^{\lambda}_{\mathrm{ nor}}(z^{i})\|^{2}+4(L+\lambda^{-1})(1+\bar{\lambda}\lambda)^{2}s_{k}^{2}. \tag{35}\]
Combining these estimates and using the observations \(\lim_{k\to\infty}\max_{m_{k}\leq i\leq m_{k+1}}\|F^{\lambda}_{\mathrm{nor}}(z^ {i})\|^{2}=0\), \(\lim_{k\to\infty}s_{k}=0\), and \(\lim_{k\to\infty}\psi(x^{m_{k}})=\lim_{k\to\infty}H_{\xi}(z^{m_{k}})=\psi^{*}\), we can infer \(\psi(x^{k})\to\psi^{*}\) as \(k\to\infty\). Clearly, setting \(\mathbf{\psi}^{*}(\omega):=\psi^{*}\), we may extend \(\mathbf{\psi}^{*}\) for \(\omega\not\in\mathcal{M}\) to a full measurable function \(\mathbf{\psi}^{*}:\Omega\to\mathbb{R}\). This establishes \(\psi(\mathbf{x}^{k})\to\mathbf{\psi}^{*}\) almost surely on \(\mathcal{L}\). Convergence of \(\{\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{k})\|\}_{k}\) follows from \(\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{k})\|\leq\lambda\|F^{\lambda}_{\mathrm{ nor}}(\mathbf{z}^{k})\|\).
#### 2.5.2 Convergence and Complexity Under Global Lipschitz Continuity
Next, we consider the special case when \(\nabla f\) satisfies condition B.1 and is Lipschitz continuous with modulus \(\mathsf{L}\). Clearly, we then have \(\mathbf{L}(\omega)=\mathsf{L}\) for all \(\omega\), i.e., the event \(\mathcal{L}\) occurs surely. Following the proof of Lemma 2.13, this allows us to work with a simpler universal time window \(\mathsf{T}\). Specifically, inspired by (26), we can define \(\mathsf{T}\) as
\[\mathsf{T}:=\sup\big{\{}t\in\mathbb{R}_{+}:t\leq\min\{\tfrac{\lambda}{5},1\},\ \tfrac{1}{2t}\geq\xi(4\mathsf{L}-\lambda^{-1})+\mathsf{L},\ \mathsf{L}\bar{\lambda}^{2}t^{2}+5\bar{\lambda}^{2}t\leq\tfrac{8\xi}{25\lambda}, \tag{36}\] \[\text{and}\ \big{[}\tfrac{5}{t}+\mathsf{L}\big{]}\,(1+\bar{\lambda}t)^{2}\leq\tfrac{10}{t}\big{\}},\]
where \(\xi\) and \(\bar{\lambda}\) are given as \(\xi:=(2+2\mathsf{L}^{2}\lambda^{2})^{-1}\) and \(\bar{\lambda}:=(\mathsf{L}+2\lambda^{-1})\exp(\mathsf{L}\lambda+2)\). We can then mimic and simplify the derivations of Lemma 2.13 and Theorem 2.15 under the general error condition B.2. We summarize our observations for this case in the following theorem.
**Theorem 2.16**.: _Let \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-\(\mathsf{SGD}\) and suppose that the assumptions (A.1), (A.3), (B.1), and (B.2) are satisfied. Let us further assume \(\sum_{k=0}^{\infty}\alpha_{k}^{2}\max\{\mathsf{A},\alpha_{k}^{2}\}<\infty\). Then, it holds that_
* \(\|F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{k})\|\to 0\)_,_ \(\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{k})\|\to 0\)_, and_ \(\psi(\mathbf{x}^{k})\to\mathbf{\psi}^{*}\) _a.s., where_ \(\mathbf{\psi}^{*}:\Omega\to\mathbb{R}\) _is some random variable;_
* \(\mathbb{E}[\|F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{k})\|^{2}]\to 0\)_,_ \(\mathbb{E}[\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{k})\|^{2}]\to 0\)_, and_ \(\mathbb{E}[\psi(\mathbf{x}^{k})]\to\rho^{*}\)_, where_ \(\rho^{*}\in\mathbb{R}\) _is some constant._
The proof of Theorem 2.16 is similar to the proofs of Lemma 2.13 and Theorem 2.15. Hence, details are deferred to Appendix C. Under the more explicit condition \(\alpha_{k}\leq\mathsf{T}/5\) on the step sizes \(\{\alpha_{k}\}_{k}\), it is also possible to derive a non-asymptotic iteration complexity bound for \(\mathsf{norM}\)-\(\mathsf{SGD}\). We now present this convergence result for the simpler case \(\mathsf{A}=0\), i.e., under assumption (A.2). The proof of Theorem 2.17 closely follows the proof of Theorem 2.16 and can again be found in Appendix C.
**Theorem 2.17**.: _Let \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-\(\mathsf{SGD}\) and suppose that the assumptions (A.1)-(A.3) and (B.1) are satisfied. Let us further assume \(\alpha_{k}\in(0,\mathsf{T}/5]\) for all \(k\in\mathbb{N}\). Then, we have_
\[\min_{i=0,\ldots,k-1}\mathbb{E}[\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{i})\|^{2}] \leq\frac{D_{1}[H_{\xi}(z^{0})-\bar{\psi}]}{\sum_{i=0}^{k-1}\alpha_{i}}+\frac{D_ {2}\cdot\sum_{i=0}^{k-1}\alpha_{i}^{2}\sigma_{i}^{2}}{\sum_{i=0}^{k-1}\alpha_{i} },\]
_where \(D_{1}:=10(1+\lambda\bar{\lambda})^{2}\lambda^{2}\xi^{-1}\) and \(D_{2}:=\max\{200(1+\lambda\bar{\lambda})^{2}\lambda^{2}(\mathsf{T}\xi)^{-1},8\bar{\lambda}^{2}\lambda^{2}\mathsf{T}\}\)._
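For intuition, a quick specialization of this bound (our computation, not stated explicitly here): with constant steps \(\alpha_{i}\equiv\alpha\in(0,\mathsf{T}/5]\) and bounded variance \(\sigma_{i}\equiv\sigma\), we have \(\sum_{i=0}^{k-1}\alpha_{i}=k\alpha\) and \(\sum_{i=0}^{k-1}\alpha_{i}^{2}\sigma_{i}^{2}=k\alpha^{2}\sigma^{2}\), so the bound reads

\[\min_{i=0,\ldots,k-1}\mathbb{E}[\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{i})\|^{2}]\leq\frac{D_{1}[H_{\xi}(z^{0})-\bar{\psi}]}{k\alpha}+D_{2}\,\alpha\sigma^{2},\]

and choosing \(\alpha\propto 1/\sqrt{k}\) for a fixed horizon \(k\) balances the two terms and yields the familiar \(\mathcal{O}(1/\sqrt{k})\) rate.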
**Remark 2.18**.: _The results in Theorem 2.15 and Theorem 2.16 seem to be new for stochastic proximal gradient-type methods. In particular, it is not known whether \(\mathsf{prox}\)-\(\mathsf{SGD}\) possesses similar convergence properties in the general nonconvex setting considered here, cf. [32, 65, 28, 29]._
## 3 Convergence and Rates of Convergence Under the Kurdyka-Lojasiewicz Inequality
We now address the core questions (10) and (11) and study convergence properties of the stochastic processes \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) under the Kurdyka-Lojasiewicz (KL) inequality. We first state a definition of the KL inequality for the nonsmooth objective function \(\psi\), see, e.g., [4, 3, 5, 18].
**Definition 3.1** (KL Property).: _The mapping \(\psi\) is said to satisfy the KL inequality at a point \(\bar{x}\in\mathrm{dom}(\partial\psi)\) if there exist \(\eta\in(0,\infty]\), a neighborhood \(U\) of \(\bar{x}\), and a continuous, concave desingularizing function \(\varrho:[0,\eta)\to\mathbb{R}_{+}\) with_
\[\varrho\in C^{1}((0,\eta)),\quad\varrho(0)=0,\quad\text{and}\quad\varrho^{ \prime}(x)>0\quad\forall\;x\in(0,\eta),\]
_such that for all \(x\in U\cap\{x:0<|\psi(x)-\psi(\bar{x})|<\eta\}\) the KL inequality holds, i.e.,_
\[\varrho^{\prime}(|\psi(x)-\psi(\bar{x})|)\cdot\mathrm{dist}(0,\partial\psi(x ))\geq 1. \tag{37}\]
_Here, \(\partial\psi=\nabla f+\partial\varphi\) denotes the limiting subdifferential of \(\psi\)._
The set of Lojasiewicz functions is defined via
\[\mathfrak{L}:=\{\varrho:\mathbb{R}_{+}\to\mathbb{R}_{+}:\exists\;c>0,\, \theta\in[0,1):\varrho(x)=cx^{1-\theta}\}.\]
The desingularizing functions in \(\mathfrak{L}\) obviously satisfy the conditions in Definition 3.1 for all \(\eta>0\). If \(\psi\) satisfies (37) with \(\varrho(x)=cx^{1-\theta},c>0\), and \(\theta\in[0,1)\) at some \(\bar{x}\), then we say that \(\psi\) satisfies the KL inequality at \(\bar{x}\) with exponent \(\theta\). The KL inequality in Definition 3.1 is a slightly stronger variant of the classical KL condition that is usually stated without taking the absolute value of \(\psi(x)-\psi(\bar{x})\), see, e.g., [4, Definition 3.1] for comparison. However, the variant in Definition 3.1 holds for semialgebraic and subanalytic functions with the desingularizing mapping \(\varrho\) being chosen from the set of Lojasiewicz functions \(\mathfrak{L}\) -- underlining the generality of the KL property and its broad applicability; see [49, Theorem L1] and [15, 14, 16].
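For orientation, a minimal example (ours, not taken from the references): the strongly convex quadratic \(\psi(x)=\frac{\mu}{2}\|x-\bar{x}\|^{2}\), \(\mu>0\), satisfies (37) at \(\bar{x}\) with the Lojasiewicz function \(\varrho(s)=\sqrt{2/\mu}\,s^{1/2}\in\mathfrak{L}\), i.e., with KL exponent \(\theta=\frac{1}{2}\): for all \(x\neq\bar{x}\),

\[\varrho^{\prime}(|\psi(x)-\psi(\bar{x})|)\cdot\operatorname{dist}(0,\partial\psi(x))=\frac{1}{\sqrt{2\mu}}\Big{(}\frac{\mu}{2}\|x-\bar{x}\|^{2}\Big{)}^{-1/2}\cdot\mu\|x-\bar{x}\|=1.\]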
Let the stochastic processes \(\{\mathbf{z}^{k}\}_{k}\) and \(\{\mathbf{x}^{k}\}_{k}\) be generated by Algorithm1. We consider the events
\[\mathcal{X}:=\{\omega\in\Omega:\{\mathbf{x}^{k}(\omega)\}_{k}\text{ is bounded}\}\quad\text{and}\quad\mathcal{Z}:=\{\omega\in\Omega:\{\mathbf{z}^{k}( \omega)\}_{k}\text{ is bounded}\}.\]
and the (stochastic) accumulation point mappings \(\mathfrak{A}_{z}:\Omega\rightrightarrows\mathbb{R}^{d}\) and \(\mathfrak{A}_{x}:\Omega\rightrightarrows\mathbb{R}^{d}\),
\[\mathfrak{A}_{z}(\omega):=\{z\in\mathbb{R}^{d}:\exists\;\text{a subsequence }\{q_{k}\}_{k}\subseteq\mathbb{N}\text{ such that }\mathbf{z}^{q_{k}}(\omega)\to z\},\quad\text{and}\] \[\mathfrak{A}_{x}(\omega):=\{x\in\mathbb{R}^{d}:\exists\;\text{a subsequence }\{q_{k}\}_{k}\subseteq\mathbb{N}\text{ such that }\mathbf{x}^{q_{k}}(\omega)\to x\}. \tag{38}\]
In the following, we study convergence of \(\{\mathbf{x}^{k}\}_{k}\) conditioned on the event \(\mathcal{X}\). This restriction is a typical and ubiquitous prerequisite in the application of the KL framework, see, e.g., [3, 4, 5, 18, 72, 19], and in the stochastic approximation literature [60, 8, 50, 20]. We first collect several properties of the accumulation point mappings \(\mathfrak{A}_{x}\) and \(\mathfrak{A}_{z}\).
**Lemma 3.2**.: _Let the assumptions (A.1)-(A.4) be satisfied and suppose that \(\nabla f\) is locally Lipschitz continuous. Then, the following statements are valid:_
* (a) _It holds that_ \(\mathcal{Z}\subseteq\mathcal{X}\)_,_ \(\mathcal{X},\mathcal{Z}\subseteq\mathcal{L}\)_, and_ \(\mathcal{Z}\cap\mathcal{U}=\mathcal{X}\cap\mathcal{U}\)_._
* (b) _The sets_ \(\mathfrak{A}_{x}(\omega)\) _and_ \(\mathfrak{A}_{z}(\omega)\) _are nonempty and compact for all_ \(\omega\in\mathcal{X}\cap\mathcal{U}\)_._
* (c) \(\mathfrak{A}_{x}(\omega)\subseteq\mathrm{crit}(\psi)\) _and_ \(\mathfrak{A}_{z}(\omega)\subseteq\{z\in\mathbb{R}^{d}:F^{\lambda}_{\mathrm{nor}}(z)=0\}\) _for all_ \(\omega\in\mathcal{X}\cap\mathcal{U}\)_._
* (d) _The mappings_ \(\psi\) _and_ \(H_{\xi}\)_,_ \(\xi>0\)_, are finite and constant on_ \(\mathfrak{A}_{x}(\omega)\) _and_ \(\mathfrak{A}_{z}(\omega)\)_, respectively, for all_ \(\omega\in\mathcal{X}\cap\mathcal{U}\)_._
Here, \(\mathcal{U}\) is the master event introduced in (24) and utilized in Theorem 2.15.
Proof.: We first consider statement (a). The inclusion \(\mathcal{Z}\subseteq\mathcal{X}\) is a direct consequence of the Lipschitz continuity of the proximity operator. Moreover, due to the local Lipschitz continuity of \(\nabla f\), we clearly have \(\mathcal{Z}\subseteq\mathcal{X}\subseteq\mathcal{L}\). Applying Theorem 2.15, it follows \(\lim_{k\to\infty}\|F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{k}(\omega))\|=0\) and \(\psi(\mathbf{x}^{k}(\omega))\to\mathbf{\psi}^{*}(\omega)\) for all \(\omega\in\mathcal{X}\cap\mathcal{U}\subseteq\mathcal{L}\cap\mathcal{U}=\mathcal{M}\). Utilizing the definition of the normal map, we can further infer \(\|\mathbf{z}^{k}(\omega)\|\leq\lambda(\|F^{\lambda}_{\mathrm{nor}}(\mathbf{z}^{k}(\omega))\|+\|\nabla f(\mathbf{x}^{k}(\omega))\|)+\|\mathbf{x}^{k}(\omega)\|\). This shows \(\mathcal{X}\cap\mathcal{U}\subseteq\mathcal{Z}\cap\mathcal{U}\). The parts (b) and (c) then follow directly from (a) and Theorem 2.15. To proceed, let \(\omega\in\mathcal{X}\cap\mathcal{U}\) and \(\bar{x}\in\mathfrak{A}_{x}(\omega)\) be arbitrary. Then, there exist \(\bar{z}\in\mathfrak{A}_{z}(\omega)\) and \(\{\mathbf{z}^{q_{k}}(\omega)\}_{k}\) with \(\bar{x}=\mathrm{prox}_{\lambda\varphi}(\bar{z})\) and \(\mathbf{z}^{q_{k}}(\omega)\to\bar{z}\). This implies
\[\mathbf{\psi}^{*}(\omega)=\lim_{k\to\infty}\psi(\mathbf{x}^{q_{k}}(\omega)) =\lim_{k\to\infty}\varphi(\mathrm{prox}_{\lambda\varphi}(\mathbf{z}^{q_{k}}( \omega)))+f(\mathbf{x}^{q_{k}}(\omega))\] \[=\lim_{k\to\infty}\mathrm{env}_{\lambda\varphi}(\mathbf{z}^{q_{k}}( \omega))-\frac{1}{2\lambda}\|\mathbf{z}^{q_{k}}(\omega)-\mathbf{x}^{q_{k}}(\omega)\|^{2} +f(\bar{x})=\psi(\bar{x}),\]
where we used the continuity of the Moreau envelope. Similarly, we can show that \(H_{\xi}\) is constant and finite on \(\mathfrak{A}_{z}(\omega)\). This finishes the proof of Lemma 3.2.
Since the approximate descent properties derived in the previous section are stated using the merit function \(H_{\xi}\), it is natural to ask whether the KL properties of the original objective function \(\psi\) and \(H_{\xi}\) are connected. We present such a connection in the following lemma.
**Lemma 3.3**.: _Suppose that \(\psi\) satisfies the KL inequality at a stationary point \(\bar{x}=\operatorname{prox}_{\lambda\varphi}(\bar{z})\in\operatorname{crit}(\psi)\) with desingularizing function \(\varrho\in\mathfrak{L}\) and \(\varrho(x)=cx^{1-\theta}\). Then, \(H_{\xi}\), \(\xi>0\), satisfies the following KL-type property at \(\bar{z}\):_
\[|H_{\xi}(z)-H_{\xi}(\bar{z})|^{\hat{\theta}}\leq\hat{c}\cdot\|F_{\operatorname {nor}}^{\lambda}(z)\|\quad\forall\ z\in V\cap\{z\in\mathbb{R}^{d}:|H_{\xi}(z)-H _{\xi}(\bar{z})|<\hat{\eta}\},\]
_for some \(\hat{\eta}\in(0,\infty]\) and some neighborhood \(V\) of \(\bar{z}\) and where \(\hat{\theta}=\max\{\theta,\frac{1}{2}\}\) and \(\hat{c}=((c(1-\theta))^{1/\hat{\theta}}+\frac{\xi\lambda}{2})^{\hat{\theta}}\)._
Since Lemma 3.3 is a straightforward extension of [74, Lemma 5.3], we will omit a proof here. The properties (b)-(d) in Lemma 3.2 allow us to apply the uniformized KL inequality [18, Lemma 6] in a trajectory-based fashion. Specifically, suppose that the KL property holds on the set \(\mathfrak{A}_{x}(\mathcal{X}\cap\mathcal{U})\) and each desingularizing function is chosen from \(\mathfrak{L}\). Then, by [18, Lemma 6], for all \(\omega\in\mathcal{X}\cap\mathcal{U}\) there exist \(\zeta_{x}=\boldsymbol{\zeta}_{x}(\omega),\eta_{x}=\boldsymbol{\eta}_{x}(\omega)>0\) and \(\varrho=\boldsymbol{\varrho}(\omega)\in\mathfrak{L}\) such that for all \(\bar{x}\in\mathfrak{A}_{x}(\omega)\) and \(x\in U_{\zeta_{x},\eta_{x}}:=\{x\in\mathbb{R}^{d}:\operatorname{dist}(x,\mathfrak{A}_{x}(\omega))<\zeta_{x}\}\cap\{x\in\mathbb{R}^{d}:0<|\psi(x)-\psi^{*}|<\eta_{x}\}\), we have:
\[\varrho^{\prime}(|\psi(x)-\psi^{*}|)\cdot\operatorname{dist}(0,\partial\psi(x ))\geq 1,\]
where \(\psi^{*}=\boldsymbol{\psi}^{*}(\omega)\). By Lemma 3.3 there then exist \(\zeta_{z}=\boldsymbol{\zeta}_{z}(\omega),\eta_{z}=\boldsymbol{\eta}_{z}( \omega)>0\) and \(\hat{\theta}=\hat{\boldsymbol{\theta}}(\omega)\in[\frac{1}{2},1)\), \(\hat{c}=\hat{\boldsymbol{c}}(\omega)\) such that for all \(\bar{z}\in\mathfrak{A}_{z}(\omega)\), it holds that
\[|H_{\xi}(z)-\psi^{*}|^{\hat{\theta}}\leq\hat{c}\cdot\|F_{\operatorname{nor}}^{\lambda}(z)\|\quad\forall\ z\in V_{\zeta_{z},\eta_{z}}:=\{z:\operatorname{dist}(z,\mathfrak{A}_{z}(\omega))<\zeta_{z}\}\cap\{z:|H_{\xi}(z)-\psi^{*}|<\eta_{z}\}. \tag{39}\]
Clearly, if \(\mathfrak{A}_{x}(\mathcal{X}\cap\mathcal{U})\) has only a finite number of disjoint components or is contained in a compact set, this KL property can be further uniformized with respect to \(\omega\). (We do not need such a fully uniformized version of (39) here). We now summarize our main assumptions of this section.
**Assumption 3.4**.: _We will work with the conditions:_
(C.1) _The gradient mapping_ \(\nabla f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) _is locally Lipschitz continuous._

(C.2) _There is an event_ \(\mathcal{K}\in\mathcal{F}\) _with_ \(\mathbb{P}(\mathcal{K})=1\) _such that the KL inequality holds for every_ \(\bar{x}\in\mathfrak{A}_{x}(\mathcal{K})\) _and each respective desingularizing function_ \(\varrho=\boldsymbol{\varrho}(\omega)\) _can be chosen from the class of Lojasiewicz functions_ \(\mathfrak{L}\)_._
Let us again emphasize that the KL inequality holds naturally for subanalytic or semialgebraic functions, and hence assumption (C.2) is satisfied for a very general and broad class of problems arising in practice. We refer to [18, Section 5] and [49, 15, 14, 16] for related results and discussions.
### Strong Convergence Results
We are now in a position to present strong convergence results for the stochastic process \(\{\boldsymbol{x}^{k}\}_{k}\) based on the KL inequality and our previous observations.
**Theorem 3.5**.: _Let \(\{\boldsymbol{x}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-\(\mathsf{SGD}\) and suppose that the assumptions (A.1)-(A.3) and (C.1)-(C.2) are satisfied. For a general mapping \(g:\mathbb{R}\to\mathbb{R}_{++}\), we consider the following condition on the step sizes:_
\[\sum_{k=0}^{\infty}\alpha_{k}^{2}g(\gamma_{k})^{2}\sigma_{k}^{2}<\infty\quad \text{where}\quad\gamma_{k}:=\sum_{i=0}^{k-1}\alpha_{i}. \tag{40}\]
1. _If (_40_) holds for_ \(g(x):=x^{r}\) _and some_ \(r>\frac{1}{2}\)_, then_ \(\{\boldsymbol{x}^{k}\}_{k}\) _converges to a_ \(\operatorname{crit}(\psi)\)_-valued mapping_ \(\boldsymbol{x}^{*}:\Omega\to\operatorname{crit}(\psi)\) _almost surely on_ \(\mathcal{X}\) _and the events_ \[\left\{\omega:\limsup_{k\to\infty}\ \max\{|\psi(\boldsymbol{x}^{k}(\omega))- \boldsymbol{\psi}^{*}|,\|F_{\operatorname{nat}}^{\lambda}(\boldsymbol{x}^{k} (\omega))\|^{2}\}\cdot\gamma_{k}^{\Psi_{r}(\boldsymbol{\theta}(\omega))}<\infty\right\}\] _and_ \(\{\omega:\limsup_{k\to\infty}\ \|\boldsymbol{x}^{k}(\omega)- \boldsymbol{x}^{*}(\omega)\|\cdot\gamma_{k}^{\Psi_{r}^{\boldsymbol{r}}( \boldsymbol{\theta}(\omega))}<\infty\}\) _occur a.s. on_ \(\mathcal{X}\)_, where_ \(\boldsymbol{\theta}:\Omega\to[0,1)\) _denotes the KL exponent function of_ \(\boldsymbol{x}^{*}\) _and the rate mappings_ \(\Psi_{r},\Psi_{r}^{\pi}:[0,1)\to\mathbb{R}_{+}\) _are given by:_ \[\Psi_{r}(\theta):=\begin{cases}2r&\text{if }\ 0\leq\theta<\frac{1+2r}{4r},\\ \frac{1}{2\theta-1}&\text{if }\ \frac{1+2r}{4r}\leq\theta<1\end{cases}\quad\text{and} \quad\Psi_{r}^{\pi}(\theta):=\begin{cases}r-\frac{1}{2}&\text{if }\ 0\leq\theta<\frac{1+2r}{4r},\\ \frac{1-\theta}{2\theta-1}&\text{if }\ \frac{1+2r}{4r}\leq\theta<1.\end{cases}\]
2. _If (_40_) holds for_ \(g(x):=\frac{\exp(rx)}{x^{p}}\) _and some_ \(r>0,p\geq 0\)_, then_ \(\{\boldsymbol{x}^{k}\}_{k}\) _converges to a_ \(\operatorname{crit}(\psi)\)_-valued mapping_ \(\boldsymbol{x}^{*}:\Omega\to\operatorname{crit}(\psi)\) _almost surely on_ \(\mathcal{X}\) _and the events_ \[\left\{\omega:\limsup_{k\to\infty}\ \max\{|\psi(\boldsymbol{x}^{k}(\omega))- \boldsymbol{\psi}^{*}|,\|F_{\operatorname{nat}}^{\lambda}(\boldsymbol{x}^{k}( \omega))\|^{2}\}\cdot g(2\gamma_{k})<\infty\right\}\] _and_ \(\{\omega:\limsup_{k\to\infty}\ \|\boldsymbol{x}^{k}(\omega)- \boldsymbol{x}^{*}(\omega)\|\cdot g(\gamma_{k})<\infty\}\) _occur a.s. on_ \(\{\omega\in\mathcal{X}:\boldsymbol{\theta}(\omega)\in[0,\frac{1}{2}]\) _and_ \(\bar{\boldsymbol{c}}(\omega)r<1\}\) _where_ \(\bar{\boldsymbol{c}}:=10(\boldsymbol{c}^{2}\boldsymbol{\xi}^{-1}+\frac{\lambda}{2})\) _and_ \(\boldsymbol{\theta}\) _and_ \(\boldsymbol{c}\) _denote the associated KL exponent and parameter functions of_ \(\boldsymbol{x}^{*}\)
Next, we discuss a specialization of Theorem 3.5 to the popular family of diminishing step sizes
\[\alpha_{k}=\alpha/(\beta+k)^{\gamma},\quad\alpha,\beta>0,\quad\gamma\in(\tfrac{ 1}{2},1],\]
cf. [80, 26, 21]. For ease of exposition, we will also assume that the variance parameters \(\{\sigma_{k}\}_{k}\) introduced in condition (A.2) are bounded.
**Corollary 3.6**.: _Let the stochastic process \(\{\mathbf{x}^{k}\}_{k}\) be generated by \(\mathsf{norM}\)-\(\mathsf{SGD}\) using the step size rule_
\[\alpha_{k}=\alpha/(\beta+k)^{\gamma},\quad\alpha,\beta>0,\quad\gamma\in( \tfrac{2}{3},1]\]
_and suppose that the conditions (A.1)-(A.2) and (C.1)-(C.2) hold with \(\sup_{k}\sigma_{k}\leq\sigma<\infty\). Then, \(\{\mathbf{x}^{k}\}_{k}\) converges to a \(\operatorname{crit}(\psi)\)-valued mapping \(\mathbf{x}^{*}:\Omega\to\operatorname{crit}(\psi)\) almost surely on \(\mathcal{X}\) with the following rates:_
* (a) _If \(\gamma\in(\tfrac{2}{3},1)\), then for arbitrary \(\varepsilon>0\), the events_ \[\left\{\omega:\limsup_{k\to\infty}\ \max\{|\psi(\mathbf{x}^{k}(\omega))-\mathbf{\psi}^{*}|,\|F^{\lambda}_{\operatorname{nat}}(\mathbf{x}^{k}(\omega))\|^{2}\}\cdot k^{\Phi_{\gamma}(\mathbf{\theta}(\omega))-\varepsilon}<\infty\right\}\] _and \(\{\omega:\limsup_{k\to\infty}\|\mathbf{x}^{k}(\omega)-\mathbf{x}^{*}(\omega)\|\cdot k^{\Phi_{\gamma}^{*}(\mathbf{\theta}(\omega))-\varepsilon}<\infty\}\) occur almost surely on \(\mathcal{X}\), where \(\mathbf{\theta}:\Omega\to[0,1)\) denotes the KL exponent function of \(\mathbf{x}^{*}\) and \(\Phi_{\gamma},\Phi_{\gamma}^{*}:[0,1)\to\mathbb{R}_{+}\) are defined via_ \[\Phi_{\gamma}(\theta):=\begin{cases}2\gamma-1&\text{ if }0\leq\theta<\frac{\gamma}{4\gamma-2},\\ \frac{1-\gamma}{2\theta-1}&\text{ if }\frac{\gamma}{4\gamma-2}\leq\theta<1\end{cases}\quad\text{and}\quad\Phi_{\gamma}^{*}(\theta):=\begin{cases}\frac{3\gamma}{2}-1&\text{ if }0\leq\theta<\frac{\gamma}{4\gamma-2},\\ \frac{(1-\theta)(1-\gamma)}{2\theta-1}&\text{ if }\frac{\gamma}{4\gamma-2}\leq\theta<1.\end{cases}\]
* (b) _If \(\gamma=1\), then for arbitrary \(\varepsilon>0\) the events_ \[\left\{\omega:\limsup_{k\to\infty}\ (\max\{|\psi(\mathbf{x}^{k}(\omega))-\mathbf{\psi}^{*}|,\|F^{\lambda}_{\operatorname{nat}}(\mathbf{x}^{k}(\omega))\|^{2}\}\cdot k)/\log(k)^{1+\varepsilon}<\infty\right\}\] _and \(\{\omega:\limsup_{k\to\infty}\ (\|\mathbf{x}^{k}(\omega)-\mathbf{x}^{*}(\omega)\|\cdot k^{\frac{1}{2}}\big{)}\big{/}\log(k)^{\frac{1}{2}+\varepsilon}<\infty\}\) occur almost surely on \(\{\omega\in\mathcal{X}:\mathbf{\theta}(\omega)\in[0,\tfrac{1}{2}]\text{ and }\alpha>\tfrac{1}{2}\bar{\mathbf{c}}(\omega)\}\), where \(\bar{\mathbf{c}}\) is given as in Theorem 3.5 (b)._
Proof.: The specific step size policy implies \(\alpha_{k}\to 0\) and \(\gamma_{k}=\sum_{i=0}^{k-1}\alpha_{i}\to\infty\) as \(k\) tends to infinity. In addition, using the integral comparison test, we have
\[\gamma_{k}=\Theta(k^{1-\gamma})\quad\text{if }\gamma\in(0,1)\quad\text{and} \quad\alpha\log(k/\beta+1)\leq\gamma_{k}\leq\alpha/\beta+\alpha\log(k/\beta+1) \quad\text{if }\gamma=1.\]
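For completeness, the underlying computation (ours) is the elementary integral estimate

\[\int_{0}^{k}\frac{\alpha}{(\beta+t)^{\gamma}}\,\mathrm{d}t=\frac{\alpha}{1-\gamma}\big{[}(\beta+k)^{1-\gamma}-\beta^{1-\gamma}\big{]}=\Theta(k^{1-\gamma})\quad\text{for }\gamma\in(0,1),\]

while the case \(\gamma=1\) produces the logarithmic bounds displayed above.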
Hence, for \(\gamma\in(0,1)\), it follows \(\sum_{k=0}^{\infty}\alpha_{k}^{2}\gamma_{k}^{2r}\leq c^{\prime}\alpha^{2} \sum_{k=0}^{\infty}1/(\beta+k)^{2\gamma-2(1-\gamma)r}\) where \(c^{\prime}\) is a suitable constant. This series is finite if \(r<\tfrac{1}{2}(2\gamma-1)/(1-\gamma)\) and we have \(\tfrac{1}{2}(2\gamma-1)/(1-\gamma)>\tfrac{1}{2}\) if and only if \(\gamma>\tfrac{2}{3}\). In the case \(\gamma=1\), the series \(\sum_{k=0}^{\infty}\alpha_{k}^{2}\gamma_{k}^{2r}\) converges for every \(r>\tfrac{1}{2}\). Theorem 3.5 (a) then guarantees the stated almost sure convergence of \(\{\mathbf{x}^{k}\}_{k}\) on the event \(\mathcal{X}\). To establish the claimed rates, we consider \(\gamma\in(\tfrac{2}{3},1)\) and \(\gamma=1\) separately.
**Part (a).** Let us set \(r=\tfrac{2\gamma-1}{2(1-\gamma)}-\tfrac{\varepsilon}{1-\gamma}\) and let \(\varepsilon>0\) be sufficiently small ensuring \(r>\tfrac{1}{2}\). Applying Theorem 3.5 (a), we obtain
\[\limsup_{k\to\infty}y_{k}\cdot k^{(1-\gamma)\Psi_{r}(\theta)}<\infty\quad\text{ and}\quad\limsup_{k\to\infty}\|x^{k}-x^{*}\|\cdot k^{(1-\gamma)\Psi_{r}^{*}(\theta)}<\infty,\]
where \(y_{k}:=\max\{|\psi(x^{k})-\psi^{*}|,\|F^{\lambda}_{\operatorname{nat}}(x^{k})\|^{2}\}\), \(\Psi_{r},\Psi_{r}^{*}:[0,1)\to\mathbb{R}_{+}\) are defined in Theorem 3.5, and \(\{x^{k}\}_{k}=\{\mathbf{x}^{k}(\omega)\}_{k}\), \(\theta=\mathbf{\theta}(\omega)\), etc. are suitable realizations. Using the definition of \(\Psi_{r}\) and \(\Psi_{r}^{*}\) and
\[\tfrac{1+2r}{4r}=\tfrac{\gamma}{2(2\gamma-1)}+\tfrac{(1-\gamma)\varepsilon}{(2 \gamma-1)(2\gamma-1-2\varepsilon)},\quad 2r(1-\gamma)=2\gamma-1-2\varepsilon,\quad\text{and}\quad(1- \gamma)\left(r-\tfrac{1}{2}\right)=\tfrac{3\gamma}{2}-1-\varepsilon,\]
we can then re-express the rate \((1-\gamma)\Psi_{r}(\theta)\) in terms of the step size parameter \(\gamma\in(\tfrac{2}{3},1)\) as follows:
\[(1-\gamma)\Psi_{r}(\theta)\geq\Phi_{\gamma}(\theta)-2\varepsilon\quad\text{ where}\quad\Phi_{\gamma}(\theta):=\begin{cases}2\gamma-1&\text{if }0\leq\theta<\frac{\gamma}{4\gamma-2},\\ \tfrac{1-\gamma}{2\theta-1}&\text{if }\frac{\gamma}{4\gamma-2}\leq\theta<1.\end{cases}\]
Similarly, we have \((1-\gamma)\Psi_{r}^{*}(\theta)\geq\Phi_{\gamma}^{*}(\theta)-\varepsilon\). This verifies the rates stated in Corollary 3.6 (a).
**Part (b).** In the case \(\gamma=1\), let us define \(g(x)=\exp(rx/\alpha)/x^{p}\) with \(r=\tfrac{1}{2}\) and \(p>\tfrac{1}{2}\). Then, utilizing the previous calculations, we obtain
\[\sum\nolimits_{k=0}^{\infty}\alpha_{k}^{2}g(\gamma_{k})^{2}\sigma_{k}^{2}\leq \alpha^{2-2p}\sigma^{2}\beta^{-1}\exp(\beta^{-1}){\sum\nolimits_{k=0}^{\infty}}1 /[(\beta+k)\log(k/\beta+1)^{2p}]<\infty.\]
Thus, Theorem 3.5 (b) is applicable and the rates in (b) follow from \(g(\gamma_{k})=\Omega(\tfrac{\sqrt{k}}{\log(k)^{p}})\).
**Remark 3.7**.: _Choosing \(\alpha_{k}=\alpha/(\beta+k)^{\gamma}\), the complexity bound for \(\mathsf{norM}\text{-}\mathsf{SGD}\) in Theorem 2.17 simplifies to_
\[\min_{i=0,\ldots,k-1}\,\mathbb{E}[\|F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{i})\|^{ 2}]=\mathcal{O}(1/k^{1-\gamma}). \tag{41}\]
_Hence, noticing \(2\gamma-1>1-\gamma\) and \(\frac{1-\gamma}{2\theta-1}>1-\gamma\) for all \(\gamma\in(\frac{2}{3},1]\) and \(\theta<1\), the rates for \(\{\|F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{k})\|^{2}\}_{k}\) derived in Corollary 3.6 are faster and allow us to beat the standard complexity results if \(\gamma\in(\frac{2}{3},1]\). In addition, in the case \(\gamma=1\) and \(\mathbf{\theta}\in[0,\frac{1}{2}]\) (and if \(\alpha\) is sufficiently large), we obtain rates of the form_
\[|\psi(\mathbf{x}^{k})-\mathbf{\psi}^{*}|=\mathcal{O}(\log(k)^{1+\varepsilon}/k)\quad \text{and}\quad\|\mathbf{x}^{k}-\mathbf{x}^{*}\|=\mathcal{O}(\log(k)^{\frac{1}{2}+ \varepsilon}/\sqrt{k})\]
_a.s. on \(\mathcal{X}\). These rates match the strongly convex setting (up to logarithmic factors); see [83, 77] and [78, 66, 59] (for the smooth case). Naturally and in strong contrast to (41), all of the results in Theorem3.5 and Corollary3.6 apply to last iterates. We are not aware of comparable convergence results and rates for other stochastic proximal methods._
**Remark 3.8**.: _Theorem 3.5 and Corollary 3.6 seem to provide novel KL-based insights for \(\mathsf{SGD}\). In [30, Theorem 1.6] and under assumptions that are in line with the setting in Corollary 3.6, Dereich and Kassing prove (a.s.) convergence of \(\mathsf{SGD}\)-iterates to stationary points if the step size parameter \(\gamma\) additionally satisfies the stricter condition \(\gamma>\frac{3}{4}\). Benaim [9, Theorem 1.1] establishes convergence of \(\mathsf{SGD}\)-type algorithms in the special case \(\alpha_{k}=\alpha/k\) with rates of the form \(\|\mathbf{x}^{k}-\mathbf{x}^{*}\|=\mathcal{O}(1/\log(k)^{c})\) a.s. for some \(c>0\). In [92, Theorem 2.2 and Corollary 2.2], Tadic derives convergence rates for \(\mathsf{SGD}\) that are more related to our results. Specifically, using our notation, the corresponding rate functions \(\Psi_{r}^{x,o}\) and \(\Phi_{\gamma}^{x,o}\) in [92] can be expressed via_

\[\Psi_{r}^{x,o}(\theta)=\min\{r-1,\tfrac{1-\theta}{2\theta-1}\},\quad r>1\quad\text{and}\quad\Phi_{\gamma}^{x,o}(\theta):=\min\{2\gamma-\tfrac{3}{2},\tfrac{(1-\theta)(1-\gamma)}{2\theta-1}\},\quad\gamma\in(\tfrac{3}{4},1)\]

_where \(\theta\in(\frac{1}{4},1)\). Clearly, the rates shown in Theorem 3.5 and Corollary 3.6 are faster and improve the existing results even for \(\mathsf{SGD}\). To the best of our knowledge, Corollary 3.6 provides the first convergence guarantees for \(\mathsf{SGD}\) when using the larger step sizes \(\gamma\in(\frac{2}{3},\frac{3}{4}]\). In Figure 1, we illustrate and compare the obtained rates for \(\{\mathbf{x}^{k}\}_{k}\)._
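To complement this comparison, the following snippet (ours; purely illustrative) evaluates the rate exponents of Corollary 3.6 (a) against the baseline exponent \(1-\gamma\) from (41):

```python
def Phi(gamma, theta):
    # exponent for max{|psi(x^k)-psi*|, ||F_nat(x^k)||^2}, Corollary 3.6 (a)
    return 2 * gamma - 1 if theta < gamma / (4 * gamma - 2) else (1 - gamma) / (2 * theta - 1)

def Phi_star(gamma, theta):
    # exponent for ||x^k - x*||, Corollary 3.6 (a)
    return 1.5 * gamma - 1 if theta < gamma / (4 * gamma - 2) else (1 - theta) * (1 - gamma) / (2 * theta - 1)

for gamma in (0.7, 0.8, 0.9):
    for theta in (0.3, 0.5, 0.8):
        print(f"gamma={gamma}, theta={theta}: Phi={Phi(gamma, theta):.3f}, "
              f"Phi*={Phi_star(gamma, theta):.3f}, baseline={1 - gamma:.3f}")
```

For every tested pair, the printed exponents dominate the baseline \(1-\gamma\), in line with Remark 3.7.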
### Proof of Theorem 3.5
The proof of Theorem 3.5 is split into several steps. We first provide general estimates that will allow us to bound error terms that are related to \(\mathbf{s}_{k}\) and \(\mathbf{u}_{k}\). In **Step 2**, we then establish strong limit convergence of the stochastic processes \(\{\mathbf{x}^{k}\}_{k}\) and \(\{\mathbf{z}^{k}\}_{k}\) for general choices of the rate mapping \(g\). Our convergence proof generally follows the classical KL-based analysis framework for deterministic algorithms, [3, 5, 18]. However, careful adjustments of the standard proof techniques are necessary to cope with the non-descent nature of \(\mathsf{norM}\text{-}\mathsf{SGD}\), the stochastic errors, and the time window-based mechanisms; cf. [36, 58] for related extensions. In **Step 3** and **Step 4**, we derive the convergence rates presented in Theorem 3.5 for the two choices \(g(x)=x^{r}\) and \(g(x)=\exp(rx)/x^{p}\) in a step-by-step fashion. Our rate analysis mainly relies on the derived step size and error dynamics and on a novel generalized version of Chung's lemma.
**Step 1:**_General error estimates_. The following preparatory lemma provides explicit lower and upper bounds for the accumulated step sizes \(\gamma_{m_{n+k}}=\sum_{i=0}^{m_{n+k}-1}\alpha_{i}\) in terms of the iteration index "\(k\)". Moreover, estimates for error terms appearing in **Step 2** are presented. The results in Lemma 3.9 allow us to study the effects of a large variety of growth parameters \(\{\beta_{k}\}_{k}\) which will be important in the rate analysis in **Step 3** and **Step 4** of our proof. A verification of Lemma 3.9 can be found in Appendix D.1.
**Lemma 3.9**.: _Suppose (A.2)-(A.3) hold. Let \(\{\beta_{k}\}_{k}\) be defined via \(\beta_{k}:=g(\sum_{i=0}^{k-1}\alpha_{i})\) and let \(\{m_{k}\}_{k}\) denote the time indices associated with \(T\). Setting \(\gamma_{k}:=\sum_{i=0}^{k-1}\alpha_{i}\), we consider the following assumptions on \(g:\mathbb{R}_{++}\to\mathbb{R}_{++}\):_
* (i) _It holds that_ \(\sum_{k=0}^{\infty}\alpha_{k}^{2}g(\gamma_{k})^{2}\sigma_{k}^{2}<\infty\)_._
* (ii) _There exists an interval_ \(I:=[\iota,\infty)\) _such that_ \(g^{\prime}(x)>0\) _for all_ \(x\in I\)_._
_Then, assumption (A.4) is satisfied and for all \(\delta\in(0,1)\), there exists \(N_{\delta}\in\mathbb{N}\) such that:_
* (a) \(\delta Tk\leq\gamma_{m_{n+k}}-\gamma_{m_{n}}\leq Tk\) _for all_ \(k\in\mathbb{N}\) _and_ \(n\geq N_{\delta}\)_._
* (b) \(\sum_{k=n}^{\infty}g(\gamma_{m_{k}})^{-2\vartheta}\leq g(\gamma_{m_{n}})^{-2\vartheta}+(\delta T)^{-1}\int_{\gamma_{m_{n}}}^{\infty}\;g(t)^{-2\vartheta}\operatorname{d}\!t\) _for all_ \(\vartheta\in(0,1)\) _and_ \(n\geq N_{\delta}\)_._
Notice that the parameters \(\delta\) and \(N_{\delta}\) in Lemma 3.9 can generally be chosen so as to satisfy the conditions in (T.2).
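For instance (our computation), for the polynomial choice \(g(x)=x^{r}\) used in Theorem 3.5 (a), the integral in (b) evaluates to

\[\int_{\gamma_{m_{n}}}^{\infty}\frac{\mathrm{d}t}{t^{2\vartheta r}}=\frac{\gamma_{m_{n}}^{1-2\vartheta r}}{2\vartheta r-1}\quad\text{whenever }2\vartheta r>1;\]

since the exponent \(\vartheta\in[\hat{\theta},1)\) may later be chosen arbitrarily close to \(1\), finiteness can always be ensured provided \(r>\frac{1}{2}\), matching the condition in Theorem 3.5 (a).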
**Step 2:**_Convergence of \(\{\mathbf{x}^{k}\}_{k}\) and application of the KL inequality._ We first define the growth parameters \(\{\beta_{k}\}_{k}\) via
\[\beta_{k}:=g(\sum_{i=0}^{k-1}\alpha_{i}),\]
where \(g\) is a general mapping as specified in (i)-(ii) of Lemma 3.9. Due to (40), assumption (A.4) is then satisfied. Let \(\mathcal{U}\) be the master event introduced in (24) (for this choice of \(\{\beta_{k}\}_{k}\)). Applying Theorem 2.15 and Lemma 3.2, all statements in Lemma 3.2 hold on \(\mathcal{X}\cap\mathcal{U}\) and we have \(F^{\lambda}_{\operatorname{nor}}(\mathbf{z}^{k}(\omega)),F^{\lambda}_{\operatorname{nat}}(\mathbf{x}^{k}(\omega))\to 0\) and \(\psi(\mathbf{x}^{k}(\omega))\to\mathbf{\psi}^{*}(\omega)\) for all \(\omega\in\mathcal{X}\cap\mathcal{U}\).
We now fix an arbitrary sample \(\omega\in\mathcal{X}\cap\mathcal{U}\cap\mathcal{K}\) and consider the corresponding realizations \(\{x^{k}\}_{k}=\{\mathbf{x}^{k}(\omega)\}_{k}\), \(\{z^{k}\}_{k}=\{\mathbf{z}^{k}(\omega)\}_{k}\), \(\xi=\mathbf{\xi}(\omega)\) etc. Using Lemma 2.13, there exists \(\bar{T}=\bar{\mathbf{T}}(\omega)>0\) such that for every time window \(T\in\mathbb{Q}\cap(0,\bar{T}]\) there is \(K_{0}\equiv K_{0}[T]\in\mathbb{N}\) such that
\[H_{\xi}(z^{m_{k+1}})-H_{\xi}(z^{m_{k}})\leq-\frac{\xi T}{5}\|F^{\lambda}_{ \operatorname{nor}}(z^{m_{k}})\|^{2}+\frac{5s_{k}^{2}}{T}\quad\forall\;k\geq K _{0}, \tag{42}\]
where \(\{m_{k}\}_{k}\equiv\{m_{k}[T]\}_{k}\) are the associated time indices and we set \(s_{k}=\mathbf{s}_{m_{k}[T],m_{k+1}[T]}(\omega)\). As discussed in (39) and due to \(\omega\in\mathcal{K}\), there further exist \(\zeta_{z}=\mathbf{\zeta}_{z}(\omega),\eta_{z}=\mathbf{\eta}_{z}(\omega)>0\) and \(\hat{\theta}\in[\frac{1}{2},1)\) such that the following uniformized KL-type property holds on \(\mathfrak{A}_{z}(\omega)\):
\[|H_{\xi}(z)-\psi^{*}|^{\hat{\theta}}\leq\hat{c}\cdot\|F^{\lambda}_{ \operatorname{nor}}(z)\|\quad\forall\;z\in V_{\mathfrak{C}_{z},\eta_{z}}.\]
This inequality is also satisfied for every exponent \(\vartheta\in[\hat{\theta},1)\) as long as \(|H_{\xi}(z)-\psi^{*}|\leq 1\). Since \(\{H_{\xi}(z^{k})\}_{k}\) converges to \(\psi^{*}\) and we have \(\operatorname{dist}(z^{k},\mathfrak{A}_{z}(\omega))\to 0\) by definition (see (38)), there is \(K_{1}\geq K_{0}\) such that \(z^{m_{k}}\in V_{\zeta_{z},\eta_{z}}\) and
\[|H_{\xi}(z^{m_{k}})-\psi^{*}|^{\vartheta}\leq\hat{c}\cdot\|F^{\lambda}_{ \operatorname{nor}}(z^{m_{k}})\|\quad\forall\;k\geq K_{1}. \tag{43}\]
As the time window \(T\) can be chosen arbitrarily from \(\mathbb{Q}\cap(0,\bar{T}]\), we may further assume
\[T\leq\min\{1,5\hat{c}\}\quad\text{and}\quad T(L+2\lambda^{-1})\exp(T(L+2 \lambda^{-1}))\leq\sqrt{3/2}-1, \tag{44}\]
which immediately implies \(\tau_{m_{k},m_{k+1}}\bar{\tau}_{m_{k},m_{k+1}}\leq\sqrt{3/2}-1\) for all \(k\geq K_{1}\).
Setting \(d_{k}:=\max_{m_{k}<i\leq m_{k+1}}\|x^{i}-x^{m_{k}}\|\) and invoking \((a+b)^{2}\leq\frac{4}{3}a^{2}+4b^{2}\), Lemma 2.9 now yields \(d_{k}^{2}\leq\frac{3}{2}[T\|F^{\lambda}_{\operatorname{nor}}(z^{m_{k}})\|+s_{k}]^{2}\leq 2T^{2}\|F^{\lambda}_{\operatorname{nor}}(z^{m_{k}})\|^{2}+6s_{k}^{2}\) and
\[-\xi T\|F^{\lambda}_{\operatorname{nor}}(z^{m_{k}})\|^{2}\leq-\frac{1}{2}\xi T^{-1 }d_{k}^{2}+3\xi T^{-1}s_{k}^{2}.\]
Consequently, we obtain
\[H_{\xi}(z^{m_{k+1}})-H_{\xi}(z^{m_{k}})\leq-\frac{\xi T}{5}\|F^{\lambda}_{\rm nor} (z^{m_{k}})\|^{2}+\frac{5s_{k}^{2}}{T}\leq-\frac{\xi T}{10}\|F^{\lambda}_{\rm nor }(z^{m_{k}})\|^{2}-\frac{\xi d_{k}^{2}}{20T}+\frac{6s_{k}^{2}}{T}\]
and setting \(u_{k}:=\frac{6}{T}\sum_{i=k}^{\infty}s_{i}^{2}\), it follows
\[H_{\xi}(z^{m_{k+1}})+u_{k+1}\leq H_{\xi}(z^{m_{k}})+u_{k}-\frac{\xi T}{10}\|F^ {\lambda}_{\rm nor}(z^{m_{k}})\|^{2}-\frac{\xi d_{k}^{2}}{20T}\quad\forall\;k \geq K_{1}. \tag{45}\]
The growth parameters \(\{\beta_{k}\}_{k}\) define a monotonically increasing sequence (for all \(k\) sufficiently large). Hence, due to \(\omega\in\mathcal{X}\cap\mathcal{U}\) and as in the proof of Theorem 2.15, we can infer \(u_{0}<\infty\). Let us further define \(\hat{\varrho}(x):=\hat{c}x^{1-\vartheta}/(1-\vartheta)\) and \(\Delta_{k}:=\hat{\varrho}(H_{\xi}(z^{m_{k}})-\psi^{*}+u_{k})\). Since the sequence \(\{H_{\xi}(z^{m_{k}})-\psi^{*}+u_{k}\}_{k}\) is monotonically decreasing with \(H_{\xi}(z^{m_{k}})+u_{k}\to\psi^{*}\), \(\Delta_{k}\) is well-defined (for all \(k\geq K_{1}\)). Moreover, we may write (43) as \([\hat{\varrho}^{\prime}(|H_{\xi}(z^{m_{k}})-\psi^{*}|)]^{-1}\leq\|F^{\lambda} _{\rm nor}(z^{m_{k}})\|\). For all \(k\geq K_{1}\), we then obtain
\[\Delta_{k}-\Delta_{k+1} \geq\hat{\varrho}^{\prime}(H_{\xi}(z^{m_{k}})-\psi^{*}+u_{k})(H_ {\xi}(z^{m_{k}})+u_{k}-[H_{\xi}(z^{m_{k+1}})+u_{k+1}])\] \[\geq\hat{\varrho}^{\prime}(|H_{\xi}(z^{m_{k}})-\psi^{*}|+u_{k}) \left(\frac{\xi T}{10}\|F^{\lambda}_{\rm nor}(z^{m_{k}})\|^{2}+\frac{\xi d_{k }^{2}}{20T}\right)\] \[\geq\frac{\xi T\|F^{\lambda}_{\rm nor}(z^{m_{k}})\|^{2}/10+\xi d_ {k}^{2}/(20T)}{\left[\hat{\varrho}^{\prime}(|H_{\xi}(z^{m_{k}})-\psi^{*}|) \right]^{-1}+[\hat{\varrho}^{\prime}(u_{k})]^{-1}}\geq\frac{\xi T^{2}\|F^{ \lambda}_{\rm nor}(z^{m_{k}})\|^{2}+\xi d_{k}^{2}/2}{10[T\|F^{\lambda}_{\rm nor }(z^{m_{k}})\|+T[\hat{\varrho}^{\prime}(u_{k})]^{-1}]}.\]
Here, the first inequality uses the concavity of \(\hat{\varrho}\), the second inequality is due to (45) and the fact that \(\hat{\varrho}^{\prime}(x)=\hat{c}x^{-\vartheta}\) is monotonically decreasing, the third inequality follows from the subadditivity of \(x\mapsto[\hat{\varrho}^{\prime}(x)]^{-1}=\hat{c}^{-1}x^{\vartheta}\), and the last inequality applies the uniformized KL-type property (43). Rearranging the terms in the latter estimate, it holds that
\[T^{2}\|F^{\lambda}_{\rm nor}(z^{m_{k}})\|^{2}+\frac{d_{k}^{2}}{2}\leq 10( \Delta_{k}-\Delta_{k+1})\xi^{-1}\cdot(T\|F^{\lambda}_{\rm nor}(z^{m_{k}})\|+T [\hat{\varrho}^{\prime}(u_{k})]^{-1}).\]
Furthermore, taking the square root and utilizing \(\frac{1}{2}(a+b)^{2}\leq a^{2}+b^{2}\), \(ab\leq\frac{1}{2}a^{2}+\frac{1}{2}b^{2}\), and the subadditivity of the square root mapping, we obtain
\[\frac{T}{\sqrt{2}}\|F^{\lambda}_{\rm nor}(z^{m_{k}})\|+\frac{d_{k}}{2}\leq \frac{5\sqrt{2}(\Delta_{k}-\Delta_{k+1})}{\xi}+\frac{T}{\sqrt{2}}\|F^{\lambda} _{\rm nor}(z^{m_{k}})\|+\frac{T}{\sqrt{2}}[\hat{\varrho}^{\prime}(u_{k})]^{-1}.\]
Multiplying this inequality with \(2\), summing the resulting estimate for \(k=n,n+1,\dots\) and applying \(\Delta_{k}\to 0\) as \(k\to\infty\), this yields
\[\sum\nolimits_{k=n}^{\infty}\!\!d_{k}\leq 10\sqrt{2}\xi^{-1}\Delta_{n}+\sqrt{2}T \sum\nolimits_{k=n}^{\infty}\left[\hat{\varrho}^{\prime}(u_{k})\right]^{-1} \quad\forall\;n\geq K_{1}. \tag{46}\]
Let us now set \(\varepsilon_{n}:=\sqrt{2}T\sum_{k=n}^{\infty}[\hat{\varrho}^{\prime}(u_{k})]^{-1}\). Using the monotonicity of \(\{\beta_{k}\}_{k}\), \(\sum_{i=k}^{\infty}\!\beta_{m_{i}}^{2}s_{i}^{2}\to 0\), \(\delta<1\), \(T\leq 1\), and Lemma 3.9 (b), we can infer
\[\varepsilon_{n}=\frac{6^{\vartheta}\sqrt{2}T}{\hat{c}T^{\vartheta}} \!\sum\nolimits_{k=n}^{\infty}\left[\sum\nolimits_{i=k}^{\infty}\!s_{i}^{2}\right]^ {\vartheta} \leq\frac{6^{\vartheta}\sqrt{2}T}{\hat{c}T^{\vartheta}}\!\sum\nolimits _{k=n}^{\infty}\!\beta_{m_{k}}^{-2\vartheta}\left[\sum\nolimits_{i=k}^{\infty}\! \beta_{m_{i}}^{2}s_{i}^{2}\right]^{\vartheta}\] \[\leq\delta T\!\sum\nolimits_{k=n}^{\infty}\!\beta_{m_{k}}^{-2 \vartheta}\leq\beta_{m_{n}}^{-2\vartheta}+\int_{\gamma_{m_{n}}}^{\infty}\frac{1}{g( t)^{2\vartheta}}\;\mathrm{d}t \tag{47}\]
for all \(n\geq K_{2}\) and some \(K_{2}\geq K_{1}\). At this point, let us assume \(\sup_{n}\varepsilon_{n}<\infty\). Hence, by (46), it follows
\[\sum\nolimits_{k=0}^{\infty}\!\|x^{m_{k+1}}-x^{m_{k}}\|\leq\sum\nolimits_{k=0}^{ \infty}\!d_{k}<\infty.\]
Let \(j,\ell\in\mathbb{N}\), \(\ell>j\) be arbitrary. Then, there are \(n^{\prime},n^{\prime\prime}\in\mathbb{N}\), \(n^{\prime\prime}\geq n^{\prime}\) with \(m_{n^{\prime}}<j\leq m_{n^{\prime}+1}\) and \(m_{n^{\prime\prime}}<\ell\leq m_{n^{\prime\prime}+1}\). Using the triangle inequality, this yields
\[\|x^{\ell}-x^{j}\|\leq\|x^{j}-x^{m_{n^{\prime}}}\|+\|x^{\ell}-x^{m_{n^{\prime \prime}}}\|+\sum\nolimits_{k=n^{\prime}}^{n^{\prime\prime}-1}\!\|x^{m_{k+1}}-x^{m _{k}}\|\leq 2\!\sum\nolimits_{k=n^{\prime}}^{\infty}\!d_{k}\to 0 \tag{48}\]
as \(j\) (and hence \(n^{\prime}\)) tends to infinity. Thus, \(\{x^{k}\}_{k}\) is a Cauchy sequence and by Theorem 2.15 and \(\omega\in\mathcal{X}\cap\mathcal{U}\cap\mathcal{K}\), \(\{x^{k}\}_{k}\) converges to some stationary point \(x^{*}\in\operatorname{crit}(\psi)\). In addition, we have \(z^{k}=x^{k}-\lambda\nabla f(x^{k})+\lambda F^{\lambda}_{\rm nor}(z^{k})\to x^{*}- \lambda\nabla f(x^{*})=:z^{*}\) and \(x^{*}=\operatorname{prox}_{\lambda\varphi}(z^{*})\). Noticing \(\mathbb{P}(\mathcal{U}\cap\mathcal{K})=1\), this establishes \(\mathbf{x}^{k}\to\mathbf{x}^{*}\) and \(\mathbf{z}^{k}\to\mathbf{z}^{*}\) almost surely on \(\mathcal{X}\) (or \(\mathcal{Z}\)).
**Step 3:**_Specific analysis and convergence rates for \(g(x)=x^{r}\)._ In the following, we verify that the assumption \(\sup_{n}\varepsilon_{n}<\infty\) -- made in **Step 2** -- is satisfied for the specific choice \(g(x)=x^{r}\), \(r>1/2\). We then complete the proof of Theorem 3.5 (a) in **Step 3.2**-**Step 3.4** and establish convergence rates for the sequences \(\{|\psi(\mathbf{x}^{k})-\mathbf{\psi}^{*}|\}_{k}\), \(\{\|F^{\lambda}_{\rm nat}(\mathbf{x}^{k})\|^{2}\}_{k}\), and \(\{\mathbf{x}^{k}\}_{k}\).
**Step 3.1:**_Specific error estimates for \(g(x)=x^{r}\)._ Due to \(r>1/2\), there is \(\varepsilon^{\prime}>0\) such that \(r=1/2+\varepsilon^{\prime}/2\). Thanks to (40) and \(g^{\prime}(x)=rx^{r-1}>0\), the conditions (i)-(ii) in Lemma 3.9 are satisfied for this choice of \(g\) and the derivations in **Step 2** are all fully applicable. Selecting \(\vartheta=\max\{\hat{\theta},2/(2+\varepsilon^{\prime})\}\), we obtain \(2r\vartheta-1\geq\varepsilon^{\prime}/(2+\varepsilon^{\prime})\), and
\[\int_{\gamma_{m_{n}}}^{\infty}\frac{1}{g(t)^{2\vartheta}}\,{\rm d}t=\int_{ \gamma_{m_{n}}}^{\infty}\frac{1}{t^{2r\vartheta}}\,{\rm d}t=\frac{1}{2r \vartheta-1}\frac{1}{\gamma_{m_{n}}^{2r\vartheta-1}}\leq\frac{2+\varepsilon^{ \prime}}{\varepsilon^{\prime}}\frac{1}{\gamma_{m_{n}}^{\varepsilon^{\prime}/( 2+\varepsilon^{\prime})}}.\]
This establishes \(\varepsilon_{n}\to 0\) and \(\sup_{n}\varepsilon_{n}<\infty\).
**Step 3.2:**_Preparations and Chung's lemma._ We start with general observations that will form the basis of our analysis. Utilizing the special form of the desingularizing function \(\hat{\varrho}\), the derivations in **Step 2** can be further extended. In particular, rearranging (42), applying the KL inequality (43), and recalling \(u_{k}:=\frac{6}{T}\sum_{i=k}^{\infty}s_{i}^{2}\), we have
\[\frac{\xi T}{5\hat{c}^{2}}|H_{\xi}(z^{m_{k}})-\psi^{*}|^{2\vartheta}\leq[H_{ \xi}(z^{m_{k}})+u_{k}-\psi^{*}]-[H_{\xi}(z^{m_{k+1}})+u_{k+1}-\psi^{*}].\]
Adding \(\xi Tu_{k}^{2\vartheta}/(5\hat{c}^{2})\) on both sides and invoking the convexity-based estimate \(|a+b|^{2\vartheta}\leq 2^{2\vartheta-1}(|a|^{2\vartheta}+|b|^{2\vartheta})\) for \(\vartheta\geq\frac{1}{2}\), we obtain
\[\frac{2^{1-2\vartheta}\xi T}{5\hat{c}^{2}}|H_{\xi}(z^{m_{k}})+u_{k}-\psi^{*}|^ {2\vartheta}\leq y_{k}-y_{k+1}+\frac{\xi T}{5\hat{c}^{2}}u_{k}^{2\vartheta}, \tag{49}\]
where \(y_{k}:=H_{\xi}(z^{m_{k}})+u_{k}-\psi^{*}\). As shown in **Step 2**, \(\{y_{k}\}_{k}\) is a non-negative, monotonically decreasing sequence that converges to zero. Moreover, defining \(\nu_{k}:=\gamma_{m_{k}}=\sum_{i=0}^{m_{k}-1}\alpha_{i}\), it holds that
\[g(\nu_{k})^{2}u_{k}=\beta_{m_{k}}^{2}u_{k}\leq\frac{6}{T}{\sum _{i=k}^{\infty}}\beta_{m_{i}}^{2}s_{i}^{2}\to 0\quad\text{as}\quad k\to\infty. \tag{50}\]
Hence, there is \(K_{2}\geq K_{1}\) such that \(u_{k}^{2\vartheta}\leq 2^{1-2\vartheta}g(\nu_{k})^{-4\vartheta}\) for all \(k\geq K_{2}\) and we may rewrite the estimate (49) as
\[y_{k+1}\leq y_{k}-\frac{T}{c_{\vartheta}}y_{k}^{2\vartheta}+\frac{T}{c_{ \vartheta}}\frac{1}{g(\nu_{k})^{4\vartheta}}\quad\text{and}\quad c_{\vartheta} :=\frac{5\hat{c}^{2}}{2^{1-2\vartheta}\xi},\quad\forall\ k\geq K_{2}. \tag{51}\]
Notice that the choice of the time window \(T\) in (44) ensures \(T<c_{\vartheta}\). As in **Step 2**, the inequality (51) holds for general growth mappings \(g\) and for samples \(\omega\in\mathcal{X}\cap\mathcal{U}\cap\mathcal{K}\). Similar recursions have also been derived in [58] when analyzing a random reshuffling method. We will now generally follow the strategies utilized in the proof of [58, Theorem 3.10]. However, a more careful discussion is necessary to cope with the different dynamics and stochastic nature of \(\mathsf{norM}\mathsf{-SGD}\). The following abstract result is a straightforward extension of Chung's lemma that allows us to establish convergence rates for certain general sequences, cf. [26, Lemma 1 and 4] and [79, Lemma 4 and 5 (Section 2.2)]. The proof of Lemma 3.10 is presented in Appendix D.2.
**Lemma 3.10** (Generalized Chung's Lemma).: _Let the functions \(s,t:\mathbb{R}\to\mathbb{R}_{++}\) and the sequences \(\{a_{k}\}_{k}\), \(\{b_{k}\}_{k}\) be given with_
\[a_{k+1}\leq\left(1-\frac{1}{s(b_{k})}\right)a_{k}+\frac{1}{t(b_{k})}.\]
_We consider the following assumptions on \(s\), \(t\), and \(\{b_{k}\}_{k}\):_
* _There exists an interval_ \(I=(\iota,\infty)\) _such that_ \(s\) _and_ \(t\) _are continuously differentiable on_ \(I\) _and we have_ \(s(x)>1\) _for all_ \(x\in I\)_. In addition, the mapping_ \(\kappa:=s/t\) _is non-increasing and convex on_ \(I\)_._
* _It holds that_ \(b_{k}\to\infty\) _and there exists_ \(\mathsf{B}\) _such that_ \(b_{k+1}\leq b_{k}+\mathsf{B}\) _for all_ \(k\) _sufficiently large. Furthermore, we have_ \(\sum_{k=0}^{\infty}1/s(b_{k})=\infty\)_._
* _There is_ \(\beta\in(0,1)\) _such that_ \(\mathsf{B}[s^{\prime}(x)-\kappa(x)t^{\prime}(x)]\geq-1+\beta\) _for all_ \(x\in I\)_._
_Then, it holds that \(\limsup_{k\to\infty}a_{k}/\kappa(b_{k})<\infty\)._
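Before applying the lemma, a small numerical illustration may be helpful; all functions and constants below are toy choices (not taken from our analysis) that satisfy (i)-(iii), and the printed ratio \(a_{k}/\kappa(b_{k})\) indeed stays bounded:

```python
import numpy as np

# Toy instance of the generalized Chung lemma: s(x) = 4 (constant), t(x) = 4 x**2,
# hence kappa(x) = s(x)/t(x) = x**(-2); b_k increases with constant increment B = 0.5.
s = lambda x: 4.0
t = lambda x: 4.0 * x ** 2
kappa = lambda x: 1.0 / x ** 2

a, b = 1.0, 1.0
for k in range(1, 200001):
    a = (1.0 - 1.0 / s(b)) * a + 1.0 / t(b)
    b += 0.5
    if k % 50000 == 0:
        print(k, a / kappa(b))  # ratio stays bounded (here it approaches ~1)
```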
We now apply Chung's lemma to (51) to establish rates for the auxiliary sequence \(\{y_{k}\}_{k}\). Our discussion will depend on different possible choices of the KL parameter \(\vartheta\).
**Case 1:**\(\vartheta=\frac{1}{2}\). In this case, the recursion (51) can be rewritten as:
\[y_{k+1}\leq\left[1-\frac{T}{c_{\vartheta}}\right]y_{k}+\frac{T}{c_{\vartheta} \nu_{k}^{2r}}.\]
Setting \(s(x):=c_{\vartheta}/T\) and \(t(x):=c_{\vartheta}x^{2r}/T\), the function \(\kappa(x):=s(x)/t(x)=x^{-2r}\) is non-increasing and convex on the interval \((0,\infty)\) when \(r>0\). Furthermore, it holds that
\[s^{\prime}(x)-\kappa(x)t^{\prime}(x)=-\frac{2rc_{\vartheta}}{Tx}\to 0,\quad \text{as }x\to\infty.\]
The conditions \(s(x)>1\) and \(\sum_{k=0}^{\infty}\frac{1}{s(\nu_{k})}=\infty\) are obviously satisfied (for all \(x\)). Consequently, applying Lemma 3.10 with \(a_{k}=y_{k}\), \(b_{k}=\nu_{k}\), and \(\mathsf{B}=1\) (thanks to \(T\leq 1\)), we obtain \(\limsup_{k\to\infty}y_{k}\nu_{k}^{2r}<\infty\) if \(r>0\).
**Case 2:**\(\vartheta\in(\frac{1}{2},\frac{1+2r}{4r})\). Due to \(2\vartheta>1\), the function \(x\mapsto h_{\vartheta}(x):=x^{2\vartheta}\) is convex on \(\mathbb{R}_{+}\), i.e.,
\[h_{\vartheta}(y)-h_{\vartheta}(x)\geq 2\vartheta x^{2\vartheta-1}(y-x)=2 \vartheta x^{2\vartheta-1}y-2\vartheta x^{2\vartheta}\quad\forall\;x,y\in \mathbb{R}_{+}. \tag{52}\]
Rearranging the terms in (51) and using the convexity of \(h_{\vartheta}\), we have
\[y_{k+1}\leq y_{k}-\frac{T}{c_{\vartheta}}[h_{\vartheta}(y_{k})-h_{\vartheta} (\nu_{k}^{-2r})]\leq\left[1-\frac{2\vartheta T}{c_{\vartheta}}\frac{1}{\nu_{ k}^{2r(2\vartheta-1)}}\right]y_{k}+\frac{2\vartheta T}{c_{\vartheta}}\frac{1}{\nu_{ k}^{4r\vartheta}}.\]
Next, we verify that Lemma 3.10 is applicable. We set
\[s(x):=\frac{c_{\vartheta}}{2\vartheta T}\cdot x^{2r(2\vartheta-1)},\quad t( x):=\frac{c_{\vartheta}}{2\vartheta T}\cdot x^{4r\vartheta},\quad\text{and} \quad\kappa(x):=\frac{s(x)}{t(x)}=x^{-2r}.\]
Hence, (if \(r>0\)) \(\kappa\) is non-increasing and convex on \(\mathbb{R}_{+}\). In addition, the condition \(\vartheta<\frac{1+2r}{4r}\) implies \(4\vartheta r-2r-1<0\) and it follows
\[s^{\prime}(x)-\kappa(x)t^{\prime}(x)=-\frac{c_{\vartheta}r}{\vartheta T}\cdot x ^{4\vartheta r-2r-1}\to 0\quad\text{as}\quad x\to\infty.\]
Furthermore, due to \(2r(2\vartheta-1)=4\vartheta r-2r<1\) and using the bound in Lemma 3.9 (a) for \(\nu_{n+k}=\gamma_{m_{n+k}}\), it holds that \(\sum_{k=0}^{\infty}1/s(\nu_{k})=\infty\). Lemma 3.10 then yields \(\limsup_{k\to\infty}y_{k}\nu_{k}^{2r}<\infty\) (if \(r>0\)).
**Case 3:**\(\vartheta\in[\frac{1+2r}{4r},1)\). (This requires \(r>\frac{1}{2}\)). Let us introduce \(c_{1}=[\frac{c_{\vartheta}}{\vartheta T(2\vartheta-1)}]^{1/(2\vartheta-1)}\). Similar to the steps in **Case 2**, we can rewrite the recursion (51) as
\[y_{k+1} \leq y_{k}-\frac{T}{c_{\vartheta}}[h_{\vartheta}(y_{k})-h_{ \vartheta}(c_{1}/\nu_{k}^{\frac{1}{2\vartheta-1}})]+\frac{T}{c_{\vartheta}} \frac{1}{\nu_{k}^{4r\vartheta}}-\frac{T}{c_{\vartheta}}\frac{c_{1}^{2 \vartheta}}{\nu_{k}^{2\vartheta/(2\vartheta-1)}}\] \[\leq\left[1-\frac{2\vartheta Tc_{1}^{2\vartheta-1}}{c_{\vartheta} }\frac{1}{\nu_{k}}\right]y_{k}+\frac{T}{c_{\vartheta}}\frac{1}{\nu_{k}^{4r \vartheta}}+(2\vartheta-1)\frac{Tc_{1}^{2\vartheta}}{c_{\vartheta}}\frac{1}{ \nu_{k}^{2\vartheta/(2\vartheta-1)}},\]
where the last line utilizes the convexity of \(h_{\vartheta}\) cf. (52). Thanks to \(\vartheta\geq\frac{1+2r}{4r}\), we have \(\frac{2\vartheta}{2\vartheta-1}\leq 4r\vartheta\). Thus, there exists a constant \(c_{2}>0\) such that
\[y_{k+1}\leq\left[1-\frac{2\vartheta Tc_{1}^{2\vartheta-1}}{c_{\vartheta}}\frac{ 1}{\nu_{k}}\right]y_{k}+c_{2}/\nu_{k}^{\frac{2\vartheta}{2\vartheta-1}}=\left[ 1-\frac{2}{2\vartheta-1}\frac{1}{\nu_{k}}\right]y_{k}+c_{2}/\nu_{k}^{\frac{2 \vartheta}{2\vartheta-1}},\]
for all \(k\) sufficiently large. (The last equality follows from the definition of \(c_{1}\).) Next, in order to apply Lemma 3.10, we define the following functions:
\[s(x):=\frac{2\vartheta-1}{2}x,\quad t(x):=c_{2}^{-1}x^{\frac{2\vartheta}{2 \vartheta-1}},\quad\text{and}\quad\kappa(x):=\frac{s(x)}{t(x)}=\frac{c_{2}(2 \vartheta-1)}{2}x^{-\frac{1}{2\vartheta-1}}.\]
We have \(s(x)\to\infty\) as \(x\to\infty\) and the mapping \(\kappa\) is non-increasing and convex on the interval \((0,\infty)\). Furthermore, for all \(x\in(0,\infty)\), it holds that
\[s^{\prime}(x)-\kappa(x)t^{\prime}(x)=\frac{2\vartheta-1}{2}-\frac{c_{2}(2 \vartheta-1)}{2}x^{-\frac{1}{2\vartheta-1}}\cdot\frac{2\vartheta}{c_{2}(2 \vartheta-1)}x^{\frac{2\vartheta}{2\vartheta-1}-1}=-\frac{1}{2}.\]
In addition, the condition \(\sum_{k=0}^{\infty}1/s(\nu_{k})=\infty\) directly follows from Lemma 3.9 (a). Consequently, Lemma 3.10 is applicable and we can infer \(\limsup_{k\to\infty}y_{k}\nu_{k}^{1/(2\vartheta-1)}<\infty\).
In summary, it follows \(\limsup_{k\to\infty}y_{k}\nu_{k}^{\min\{2r,1/(2\vartheta-1)\}}<\infty\). We now express this rate in terms of the original KL exponent \(\theta=\mathbf{\theta}(\omega)\). (Thanks to **Step 2** and **Step 3.1**, we have \(x^{k}\to x^{*}\) and \(z^{k}\to z^{*}\) and we no longer need to work with uniformized exponents in (43)). Since \(\vartheta\in[\max\{\frac{1}{2},\theta\},1)\) can be adjusted freely and the optimal rate is obtained when \(\vartheta=\max\{\frac{1}{2},\theta\}\), we can conclude
\[\limsup_{k\to\infty}\,y_{k}\cdot\nu_{k}^{\Psi_{r}(\theta)}<\infty\quad\text{ where}\quad\Psi_{r}(\theta)=\begin{cases}2r&\text{if }0\leq\theta<\frac{2r+1}{4r},\\ \frac{1}{2\theta-1}&\text{if }\frac{2r+1}{4r}\leq\theta<1,\end{cases}\quad r >\frac{1}{2}. \tag{53}\]
**Step 3.3:**_Full convergence rate for \(\{F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{k})\}_{k}\) and \(\{\psi(\mathbf{x}^{k})\}_{k}\)._ Next, we extend the preliminary convergence result (53) to full rates for the sequences \(\{F_{\mathrm{nat}}^{\lambda}(\mathbf{x}^{k})\}_{k}\) and \(\{\psi(\mathbf{x}^{k})\}_{k}\). Using the approximate descent condition (42) (cf. (45)) and the non-negativity of \(\{y_{k}\}_{k}\), we have \(\frac{\xi T}{5}\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k}})\|^{2}\leq y_{k}-y_{k+1}\leq y_{k}\). This implies
\[\limsup_{k\to\infty}\,\|F_{\mathrm{nor}}^{\lambda}(z^{m_{k}})\|^{2}\cdot\nu_{ k}^{\Psi_{r}(\theta)}<\infty. \tag{54}\]
Invoking (32) and \((a+b)^{2}\leq 2a^{2}+2b^{2}\), we can infer \(\|F_{\mathrm{nor}}^{\lambda}(z^{i})\|^{2}\leq 2(1+\bar{\lambda}\lambda)^{2}\|F_{ \mathrm{nor}}^{\lambda}(z^{m_{k}})\|^{2}+2\bar{\lambda}^{2}s_{k}^{2}\) for all \(i=m_{k}+1,\ldots,m_{k+1}\). Noticing \(s_{k}^{2}\nu_{k}^{2r}=g(\nu_{k})^{2}s_{k}^{2}\leq\frac{T}{6}g(\nu_{k})^{2}u_{k }\to 0\) (cf. (50)) and \(\Psi_{r}(\theta)\leq 2r\), this readily yields
\[\limsup_{k\to\infty}\left[\max_{m_{k}<i\leq m_{k+1}}\|F_{\mathrm{nor}}^{ \lambda}(z^{i})\|^{2}\right]\cdot\nu_{k}^{\Psi_{r}(\theta)}<\infty. \tag{55}\]
Next, for all \(j\in\mathbb{N}\), there is \(k\in\mathbb{N}\) with \(m_{k}<j\leq m_{k+1}\) and we have \(\|F_{\mathrm{nor}}^{\lambda}(z^{j})\|^{2}\leq\max_{m_{k}<i\leq m_{k+1}}\|F_{ \mathrm{nor}}^{\lambda}(z^{i})\|^{2}\). Moreover, due to \(\sum_{i=0}^{\infty}\alpha_{i}=\infty\), it follows \(\gamma_{j}=\sum_{i=0}^{j-1}\alpha_{i}\geq\gamma_{m_{k}}=\nu_{k}\) and
\[\gamma_{j}\leq\sum\nolimits_{i=0}^{m_{k+1}-1}\alpha_{i}=\gamma_{m_{k}}+\sum \nolimits_{i=m_{k}}^{m_{k+1}-1}\!\!\alpha_{i}\leq\gamma_{m_{k}}+T\leq 2\gamma_{m_{k} }=2\nu_{k}\]
for all \(j\) and \(k\) sufficiently large. Hence, combining the latter discussions and \(\|F_{\mathrm{nat}}^{\lambda}(x^{j})\|\leq\lambda\|F_{\mathrm{nor}}^{\lambda}(z^{j})\|\), we can conclude
\[\limsup_{j\to\infty}\,\|F_{\mathrm{nat}}^{\lambda}(x^{j})\|^{2}\cdot\gamma_{j} ^{\Psi_{r}(\theta)}\leq\lambda^{2}\,\limsup_{j\to\infty}\,\|F_{\mathrm{nor}}^ {\lambda}(z^{j})\|^{2}\cdot\gamma_{j}^{\Psi_{r}(\theta)}<\infty.\]
Noticing \(|\psi(x^{m_{k}})-\psi^{*}|\leq y_{k}+\frac{\bar{c}\lambda}{2}\|F_{\mathrm{nor }}^{\lambda}(z^{m_{k}})\|^{2}+u_{k}\) and applying (50), (53), (54), and \(\Psi_{r}(\theta)\leq 2r\), we can further infer
\[\limsup_{k\to\infty}\,|\psi(x^{m_{k}})-\psi^{*}|\cdot\nu_{k}^{\Psi_{r}(\theta) }<\infty. \tag{56}\]
Mimicking the proof of Theorem 2.15 and re-using the estimates (34) and (35), it is possible to derive the bounds
\[\psi(x^{j})-\psi^{*}\leq|\psi(x^{m_{k}})-\psi^{*}|+c_{3}\|F_{\mathrm{nor}}^{ \lambda}(z^{m_{k}})\|^{2}+c_{4}s_{k}^{2}\]
and \(\psi(x^{j})-\psi^{*}\geq-|\psi(x^{m_{k+1}})-\psi^{*}|-c_{3}\max_{m_{k}\leq i \leq m_{k+1}}\|F_{\mathrm{nor}}^{\lambda}(z^{m_{i}})\|^{2}-4c_{4}s_{k}^{2}\) for all \(j\in\mathbb{N}\), \(m_{k}<j<m_{k+1}\), where \(c_{3}=[\frac{1}{2}+(L\lambda+1)(1+\bar{\lambda}\lambda)^{2}]\lambda\) and \(c_{4}=(L+\lambda^{-1})(1+\bar{\lambda}\lambda)^{2}\). Combining this observation with (54), (55), \(s_{k}^{2}\nu_{k}^{2r}\to 0\), and \(\gamma_{j}\leq 2\nu_{k}\leq 2\nu_{k+1}\), we can readily show \(\limsup_{j\to\infty}\,|\psi(x^{j})-\psi^{*}|\cdot\gamma_{j}^{\Psi_{r}(\theta)}<\infty\).
**Step 3.4:**_Convergence rate for \(\{x^{k}\}_{k}\)._ Finally, we discuss convergence of the iterates \(\{x^{k}\}_{k}\). Recalling (46) and using the definition of \(\hat{\varrho}\), we have
\[\sum\nolimits_{k=n}^{\infty}\!d_{k}\leq 10\sqrt{2}\xi^{-1}\Delta_{n}+\varepsilon_{n}= \frac{10\sqrt{2}\hat{c}}{(1-\vartheta)\xi}y_{n}^{1-\vartheta}+\varepsilon_{n} \quad\forall\;n\geq K_{1}.\]
As shown in **Step 3.1**, there exists a constant \(c_{1}>0\) such that \(\varepsilon_{n}\leq c_{1}/\nu_{n}^{2r\vartheta-1}\) where \(\nu_{n}=\gamma_{m_{n}}=\sum_{i=0}^{m_{n}-1}\!\alpha_{i}\) provided that \(2r\vartheta>1\). To ensure \(\lim_{n\to\infty}\varepsilon_{n}=0\), we primarily consider the case \(\vartheta\in(1/2r,1)\). Since the exponent \(\vartheta\) can be adjusted freely, we can increase \(\vartheta\) to fulfill \(\vartheta>1/2r\) whenever \(\hat{\theta}\leq 1/2r\). Thus, based on the rate of \(\{y_{n}\}_{n}\) derived in **Step 3.2**, it follows
\[\limsup_{n\to\infty}\left(\sum\nolimits_{k=n}^{\infty}\!d_{k}\right)\nu_{n}^{ \Upsilon_{r}(\vartheta)}<\infty\quad\text{where}\quad\Upsilon_{r}(\vartheta):= \min\{2r\vartheta-1,2r(1-\vartheta),\tfrac{1-\vartheta}{2\vartheta-1}\}.\]
We have \(\Upsilon_{r}(\vartheta)=2r\vartheta-1\) if \(\frac{1}{2r}<\vartheta\leq\frac{1+2r}{4r}\) and \(\Upsilon_{r}(\vartheta)=\frac{1-\vartheta}{2\vartheta-1}\) if \(\frac{2r+1}{4r}<\vartheta<1\). We now again express the rate function \(\Upsilon_{r}\) in terms of the original KL exponent \(\theta=\mathbf{\theta}(\omega)\). The mapping \(\Upsilon_{r}\) is continuous on \((\frac{1}{2r},1)\), increasing in \(\vartheta\) when \(\vartheta\in(\frac{1}{2r},\frac{1+2r}{4r})\) and decreasing when \(\vartheta>\frac{1+2r}{4r}\). Hence, in the case \(\theta\geq\frac{1+2r}{4r}\), the optimal rate is obtained by setting \(\vartheta=\theta\). In the case \(0\leq\theta<\frac{1+2r}{4r}\), we can set \(\vartheta=\frac{1+2r}{4r}\) to maximize the rate. This yields
\[\limsup_{n\to\infty}\left(\sum\nolimits_{k=n}^{\infty}\!d_{k}\right)\nu_{n}^{ \Psi_{r}^{*}(\theta)}<\infty\quad\text{where}\quad\Psi_{r}^{*}(\theta):= \begin{cases}\frac{2r-1}{2}&\text{if }0\leq\theta<\frac{2r+1}{4r},\\ \frac{1-\theta}{2\theta-1}&\text{if }\frac{2r+1}{4r}\leq\theta<1.\end{cases}\]
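This case distinction can be double-checked numerically: maximizing \(\Upsilon_{r}(\vartheta)\) over the admissible exponents on a fine grid reproduces \(\Psi_{r}^{*}(\theta)\). A short sketch with illustrative values of \(r\) and \(\theta\) (any \(r>1/2\) and \(\theta\in[0,1)\) can be tested):

```python
import numpy as np

def upsilon(vt, r):
    # Upsilon_r(vartheta) = min{2r*vartheta - 1, 2r*(1 - vartheta), (1 - vartheta)/(2*vartheta - 1)}
    return np.minimum.reduce([2 * r * vt - 1,
                              2 * r * (1 - vt),
                              (1 - vt) / (2 * vt - 1)])

def psi_star(theta, r):
    # closed-form maximal rate derived above
    return (2 * r - 1) / 2 if theta < (2 * r + 1) / (4 * r) else (1 - theta) / (2 * theta - 1)

r, theta = 0.9, 0.6                        # illustrative choices, r > 1/2
lo = max(1 / (2 * r), theta, 0.5) + 1e-6   # admissible exponents: vartheta > max{1/(2r), theta, 1/2}
grid = np.linspace(lo, 1 - 1e-6, 200001)
print(upsilon(grid, r).max(), psi_star(theta, r))  # agree up to grid resolution
```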
Similar to (48), for all \(j\in\mathbb{N}\) sufficiently large, we obtain
\[\|x^{j}-x^{*}\|\leq\max_{m_{n}<i\leq m_{n+1}}\|x^{i}-x^{m_{n}}\|+\sum\nolimits_{k >n}\|x^{m_{k+1}}-x^{m_{k}}\|\leq 2\!\sum\nolimits_{k=n}^{\infty}\!d_{k},\]
where \(n\in\mathbb{N}\) is selected such that \(m_{n}<j\leq m_{n+1}\). Mimicking the discussion in **Step 3.3**, we have \(\gamma_{j}\leq 2\nu_{n}\) for all \(j\) and \(n\) sufficiently large. Consequently, we can conclude \(\limsup_{j\to\infty}\|x^{j}-x^{*}\|\cdot\gamma_{j}^{\Psi_{r}^{*}(\theta)}<\infty\).
**Step 4:**_Specific analysis for \(g(x)=\exp(rx)/x^{p}\)._ We have \(g(x)\to\infty\) as \(x\to\infty\) and \(g^{\prime}(x)=\exp(rx)(rx-p)/x^{p+1}>0\) for all \(x>p/r\). Hence, the conditions (i)-(ii) in Lemma 3.9 hold for this choice of \(g\) and the derivations in **Step 2** are applicable. Next, we provide estimates for the error terms \(\{\varepsilon_{n}\}_{n}\) appearing in (47). Applying the substitution \(t=\frac{1}{2\vartheta r}\log(x)\), we obtain:
\[\int_{\nu_{n}}^{\infty}\frac{1}{g(t)^{2\vartheta}}\,\mathrm{d}t=\int_{\nu_{n} }^{\infty}\frac{t^{2\vartheta p}}{\exp(2\vartheta rt)}\,\mathrm{d}t=\frac{1}{ (2\vartheta r)^{2\vartheta p+1}}\int_{\exp(2\vartheta r\nu_{n})}^{\infty} \frac{\log(x)^{2\vartheta p}}{x^{2}}\,\mathrm{d}x=:\frac{A_{n}}{(2\vartheta r) ^{2\vartheta p+1}},\]
where \(\nu_{n}:=\gamma_{m_{n}}\) and \(\{m_{k}\}_{k}\equiv\{m_{k}[T]\}_{k}\) are the time indices associated with \(T\). Integration by parts yields
\[A_{n}=-\frac{\log(x)^{2\vartheta p}}{x}\Big{|}_{\exp(2\vartheta r\nu_{n})}^{ \infty}+2\vartheta p\int_{\exp(2\vartheta r\nu_{n})}^{\infty}\frac{\log(x)^{2 \vartheta p}}{\log(x)\cdot x^{2}}\mathrm{d}x\leq\frac{(2\vartheta r\nu_{n}) ^{2\vartheta p}}{\exp(2\vartheta r\nu_{n})}+\frac{p}{r\nu_{n}}A_{n}.\]
Due to \(\lim_{n\to\infty}\nu_{n}=\infty\), we may assume \(\nu_{n}\geq\frac{2p}{r}\) for all \(n\) sufficiently large. This implies
\[\int_{\nu_{n}}^{\infty}\frac{1}{g(t)^{2\vartheta}}\,\mathrm{d}t\leq\frac{A_{n }}{(2\vartheta r)^{2\vartheta p+1}}\leq\left[1-\frac{p}{r\nu_{n}}\right]^{-1} \frac{\nu_{n}^{2\vartheta p}}{2\vartheta r\exp(2\vartheta r\nu_{n})}\leq \frac{\nu_{n}^{2\vartheta p}}{\vartheta r\exp(2\vartheta r\nu_{n})}\]
and based on (47), we can infer
\[\varepsilon_{n}\leq c_{5}\,\nu_{n}^{2\vartheta p}/\exp(2\vartheta r\nu_{n})\]
for some constant \(c_{5}>0\) and all \(n\) sufficiently large. Hence, we have \(\varepsilon_{n}\to 0\) and \(\sup_{n}\varepsilon_{n}<\infty\) for any choice of \(\vartheta>0\) and by **Step 2**, it follows \(\mathbf{x}^{k}\to\mathbf{x}^{*}\) almost surely on \(\mathcal{X}\). We can now mimic **Step 3.2** to establish convergence rates. We only need to consider the special case \(\vartheta=\hat{\theta}=\max\{\theta,\frac{1}{2}\}=\max\{\mathbf{\theta}(\omega), \frac{1}{2}\}=\frac{1}{2}\). Utilizing the recursion (51), Lemma 3.3, and \(g(\nu_{k})=\exp(r\nu_{k})/\nu_{k}^{p}\), it holds that
\[y_{k+1}\leq\left[1-\frac{T}{c_{\vartheta}}\right]y_{k}+\frac{T}{c_{\vartheta}} \cdot\frac{\nu_{k}^{2p}}{\exp(2r\nu_{k})},\quad c_{\vartheta}=\frac{5\hat{c}^ {2}}{\xi}\leq\frac{\bar{\mathbf{c}}(\omega)}{2},\]
for all \(k\geq K_{2}\). Setting \(s(x):=c_{\vartheta}/T\) and \(t(x):=c_{\vartheta}\exp(2rx)/(Tx^{2p})\), the rate function \(\kappa(x):=s(x)/t(x)=x^{2p}/\exp(2rx)\) is non-increasing and convex for all \(x\) sufficiently large. Since \(\nu_{k+1}\leq\nu_{k}+T\) for all \(k\) sufficiently large, by setting \(\mathsf{B}:=T\) and \(\beta:=1-2c_{\vartheta}r>0\) in Lemma 3.10, it follows for all \(x>0\) that
\[\mathsf{B}[s^{\prime}(x)-\kappa(x)t^{\prime}(x)]=0-\frac{Tx^{2p}}{\exp(2rx)} \cdot\frac{c_{\vartheta}(2rx-2p)}{T}\frac{\exp(2rx)}{x^{2p+1}}=-2c_{\vartheta} r+2c_{\vartheta}p/x\geq-1+\beta.\]
Moreover, we have \(\sum_{k=0}^{\infty}1/s(\nu_{k})=\infty\). Thus, invoking Lemma 3.10, we obtain \(\limsup_{k\to\infty}y_{k}\cdot g(\nu_{k})^{2}<\infty\) and mimicking the discussion in **Step 3.3**, we can conclude
\[\limsup_{j\to\infty}\|F_{\mathrm{nat}}^{\lambda}(x^{j})\|^{2}\cdot g(\gamma_{j} )^{2}\leq\lambda^{2}\limsup_{j\to\infty}\|F_{\mathrm{nor}}^{\lambda}(z^{j})\|^{2} \cdot g(\gamma_{j})^{2}<\infty\]
and \(\limsup_{j\to\infty}|\psi(x^{j})-\psi^{*}|\cdot g(\gamma_{j})^{2}<\infty\). In order to derive the rate for \(\{x^{k}\}_{k}\), we apply (46) with \(\vartheta=\frac{1}{2}\). Using the bound for \(\{\varepsilon_{n}\}_{n}\) and \(\limsup_{k\to\infty}y_{k}\cdot g(\nu_{k})^{2}<\infty\), this yields \(\sum_{k=n}^{\infty}\!d_{k}\leq 20\sqrt{2}\hat{c}\xi^{-1}\sqrt{y_{n}}+ \varepsilon_{n}=\mathcal{O}(g(\nu_{n})^{-1})\). Mimicking the discussion in **Step 3.4**, we can conclude \(\limsup_{j\to\infty}\|x^{j}-x^{*}\|\cdot g(\gamma_{j})<\infty\).
This finishes the proof of Theorem 3.5.
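As a quick numerical sanity check of the integral estimate employed in **Step 4**, the bound \(\int_{\nu_{n}}^{\infty}g(t)^{-2\vartheta}\,\mathrm{d}t\leq\nu_{n}^{2\vartheta p}/(\vartheta r\exp(2\vartheta r\nu_{n}))\) for \(\nu_{n}\geq 2p/r\) can be verified directly (parameter values below are illustrative only):

```python
import numpy as np
from scipy.integrate import quad

r, p, theta = 1.0, 2.0, 0.5                      # illustrative parameters
nu = 2 * p / r + 3.0                              # satisfies nu >= 2p/r
integrand = lambda t: t ** (2 * theta * p) * np.exp(-2 * theta * r * t)  # 1/g(t)^(2*theta)
val, _ = quad(integrand, nu, np.inf)
bound = nu ** (2 * theta * p) / (theta * r * np.exp(2 * theta * r * nu))
print(val, bound)                                 # val <= bound, as claimed
```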
We continue with several concluding remarks. Our general proof flow and the proof of Theorem 3.5 differ from the derivations in [92, 9, 30] and are closer to classical KL-based analyses in optimization. In a nutshell, our overall analysis is built upon the core components "approximate descent \(\to\) asymptotic global convergence \(\to\) KL-based strong limit convergence" and allows us to effectively incorporate time window techniques, step size dynamics, and error estimates.
(Schematic proof flow: **Approximate Descent Property** \(\to\) **Asymptotic Global Convergence** \(\to\) **KL-based Strong Limit Convergence**.)
To illustrate this statement, we may consider the basic error condition \(\sum_{k=0}^{\infty}g(\nu_{k})^{2}\mathbf{s}_{k}^{2}=\sum_{k=0}^{\infty}\beta_{m_{k}}^{2}\mathbf{s}_{k}^{2}<\infty\) that is used in the definition of the master event \(\mathcal{U}\). This bound plays a central role in our analysis and allows us to establish global convergence and to obtain fast rates of convergence in Theorem 3.5. In [92], Tadic works with a weaker error condition of the form \(\limsup_{k\to\infty}g(\nu_{k})\mathbf{s}_{k}<\infty\). However, in order to ensure convergence, the accumulated error terms \(\{\mathbf{u}_{k}\}_{k}\) still need to be controlled in a suitable way. This ultimately affects the order of the estimates and explains the different convergence results and rates in [92] (cf. Remark 3.8). As Tadic mostly utilizes indirect (contradiction-based) proof techniques that are more tailored to SGD, it is not clear whether a better control of \(\{\mathbf{s}_{k}\}_{k}\) (as in our case) can be effectively exploited or whether extensions of his analysis to other algorithms and the nonsmooth setting are possible.
By contrast, we believe that the proof framework and analysis techniques presented in this work can likely be transferred to other stochastic optimization methodologies and are applicable in a broader context. We plan to investigate such a potential and more general stochastic KL-based analysis framework in future work.
## 4 Preliminary Numerical Experiments
In this section, we compare the proposed algorithm norM-SGD with prox-SGD on two large-scale learning tasks. In the experiments, we strictly control the number of stochastic gradient evaluations so that the overall computational complexity of both algorithms is identical at each iteration. In particular, both norM-SGD and prox-SGD use the same type of stochastic oracle to approximate gradient information. Our primary goal is to illustrate that norM-SGD does not perform worse than prox-SGD and has a similar practical behavior.
### Nonconvex Binary Classification
We first consider a sparse nonconvex binary classification problem [94, 67] of the form:
\[\min_{x\in\mathbb{R}^{d}}\ \psi(x)=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x)+\varphi(x): =\frac{1}{N}\sum_{i=1}^{N}[1-\tanh(b_{i}\cdot a_{i}^{\top}x)]+\nu\|x\|_{1}, \tag{57}\]
where \(a_{i}\in\mathbb{R}^{d}\) denotes the training sample, \(b_{i}\) is the associated binary label, and \(\tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}\) is the hyperbolic tangent function. In the numerical comparison, we use the popular datasets2 news20 (\(N=19\,996\), \(d=1\,355\,191\)), rcv1 (\(N=20\,242\), \(d=47\,236\)), and epsilon (\(N=400\,000\), \(d=2\,000\)) and the \(\ell_{1}\)-regularization parameter \(\nu\) is set to \(1/N\) for all datasets. At iteration \(k\), we randomly pick indices from \([N]:=\{1,2,\ldots,N\}\) without replacement to form a subset \(S_{k}\subseteq[N]\). We then use \(g^{k}:=\frac{1}{|S_{k}|}\sum_{i\in S_{k}}\nabla f_{i}(x^{k})\) as the standard stochastic approximation of \(\nabla f(x^{k})\).

Figure 2: Convergence results and sparsity information per epoch for solving the binary classification problem (57). (Averaged over \(10\) runs).
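For concreteness, a minimal sketch of this mini-batch oracle; the synthetic data, array names, and sizes below are placeholders, not the actual datasets:

```python
import numpy as np

def minibatch_grad(x, A, b, idx):
    """Mini-batch gradient for f(x) = (1/N) sum_i [1 - tanh(b_i * a_i^T x)]:
    since d/du [1 - tanh(u)] = -(1 - tanh(u)^2), we get
    grad f_i(x) = -(1 - tanh(b_i * a_i^T x)^2) * b_i * a_i."""
    Ab = A[idx] * b[idx][:, None]          # rows b_i * a_i^T
    w = -(1.0 - np.tanh(Ab @ x) ** 2)      # derivative weights
    return (Ab.T @ w) / len(idx)

rng = np.random.default_rng(0)
N, d, batch = 1000, 50, 256                # placeholder problem sizes
A = rng.standard_normal((N, d))
b = rng.choice([-1.0, 1.0], size=N)
x = np.ones(d) / d                         # initial point 1/d, as in the experiments
S_k = rng.choice(N, size=batch, replace=False)   # indices drawn without replacement
g_k = minibatch_grad(x, A, b, S_k)
```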
**Implementation details.** In our test, we use the initial point \(x^{0}=\mathds{1}/d\in\mathbb{R}^{d}\) and the size of the mini-batches \(S_{k}\) is set to \(|S_{k}|=256\) for all \(k\in\mathbb{N}\). We apply step sizes of the form \(\alpha_{k}=\alpha/(\mathsf{L}+k)\) and choose \(\lambda=\alpha\) in \(\mathsf{norM}\mathsf{-SGD}\), where \(\alpha\) lies in the range \([10^{0},10^{5}]\) and \(\mathsf{L}\) is an approximate Lipschitz constant of \(\nabla f\). Specifically, based on the structure of \(\nabla^{2}f\), we utilize the estimate \(\mathsf{L}=4\|A\|^{2}/(5N)\), where \(A=[a_{1}^{\top},\ldots,a_{N}^{\top}]^{\top}\). For this example, the proximity operator is the well-known shrinkage operator
\[\mathrm{prox}_{\lambda\nu\|\cdot\|_{1}}(z)=\mathrm{sgn}(z)\odot\max\{0,|z|- \lambda\nu\},\]
where all operations are understood componentwise. Notice that \(\mathsf{prox}\mathsf{-SGD}\) computes \(\mathrm{prox}_{\alpha_{k}\nu\|\cdot\|_{1}}\) in each iteration, while we work with \(\mathrm{prox}_{\lambda\nu\|\cdot\|_{1}}\) in \(\mathsf{norM}\mathsf{-SGD}\).
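Combining the shrinkage operator with the normal map \(F^{\lambda}_{\mathrm{nor}}(z)=\nabla f(\mathrm{prox}_{\lambda\varphi}(z))+\lambda^{-1}(z-\mathrm{prox}_{\lambda\varphi}(z))\), a single \(\mathsf{norM}\mathsf{-SGD}\) update can be sketched as follows. This is our schematic reading of the method (a step along the stochastic normal map, consistent with the relation \(z^{k}=x^{k}-\lambda\nabla f(x^{k})+\lambda F^{\lambda}_{\rm nor}(z^{k})\) used in the analysis), not a verbatim transcription of the algorithm:

```python
import numpy as np

def shrink(z, tau):
    # prox of tau * ||.||_1: componentwise soft-thresholding
    return np.sign(z) * np.maximum(0.0, np.abs(z) - tau)

def norm_sgd_step(z, alpha, lam, nu, grad_oracle):
    """One sketched norM-SGD step: x = prox_{lam*nu*||.||_1}(z), then move z
    against the stochastic normal map g + (z - x)/lam with step size alpha."""
    x = shrink(z, lam * nu)
    g = grad_oracle(x)                      # stochastic estimate of grad f(x)
    z = z - alpha * (g + (z - x) / lam)
    return z, x
```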
In Figure 2, we first depict the number of epochs required to satisfy the relative accuracy criterion "\(\psi(x^{k})-\psi^{*}\leq 0.01\max\{1,\psi^{*}\}\)" vs. different step size parameters \(\alpha\in[10^{0},10^{5}]\). The average performance (over \(10\) independent runs) is shown using a darker color and thicker line style. Here, one single epoch corresponds to \(N\) iterations of \(\mathsf{prox}\mathsf{-SGD}\) or \(\mathsf{norM}\mathsf{-SGD}\) and the approximate optimal value \(\psi^{*}\) is obtained by running the deterministic proximal gradient method starting from three different initial points until \(\|F_{\mathrm{nat}}^{1}(x^{k})\|<10^{-5}\). The results shown in the top row of Figure 2 indicate that \(\mathsf{norM}\mathsf{-SGD}\) is more robust with respect to the choice of the parameter \(\alpha\) and that it generally achieves faster convergence on news20 and rcv1. In the bottom row of Figure 2, we plot the sparsity level \(100\%\cdot|\{i:|x^{k}_{i}|\leq 10^{-8}\}|/d\) of the iterates \(\{x^{k}\}_{k}\) vs. the number of epochs for \(\alpha\in\{10^{2},10^{3},10^{4}\}\) (again averaged over \(10\) runs). Overall, \(\mathsf{norM}\mathsf{-SGD}\) tends to recover sparser solutions than \(\mathsf{prox}\mathsf{-SGD}\), which reflects the different scalings of the \(\ell_{1}\)-norm in the proximity operators.
### Deep Learning
In this subsection, we study the performance of \(\mathsf{norM}\mathsf{-SGD}\) and \(\mathsf{prox}\mathsf{-SGD}\) on a multi-class image classification task for the dataset CIFAR-10 [48] which contains \(10\) classes of images and each class contributes \(6\,000\) images. We split the dataset into \(N_{\text{train}}=50\,000\) training samples and \(N_{\text{test}}=10\,000\) test samples. To classify the images, we apply the standard ResNet-18 [43] and VGG-16 [89] architectures with the cross-entropy loss function [97, 93] and elastic net regularizer [98]:
\[\min_{x\in\mathbb{R}^{d}}-\frac{1}{N}\sum_{i=1}^{N}\log\left(\frac{\exp( \mathcal{T}(x,a_{i})^{\top}e_{b_{i}})}{\sum_{j=1}^{10}\exp(\mathcal{T}(x,a_{i} )^{\top}e_{j})}\right)+\nu_{1}\|x\|_{1}+\nu_{2}\|x\|^{2},\quad\nu_{1},\nu_{2}>0. \tag{58}\]
Figure 3: Numerical results for the model (58) on CIFAR-10 using ResNet-18 (top) and VGG-16 (bottom) architecture.
The training sample \((a_{i},b_{i})\) contains the color image \(a_{i}\in\mathbb{R}^{32\times 32\times 3}\) and the corresponding label \(b_{i}\in\{1,\ldots,10\}\). The operator \(\mathcal{T}(x,\cdot):\mathbb{R}^{32\times 32\times 3}\to\mathbb{R}^{10}\), which maps an input image to a ten dimensional vector, represents the architecture with weights \(x\). We utilize the elastic net regularizer \(\varphi(x)=\nu_{1}\|x\|_{1}+\nu_{2}\|x\|^{2}\) with \(\nu_{1}=10^{-6}\) and \(\nu_{2}=10^{-4}\).
**Implementation details.** We use step sizes with adaptive decay3 in both algorithms and the parameter \(\lambda\) in \(\mathsf{norM}\mathsf{-SGD}\) is set to \(\lambda=10^{-2}\). We train both ResNet-18 and VGG-16 for \(100\) epochs with batch size \(128\) and run each algorithm \(5\) times independently. The overall performance is shown below in Figure 3. The proximity operator for the elastic net regularizer is given by \(\mathrm{prox}_{\lambda\varphi}(z)=\frac{1}{1+2\lambda\nu_{2}}\,\mathrm{prox}_{ \lambda\nu_{1}\|\cdot\|_{1}}(z)\).
Footnote 3: We apply the scheduler ReduceLROnPlateau (default setting) in PyTorch [75] with the initial step size \(\alpha=0.1\).
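Since the elastic net regularizer is separable, this proximity operator can be sanity-checked componentwise against a direct numerical minimization of \(\lambda\varphi(x)+\frac{1}{2}(x-z)^{2}\); a minimal sketch (the scalar test point \(z\) is arbitrary):

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam, nu1, nu2, z = 1e-2, 1e-6, 1e-4, 0.37       # parameters as above; z is a test scalar
closed = np.sign(z) * max(0.0, abs(z) - lam * nu1) / (1.0 + 2.0 * lam * nu2)
obj = lambda x: lam * (nu1 * abs(x) + nu2 * x ** 2) + 0.5 * (x - z) ** 2
numeric = minimize_scalar(obj, bounds=(-1.0, 1.0), method="bounded").x
print(closed, numeric)                           # agree up to solver tolerance
```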
**Comparison.** Though the deep learning task (58) does not fully fit the structure of problem (1) (the operator \(\mathcal{T}\) can contain nonsmooth activation components), our results help to illustrate the numerical performance of \(\mathsf{norM}\mathsf{-SGD}\) and \(\mathsf{prox}\mathsf{-SGD}\) on this more challenging class of learning problems. In Figure 3, besides the training and test error, we also report the change of the training loss with respect to the number of epochs. Similar to our previous experiment, \(\mathsf{norM}\mathsf{-SGD}\) manages to achieve a significantly lower training loss compared to \(\mathsf{prox}\mathsf{-SGD}\). In terms of training and test error, \(\mathsf{norM}\mathsf{-SGD}\) performs similarly to \(\mathsf{prox}\mathsf{-SGD}\), achieving slightly lower errors on both neural network architectures.
## 5 Conclusion
In this paper, we develop a novel normal map-based stochastic proximal gradient method, \(\mathsf{norM}\mathsf{-SGD}\), for nonconvex composite-type optimization problems and analyze its convergence. Utilizing the unbiasedness of the stochastic normal map steps, we show that the iterates generated by \(\mathsf{norM}\mathsf{-SGD}\) satisfy a time window-based approximate descent property which allows us to establish asymptotic global convergence results and non-asymptotic complexity bounds under standard assumptions. We then provide strong limit convergence and convergence rates for the objective function values \(\{\psi(\mathbf{x}^{k})\}_{k}\), the stationarity measure \(\{\|F^{\lambda}_{\mathrm{nat}}(\mathbf{x}^{k})\|^{2}\}_{k}\), and the iterates \(\{\mathbf{x}^{k}\}_{k}\) under the KL inequality. The obtained rates depend on the step size dynamics \(\{\alpha_{k}\}_{k}\) and the KL exponent \(\mathbf{\theta}\in[0,1)\) of \(\psi\). Specifically, in the popular case \(\alpha_{k}=\alpha/(\beta+k)^{\gamma}\), \(\gamma\in(\frac{2}{3},1]\), our results imply (trajectory-based) convergence of \(\{\mathbf{x}^{k}\}_{k}\) with asymptotic rates that are faster than related existing convergence rates for \(\mathsf{SGD}\) and that can beat the corresponding complexity estimates. To the best of our knowledge, \(\mathsf{norM}\mathsf{-SGD}\) seems to be the first basic stochastic proximal algorithm for nonconvex composite optimization with this broad range of stochastic convergence properties. In addition, we believe that the stochastic KL-based techniques studied in this work have general potential and can be applied to other families of stochastic and simulation-type algorithms.
|
2303.07369 | Updated radial velocities and new constraints on the nature of the
unseen source in NGC1850 BH1 | A black hole candidate orbiting a luminous star in the Large Magellanic Cloud
young cluster NGC 1850 ($\sim100$Myr) has recently been reported based on
radial velocity and light curve modelling. Subsequently, an alternative
explanation has been suggested for the system: a bloated post-mass transfer
secondary star (M$_{\rm initial} \sim 4-5M_{\odot}$, M$_{\rm current} \sim
1-2M_{\odot}$) with a more massive, yet luminous companion (the primary). Upon
reanalysis of the MUSE spectra, we found that the radial velocity variations
originally reported were underestimated ($K_{\rm 2,revised} = 176\pm3$km/s vs
$K_{\rm 2,original} = 140\pm3$km/s) because of the weighting scheme adopted in
the full-spectrum fitting analysis. The increased radial velocity
semi-amplitude translates into a system mass function larger than previously
deduced ($f_{\rm revised}$=2.83$M_{\odot}$ vs $f_{\rm
original}$=1.42$M_{\odot}$). By exploiting the spectral disentangling
technique, we place an upper limit of 10\% of a luminous primary source to the
observed optical light in NGC1850 BH1, assuming that the primary and secondary
are the only components contributing to the system. Furthermore, by analysing
archival near-infrared data, we find clues to the presence of an accretion disk
in the system. These constraints support a low-mass post-mass transfer star but
do not provide a definitive answer whether the unseen component in NGC1850 BH1
is indeed a black hole. These results predict a scenario where, if a primary
luminous source of mass M $\ge 4.7M_{\odot}$ is present in the system (given
the inclination and secondary mass constraints), it must be hidden in an
optically thick disk to be undetected in the MUSE spectra. | Sara Saracino, Tomer Shenar, Sebastian Kamann, Nate Bastian, Mark Gieles, Christopher Usher, Julia Bodensteiner, Angela Kochoska, Jerome A. Orosz, Hugues Sana | 2023-03-13T18:00:03Z | http://arxiv.org/abs/2303.07369v1 | # Updated radial velocities and new constraints on the nature of the unseen source in NGC1850 BH1
###### Abstract
A black hole candidate orbiting a luminous star in the Large Magellanic Cloud young cluster NGC 1850 (\(\sim 100\) Myr) has recently been reported based on radial velocity and light curve modelling. Subsequently, an alternative explanation has been suggested for the system: a bloated post-mass transfer secondary star (M\({}_{\rm initial}\sim 4-5\) M\({}_{\odot}\), M\({}_{\rm current}\sim 1-2\) M\({}_{\odot}\)) with a more massive, yet luminous companion (the primary). Upon reanalysis of the MUSE spectra, we found that the radial velocity variations originally reported were underestimated (\(K_{2,{\rm revised}}=176\pm 3\) km/s vs \(K_{2,{\rm original}}=140\pm 3\) km/s) because of the weighting scheme adopted in the full-spectrum fitting analysis. The increased radial velocity semi-amplitude translates into a system mass function larger than previously deduced (\(f_{\rm revised}=2.83\) M\({}_{\odot}\) vs \(f_{\rm original}=1.42\) M\({}_{\odot}\)). By exploiting the spectral disentangling technique, we place an upper limit of 10% on the contribution of a luminous primary source to the observed optical light in NGC1850 BH1, assuming that the primary and secondary are the only components contributing to the system. Furthermore, by analysing archival near-infrared data, we find clues to the presence of an accretion disk in the system. These constraints support a low-mass post-mass transfer star but do not provide a definitive answer as to whether the unseen component in NGC1850 BH1 is indeed a black hole. These results predict a scenario where, if a primary luminous source of mass M \(\geq 4.7\) M\({}_{\odot}\) is present in the system (given the inclination and secondary mass constraints), it must be hidden in an optically thick disk to be undetected in the MUSE spectra.
keywords: globular clusters: individual: NGC 1850 - techniques: imaging spectroscopy, photometry - techniques: radial velocities - binaries: spectroscopic
## 1 Introduction
Recently, Saracino et al. (2022) reported the discovery of a black hole (BH) candidate orbiting a luminous star in the massive young (\(\sim 100\) Myr) star cluster NGC 1850, in the Large Magellanic Cloud. Based on the measured radial velocity and luminosity variations of the observed source, and its position in the colour-magnitude diagram (CMD), the authors concluded that the source is a main-sequence turn-off (MSTO) B-type star (\(M\sim 4.9\) M\({}_{\odot}\)) and that the unseen companion is an \(\sim 11\) M\({}_{\odot}\) BH. Furthermore, the authors suggested that the system is in a semi-detached configuration meaning that the luminous star is beginning to fill its Roche Lobe (they also studied the case of a detached configuration). The system does not display obvious emission lines in the optical region of the spectrum (although the presence of nebular contamination combined with the low spectral resolution of the MUSE observations makes this analysis complicated). However, a faint but significant X-ray detection appears at the position of the source. The lack of a persistent
X-ray emission from NGC1850 BH1, although surprising, does not in itself exclude the presence of a BH in the system. Low-mass X-ray binaries with both persistent and transient X-ray emissions are indeed known in the literature (e.g., Cyg X-2, Orosz and Kuulkers 1999 and V404 Cyg, Casares et al. 1992, respectively), although neither of them can be directly compared to NGC1850 BH1.
A potential caveat to this discovery is that stars of different masses, which have undergone different evolutionary paths, can display B-type spectra. As an alternative explanation for this system, El-Badry and Burdge (2022) and Stevance et al. (2022) have suggested that NGC1850 BH1 is a post-mass transfer binary system, with the brighter source a bloated stripped star with a current mass of \(\sim 1-2\) M\({}_{\odot}\) (M\({}_{\rm initial}\sim 5\) M\({}_{\odot}\)) and the fainter source a more massive star that has gained a lot of mass from the companion (M\({}_{\rm current}\sim 2-5\) M\({}_{\odot}\)). The latter is predicted to be significantly fainter (by approx. 1-2.3 mag in the optical bands) than a main sequence (MS) star of the same mass at the age of NGC 1850 due to rejuvenation episodes occurring during mass transfer (Stevance et al., 2022), but see Wang et al. (2020) for an alternative discussion on the impact of mass transfer on the luminosity of the mass gainer.
We note here that there is a precedence for preferring such a configuration, as previously suggested stellar-mass BH candidates LB-1 (Liu et al., 2019) and HR 6819 (Rivinius et al., 2020) appear to be best explained instead as post-mass transfer binary stars with two luminous companions (e.g., Shenar et al., 2020; Bodensteiner et al., 2020; El-Badry and Quataert, 2021). One important difference, however, between the LB-1 and HR 6819 systems compared to NGC1850 BH1, is that the former systems contain Be stars, i.e., fast rotating B-type stars that display prominent emission lines, while no similar emission is observed in the latter case (see Kamann et al., 2023 for a detailed study of the sample of Be stars in NGC 1850).
Additionally, El-Badry and Burdge (2022) noted an inconsistency in the Saracino et al. (2022) interpretation, namely that if the system is in a semi-detached configuration, then a 5 M\({}_{\odot}\) MSTO star would be more luminous than permitted by the observed photometry. In the detached configuration, its implied radius would instead be smaller than the Roche radius, which seems inconsistent with the photometric variability suggesting a (near) Roche filling donor. On the other hand, in the post-mass transfer model for NGC1850 BH1, we must be catching the system at a unique time, specifically as it is transitioning across the Hertzsprung-Russell (HR) diagram from a cool bloated star to a hot sub-dwarf state. The rarity of catching such a system at this time is highlighted in Stevance et al. (2022), where the authors systematically explored a large grid of pre-computed binary models (including mass transfer) and could only find a matching system by significantly expanding the allowed temperature range of the secondary (\(\sim 10,000\) K compared to the observed \(\sim 14,500\) K). The chance of catching the luminous component as it crosses the HR diagram from cool to hot, directly on the MS, is then rather small (approx. 1% of the lifetime of the system), but it is in principle easier to detect the system in this stage than in the later subdwarf stage (Bodensteiner et al., 2020).
Upon further modelling of the NGC1850 BH1 system, we uncovered a systematic bias in the published radial velocity measurements. This bias, which is described in detail in the following sections, resulted in the underestimation of the radial velocity semi-amplitude \(K_{2}\)1 of the visible source, which in turn resulted in an underestimated mass function for the system. In the present work we discuss the updated radial velocity measurements in Section 2 along with the implications on the estimated orbital properties of the system, especially the mass function. In Section 3 we present upper limits to the presence of a luminous primary stellar component in the system through the technique of spectral disentangling. In Section 4 we focus on the visible secondary star and investigate a plausible lower limit in mass for it. In Section 5 we combine these results and discuss the possible nature of the unseen component based on the new constraints available. Finally, in Section 6 we present our conclusions.
Footnote 1: To avoid any confusion in the reader, we specify here that throughout the paper, the observed star is labelled with index 2 and called secondary, while the unseen (more massive) object is labelled with index 1 and called primary.
## 2 Revised radial velocity and mass function
Unlike what was done in Saracino et al. (2022), we present here an alternative method to derive the relative radial velocities of the system, which relies on cross-correlation of the observations with a template spectrum (Zucker and Mazeh, 1994; Shenar et al., 2017; Dsilva et al., 2020). We perform the cross-correlation in the range \(7800-8900\) Å, where several hydrogen lines of the Paschen series are present. As a template, we first use one of the observations. Once a first set of radial velocities has been determined, we compute the co-added spectrum, and use it as a new template for measuring the radial velocities, repeating this process a few times until no notable change in the radial velocities is observed. We convert the relative radial velocities to absolute ones by using the systemic velocity of \(V_{0}=253.30\) km s\({}^{-1}\) measured by Saracino et al. (2022). Using this new set of radial velocities we find orbital parameters (e.g. orbital period, eccentricity) consistent with those derived by Saracino et al. (2022), except for the radial velocity amplitude, which is found to be \(K_{2}=175.6\pm 2.6\) km s\({}^{-1}\). The orbital solution thus derived is shown in Figure 1 while the new single-epoch radial velocities are presented in Table 1.
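Schematically, the iterative procedure can be summarized as follows (a minimal sketch; a real analysis would work in log-wavelength, clean nebular residuals, and fit the peak of the cross-correlation function, all of which are omitted here):

```python
import numpy as np

C_KMS = 299792.458

def rv_from_ccf(wave, flux, template, v_grid):
    """Shift the template over a velocity grid and return the velocity
    maximizing the correlation with the observed spectrum."""
    cc = [np.corrcoef(flux, np.interp(wave, wave * (1 + v / C_KMS), template))[0, 1]
          for v in v_grid]
    return v_grid[int(np.argmax(cc))]

def iterate_template(wave, spectra, v_grid, n_iter=4):
    template = spectra[0].copy()                 # first template: one observation
    for _ in range(n_iter):
        rvs = np.array([rv_from_ccf(wave, s, template, v_grid) for s in spectra])
        # co-add the observations shifted back to rest using the current RVs
        template = np.mean([np.interp(wave, wave / (1 + rv / C_KMS), s)
                            for rv, s in zip(rvs, spectra)], axis=0)
    return rvs
```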
In order to understand the discrepancy in \(K_{2}\) values derived above and reported in Saracino et al. (2022), who originally found \(K_{2}=140.4\pm 3.3\) km s\({}^{-1}\), we performed additional full-spectrum fitting analyses using the Spexxy code (Husser et al., 2016), which was used to measure the velocities of the visible star in Saracino et al. (2022). We found that for this particular star, the weighting scheme used by Spexxy has a significant impact on the measured velocities. By default, Spexxy weighs the spectral pixels by the inverse of their uncertainties during the fitting. If we switch to a more physically motivated inverse-variance weighting scheme, we get radial velocities consistent with those shown in Figure 1. Using both weighting schemes, we then performed an analysis with Spexxy where we only used the spectral range with \(\lambda>7\,800\) Å, in effect using the Paschen series as the only spectral lines in the fit. We found that either weighting scheme (as well as using no weighting at all) resulted again in a velocity curve consistent with the one shown in Figure 1. We repeated the fitting with the pPXF code (Cappellari and Emsellem, 2004; Cappellari, 2012) for both the entire wavelength range and \(\lambda>7\,800\) Å, finding identical results as with Spexxy.
Given the high effective temperature of the observed star, its flux is much higher in the blue part of the MUSE spectral range than in the red (see Figure 2 in Saracino et al., 2022). As a consequence, when using the inverse uncertainties as weights, the blue part (with the strong H\(\beta\) and H\(\alpha\) lines) has a larger impact on the fit than the red part (containing the Paschen series). Ideally, this over-weighting of
the blue part would not affect the kinematics, as all lines are shifted by the same radial velocity. In the case of NGC 1850, however, the strong nebular emission, associated with the 5 Myr old cluster NGC 1850B, represents an additional complication for the data analysis. As both H\(\beta\) and H\(\alpha\) are particularly strong in the nebular line spectrum, it is conceivable that residual nebular emission contaminates the two line profiles sufficiently so that the velocity estimates based on them are biased towards the cluster mean. An alternative, more physical explanation, could be that accretion onto the unseen companion creates some H\(\alpha\) (and potentially H\(\beta\)) emission that partially fills up the absorption line. However, as the emission component would follow the motion of the unseen primary and hence appear with a phase shift of 180\({}^{\circ}\) in velocity space, one would expect an over- rather than an under-estimation of the \(K_{2}\) value derived using the blue part of the MUSE spectral range (see the discussion in Abdul-Masih et al. (2020) for the LB-1 system).
When comparing the velocity curve shown in Figure 1 to the one depicted in Figure 5 of Saracino et al. (2022), one can see that the scatter of the individual velocity measurements around the Keplerian model is significantly reduced when the revised measurements are used (reduced \(\chi^{2}\) is 0.52). Given this improvement, and the potential issues regarding the usage of the H\(\beta\) and H\(\alpha\) lines, we give preference to the new result. The difficulty in determining \(K_{2}\) discussed here highlights the need to study the system at higher spectral resolution over a broad wavelength range. This would allow: 1) for a better cleaning of the nebular emission lines (which would be significantly narrower in high-resolution data), also thanks to strong metal lines in the blue that do not appear in the nebular emission; 2) to add stricter constraints from the spectral disentangling technique; and 3) to increase the chances of finding a potential emission-line contribution from accretion onto the unseen companion.
### Mass Function
A change in the derived semi-amplitude velocity \(K_{2}\) of the visible source in NGC1850 BH1 has a direct effect on the mass function of the system, even if all other orbital parameters (e.g. period, eccentricity) stay the same. In fact, based on the formula of the binary mass function (Remillard & McClintock, 2006), which can be expressed in terms of observational quantities as:
\[f=\frac{P_{\rm orb}K_{2}^{3}(1-e^{2})^{3/2}}{2\pi G}, \tag{1}\]
which does not make any assumptions on the mass of the visible source, we obtain \(f=2.83^{+0.14}_{-0.12}\) M\({}_{\odot}\), significantly higher than \(f=1.42\) M\({}_{\odot}\) as derived in Saracino et al. (2022). Orbital period \(P_{\rm orb}\) and eccentricity \(e\) are the same as in Table 2 of Saracino et al. (2022). This implies that, regardless of the mass of the visible star (a normal MS star vs a bloated star), the unseen primary companion is substantially more massive than previously predicted. All the revised and relevant properties of NGC1850 BH1 are listed in Table 2, to provide the reader with a clearer reference.
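The revised value follows directly from equation (1); as a quick arithmetic check (the orbit is nearly circular, so we set \(e\approx 0\) here, which changes \(f\) only marginally):

```python
import numpy as np

G = 6.674e-11             # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30          # solar mass [kg]
P = 5.04 * 86400.0        # orbital period [s]
K2 = 175.6e3              # revised semi-amplitude [m/s]
e = 0.0                   # near-circular orbit assumed for this check

f = P * K2 ** 3 * (1 - e ** 2) ** 1.5 / (2 * np.pi * G)
print(f / M_SUN)          # ~2.83, reproducing the revised mass function
```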
By using Kepler's third law, the binary mass function can also be written in the form:
\[f=\frac{M_{1}^{3}\sin(i)^{3}}{(M_{1}+M_{2})^{2}}, \tag{2}\]
where \(M_{1}\) and \(M_{2}\) are the masses of the primary unseen component and the secondary visible star, respectively, and \(i\) the inclination of the system with respect to the line of sight. This formula suggests that once the mass of the visible star and the inclination of the system are known, the mass of the unseen companion can be determined. Unfortunately, these two additional quantities are uncertain in the case of NGC1850 BH1. In Section 4 we will define an alternative way to put constraints on the mass of the unseen source.
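For illustration, equation (2) is easily solved numerically for \(M_{1}\) once trial values of \(M_{2}\) and \(i\) are inserted; the values used below are placeholders, not measurements:

```python
import numpy as np
from scipy.optimize import brentq

def primary_mass(f, M2, incl_deg):
    """Solve f = M1^3 sin(i)^3 / (M1 + M2)^2 for M1 (all masses in M_sun)."""
    s3 = np.sin(np.radians(incl_deg)) ** 3
    return brentq(lambda M1: M1 ** 3 * s3 / (M1 + M2) ** 2 - f, 1e-3, 1e3)

# e.g. f = 2.83 M_sun with illustrative M2 = 1.5 M_sun and i = 60 deg
print(primary_mass(2.83, 1.5, 60.0))
```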
## 3 Spectral disentangling
Based on the newly derived mass function, which points towards a rather massive unseen companion, we used the available MUSE spectra to set an upper limit on how much light this object actually contributes to the total flux of the system. In fact, if the unseen companion is a massive star as suggested by El-Badry & Burdge (2022) and Stevance et al. (2022), it is rather luminous, so it is
\begin{table}
\begin{tabular}{l c c} \hline \hline Time (MJD) & V\({}_{R}\) (km/s) & \(\sigma\) V\({}_{R}\) (km/s) \\ \hline
[MISSING_PAGE_POST]
\hline \hline \end{tabular}
\end{table}
Table 1: Updated radial velocities using the method outlined in Section 2.
Figure 1: The revised MUSE radial velocity curve of the luminous secondary star in NGC1850 BH1, phase-folded using its orbital period of P = 5.04 d, is shown as black dots. The red solid line represents the new best-fitting model (reduced \(\chi^{2}\) = 0.52, RMS = 13.4 km s\({}^{-1}\)). The bottom panel shows the residuals of the comparison between the observed radial velocities and the best fit model.
expected to contribute significantly to the total flux of the system (but see the discussion about the rejuvenation factor in Section 4). If the unseen companion is instead a compact object (such as a BH) as suggested by Saracino et al. (2022), it does not contribute to the light of the system at all if there is no accretion disk around it, regardless of its mass. To do this test, we employed the shift-and-add spectral disentangling technique (Marchenko et al., 1998; Gonzalez & Levato, 2006; Shenar et al., 2020, 2022), which was successfully used to uncover hidden companions in other SB1 binaries (e.g., LB-1, Shenar et al., 2020; HR 6819, Bodensteiner et al., 2020; 28 O-type binaries, Shenar et al., 2022), which have companions contributing as little as \(\approx 1-2\%\) to the visual flux.
Briefly, spectral disentangling is the separation of composite spectra into the component spectra of multiple systems, usually performed simultaneously with the derivation of the orbital parameters (Hadrava, 1995; Bagnuolo & Gies, 1991; Mahy et al., 2012). For given orbital elements, the shift-and-add method relies on an iterative procedure that uses the disentangled spectra obtained in the \(j^{\rm th}\) iteration to calculate the disentangled spectra for the \((j+1)^{\rm th}\) iteration through consecutive shifting-and-adding of the spectra. By minimizing \(\chi^{2}\) between the added disentangled spectra and the observations, one can derive the orbital elements; we refer to Shenar et al. (2020, 2022) for details. Here, we fix the orbital parameters to those given in Table 2 of Saracino et al. (2022), except for the radial velocity amplitudes \(K_{1},K_{2}\), which are used to minimise \(\chi^{2}\). We note that the light ratio of the components cannot be derived from the disentangling procedure. The adopted light ratio only impacts the final scaling of the spectra.
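A schematic of one shift-and-add pass for two components is given below (a minimal sketch: Doppler shifts are applied by linear interpolation, the light ratio is fixed, and the per-epoch radial velocities of both components follow from \(K_{1}\), \(K_{2}\), and the orbital solution):

```python
import numpy as np

C_KMS = 299792.458

def shifted(wave, flux, v):
    # evaluate the spectrum Doppler-shifted by velocity v on the original grid
    return np.interp(wave, wave * (1 + v / C_KMS), flux)

def shift_and_add(wave, obs, v1, v2, n_iter=50):
    """obs: list of composite spectra; v1, v2: per-epoch RVs of the primary
    and secondary components."""
    A = np.zeros_like(obs[0])          # disentangled primary spectrum
    B = np.zeros_like(obs[0])          # disentangled secondary spectrum
    for _ in range(n_iter):
        # remove the current secondary, de-shift the residuals, and co-add
        A = np.mean([shifted(wave, o - shifted(wave, B, vb), -va)
                     for o, va, vb in zip(obs, v1, v2)], axis=0)
        B = np.mean([shifted(wave, o - shifted(wave, A, va), -vb)
                     for o, va, vb in zip(obs, v1, v2)], axis=0)
    return A, B
```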
In Figure 2, we show the reduced \(\chi^{2}(K_{1},K_{2})\) map obtained when disentangling the four Paschen-series members (members 8-11) in the region \(8570-8910\) Å. Evidently, \(K_{2}\) is reasonably well constrained and is consistent with the value reported in Table 2 to within \(1\sigma\). In contrast, \(K_{1}\) is poorly constrained, virtually ranging across the entire plausible range of values. We note that disentangling generally yields much larger formal errors than standard radial velocity fitting, due to the freedom in varying each pixel in the disentangled spectrum. In the figure, a slight correlation between \(K_{1}\) and \(K_{2}\) is observed. This may indicate that there is some contributing signal from a primary star or disk in NGC 1850 BH1, although this contribution is too small to be extracted from the noise using the MUSE data. The presence of a putative accretion disk or a luminous primary (see the discussion in Section 5) could provide the light signal to explain such a trend in the residual map. Alternatively, this correlation could be a spurious result caused by contaminants (e.g., nebular contamination, tellurics, uncertain normalisation).
Figure 3 shows the disentangled spectra for \(K_{2}=175.6\,{\rm km\,s^{-1}}\) and \(K_{1}=41\,{\rm km\,s^{-1}}\) (i.e., assuming the primary is roughly four times more massive than the luminous secondary). The shifted spectra and their sum are compared to the observations at the radial velocity extremes. Generally, the disentangled spectrum of the primary appears close to flat, with the possible exception of H i \(\lambda 8750\). We note that the results depend only weakly on the adopted value of \(K_{1}\) (values in the range \(0.25\,K_{2}<K_{1}<4\,K_{2}\) were considered). In all cases, the features seen in the spectrum of the primary are comparable to the noise level of the disentangled spectrum.
In Figure 4, we show the disentangled spectra of a few neighbouring Paschen lines calculated for \(K_{2}=175.6\,{\rm km\,s^{-1}}\) and \(K_{1}=41\,{\rm km\,s^{-1}}\), assuming a low light contribution for the primary of \(l_{1}=0.1\) (i.e., the intrinsic spectrum is multiplied by a factor of 10). The features observed in the disentangled spectra are again at the level of the noise, and generally do not overlap with spectral lines. Such features can easily result from non-Gaussian noise, imperfect normalisation, tellurics, or other contaminants. The results imply that, if a non-degenerate companion is present, it must be rather faint. This is corroborated by the simulations below (Section 3.1), where this statement is further quantified.
### Simulations
To test the validity of our method and explore the sensitivity down to which we could detect a hidden MS companion, we simulate a mock binary that mimics the orbit and observational data set of NGC1850 BH1, but contains a non-degenerate companion. For the simulation, we use the co-added spectrum as a template for the luminous secondary. For the mock spectrum of the unseen primary, we use the grids computed with the TLUSTY model atmosphere code (Hubeny & Lanz, 1995; Lanz & Hubeny, 2003, 2007). We use a model with \(T_{\rm eff}=20,000\,{\rm K}\), \(\log g=4.0\,[{\rm cgs}]\), and assume that it moves with \(K_{1,{\rm true}}=50\,{\rm km\,s^{-1}}\). To be conservative, we convolve the emergent spectrum of the model with \(v\sin i=300\,{\rm km\,s^{-1}}\) and a macroturbulent velocity of \(v_{\rm mac}=30\,{\rm km\,s^{-1}}\). Finally, motivated by the results in Section 3, we adopt a low light contribution for the primary of \(l_{1}=0.1\). The mock observations use the exact S/N values and phases of the original spectra, and are degraded to the MUSE resolution and sampling.
We then attempted to derive \(K_{1}\) through \(\chi^{2}\) minimisation. However, the \(\chi^{2}\) map is virtually flat, implying that \(K_{1}\) cannot be retrieved. Given the low spectral resolution of the data, the intrinsic and rotational broadening of the lines, and the modest S/N, this is not surprising.
\begin{table}
\begin{tabular}{c c} \hline Period \(P_{\rm orb}\) & \(5.0402\pm 0.0004\) d \\ Velocity semi-amplitude \(K_{2}\) & \(175.6\pm 2.6\,{\rm km/s}\) \\ Barycentric radial velocity \(v_{0}\) & \(253.30^{+2.9}_{-1.4}\,{\rm km/s}\) \\ Mass function \(f\) & \(2.83^{+0.14}_{-0.12}\,{\rm M_{\odot}}\) \\ Eccentricity \(e\) & \(0.029^{+0.010}_{-0.014}\) \\ \hline \end{tabular}
\end{table}
Table 2: Revised properties of NGC1850 BH1
Figure 2: \(\chi^{2}(K_{1},K_{2})\) from disentangling the spectra in the wavelength region from 8570 to 8910 Å. A slight correlation between \(K_{1}\) and \(K_{2}\) is observed (see the dashed green line).
However, disentangling can still be performed assuming various values of \(K_{1}\). In practice, the \(K_{1}\) value has a very small impact on the spectral appearance of the disentangled spectra, as long as it is varied within a plausible range. To illustrate this, in Figure 5 we show the results from three disentangling experiments on the mock data, varying \(K_{1}\) between 20 and 150 km s\({}^{-1}\). The spectra are virtually indistinguishable. This apparent insensitivity to \(K_{1}\) is the result of the broad profiles of the simulated primary and the low spectral resolution of the data.
Evidently, while we cannot retrieve \(K_{1}\), the method yields a spectrum for the hidden primary that matches the original template reasonably well. Some differences are apparent for the primary, which are intrinsic to the method. Since the lines are constantly blended, the disentangling procedure is bound to have some cross-contamination between the stars. However, we note that the differences are boosted by a factor of ten due to the faintness of the primary, such that the deviations seen in Figure 5 amount to deviations of the order of a few percent with respect to the mock observations. The exact sensitivity down to which we could detect companions is difficult to establish, since it depends on the stellar parameters, rotation, and light contribution of the primary. However, the experiment described here illustrates that we would very likely be able to detect companions contributing more than \(\approx 5-10\%\) to the light.
We visually illustrate how faint the primary (unseen) star must be in terms of magnitude to be undetectable in the MUSE spectra, based on the results of the spectral disentangling. In the left panel of Figure 6 we present the CMD of NGC 1850, where a MIST isochrone (Choi et al., 2016) of the appropriate age is overplotted to guide the eye. A red star indicates the position of NGC1850 BH1 (F438W = 16.7, F814W = 16.6), while the red solid line marks
Figure 4: Disentangled spectra of the primary and secondary in NGC1850 BH1, obtained for \(K_{2}=175.6\) km s\({}^{-1}\)and \(K_{1}=41\) km s\({}^{-1}\), and assuming a light contribution of \(l_{1}=10\%\) for the unseen primary.
Figure 5: The disentangled spectra of the bright secondary (top) and faint primary (bottom) of our simulated binary, compared with the input templates. The disentangling was performed using the same input orbital parameters as used for the simulation, but for \(K_{1}=20,50,\) and 150 km s\({}^{-1}\)(see legend), illustrating the minor impact of \(K_{1}\) on the spectral appearance of the secondary.
Figure 3: Disentangled spectra of the primary and secondary for the H i \(\lambda\lambda 8863, 8750, 8665\) lines (top, middle, and bottom panels) and their sum, compared to observations at radial velocity extremes (left and right panels). The spectra are calculated for \(K_{2}=175.6\) km s\({}^{-1}\) and \(K_{1}\approx K_{2}/4\) (41 km s\({}^{-1}\)). The spectra are not scaled by the light ratio in this figure. The results depend weakly on \(K_{1}\). The spectrum of the primary appears featureless, with the possible exception of H i \(\lambda 8750\).
the magnitude level of a MS star (F438W = 19.2, F814W = 19.1) corresponding exactly to 10% of the brightness of NGC1850 BH1, the limit set by the spectral disentangling.
The conclusions of these tests are twofold: First, if a non-degenerate stellar companion is present in the binary, as suggested by El-Badry & Burdge (2022), it is fainter than \(\approx 10\%\) of the system in the visual. Second, even if a non-degenerate companion is present, it cannot significantly contaminate the spectrum due to its faintness. The stellar parameters determined for the luminous secondary using the co-added observations (which are virtually identical to the disentangled spectrum) should therefore represent the secondary well, unless its light is diluted by an additional source (e.g., excess emission stemming from a disk). In fact, if there is an accretion disk in NGC1850 BH1 orbiting the invisible source, then the results presented above would have to be revised to account for this additional component. An extensive discussion of this aspect will be provided in Section 5.
## 4 Minimum mass of the secondary and its implications for the primary
An additional hint about the nature of the invisible source can be derived from the analysis of the visible secondary star. The two alternative scenarios that have been proposed to explain NGC1850 BH1 assume very different masses for the visible source (a normal 5 M\({}_{\odot}\) MS star vs a 1-2 M\({}_{\odot}\) bloated stripped star). El-Badry & Burdge (2022) argued that the secondary star, if it is indeed filling its Roche lobe, must be less massive than the 5 M\({}_{\odot}\) adopted by Saracino et al. (2022). According to Eggleton (1983), who derived a formula for the mean density of Roche-lobe-filling stars, a 5 \(M_{\odot}\) star would be too large (and hence too luminous) to satisfy the photometric constraints. Therefore, we are inclined to adopt the scenario in which the secondary is a low-mass post-mass-transfer star. Unfortunately, deriving its actual current mass is not possible with the available data, because it is unknown whether, and by how much, other sources contribute to the observed magnitudes. Because of this limitation, in this work we define a physically motivated minimum mass for the secondary star and adopt this value in the subsequent analysis. This lower limit directly translates into a minimum mass for the unseen primary star as well, once the inclination of the system is known or can be set to a reasonable value.
From the OGLE light curves available for NGC1850 BH1 and presented in Saracino et al. (2022), the system does not show any evidence for total or partial eclipses. This might be the case for two reasons: 1) the binary system is made of two luminous sources but it has an inclination such that one source never obscures the other; 2) the unseen source is a dark object (e.g. a BH) so it does not produce eclipses regardless of how the system is inclined. We note here that if the BH is surrounded by an accretion disk, eclipses are still expected unless the system has a geometric configuration similar to that of 1).
According to Beech (1989), in a binary system, the geometric condition for eclipses not to occur is the following:
\[\cos(i)>(R_{1}+R_{2})/a, \tag{3}\]
where \(a\) and \(i\) are the semi-major axis and the inclination of the system, and \(R_{1}\) and \(R_{2}\) are the radii of the primary and secondary stars, respectively. Given the orbital solution from the radial velocity curve and the constraints on the radius of the luminous (secondary) star (4.9 R\({}_{\odot}\leq R_{2}\leq\) 6.5 R\({}_{\odot}\)), which account for observational uncertainties and for the possibility that a second luminous star contributes to the observed photometry, the lack of eclipses in the OGLE light curves places a limit of \(i\leq 67^{\circ}\) on the inclination of NGC1850 BH1 (see also El-Badry & Burdge 2022).
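As a rough check of this limit (our own sketch; the assumed primary radius and masses below are illustrative, not values fitted in this work), Kepler's third law gives the semi-major axis, and equation (3) then caps the inclination:

```python
import numpy as np

AU_IN_RSUN = 215.032  # 1 au in solar radii

def semi_major_axis_rsun(m_tot_msun, period_days):
    # Kepler's third law in solar units: a^3 [au] = M_tot [Msun] * P^2 [yr].
    a_au = (m_tot_msun * (period_days / 365.25) ** 2) ** (1.0 / 3.0)
    return a_au * AU_IN_RSUN

def max_inclination_deg(r1_rsun, r2_rsun, a_rsun):
    # No eclipses occur when cos(i) > (R1 + R2) / a  (equation 3).
    return np.degrees(np.arccos((r1_rsun + r2_rsun) / a_rsun))

# Illustrative inputs: M1 = 5.17 Msun, M2 = 1 Msun, P = 5.0402 d (Table 2),
# R2 = 6.5 Rsun (upper limit), and an *assumed* R1 ~ 2.5 Rsun for the primary.
a = semi_major_axis_rsun(5.17 + 1.0, 5.0402)
print(a, max_inclination_deg(2.5, 6.5, a))  # a ~ 23 Rsun, i_max ~ 67 deg
```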
In Figure 7 we show, as red solid lines, the mass of the primary (unseen) component as a function of the mass of the luminous secondary component, based on the newly measured binary mass function and on equation (2) above, for two different inclinations: when the binary is seen edge-on (\(i=90^{\circ}\)) and when the inclination is \(i=67^{\circ}\), as labelled in the plot. The red shaded area is the region of this parameter space where eclipses are expected to occur, while the white area above it is where no eclipses are expected to be observed. In other words, the red solid line at \(i=67^{\circ}\) sets the lower limit to the mass of the primary (as a function of the secondary) if the primary is a star or a BH with an accretion disk. If the primary is instead a dark compact object not surrounded by an accretion disk, the reference line to be considered is the red solid line at \(i=90^{\circ}\).
El-Badry & Burdge (2022) estimate the current mass of the luminous component (a bloated stripped star in their model) to be \(\sim 1-2\) M\({}_{\odot}\). In their model, the secondary (initially the more massive) component would have recently left the MS and begun expanding. During this expansion, the star would have filled its Roche lobe, and mass transfer onto the primary companion would have followed. At the age of NGC 1850 (\(\sim 100\) Myr), this implies that the initial mass of the secondary would have been \(\sim 5\) M\({}_{\odot}\). This is consistent with what Gotberg et al. (2018) found in their binary interaction models: when assuming an initial mass of \(\sim 5\) M\({}_{\odot}\) for the secondary star, the final mass of the stripped star after the mass transfer turns out to be \(\sim 1-2\) M\({}_{\odot}\). Although a mass range
Figure 6: _Left panel:_ (F336W-F438W, F438W) CMD of NGC 1850, with a MIST isochrone of 100 Myr overplotted. NGC1850 BH1 is represented by a red star in the figure. The red solid line shows the magnitude level corresponding to a brightness of only 10% of our target (the limit derived by the spectral disentangling). _Right panel:_ The CMD of NGC 1850 with the same MIST isochrone overplotted. The closed vs open blue dots indicate the magnitude level of a standard (non-interacting) MS star with a mass of 5 M\({}_{\odot}\) (i.e. the mass of a star as bright as NGC1850 BH1) and 4.7 M\({}_{\odot}\) (i.e. the minimum mass for the primary star in NGC1850 BH1, see the text), respectively. The closed vs open green dots instead show how the same two stars appear once the rejuvenation factor due to mass transfer in the binary is applied. They are \(\approx 1.5\) mag fainter, but still above (by 1 mag and 0.4 mag, respectively) the red solid line which defines the 10% brightness limit in F438W.
between 1 and 2 M\({}_{\odot}\) is in agreement with current binary evolution models (e.g. Gotberg et al., 2018), it is worth mentioning here that there is not yet enough information about the binary system to allow us to assign a value to the present mass of the visible star. What we do here instead is test how massive the primary would have to be under the assumption of a given secondary mass (in particular the value of 1 M\({}_{\odot}\) proposed by El-Badry & Burdge, 2022).
Indeed, by assuming a current mass of the stripped star of \(\sim 1-2\) M\({}_{\odot}\) and by applying the no-eclipse condition, the mass of the unseen component is \(>5\) M\({}_{\odot}\). In particular, for a 1 M\({}_{\odot}\) secondary mass, the primary has a minimum mass of \(M_{1}=5.17\) M\({}_{\odot}\), as highlighted with a black dashed line in Figure 7. An accretor star of this mass would be as luminous as the visible star itself, so its contribution should be clearly detectable in the MUSE spectra according to the brightness limit set by the spectral disentangling (see Figure 6, left panel).
Based on the BPASS models (Eldridge & Stanway, 2016), Stevance et al. (2022) pointed out that in a post-mass-transfer system, the star that has gained a considerable amount of mass from the companion does not look like a standard (non-interacting) MS star of the same mass and age in terms of brightness, but instead experiences an episode of rejuvenation, so that in the end it looks much fainter (by up to \(\approx 2.3\) mag in the optical filters). The uncertainties associated with this process are quite large, and different binary models tend to predict different scenarios. For example, using the MESA binary evolution models (Paxton et al., 2015), Wang et al. (2020) recently found that stars that gained a significant amount of mass from their companions in binaries are systematically brighter and more rapidly rotating than they were pre-interaction. This shows that the rejuvenation factor in binary models is still an open question.
While the real factor is somewhat uncertain, we choose to be conservative here and adopt a rejuvenation factor of 1.5 mag (i.e., the star appears 1.5 mag fainter) in the analysis hereafter. Under this assumption, a primary of \(M_{1}=5.17\) M\({}_{\odot}\) is significantly fainter than expected from single-star evolution (F438W \(\sim\) 18.2 vs F438W \(\sim\) 16.7) but still well detectable in the spectra, as it is one magnitude brighter than the 10% brightness limit of F438W = 19.2 imposed by the spectral disentangling\({}^{2}\). This is shown in the right panel of Figure 6, where the position of a standard (non-interacting) MS star as bright as NGC1850 BH1 is shown as a closed blue dot, while the same star, rejuvenated, is shown as a closed green square, overplotted on the CMD of NGC 1850. The impact of the rejuvenation factor on the brightness of the primary (unseen) source will be illustrated more clearly later in this section, when we define the minimum mass that the primary can assume.
Footnote 2: Since the 10% flux limit was determined using Paschen lines (\(\lambda>7800\) Å), we verified that the same behavior is also observed using the F814W filter. In fact, the 10% limit corresponds to F814W = 19.1, while a primary as bright as the visible star in NGC1850 BH1 would appear at F814W = 16.6 + 1.5 = 18.1, still one magnitude above the limit.
Both Stevance et al. (2022) and El-Badry & Burdge (2022) explored evolutionary scenarios that could produce the binary NGC1850 BH1, using BPASS (Eldridge & Stanway, 2016) and MESA (Paxton et al., 2015), respectively. They could reproduce the observational properties of the binary system by assuming that the luminous star is a bloated stripped star of \(\sim 1\) M\({}_{\odot}\). While it seems very unlikely that the secondary star is significantly less massive than predicted in their models, we want to be conservative here and set the minimum mass for the visible star in NGC1850 BH1 at \(M_{2}=0.65\) M\({}_{\odot}\). This is the lower limit that El-Badry & Burdge (2022) derived on the basis of the minimum possible radius (\(R_{2}=4.9\) R\({}_{\odot}\)) this star can assume, given its temperature, its CMD position, and the possible presence of a luminous primary contributing to the photometry. Moreover, since the observed spectrum of the luminous source contains prominent hydrogen lines and closely resembles that of a normal B-type star, it is reasonable to infer that this source has not yet been completely stripped of its hydrogen envelope, hence its mass cannot be very low.
By imposing a lower limit of 0.65 M\({}_{\odot}\) on the secondary mass, we consider any mass below this threshold as non-physical; this region is shown as a grey shaded area in Figure 7. The dotted line in the figure shows that a secondary mass of \(M_{2}=0.65\) M\({}_{\odot}\) directly translates into a mass of \(M_{1}=4.71\) M\({}_{\odot}\) for the primary (unseen) component, based on the mass function of the system. Assuming it is a normal MS star (F438W \(\sim 17.3\), F814W \(\sim 17.25\)), this mass corresponds to a brightness of \(\sim 56\%\) of that observed for NGC1850 BH1 in the visual (F438W). If we take into account the rejuvenation factor, which makes this star appear fainter by 1.5 mag, we deduce for it a magnitude F438W = 17.3 + 1.5 = 18.8 (F814W = 18.75), corresponding to \(\sim 14.5\%\) of the brightness of NGC1850 BH1. It would be barely, but still, observable in the spectrum, given the 10% limit found by the disentangling. To illustrate this, the right panel of Figure 6 shows the CMD of NGC 1850, with the MIST isochrone appropriate for the cluster. The open blue dot in the figure represents the F438W magnitude of a primary (unseen) star with the minimum allowable mass, 4.71 M\({}_{\odot}\) (assuming it to be a MS star). The open green square instead shows its position once the rejuvenation factor of 1.5 mag, due to mass transfer in the binary, is applied. Even for the lowest possible (hence faintest) primary mass, this star is expected to be visible, as it is still brighter (by 0.4 and 0.35 mag in F438W and F814W, respectively) than the
Figure 7: Secondary mass \(M_{2}\) vs primary mass \(M_{1}\) for NGC1850 BH1. The red shaded area defines the region where eclipses would be observed, while for \(i=67^{\circ}\) or below (top white area) no eclipses are expected. Our target, NGC1850 BH1, does not show eclipses; assuming \(M_{2}=1\) M\({}_{\odot}\) as suggested by El-Badry & Burdge (2022), we obtain a minimum mass \(M_{1}=5.17\) M\({}_{\odot}\) for the primary (black dashed line). The dotted line shows the same for the lowest plausible secondary mass, \(M_{2}=0.65\) M\({}_{\odot}\), also motivated by El-Badry & Burdge (2022). The grey shaded area marks the physically motivated lower limit to the mass of the secondary star (see text for details). The white region at the bottom is excluded by the measured binary mass function.
limit set by the disentangling (here shown as a red solid line). More massive, rejuvenated, primary stars would appear even brighter, thus even easier to detect in the spectra.
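The percentage brightnesses quoted above follow directly from the magnitude differences; a minimal check (our own sketch):

```python
def flux_ratio(m_faint, m_bright):
    # Convert a magnitude difference into a flux ratio.
    return 10 ** (-0.4 * (m_faint - m_bright))

print(flux_ratio(17.3, 16.7))        # ~0.57 -> "~56%" (normal MS primary)
print(flux_ratio(17.3 + 1.5, 16.7))  # ~0.14 -> "~14.5%" (rejuvenated primary)
```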
From this analysis we conclude that, if the system contains two luminous stars, then for any reasonable present-day mass of the visible secondary we must invoke the presence of an additional component in the system which, by shielding the light of the massive luminous primary, causes it to contribute very little to the total light.
## 5 The unseen source in NGC1850 BH1
The combination of i) the measured high binary mass function (\(f=2.83^{+0.14}_{-0.12}\) M\({}_{\odot}\)), ii) the lack of observed eclipses in the OGLE optical light curves, and iii) the lower limit of \(M_{2}=0.65\) M\({}_{\odot}\) set on the mass of the visible star implies that the primary (unseen) companion of NGC1850 BH1 has a mass \(M_{1}>4.71\) M\({}_{\odot}\). A standard (non-accreting) MS star of such a mass would be more than half as bright as the observed system, hence its contribution would be easily detectable in the MUSE spectra of the visible companion. However, when the rejuvenation factor of 1.5 mag is taken into account, the brightness of this source drops from 56% to 14.5% of that of NGC1850 BH1 (see Figure 6, right panel). It is important to note that, although much fainter, it does not fall below the 10% brightness constraint we derived from the spectral disentangling. In other words, the contribution of the primary in this configuration is expected to be faint but still detectable in the spectra.
To reconcile this result with the fact that we do not observe any contribution from the primary in the analysis of the MUSE spectra, there are two viable possibilities to explore: First, the unseen primary is a non-luminous compact object, and since its minimum mass is higher than the maximum mass of a neutron star (\(M\lesssim 3\) M\({}_{\odot}\); Lattimer & Prakash 2001), it is a BH. Second, the unseen primary is a rather massive luminous source enshrouded in a thick accretion disk which absorbs part of its optical light, making it undetectable. Which of these two possibilities is to be preferred is unclear to date, but the scope of this paper is to present the current knowledge about NGC1850 BH1 and suggest possible ways to distinguish one scenario from the other with further observations.
As already noted by Saracino et al. (2022), the NGC1850 BH1 system belongs to a class of objects called Double Period Variables (DPVs, Mennickent et al. 2003), i.e. it shows two periodicities, one about 33 times longer than the other. There is not much literature on DPVs, and the origin of the longer periodicity is still unknown; however, the general consensus is that these systems are semi-detached (one of the two components fills its Roche lobe) and made up of two stars, one of which (the gainer) is typically a B-type star surrounded by an accretion disk. In one specific case, HD 170582, it has been suggested that the gainer is bright and massive and should contribute nearly 50% of the total system light, but since it is encased in an optically thick disk that almost completely obscures it, it contributes only about 10%, thus becoming barely detectable (Mennickent et al. 2015). For the sake of completeness, it is worth mentioning that the accretion disk (\(\sim 21\) R\({}_{\odot}\)) and the semi-major axis (\(\sim 61\) R\({}_{\odot}\)) deduced for HD 170582 are much larger than allowed by the configuration of NGC1850 BH1. This is one example; an in-depth comparison of NGC1850 BH1 with the properties of other DPV systems is beyond the scope of this paper.
By analyzing archival near-infrared HST/WFC3 data of NGC1850 BH1 (from F105W to F160W), we measured a 2\(\sigma\) excess in both F140W and F160W compared to other cluster members at a similar position in the CMD, which supports the presence of a third component in the system, namely an accretion disk. Figure 8 shows an optical/near-infrared (F438W-F160W, F438W) CMD of NGC 1850, where the position of NGC1850 BH1 is presented as a red star. As a comparison, we highlight in green and yellow, respectively, a sample of Be and shell stars of NGC 1850 studied in Kamann et al. (2023). Shell stars are Be stars (i.e. rapidly rotating B stars) observed (partially) through their disks (Rivinius et al. 2006, 2013). As shown in the figure, both Be and shell stars exhibit near-infrared excesses, i.e. they are systematically located at redder colours than normal stars of similar magnitudes. This excess is believed to be mainly caused by emission from their disks. NGC1850 BH1 shares a similar colour, hence a similar near-infrared excess, with many of the shell stars in the cluster. Although the nature of NGC1850 BH1 is very different from that of shell stars, an analogy between these sources can still be drawn: the observed excess of the binary provides further support for the existence of a disk in the system. Additional constraints (e.g. the expected slow rotation of the secondary (luminous) star for a synchronized binary) suggest that, if present, the disk is around the primary (unseen) star.
If the evidence for an accretion disk around the unseen source in NGC1850 BH1 is confirmed by further studies, this will unfortunately still not provide a final answer as to what the unseen component is. Given the high probability that NGC1850 BH1 is a post-mass-transfer system, it would be equally plausible to have a disk around a BH or around a massive luminous star. The only way to effectively discriminate between the two scenarios is to measure the temperature of the putative disk itself, as it is expected to be very different in the two configurations. In particular, if a 5 M\({}_{\odot}\) luminous star (with \(T_{\rm eff}\sim 15,000\) K) is enclosed in an optically thick disk, the light it emits is almost completely absorbed by the disk at optical wavelengths and re-emitted at infrared wavelengths. The system thus becomes particularly bright in the near- and mid-infrared, given the lower extinction of starlight at those wavelengths and the added contribution of the disk, which is expected to be much cooler than the star itself (\(T_{\rm eff}\ll 15,000\) K). Alternatively, if a BH is part of the system, the properties of the accretion disk are significantly different, with a temperature likely higher than 15,000 K near the inner edge, but decreasing with radius as predicted by Shakura & Sunyaev (1973). Near- and mid-infrared observations of the NGC1850 BH1 system, such as those recently secured with the new ERIS/NIX imager at the VLT (Davies et al. 2018), will help to investigate this aspect in more detail.
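To illustrate the expected contrast, here is a rough sketch of the steady, optically thick Shakura & Sunyaev (1973) temperature profile; the accretion rate and inner radii below are purely illustrative, not fitted values for NGC1850 BH1:

```python
import numpy as np

G = 6.674e-11           # m^3 kg^-1 s^-2
SIGMA_SB = 5.670e-8     # W m^-2 K^-4
C_LIGHT = 2.998e8       # m s^-1
MSUN, RSUN = 1.989e30, 6.957e8
YEAR = 3.156e7          # s

def disk_teff(r, m_msun, mdot_msun_yr, r_in):
    """Effective temperature [K] at radius r [m] of a steady alpha-disk."""
    m = m_msun * MSUN
    mdot = mdot_msun_yr * MSUN / YEAR
    t4 = 3 * G * m * mdot / (8 * np.pi * SIGMA_SB * r**3) * (1 - np.sqrt(r_in / r))
    return t4 ** 0.25

m, mdot = 5.0, 1e-8                       # accretor mass [Msun], rate [Msun/yr]
r_in_star = 3 * RSUN                      # disk truncated at a stellar surface
r_in_bh = 6 * G * m * MSUN / C_LIGHT**2   # ~ISCO of a 5 Msun black hole
print(disk_teff(2 * r_in_star, m, mdot, r_in_star))  # ~1.4e3 K: cool, IR-bright
print(disk_teff(2 * r_in_bh, m, mdot, r_in_bh))      # ~4e6 K: hot inner disk
```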
## 6 Conclusions
The main results of this study can be summarised as follows:
* We have discovered a systematic bias in the measured radial velocities, which we traced to the weighting scheme adopted in SPEXXY. We have provided updated radial velocity measurements for each epoch. Based on the new modelling of the radial velocity curve, we have updated the radial velocity semi-amplitude to \(K_{2}=175.6\pm 2.6\) km s\({}^{-1}\).
* The semi-amplitude thus derived, larger by 20%, significantly increases the mass function of the system, to \(f=2.83^{+0.14}_{-0.12}\) M\({}_{\odot}\).
* From spectral disentangling we find that only one source contributes significantly to the spectrum, i.e., any putative luminous companion (the unseen primary) contributes at most 10% to the optical flux of the system.
* The secondary (visible) star is most likely a low-mass post-mass transfer star, but the information available so far does not allow us to assign a value to the present mass of this binary component. Indeed, it is unknown if and how much (other) sources contribute to the observed magnitudes.
* Based on the new binary mass function, the lack of observed eclipses in the light curves of NGC1850 BH1, and the constraints on the luminosity of the system's components, there are two viable possibilities for the unseen component in NGC1850 BH1: 1) a BH, with a mass \(>3\) M\({}_{\odot}\), possibly surrounded by an accretion disk; or 2) a bright, rejuvenated star with a minimum mass of \(M_{1}\sim 4.7\) M\({}_{\odot}\), enshrouded in an optically thick disk that partially absorbs its light so that it is undetectable in the currently available spectra.
* NGC1850 BH1 is a DPV and appears to show an excess in the near-infrared, which can be interpreted as evidence for the presence of a disk in the system. However, both scenarios are still equally likely. Constraining the properties of the disk (e.g. temperature, size) will be one good way to shed more light on the nature of the invisible source.
* A scenario in which the primary (unseen) component in NGC1850 BH1 is a BH faces substantial issues regarding its evolutionary history, if we assume a binary origin for it as in the BPASS and MESA models: the initial period of the binary would be shorter than physically allowed given the sizes of the individual components. A possible caveat of these models, however, is that they only consider isolated binaries, and include neither hierarchical triples/quadruples nor the effects of dynamical interactions in clusters; the latter might well be relevant for NGC1850 BH1, which belongs to NGC 1850. In conclusion, since neither the exact configuration of the binary (in terms of \(M_{1}\), \(M_{2}\), mass ratio, etc.) nor its evolutionary history is known, we unfortunately cannot draw any definitive conclusion on this aspect.
In a future study we will present detailed modelling of the OGLE light curves of NGC1850 BH1, including the presence of an accretion disk. This will provide stricter constraints on the nature of both the luminous secondary star and the unseen primary companion, substantially narrowing the allowed parameter space. Moreover, this work clearly shows the urgent need for further and more detailed studies of this peculiar binary system. They would help to investigate several important but still open aspects. First, high-resolution spectroscopy with a wide wavelength coverage will be essential 1) to apply the disentangling technique so as to detect companions contributing as little as \(\approx 1-2\%\) to the visual flux of the system; 2) to study the properties of the luminous (secondary) component (e.g. surface gravity, rotational velocity, chemical abundances); 3) to assess whether the putative disk, if present, dilutes the companion in all bands in a similar way; 4) to place unprecedented constraints on the rejuvenation episodes that occur in binary systems when one of the two sources gains a significant fraction of mass from the companion. The rejuvenation factor is a very uncertain parameter, and limiting its allowed range would be a great achievement for future binary evolution studies. Second, near-infrared high-resolution photometry will be important to investigate the detailed properties of the putative accretion disk in the system, for example its radius and temperature.
Those mentioned above are essential steps in deciphering the properties of the unseen source in NGC1850 BH1.
## Acknowledgements
We thank the anonymous referee for the careful reading and analysis of the paper. We thank Kareem El-Badry and Heloise F. Stevance for the insightful discussions on the system. We are also thankful to Matti Dorsch and Ulrich Heber for their contributions to the analysis of the MUSE spectra. SS acknowledges funding from STFC under the grant no. R276234. TS acknowledges support from the European Union's Horizon 2020 under the Marie Sklodowska-Curie grant agreement No 101024605. SK acknowledges funding from UKRI in the form of a Future Leaders Fellowship (grant no. MR/T022868/1). MG acknowledges support from the Ministry of Science and Innovation (EUR2020-112157, PID2021-1254858NB-C22, CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033) and from AGAUR (SGR-2021-01069). CU acknowledges the support of the Swedish Research Council, Vetenskapsradet. This research has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (H.S., grant agreement 772225: MULTIPLES).
## Data availability
The data underlying this work are already publicly available to the community. The updated radial velocity measurements are instead listed in Table 1.
Figure 8: HST/WFC3 near-infrared CMD of NGC 1850. All stars in the cluster are shown as grey dots. Our target, NGC1850 BH1, is instead highlighted as a red star, along with a sample of Be and shell stars spectroscopically identified in the cluster by Kamann et al. (2023) and presented as green and yellow dots, respectively. Both Be and shell stars show a near-infrared excess due to disk emission. Although the nature of NGC1850 BH1 is very different, this system exhibits a similar color to many of the shell stars, supporting the idea that an accretion disk is also present in this system. |
2310.07251 | Ethical Reasoning over Moral Alignment: A Case and Framework for
In-Context Ethical Policies in LLMs | In this position paper, we argue that instead of morally aligning LLMs to
a specific set of ethical principles, we should infuse generic ethical reasoning
capabilities into them so that they can handle value pluralism at a global
scale. When provided with an ethical policy, an LLM should be capable of making
decisions that are ethically consistent with the policy. We develop a framework
that integrates moral dilemmas with moral principles pertaining to different
formalisms of normative ethics, and at different levels of abstraction.
Initial experiments with GPT-x models show that while GPT-4 is a nearly
perfect ethical reasoner, the models still have bias towards the moral values
of Western and English speaking societies. | Abhinav Rao, Aditi Khandelwal, Kumar Tanmay, Utkarsh Agarwal, Monojit Choudhury | 2023-10-11T07:27:34Z | http://arxiv.org/abs/2310.07251v1 | Ethical Reasoning over Moral Alignment: A Case and Framework for In-Context Ethical Policies in LLMs
###### Abstract
In this position paper, we argue that instead of morally aligning LLMs to a specific set of ethical principles, we should infuse generic ethical reasoning capabilities into them so that they can handle value pluralism at a global scale. When provided with an ethical policy, an LLM should be capable of making decisions that are ethically consistent with the policy. We develop a framework that integrates moral dilemmas with moral principles pertaining to different formalisms of normative ethics, and at different levels of abstraction. Initial experiments with GPT-x models show that while GPT-4 is a nearly perfect ethical reasoner, the models still have a bias towards the moral values of Western and English-speaking societies.
## 1 Introduction
Consider the following **Monica's Dilemma**:
_Aisha and Monica are close friends who have been working together on a research project. Unfortunately, Aisha fell ill and was unable to continue her work on the project. Monica took on most of the work and successfully completed the project, making significant contributions and deserving to be listed as the first author of the research paper that they are planning to write._
_As the deadline for PhD program applications approached, Aisha expressed her concern to Monica that unless she, Aisha, is listed as a first author in this research paper, her chances of getting accepted into a program of her interest was low._
_Should Monica give Aisha the first authorship?_ Suppose that Monica is confused and asks Chat-GPT1Schulman et al. (2022) for help. If we prompt ChatGPT to give a concrete answer, it says:
Footnote 1: [https://chat.openai.com](https://chat.openai.com)
_"Monica should not give Aisha the first authorship solely based on Aisha's request, especially if Monica has made significant contributions and deserves to be listed as the first author according to the principles of scientific publishing..."_ However, if we further tell ChatGPT that Monica _values concern for the well-being of others more than fidelity to professional responsibilities_, then it says:
_"[Monica] may consider giving Aisha the first authorship. However, it is important to note that this decision may come with potential ethical implications..."_ and argues further to convince that Monica should retain the first authorship.
This hypothetical example raises fundamental questions regarding Large Language Models (LLMs). First, should LLMs take a moral stance when faced with questions like the one above? If yes, then who should define this stance? And if not, then how should the model respond to such queries?
As LLMs and their applications become more ubiquitous across domains (Chui et al., 2023), from marketing and sales to product R&D and software engineering, from healthcare to education, numerous such ethical decisions have to be taken every moment. Imagine an LLM deployed to help respond to and moderate conversations on an online forum for HIV+ youths in Africa (Karusala et al., 2021), or one that helps farmers in India decide whether inorganic or organic pesticides are good for their context (Barik, 2023).
In this paper, we argue that LLMs should not be designed and developed to work with specific moral values because, as generic models, they are expected to be used for a variety of downstream applications, to be deployed across geographies and cultures, and to be used by a heterogeneous group of end-users. The moral stance taken during the decision-making process, which could even mean whether to show a specific auto-complete suggestion or not, should be decided by the various actors involved during the application development, deployment and usage phases. LLMs should be capable of generic and sound ethical reasoning, where,
given a situation and a moral stance, the model should be able to resolve the dilemma whenever possible, or ask for more specific inputs on the moral stance that are necessary for resolving it. In other words, we would like to argue against value alignment of LLMs, and instead make a case for generic support in LLMs for value alignment at the application development stage or by the end-user.
Researchers have brought out a host of ethical issues related to LLMs and the downstream tasks built on top of them, owing in part to their lack of transparency (Bender et al., 2021; Basta et al., 2019). There have been efforts towards _alignment_ of LLMs to avoid inappropriate, offensive or unethical use. However, due to _value pluralism_, as we shall demonstrate in this paper, extensive alignment is rather detrimental to the ethical reasoning ability of the models. An emerging and more suitable practice is to either build application-specific content filters and post-processing modules (Del Vigna et al., 2017; Ji et al., 2021), or to embed the moral principles and ethical policies in prompts (Schick et al., 2021). While the former is limited in power and in its ability to generalize across tasks, the latter depends on the ethical reasoning ability of the underlying LLM.
Here we propose a framework to specify ethical policies in prompts and a systematic approach to assess the ethical reasoning capability of an LLM. The framework consists of carefully crafted moral dilemmas reflecting conflicts between interpersonal, professional, social and cultural values, and a set of ethical policies that can help resolve the dilemmas one way or the other. The framework is agnostic to, and can therefore support, different approaches to normative ethics, such as _deontology_, _virtue_ and _consequentialism_, and policies can be specified at different levels of abstraction.
We evaluate 5 models in the GPT-x series, including GPT-4 and ChatGPT, and make several interesting observations: (a) the ethical reasoning ability of the models, in general, improves with their size, with GPT-4 having nearly perfect reasoning skills; (b) GPT-3 and ChatGPT have strong internal biases towards certain moral values, leading to poor reasoning ability; and (c) most models, including GPT-4, exhibit a bias towards the democratic and self-expression values mainly observed in Western and English-speaking societies, over the traditional and survival values characteristic of Global South and Islamic cultures (Inglehart and Welzel, 2010). We discuss the repercussions of these findings for designing ethically versatile and consistent future LLMs.
The key contributions of this work are as follows. (1) We present a case for decoupling ethical policies and value alignment from LLM training, and rather infusing generic ethical reasoning abilities into the models. (2) We develop an extensible formal framework for specifying ethical policies and assessing generic ethical reasoning capability of LLMs. (3) We create a dataset (shared in the appendix) and conduct an assessment of a few popular LLMs that reveal several gaps and biases.
## 2 A Primer on Ethics
Fairness in LLMs has been extensively studied (Blodgett et al., 2020). Researchers have warned against the potential risks associated with internal biases and the generation of toxic content (Gehman et al., 2020; Bender et al., 2021). Moreover, these risks extend beyond pre-existing data or the model itself, as malicious users can exploit and misuse such systems in various ways. An important question in this context, and more broadly for Responsible AI, concerns the definition of the ethical policies or principles that an AI system or LLM should follow, and who gets to define them. There is little agreement on definitions of bias (Blodgett et al., 2020), hate speech (Fortuna et al., 2020) and stereotypes (Blodgett et al., 2021). With the exception of a few works, such as SocialBiasFrames (Sap et al., 2020), Delphi (Jiang et al., 2021), and SocialChemistry101 (Forbes et al., 2020), that take a modular view of the ethical issues, most studies in the field approach the problem from the point of view of the task at hand, and therefore the frameworks, datasets, and systems are typically restricted to the context of the application.
A deeper and broader understanding of the problem of ethical alignment of LLMs necessitates a closer look at its contextualization in the vast landscape of _Ethics_. In this section, we provide a bird's eye view of the various approaches to ethics and notions such as value pluralism, that will be used in Section 3.4 to develop a generic framework for specifying ethical policies.
### Ethics: Theories and Definitions
_Ethics_ is the branch of philosophy that deals with what is morally good and bad, right and wrong. It also refers to any system or theory of moral values
or principles (Kant, 1977, 1996). There are different approaches to ethics, of which our main interest here is in _normative ethics_, which seeks to establish norms or standards of conduct for human actions, institutions, and ways of life. It can be divided into three main branches: _deontology_, _virtue_, and _consequentialism_. Deontological ethics (Alexander and Moore, 2021) focuses on the inherent rightness or wrongness of actions based on moral rules or duties. Virtue ethics (Hursthouse and Pettigrove, 2022) focuses on the character and virtues of the agent rather than on the consequences or rules of actions; the action taken should therefore reflect the virtue being valued or sought after. Consequentialism focuses on the goodness or value of the outcomes or goals of actions, rather than the actions themselves (Sinnott-Armstrong, 2022).
_Ethical dilemmas_ are situations where there is a conflict between two or more moral values or principles (Slote, 1985), and they can pose challenges for moral reasoning and decision-making.
Whether moral dilemmas exist in a consistent system of moral values is a question of much debate (McConnell, 1978). Philosopher Williams (1988) argues that _ethical consistency_ of a system of values does not preclude the possibility of moral dilemmas, because sometimes multiple actions which _ought_ to be done (e.g., "helping a friend" and "being the first author herself to maintain scientific integrity" in the Aisha-Monica credit-sharing dilemma) simply cannot all be done simultaneously. According to Williams, resolution of such dilemmas requires the agent to make new value judgements within the existing ethical framework.
One major component of ethical dilemmas is _value pluralism_: there are several values which may be equally correct, and yet in conflict with each other (James, 1891). Different individuals or cultures might weigh the values differently, leading to different resolutions of the dilemma which are all equally ethically sound and consistent. Inglehart and Welzel (2010), in their influential study, have mapped the cultures of the world onto a two-dimensional plot, where the x-axis represents the variation between survival ethics (left) and self-expression (right), and the y-axis ranges from tradition-based or ethnocentric moral views (bottom) to democratic and rational principles (top). With industrialization and development, a society typically moves diagonally through this plot from bottom-left to top-right.
There are many sub-schools of thought related to pluralism, such as Rossian pluralism (Ross and Stratton-Lake, 2002) and particularism (Hare, 1965). Rossian pluralists believe that moral principles are to be assessed based on their moral pros and cons. Particularists, on the other hand, believe that moral pros and cons can change depending on the situation. However, the most fundamental principle both schools of thought share is that there can be no all-encompassing principle that can resolve all moral conflicts, and no strict hierarchy of moral principles that can aid in doing so. This implies that there can be no common universal set of moral values or principles applicable across situations and individuals.
### Ethics Frameworks in NLP
Most work on ethics in NLP explicitly or implicitly assumes a deontological framing of the problem, where the moral rules are decided by the system developers (Talat et al., 2022). While useful in practice, such systems are not readily generalizable to other applications and contexts. They are even less applicable to LLMs, which are meant to be used for a variety of downstream applications.
Awad et al. (2022) propose the Computational Reflective Equilibrium (CRE) as a generic framework for AI-based ethical decision making. The framework introduces two key concepts: moral intuitions, representing judgments on specific cases, and moral principles, encompassing commitments to abstract moral rules. It presents a pipeline for aligning these concepts. The authors illustrate the framework's applicability through diverse case studies that highlight the importance of balancing conflicting values, formalizing ethics, and aligning AI systems with human ethics. Rahwan et al. (2019) provide a framework for AI that incorporates the influence of human and machine behaviors, discussing human-machine and machine-machine interactions at different scales of systems.
Sambasivan et al. (2021), Bhatt et al. (2022) and Ramesh et al. (2023) have raised questions around value-pluralism in AI and the need for recontextualizing the fairness and AI ethics discourse for the Global South. Diddee et al. (2022) discuss several ethical questions in the context of Language Technologies for social good. The work discusses the interaction between the stakeholders of a system and the system itself and provides a few approaches to the involvement, agreement, and exit strategy for
all stakeholders.
Choudhury and Deshpande (2021) apply consequentialism to argue that in the context of multilingual LLMs, the model selection follows the _utilitarian_ principle, which is unfair to low-resource languages. Instead, they propose the Rawlsian or _prioritarian_ principle of model selection, which can lead to linguistically fair multilingual LLMs.
## 3 Framework for Ethical Policies
### A Critique of Ethical Alignment
Figure 1 provides a simplified overview of the different aspects of an AI system that influence the definition as well as the operationalization of ethical policies. Simply put, an _ethical policy_ (defined formally in Section 3.4) is a set of moral principles and a preference ordering among them. We present three arguments against generic ethical alignment of LLMs, illustrated by the three colored triangles in the figure.
First, LLMs power an ecosystem of applications with multiple stakeholders and a heterogeneous end-user base (the pink triangle). Therefore, it is impossible to decide on a set of universal principles that they should be aligned to. In Section 2.1, we discussed that a universally consistent ethical system is impossible. Therefore, any LLM aligned to a particular set of moral values will be unable to generalize across applications, geographies, laws, and diverse communities (Dai and Dimond, 1998; Inglehart and Welzel, 2010).
Second, alignment requires datasets which, unless carefully crafted, will over-represent certain values over others (the yellow triangle). For instance, Liu et al. (2022) propose an _alignment_ of LLMs to human values using reinforcement learning techniques, relying on existing moral values datasets such as ETHICS (Hendrycks et al., 2023), Moral Stories (Emelin et al., 2021), and TruthfulQA (Lin et al., 2022). However, each of these datasets has a problem with bias and prescription: the ETHICS dataset assumes clear-cut morally right or wrong actions, when this may not always be the case; the Moral Stories dataset uses social norms pertaining mostly to the United States. In fact, just as languages are under-represented in multilingual LLMs (Choudhury and Deshpande, 2021), one can expect an under-representation of the values of the Global South and of minority groups.
Third, even if the above issues were resolved, one can always imagine specific applications that would require the model to respond in an ethically inconsistent or contradictory way (the blue triangle). For example, consider an LLM aligned to a policy that it _ends any conversation when toxic or rude behavior is detected_. Such a model could be useless for customer service applications, since most users exhibiting frustration would be turned away.
Thus, we contend that **LLMs should be value-neutral and sound ethical reasoners, while ethical alignment should be introduced at the level of applications and/or user interaction.**
### Implementing Flexible Ethical Policies
There are a few different strategies to ensure the value alignment of a system even when the underlying LLM is value-neutral. One popular approach is to treat the value-alignment problem outside of the LLM. This can be achieved through classifiers (Caselli et al., 2020; Mathew et al., 2020; Barbieri et al., 2020; Del Vigna et al., 2017; Ji et al., 2021) that flag the text going into and out of the LLM, so that appropriate action can be taken based on the policy directives. Another technique is to align the model through 'in-context' learning, i.e., prompting (Sap et al., 2020; Forbes et al., 2020; Schick et al., 2021).
The former methods, while more predictable in their outcome, have two major drawbacks: First, they curtail the power of LLMs by adding a layer of, often less powerful, post-processing modules; second, the classifiers and the datasets to train them have to be created afresh for every application, as ethical policies vary across tasks and
Figure 1: Aspects of an AI system that affects the definition, operationalization and implementation of ethical policies.
applications (Fortuna et al., 2020), which is a major challenge to scalability. The latter approaches, on the other hand, use the full potential of LLMs but could be prone to uncertainty in responses, to the model's potential inability to conduct sound ethical reasoning, or even to jailbreak attacks (Perez and Ribeiro, 2022; Gehman et al., 2020). One could also create a value-aligned LLM by fine-tuning or RLHF on policy-specific data. Computational cost and engineering complexities aside, this technique too necessitates task- and policy-specific data collection.
### A Framework for 'in-context' Ethical Policies
We now formally define a generic, extensible and flexible framework for specifying ethical policies in the LLM prompt. Suppose that an LLM \(\mathcal{L}\) takes a _prompt_ \(p\) and generates a (textual) _output_ \(y\leftarrow\mathcal{L}(p)\). Without loss of generality, we define \(p\) as an arbitrary composition (such as concatenation or template filling) \(P(\cdot)\) of the task definition \(\tau\), an ethical policy \(\pi\), and a user input \(x\). Thus, \(p=P(\tau,\pi,x)\).
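A minimal sketch of one possible realisation of \(P(\tau,\pi,x)\) as template filling (the template wording is ours, not prescribed by the framework):

```python
def compose_prompt(task: str, policy: str, dilemma: str) -> str:
    # One concrete choice of the composition P(tau, pi, x): template filling.
    return (f"{dilemma}\n\n"
            f"Ethical policy: {policy}\n"
            f"Task: {task} Answer strictly under the stated policy; if the "
            f"policy cannot resolve the dilemma, say so explicitly.")

p = compose_prompt(
    task="Decide whether Monica should give Aisha the first authorship.",
    policy=("Concern for the well-being of others is valued more than "
            "fidelity to professional responsibilities."),
    dilemma="Aisha and Monica are close friends who ...",  # x, abridged
)
```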
**Definition**_Ethical Consistency._ The generated output \(y\) of \(\mathcal{L}\) is said to be ethically consistent with the policy \(\pi\), iff \(y\) is a valid response or resolution to input \(x\) for the task \(\tau\) under policy \(\pi\). We shall represent this as: \(x\land\pi\land\tau\ \vdash_{e}\ y\) where, similar to logical entailment, \(\vdash_{e}\) represents _ethical entailment_.
For notational convenience, we will usually omit \(\tau\) from the representation. Thus, if \(x\) is the statement of the Aisha-Monica credit sharing dilemma, \(\pi\) is the policy statement - "_concern for the well-being of others is valued more than fidelity to professional responsibilities_", \(y\) = "_Monica should offer Aisha the first authorship_" is ethically consistent with \(x\land\pi\). However, \(\neg y\) = "Monica should not offer Aisha the first authorship" is not an ethically consistent output of the model.
In general, \(y\) and \(\neg y\) cannot be simultaneously ethically consistent with \(x\land\pi\). However, when a policy is underspecified or ambiguous with respect to the resolution of \(x\), it might lead to such inconsistencies in the system (see Williams (1988)). LLMs, in such cases, should not resolve the dilemma one way or the other. Instead, in our framework, we expect the LLM to state that a concrete resolution is not possible in the given situation. We introduce the special symbol \(\phi\) to indicate such responses. Thus, if \(\pi\) is underspecified, then \(\mathcal{L}(P(\tau,\pi,x))\rightarrow\phi\).
### Defining Ethical Policies
Ethical policies are defined as preferences over _moral values_ or _ethical principles_. There is no universally agreed-upon set of ethical principles. In order to keep our framework as generic and theory-neutral as possible, we allow policies to be defined on the basis of any ethical formalism, or a combination of them. For a given ethical formalism, say \(F\), let \(R^{F}=\{r_{1}^{F},r_{2}^{F},\ldots r_{n_{F}}^{F}\}\) be a set of basic or fundamental moral principles.
**Definition**_Ethical Policy._ An ethical policy \(\pi\) is defined as a partial order on a subset of elements in \(R^{F}\). More formally, \(\pi=(R_{s}^{F},\leq_{s}^{F});\quad R_{s}^{F}\subseteq R^{F}\) where \(\leq_{s}^{F}\) represents a non-strict partial order encoding the importance or priority of the ethical principles. A policy defined in this most abstract way we shall refer to as a **Level 2 policy**. For our running example, "_loyalty over objective impartiality_" would be an instance of a Level 2 policy based on virtue ethics.
Policies can be further specified by defining the _variables_ on which they apply. For instance, "_loyalty towards a friend over professional impartiality_" would imply that the virtue of "_loyalty_" is applied on "_friendship_" and that of "_impartiality_" on "_profession_". This we shall call a **Level 1 policy**. A policy could be specified even further by declaring the _values_ (not ethical/moral but values of variables) for which they are to be applied. For example, "_loyalty towards her friend Aisha over objectivity towards scientific norms of publishing_" clearly specifies the instances of the variables - "_friendship with Aisha_" and "_scientific publishing norms_", on which the virtues are to be applied. This we shall refer to as a **Level 0 policy**.
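One possible way to encode such policies as data is sketched below in Python; the class and field names are our own choices for illustration, not the paper's notation.

```python
# A sketch of encoding a policy pi = (R_s^F, <=_s^F) as data; the class
# and field names below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EthicalPolicy:
    formalism: str            # e.g. "virtue", "deontology", "consequentialism"
    order: list = field(default_factory=list)  # pairs (r_i, r_j): r_i >= r_j
    level: int = 2            # 2 = most abstract ... 0 = fully grounded

    def statement(self) -> str:
        """Render the partial order as a natural-language policy string."""
        return "; ".join(f"{hi} over {lo}" for hi, lo in self.order)

level2 = EthicalPolicy("virtue", [("loyalty", "objective impartiality")])
level1 = EthicalPolicy("virtue",
                       [("loyalty towards a friend", "professional impartiality")],
                       level=1)
print(level2.statement())   # -> loyalty over objective impartiality
```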
Level 2 policies could be ambiguous, leading \(\mathcal{L}\) to generate \(\phi\), while reasoning with Level 0 policies hardly requires any ethical deductions; it is primarily a linguistic and logical reasoning task. Level 1 policies require linguistic and logical as well as ethical reasoning, and can provide an optimal abstraction level for an ethical policy. Moreover, Level 0 policies are input (\(x\)) specific and apply only to very limited cases and extremely narrow-domain applications. Level 2 policies could be used across domains and applications, yet due to their ambiguous nature, without further specifications, they may not lead to concrete resolutions. Level 1 policies require domain-specific inputs (like variable declarations) but are likely to be practically useful and generalizable across tasks.
Note that in our framework, the policies are stated in natural language, though it is conceivable to have LLMs or AI systems that work with symbolic policies (defined with first-order logic, for example) or neural or soft policies defined by networks or vectors. Furthermore, nothing in our framework precludes the use of _hybrid policies_ that are specified using principles taken from different ethical formalisms (\(R^{F}\)) and instantiated at different levels of abstraction.
## 4 Assessing Ethical Reasoning Capability of LLMs
Here, we describe a small-scale experiment to assess the ethical reasoning capabilities of 5 LLMs in the GPT-x series, where we presented the models with moral dilemmas (\(x\)'s) that are to be resolved (= the task \(\tau\)) for given ethical policies (\(\pi\)).
### Experimental Setup
Datasets. We curated a dataset of four moral dilemmas, starting with the widely recognized _Heinz dilemma_ (Kohlberg, 1981), which is renowned in philosophy and exemplifies the clash between interpersonal and societal values. The other three dilemmas were designed by the authors to highlight conflicts between interpersonal vs. professional, and community vs. personal values, contextualized in diverse cultural and situational settings.
The "Monica's Dilema", introduced in Section 1, deals with the conflict between interpersonal values and professional integrity in a scientific research collaboration setup. "Rajesh's Dilema" highlights the conflict between personal preferences and society's cultural practices. Set in an Indian village, this dilemma presents Rajesh with the choice of either deceiving society to secure housing near his workplace or accepting the inconvenience of residing farther away to honor the cultural beliefs of potential neighbors. Finally, in "Timmy's Dilema", Timmy has to choose between the interpersonal responsibility of attending his best friends wedding as the officiator, or the professional responsibility of fixing a crucial bug that, if left unresolved, could jeopardize the platform's security and compromise customers' confidential data. For each dilemma, the LLM has to decide whether an agent should do a certain action or not.
Subsequently, we developed ethical policies for each of the four dilemmas at three distinct levels of abstraction and pertaining to three branches of normative ethics - Virtue, Deontology and Consequentialism - as outlined in Section 3.4. These \((3\times 3=9)\) policies, which are all of the form \(\pi=(r_{i}^{F}\geq r_{j}^{F})\), were appended with their respective complementary forms, \(\bar{\pi}=(r_{j}^{F}\geq r_{i}^{F})\), giving us 18 distinct policies per dilemma.
We have ideal resolutions (i.e., _ground truth_) for each dilemma under each policy, none of which are \(\phi\). These resolutions serve as expected responses that can be used to measure the ethical consistency of the LLM output.
In order to ensure clarity and comprehensibility of the dilemma and policy statements, we asked 5 independent annotators (18-42 years old, median age 24; 4 South Asian and 1 East Asian) to resolve the dilemmas under each policy as \(y\), \(\neg y\) or \(\phi\). Out of \((18\times 4=72)\) instances, annotators agreed with the ground-truth resolution in 45 to 51 (median: 47) cases. The majority label, when at least 3 annotators agree on a resolution, matched the ground truth in 58 cases (higher than any individual), indicating that this is a complex task for humans as well. Interestingly, for each dilemma, there was at least one annotator who agreed with the ground-truth resolutions in 17 out of the 18 cases, implying that the ability to resolve these dilemmas might depend on personal experience and relatability. All the dilemmas and the structure of the prompt can be found in Appendices A and B respectively.
Models. We evaluate OpenAI's GPT-x models2: GPT-3.5-turbo (ChatGPT), GPT-4, GPT-3 (davinci), text-davinci-002, and text-davinci-003. These models have different capabilities and training methods, as described below.
Footnote 2: [https://platform.openai.com/docs/models/how-we-use-your-data](https://platform.openai.com/docs/models/how-we-use-your-data)
For GPT-3, we used the davinci model, its most powerful version, trained on a large corpus of text from the internet using unsupervised learning.
text-davinci-002 and text-davinci-003 are two GPT-3.5 models. While text-davinci-003 excels in language tasks with improved quality, longer output, and consistent instruction-following trained using RLHF, text-davinci-002 achieves similar capabilities through supervised fine-tuning instead of RLHF.
GPT-3.5-turbo is a GPT-3.5 series model, optimized for chat at 1/10th the cost of
text-davinci-003. It is the same model used in ChatGPT.
GPT-4 is OpenAI's latest model, with a larger parameter count for enhanced expressiveness and generalization. We used the gpt-4-32k version, featuring 4x the context length, enabling more complex reasoning and coherent text generation.
**Experiments.** We conduct two sets of experiments. First, we conduct a baseline test where the models are prompted to respond to the dilemmas without any policy. This test is crucial to uncover the models' inherent biases or moral stances. In the second phase, we introduce the ethical dilemma along with the policy statement in the prompt, instructing the model to resolve the dilemma strictly on the basis of this policy. In both cases, the model is asked to choose from three options: \(y=\) _"he/she should."_, \(\neg y=\) _"he/she shouldn't."_ and \(\phi=\) _"can't decide."_ (see Appendix B for details).
LLMs often exhibit a bias towards the ordering of the options while choosing one [23]. To mitigate this, we create 6 versions of each \(x\) and \(\pi\) pair, with a different permutation of \(y,\neg y\) and \(\phi\). Thus, each LLM is probed with (\(4\times 6=\)) 24 baseline prompts and (\(72\times 6=\)) 432 policy-based prompts.
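A sketch of how the 6 option-order permutations per \((x,\pi)\) pair can be generated is shown below; the option strings follow the paper, while the prompt template itself is a placeholder of our own.

```python
# Sketch of generating the 6 option-order permutations per (x, pi) pair;
# the prompt template here is an illustrative assumption.
from itertools import permutations

OPTIONS = ["he/she should.", "he/she shouldn't.", "can't decide."]

def prompts_for(dilemma: str, policy: str):
    for perm in permutations(OPTIONS):              # 3! = 6 orderings
        listed = "\n".join(f"({k+1}) {o}" for k, o in enumerate(perm))
        yield f"{dilemma}\nPolicy: {policy}\nOptions:\n{listed}"

n = sum(1 for _ in prompts_for("Monica's Dilemma ...", "loyalty over impartiality"))
print(n)  # 6 versions; 72 policies x 6 = 432 policy-based prompts
```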
For all experiments, we set the temperature to 0, top probabilities to 0.95, frequency penalty to 0, and presence penalty to 1.
### Experimental Results
Table 1 shows the baseline results for three models. GPT-3, ChatGPT, and GPT-4 were more consistent than text-davinci-002 and text-davinci-003 (not shown in the table). GPT-3 seems to always choose the affirmative response (a possible bias?) whereas GPT-4 resolves these dilemmas strongly in favor of individualism, self-expression and professional ethics over interpersonal, societal and cultural values.
In Table 2, we present the results of policy-based resolution (in %) by the models, compared to the ground-truth resolutions. GPT-4 displays near-perfect ethical reasoning ability under all policies, with an average accuracy of 93.29%, compared to 70% accuracy for our best human annotator and 80% when the majority label is considered. GPT-3, on the other hand, has close to 50% accuracy, which is also the random baseline since in almost all cases the models choose from two options - \(y\) and \(\neg y\). In fact, it seldom deviated from its baseline prediction, irrespective of the policy.
Despite being an optimized version of text-davinci-003 with additional RLHF training, ChatGPT also exhibited a notable internal bias. These findings suggest that aggressive alignment through fine-tuning and optimization might contribute to increased internal bias and rigidity towards external policies, leading to a poor ethical reasoner.
As expected, the accuracy of the models (except GPT-3) drops by around 15% (from Levels 0 and 1) at Level 2, owing to the more abstract and slightly ambiguous nature of these policies. However, we observe no significant difference in performance between Level 0 and Level 1 policies, indicating that Level 1 is, perhaps, the ideal level of abstraction for LLMs. Models usually perform better with deontological policies than virtue and
\begin{table}
\begin{tabular}{l c c c} \hline \hline & **GPT-3** & **Turbo** & **GPT-4** \\ \hline
**Heinz** & \(y\) (Perfect) & \(y\) (Perfect) & \(y\) (Perfect) \\ \hline
**Monica** & \(y\) (Weak) & \(\neg y\) (Perfect) & \(\neg y\) (Perfect) \\ \hline
**Rajesh** & \(y\) (Perfect) & \(\neg y\) (Moderate) & \(y\) (Perfect) \\ \hline
**Timmy** & \(y\) (Perfect) & \(\neg y\) (Moderate) & \(\neg y\) (Moderate) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of baseline experiments. The majority (among 6 prompts) resolution is reported with consistency in parentheses (Perfect – 6 of 6, Moderate – 5 or 4 of 6, Weak – 3 of 6).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **GPT-3** & **T-DV2** & **T-DV3** & **Turbo** & **GPT-4** \\ \hline \multicolumn{6}{c}{**Virtue**} \\ \hline
**L0** & 50.00 & 79.17 & 87.50 & 66.67 & 87.50 \\
**L1** & 54.17 & 85.42 & 85.41 & 66.67 & 87.50 \\
**L2** & 52.08 & 68.75 & 79.17 & 54.17 & 81.25 \\
**Avg** & 52.08 & 77.78 & 84.03 & 62.50 & 85.41 \\ \hline \multicolumn{6}{c}{**Consequentialist**} \\ \hline
**L0** & 52.08 & 87.50 & 93.75 & 56.25 & 100 \\
**L1** & 52.08 & 85.40 & 85.41 & 66.67 & 100 \\
**L2** & 54.17 & 43.75 & 60.42 & 54.17 & 83.33 \\
**Avg** & 52.78 & 72.22 & 79.86 & 59.03 & 94.44 \\ \hline \multicolumn{6}{c}{**Deontological**} \\ \hline
**L0** & 54.17 & 87.50 & 87.50 & 81.25 & 100 \\
**L1** & 56.25 & 87.50 & 83.33 & 85.41 & 100 \\
**L2** & 54.17 & 77.08 & 85.41 & 81.25 & 100 \\
**Avg** & 54.86 & 84.03 & 85.41 & 82.64 & 100 \\ \hline
**O Avg** & **53.24** & **78.01** & **83.10** & **68.05** & **93.29** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy (%) (wrt ground truth) of resolution for policies of different types and levels of abstraction. text-davinci-002, text-davinci-003 and ChatGPT are shortened as T-DV2, T-DV3 and Turbo respectively. O. Avg is the overall average accuracy.
consequentialist statements. Nevertheless, as shown in Table 3, the trends vary by dilemma, which implies that different situations might demand different types of ethical policies, justifying the need for theory-neutrality and hybrid policy statements.
## 5 Discussion and Conclusion
Our work makes a case for 'in-context' ethical policies for LLM-based applications, and the experiment shows that indeed, models such as GPT-4 are excellent ethical reasoners. However, there are still problems with these models as well as gaps in our experiments that we would like to summarize here.
**Moral Bias in LLMs**: Figure 2 shows a heatmap of bias across models, defined as the fraction of times a model does not change its baseline stance despite the policy dictating otherwise. Besides GPT-3 having high and GPT-4 substantially lower bias, we see all models have a high bias for Rajesh's dilemma, the only one that pits community values against individualism and self-expression. In fact, for a level 0 policy statement: "_Rajesh wants to show compassion for the cultural beliefs of his neighbors, over justice_", GPT-4 maintains that Rajesh should accept the offer because "_... Rajesh can maintain his non-vegetarian diet while also respecting the cultural beliefs of his neighbors._", which is clearly against the values stated in the dilemma. This highlights an important gap in cultural understanding of the current models.
The baseline results and bias patterns for these 4 dilemmas clearly show that these LLMs strongly prefer individualism, self-expression and other secular democratic values over community and tradition-based values. Thus, as shown in Figure 3, the models represent Western and English-speaking value systems (box on the map), which hampers ethically consistent outputs for policies that support values of the Global South or Islamic cultures.
**Future Work.** Unlike the pairwise comparison based single policies used here, in practical settings, there will be multiple policies with simultaneous partial orderings of rules. Representation of complex policies as well as LLMs' capability to reason with those require further investigation. In future, we would also like to expand the dataset of dilemmas covering more diverse cultures and topics, and the evaluation to more models such as LLaMa (Touvron et al., 2023), Alpaca (Taori et al., 2023), and Vicuna (Chiang et al., 2023).
How to infuse and ensure sound ethical reasoning capabilities in LLMs, encompassing diverse moral principles and cultural values across languages, is yet another important direction for future research. Hammerl et al. (2022) show that current deep learning language models capture moral norms, but the effect on language is unclear. Crosslingual transfer of ethical reasoning abilities is yet another area of investigation.
Additionally, there are questions of regulation and accountability; for instance, while application developers are responsible for providing an ethical policy to an LLM, who is to be held responsible if the LLM fails to adhere to such policies? Such societal questions need to be answered in order to
Figure 3: A representation of current LMs with the world-cultural map (Inglehart and Welzel, 2010)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **Heinz** & **Monica** & **Rajesh** & **Timmy** \\ \hline
**Virtue** & 76.11 & 88.33 & 42.22 & 82.78 \\
**Conseq.** & 76.67 & 71.11 & 67.22 & 71.66 \\
**Deontology** & 85.56 & 88.33 & 69.99 & 81.67 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Accuracy averaged over policy levels and models for dilemmas and ethical formalism.
Figure 2: Heatmap of Bias of the Models across different dilemmas
ensure a broader goal of ethical soundness.
### Limitations
One main limitation of our framework is that only the latest models (such as the GPT-3.5 series and GPT-4 series models) exhibit the capacity for ethical reasoning, and are suitable for the 'in context' ethical policy approach. Nevertheless, we expect that future language models will further build on this capacity.
Another limitation of this work is that, other than the Heinz dilemma, all the dilemmas as well as the moral policies and ideal resolutions were constructed by the authors, who belong to an ethnically homogeneous group. Naturally, this could be biased and lack a wider representation. Nonetheless, the dataset is extensible and we look forward to working with people from diverse backgrounds to build on this dataset. The annotators also come from a particular geographical region - Asia - and their cultural values might induce some bias in their annotations (though these annotations were not used as ground truth).
The defined levels for each policy have been tailored to this particular probing experiment, and they may not align with the complexity and scale of policies required for real-life systems. Finally, our framework only focuses on the branch of normative ethics, but we believe that the framework can be extended to other forms of ethical statements as well.
### Impact statement
We maintain a position that ethical value-alignment of AI systems should happen at the application level, and not directly on the model, and LLMs in particular. However, we understand that taking this position to an extreme could lead to moral consequences, such as the propagation of harms when presented with a completely 'unhinged' or raw, 'unfiltered' model. In light of this, we are open to aligning language models to follow a small set of broad ethical values which is collectively accepted by the community. However, in today's AI-powered world, we believe that the harm involving the underrepresentation of certain ethical values can prove much more dangerous to society in the long run. Hence, we still maintain that most ethical values should not be injected into the model, and consequently, LLMs should not take a moral stance unless completely necessary. The knowledge of diverse ethical principles and their applicability should, however, be available to the models.
### Acknowledgements
We would like to thank the following people for their help with the annotations for the dilemmas: Adharsh Kamath (Microsoft Research India), Qiang Liu (Microsoft Corporation), Riddhi Khandelwal (DL DAV Model School, Pitam Pura, Delhi), Dr. Sandipan Dandapat (Microsoft Corporation) and Yash Agarwal (BITS Pilani, Goa Campus).
|
2301.11372 | Improving circumbinary planet detections by fitting their binary's
apsidal precession | Apsidal precession in stellar binaries is the main non-Keplerian dynamical
effect impacting the radial-velocities of a binary star system. Its presence
can notably hide the presence of orbiting circumbinary planets because many
fitting algorithms assume perfectly Keplerian motion. To first order, apsidal
precession ($\dot{\omega}$) can be accounted for by adding a linear term to the
usual Keplerian model. We include apsidal precession in the kima package, an
orbital fitter designed to detect and characterise planets from radial velocity
data. In this paper, we detail this and other additions to kima that improve
fitting for stellar binaries and circumbinary planets including corrections
from general relativity. We then demonstrate that fitting for $\dot{\omega}$
can improve the detection sensitivity to circumbinary exoplanets by up to an
order of magnitude in some circumstances, particularly in the case of
multi-planetary systems. In addition, we apply the algorithm to several real
systems, producing a new measurement of apsidal precession in KOI-126 (a tight
triple system), and a detection of $\dot{\omega}$ in the Kepler-16 circumbinary
system. Although apsidal precession is detected for Kepler-16, it does not have
a large effect on the detection limit or the planetary parameters. We also
derive an expression for the precession an outer planet would induce on the
inner binary and compare the value this predicts with the one we detect. | Thomas A. Baycroft, Amaury H. M. J. Triaud, João Faria, Alexandre C. M. Correia, Matthew R. Standing | 2023-01-26T19:35:48Z | http://arxiv.org/abs/2301.11372v2 | # Improving circumbinary planet detections by fitting their binary's apsidal precession
###### Abstract
Apsidal precession in stellar binaries is the main non-Keplerian dynamical effect impacting the radial-velocities of a binary star system. Its presence can notably hide the presence of orbiting circumbinary planets because many fitting algorithms assume perfectly Keplerian motion. To first order, apsidal precession (\(\dot{\omega}\)) can be accounted for by adding a linear term to the usual Keplerian model. We include apsidal precession in the kima package, an orbital fitter designed to detect and characterise planets from radial velocity data. In this paper, we detail this and other additions to kima that improve fitting for stellar binaries and circumbinary planets including corrections from general relativity. We then demonstrate that fitting for \(\dot{\omega}\) can improve the detection sensitivity to circumbinary exoplanets by up to an order of magnitude in some circumstances, particularly in the case of multi-planetary systems. In addition, we apply the algorithm to several real systems, producing a new measurement of apsidal precession in KOI-126 (a tight triple system), and a detection of \(\dot{\omega}\) in the Kepler-16 circumbinary system. Although apsidal precession is detected for Kepler-16, it does not have a large effect on the detection limit or the planetary parameters. We also derive an expression for the precession an outer planet would induce on the inner binary and compare the value this predicts with the one we detect.
keywords: binaries: general - planets and satellites: dynamical evolution and stability - techniques: radial velocities - software: data analysis
## 1 Introduction
Exoplanets exhibit a range of configurations much vaster than is present within the solar system. Nearly three decades of discoveries have revealed that most known exoplanets are not analogous to any solar system planets (e.g. Winn and Fabrycky, 2015). This applies to individual planets being different, such as Hot Jupiters (e.g. Dawson and Johnson, 2018) or planets with extreme eccentricities (e.g. Angelo et al., 2022), but this can also apply to entire planetary systems having more exotic configurations, such as TRAPPIST-1, a multi-planetary resonant chain orbiting a late M-dwarf (Gillon et al., 2016, 2017). One such type of exotic planetary system is the circumbinary exoplanet, which orbits both stars of a tight stellar binary (Schneider, 1994; Doyle et al., 2011).
To date there have been only 15 fully confirmed circumbinary planets1. All but one have transited at least one of the two stars, and were first detected from space with _Kepler_(e.g. Doyle et al., 2011; Orosz et al., 2012; Kostov et al., 2016) and with _TESS_(Kostov et al., 2020, 2021). The pace of detections is slow, since the two most common exoplanet detection techniques (transit and radial velocity) have so far both been hamstrung, each with their own issues. Circumbinary planets will generally have longer periods than planets around single stars because they need to orbit outside of an instability region produced by the binary stars' motion (Dvorak et al., 1989; Holman and Wiegert, 1999; Doolin and Blundell, 2011). Because of this extra distance, circumbinary planets are geometrically less likely to produce transits than planets orbiting single stars. However, for similarly distant planets, nodal precession makes circumbinary planets more likely to create transits (Martin and Triaud, 2015), even if transits do not happen at every planetary orbit (e.g. Schneider, 1994; Martin and Triaud, 2014).
Footnote 1: Circumbinary planets orbiting stellar remnants have been claimed using eclipse timings; however, there are doubts about their existence, and as such we do not consider them fully confirmed
For the radial velocity method, interference between the spectra of both components of the binary star makes it harder to obtain precise radial velocity measurements (e.g. Konacki et al., 2009). The latter issue can be circumvented by observing single-lined binaries (Konacki et al., 2010; Martin et al., 2019). This is the observing strategy employed by the BEBOP survey. BEBOP (Binaries Escorted By Orbiting Planets) has been collecting radial velocities on eclipsing
single-lined binaries for over four years and has demonstrated that it can detect circumbinary planets, notably by having independently detected Kepler-16b (Triaud et al., 2022). There is also the first circumbinary planet discovered in radial velocities, BEBOP-1c (Standing et al., submitted).
One significant advantage of the radial velocity method over the transit method is that radial velocities probe the full orbit, instead of just the inferior conjunction. This leads to good precision on the eccentricity \(e\) (sometimes down to \(10^{-4}\); Triaud et al., 2017) and the argument of periastron \(\omega\) of the binary orbit. An issue this raises is that variation in \(e\) and \(\omega\) will cause problems when fitting a static Keplerian orbit, a point raised in Konacki et al. (2010) and Sybilski et al. (2013). One such variation is apsidal precession of the binary: an evolution of \(\omega\) with time, denoted by \(\dot{\omega}\). This can be caused by relativistic effects, tidal effects, or (most excitingly to exoplanet hunters) planetary perturbations (Correia et al., 2013). Because the BEBOP survey has collected data over several years, the scatter caused by this precession is slowly starting to exceed the RMS scatter of the residuals in some systems (Standing et al., 2022). Accounting for this effect would improve the accuracy of the fits for the binary orbit, and in turn improve our ability to detect planets and the precision on their physical and orbital parameters. As explored in Standing et al. (2022), in most cases the radial-velocity signal of a single planet would be detected well before a non-zero \(\dot{\omega}\) is significantly detected. In this paper we show that, in some multiplanetary configurations, low-amplitude planetary signals can be hidden by the precession induced by another, heavier planet.
In addition, measuring the apsidal precession rate adds new information to our knowledge of the system. Usually, it is not possible to measure orbital inclinations from radial velocities alone. However, the binary's precession rate due to an external perturber depends on the mutual inclination between the binary's orbital plane and the perturber's (e.g. Correia et al., 2013, 2016). In this work, we derive an equation to calculate the apsidal precession that a third body induces on an inner binary pair, which can be used to calculate the mutual inclination; this can be found in Appendix A. In the case of BEBOP, where all binaries are known to eclipse, an upper bound on the mutual inclination directly translates into an upper bound on the orbital inclination of the planet, meaning those radial velocity data can be used to obtain not just a minimum mass \(m_{\rm p}\sin i_{\rm p}\), but also a maximum mass. Finally, for close binaries, measuring the precession rate also provides information on the stars' internal structure (Claret & Gimenez, 2010).
In this paper, we first describe a binaries-specific radial velocity model implemented in the kima package in Sect. 2. This new model (which we will occasionally refer to as kima-binaries when comparing it with the old model) moves beyond fitting pure Keplerian orbits (as done in Faria et al., 2018) by including an apsidal precession parameter in the fitted model. The section details those changes and describes the inclusion of other tidal and relativistic effects that are known to affect orbital solutions. In Section 3, the new model is used on both simulated and observed data. The ability to accurately recover the apsidal precession rate is demonstrated, and we show how fitting for apsidal precession can improve a survey's sensitivity to circumbinary planets by producing Bayesian detection limits. Finally, in Section 4 we present a detection of the precession rate for Kepler-16, and conclude in Section 5.
## 2 A Binary Update to kima
In this section we present an update to kima, developing a binary-specific radial velocity model. This model accounts for various factors that are generally ignored when looking at radial velocities for a single star but recommended when seeking to detect circumbinary planet signals (Konacki et al., 2009; Sybilski et al., 2013). The new model includes tidal and relativistic effects as well as, most notably, apsidal precession of the binary's orbit. The new model is also given the capability to fit double-lined binary data.
kima is an orbital fitting algorithm which makes use of diffusive nested sampling (DNS; Brewer et al., 2011) to sample the posterior distribution of the model parameters. It allows the number of Keplerian signals being fit to vary freely, which is advantageous for Bayesian model comparison. There is a so-called "known-object" mode where separate priors can be defined for certain already-known signals while still searching freely for further signals; this mode is ideal for circumbinary systems. As will be discussed later, this method of sampling allows for an efficient way of calculating detection limits.
### Adding precession to kima
#### 2.1.1 A linear approximation
As a first-order approximation, we add a linear precession parameter, \(\dot{\omega}\), to kima. This parameter is free during a fit and its posterior is estimated. We take the usual equation for the radial velocity of a Keplerian orbit (e.g. Murray & Correia, 2010):
\[V=K(\cos(f+\omega)+e\cos(\omega))+\gamma, \tag{1}\]
with \(\omega\) now being time dependent2:
Footnote 2: We neglect terms that are order \(O(t-t_{0})^{2}\)
\[\omega(t)=\omega_{0}+\dot{\omega}(t-t_{0}). \tag{2}\]
\(K\) is the semi-amplitude of the radial velocity signal, \(f\) the true anomaly, \(e\) and \(\omega\) the eccentricity and argument of pericentre for the orbit, \(t_{0}\) is some reference time, and \(\gamma\) is the mean velocity of the system (which can be affected by the zero-point calibration of an instrument, but does not impact other parameters). In our case, we use the mean of the times of observation for \(t_{0}\).
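As an illustration, a minimal Python implementation of Eqs. (1) and (2) might look as follows; the Newton-iteration Kepler solver is a standard textbook choice and not necessarily what kima uses internally.

```python
# A minimal sketch of Eqs. (1)-(2): a Keplerian RV curve whose argument
# of pericentre precesses linearly. Parameter values below are placeholders.
import numpy as np

def true_anomaly(t, P, e, t_peri):
    M = 2 * np.pi * (t - t_peri) / P                 # mean anomaly
    E = M.copy()                                     # initial guess
    for _ in range(50):                              # Newton on Kepler's equation
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    return 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))

def rv_precessing(t, K, P_ano, e, w0, wdot, t0, t_peri, gamma):
    f = true_anomaly(t, P_ano, e, t_peri)
    w = w0 + wdot * (t - t0)                         # Eq. (2)
    return K * (np.cos(f + w) + e * np.cos(w)) + gamma   # Eq. (1)

t = np.linspace(0.0, 2000.0, 500)                    # days
wdot = np.deg2rad(300 / 3600) / 365.25               # 300 arcsec/yr in rad/day
rv = rv_precessing(t, K=23400.0, P_ano=21.08, e=0.16, w0=4.5,
                   wdot=wdot, t0=t.mean(), t_peri=0.0, gamma=0.0)
```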
#### 2.1.2 Period correction
The period of an orbit, as a single value, is in any realistic scenario not completely well defined. Various angles associated with an orbit will vary in time, such as the argument of pericentre \(\omega\) or the mean anomaly \(M\). Different combinations of these variations could all be called periods. We consider two of these, defined as in Eqs. (3) and (4). We denote them the _observational period_, \(P_{\rm obs}\), which is the time taken peak-to-peak in radial velocity, and the _anomalistic period_3, \(P_{\rm ano}\), which is the time between consecutive pericentre passages.
Footnote 3: This nomenclature is often used to refer to the time between two consecutive pericentre passages in precessing systems (e.g. Rosu et al., 2020; Borkovits et al., 2021)
\[\frac{2\pi}{P_{\rm obs}}\approx\dot{\omega}+\dot{M}, \tag{3}\]
\[\frac{2\pi}{P_{\rm ano}}\approx\dot{M}. \tag{4}\]
\(P_{\rm obs}\) is the period that is usually referred to by observational astronomers and can be precisely measured from the time between transits or eclipses. In this work we set priors on the binary period based on eclipses, so we want to use \(P_{\rm obs}\) for this. When including \(\dot{\omega}\) in radial velocity fits, we want to use \(P_{\rm ano}\) as the period parameter to avoid the expected correlation between \(P_{\rm obs}\) and \(\dot{\omega}\). Hence we need to be able to convert from one to the other. To do this we combine the two equations to get
\[\frac{2\pi}{P_{\rm obs}}=\dot{\omega}+\frac{2\pi}{P_{\rm ano}}, \tag{5}\]
and hence, neglecting terms of order \(O(\dot{\omega}P)^{2}\),
\[P_{\rm ano} =\frac{P_{\rm obs}}{\left(1-\frac{\dot{\omega}P_{\rm obs}}{2\pi} \right)}\approx P_{\rm obs}\left(1+\frac{\dot{\omega}P_{\rm obs}}{2\pi}\right), \tag{6}\] \[P_{\rm obs} =\frac{P_{\rm ano}}{\left(1+\frac{\dot{\omega}P_{\rm ano}}{2\pi} \right)}\approx P_{\rm ano}\left(1-\frac{\dot{\omega}P_{\rm ano}}{2\pi}\right). \tag{7}\]
Our model fits for \(P_{\rm obs}\) as a parameter (i.e. the period prior is for \(P_{\rm obs}\) as is the output posterior distribution), but the model internally converts this to \(P_{\rm ano}\).
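A small sketch of these first-order conversions (Eqs. (6)-(7)) is given below; units are assumed consistent (e.g. \(\dot{\omega}\) in rad/day with \(P\) in days).

```python
# Sketch of the P_obs <-> P_ano conversions of Eqs. (6)-(7),
# valid to first order in (wdot * P / 2pi).
import numpy as np

def ano_from_obs(P_obs, wdot):
    return P_obs / (1.0 - wdot * P_obs / (2 * np.pi))   # Eq. (6)

def obs_from_ano(P_ano, wdot):
    return P_ano / (1.0 + wdot * P_ano / (2 * np.pi))   # Eq. (7)

wdot = np.deg2rad(300 / 3600) / 365.25   # 300 arcsec/yr in rad/day
print(ano_from_obs(21.08, wdot) - 21.08)  # a few 10^-4 days
```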
### Other additions to the binaries model
Here we describe the other additions made to the binary model on top of the apsidal precession described above, namely, we add relativistic and tidal corrections, and give the ability to fit the radial velocities for a double-lined binary.
#### 2.2.1 Relativistic and tidal corrections
We include relativistic corrections for binary orbits, the main ones being light-travel time and transverse Doppler (LT, TD; Sybilski et al., 2013) and gravitational redshift (GR; Zucker & Alexander, 2007)4:
Footnote 4: Sybilski et al. (2013) also have an equation for the gravitational redshift, but it contains errors, hence we use the equation from Zucker & Alexander (2007)
\[\Delta V_{\rm LT} =\frac{K_{1}^{2}}{c}\sin^{2}(f+\omega)(1+e\cos f), \tag{8}\] \[\Delta V_{\rm TD} =\frac{K_{1}^{2}}{c\sin^{2}i}\left(1+e\cos f-\frac{1-e^{2}}{2} \right),\] (9) \[\Delta V_{\rm GR} =\frac{K_{1}(K_{1}+K_{2})}{c\sin^{2}i}(1+e\cos f), \tag{10}\]
where \(e\), \(f\), \(\omega\), and \(i\) are respectively the eccentricity, true anomaly, argument of pericentre, and inclination of the binary orbit relative to the plane of the sky; \(K_{1}\) and \(K_{2}\) are the semi-amplitudes of the primary and secondary, respectively, and \(c\) is the speed of light.
The tidal effect is calculated as in Arras et al. (2012), assuming a circular orbit. The equation for the tidally induced radial velocity signal is as follows:
\[v_{\rm tide}=1184\,\frac{M_{2}R_{1}^{4}}{M_{1}(M_{1}+M_{2})}P^{-3}\sin^{2}i \sin[2(f-\phi_{0})]\,\,{\rm m\,s^{-1}}, \tag{11}\]
where \(M_{1}\) and \(M_{2}\) are the masses of the primary and secondary and \(R_{1}\) is the radius of the primary (in solar units), \(P\) the orbital period (in days) (we use \(P_{\rm ano}\)), \(f\) the true anomaly, and \(\phi_{0}=\pi/2-\omega\) is the observer's reference position.
These equations are incorporated as an optional feature into the model such that, when a binary model is fit, these contributions to the radial velocities can be naturally accounted for. We do not include these effects for the general planet-search objects as their effects will be much smaller (by orders of magnitude, since these corrections scale with \(M^{2}\)) and so do not warrant the increase in computation time.
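For illustration, Eqs. (8)-(11) transcribe directly into code as below; this is a sketch of the formulas exactly as written above, not the kima implementation.

```python
# Transcription of Eqs. (8)-(11): velocities in m/s, masses and R1 in
# solar units, P in days, angles in radians.
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def dv_lt(K1, f, w, e):                              # Eq. (8)
    return K1**2 / C * np.sin(f + w)**2 * (1 + e * np.cos(f))

def dv_td(K1, f, e, i):                              # Eq. (9)
    return K1**2 / (C * np.sin(i)**2) * (1 + e * np.cos(f) - (1 - e**2) / 2)

def dv_gr(K1, K2, f, e, i):                          # Eq. (10)
    return K1 * (K1 + K2) / (C * np.sin(i)**2) * (1 + e * np.cos(f))

def v_tide(M1, M2, R1, P, f, w, i=np.pi / 2):        # Eq. (11), circular orbit
    phi0 = np.pi / 2 - w                             # observer's reference position
    return (1184.0 * M2 * R1**4 / (M1 * (M1 + M2)) * P**-3
            * np.sin(i)**2 * np.sin(2 * (f - phi0)))
```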
#### 2.2.2 Adding in a double-lined binary model
The vast majority of spectroscopic binaries are double-lined (e.g. Kovaleva et al., 2016), and although the detection of circumbinary planets in double-lined systems is problematic (as in Konacki et al., 2009, 2010), new methods to disentangle both spectral components accurately enough to detect circumbinary planets are being developed (e.g. Lalitha et al., in prep). To prepare for the time when circumbinary planets can be searched for, and detected, in double-lined binaries, we add a feature to kima to model such a configuration. As an input, the software requires files containing radial velocities for each component of the binary. The sets of data are fit simultaneously, each with an independent \(\gamma\) parameter to account for differing zero-point calibrations5. In addition, each set has its own _jitter_ term, added in quadrature to the RV uncertainties to account for any additional sources of white noise. Only one extra common parameter is fit, the mass ratio \(q\).
Footnote 5: Even though one would expect the same \(\gamma\) for components observed with the same instrument, this may not be the case (Southworth, 2013)
Any given solution consists of a binary orbit, some number of planetary orbits, and polynomial trends up to cubic order. The binary orbit is fit to each dataset, with the secondary having the semi-amplitude \(K\) scaled by \(q\) and its argument of periastron reversed, \(\omega_{2}=\omega_{1}-\pi\). The planetary orbits are then fit in the same way to each dataset, just as kima usually does.
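A sketch of this double-lined model is given below; here we assume \(q=M_{2}/M_{1}\), so that \(K_{2}=K_{1}/q\), and the true anomaly \(f\) is assumed to come from a separate Kepler solver.

```python
# Sketch of the SB2 model: one set of orbital elements shared by both
# components, the secondary's semi-amplitude scaled by the mass ratio q
# (assuming q = M2/M1, so K2 = K1/q), omega flipped by pi, and
# independent gamma offsets for the zero-point calibrations.
import numpy as np

def sb2_rv(f, K1, q, e, w1, gamma1, gamma2):
    """f: true anomaly array (e.g. from a Kepler solver)."""
    rv1 = K1 * (np.cos(f + w1) + e * np.cos(w1)) + gamma1
    w2 = w1 - np.pi                               # omega_2 = omega_1 - pi
    rv2 = (K1 / q) * (np.cos(f + w2) + e * np.cos(w2)) + gamma2
    return rv1, rv2

f = np.linspace(0, 2 * np.pi, 100)
rv1, rv2 = sb2_rv(f, K1=23_400.0, q=0.37, e=0.16, w1=4.5,
                  gamma1=0.0, gamma2=50.0)
```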
### Using the model
The additions in the new binary model can be used in various combinations. The tidal correction and relativistic correction can each be turned on or off, they will then apply to any known objects included. A prior on \(\dot{\omega}\) will need to be given for each known object as well as for general signals.
One important thing to note is that Eq. (11) assumes a circular orbit. We therefore recommend not using the tidal correction for eccentric binaries. We currently assume an inclination of \(90^{\circ}\), and therefore we only consider eclipsing systems in this paper. A future update may include the inclination as a free parameter to either attempt to constrain or at the very least marginalise over.
The use of double-lined binaries is also included in the options. This requires a dataset (or multiple) with 5 columns: date; RV of the primary; uncertainty on the primary RV; RV of the secondary; uncertainty on the secondary RV. The primary will therefore automatically be the signal placed in the second column. The mass ratio \(q\) can be larger than 1 (at which point the "secondary" is actually the more massive star), so for example in an almost equal-mass case a prior can straddle \(q=1\).
## 3 Performances of Kima-binaries
We now show tests and applications of the binaries model using data from simulations as well as from real systems. We first show that the model is able to recover consistent values of apsidal precession, and demonstrate the improvement in the fit that ensues. We illustrate this improvement by computing detection limits.
The standard way to perform detection limits is to inject a fine grid of simulated Keplerian signals (often assuming \(e=0\)) into the data where any planetary signal has been removed, and to measure which signals are recovered by the algorithm (e.g. Konacki et al., 2009, 2010; Mayor et al., 2011; Bonfils et al., 2013; Rosenthal et al., 2021). Here, instead, we use the posterior distribution of the undetected Keplerian signals to measure the amplitudes which can still be present in the data, as described in Standing et al. (2022).
Briefly, if the analysis indicates there are no planets in a system (circumstellar or circumbinary), we then apply a strict prior on the number of planets by fixing \(N_{\rm p}=1\). One Keplerian signal is assumed to be present and the algorithm is thus forced to return all solutions that are compatible with the data, but not formally detected. We then analyse the posterior samples and compute a limit on \(K\) as a function of \(P\) that envelops the lower 99% of the samples. Practically speaking, the limit is produced by creating log-linear bins along the \(P\) axis. It is best to ensure there are at least 1000 samples in each bin. Should a system have a formally detected planet (Bayes factor exceeding 150), that planet is subtracted from the data (the maximum-likelihood parameters are used for this), and the detection limit is then computed as above using the residual radial velocities.
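A sketch of this detection-limit computation from posterior samples follows; the bin count is our own illustrative choice, while the 99% envelope and the 1000-sample guidance follow the description above.

```python
# Sketch of the detection-limit envelope: bin the forced N_p = 1
# posterior samples log-linearly in period and take the K value below
# which 99% of the samples in each bin lie.
import numpy as np

def detection_limit(P, K, nbins=30, q=0.99, min_samples=1000):
    """P, K: arrays of posterior samples (period, semi-amplitude)."""
    edges = np.logspace(np.log10(P.min()), np.log10(P.max()), nbins + 1)
    centres, limits = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (P >= lo) & (P < hi)
        if mask.sum() >= min_samples:        # keep bins well populated
            centres.append(np.sqrt(lo * hi))
            limits.append(np.quantile(K[mask], q))
    return np.array(centres), np.array(limits)
```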
The advantage of using such a method over traditional methods is the ability to sample over all orbital parameters as finely as the algorithm allows (indeed also, all \(\gamma\) variables and _jitters_, as well as \(e\), \(\omega\), \(\dot{\omega}\), and \(\phi_{0}\) which are sometimes avoided by the traditional methods). While a traditional insertion/recovery asks the question _can these exact signals be recovered?_ our method instead asks _what is compatible with the data?_ A planet below the detection threshold is consistent with the data, and thus, is not formally detected, whereas a planet above the line is inconsistent with the data and would therefore have been detected had it been there.
### Analytic equation for precession
In Appendix A we derive an analytic equation for the expected precession rate due to an outer perturber (Eq. (A11)), and due to rotational and relativistic effects (Eq. (A17)). This is done in a similar way to Correia et al. (2013), but where that calculation was done in the invariant plane, we do ours in the sky plane, which is directly applicable to observational results.
The precession due to a perturber (Eq. (A11)) is derived under the assumption of both an eclipsing and a transiting system, such as Kepler-16. If applying this to a system that does not conform to these assumptions, then Eq. (A5) should be used.
### Testing kima-binaries with simulated data
We begin by testing our ability to recover the apsidal precession using simulated data, showing that we can recover a good measurement of the apsidal precession rate \(\dot{\omega}\), and that including it can greatly improve the fit.
We perform two simulations, both with a primary star of mass \(M=1\) M\({}_{\odot}\), a secondary with \(M=0.37\) M\({}_{\odot}\), \(P=21.08\) days, and \(e=0.16\) and a (roughly Jupiter mass) planet with \(M=0.001\)\(M_{\odot}\), \(P=134.5\) days, and \(e=0.01\). The first simulation, SIM1, has just these three bodies, whereas the second simulation, SIM2, has an additional planet with \(M_{\rm pl}=0.00015\) M\({}_{\odot}\), \(P=911.2\) days, and \(e=0\), corresponding to about 3 times the mass of Neptune. We chose these parameters to emulate a typical circumbinary planet: the binary is similar to Kepler-16 in mass-ratio and eccentricity, with a shorter period to increase the amount of precession that will have happened across the time that we "observe". Planet 1 was placed between the 6:1 and 7:1 mean-motion resonances with the binary and planet 2 at a similar period ratio again. Three different masses of planet 2 were tried and we report here the one that had the right mass to be missed without using precession but detected when including it. The decimal places for the periods were chosen randomly to try and avoid integer numbers of days and potential accidental resonances.
Simulations are made using the rebound package (Rein & Liu, 2012); the integrations used the IAS15 integrator (Rein & Spiegel, 2015). Radial velocity simulations are taken as the velocity along the line of sight within the simulation. The simulation uses the same observational cadence as for Kepler-16 (Triaud et al., 2022), thus producing a simulated dataset including all Newtonian perturbations. Both simulated datasets are given Gaussian white noise.
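A minimal sketch of how such a dataset can be generated with rebound is shown below; the observing times and noise level here are placeholders rather than the Kepler-16 cadence actually used.

```python
# Sketch of an RV simulation for SIM1 with rebound; cadence and noise
# amplitude are illustrative assumptions.
import numpy as np
import rebound

sim = rebound.Simulation()
sim.units = ('day', 'AU', 'Msun')
sim.integrator = "ias15"
sim.add(m=1.0)                                   # primary
sim.add(m=0.37, P=21.08, e=0.16)                 # secondary (the binary)
sim.add(m=0.001, P=134.5, e=0.01)                # planet 1 (SIM1)
sim.move_to_com()

AU_PER_DAY_IN_MS = 1.495978707e11 / 86400.0      # AU/day -> m/s
times = np.sort(np.random.uniform(0.0, 1500.0, 200))   # placeholder cadence
rv = np.empty_like(times)
for k, t in enumerate(times):
    sim.integrate(t)
    rv[k] = sim.particles[0].vx * AU_PER_DAY_IN_MS  # line of sight along x
rv += np.random.normal(0.0, 3.0, rv.size)        # Gaussian white noise (m/s)
```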
#### 3.2.1 Improving scatter and derived parameters
First, we consider SIM1. From the values of \(\omega\) at each point in the rebound simulation, we obtain \(\dot{\omega}=308.4\pm 3.7\,\mathrm{arcsec}\,\mathrm{yr}^{-1}\) for the binary. Using Eq. (A11) we get a theoretical value of \(\dot{\omega}=301.1^{+0.8}_{-1.5}\,\mathrm{arcsec}\,\mathrm{yr}^{-1}\). A fit using the binaries model results in a posterior estimate of \(\dot{\omega}=304\pm 16\,\mathrm{arcsec}\,\mathrm{yr}^{-1}\), which is in agreement with both the simulated and theoretical values. This run is done with the apsidal precession of the binary fit for, but without the relativistic or tidal corrections. The uncertainty in the rebound value comes from "sampling" \(\omega\) at various times, calculating \(\dot{\omega}\) from these and then taking its mean and variance. The uncertainty on the theoretical value is propagated in a Monte Carlo way from the posterior uncertainty on the binary and planetary parameters. The kima-binaries value's uncertainty is defined from the 16th to the 84th percentile of the posterior distribution.
\begin{table}
\begin{tabular}{l c c c c} \hline & rebound & kima & kima-binaries & units \\ \hline \(P_{\rm B}\) & 21.08875308(8/2) & 21.0810(107452) & 21.081(10395(21)) & days \\ \(M_{\rm B}\) & 0.37 & 0.3700886(15/3) & 0.370081(57) & M\({}_{\odot}\) \\ \(K_{\rm B}\) & 23415.30(4/5) & 23419.67(80) & 23420.11(31) & ms\({}^{-1}\) \\ \(e_{\rm B}\) & 0.1606(8/53) & 0.1601(4/3) & 0.16096(16/5) & \\ \(\omega_{\rm B}\) & 4.50018(2/52) & 4.5006(21) & 4.50016(20) & rad \\ \(\dot{\omega}_{\rm B}\) & 6.27400(23628) & 6.28526(24) & 6.28528(26) & rad \\ \(\dot{\omega}_{\rm B}\) & 308.4(3.7) & 0 & 304.01(5.8) & arcsec yr\({}^{-1}\) \\ \(P_{\rm pl}\) & 135.084(1.147) & 131.373(120) & 131.392(57) & days \\ \(M_{\rm pl}\) & 1.076 & 1.076(27) & 1.066(14) & M\({}_{\odot}\) \\ \(K_{\rm pl}\) & 33.62(09) & 34.81(90) & 33.82(43) & ms\({}^{-1}\) \\ \(e_{\rm pl}\) & 0.0161(7/9) & \textless{}0.050 & \textless{}0.033 & \\ RMS & & 6.61 & 3.15 & ms\({}^{-1}\) \\ Jitter & & 6.27 & 2.04 & ms\({}^{-1}\) \\ \(\chi^{2}_{r}\) & & 12.35 & 3.15 & \\ \hline \end{tabular}
\end{table}
Table 1: For SIM1 we show the parameters from rebound (Rein & Liu, 2012) (note these are Keplerian parameters taken from a Newtonian simulation) alongside the fitted parameters both with precession (kima-binaries) and without (kima). For the planet we take the upper bound on the eccentricity and omit the angle parameters as these are not resolved (close to circular orbit). The 1\(\sigma\) uncertainties are shown as the last few significant digits, all of which are on the same scale as the smallest uncertainty to allow for easy comparison. Goodness-of-fit parameters are also shown to compare the two fits.
Table 1 lists the parameters of the binary and planet taken from rebound and fit with precession (kima-binaries) and without precession (kima). The \(1\sigma\) uncertainty in each measurement is shown in brackets as the last few significant figures; to make comparison easier, these are all scaled so that each value in a row is shown to the same number of decimal places. The parameters for rebound are read out as the osculating parameters at the times of each datapoint, and then the mean and standard deviation of the values are calculated.
We note that in many cases the fitted values are inconsistent with the rebound values, and give a word of warning about using the Keplerian parameters from an n-body simulation such as this. When taking the Keplerian orbital parameters of a body from a rebound simulation at a given time, these are taken from the osculating Keplerian orbit, which may not be representative of the average orbit. Consider the planet's orbital period: rebound effectively gives us an anomalistic period as defined in Sect. 2.1.2. Because of the perturbed motion this is not the time it will take to actually complete one orbit, and we get an observed period a few days shorter. This effect cannot be fit as an apsidal precession of the planet as the orbit is not detectably eccentric.
So a Keplerian (or quasi-Keplerian) fit does not reproduce the osculating Keplerian parameters from an n-body simulation, but it does (to a reasonable accuracy) reproduce the mass. We see in Table 1 that the mass of the binary is accurate to 3 decimal places (which is more than the precision we usually get on the mass of the primary star anyway) and the mass of the planet is accurately characterised (more so when apsidal precession is taken into account).
We can also compare the precision of the two fits, in the sense of how tight a posterior distribution we get for each parameter. In most cases we can see an improvement in precision by about a factor of two.
The reduction in residual scatter can be seen in Figure 1 where the Root-Mean-Square improves from RMS = \(6.61\) m s\({}^{-1}\) to \(3.15\) m s\({}^{-1}\). When apsidal precession is not accounted for, if we move further from the reference time \(T_{0}\) (near the centre of the figure), the fit worsens, giving a characteristic _bow-tie_ shape, but if precession is accounted for, the spread in the residuals is reduced.
The detection limits both with and without precession can be seen in Figure 2. The improvement in detection limit is slightly larger at long periods, where the radial-velocity signature of apsidal precession can be confused with a long-term trend. This improvement means the data would allow the detection of another planet signal within this system, almost an order of magnitude lower in mass at orbital periods between \(1,000\) and \(2,000\) days. Whilst this sounds impressive, this simulation only had a very small amount of extra white noise added, to maximise the effect of apsidal precession in order to reveal its importance. In other systems we may expect more marginal improvements (see Sect. 4).
#### 3.2.2 Detecting a hidden planet
Here, we consider SIM2 and test how many planets are formally detected. To register as an \(n\)-planet detection, the Bayes Factor for the \(n\)-planet solution compared to the \((n-1)\)-planet solution needs to be greater than \(150\). The sampling in kima is trans-dimensional, meaning that solutions with different numbers of planets are all searched simultaneously. Therefore, the Bayes Factor \(BF_{i+1,i}\) comparing the model with \(i+1\) planets to that with \(i\) planets is the ratio of the number of posterior samples with \(i+1\) planets, \(N_{i+1}\), to the number with \(i\) planets, \(N_{i}\):
\[BF_{i+1,i}=\frac{N_{i+1}}{N_{i}} \tag{12}\]
Should \(N_{i}=0\), the Bayes Factor in Eq. (12) becomes infinite. This can happen if the BF is larger than the number of effective posterior samples, and would be solved eventually had the sampling continued. In this case we therefore choose to report \(BF_{i+1,i}=N_{i+1}\), effectively setting \(N_{i}=1\). More information can be found in Faria et al. (2018); Standing et al. (2022); Triaud et al. (2022).
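A sketch of this Bayes-factor estimate from trans-dimensional posterior samples, including the \(N_{i}=0\) fallback described above, is given below; the array layout is an assumption.

```python
# Sketch of Eq. (12) from posterior samples; when N_i = 0 we report
# BF = N_{i+1}, effectively setting N_i = 1 as described above.
import numpy as np

def bayes_factor(np_samples, i):
    """np_samples: array of the number of planets in each posterior sample."""
    n_hi = int(np.sum(np_samples == i + 1))
    n_lo = int(np.sum(np_samples == i))
    return n_hi / n_lo if n_lo > 0 else float(n_hi)   # BF_{i+1,i}

samples = np.array([1] * 800 + [2] * 150 + [0] * 2)
print(bayes_factor(samples, 0), bayes_factor(samples, 1))  # 400.0 0.1875
```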
Running kima on the data from SIM2 without including precession, the outer planet is not formally detected but visible within the posterior as an over-density. With \(BF_{2,1}=12.5<150\), it would be classified as a candidate planet. We can attribute this non-detection to the apsidal precession since when we do fit for the precession, the planet is formally detected with \(BF_{2,1}>6086>150\). The Bayes Factors for each attempted fit are found in Table 2.
Figure 1: A comparison of radial velocity residuals, after removing the binary's orbital solution, for SIM1, where apsidal precession is included in the kima-binaries model (in red) and not included in kima (in blue).
Figure 2: Detection limits for additional Keplerian signals using SIM1. The hexbins represent the density of posterior samples in each run. The purple dashed line and solid blue line are the 99% detection limits. The black dashed lines show where bodies of various masses would sit on this plot.
### Testing kima-binaries on data from the KOI-126 system
In this section we test the ability to recover a value for the precession rate consistent with a previous solution from the literature, as well as show the improvement in sensitivity to planets that accounting for precession could bring in a highly precessing system.
KOI-126 is a compact triply-eclipsing hierarchical triple star system. It contains a roughly circular, low-mass, tight binary which is in an eccentric orbit about a more massive tertiary star. The system was first reported in Carter et al. (2011). There are radial velocity data as well as photometry during multiple eclipses. As such, the apsidal precession rate of the tertiary orbit is well measured, with a period of \(21\,850\,\mathrm{days}\) (Yenawine et al., 2022), which corresponds to \(\dot{\omega}=21\,650\,\mathrm{arcsec}\,\mathrm{yr}^{-1}\). We used 29 radial velocity measurements from Yenawine et al. (2022). Our fit with the new binaries model recovers a consistent value for the apsidal precession, with \(\dot{\omega}=21\,800\pm 600\,\mathrm{arcsec}\,\mathrm{yr}^{-1}\). Equivalently, this is \(\dot{\omega}=0.56\pm 0.009\) degrees per cycle.
We run the analysis with apsidal precession fit for, as described in Sect. 2, but do not include either the general relativity or the tidal corrections. A more complete analysis would be to use a Newtonian model, rather than Keplerian with added precession, as in Yenawine et al. (2022); however, including the precession improves the \(\chi^{2}_{r}\) from 640.9 to 1.5. This amply justifies adding an extra parameter, \(\dot{\omega}\), to the fit and suggests that a full dynamical model is not necessary at the current precision of the data, hence illustrating the importance of \(\dot{\omega}\): even in this dynamically complex triple system, the linear apsidal precession removes the majority of the excess noise. The detection limits are shown in Figure 3 for reference. Here too, including precession improves the detection limit by an order of magnitude in semi-amplitude and removes much of the long-period noise where, as with the simulated data, the precession may be mildly confused with a long-term trend.
We do not use the analytic equation derived in Appendix A as KOI-126 is in a different orbital configuration, where the precessing orbit is the outer (rather than the inner) one.
As an interesting note, the orbital periods shown in Figure 3 would be for putative _circumtertiary planets_, of which none are known in Nature. We can nonetheless state there are no stellar or brown-dwarf-mass companions within \(\sim 10^{4}\) days of the tertiary.
We show the parameters from our fit of KOI-126 in Table B1.
## 4 Application of kima-binaries to Kepler-16
The announcement of Kepler-16b marked the first unambiguous detection of a circumbinary planet (Doyle et al., 2011), made thanks to the _Kepler_ spacecraft (Borucki et al., 2010). This system is unique in also being the only circumbinary planet independently detected with radial velocities (Triaud et al., 2022). We re-analyse these radial-velocity data with our new model, and successfully detect an apsidal precession rate of \(\dot{\omega}_{1}=283^{+87}_{-85}\,\mathrm{arcsec}\,\mathrm{yr}^{-1}\), which is \(3.3\sigma\) from 0. Using Eqs. (A11, A17) we obtain a value of \(\dot{\omega}_{1}=92.4^{+14.3}_{-13.3}\,\mathrm{arcsec}\,\mathrm{yr}^{-1}\), which is \(2.2~{}\sigma\) away from the observed value. This theoretical value takes into account the planetary-induced and relativistic precessions (we do not include the rotational and tidal contributions as they would be very small in comparison, and parameters like the Love numbers are not very well known).
The theoretical \(\dot{\omega}\) is lower than the value that we measure; more data are required to determine how significant this discrepancy is. The difference is likely too large to be accounted for entirely by mutual inclination. An alternative (or additional) explanation could be further undetected planets contributing to the precession rate.
\begin{table}
\begin{tabular}{l r r} \hline \hline & kima & kima-binaries \\ \hline \(BF_{1,0}\) & 538 & 0 \\ \(BF_{2,1}\) & 12.5 & 6086 \\ \(BF_{3,2}\) & 0.9 & 0.8 \\ \hline \end{tabular}
\end{table}
Table 2: The Bayes Factors for SIM2 (containing two circumbinary planets) comparing models with increasing numbers of planets both for the standard version of kima and the new kima-binaries version, which includes apsidal precession.
Figure 4: Kepler-16. Red: histogram of the density of posterior samples for the fitted value of \(\dot{\omega}\), with the median and \(1\sigma\) values shown in grey. Blue: histogram of the density of posterior samples for the theoretically calculated value of \(\dot{\omega}\), with the median value shown in grey. (Note that \(\dot{\omega}\) is not cut at zero; there are in fact posterior samples below zero.)
Figure 3: Detection limits for additional signals around KOI-126. The hexbins show the density of posterior samples, with red being those when precession is included in the fit and blue when it is not. The dashed purple line and solid blue line show the 99% confidence detection limits. The dashed lines show where bodies of various masses, and the deuterium- and hydrogen-fusing limits, would sit on this plot.
We explore the difference between \(P_{\rm obs}\) and \(P_{\rm ano}\). These values for Kepler-16 are presented in Table 3 alongside values published in Triaud et al. (2022). The value we get for \(P_{\rm obs}\) is in statistical agreement with the previous publication; however, \(P_{\rm ano}\) is 3.3\(\sigma\) above this. \(P_{\rm ano}\) is the time between consecutive pericentre passages, the period that should in theory be used to compute physical parameters such as the semimajor axis and planet mass in a Keplerian context. In practice the difference is negligible, due to there being a small difference between \(P_{\rm ano}\) and \(P_{\rm obs}\) as well as the uncertainty in the mass of the primary often being dominant. For Kepler-16 the difference in mass using the two periods is \(\approx 2\,\times\,10^{-6}\,\rm M_{\odot}\). It would take a case with a very precise mass and a very high precession rate for this difference to be significant; even for KOI-126 (B+C), the difference is \(\approx 2\,\times\,10^{-4}\,\rm M_{\odot}\), which is about a fifth of the currently measured uncertainty.
In addition, we produce a detection limit for Kepler-16, comparing the results with and without including \(\dot{\omega}\). The detection limits, plotted in the same way as the previous ones, are shown in Figure 5. In this case, as we only get a marginal detection of apsidal precession, there is no real improvement in the detection limits.
We show the parameters from our fit of Kepler-16 in Table B2.
## 5 Conclusions
We have shown that fitting for the apsidal precession of a binary's orbital parameters improves the radial-velocity sensitivity to circumbinary planets. Our conclusions are in line with previous work such as Konacki et al. (2010) and Sybilski et al. (2013), but extend theirs to a fully Bayesian framework. The improvement in the detection limits can reach an order of magnitude in some configurations, but in most cases, as for Kepler-16, the improvement is expected to be marginal. Accounting for precession can also improve the precision of the parameters recovered from a fit, and has the potential to uncover planets that were hidden by precession (or to require less data to detect the same planet).
We have derived a formula for the theoretical precession induced in a binary (Eq. (A11)) and have discussed how a measured apsidal precession rate could be used, via this formula, to constrain the mutual inclination between the planetary and binary orbital planes.
The theoretical and observed values of the precession of Kepler-16 are in slight tension; this may be caused by undetected planets or by some other, as yet unknown, mechanism.
The longer the baseline of radial-velocity observations, the more important it is to account for apsidal precession. As the field progresses and the amount of data from surveys like BEBOP increases, fitting the apsidal precession of the binaries will become vital. To prepare for that time, we have presented an updated version of the kima package which is better adapted to fitting radial velocities of single- and double-lined binaries. The code is publicly available on GitHub.
## Acknowledgements
The authors thank the anonymous reviewer as well as Darin Ragozzine for their useful comments. This research is in part funded by the European Union's Horizon 2020 research and innovation programme (grant agreement n\({}^{\circ}\) 803193/BEBOP). A.C. acknowledges support from CFisUC (UIDB/04564/2020 and UIDP/04564/2020), GRAVITY (PTDC/FIS-AST/7002/2020), and ENGAGE SKA (POCI-01-0145-FEDER-022217), funded by COMPETE 2020 and FCT, Portugal. MRS acknowledges support from the UK Science and Technology Facilities Council (ST/T000295/1).
## Data availability
The radial velocity data for Kepler-16 can be found in Triaud et al. (2022). The radial velocity data for KOI-126 can be found in Yenawine et al. (2022).
The code used for the analysis in this paper can be obtained at [https://github.com/j-faria/kima](https://github.com/j-faria/kima)
|
2308.00984 | On the Metric Temporal Logic for Continuous Stochastic Processes | In this paper, we prove the measurability of the event that a general continuous-time stochastic process satisfies a continuous-time Metric Temporal Logic (MTL) formula. Continuous-time MTL can define temporal constraints for physical systems in a natural way, and several studies have therefore dealt with the probability of continuous MTL semantics for stochastic processes. However, proving measurability for such events is by no means an obvious task, even though it is essential. The difficulty comes from the semantics of the "until" operator, which is defined by a logical sum of uncountably many propositions. Given the difficulty involved in proving the measurability of such an event using classical measure-theoretic methods, we employ a theorem from stochastic analysis. This theorem is utilized to prove the measurability of hitting times for stochastic processes, and it stands as a profound result within the theory of capacity. Next, we provide an example that illustrates the failure of probability approximation when discretizing the continuous semantics of MTL formulas with respect to time. Additionally, we prove that the probability of the discretized semantics converges to that of the continuous semantics when we impose restrictions on diamond operators to prevent nesting. | Mitsumasa Ikeda, Yoriyuki Yamagata, Takayuki Kihara | 2023-08-02T07:34:40Z | http://arxiv.org/abs/2308.00984v6 | # On the metric temporal logic for continuous stochastic processes.
###### Abstract

In this paper, we prove the measurability of the event that a general continuous-time stochastic process satisfies a continuous-time Metric Temporal Logic (MTL) formula. Continuous-time MTL can define temporal constraints for physical systems in a natural way, and several studies have therefore dealt with the probability of continuous MTL semantics for stochastic processes. However, proving measurability for such events is by no means an obvious task, even though it is essential. The difficulty comes from the semantics of the "until" operator, which is defined by a logical sum of uncountably many propositions. Given the difficulty involved in proving the measurability of such an event using classical measure-theoretic methods, we employ a theorem from stochastic analysis. This theorem is utilized to prove the measurability of hitting times for stochastic processes, and it stands as a profound result within the theory of capacity. Next, we provide an example that illustrates the failure of probability approximation when discretizing the continuous semantics of MTL formulas with respect to time. Additionally, we prove that the probability of the discretized semantics converges to that of the continuous semantics when we impose restrictions on diamond operators to prevent nesting.
Footnote †: Received by the editors November 8, 2023.
## 1. Introduction
Stochastic processes have emerged as a valuable tool for analyzing real-time dynamics characterized by uncertainties. They consist of a family of random variables indexed by real time and find applications in diverse domains such as molecular behavior, mathematical finance, and turbulence modeling. To formally analyze the temporal properties of real-time systems, _Metric Temporal Logic (MTL)_ has been introduced as a logical framework specifying constraints that real-time systems must satisfy (see Chapter VI in [14]). The increasing demand for MTL specifications in industrial applications [13, 15, 16] has sparked interest in investigating the probability that a stochastic process satisfies the semantics of an MTL formula.
In this paper, we first prove (Section 4) that the event that a general stochastic process satisfies a continuous-time MTL formula is measurable, so that its probability is well-defined. Even then, the problem of calculating such probabilities remains a challenge. Previous studies proposed an approximation that discretizes the semantics of MTL formulas with respect to time; however, we give an example, involving multiply nested temporal operators, showing
that the probability that a stochastic process satisfies the discrete semantics of an MTL formula does not converge to the probability that the path satisfies the semantics of the same MTL formula in the continuous sense. This motivates a more comprehensive and precise discussion of such approximations, which has been generally overlooked in previous studies. On the other hand, in Section 6, we prove the convergence of the probability of the discrete semantics for general stochastic differential equations (SDEs) under some restrictions on the syntax of the formulas; the restrictions ensure that temporal operators do not nest.
In conclusion, we show that the convergence result depends on the depth of nesting of temporal operators in an MTL formula.
## 2. Related Works
Temporal reasoning has been extensively studied (for an overview, see [1]), and it has gained increasing attention due to the growing demands in various industrial applications for real-time systems.
Pnueli [12] introduced linear temporal logic (LTL) as a means to express qualitative timing properties of real-time systems using temporal operators. Koymans [13] extended this logic to include quantitative specifications by indexing the temporal operators with intervals of time, leading to the development of metric temporal logic (MTL). Unlike other extensions of LTL with timing constraints, such as timed propositional temporal logic (TPTL) [1], MTL does not allow explicit reference to clocks, making it practical for implementation. A more detailed survey of temporal logic for real-time systems can be found in [10].
In this paper, we focus on MTL for continuous-time stochastic processes with continuous state spaces. Such processes are commonly used as probabilistic models to describe various phenomena with continuous or intermittent effects caused by environmental noise. In particular, processes represented by _stochastic differential equations (SDEs)_ are widely used to model statistical dynamics, asset prices in mathematical finance [14, 15], computer networks [1], and the future position of aircraft [16], to name a few [17, 2, 3].
Considering this wide range of applications, it is natural to consider the probability with which a given stochastic system satisfies properties defined by MTL formulas. Previous studies [18, 19] have already considered the probabilities with which stochastic systems satisfy MTL properties and gave approximations based on discretization of time and state spaces.
However, in order to discuss probabilities consistently, we need to show the measurability of the events under consideration. A subtle problem arises in the definition of these probabilities because temporal operators in MTL are defined by unions of uncountably many sets, whereas measurability is in general only guaranteed for unions of countably many sets. The previous studies did not prove but simply assumed the measurability of the events in which stochastic processes satisfy MTL formulas. Further, their approximation by discretization assumed that the timed behavior of the stochastic process satisfies Non-Zenoness, meaning that the behavior does not change its value infinitely often in finite time. However, stochastic processes such as solutions of SDEs generally do not satisfy the Non-Zenoness assumption because of the inherent "roughness" of their paths, which are almost surely nowhere differentiable (see, for example, Chapter 2 in [11]).
In this paper, we prove the measurability of the events in which stochastic systems satisfy MTL formulas interpreted over the continuous-time domain, under mild assumptions, with reference to the fundamental theory of stochastic analysis, which was developed to study the approximation of probability measures by describing the structure of classes of sets [12, 13]. Our result guarantees the existence of the probability that a stochastic system satisfies an MTL formula.
Further, we give examples in which the probabilities defined by discretization do not converge to the probabilities defined in the continuous-time domain, even as the time interval used for discretization goes to \(0\). These examples show that the approximation by discretization proposed by previous studies does not work in general. Our examples involve either the until operator or a triple nesting of "always" and "possibly".
On the positive side, we show that if an MTL formula contains only "always" and "possibly" operators and these operators do not nest, the probability under time discretization converges to the probability defined in the continuous-time domain.
## 3. Preliminaries
In this section, we introduce several fundamental concepts used throughout this paper. We begin with the definitions of measurability and probability spaces; when defining an event, it is crucial to ensure that the event is measurable in order to give meaning to its probability. Once the probability space is defined, we proceed to define the product of two probability spaces. Next, we define a general stochastic process and its paths. Following that, we introduce Brownian motion and stochastic differential equations, two concepts at the core of stochastic analysis. Lastly, we introduce the syntax and semantics of MTL formulas, which are defined for every path of a stochastic process.
### Measurability and Probability
In this subsection, we introduce the basic definitions used in measure theory and probability. Readers who are familiar with these theories may skip this subsection. More details are available in [11].
**Definition 3.1** (\(\sigma\)-algebra and Measurable space).: Let \(\Omega\) be a set and \(\mathcal{F}\) be a family of subsets of \(\Omega\), i.e., \(\mathcal{F}\subset 2^{\Omega}\). \(\mathcal{F}\) is called \(\sigma\)-algebra if it satisfies the following three conditions:
1. \(\Omega\in\mathcal{F}\) and \(\emptyset\in\mathcal{F}\).
2. If \(A\in\mathcal{F}\), then \(\Omega\setminus A\in\mathcal{F}\).
3. If \(A_{i}\in\mathcal{F}\) for \(i=1,2,3,\cdots\), then \(\bigcup_{i=1}^{\infty}A_{i}\in\mathcal{F}\) and \(\bigcap_{i=1}^{\infty}A_{i}\in\mathcal{F}\)
If \(\mathcal{F}\) is a \(\sigma\)-algebra, \((\Omega,\mathcal{F})\) is called a _measurable space_. If \((\Omega,\mathcal{F})\) is a measurable space and \(A\in\mathcal{F}\), we say that \(A\) is _\(\mathcal{F}\)-measurable_ or merely _measurable_.
**Definition 3.2** (Borel space).: Let \(E\) be a topological space. The measurable space \((E,\mathcal{B}(E))\) is called the _Borel space_ on \(E\) if \(\mathcal{B}(E)\) is the smallest \(\sigma\)-algebra which contains every open set. Every set belonging to \(\mathcal{B}(E)\) is called a _Borel set_.
**Definition 3.3** (Measure space and probability space).: Let \((\Omega,\mathcal{F})\) be a measurable space. A function \(\mu:\mathcal{F}\rightarrow[0,\infty]\) is called a _measure_ on \((\Omega,\mathcal{F})\) if \(\mu\) satisfies the following two conditions:
1. \(\mu(\emptyset)=0\).
2. If \(A_{i}\in\mathcal{F}\) for \(i=1,2,3,\cdots\) and \(A_{i}\cap A_{j}=\emptyset\) for \(j\neq i\), then \[\mu(\bigcup_{i=1}^{\infty}A_{i})=\sum_{i=1}^{\infty}\mu(A_{i}).\] We call the triplet \((\Omega,\mathcal{F},\mu)\) a _measure space_.
In particular, if \(\mathbb{P}(\Omega)=1\), we refer to the measure \(\mathbb{P}\) on \((\Omega,\mathcal{F})\) as a _probability_, and call \((\Omega,\mathcal{F},\mathbb{P})\) a _probability space_.
**Definition 3.4** (Lebesgue measure).: Let \(E=[0,\infty)\) or \(\mathbb{R}^{n}\) with natural topology and \((E,\mathcal{B}(E))\) be Borel space on \(E\). It is well known that there exists a unique measure \(\mu\) on \(E\) such that
\[\mu(\prod_{i=1}^{n}\langle a_{i},b_{i}\rangle)=\prod_{i=1}^{n}(b_{i}-a_{i}), \tag{3.1}\]
for every rectangle \(\prod_{i=1}^{n}\langle a_{i},b_{i}\rangle\) on \(E\). We call such a measure _Lebesgue measure_.
**Definition 3.5** (Complete probability space).: A probability space \((\Omega,\mathcal{F},\mathbb{P})\) is said to be _complete_ if every subset \(G\) of a measurable set \(F\) such that \(\mathbb{P}(F)=0\) is also measurable.
**Definition 3.6** (Almost sure).: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, and let \(P(\omega)\) be a proposition defined on \(\omega\in\Omega\). We say that \(P(\omega)\) holds _almost surely_ if there exists a measurable set \(\tilde{\Omega}\in\mathcal{F}\) such that \(\mathbb{P}(\tilde{\Omega})=1\) and \(\tilde{\Omega}\subset\{\omega\in\Omega;P(\omega)\}\).
In the context of probability theory, the phrase "almost surely" is often denoted as "a.s.". Hence, we frequently use the notation \(P(\omega),\ \text{a.s.}\) to indicate that \(P(\omega)\) holds almost surely.
**Remark 3.7**.: Whether \(P(\omega)\) holds almost surely depends on the probability measure \(\mathbb{P}\). When we wish to emphasize the probability measure \(\mathbb{P}\), we say that "\(P(\omega)\)_holds almost surely_ \(\mathbb{P}\)" or "\(P(\omega)\)_holds a.s._ \(\mathbb{P}\)".
**Definition 3.8** (Product measurable space).: Let \((G,\mathcal{G})\) and \((H,\mathcal{H})\) be two measurable spaces. The product \(\sigma\)-algebra of \(\mathcal{G}\) and \(\mathcal{H}\), denoted \(\mathcal{G}\otimes\mathcal{H}\), is the smallest \(\sigma\)-algebra on \(G\times H\) which contains all set of the form \(A\times B\), where \(A\in\mathcal{G}\) and \(B\in\mathcal{H}\).
\((G\times H,\mathcal{G}\otimes\mathcal{H})\) is called the product measurable space of \((G,\mathcal{G})\) and \((H,\mathcal{H})\).
**Fact 3.9**.: Let \((G,\mathcal{G})\) and \((H,\mathcal{H})\) be two measurable spaces, \(x\in G\), \(y\in H\), and \(E\in\mathcal{G}\otimes\mathcal{H}\). Then the following two sets are measurable:
\[\{y\in H;(x,y)\in E\}\in\mathcal{H},\] \[\{x\in G;(x,y)\in E\}\in\mathcal{G}.\]
**Definition 3.10** (Product measure space).: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, let \(E=[0,\infty)\) or \(\mathbb{R}^{n}\) with the natural topology, and let \((E,\mathcal{B}(E),\mu)\) be the measure space with the Lebesgue measure \(\mu\) of Definition 3.4. It is well known that there is a unique measure \(\mathbb{P}\otimes\mu\) on \((\Omega\times E,\mathcal{F}\otimes\mathcal{B}(E))\) such that
\[\mathbb{P}\otimes\mu(A\times B)=\mathbb{P}(A)\times\mu(B) \tag{3.2}\]
for every \(A\in\mathcal{F}\) and \(B\in\mathcal{B}(E)\). We call such measure _the product measure_ of \(\mathbb{P}\) and \(\mu\). The resulting measure space is denoted as \((\Omega\times E,\mathcal{F}\otimes\mathcal{B}(E),\mathbb{P}\otimes\mu)\).
**Definition 3.11** (Measurable function).: Let \((G,\mathcal{G})\) and \((H,\mathcal{H})\) be two measurable spaces. A function \(X:G\to H\) is said to be \(\mathcal{G}/\mathcal{H}\)-measurable if \(X^{-1}(B)\in\mathcal{G}\) for every \(B\in\mathcal{H}\).
**Definition 3.12** (Random variable).: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and \((E,\mathcal{B}(E))\) be the Borel space on a topological space \(E\). An \(E\)-valued function \(X\) on \((\Omega,\mathcal{F},\mathbb{P})\) is called a random variable if \(X\) is \(\mathcal{F}/\mathcal{B}(E)\)-measurable.
If there is no ambiguity regarding the probability space, we refer to \(X\) as an \(E\)-valued random variable, or merely a random variable.
**Definition 3.13** (Law of a random variable).: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, \((E,\mathcal{B}(E))\) be the Borel space on a topological space \(E\), and \(X\) be an \(E\)-valued random variable. It is well-known that \(\sigma(X):=\{X^{-1}(B);B\in\mathcal{B}(E)\}\) is a sub \(\sigma\)-algebra of \(\mathcal{F}\), and thus the mapping \(B\mapsto\mathbb{P}(X^{-1}(B))\) is a probability measure on \((E,\mathcal{B}(E))\). We refer to this mapping as _the law of \(X\)_, often denoted as \(\mathbb{P}^{X}\).
**Definition 3.14** (Lebesgue integral and expected value).: Let \((\mathbb{R},\mathcal{B}(\mathbb{R}))\) be the Borel space on \(\mathbb{R}\). Suppose that \((G,\mathcal{G},\mu)\) is a measure space and \(f:G\to\mathbb{R}\) is a \(\mathcal{G}/\mathcal{B}(\mathbb{R})\)-measurable function. The _Lebesgue integral_ is defined in the following steps:
1. Let \(A,B\in\mathcal{G}\) and \(\mathds{1}\!\!1_{B}:G\to\{0,1\}\) be an indicator function defined as \[\mathds{1}\!\!1_{B}(x):=\left\{\begin{aligned} & 1,&\text{ if }x\in B\\ & 0,&\text{ if }x\notin B.\end{aligned}\right.\] (3.3) Then we define Lebesgue integral of \(\mathds{1}\!\!1_{B}\) with respect to \(\mu\) as \[\int_{A}\mathds{1}\!\!1_{B}d\mu:=\mu(B\cap A).\] (3.4)
2. We call \(f:G\to[0,\infty)\) a _simple function_ if \[f=\sum_{i=1}^{n}\alpha_{i}\mathds{1}\!\!1_{B_{i}},\] (3.5) where \(B_{i},\ i=1,2,\cdots,n\) are elements of \(\mathcal{G}\) and \(\alpha_{1},\alpha_{2},\cdots,\alpha_{n}\) are nonnegative real numbers. Let \(A\in\mathcal{G}\). Define the _Lebesgue integral of the simple function \(f\) with respect to \(\mu\)_ as \[\int_{A}fd\mu:=\sum_{i=1}^{n}\alpha_{i}\mu(B_{i}\cap A).\] (3.6)
3. Let \(A\in\mathcal{G}\) and \(f:G\to\mathbb{R}\) be a nonnegative \(\mathcal{G}/\mathcal{B}(\mathbb{R})\)-measurable function. The _Lebesgue integral_ of \(f\) with respect to \(\mu\) is defined as follows: \[\int_{A}fd\mu:=\sup\left\{\int_{A}gd\mu\ ;\ g\text{ is a simple function such that }g\leq f\right\}.\] (3.7)
4. Let \(A\in\mathcal{G}\) and \(f:G\to\mathbb{R}\) be a \(\mathcal{G}/\mathcal{B}(\mathbb{R})\)-measurable function. Let \(f^{+}:=\max\{f,0\}\) and \(f^{-}:=-\min\{f,0\}\). It is well known that \(f^{+}\) and \(f^{-}\) are \(\mathcal{G}/\mathcal{B}(\mathbb{R})\)-measurable and then \(\int_{A}f^{+}d\mu\) and \(\int_{A}f^{-}d\mu\) can be defined. When \(\int_{A}f^{+}d\mu<\infty\) and \(\int_{A}f^{-}d\mu<\infty\), we define _Lebesgue integral_ with respect to \(\mu\) as \[\int_{A}fd\mu:=\int_{A}f^{+}d\mu-\int_{A}f^{-}d\mu.\] (3.8) If \(A=G\), then we denote \(\int_{A}fd\mu\) as \(\int fd\mu\).
5. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and \(X\) be an \(\mathbb{R}\)-valued random variable such that \(\int_{\Omega}X^{+}d\mathbb{P}<\infty\) and \(\int_{\Omega}X^{-}d\mathbb{P}<\infty\). Then \(\int_{\Omega}Xd\mathbb{P}\) is called the _expected value_ of \(X\), denoted \(\mathbb{E}[X]\).
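As a concrete illustration of steps (2) and (3), the following minimal sketch approximates \(\int_{[0,1]}x\,dx=1/2\) from below by Lebesgue integrals of simple functions; it is a numerical illustration only, and the function and grid are our own choices.

```python
# Approximate the Lebesgue integral of f(x) = x over [0, 1] from below by the
# simple functions g_n = sum_i (i/n) * 1_{[i/n, (i+1)/n)}, as in steps (2)-(3).
def simple_integral(n: int) -> float:
    # Each set B_i = [i/n, (i+1)/n) has Lebesgue measure 1/n.
    return sum((i / n) * (1 / n) for i in range(n))

for n in (10, 100, 1000):
    print(n, simple_integral(n))  # increases towards the true value 1/2
```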
**Remark 3.15**.: In this paper, we use another type of notation for the Lebesgue integral to accommodate different situations:
\[\int_{A}fd\mu=\int_{A}f(x)\mu(dx) \tag{3.9}\]
In particular, if \(\mu\) is the Lebesgue measure, we denote the integral of \(x\mapsto f(x)\) as follows:
\[\int_{A}f(x)dx \tag{3.10}\]
**Definition 3.16** (Density of random variable and absolute continuity).:
1. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, \((\mathbb{R}^{n},\mathcal{B}(\mathbb{R}^{n}))\) be a Borel space, and \(X\) be an \(\mathbb{R}^{n}\)-valued random variable on \((\Omega,\mathcal{F},\mathbb{P})\). Let \(([0,\infty),\mathcal{B}([0,\infty)))\) be the Borel space on \([0,\infty)\) with the natural topology. A \(\mathcal{B}(\mathbb{R}^{n})/\mathcal{B}([0,\infty))\)-measurable function \(f\) is called a _density_ of \(X\) if the following holds: \[\mathbb{P}(X^{-1}(B))=\int_{B}f(x)dx,\quad\forall B\in\mathcal{B}(\mathbb{R}^{n}).\] If there exists such a function \(f\), we say that \(X\) has a density.
2. If \(X\) has a density, we say that the law of \(X\) is _absolutely continuous_ with respect to Lebesgue measure.
In the following sections, we frequently use the notion of almost sure convergence of random variables:
**Definition 3.17** (Almost sure convergence).: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space, and let \(E\) be a metric space. Let \(X\) and \(X_{n};n=1,2,\cdots\) be \(E\)-valued random variables. We say \(X_{n}\) converges almost surely to \(X\) if there exists a measurable set \(N\in\mathcal{F}\) such that \(\mathbb{P}(N)=0\) and
\[X_{n}(\omega)\stackrel{{ n\to\infty}}{{\longrightarrow}}X(\omega)\]
for every \(\omega\notin N\).
### Stochastic process and filtration
**Definition 3.18**.: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space and \(E\) be a Polish space. A family of \(E\)-valued random variables \(X:=\{X_{t}\}_{t\geq 0}\) indexed by a time parameter \(t\) is called a _stochastic process_:
\[\begin{array}{ccc}\Omega&\stackrel{{ X_{t}}}{{\longrightarrow}}&E \\ \cup&&\cup\\ \omega&\longmapsto&X_{t}(\omega)\end{array}\]
Following the convention of stochastic analysis, we say \(X\) is _measurable_ if it satisfies the following assumption:
**Assumption 3.19**.: _The function \((\omega,t)\mapsto X_{t}(\omega)\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable, which means that the inverse image \(\{(\omega,t);X_{t}(\omega)\in B\}\) belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\) whenever \(B\) is a Borel set in \(E\)._
**Remark 3.20**.: Under Assumption 3.19, it follows from Fact 3.9 that \(X_{t}\) is \(\mathcal{F}/\mathcal{B}(E)\)-measurable for all \(t\in[0,\infty)\).
We denote a path \(t\mapsto X_{t}(\omega)\) of \(\{X_{t}\}_{t\geq 0}\) as \(X(\omega)\) for every \(\omega\in\Omega\):
\[\begin{array}{ccc}[0,\infty)&\stackrel{{ X(\omega)}}{{ \longrightarrow}}&E\\ \cup&&\cup\\ t&\longmapsto&X_{t}(\omega)\end{array}\]
**Remark 3.21**.: Suppose that \((\Omega,\mathcal{F},\mathbb{P})\) is a complete probability space. If \(X_{t}\) is \(\mathcal{F}\)-measurable for all \(t\in[0,\infty)\) and the path \(X(\omega)\) is right- or left-continuous almost surely, then \(X\) is measurable (see 1.1.14 in [10]).
### Brownian motion
When studying properties of distributions of general continuous stochastic processes, and topics such as the convergence of discretizations (including numerical computations), it is common to first discuss examples involving Brownian motion, as it is the most representative continuous stochastic process. We now present its definition:
**Definition 3.22**.: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space. A stochastic process \(X:=\{X_{t}\}_{t\geq 0}\) with state space \(\mathbb{R}\) is called standard one-dimensional Brownian motion starting at \(x\in\mathbb{R}\) if
1. \(\mathbb{P}(X_{0}=x)=1\),
2. The path \(t\mapsto X_{t}(\omega)\) is continuous with probability one,
3. For any \(s,t\geq 0\), \(t>s\) implies that \(X_{t}-X_{s}\sim\mathcal{N}(0,t-s)\) i.e., \(X_{t}-X_{s}\) has normal distribution with mean \(0\) and variance \(t-s\).
4. If \(s\leq t\leq u\), then \(X_{u}-X_{t}\) is independent of \(X_{s}\).
The existence of Brownian motion is established in Section 2.2 of [10], relying on the underlying probability space.
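For intuition, and for the numerical sketches later in this section, Brownian paths can be sampled on the grid \(\{k/n\}\) by accumulating independent Gaussian increments, reflecting property (3). The helper below is our own illustration, not part of the formal development.

```python
import numpy as np

def brownian_path(x0: float, t_max: float, n: int, rng: np.random.Generator):
    """Sample a Brownian path started at x0 on the grid {k/n : k/n <= t_max}.

    By property (3), increments over steps of length 1/n are i.i.d. N(0, 1/n).
    """
    steps = int(t_max * n)
    increments = rng.normal(0.0, np.sqrt(1.0 / n), size=steps)
    return x0 + np.concatenate([[0.0], np.cumsum(increments)])

rng = np.random.default_rng(0)
path = brownian_path(0.0, t_max=1.0, n=1000, rng=rng)
```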
### Stochastic differential equation (SDE)
Let us now proceed to define one-dimensional stochastic differential equations in a rigorous manner:
**Definition 3.23**.: Let \((\Omega,\mathcal{F},\mathbb{P})\) be a probability space on which a Brownian motion \(W\) is defined. Let \(\sigma\) and \(b\) be real-valued measurable functions on \(\mathbb{R}\). A **strong solution of the stochastic differential equation (SDE)**
\[\left\{\begin{aligned} dX_{t}=b(X_{t})dt+\sigma(X_{t})dW_{t},\\ X_{0}=\xi\in\mathbb{R}.\end{aligned}\right.\]
on \((\Omega,\mathcal{F},\mathbb{P})\) with respect to \(W\) and initial condition \(\xi\), is a process \(X=\{X_{t}\}_{t\geq 0}\) with continuous sample paths and with the following properties:
1. \(\{X_{t}\}_{t\geq 0}\) is adapted to the filtration induced by Brownian motion (see 1.1.9 and 5.2.1 in [10]),
2. \(\mathbb{P}[\omega\in\Omega;X_{0}(\omega)=\xi]=1\),
3. \(\mathbb{P}[\omega\in\Omega;\int_{0}^{t}\{|b(X_{s}(\omega))|+\sigma^{2}(X_{s}(\omega))\}ds<\infty]=1\) holds for every \(0\leq t<\infty\), and
4. the integral version of the SDE above, \[X_{t}=X_{0}+\int_{0}^{t}b(X_{s})ds+\int_{0}^{t}\sigma(X_{s})dW_{s};\quad 0\leq t<\infty,\] holds almost surely. Here, the term \(\int_{0}^{t}\sigma(X_{s})dW_{s}\) is _Itô's stochastic integral_, defined as the limit, as \(n\to\infty\), of the following stochastic process (refer to Chapter 3 in [10]): \[\sum_{k=0}^{\infty}\sigma(X_{\frac{k}{n}\wedge t})(W_{\frac{k+1}{n}\wedge t}-W_{\frac{k}{n}\wedge t}).\] (3.11)
**Remark 3.24**.: A Brownian motion \(\{W_{t}\}_{t\geq 0}\) starting at \(x\in\mathbb{R}\) is itself a solution \(\{X_{t}\}_{t\geq 0}\) of the following one-dimensional SDE:
\[\left\{\begin{aligned} dX_{t}&=dW_{t},\\ X_{0}&=x.\end{aligned}\right.\]
**Remark 3.25**.: Brownian motion and solutions of SDEs are well known to be continuous stochastic processes; hence they satisfy Assumption 3.19.
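Strong solutions of such SDEs are commonly approximated on a grid by the Euler-Maruyama scheme, which mimics the Riemann sums in (3.11). The sketch below is a standard discretization given for intuition, not the construction used in the existence theory; the drift \(b\) and diffusion \(\sigma\) are passed as ordinary Python callables.

```python
import numpy as np

def euler_maruyama(b, sigma, x0: float, t_max: float, n: int,
                   rng: np.random.Generator):
    """Euler-Maruyama approximation of dX_t = b(X_t) dt + sigma(X_t) dW_t."""
    dt = 1.0 / n
    steps = int(t_max * n)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over [k/n, (k+1)/n]
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dw
    return x

# Ornstein-Uhlenbeck example: dX_t = -X_t dt + dW_t.
rng = np.random.default_rng(1)
path = euler_maruyama(lambda x: -x, lambda x: 1.0, 0.0, 1.0, 1000, rng)
```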
### Metric Temporal Logic (MTL) and its semantics
In this section, we introduce the formal definition of MTL (Metric Temporal Logic) formulas for a given path. We begin by assuming a set of atomic propositions, denoted as \(AP\), which is typically defined as a finite set. The MTL formulas are then defined as follows:
**Definition 3.26** (Syntax of MTL-formulas).: We define MTL-formulas for a continuous stochastic process \(\{X_{t}\}_{t\geq 0}\) using the following grammar:
1. Every atomic proposition \(a\in AP\) is an MTL formula.
2. If \(\phi\) is an MTL formula, then so is its negation \(\neg\phi\).
3. If \(\phi_{1}\) and \(\phi_{2}\) are MTL formulas, then the conjunction of \(\phi_{1}\) and \(\phi_{2}\), denoted \(\phi_{1}\wedge\phi_{2}\), is also an MTL formula.
4. If \(\phi_{1}\) and \(\phi_{2}\) are MTL formulas, and \(I\) is an interval on the domain \([0,\infty)\), then the formula \(\phi_{1}\mathcal{U}_{I}\phi_{2}\) is an MTL formula. The grammar above can be conveniently represented in _Backus-Naur form_: \[\phi::=a\mid\phi_{1}\wedge\phi_{2}\mid\neg\phi\mid\phi_{1}\mathcal{U}_{I}\phi_{2},\]
**Remark 3.27** ("Until" operator).: In item (4) of Definition 3.26, \(\mathcal{U}_{I}\) is called an _until operator_. The interval \(I\) appearing in the until operator \(\mathcal{U}_{I}\) can be closed, left-open, right-open, or open; that is, \(I\) can take the form \(I=[a,b]\), \((a,b]\), \([a,b)\), or \((a,b)\), respectively. Furthermore, when \(I\) is unbounded, it can only be of the form \(I=(a,\infty)\) or \(I=[a,\infty)\).
We proceed to define two types of semantics for the previously presented syntax: one for the continuous time domain and the other for the discrete time domain.
**Definition 3.28** (Continuous Semantics of MTL-Formulas).: Consider a path \(X(\omega)\) of the stochastic process \(\{X_{t}\}_{t\geq 0}\) with a fixed \(\omega\in\Omega\). Additionally, for each atomic proposition
\(a\in\mathrm{AP}\), let us assign a Borel set \(B_{a}\) on the domain \(E\). The continuous semantics of MTL formulas is recursively defined as follows:
\[\begin{aligned} X(\omega),t\models a &\iff X_{t}(\omega)\in B_{a}\\ X(\omega),t\models\neg\phi &\iff \text{not }[X(\omega),t\models\phi]\\ X(\omega),t\models\phi_{1}\wedge\phi_{2} &\iff X(\omega),t\models\phi_{1}\text{ and }X(\omega),t\models\phi_{2}\\ X(\omega),t\models\phi_{1}\mathcal{U}_{I}\phi_{2} &\iff \exists s\in I\text{ s.t. }X(\omega),t+s\models\phi_{2}\text{ and}\\ &\qquad\forall s^{\prime}\in[t,t+s),\;X(\omega),s^{\prime}\models\phi_{1}\end{aligned}\]
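As a small worked example of the until clause (our own illustration): consider the deterministic path \(X_{t}(\omega)=t\) with \(B_{a}=[0,2]\) and \(B_{b}=[1,\infty)\). Then \(X(\omega),0\models a\,\mathcal{U}_{[0,2]}\,b\), witnessed by \(s=1\): indeed \(X_{1}(\omega)=1\in B_{b}\), and \(X_{s^{\prime}}(\omega)=s^{\prime}\in B_{a}\) for every \(s^{\prime}\in[0,1)\).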
**Definition 3.29** (Time set).: We introduce the notations \(\llbracket\phi\rrbracket\), \(\llbracket\phi\rrbracket(t)\), and \(\llbracket\phi\rrbracket_{\omega}\) as follows:
\[\llbracket\phi\rrbracket :=\{(\omega,t);X(\omega),t\models\phi\},\] \[\llbracket\phi\rrbracket(t) :=\{\omega;X(\omega),t\models\phi\},\] \[\llbracket\phi\rrbracket_{\omega} :=\{t\geq 0;X(\omega),t\models\phi\}.\]
In particular, we refer to \(\llbracket\phi\rrbracket_{\omega}\) as the "time set" associated with \(\phi\).
**Definition 3.30** (Discrete Semantics of MTL-Formulas).: Let us consider the path \(X(\omega)\) of \(\{X_{t}\}_{t\geq 0}\) and the assignment \(B_{a}\) for each atomic proposition \(a\in\mathrm{AP}\), as in Definition 3.28. For any \(n\in\mathbb{N}\), we denote \(\{k/n;k\in\mathbb{N}\}\) by \(\mathbb{N}/n\). The discrete semantics of MTL formulas for any \(t\in\mathbb{N}/n\) is defined recursively as follows:
\[\begin{aligned} X(\omega),t\models_{n}a &\iff X_{t}(\omega)\in B_{a}\\ X(\omega),t\models_{n}\neg\phi &\iff \text{not }[X(\omega),t\models_{n}\phi]\\ X(\omega),t\models_{n}\phi_{1}\wedge\phi_{2} &\iff X(\omega),t\models_{n}\phi_{1}\text{ and }X(\omega),t\models_{n}\phi_{2}\\ X(\omega),t\models_{n}\phi_{1}\mathcal{U}_{I}\phi_{2} &\iff \exists s\in I\cap\mathbb{N}/n\text{ s.t. }X(\omega),t+s\models_{n}\phi_{2}\text{ and}\\ &\qquad\forall s^{\prime}\in[t,t+s)\cap\mathbb{N}/n,\;X(\omega),s^{\prime}\models_{n}\phi_{1}\end{aligned}\]
For \(t\in\mathbb{N}/n\), we denote by \(\llbracket\phi\rrbracket_{n}(t)\) the set \(\{\omega;X(\omega),t\models_{n}\phi\}\).
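Because the discrete semantics quantifies only over grid points in \(\mathbb{N}/n\), it can be evaluated mechanically on a sampled path. The following sketch is our own illustration: formulas are nested tuples, `path[k]` stands for \(X_{k/n}(\omega)\), atomic propositions are predicates playing the role of the sets \(B_{a}\), and intervals are taken to be closed for simplicity.

```python
def holds(phi, path, k, n):
    """Check X(omega), k/n |=_n phi on a path sampled at the times k/n."""
    op = phi[0]
    if op == "atom":                    # phi = ("atom", predicate for B_a)
        return phi[1](path[k])
    if op == "not":                     # phi = ("not", psi)
        return not holds(phi[1], path, k, n)
    if op == "and":                     # phi = ("and", psi1, psi2)
        return holds(phi[1], path, k, n) and holds(phi[2], path, k, n)
    if op == "until":                   # phi = ("until", (lo, hi), psi1, psi2)
        (lo, hi), psi1, psi2 = phi[1], phi[2], phi[3]
        for j in range(len(path) - k):  # candidate witness s = j/n in I
            if lo <= j / n <= hi and holds(psi2, path, k + j, n):
                # psi1 must hold at every grid point in [k/n, k/n + j/n)
                if all(holds(psi1, path, k + i, n) for i in range(j)):
                    return True
        return False
    raise ValueError(f"unknown operator {op}")
```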
**Remark 3.31** (Disjunction, Diamond and Box Operator).: We often use the following notation:
\[\phi_{1}\lor\phi_{2} =\neg((\neg\phi_{1})\land(\neg\phi_{2})),\] \[\lozenge_{I}\phi =\top\mathcal{U}_{I}\phi,\] \[\Box_{I}\phi =\neg(\lozenge_{I}\neg\phi),\]
where \(\top\) is the always-true proposition. We refer to \(\lozenge_{I}\) and \(\Box_{I}\) as the diamond and box operators, respectively. In the continuous and discrete semantics, the following equivalences hold:
\[X(\omega),t\models\lozenge_{I}\phi \Leftrightarrow(\exists s\in I)[X(\omega),t+s\models\phi],\] \[X(\omega),t\models\Box_{I}\phi \Leftrightarrow(\forall s\in I)[X(\omega),t+s\models\phi].\]
\[X(\omega),t\models_{n}\lozenge_{I}\phi \Leftrightarrow(\exists s\in I\cap\mathbb{N}/n)[X(\omega),t+s \models_{n}\phi],\] \[X(\omega),t\models_{n}\Box_{I}\phi \Leftrightarrow(\forall s\in I\cap\mathbb{N}/n)[X(\omega),t+s \models_{n}\phi].\]
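In the same encoding as the sketch above, the derived operators of this remark can be written directly (again with our own illustrative helpers):

```python
TOP = ("atom", lambda x: True)  # the always-true proposition

def eventually(I, phi):         # Diamond_I phi = T U_I phi
    return ("until", I, TOP, phi)

def always(I, phi):             # Box_I phi = not Diamond_I not phi
    return ("not", eventually(I, ("not", phi)))
```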
## 4. Proof of Measurability
As introduced in Definition 3.3, a probability \(\mathbb{P}(F)\) can only be defined for a measurable set \(F\). Therefore, in order to define \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models\phi)\) or \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models_{n}\phi)\), it is necessary to show the measurability of \(\llbracket\phi\rrbracket(t)=\{\omega\in\Omega;X(\omega),t\models\phi\}\) or \(\llbracket\phi\rrbracket_{n}(t)=\{\omega\in\Omega;X(\omega),t\models_{n}\phi\}\), respectively. Since the definition of the discrete semantics of MTL involves intersections and unions of at most countably many sets, the measurability of \(\llbracket\phi\rrbracket_{n}(t)\) follows directly from Definition 3.1 of the \(\sigma\)-algebra, and \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models_{n}\phi)\) is therefore well-defined. The measurability of \(\llbracket\phi\rrbracket(t)\), however, is not straightforward, so it is not obvious whether \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models\phi)\) can be defined at all.
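Once the measurability of \(\llbracket\phi\rrbracket_{n}(t)\) is granted, \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models_{n}\phi)\) can for instance be estimated by Monte Carlo. The sketch below reuses the `brownian_path`, `holds` and `eventually` helpers from the earlier sketches; all of these are our own illustrative names, not part of the paper's formal development.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t_max, trials = 100, 1.0, 500
# phi = Diamond_[0,1] (X >= 1), i.e. the path reaches level 1 on the grid.
phi = eventually((0.0, 1.0), ("atom", lambda x: x >= 1.0))

hits = 0
for _ in range(trials):
    path = brownian_path(0.0, t_max, n, rng)
    hits += holds(phi, path, 0, n)
print(hits / trials)  # estimate of P(omega; X(omega), 0 |=_n phi)
```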
To illustrate the difficulty, let \(X\) be an \(E\)-valued stochastic process, let \(a,b\) be atomic propositions, and let \(I\) be an interval on \([0,\infty)\). Then \(X(\omega),t\models a\) is equivalent to \(X_{t}(\omega)\in B_{a}\) for some Borel set \(B_{a}\), and \(X(\omega),t\models b\) is equivalent to \(X_{t}(\omega)\in B_{b}\) for some Borel set \(B_{b}\). Since \(X_{t}\) is \(\mathcal{F}/\mathcal{B}(E)\)-measurable, the set \(\llbracket a\rrbracket(t)=\{\omega\in\Omega;X_{t}(\omega)\in B_{a}\}\) belongs to \(\mathcal{F}\), and hence \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models a)\) can be defined; \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models b)\) can be defined for the same reason.
However, can we define \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models a\mathcal{U}_{I}b)\)? From the definition of the until operator, \(X(\omega),t\models a\mathcal{U}_{I}b\) is equivalent to the following:
\[(\exists s\in I)[X_{t+s}(\omega)\in B_{b}\text{ and }(\forall s^{\prime}\in[0,s))[X_{t+s^{\prime}}(\omega)\in B_{a}]].\]
Therefore,
\[\llbracket a\mathcal{U}_{I}b\rrbracket(t)=\bigcup_{s\in I}\Bigl(\{\omega\in\Omega;X_{t+s}(\omega)\in B_{b}\}\cap\bigcap_{s^{\prime}\in[0,s)}\{\omega\in\Omega;X_{t+s^{\prime}}(\omega)\in B_{a}\}\Bigr).\]
Although the measurability of \(\{\omega\in\Omega:X_{t+s}(\omega)\in B_{b}\}\cap\{\omega\in\Omega:X_{t+s^{ \prime}}(\omega)\in B_{a}\}\) follows from the \(\mathcal{F}/\mathcal{B}(E)\)-measurability of \(X_{t+s}\) and \(X_{t+s^{\prime}}\), the representation of \(\llbracket a\mathcal{U}_{I}b\rrbracket(t)\) involves uncountable intersections and unions of these sets. Since Definition 3.1 only guarantees that measurability is preserved under countable unions or intersections, the measurability of \(\llbracket a\mathcal{U}_{I}b\rrbracket(t)=\{\omega\in\Omega;X(\omega),t\models a \mathcal{U}_{I}b\}\) is not obvious. Thus, we have observed that the challenge arises when dealing with the until formula \(\mathcal{U}_{I}\).
To show the measurability of the event \(\llbracket\phi\rrbracket(t)=\{\omega\in\Omega;X(\omega),t\models\phi\}\) for an arbitrary MTL formula \(\phi\), we utilize a well-known and profound theorem from the theory of capacity. We represent the until operator \(\mathcal{U}_{I}\) using the inverse image of the reaching time of an MTL formula. This inverse image is a projection of a measurable set on \(\Omega\times[0,\infty)\) to \(\Omega\), and by employing capacity theory we show the measurability of the projection.
In order to show the measurability of until formulas, we introduce the _reaching time_ of a set \(B\subset\Omega\times[0,\infty)\):
**Definition 4.1**.: Consider a subset \(B\) of \(\Omega\times[0,\infty)\). _The reaching time_ or _debut_\(\tau_{B}(\omega,t)\) of \(B\) is defined for each \(\omega\in\Omega\) as the first time \(s>t\) at which \((\omega,s)\) reaches \(B\), given by:
\[\tau_{B}(\omega,t):=\inf\{s>t;(\omega,s)\in B\},\]
where \(\tau_{B}(\omega,t):=\infty\) if \(\{s>t;(\omega,s)\in B\}=\emptyset\).
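For example, if \(B=\Omega\times([2,3]\cup\{5\})\), then \(\tau_{B}(\omega,1)=2\), \(\tau_{B}(\omega,3)=5\), and \(\tau_{B}(\omega,5)=\infty\) for every \(\omega\in\Omega\).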
**Lemma 4.2**.: _For any subset \(B\) of \(\Omega\times[0,\infty)\), the reaching time \(\tau_{B}(\omega,t)\) is right-continuous with respect to \(t\in[0,\infty)\)._
Proof.: Assume \(\tau_{B}(\omega,t)>t\). We can express \(\tau_{B}(\omega,t)\) as \(t+\alpha\) for some \(\alpha>0\). According to the definition of \(\tau_{B}(\omega,t)\), for every \(s\) in the interval \((t,t+\alpha)\), it holds that \((\omega,s)\notin B\). Therefore, we have \(\tau_{B}(\omega,s)=t+\alpha\) for such \(s\), and as a result, \(\lim_{s\downarrow t}\tau_{B}(\omega,s)=t+\alpha=\tau_{B}(\omega,t)\).
Assume \(\tau_{B}(\omega,t)=t\). For every \(\epsilon>0\), there exists \(\delta\in(0,\epsilon)\) such that \((\omega,t+\delta)\in B\). Therefore, \(\tau_{B}(\omega,s)\leq t+\delta<t+\epsilon\) for every \(s\in(t,t+\delta)\), which implies \(\lim_{s\downarrow t}\tau_{B}(\omega,s)=t=\tau_{B}(\omega,t)\).
The following lemma serves as an abstract version of Proposition 1.1.13 in [10].
**Lemma 4.3**.: _If a stochastic process \(\{Y_{t}\}_{t\geq 0}\) is \([0,\infty]\)-valued and right-continuous, then the mapping \((\omega,t)\mapsto Y_{t}(\omega)\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable._
Proof.: For \(t>0\), \(n\geq 1\), and \(k=0,1,\ldots\), we define \(Y_{t}^{(n)}(\omega)=Y_{(k+1)/2^{n}}(\omega)\) for \(\frac{k}{2^{n}}<t\leq\frac{k+1}{2^{n}}\), and \(Y_{0}^{(n)}(\omega)=Y_{0}(\omega)\). The mapping \((\omega,t)\mapsto Y_{t}^{(n)}(\omega)\) from \(\Omega\times[0,\infty)\) to \([0,\infty]\) is clearly \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable. Furthermore, by right-continuity, we have \(Y_{t}^{(n)}(\omega)\to Y_{t}(\omega)\) as \(n\to\infty\) for any \((\omega,t)\in\Omega\times[0,\infty)\). Consequently, the limit mapping \((\omega,t)\mapsto Y_{t}(\omega)\) is also \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable.
Now let us show the measurability of the reaching time when considering it as a stochastic process. This result is derived from capacity theory, which guarantees the measurability of the projection (of a well-behaved set).
**Lemma 4.4**.: _If \(B\) belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\), the mapping \((\omega,t)\mapsto\tau_{B}(\omega,t)\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable._
Proof.: From Lemma 4.2, the reaching time \(\tau_{B}(\omega,t):=\inf\{s>t;(\omega,s)\in B\}\) is right-continuous with respect to \(t\in[0,\infty)\). By Lemma 4.3, it is therefore enough to show that \(\omega\mapsto\tau_{B}(\omega,t)\) is \(\mathcal{F}/\mathcal{B}([0,\infty])\)-measurable for each \(t\). From the definition of \(\tau_{B}(\omega,t)\), we can represent \(\{\omega;\tau_{B}(\omega,t)<u\}\) by using the projection mapping \(\pi:\Omega\times[0,\infty]\to\Omega\) as
\[\{\omega;\tau_{B}(\omega,t)<u\}=\pi\bigl(B\cap(\Omega\times(t,u))\bigr),\quad\forall u\in[0,\infty].\]
Since \([0,\infty]\) is a locally compact space with a countable basis, \(\mathcal{F}\) is complete, and the set \(B\cap(\Omega\times(t,u))\) belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\), we can apply _Theorem I-4.14 in [11]_ to show \(\pi(B\cap(\Omega\times(t,u)))\in\mathcal{F}\). Therefore, \(\{\omega\in\Omega;\tau_{B}(\omega,t)<u\}\in\mathcal{F}\) for all \(u\geq 0\), which implies the \(\mathcal{F}/\mathcal{B}([0,\infty])\)-measurability of the map \(\omega\mapsto\tau_{B}(\omega,t)\).
The subsequent lemma, concerning the two types of subsets (4.1) and (4.2) below, follows directly from basic arguments in measure theory.
**Lemma 4.5**.: _Let \(B\in\mathcal{F}\otimes\mathcal{B}([0,\infty))\). Let \(f\) and \(g\) be functions from \(\Omega\times[0,\infty)\) to \([0,\infty]\), which are \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable. Then, the following sets are \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable._
\[\{(\omega,t)\in\Omega\times[0,\infty);f(\omega,t)\geq g(\omega,t)\}, \tag{4.1}\] \[\{(\omega,t)\in\Omega\times[0,\infty);(\omega,f(\omega,t))\in B\}. \tag{4.2}\]
Proof.: Since \(f\) and \(g\) are \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable, the set (4.1) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable.
Because \(f\) is an \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable function, \(\tilde{f}(\omega,t)=(\omega,f(\omega,t))\) is an \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{F}\otimes\mathcal{B}([0,\infty])\)-measurable function. By considering \(B\) as a subset of \(\Omega\times[0,\infty]\),
it becomes \(\mathcal{F}\otimes\mathcal{B}([0,\infty])\)-measurable. Consequently, \(\tilde{f}^{-1}(B)\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable. Because
\[\{(\omega,t)\in\Omega\times[0,\infty);(\omega,f(\omega,t))\in B\}=\tilde{f}^{-1 }(B)\in\mathcal{F}\otimes\mathcal{B}([0,\infty)),\]
the lemma holds.
Now, we can proceed to prove the measurability of \(\llbracket\phi\rrbracket\) and \(\llbracket\phi\rrbracket(t)\).
**Lemma 4.6**.: _Let \(\phi_{1}\) and \(\phi_{2}\) be two MTL-formulas and suppose that both \(\llbracket\phi_{1}\rrbracket\) and \(\llbracket\phi_{2}\rrbracket\) are in \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\). Then \(\{(\omega,t);X(\omega),t\models\phi_{1}\mathcal{U}_{I}\phi_{2}\}\) belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)._
We prove Lemma 4.6 using the right-continuity of \(\tau_{i}(\omega,t)\), \(i=1,2\), together with Lemmas 4.4 and 4.5.
Proof.: In order to prove this lemma, we put
\[\tau_{1}(\omega,t) :=\tau_{\llbracket\phi_{1}\rrbracket^{C}}(\omega,t)=\inf\{s>t;X( \omega),s\not\models\phi_{1}\},\] \[\tau_{2}(\omega,t) :=\tau_{\llbracket\phi_{2}\rrbracket}(\omega,t)=\inf\{s>t;X( \omega),s\models\phi_{2}\}.\]
When we regard \(\tau_{i}\) (\(i=1,2\)) as a function on \(\Omega\times[0,\infty)\), we simply write \(\tau_{i}\). By Lemma 4.4, \(\tau_{1}\) and \(\tau_{2}\) are \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable.
We only prove the case where \(I=[a,b]\); the cases for the other forms of \(I\) can be proved in similar ways. For simplicity, suppose that \(a>0\). Then \(X(\omega),t\models\phi_{1}\mathcal{U}_{I}\phi_{2}\) holds if and only if \(X(\omega),t\models\phi_{1}\) holds and one of the following possibilities holds:
1. \(X(\omega),t+a\models\phi_{2}\) and \(\tau_{1}(\omega,t)\geq t+a\) hold
2. \(X(\omega),t+b\models\phi_{2}\) and \(\tau_{1}(\omega,t)\geq t+b\) hold
3. \(\tau_{2}(\omega,t+a)<t+b\), \(X(\omega),\tau_{2}(\omega,t+a)\models\phi_{2}\), and \(\tau_{1}(\omega,t)\geq\tau_{2}(\omega,t+a)\) hold
4. \(\tau_{2}(\omega,t+a)<t+b\), \(X(\omega),\tau_{2}(\omega,t+a)\not\models\phi_{2}\), and \(\tau_{1}(\omega,t)>\tau_{2}(\omega,t+a)\) hold
By \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurability of \(\tau_{1}\) and \(\tau_{2}\),
\[\{(\omega,t);\tau_{1}(\omega,t)\geq t+a\},\] \[\{(\omega,t);\tau_{1}(\omega,t)\geq t+b\},\text{ and}\] \[\{(\omega,t);\tau_{2}(\omega,t+a)<t+b\}\]
are in \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\). Thanks to Lemma 4.5,
\[\{(\omega,t);\tau_{1}(\omega,t)\geq\tau_{2}(\omega,t+a)\}\text{ and}\] \[\{(\omega,t);\tau_{1}(\omega,t)>\tau_{2}(\omega,t+a)\}\]
are in \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\). Since \(\tau_{2}(\omega,t+a)\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}([0,\infty])\)-measurable, we can write
\[\{(\omega,t);X(\omega),\tau_{2}(\omega,t+a)\models\phi_{2}\} =\{(\omega,t);(\omega,\tau_{2}(\omega,t+a))\in\llbracket\phi_{2} \rrbracket\},\] \[\{(\omega,t);X(\omega),\tau_{2}(\omega,t+a)\not\models\phi_{2}\} =\{(\omega,t);(\omega,\tau_{2}(\omega,t+a))\notin\llbracket\phi_{2} \rrbracket\}.\]
From Lemma 4.5, both sets are in \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\). This completes the proof of the \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurability of \(\{(\omega,t);X(\omega),t\models\phi_{1}\mathcal{U}_{I}\phi_{2}\}\).
**Theorem 4.7**.: _For each MTL-formula \(\phi\), \(\llbracket\phi\rrbracket\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable and \(\llbracket\phi\rrbracket(t)\) is \(\mathcal{F}\)-measurable for all \(t\geq 0\)._
Proof.: We can prove the measurability of \(\llbracket\phi\rrbracket\) by induction on \(\phi\).
* Atomic Formula: If \(\phi\) is an atomic formula, then \(\llbracket\phi\rrbracket\) belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\) because the mapping \((\omega,t)\mapsto X_{t}(\omega)\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))/\mathcal{B}(E)\)-measurable.
* Negation: If \(\llbracket\phi\rrbracket\) belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\), then \(\llbracket\neg\phi\rrbracket=\llbracket\phi\rrbracket^{C}\) clearly belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\).
* Conjunction: Suppose \(\phi_{1}\) and \(\phi_{2}\) are two MTL formulas, and \(\llbracket\phi_{i}\rrbracket\) is \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable for \(i=1,2\). Then it is straightforward to show that \(\llbracket\phi_{1}\wedge\phi_{2}\rrbracket=\llbracket\phi_{1}\rrbracket \wedge\llbracket\phi_{2}\rrbracket\) is also \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurable.
* Until Operator: From Lemma 4.6, we obtain the \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\)-measurability of \(\llbracket\phi_{1}\mathcal{U}_{I}\phi_{2}\rrbracket\).

Once we have shown that \(\llbracket\phi\rrbracket\) belongs to \(\mathcal{F}\otimes\mathcal{B}([0,\infty))\), the fact that \(\llbracket\phi\rrbracket(t)\in\mathcal{F}\) follows from Fact 3.9.
Since the domain of \(\mathbb{P}\) is \(\mathcal{F}\), we can define \(\mathbb{P}(\omega;X(\omega),t\models\phi)\) for all \(\phi\) and \(t\in[0,\infty)\).
## 5. Discretization of MTL formula: Counterexample
Fu and Topcu [15] proposed a methodology for approximating the probability that the solution of a controlled stochastic differential equation (SDE) satisfies an MTL formula. Their approach involves discretizing both the time and the state space of the SDE. By utilizing a reachability problem for a timed automaton generated by the SDE and the MTL formula, they derive probabilities based on this discretized semantics.
The authors argue that the convergence of their simulation is a result of the convergence in distribution of the approximated SDE, whose state space has been discretized. They claim that this probability obtained from the discretized approach converges to the probability derived from the continuous-time semantics of the original SDE.
However, in this paper, we demonstrate that for a one-dimensional Brownian motion, denoted by \(X\), the probability obtained using the discretized semantics does not converge to the probability obtained using the continuous semantics. This failure arises from the fact that the reaching time of Brownian motion has a positive density.
It is worth noting that Brownian motion can be viewed as a solution of a stochastic differential equation (SDE) (see Remark 3.24). Furthermore, every SDE without control can be regarded as a special case of controlled SDEs. Consequently, Brownian motion can serve as an illustrative example of a solution to a controlled SDE. Hence, our counterexample aligns with the scenario presented in [15].
Consider the case that \(X\) is one-dimensional Brownian motion starting from \(0\). Let \(p\) be an atomic formula, \(B_{p}:=[1,\infty)\) be the set associated with \(p\) and \(\tau_{p}(\omega):=\inf\{t\geq 0;X_{t}\in B_{p}\}\). In other words, \(X(\omega),t\models p\Leftrightarrow X_{t}(\omega)\geq 1\) and \(\tau_{p}(\omega)=\inf\{t\geq 0;X(\omega),t\models p\}\).
Put
\[\phi_{1} :=\square_{(1,2)}(\lozenge_{(1,4)}p\wedge\neg\lozenge_{(1,3)}p) \tag{5.1}\] \[\phi_{2} :=(\lozenge_{(1,3)}\phi_{1})\wedge(\neg\lozenge_{(1,2)}\phi_{1}) \wedge(\neg\lozenge_{(2,3)}\phi_{1}),\] (5.2) \[\phi_{3} :=\lozenge_{(1,2)}\phi_{2},\] (5.3) \[\psi :=(\neg p)\wedge(\neg\lozenge_{(0,8)}p)\wedge\phi_{3}. \tag{5.4}\]
In one line,
\[\psi:=(\neg p)\wedge(\neg\lozenge_{(0,8)}p)\wedge\lozenge_{(1,2)} [(\lozenge_{(1,3)}\square_{(1,2)}(\lozenge_{(1,4)}p\wedge\neg\lozenge_{(1,3 )}p)) \tag{5.5}\] \[\wedge(\neg\lozenge_{(1,2)}\square_{(1,2)}(\lozenge_{(1,4)}p \wedge\neg\lozenge_{(1,3)}p))\] (5.6) \[\wedge(\neg\lozenge_{(2,3)}\square_{(1,2)}(\lozenge_{(1,4)}p \wedge\neg\lozenge_{(1,3)}p))]. \tag{5.7}\]
In this setting, the following statements hold.
**Theorem 5.1** (Remark 2.8.3 in [10]).: \(\tau_{p}(\omega)\) _has a positive density on \([0,\infty)\)._
Now we estimate the probability \(\mathbb{P}(\omega\in\Omega;X(\omega),0\models\psi)\) of \(\psi\) under the continuous semantics.
**Lemma 5.2**.: _Suppose that \(\tau_{p}(\omega)\geq 6\), which happens with positive probability. Then \(\tau_{p}(\omega)-5\) is an isolated point of \(\llbracket\phi_{1}\rrbracket_{\omega}\). Moreover, \(X(\omega),t\not\models\phi_{1}\) for \(t\in[0,\tau_{p}(\omega)-5)\cup(\tau_{p}(\omega)-5,\tau_{p}(\omega)-3)\). In other words,_
\[\left\{\begin{aligned} X(\omega),t\not\models\phi_{1}& \text{ for }0\leq t<\tau_{p}(\omega)-5,\\ X(\omega),t\models\phi_{1}&\text{ for }t=\tau_{p}( \omega)-5,\\ X(\omega),t\not\models\phi_{1}&\text{ for }\tau_{p}( \omega)-5<t<\tau_{p}(\omega)-3.\end{aligned}\right.\]
Proof.: Note that \(\tau_{p}(\omega)<\infty\) almost surely. Suppose \(\tau_{p}(\omega)\geq 6\). Then \(X(\omega),t\not\models p\) for \(t<\tau_{p}(\omega)\) and \(\inf\{t\geq 0;X(\omega),t\models p\}=\tau_{p}(\omega)\).
From the definition of \(\tau_{p}(\omega)\), if \(t\leq\tau_{p}(\omega)-4\), there is no \(s\in(t+1,t+4)\) such that \(X(\omega),s\models p\), which implies \(X(\omega),t\not\models\Diamond_{(1,4)}p\). Again from the definition of \(\tau_{p}(\omega)\), if \(t\in(\tau_{p}(\omega)-4,\tau_{p}(\omega)-1)\), there exists some \(s\in(t+1,t+4)\) such that \(X(\omega),s\models p\). Thus we obtain
\[\left\{\begin{aligned} X(\omega),t\not\models\Diamond_{(1,4)}p& \text{ for }t\leq\tau_{p}(\omega)-4,\\ X(\omega),t\models\Diamond_{(1,4)}p&\text{ for }\tau_{p}( \omega)-4<t<\tau_{p}(\omega)-1.\end{aligned}\right.\]
Figure 1: The truth value of “\(X(\omega),t\models\phi_{1}\)”.
Similarly, we can show that
\[\left\{\begin{aligned} X(\omega),t\models\neg\lozenge_{(1,3)}p&\text{ for }0\leq t\leq\tau_{p}(\omega)-3,\\ X(\omega),t\not\models\neg\lozenge_{(1,3)}p&\text{ for }\tau_{p}(\omega)-3<t<\tau_{p}(\omega)-1.\end{aligned}\right.\]

Combining the two displays, we obtain

\[\left\{\begin{aligned} X(\omega),t\not\models\lozenge_{(1,4)}p\wedge(\neg\lozenge_{(1,3)}p)&\text{ for }0\leq t\leq\tau_{p}(\omega)-4,\\ X(\omega),t\models\lozenge_{(1,4)}p\wedge(\neg\lozenge_{(1,3)}p)&\text{ for }\tau_{p}(\omega)-4<t\leq\tau_{p}(\omega)-3,\\ X(\omega),t\not\models\lozenge_{(1,4)}p\wedge(\neg\lozenge_{(1,3)}p)&\text{ for }\tau_{p}(\omega)-3<t<\tau_{p}(\omega)-1.\end{aligned}\right.\]

Since \(X(\omega),t\models\phi_{1}\) requires \(\lozenge_{(1,4)}p\wedge(\neg\lozenge_{(1,3)}p)\) to hold at every \(s\in(t+1,t+2)\), it requires \((t+1,t+2)\subset(\tau_{p}(\omega)-4,\tau_{p}(\omega)-3]\), which, for \(t\in[0,\tau_{p}(\omega)-3)\), holds exactly when \(t=\tau_{p}(\omega)-5\) (see Figure 1). This proves the claim.
**Lemma 5.3**.: \(X(\omega),0\models\psi\) _is equivalent to \(\tau_{p}(\omega)\in(8,9)\) almost surely. In particular, \(\mathbb{P}(X(\omega),0\models\psi)>0\)._
Proof.: First, we note that \(X(\omega),0\models(\neg p)\wedge(\neg\Diamond_{(0,8)}p)\) is equivalent to \(\tau_{p}(\omega)\geq 8\). Since \(\tau_{p}(\omega)\) has a density, \(X(\omega),0\models\psi\) implies \(\tau_{p}(\omega)>8\) almost surely.
Suppose that \(\tau_{p}(\omega)>8\). Define
\[\tau_{1}(\omega) :=\inf\{t;X(\omega),t\models\phi_{1}\},\] \[\tau_{2}(\omega) :=\inf\{t;X(\omega),t\models\phi_{2}\}.\]
Then, from Lemma 5.2,
\[\left\{\begin{aligned} X(\omega),t\not\models\phi_{1},& \text{ for }t\in[0,\tau_{p}(\omega)-5),\\ X(\omega),t\models\phi_{1},&\text{ at }t=\tau_{p}( \omega)-5,\\ X(\omega),t\not\models\phi_{1},&\text{ for }t\in(\tau_{p}( \omega)-5,\tau_{p}(\omega)-3).\end{aligned}\right. \tag{5.8}\]
Hence \(\tau_{1}(\omega)=\tau_{p}(\omega)-5\). Furthermore, \(X(\omega),t\models\phi_{2}\) means
\[\left\{\begin{aligned} X(\omega),s\not\models\phi_{1},& \text{ for }s\in(t+1,t+2),\\ X(\omega),s\models\phi_{1},&\text{ at }s=t+2,\\ X(\omega),s\not\models\phi_{1},&\text{ for }s\in(t+2,t+3). \end{aligned}\right.\]
Then we can conclude from (5.8) that
\[\left\{\begin{aligned} X(\omega),t\models\neg\phi_{2},& \text{ for }t\in[0,\tau_{p}(\omega)-7)\\ X(\omega),t\models\phi_{2},&\text{ at }t=\tau_{p}( \omega)-7,\\ X(\omega),t\models\neg\phi_{2},&\text{ for }t\in(\tau_{p}( \omega)-7,\tau_{p}(\omega)-5).\end{aligned}\right.\]
This means exactly \(\tau_{2}(\omega)=\tau_{p}(\omega)-7\).
Suppose that \(\tau_{p}(\omega)\geq 9\). Then \(\tau_{2}(\omega)=\tau_{p}(\omega)-7\geq 2\). Since \(X(\omega),0\models\phi_{3}\) is equivalent to \(\tau_{2}(\omega)\in(1,2)\), \(X(\omega),0\models\phi_{3}\) does not hold. Then \(X(\omega),0\models\psi\) implies \(\tau_{p}(\omega)<9\). Consequently, we obtain that \(X(\omega),0\models\psi\) implies \(\tau_{p}(\omega)\in(8,9)\) almost surely.
Conversely, as we have seen that \(\tau_{p}(\omega)>8\) implies \(\tau_{2}(\omega)=\tau_{p}(\omega)-7\), \(\tau_{p}(\omega)\in(8,9)\) implies \(\tau_{2}(\omega)\in(1,2)\). Then \(X(\omega),0\models\phi_{3}\), and together with \(\tau_{p}>8\), we can conclude that \(X(\omega),0\models\psi\).
**Lemma 5.4**.: _Let \(n\geq 2\), \(\tau_{p}^{(n)}(\omega):=\inf\{t\in\mathbb{N}/n;X(\omega),t\models_{n}p\}\), and suppose that \(\tau_{p}^{(n)}(\omega)\geq 6\). Then it holds that_
\[\left\{\begin{aligned} X(\omega),t\not\models_{n}\phi_{1}& \text{ for }t=0,1/n,\cdots,\tau_{p}^{(n)}(\omega)-5-1/n,\\ X(\omega),t\models_{n}\phi_{1}&\text{ for }t=\tau_{p}^{(n )}(\omega)-5,\tau_{p}^{(n)}(\omega)-5+1/n,\\ X(\omega),t\not\models_{n}\phi_{1}&\text{ for }t=\tau_{p}^{(n )}(\omega)-5+2/n,\cdots,\tau_{p}^{(n)}(\omega)-2-2/n.\end{aligned}\right.\]
Proof.: By the definition of the diamond operator, \(X(\omega),t\models_{n}\Diamond_{(1,4)}p\) is equivalent to
\[(\exists s\in\{t+1+1/n,t+1+2/n,\cdots,t+4-1/n\})[X(\omega),s\models_{n}p].\]
Then we observe from the definition of \(\tau_{p}^{(n)}(\omega)\) that
\[\left\{\begin{aligned} X(\omega),t\not\models_{n}\Diamond_{(1,4)}p& \text{ for }t=0,1/n,\cdots,\tau_{p}^{(n)}(\omega)-4\\ X(\omega),t\models_{n}\Diamond_{(1,4)}p&\text{ for }t=\tau_{p}^{(n )}(\omega)-4+1/n,\tau_{p}^{(n)}(\omega)-4+2/n,\cdots,\tau_{p}^{(n)}(\omega)-1-1 /n.\end{aligned}\right.\]
Similarly, we have
\[\left\{\begin{aligned} X(\omega),t\models_{n}\neg\Diamond_{(1,3)}p& \text{ for }t=0,1/n,\cdots,\tau_{p}^{(n)}(\omega)-3\\ X(\omega),t\not\models_{n}\neg\Diamond_{(1,3)}p& \text{ for }t=\tau_{p}^{(n)}(\omega)-3+1/n,\tau_{p}^{(n)}(\omega)-3+2/n, \cdots,\tau_{p}^{(n)}(\omega)-1-1/n.\end{aligned}\right.\]
Then we obtain
\[\left\{\begin{aligned} X(\omega),t\not\models_{n}\Diamond_{(1,4)}p \wedge\neg\Diamond_{(1,3)}p&\text{ for }t=0,\cdots,\tau_{p}^{(n)}(\omega)-4,\\ X(\omega),t\models_{n}\Diamond_{(1,4)}p\wedge\neg\Diamond_{(1,3)}p& \text{ for }t=\tau_{p}^{(n)}(\omega)-4+1/n,\cdots\tau_{p}^{(n)}(\omega)-3,\\ X(\omega),t\not\models_{n}\Diamond_{(1,4)}p\wedge\neg\Diamond_{(1,3)} p&\text{ for }t=\tau_{p}^{(n)}(\omega)-3+1/n,\cdots,\tau_{p}^{(n)}(\omega)-1-1/n.\end{aligned}\right.\]
From the definition of the box operator, \(X(\omega),t\models_{n}\phi_{1}\) is equivalent to
\[(\forall s\in\{t+1+1/n,\cdots,t+2-1/n\})[X(\omega),s\models_{n}\Diamond_{(1,4)}p\wedge\neg\Diamond_{(1,3)}p].\]
Then we observe that
\[\left\{\begin{aligned} X(\omega),t\not\models_{n}\phi_{1}& \text{ for }t=0,1/n,\cdots,\tau_{p}^{(n)}(\omega)-5-1/n,\\ X(\omega),t\models_{n}\phi_{1}&\text{ for }t=\tau_{p}^{(n )}(\omega)-5,\tau_{p}^{(n)}(\omega)-5+1/n,\\ X(\omega),t\not\models_{n}\phi_{1}&\text{ for }t=\tau_{p}^{(n )}(\omega)-5+2/n,\cdots,\tau_{p}^{(n)}(\omega)-2-2/n.\end{aligned}\right.\]
**Lemma 5.5**.: _Let \(n\geq 2\). Then \(X(\omega),0\not\models_{n}\psi\) for every \(\omega\in\Omega\)._
Proof.: Define \(\tau_{p}^{(n)}(\omega)\) as in Lemma 5.4, and suppose toward a contradiction that \(X(\omega),0\models_{n}\psi\). Since \(X(\omega),0\models_{n}\psi\) implies \(X(\omega),0\models_{n}(\neg p)\wedge(\neg\Diamond_{(0,8)}p)\), we have \(\tau_{p}^{(n)}(\omega)\geq 8\). Then we obtain from Lemma 5.4 that
\[\left\{\begin{aligned} X(\omega),t\not\models_{n}\phi_{1}& \text{ for }t=0,1/n,\cdots,\tau_{p}^{(n)}(\omega)-5-1/n,\\ X(\omega),t\models_{n}\phi_{1}&\text{ for }t=\tau_{p}^{(n )}(\omega)-5,\tau_{p}^{(n)}(\omega)-5+1/n,\\ X(\omega),t\not\models_{n}\phi_{1}&\text{ for }t=\tau_{p}^{(n )}(\omega)-5+2/n,\cdots,\tau_{p}^{(n)}(\omega)-2-2/n.\end{aligned}\right.\]
From the definition of the discrete semantics, \(X(\omega),t\models_{n}\phi_{2}\) is equivalent to
\[\left\{\begin{aligned} X(\omega),s\not\models_{n}\phi_{1}& \text{ for }s=t+1+1/n,\cdots,t+2-1/n,\\ X(\omega),s\models_{n}\phi_{1}&\text{ for }s=t+2,\\ X(\omega),s\not\models_{n}\phi_{1}&\text{ for }s=t+2+1/n, \cdots,t+3-1/n.\end{aligned}\right.\]
In other words, for \(X(\omega),t\models_{n}\phi_{2}\) to hold, \(X(\omega),s\models_{n}\phi_{1}\) must hold exactly at \(s=t+2\), and \(X(\omega),s\models_{n}\phi_{1}\) must not hold for any other \(s\in\mathbb{N}/n\) such that \(t+1+1/n\leq s\leq t+3-1/n\). However, \(X(\omega),s\models_{n}\phi_{1}\) holds at two adjacent \(s\) values, namely \(s=\tau_{p}^{(n)}(\omega)-5\) and \(s=\tau_{p}^{(n)}(\omega)-5+1/n\), and does not hold for the other \(s\) values such that \(0\leq s\leq\tau_{p}^{(n)}(\omega)-2-2/n\). Therefore, \(X(\omega),t\models_{n}\phi_{2}\) does not hold as long as \(t+2\leq\tau_{p}^{(n)}(\omega)-2-2/n\), that is, as long as \(t\leq\tau_{p}^{(n)}(\omega)-4-2/n\). Since \(\tau_{p}^{(n)}(\omega)\geq 8\), \(X(\omega),t\models_{n}\phi_{2}\) does not hold as long as \(t\leq 4-2/n\), and hence \(X(\omega),0\not\models_{n}\Diamond_{(1,2)}\phi_{2}\), contradicting \(X(\omega),0\models_{n}\psi\).
**Theorem 5.6**.: _Let \(\psi\) be as in Lemma 5.3. Then \(\mathbb{P}(\omega;X(\omega),0\models_{n}\psi)\) does not converge to \(\mathbb{P}(\omega;X(\omega),0\models\psi)\)._
Proof.: From Lemma 5.3, we have \(\mathbb{P}(\omega;X(\omega),0\models\psi)=\mathbb{P}(\omega;\tau_{p}(\omega)\in(8,9))>0\). On the other hand, from Lemma 5.5, we have \(\mathbb{P}(\omega;X(\omega),0\models_{n}\psi)=0\) for every \(n\geq 2\). Therefore, \(\mathbb{P}(\omega;X(\omega),0\models_{n}\psi)\) never converges to \(\mathbb{P}(\omega;X(\omega),0\models\psi)\).
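As a numerical sanity check of Lemma 5.5, the following self-contained Python sketch (the random-walk approximation of Brownian motion and all helper names are ours) evaluates \(\psi\) at \(t=0\) under the discrete semantics on many simulated paths. The argument of Lemmas 5.4 and 5.5 is purely combinatorial in the sampled path, so the count of satisfying paths must be zero:

```python
import random


def brownian_path(n: int, horizon: float, seed: int) -> list:
    """Sample an approximate Brownian path on the grid {k/n}."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(int(horizon * n)):
        x += rng.gauss(0.0, (1.0 / n) ** 0.5)
        path.append(x)
    return path


def psi_at_zero(path: list, n: int) -> bool:
    """Evaluate X, 0 |=_n psi of (5.4) by layering diamond/box operators."""
    N = len(path)
    p = [x >= 1.0 for x in path]                  # B_p = [1, infty)

    def dia(sat, lo, hi):                         # <>_(lo,hi), open interval
        ks = [k for k in range(1, N) if lo < k / n < hi]
        return [any(i + k < N and sat[i + k] for k in ks) for i in range(N)]

    def boxop(sat, lo, hi):                       # []_I = not <>_I not
        return [not v for v in dia([not s for s in sat], lo, hi)]

    inner = [a and not b for a, b in zip(dia(p, 1, 4), dia(p, 1, 3))]
    phi1 = boxop(inner, 1, 2)
    phi2 = [a and not b and not c for a, b, c in
            zip(dia(phi1, 1, 3), dia(phi1, 1, 2), dia(phi1, 2, 3))]
    phi3 = dia(phi2, 1, 2)
    return (not p[0]) and (not dia(p, 0, 8)[0]) and phi3[0]


# psi at 0 only inspects the path up to about t = 11, so a horizon of 16
# keeps every diamond/box window fully inside the sampled data.
n, hits = 10, 0
for seed in range(200):
    hits += psi_at_zero(brownian_path(n, 16.0, seed), n)
print(hits)  # prints 0, in line with Lemma 5.5
```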
**Remark 5.7**.: In the above counterexample, the subscripted intervals for the diamond operator are all open. If the intervals are instead allowed to be left- or right-half-open, we can construct similar counterexamples. However, it remains an open problem whether a counterexample can be constructed when all intervals are restricted to be closed.
## 6. Discretization of MTL formula: Convergence Result of \(\flat\)MTL formula
In the previous section, we presented a counterexample of an MTL formula, illustrating a case where its probability in discrete semantics does not converge to that in continuous semantics. In contrast, in this section, we establish the convergence of such probabilities by introducing a restriction on MTL formulas. Specifically, we ensure that we only use diamond operators that do not nest. We refer to these restricted formulas as \(\flat\)MTL formulas. While we discussed MTL formulas for Brownian motion in the preceding sections, we will now present the convergence result for general one-dimensional stochastic differential equations:
\[\left\{\begin{aligned} dX_{t}&=b(X_{t})dt+\sigma(X_{t} )dW_{t},\\ X_{0}&=\xi\in\mathbb{R}.\end{aligned}\right. \tag{6.1}\]
To establish the convergence result for \(\flat\)MTL formulas, we impose the following conditions on the SDE (6.1). These conditions are also sufficient to ensure the existence, uniqueness, and absolute continuity of the solution (see Appendix A):
**Assumption 6.1**.:
1. _For every compact set_ \(K\subset\mathbb{R}\)_,_ \(\inf\sigma(K)>0\)_._
2. \(\sigma\) _is Lipschitz continuous._
3. \(b\) _is bounded and Borel measurable._
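For concreteness, approximate sample paths of the SDE (6.1) under these assumptions can be generated with the Euler–Maruyama scheme; the sketch below (with example coefficients of our choosing) is only an illustration, since the discrete semantics itself samples the exact solution on the grid \(\mathbb{N}/n\).

```python
import math
import random


def euler_maruyama(b, sigma, xi, n, horizon, seed=0):
    """Euler-Maruyama on {k/n}: X_{(k+1)/n} = X_{k/n} + b dt + sigma dW."""
    rng = random.Random(seed)
    dt = 1.0 / n
    x, path = xi, [xi]
    for _ in range(int(horizon * n)):
        dw = rng.gauss(0.0, math.sqrt(dt))   # dW ~ N(0, dt)
        x = x + b(x) * dt + sigma(x) * dw
        path.append(x)
    return path


# Example coefficients satisfying Assumption 6.1: b is bounded and Borel
# measurable, sigma is Lipschitz, and inf sigma(K) > 0 on every compact K.
path = euler_maruyama(b=lambda x: math.tanh(x),
                      sigma=lambda x: 1.0 + 0.1 * math.sin(x),
                      xi=0.0, n=100, horizon=10.0)
```

Setting `b = lambda x: 0.0` and `sigma = lambda x: 1.0` recovers the Brownian case of Remark 6.4.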
Now let us define \(\flat\)MTL-formulas rigorously.
**Definition 6.2** (Syntax of \(\flat\)MTL formula).: Let \(AP\) be a finite set of atomic formulas. We define the syntax of \(\flat\)MTL by the following induction.
1. All atomic formulas are \(\flat\)MTL formulas.
2. If \(\phi\) is a \(\flat\)MTL formula, \(\neg\phi\) is a \(\flat\)MTL formula.
3. If \(\phi_{1}\) and \(\phi_{2}\) are \(\flat\)MTL formulas, then \(\phi_{1}\wedge\phi_{2}\) is a \(\flat\)MTL formula.
4. If \(p\) is a propositional formula and \(I\) is a positive interval on \([0,\infty)\), then \(\lozenge_{I}p\) is a \(\flat\)MTL formula.
Here we define propositional formulas as follows:
1. All atomic formulas are propositional formulas.
2. If \(p\) is a propositional formula, \(\neg p\) is a propositional formula.
3. If \(p_{1}\) and \(p_{2}\) are propositional formulas, then \(p_{1}\wedge p_{2}\) is a propositional formula.
The semantics of \(\flat\)MTL is given in the same way as for MTL formulas.
**Definition 6.3** (Semantics of \(\flat\)MTL formulas).: Let \(B_{i},\ i=1,\cdots,k\) be Borel sets on \(\mathbb{R}\) and \(AP=\{a_{i};i=1,\cdots,k\}\) be the set of \(k\) atomic formulas. The semantics of \(\flat\)MTL formulas are defined inductively as follows.
1. \(X(\omega),t\models a_{i}\Leftrightarrow X_{t}(\omega)\in B_{i}\) for \(i=1,\cdots,k\).
2. \(X(\omega),t\models\phi_{1}\wedge\phi_{2}\) is equivalent to \(X(\omega),t\models\phi_{1}\) and \(X(\omega),t\models\phi_{2}\).
3. \(X(\omega),t\models\lozenge_{I}p\) is equivalent to \((\exists s\in I)[X(\omega),t+s\models p]\).
**Remark 6.4**.: Let us set \(\sigma\equiv 1\) and \(b\equiv 0\). In this case, both \(\sigma\) and \(b\) satisfy Assumption 6.1. As mentioned in Remark 3.24, the solution to the SDE (6.1) is given by the one-dimensional Brownian motion \(\{W_{t}\}_{t\geq 0}\) itself. Therefore, the convergence result for probabilities discussed in this section can be applied to the case of one-dimensional Brownian motion.
Additionally, as mentioned in Remark 3.31, the diamond operator \(\lozenge_{I}\) and the box operator \(\square_{I}\) can be represented using the until operator. This allows us to represent every \(\flat\)MTL formula as an MTL formula without nesting of until operators.
Considering the counterexample presented in Section 5 and the discussion about nesting of temporal operators, we can observe the impact of nesting on the convergence of probabilities.
We will show the convergence result for \(\flat\)MTL in Section 6.2. We prove the convergence of the probability by establishing the convergence of the indicator functions of \(\flat\)MTL formulas. Let us define the indicator functions for MTL formulas as follows:
**Definition 6.5**.: Let \(\phi\) be a \(\flat\)MTL formula and define random indicator functions \(\chi_{\phi}(\omega,t)\) and \(\chi_{\phi}^{(n)}(\omega,t)\) as
\[\chi_{\phi}(\omega,t):=\left\{\begin{aligned} & 1&\text{ if }X(\omega),t\models\phi\\ & 0&\text{ if }X(\omega),t\not\models\phi,\end{aligned}\right.\]
\[\chi_{\phi}^{(n)}(\omega,t):=\left\{\begin{aligned} & 1&\text{ if }X(\omega),\Lambda_{n}(t)\models_{n}\phi\\ & 0&\text{ if }X(\omega),\Lambda_{n}(t)\not\models_{n}\phi, \end{aligned}\right.\]
where \(\Lambda_{n}(t):=\frac{\lfloor nt\rfloor}{n}\).
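For example, \(\Lambda_{3}(1.4)=\lfloor 3\cdot 1.4\rfloor/3=4/3\), so \(\chi_{\phi}^{(3)}(\omega,1.4)\) evaluates the discrete semantics at the grid point \(4/3\).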
The convergence of the indicator function of a formula implies the convergence of the probability of the formula. More precisely, our proof of the convergence is based on the following lemma:
**Lemma 6.6**.: _Suppose that \(\chi_{\phi}^{(n)}(\omega,t)\to\chi_{\phi}(\omega,t)\) almost surely. Then \(\mathbb{P}(\omega;X(\omega),\Lambda_{n}(t)\models_{n}\phi)\to\mathbb{P}(\omega;X (\omega),t\models\phi)\) as \(n\to\infty\)._
Proof.: From the definition of \(\chi_{\phi}(\omega,t)\) and \(\chi_{\phi}^{(n)}(\omega,t)\), \(\chi_{\phi}(\omega,t)=1\) and \(\chi_{\phi}^{(n)}(\omega,t)=1\) is equivalent to \(X(\omega),t\models\phi\) and \(X(\omega),\Lambda_{n}(t)\models_{n}\phi\), respectively. Then \(\mathbb{P}(\omega\in\Omega;X(\omega),t\models\phi)=\mathbb{E}[\chi_{\phi}( \omega,t)]\) and \(\mathbb{P}(\omega\in\Omega;X(\omega),\Lambda_{n}(t)\models_{n}\phi)=\mathbb{E }[\chi_{\phi}^{(n)}(\omega,t)]\). Since \(\chi_{\phi}(\omega,t)\leq 1\), \(\chi_{\phi}^{(n)}(\omega,t)\leq 1\), and \(\mathbb{E}[1]=1\), we can apply _Lebesgue's dominated convergence theorem_ (see Theorem 1.34 in [10]) to observe \(\mathbb{E}[\chi_{\phi}^{(n)}(\omega,t)]\to\mathbb{E}[\chi_{\phi}(\omega,t)]\).
### The case of \(\Diamond_{\langle S,T\rangle}p\) with \(p\) corresponding to a union of intervals
Before proving the convergence of general \(\flat\)MTL formulas, we first show the convergence for a special type of \(\flat\)MTL formulas; the proof for the general case is presented in the subsequent subsection. In this subsection, we consider \(\flat\)MTL formulas of the form \(\Diamond_{\langle S,T\rangle}p\), where \(p\) is a propositional formula. Here, the subscript \(\langle S,T\rangle\) denotes a positive interval on \([0,\infty)\); specifically, \(\langle S,T\rangle\) represents an interval with endpoints \(S\) and \(T\) such that \(0\leq S<T\). In the proof of convergence in this case, we utilize deep insights from stochastic calculus, namely, the notion of _local maxima and local minima_ of the solution of the SDE (see Definition 6.10), and the _self-dense property of its level sets_ (see Lemma 6.11).
To prove the convergence in the special case of \(\Diamond_{\langle S,T\rangle}p\), we will introduce certain conditions on the propositional formula \(p\).
**Definition 6.7**.: Let \(B_{1},\cdots,B_{n}\) be finitely many Borel sets on \(\mathbb{R}\). A pair \(B_{i},B_{j}\) is said to be _separated_ when \(\overline{B_{i}}\cap\overline{B_{j}}=\emptyset\). We say the set \(\{B_{1},\cdots,B_{n}\}\) is _pairwise separated_ when all pairs of distinct elements are separated.
Now we prove the following theorem:
**Theorem 6.8**.: _Let \(X\) be the strong solution of (6.1) satisfying Assumption 6.1. Let \(p\) be an MTL formula such that \(X(\omega),t\models p\) is equivalent to \(X_{t}\in B_{p}\), where_
\[B_{p}:=\bigcup_{i=1}^{k}\langle x_{i},y_{i}\rangle \tag{6.2}\]
_is a union of pairwise separated positive intervals \(\{\langle x_{i},y_{i}\rangle;i=1,\cdots,k\}\) on \(\mathbb{R}\). Here \(B_{p}\) may be the empty set or all of \(\mathbb{R}\). Define \(X(\omega),t\models_{n}p\) analogously. Define \(\phi:=\Diamond_{\langle S,T\rangle}p\) and \(\psi:=\square_{\langle S,T\rangle}p\), where \(\langle S,T\rangle\) is a positive interval on \([0,\infty)\). Then the following statements hold:_
\[\chi_{\phi}^{(n)}(\omega,t) \stackrel{{ n\to\infty}}{{\longrightarrow}}\chi_{\phi}(\omega,t),\quad\text{a.s.},\] \[\chi_{\psi}^{(n)}(\omega,t) \stackrel{{ n\to\infty}}{{\longrightarrow}}\chi_{\psi}(\omega,t),\quad\text{a.s.},\]
_for every \(t\in[0,\infty)\)._
_In particular,_
\[\mathbb{P}(\omega;X(\omega),\Lambda_{n}(t)\models_{n}\phi) \stackrel{{ n\to\infty}}{{\rightarrow}}\mathbb{P}(\omega;X( \omega),t\models\phi),\] \[\mathbb{P}(\omega;X(\omega),\Lambda_{n}(t)\models_{n}\psi) \stackrel{{ n\to\infty}}{{\rightarrow}}\mathbb{P}(\omega;X( \omega),t\models\psi).\]
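The following self-contained Python sketch (the subsampling construction and all names are ours) illustrates Theorem 6.8 numerically for \(\phi=\Diamond_{(S,T)}p\): one finely sampled path stands in for \(X(\omega)\), and \(\chi_{\phi}^{(n)}(\omega,0)\) is evaluated on the subgrids \(\mathbb{N}/m\) for growing \(m\). By the theorem, the printed value should stabilize for large \(m\) on almost every path.

```python
import math
import random


def brownian_path(n: int, horizon: float, seed: int) -> list:
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(int(horizon * n)):
        x += rng.gauss(0.0, math.sqrt(1.0 / n))
        path.append(x)
    return path


def chi_diamond_n(path: list, n: int, S: float, T: float) -> bool:
    """chi_phi^(n)(omega, 0) for phi = <>_(S,T) p with B_p = (0.5, infty)."""
    return any(path[k] > 0.5 for k in range(len(path)) if S < k / n < T)


fine_n = 2 ** 12                       # reference grid, stands in for X(omega)
fine = brownian_path(fine_n, horizon=4.0, seed=1)
for m in (2, 4, 8, 16, 64, 256):       # evaluate |=_m on the subgrid N/m
    sub = fine[::fine_n // m]          # sub[k] = X_{k/m}(omega)
    print(m, chi_diamond_n(sub, m, S=1.0, T=3.0))
```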
The key to proving the convergence of MTL formulas with the diamond operator lies in the following inclusions:
\[\llbracket\phi\rrbracket_{\omega} \subset\overline{\text{int}\llbracket\phi\rrbracket_{\omega}}\text{ almost surely}, \tag{6.3}\] \[\llbracket\neg\phi\rrbracket_{\omega} \subset\overline{\text{int}\llbracket\neg\phi\rrbracket_{\omega}} \text{ almost surely}, \tag{6.4}\]
where the time set \(\llbracket\phi\rrbracket_{\omega}\) of MTL formula \(\phi\) is defined in Definition 3.29.
In order to show Theorem 6.8, we will first prove a simplified version of the theorem in which the propositional formula corresponds to an interval. First, let us show (6.3) and (6.4) in this case:
**Lemma 6.9**.: _Suppose that Assumption 6.1 holds. Let \(p\) be a propositional formula defined by \(X(\omega),t\models p\Leftrightarrow X_{t}(\omega)\in\langle y_{1},y_{2}\rangle\), where \(\langle y_{1},y_{2}\rangle\) is a positive interval. Then \(p\) satisfies (6.3) and (6.4) almost surely. Namely,_
\[\llbracket p\rrbracket_{\omega} \subset\overline{\text{int}\llbracket p\rrbracket_{\omega}},\quad \text{a.s.}\,\] \[\llbracket\neg p\rrbracket_{\omega} \subset\overline{\text{int}\llbracket\neg p\rrbracket_{\omega}}, \quad\text{a.s.}\]
To prove this lemma, we have to introduce _local minima_ and _local maxima_ of \(X\):
**Definition 6.10**.:
1. Let \(f:[0,\infty)\to\mathbb{R}\) be a given function. A number \(t\geq 0\) is called a _point of local maximum_, if there exists a number \(\delta>0\) with \(f(s)\leq f(t)\) valid for every \(s\in[(t-\delta)^{+},t+\delta]\); and a _point of strict local maximum_, if there exists a number \(\delta>0\) with \(f(s)<f(t)\) valid for every \(s\in[(t-\delta)^{+},t+\delta]\setminus\{t\}\).
2. Let \(f:[0,\infty)\to\mathbb{R}\) be a given function. A number \(t\geq 0\) is called a _point of local minimum_, if there exists a number \(\delta>0\) with \(f(s)\geq f(t)\) valid for every \(s\in[(t-\delta)^{+},t+\delta]\); and a _point of strict local minimum_, if there exists a number \(\delta>0\) with \(f(s)>f(t)\) valid for every \(s\in[(t-\delta)^{+},t+\delta]\setminus\{t\}\).
**Lemma 6.11**.: _Suppose that Assumption 6.1 holds. Then, the following statements hold (see Appendix A for the proof):_
1. _Put_ \[\mathcal{L}_{\omega}^{a}:=\{t\geq 0;X_{t}(\omega)=a\},\quad a\in\mathbb{R},\ \omega\in\Omega.\] (6.5) _Then_ \(\mathcal{L}_{\omega}^{a}\) _is dense in itself almost surely, for all_ \(a\in\mathbb{R}\)_._
2. _For almost every_ \(\omega\in\Omega\)_, the set of points of local maximum and local minimum for the path_ \(t\mapsto X_{t}(\omega)\) _is dense in_ \([0,\infty)\)_, and all local maxima and local minima are strict._
**Lemma 6.12**.: _Suppose that Assumption 6.1 holds. Let us define an atomic formula \(a\) by \(X(\omega),t\models a\Leftrightarrow X_{t}(\omega)\in\langle y,\infty\rangle\), where \(\langle y,\infty\rangle\) is a half-line with open or closed left endpoint \(y\in\mathbb{R}\). Then \(a\) satisfies (6.3) and (6.4) almost surely. Namely,_

\[\llbracket a\rrbracket_{\omega} \subset\overline{\text{int}\llbracket a\rrbracket_{\omega}},\quad\text{a.s.},\] \[\llbracket\neg a\rrbracket_{\omega} \subset\overline{\text{int}\llbracket\neg a\rrbracket_{\omega}},\quad\text{a.s.}\]
Proof.: Let \(\langle y,\infty\rangle\) denote the left-closed interval \([y,\infty)\). The statement \(t\in\llbracket\neg a\rrbracket_{\omega}\) is equivalent to \(X_{t}(\omega)<y\). Since \(X\) is continuous almost surely, the set \(\{t;X_{t}(\omega)<y\}\) is an open set almost surely, and therefore the inclusion (6.4) holds clearly. From Lemma 6.11-(i), we know that \(\{t\geq 0;X_{t}(\omega)=y\}\) is dense in itself almost surely. On the other hand, due to the almost-sure continuity of \(X\), if \(X_{t}(\omega)>y\), then \(t\in\operatorname{int}\llbracket a\rrbracket_{\omega}\) almost surely. Hence, it remains to show that \(t\in\llbracket a\rrbracket_{\omega}\) and \(X_{t}(\omega)=y\) imply \(t\in\overline{\operatorname{int}\llbracket a\rrbracket_{\omega}}\) almost surely. Suppose \(t\notin\overline{\operatorname{int}\llbracket a\rrbracket_{\omega}}\). Then, there exists \(\varepsilon>0\) such that \((t-\varepsilon,t+\varepsilon)\cap\operatorname{int}\llbracket a\rrbracket_{\omega}=\emptyset\). Since \((\exists s\in(t-\varepsilon,t+\varepsilon))[X_{s}(\omega)>y]\) implies \((t-\varepsilon,t+\varepsilon)\cap\operatorname{int}\llbracket a\rrbracket_{\omega}\neq\emptyset\), it follows that \((\forall s\in(t-\varepsilon,t+\varepsilon))[X_{s}(\omega)\leq y]\). By applying Lemma 6.11-(ii), we can conclude that \(t\) is a strict local maximum almost surely, i.e., \((\forall s\in(t-\varepsilon,t+\varepsilon)\setminus\{t\})[X_{s}(\omega)<y]\), and thus \(t\) is an isolated point of \(\llbracket a\rrbracket_{\omega}\) almost surely. However, this contradicts the self-dense property of \(\{t\geq 0;X_{t}(\omega)=y\}\). Therefore, we obtain the inclusion (6.3) almost surely.
On the other hand, consider \(\langle y,\infty\rangle\) as the left-open interval \((y,\infty)\). Now, \(t\in\llbracket a\rrbracket_{\omega}\) is equivalent to \(X_{t}(\omega)>y\). Since \(X\) is continuous almost surely, the set \(\{t\geq 0;X_{t}(\omega)>y\}\) is an open set, and thus the inclusion (6.3) holds clearly. Once again, referring to Lemma 6.11, we observe that \(\{t\geq 0;X_{t}(\omega)=y\}\) is almost surely dense in itself. Moreover, due to the almost-sure continuity of \(X\), if \(X_{t}(\omega)<y\), then \(t\in\operatorname{int}\llbracket\neg a\rrbracket_{\omega}\) almost surely. Therefore, we need to demonstrate (6.4) when \(t\in\llbracket\neg a\rrbracket_{\omega}\) and \(X_{t}(\omega)=y\). Suppose \(t\notin\overline{\operatorname{int}\llbracket\neg a\rrbracket_{\omega}}\), which implies the existence of \(\varepsilon>0\) such that \((t-\varepsilon,t+\varepsilon)\cap\operatorname{int}\llbracket\neg a\rrbracket_{\omega}=\emptyset\). If \((\exists s\in(t-\varepsilon,t+\varepsilon))[X_{s}(\omega)<y]\), then \((t-\varepsilon,t+\varepsilon)\cap\operatorname{int}\llbracket\neg a\rrbracket_{\omega}\neq\emptyset\). Consequently, it holds that \((\forall s\in(t-\varepsilon,t+\varepsilon))[X_{s}(\omega)\geq y]\). By applying Lemma 6.11-(ii), we can deduce that \(t\) is a strict local minimum almost surely, i.e., \((\forall s\in(t-\varepsilon,t+\varepsilon)\setminus\{t\})[X_{s}(\omega)>y]\), which means \(t\) is an isolated point of \(\llbracket\neg a\rrbracket_{\omega}\). However, this contradicts the self-dense property of \(\{t\geq 0;X_{t}(\omega)=y\}\). Thus, we establish (6.4).
Proof of Lemma 6.9.: Let us define atomic formulas \(a,b\) as
\[X(\omega),t \models a\Leftrightarrow X_{t}(\omega)\in\langle y_{1},\infty\rangle, \tag{6.6}\] \[X(\omega),t \models b\Leftrightarrow X_{t}(\omega)\in\langle y_{2},\infty\rangle, \tag{6.7}\]
where the left endpoints \(y_{1},y_{2}\) can be open or closed and satisfy \(y_{1}<y_{2}\). Then \(X(\omega),t\models p\) is equivalent to \(X(\omega),t\models a\wedge\neg b\), and hence it is enough to show that \(a\wedge\neg b\) satisfies (6.3) and (6.4).
(6.3): Given that
\[\llbracket a\wedge\neg b\rrbracket_{\omega}\subset(\operatorname{int} \llbracket a\rrbracket_{\omega}\cap\operatorname{int}\llbracket\neg b \rrbracket_{\omega})\cup\partial\llbracket a\rrbracket_{\omega}\cup\partial \llbracket\neg b\rrbracket_{\omega},\]
and
\[\operatorname{int}\llbracket a\rrbracket_{\omega}\cap\operatorname{int} \llbracket\neg b\rrbracket_{\omega}=\operatorname{int}\llbracket a\wedge \neg b\rrbracket_{\omega}\subset\overline{\operatorname{int}\llbracket a \wedge\neg b\rrbracket_{\omega}}\]
then it is enough to show
\[\llbracket a\wedge\neg b\rrbracket_{\omega}\cap\partial\llbracket a \rrbracket_{\omega}\subset\overline{\operatorname{int}\llbracket a \wedge\neg b\rrbracket_{\omega}}\] \[\llbracket a\wedge\neg b\rrbracket_{\omega}\cap\partial \llbracket\neg b\rrbracket_{\omega}\subset\overline{\operatorname{int} \llbracket a\wedge\neg b\rrbracket_{\omega}}.\]
Suppose that \(t\in\llbracket a\wedge\neg b\rrbracket_{\omega}\cap\partial\llbracket a\rrbracket_{\omega}\). If \(O\) is a neighborhood of \(t\), then \(O\cap\operatorname{int}\llbracket a\rrbracket_{\omega}\neq\emptyset\) almost surely, because \(O\cap\llbracket a\rrbracket_{\omega}\neq\emptyset\) and \(\llbracket a\rrbracket_{\omega}\subset\overline{\operatorname{int}\llbracket a\rrbracket_{\omega}}\) almost surely. Since the path \(X(\omega)\) is almost surely continuous, \(t\in\partial\llbracket a\rrbracket_{\omega}\) implies \(X_{t}(\omega)=y_{1}<y_{2}\) almost surely, and hence \(t\in\operatorname{int}\llbracket\neg b\rrbracket_{\omega}\) almost surely. Let \(\varepsilon>0\). Since \((t-\varepsilon,t+\varepsilon)\cap\operatorname{int}\llbracket\neg b\rrbracket_{\omega}\) is a neighborhood of \(t\),
\[(t-\varepsilon,t+\varepsilon)\cap\operatorname{int}\llbracket a \rrbracket_{\omega}\cap\operatorname{int}\llbracket\neg b \rrbracket_{\omega}=(t-\varepsilon,t+\varepsilon)\cap\operatorname{int} \llbracket a\wedge\neg b\rrbracket_{\omega}\neq\emptyset,\,\,\,\,\text{a.s. }\,,\]
and hence \(t\in\overline{\operatorname{int}\llbracket a\wedge\neg b\rrbracket_{\omega}}\).
When \(t\in\llbracket a\wedge\neg b\rrbracket_{\omega}\cap\partial\llbracket \neg b\rrbracket_{\omega}\), the same argument can be applied. Thus we have shown (6.3).
(6.4): Suppose that \(t\in\llbracket\neg a\lor b\rrbracket_{\omega}\) and let \(O\) be a neighborhood of \(t\). Since \(t\in\llbracket\neg a\rrbracket_{\omega}\) or \(t\in\llbracket b\rrbracket_{\omega}\), Lemma 6.12 implies that \(O\cap\operatorname{int}[\![\neg a]\!]_{\omega}\neq\emptyset\) or \(O\cap\operatorname{int}[\![b]\!]_{\omega}\neq\emptyset\) holds almost surely. Since \(\operatorname{int}[\![\neg a]\!]_{\omega}\cup\operatorname{int}[\![b]\!]_{\omega}\subset\operatorname{int}([\![\neg a]\!]_{\omega}\cup[\![b]\!]_{\omega})\), it holds almost surely that \(O\cap\operatorname{int}[\![\neg a\lor b]\!]_{\omega}=O\cap\operatorname{int}([\![\neg a]\!]_{\omega}\cup[\![b]\!]_{\omega})\neq\emptyset\). Then \(t\in\overline{\operatorname{int}[\![\neg a\lor b]\!]_{\omega}}\) almost surely, and since \(\neg(a\wedge\neg b)\) is equivalent to \(\neg a\lor b\), this establishes (6.4).
We can interpret the boundary \(\partial\llbracket\phi\rrbracket_{\omega}\) of the time set \(\llbracket\phi\rrbracket_{\omega}\) as the set of times at which the indicator function \(\chi_{\phi}(\omega,t)\) in Definition 6.5 changes its value. The next lemma shows that the boundary \(\partial\llbracket\phi\rrbracket_{\omega}\) of every MTL formula \(\phi\) has Lebesgue measure zero almost surely if the stochastic process \(X\) has a density and \(AP\) is distinct in the sense of the following definition.
**Definition 6.13**.: We say that a Borel set \(B\) on \(\mathbb{R}\) is _distinct_ if its boundary \(\partial B\) has Lebesgue measure zero. We say an atomic formula \(a\in AP\) is _distinct_ when the corresponding set \(B_{a}\) is distinct. The set of \(AP\) of atomic formulas is said to be _distinct_ when all \(a\in AP\) are distinct.
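For example, every interval is distinct, since its boundary consists of at most two points, whereas \(B=\mathbb{Q}\cap[0,1]\) is not distinct, since \(\partial B=[0,1]\) has Lebesgue measure one.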
**Lemma 6.14**.: _Consider the case of the distinct set \(AP\) of atomic formulas. Let \((\Omega,\mathcal{F},\mathbb{P})\) be a complete probability space. Suppose that \(X\) is an almost surely continuous stochastic process such that \(X_{t}\) has a density for every \(t\in(0,\infty)\). Then, for every MTL formula \(\phi\) there exists some measurable set \(K\in\mathcal{F}\otimes\mathcal{B}([0,\infty))\) such that_
* \(\{t;(\omega,t)\in K\}\) _is almost surely closed,_
* \(\{t;(\omega,t)\in K\}\) _has Lebesgue measure zero almost surely,_
* \(\mathbb{P}(\{\omega;(\omega,t)\in K\})=0\) _for every_ \(t\in(0,\infty)\)_, and_
* \(\partial\llbracket\phi\rrbracket_{\omega}\subset\{t;(\omega,t)\in K\}\)_._
For this purpose, in the following lemma we show that the boundary of the time set of a formula of the form \(\Diamond_{\langle S,T\rangle}\phi\) is contained in the shifts of the boundary of the time set of \(\phi\).
**Lemma 6.15**.: _Let \(\phi\) be an MTL formula and \(\langle S,T\rangle\) be a positive interval on \([0,\infty)\). Then it holds almost surely that_
\[\partial\llbracket\Diamond_{\langle S,T\rangle}\phi\rrbracket_{\omega}\subset[ (\partial\llbracket\phi\rrbracket_{\omega}\ominus S)\cup(\partial \llbracket\phi\rrbracket_{\omega}\ominus T)], \tag{6.8}\]
_where \(\partial\llbracket\phi\rrbracket_{\omega}\ominus S:=\{t-S;t\in\partial \llbracket\phi\rrbracket_{\omega}\}\cap[0,\infty)\) and \(\partial\llbracket\phi\rrbracket_{\omega}\ominus T:=\{t-T;t\in\partial \llbracket\phi\rrbracket_{\omega}\}\cap[0,\infty)\)._
Proof.: Let \(\langle S,T\rangle\) be the closed interval \([S,T]\). Suppose that \(t\in\partial\llbracket\Diamond_{\langle S,T\rangle}\phi\rrbracket_{\omega}\). Then it is clear that \((t+S,t+T)\cap\llbracket\phi\rrbracket_{\omega}=\emptyset\): if not, there exists some neighborhood of \(t\) every element \(s\) of which satisfies \((s+S,s+T)\cap\llbracket\phi\rrbracket_{\omega}\neq\emptyset\), and hence \(t\notin\partial\llbracket\Diamond_{\langle S,T\rangle}\phi\rrbracket_{\omega}\). Again from \(t\in\partial\llbracket\Diamond_{\langle S,T\rangle}\phi\rrbracket_{\omega}\), one of the following two statements holds:
* There exists some sequence \(t_{n},\ n=1,2,3,\cdots\) in \(\llbracket\Diamond_{[S,T]}\phi\rrbracket_{\omega}\) such that \(\sup_{n}t_{n}=t\).
* There exists some sequence \(t_{n},\ n=1,2,3,\cdots\) in \(\llbracket\Diamond_{[S,T]}\phi\rrbracket_{\omega}\) such that \(\inf_{n}t_{n}=t\).
It is enough to show \(S+t\in\partial\llbracket\phi\rrbracket_{\omega}\) in case (i) and \(T+t\in\partial\llbracket\phi\rrbracket_{\omega}\) in case (ii).
* Since \((S+t,T+t)\subset\llbracket\neg\phi\rrbracket_{\omega}\), \((S+t,S+t+\varepsilon)\cap\llbracket\neg\phi\rrbracket_{\omega}\neq\emptyset\) for every positive \(\varepsilon\). Together with \(\llbracket\phi\rrbracket_{\omega}\cap[S+t_{n},T+t_{n}]\neq\emptyset\), \((S+t,T+t)\subset\llbracket\neg\phi\rrbracket_{\omega}\) also implies \([S+t_{n},S+t]\cap\llbracket\phi\rrbracket_{\omega}\neq\emptyset\) for every \(n\in\mathbb{N}\). Then \((S+t-\varepsilon,S+t]\cap\llbracket\phi\rrbracket_{\omega}\neq\emptyset\) for every positive \(\varepsilon\).
* We can show \((T+t-\varepsilon,T+t)\cap\llbracket\neg\phi\rrbracket_{\omega}\neq\emptyset\) and \([T+t,T+t+\varepsilon)\cap\llbracket\phi\rrbracket_{\omega}\neq\emptyset\) by showing \([T+t,T+t_{n}]\cap\llbracket\phi\rrbracket_{\omega}\neq\emptyset\). Indeed, \((S+t,T+t)\cap\llbracket\phi\rrbracket_{\omega}=\emptyset\) and \([S+t_{n},T+t_{n}]\cap\llbracket\phi\rrbracket_{\omega}\neq\emptyset\) implies \([T+t,T+t_{n}]\cap\llbracket\phi\rrbracket_{\omega}\neq\emptyset\).
Thus we have shown the statement when \(\langle S,T\rangle\) is closed. We can prove the cases \(\langle S,T\rangle=(S,T),[S,T),(S,T]\) in exactly the same way.
Proof of Lemma 6.14.: Since the map \(t\mapsto X_{t}(\omega)\) is almost surely continuous, there exists some \(N\in\mathcal{F}\) such that \(\mathbb{P}(N)=0\) and \(t\mapsto X_{t}(\omega)\) is continuous whenever \(\omega\notin N\). Suppose \(a\) is an atomic formula and \(t\in\partial[\![a]\!]_{\omega}\). For any positive \(\varepsilon\), we can find \(s\) and \(s^{\prime}\) in the interval \((t-\varepsilon,t+\varepsilon)\) such that \(X_{s}(\omega)\in B_{a}\) and \(X_{s^{\prime}}(\omega)\notin B_{a}\), because \(t\) is a boundary point of the satisfaction set \([\![a]\!]_{\omega}\). Since the mapping \(t\mapsto X_{t}(\omega)\) is continuous for \(\omega\notin N\), it follows that \(X_{t}(\omega)\) lies on the boundary \(\partial B_{a}\) of the set defined by the atomic formula \(a\). Put \(K:=\{(\omega,t);X_{t}(\omega)\in\partial B_{a}\}\cup N\times[0,\infty)\) and \(K_{\omega}:=\{t;(\omega,t)\in K\}\). Hence we get \(\partial[\![a]\!]_{\omega}\subset K_{\omega}\). Since \(t\mapsto X_{t}(\omega)\) is continuous almost surely and \(X_{t}\) has a density for every \(t>0\), \(K\) is measurable, \(K_{\omega}\) is almost surely closed, and
\[\mathbb{P}(\{\omega;(\omega,t)\in K\})\leq\mathbb{P}(\omega;X_{t}(\omega)\in \partial B_{a})+\mathbb{P}(N)=0\quad\forall t\in(0,\infty).\]
Then it holds that
\[\int_{[0,\infty)}\left\{\int_{\Omega}1\!\!1_{K}(\omega,t)\mathbb{P}(d\omega) \right\}dt=0. \tag{6.9}\]
By using Fubini's Theorem (see Theorem 8.8 in [11]), we have
\[\int_{\Omega}\left\{\int_{[0,\infty)}1\!\!1_{K}(\omega,t)dt\right\}\mathbb{P}( d\omega)=0,\]
which implies that \(K_{\omega}\) has Lebesgue measure zero almost surely (see (b) of Theorem 1.39 in [11]). When \(K\) satisfies (i)-(iv) for a formula \(\phi\), then \(K\) also satisfies (i)-(iv) for \(\neg\phi\), since \(\partial[\![\neg\phi]\!]_{\omega}=\partial[\![\phi]\!]_{\omega}\). When \(K_{1}\) and \(K_{2}\) satisfy (i)-(iv) for \(\phi_{1}\) and \(\phi_{2}\) respectively, \(K_{1}\cup K_{2}\) satisfies (i)-(iv) for \(\phi_{1}\wedge\phi_{2}\), since \(\{t;(\omega,t)\in K_{1}\cup K_{2}\}\) is closed, \(\mathbb{P}(\omega;(\omega,t)\in K_{1}\cup K_{2})=0\) for \(t\in(0,\infty)\), and \(\partial[\![\phi_{1}\wedge\phi_{2}]\!]_{\omega}\subset\{t;(\omega,t)\in K_{1}\cup K_{2}\}\). Suppose that \(K\) satisfies (i)-(iv) for \(\phi\). We show that \(\{(\omega,t);t\in[(K_{\omega}\ominus S)\cup(K_{\omega}\ominus T)]\}\) satisfies (i)-(iv) for \(\Diamond_{\langle S,T\rangle}\phi\).
1. Since \(K_{\omega}\) is closed almost surely, \((K_{\omega}\ominus S)\) and \((K_{\omega}\ominus T)\) are almost surely closed and then \((K_{\omega}\ominus S)\cup(K_{\omega}\ominus T)\) is closed almost surely.
2. From (6.9), it holds that \[\int_{[0,\infty)}\left\{\int_{\Omega}1\!\!1_{\{t\in K_{\omega} \ominus S\}}(\omega,t)\mathbb{P}(d\omega)\right\}dt =\int_{[S,\infty)}\left\{\int_{\Omega}1\!\!1_{\{t\in K_{\omega} \}}(\omega,t)\mathbb{P}(d\omega)\right\}dt =0,\] \[\int_{[0,\infty)}\left\{\int_{\Omega}1\!\!1_{\{t\in K_{\omega} \ominus T\}}(\omega,t)\ \mathbb{P}(d\omega)\right\}dt =\int_{[T,\infty)}\left\{\int_{\Omega}1\!\!1_{\{t\in K_{\omega} \}}(\omega,t)\mathbb{P}(d\omega)\right\}dt =0.\]
By Fubini's theorem, we have
\[\int_{\Omega}\left\{\int_{[0,\infty)}1\!\!1_{\{t\in K_{\omega} \ominus S\}}(\omega,t)dt\right\}\mathbb{P}(d\omega) =0,\] \[\int_{\Omega}\left\{\int_{[0,\infty)}1\!\!1_{\{t\in K_{\omega} \ominus T\}}(\omega,t)dt\right\}\mathbb{P}(d\omega) =0,\]
which implies that \((K_{\omega}\ominus S)\cup(K_{\omega}\ominus T)\) has Lebesgue measure zero almost surely.
3. When \(t>0\), we have \[\mathbb{P}(\omega;t\in(K_{\omega}\ominus S)\cup(K_{\omega}\ominus T))\] \[\leq \mathbb{P}(\omega;t\in(K_{\omega}\ominus S))+\mathbb{P}(\omega;t\in( K_{\omega}\ominus T))\] \[\leq \mathbb{P}(\omega;t+S\in K_{\omega})+\mathbb{P}(\omega;t+T\in K_{ \omega})=0.\]
4. From Lemma 6.15, \(\partial[\![\Diamond_{\langle S,T\rangle}\phi]\!]_{\omega}\subset(K_{\omega}\ominus S)\cup(K_{\omega}\ominus T)\) holds almost surely.
In the next lemma, we give a sufficient condition for convergence of the indicator function of a formula with a diamond or box operator.
**Lemma 6.16**.: _Let \(X\) be the solution of SDE (6.1) satisfying Assumption 6.1. Define an MTL formula \(p\) as_
\[X(\omega),t\models p\Leftrightarrow X_{t}(\omega)\in B_{p}\]
_for some positive interval \(B_{p}\) on \(\mathbb{R}\). Let \(\langle S,T\rangle\) be a positive interval on \([0,\infty)\). If \(p\) satisfies (6.3) and (6.4), the following statements hold:_
1. _Define_ \(\phi:=\Diamond_{\langle S,T\rangle}p\)_. Then_ \(\chi_{\phi}^{(n)}(\omega,t)\to\chi_{\phi}(\omega,t)\) _for every_ \(t\in[0,\infty)\)_._
2. _Define_ \(\psi:=\Box_{\langle S,T\rangle}p\)_. Then_ \(\chi_{\psi}^{(n)}(\omega,t)\to\chi_{\psi}(\omega,t)\) _for every_ \(t\in[0,\infty)\)_._
_Here, \(\chi_{\phi}^{(n)}(\omega,t)\), \(\chi_{\psi}^{(n)}(\omega,t)\), \(\chi_{\phi}(\omega,t)\), and \(\chi_{\psi}(\omega,t)\) are the indicator functions defined in Definition 6.5._
Proof.: First, let us show that \(\langle t+S,t+T\rangle\cap\llbracket p\rrbracket_{\omega}\neq\emptyset\) implies \((t+S,t+T)\cap\mathrm{int}\llbracket p\rrbracket_{\omega}\neq\emptyset\) almost surely. If \(\partial\llbracket p\rrbracket_{\omega}\cap\langle t+S,t+T\rangle=\emptyset\) and \(\langle t+S,t+T\rangle\cap\llbracket p\rrbracket_{\omega}\neq\emptyset\), then it follows that \(X(\omega),s\models p\) for all \(s\in\langle t+S,t+T\rangle\), and therefore \(\langle t+S,t+T\rangle\subset\llbracket p\rrbracket_{\omega}\). Consequently, \((t+S,t+T)\cap\mathrm{int}\llbracket p\rrbracket_{\omega}\neq\emptyset\). Next, suppose \(\partial\llbracket p\rrbracket_{\omega}\cap\langle t+S,t+T\rangle\neq\emptyset\) and \(\langle t+S,t+T\rangle\cap\llbracket p\rrbracket_{\omega}\neq\emptyset\). Since \(\{X_{t}\}_{t\geq 0}\) satisfies Assumption 6.1, \(X_{t}\) has a density for \(t>0\) by Appendix A.4. When \(t+S>0\), we have \(t+T>t+S>0\), and Lemma 6.14 implies that \(t+S\) and \(t+T\) do not belong to \(\partial\llbracket p\rrbracket_{\omega}\) almost surely. Thus, we have \(\partial\llbracket p\rrbracket_{\omega}\cap(t+S,t+T)\neq\emptyset\), which implies \(\llbracket p\rrbracket_{\omega}\cap(t+S,t+T)\neq\emptyset\). Therefore, we conclude from (6.3) that \((t+S,t+T)\cap\mathrm{int}\llbracket p\rrbracket_{\omega}\neq\emptyset\). If \(t+S=0\), \(\partial\llbracket p\rrbracket_{\omega}\) intersects a set of the form \([0,t+T)\) or \((0,t+T)\), which is relatively open in \([0,\infty)\). Then, from (6.3), we can conclude \(\mathrm{int}\llbracket p\rrbracket_{\omega}\cap(t+S,t+T)\neq\emptyset\).
We can show in similar way that \(\langle t+S,t+T\rangle\cap\llbracket\neg p\rrbracket_{\omega}\neq\emptyset\) implies \((t+S,t+T)\cap\mathrm{int}\llbracket\neg p\rrbracket_{\omega}\neq\emptyset\) almost surely.
Now let us prove (i) and (ii). Suppose \(X(\omega),t\models\phi\). Since \((t+S,t+T)\cap\mathrm{int}\llbracket p\rrbracket_{\omega}\) is a nonempty open set, there exists \(s\in(\Lambda_{n}(t)+S,\Lambda_{n}(t)+T)\cap\mathbb{N}/n\) such that \(X(\omega),s\models_{n}p\) for sufficiently large \(n\). Hence, \(X(\omega),\Lambda_{n}(t)\models_{n}\phi\). By applying the same argument, we can show from (6.4) that if \(X(\omega),t\not\models\psi\), then \(X(\omega),\Lambda_{n}(t)\not\models_{n}\psi\) for sufficiently large \(n\).
On the other hand, suppose \(X(\omega),t\not\models\phi\). Then \(\llbracket p\rrbracket_{\omega}\cap\langle t+S,t+T\rangle=\emptyset\) and hence \(\partial\llbracket p\rrbracket_{\omega}\cap(t+S,t+T)=\emptyset\). If \(t+S>0\), according to Appendix A.4 and Lemma 6.14, \(t+S\) and \(t+T\) do not belong to \(\partial\llbracket p\rrbracket_{\omega}\) almost surely. Thus, there exists \(\varepsilon>0\) such that \((t+S-\varepsilon,t+T+\varepsilon)\subset\llbracket\neg p\rrbracket_{\omega}\), and hence \((\Lambda_{n}(t)+S,\Lambda_{n}(t)+T)\cap\llbracket p\rrbracket_{\omega}=\emptyset\) for sufficiently large \(n\). If \(t+S=0\), since \(\Lambda_{n}(t)=t=0\), it holds that
\[X(\omega),\Lambda_{n}(t)\not\models_{n}\phi\Leftrightarrow X( \omega),0\not\models_{n}\Diamond_{\langle 0,T\rangle}p,\] \[X(\omega),t\not\models\phi\Leftrightarrow X(\omega),0\not\models \Diamond_{\langle 0,T\rangle}p.\]
Then it is clear that \(X(\omega),t\not\models\phi\) implies \(X(\omega),\Lambda_{n}(t)\not\models_{n}\phi\). Now we have shown that \(X(\omega),\Lambda_{n}(t)\not\models_{n}\phi\) for sufficiently large \(n\). The same argument can be applied to prove that if \(X(\omega),t\models\psi\), then \(X(\omega),\Lambda_{n}(t)\models_{n}\psi\) for sufficiently large \(n\).
**Lemma 6.17**.: _Suppose that a propositional formula \(p\) satisfies the conditions introduced in the statement of Theorem 6.8. Specifically, let \(\langle x_{i},y_{i}\rangle,\ i=1,\cdots,k\) be pairwise separated positive intervals, and define \(B_{p}:=\bigcup_{i=1}^{k}\langle x_{i},y_{i}\rangle\). Define propositional formulas \(p,p_{1},\cdots,p_{k}\) by_
\[X(\omega),t\models p \Leftrightarrow X_{t}(\omega)\in B_{p},\] \[X(\omega),t\models_{n}p \Leftrightarrow X_{t}(\omega)\in B_{p},\] \[X(\omega),t\models p_{i} \Leftrightarrow X_{t}(\omega)\in\langle x_{i},y_{i}\rangle\quad \text{for $i=1,\cdots,k$},\] \[X(\omega),t\models_{n}p_{i} \Leftrightarrow X_{t}(\omega)\in\langle x_{i},y_{i}\rangle\quad \text{for $i=1,\cdots,k$}.\]
_Then \(p\) satisfies (6.3) and (6.4). Namely,_
\[\llbracket p\rrbracket_{\omega} \subset\overline{\text{int}\llbracket p\rrbracket_{\omega}},\quad \text{a.s.}\,\] \[\llbracket\neg p\rrbracket_{\omega} \subset\overline{\text{int}\llbracket\neg p\rrbracket_{\omega}}, \quad\text{a.s.}\]
Proof.: First note that
\[X(\omega),t\models p \Leftrightarrow X(\omega),t\models\bigvee_{i=1}^{k}p_{i},\] \[X(\omega),t\models_{n}p \Leftrightarrow X(\omega),t\models_{n}\bigvee_{i=1}^{k}p_{i},\]
where \(\bigvee_{i=1}^{k}p_{i}=p_{1}\lor p_{2}\vee\cdots\lor p_{k}\). If \(B_{p}=\emptyset\) or \(B_{p}=\mathbb{R}\), clearly \(\llbracket p\rrbracket_{\omega}=\emptyset\) or \(\llbracket p\rrbracket_{\omega}=[0,\infty)\), respectively. Hence (6.3) and (6.4) hold. Otherwise, from Lemma 6.9, every \(p_{i}\) satisfies (6.3) and (6.4) almost surely. Now we show that \(\llbracket p\rrbracket_{\omega}(=\llbracket\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega})\) satisfies (6.3) and (6.4).
* Let \(t\in\llbracket\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\) and \(O\) be a neighborhood of \(t\). Since \(\llbracket\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}=\bigcup_{i=1}^{k} \llbracket p_{i}\rrbracket_{\omega}\), there exists some \(i\in\{1,\cdots,k\}\) such that \(t\in\llbracket p_{i}\rrbracket_{\omega}\). Since \(\llbracket p_{i}\rrbracket_{\omega}\) satisfies (6.3) almost surely and \(\text{int}\llbracket p_{i}\rrbracket_{\omega}\subset\text{int}\llbracket \bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\), \(O\cap\text{int}\llbracket\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\neq\emptyset\) almost surely. Then \(\bigvee_{i=1}^{k}p_{i}\) satisfies (6.3) almost surely.
* Let \(t\in\llbracket\neg\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\). If \(t\in\text{int}\llbracket\neg\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\), then for any neighborhood \(O\) of \(t\), we have \(O\cap\text{int}\llbracket\neg\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\neq\emptyset\). Thus, it suffices to show that \(O\cap\text{int}\llbracket\neg\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\neq\emptyset\) for any neighborhood \(O\) of \(t\) whenever \(t\in\partial\llbracket\neg\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\). Since \(\text{int}\llbracket\neg\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}=\text{int}(\bigcap_{i=1}^{k}\llbracket\neg p_{i}\rrbracket_{\omega})=\bigcap_{i=1}^{k}\text{int}\llbracket\neg p_{i}\rrbracket_{\omega}\), there must exist some \(i\in\{1,\cdots,k\}\) such that \(t\in\partial\llbracket\neg p_{i}\rrbracket_{\omega}\). Indeed, if \(t\in\text{int}\llbracket\neg p_{i}\rrbracket_{\omega}\) for every \(i\), then \(t\in\text{int}\llbracket\neg\bigvee_{i=1}^{k}p_{i}\rrbracket_{\omega}\). Since \(t\mapsto X_{t}(\omega)\) is continuous almost surely and \(\langle x_{1},y_{1}\rangle,\cdots,\langle x_{k},y_{k}\rangle\) are pairwise separated, we have \(X_{t}(\omega)\in[x_{j},y_{j}]^{C}\) when \(j\neq i\) almost surely, and hence \(t\in\text{int}\llbracket\neg p_{j}\rrbracket_{\omega}\) for \(j\neq i\) almost surely. Therefore, \(t\in\bigcap_{j\neq i}\text{int}\llbracket\neg p_{j}\rrbracket_{\omega}\), and hence \((t-\delta,t+\delta)\subset\bigcap_{j\neq i}\text{int}\llbracket\neg p_{j}\rrbracket_{\omega}\) for sufficiently small \(\delta>0\). Now, since \(p_{i}\) satisfies (6.4), we have \((t-\delta,t+\delta)\cap\text{int}\llbracket\neg p_{i}\rrbracket_{\omega}\neq\emptyset\). Hence, \((t-\delta,t+\delta)\cap\text{int}\llbracket\neg\bigvee_{j=1}^{k}p_{j}\rrbracket_{\omega}=(t-\delta,t+\delta)\cap\bigcap_{j=1}^{k}\text{int}\llbracket\neg p_{j}\rrbracket_{\omega}=(t-\delta,t+\delta)\cap\text{int}\llbracket\neg p_{i}\rrbracket_{\omega}\neq\emptyset\).
Proof of Theorem 6.8.: From the condition on \(B_{p}\), it is a union of pairwise separated positive intervals. Then Lemma 6.16 and Lemma 6.17 imply the almost sure convergence of \(\chi^{(n)}_{\phi}(\omega,t)\) and \(\chi^{(n)}_{\psi}(\omega,t)\) for every \(t\in[0,\infty)\). Finally, Lemma 6.6 can be employed to show the convergence of the probability.
### The case of \(\flat\)MTL formulas
Now we prove the convergence result for general \(\flat\)MTL formulas. Let \(X\) be the solution of SDE (6.1) with Assumption 6.1. Henceforth, we proceed under the following assumption:
**Assumption 6.18**.: _For every propositional formula \(p\),_
\[X(\omega),t\models p\Leftrightarrow X_{t}(\omega)\in B_{p}, \tag{6.10}\]
_for some \(B_{p}\) which is a union of pairwise separated positive intervals on \(\mathbb{R}\). Here \(B_{p}\) may possibly be \(\emptyset\) or \(\mathbb{R}\)._
**Remark 6.19**.: We give an example of a setting in which every propositional formula satisfies Assumption 6.18. Let \(B_{1},\cdots,B_{k}\) be positive intervals on \(\mathbb{R}\) such that
1. \(\bigcup_{i=1}^{k}B_{i}=\mathbb{R}\).
2. \(B_{i}\cap B_{j}=\emptyset\) if \(i\neq j\).
We define the semantics of atomic formulas \(AP:=\{a_{1},\cdots,a_{k}\}\) as \(X(\omega),t\models a_{i}\Leftrightarrow X_{t}(\omega)\in B_{i}\) for \(i=1,\cdots,k\). Then every propositional formula clearly satisfies Assumption 6.18.
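As a concrete illustration of this setting, the following Python sketch (our own minimal example, not a construct from the paper; the Euler-Maruyama driver, the grid convention \(\Lambda_{n}(t)=\lfloor nt\rfloor/n\), and all function names are assumptions) evaluates the discrete semantics \(\models_{n}\) of an atomic formula defined by a partition interval, and of \(\Diamond_{\langle S,T\rangle}\), on a path sampled on the grid \(\mathbb{N}/n\):

```python
import numpy as np

def simulate_path(n, horizon, x0=0.0, mu=lambda x: -x, sigma=lambda x: 1.0, rng=None):
    """Euler-Maruyama sample of a one-dimensional SDE on the grid N/n (assumed discretization)."""
    rng = rng or np.random.default_rng(0)
    dt = 1.0 / n
    steps = int(horizon * n)
    x = np.empty(steps + 1)
    x[0] = x0
    for k in range(steps):
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * np.sqrt(dt) * rng.standard_normal()
    return x  # x[k] approximates X_{k/n}

def sat_atomic(path, lo, hi):
    """Grid indicator of an atomic formula a with B_a = [lo, hi)."""
    return (path >= lo) & (path < hi)

def sat_diamond(sat_p, n, S, T):
    """Discrete diamond: X, k/n |=_n <>_(S,T) p iff p holds at some grid time
    in (k/n + S, k/n + T); windows near the end of the path are truncated."""
    out = np.zeros(len(sat_p), dtype=bool)
    for k in range(len(sat_p)):
        lo = int(np.floor(k + S * n)) + 1  # first grid index strictly above k/n + S
        hi = int(np.ceil(k + T * n))       # slice end: indices strictly below k/n + T
        out[k] = sat_p[lo:hi].any()
    return out

# Partition of R as in the remark, e.g. B_1 = (-inf, 0), B_2 = [0, inf)
x = simulate_path(n=100, horizon=5.0)
chi = sat_diamond(sat_atomic(x, 0.0, np.inf), n=100, S=0.5, T=1.0)
```

Refining the grid by increasing \(n\) then mimics the limit studied in Theorem 6.20 below.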
Under these settings, we show the following statement.
**Theorem 6.20**.: _Suppose that \(\{X_{t}\}_{t\geq 0}\) is the solution of SDE (6.1) with Assumption 6.1. Let \(AP\) be the set of atomic formulas such that every propositional formula satisfies Assumption 6.18. Let \(\phi\) be a \(\flat\)MTL formula. Then \(\chi^{(n)}_{\phi}(\omega,t)\to\chi_{\phi}(\omega,t)\) almost surely for every \(t\in[0,\infty)\). In particular, \(\mathbb{P}(\omega;X(\omega),\Lambda_{n}(t)\models_{n}\phi)\to\mathbb{P}(\omega ;X(\omega),t\models\phi)\) for all \(t\in[0,\infty)\)._
**Lemma 6.21**.: _Suppose that Assumption 6.1 and Assumption 6.18 hold. Let \(p\) be a propositional formula. Then \(\chi^{(n)}_{p}(\omega,t)\to\chi_{p}(\omega,t)\) almost surely, for every \(t\in[0,\infty)\). In particular, \(\mathbb{P}(\omega;X(\omega),\Lambda_{n}(t)\models_{n}p)\to\mathbb{P}(\omega;X(\omega),t\models p)\)._
Proof.: First note that \(X(\omega),t\models p\) is equivalent to \(X_{t}(\omega)\in B_{p}\) for some \(B_{p}\subset\mathbb{R}\). Let \(t=0\). Then \(\Lambda_{n}(0)=0\) and hence \(X(\omega),\Lambda_{n}(0)\models_{n}p\) is equivalent to \(X_{0}(\omega)\in B_{p}\). Next let \(t>0\). By the definition of the indicator functions, \(\chi_{p}(\omega,t)=1\) is equivalent to \(X_{t}(\omega)\in B_{p}\) and \(\chi^{(n)}_{p}(\omega,t)=1\) is equivalent to \(X_{\Lambda_{n}(t)}(\omega)\in B_{p}\). From Assumption 6.18, \(B_{p}\) is a union of pairwise separated positive intervals, so Lemma 6.14 implies that \(t\notin\partial[\![p]\!]_{\omega}\) almost surely. Then almost surely there exists some \(\varepsilon>0\) such that \(\chi_{p}(\omega,s)=\chi_{p}(\omega,t)\) for every \(s\in(t-\varepsilon,t+\varepsilon)\cap[0,\infty)\). Then it holds almost surely that \(\chi^{(n)}_{p}(\omega,t)=\chi_{p}(\omega,\Lambda_{n}(t))=\chi_{p}(\omega,t)\) for sufficiently large \(n\).
Proof of Theorem 6.20.: Fix \(t\in[0,\infty)\). It is clear that \(\phi\) is a Boolean combination of \(\{\phi_{i},i=1,\cdots,k\}\), where each \(\phi_{i}\) is a propositional formula or a formula of the form \(\Diamond_{\langle S,T\rangle}p\), where \(\langle S,T\rangle\) is a positive interval and \(p\) is a propositional formula. Then there exists some function \(\bigodot:\{0,1\}^{k}\longrightarrow\{0,1\}\) such that
\[\chi_{\phi}(\omega,t) =\bigodot_{i=1}^{k}\chi_{\phi_{i}}(\omega,t), \tag{6.11}\] \[\chi_{\phi}^{(n)}(\omega,t) =\bigodot_{i=1}^{k}\chi_{\phi_{i}}^{(n)}(\omega,t). \tag{6.12}\]
From Assumption 6.18 and Lemma 6.17, every propositional formula satisfies (6.3) and (6.4). Then we can apply Lemma 6.16 and Lemma 6.21 to show that \(\chi_{\phi_{i}}^{(n)}(\omega,t)\) converges almost surely to \(\chi_{\phi_{i}}(\omega,t)\) for every \(i=1,\cdots,k\). Then, almost surely, there exists some large \(N\in\mathbb{N}\) such that \(\chi_{\phi_{i}}^{(n)}(\omega,t)=\chi_{\phi_{i}}(\omega,t)\) for \(n\geq N\) and \(i=1,\cdots,k\). Therefore the left-hand side of (6.12) converges to the left-hand side of (6.11) almost surely. Once the almost sure convergence of (6.12) to (6.11) is shown, one can apply Lemma 6.6 to see the convergence of the probability.
**Remark 6.22**.: In Assumption 6.18, we require the satisfaction set of every propositional formula to be a union of pairwise separated intervals. This is one of the vital assumptions in our result. Let \(X:=\{X_{t}\}_{t\geq 0}\) be a one-dimensional Brownian motion starting at zero. Consider the case that \(a,b\) are atomic formulas such that \(X(\omega),t\models a\Leftrightarrow X_{t}(\omega)\in B_{a}\) and \(X(\omega),t\models b\Leftrightarrow X_{t}(\omega)\in B_{b}\), where \(B_{a}:=(-\infty,1)\) and \(B_{b}:=(1,\infty)\). Let \(p:=\neg(a\lor b)\) and \(\phi:=\Diamond_{\langle S,T\rangle}p\). Since \(\overline{B_{a}}\cap\overline{B_{b}}\neq\emptyset\), \(B_{a}\) and \(B_{b}\) are not separated. Since \(X_{0}\neq 1\) and \(X_{t}\) has a density for all \(t>0\), \(\mathbb{P}(\omega;X_{t}(\omega)=1)=0\) for all \(t\in[0,\infty)\cap\mathbb{Q}\). Hence the \(\sigma\)-additivity of the probability measure implies \(\mathbb{P}(\exists t\in[0,\infty)\cap\mathbb{Q},\ X_{t}=1)=0\). Then \(\mathbb{P}(\omega;X(\omega),\Lambda_{n}(0)\models_{n}\Diamond_{\langle S,T\rangle}p)=0\) for every \(n,S,T\), and then \(\mathbb{P}(\omega;X(\omega),\Lambda_{n}(0)\models_{n}\phi)=0\). On the other hand, if \(\tau_{1}(\omega):=\inf\{t\geq 0;X_{t}(\omega)=1\}\in(S,T)\), then \(X(\omega),0\models\phi\). From Theorem 5.1 and the monotonicity of probability measures, \(\mathbb{P}(\omega;X(\omega),0\models\phi)\geq\mathbb{P}(\tau_{1}\in(S,T))>0\), and hence \(\mathbb{P}(\omega;X(\omega),\Lambda_{n}(0)\models_{n}\phi)\) does not converge to \(\mathbb{P}(\omega;X(\omega),0\models\phi)\).
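The gap in this counterexample can be made quantitative (a sketch of our own, with all numerical choices assumed): by the reflection principle, \(\mathbb{P}(\tau_{1}\leq t)=2(1-\Phi(1/\sqrt{t}))\) for standard Brownian motion, so \(\mathbb{P}(\tau_{1}\in(S,T))\) is available in closed form, while any grid-based Monte Carlo estimate of the discrete-semantics probability is identically zero, since the event \(X_{t}=1\) has probability zero at every fixed grid time:

```python
import numpy as np
from scipy.stats import norm

def p_hit_level_one(S, T):
    """P(tau_1 in (S, T)) for standard Brownian motion via the reflection principle."""
    cdf = lambda t: 0.0 if t <= 0 else 2.0 * (1.0 - norm.cdf(1.0 / np.sqrt(t)))
    return cdf(T) - cdf(S)

def discrete_estimate(S, T, n=100, paths=2000, rng=None):
    """Monte Carlo estimate of P(exists a grid time in (S,T) with X_t = 1): always 0."""
    rng = rng or np.random.default_rng(1)
    dt = 1.0 / n
    hits = 0
    for _ in range(paths):
        steps = int(T * n) + 1
        x = np.cumsum(np.sqrt(dt) * rng.standard_normal(steps))
        times = np.arange(1, steps + 1) * dt
        in_window = (times > S) & (times < T)
        hits += np.any(x[in_window] == 1.0)  # exact equality almost never holds
    return hits / paths

print(p_hit_level_one(0.5, 2.0))    # ~0.32, strictly positive
print(discrete_estimate(0.5, 2.0))  # 0.0
```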
## 7. Conclusion
In conclusion, this study has examined the measurability of events defined by continuous MTL formulas under the assumption that the underlying stochastic process is measurable as a mapping of the sample and time variables.
Moreover, we demonstrated a counterexample that highlights the lack of convergence of the probability derived from discrete semantics to that derived from continuous semantics, specifically when the intervals within diamond operators are allowed to be bounded and open.
Furthermore, the study explored the case of \(\flat\)MTL formulas, which admit only non-nested \(\square\) and \(\Diamond\) modalities, and demonstrated that the probability obtained from discrete semantics converges to the probability obtained from continuous semantics for every formula within this framework. This finding suggests that \(\flat\)MTL formulas exhibit a desirable convergence property, highlighting their applicability and reliability in capturing system behaviors.
In light of these results, future research efforts should focus on understanding the underlying factors and mechanisms that contribute to the convergence or divergence of probability between discrete and continuous semantics in various formula contexts. By gaining deeper insights into these dynamics, researchers can enhance the effectiveness and accuracy of probability simulations and predictions within the realm of formal verification and system analysis.
|
2306.16492 | Lithium tantalate electro-optical photonic integrated circuits for high
volume manufacturing | Photonic integrated circuits based on Lithium Niobate have demonstrated the
vast capabilities afforded by material with a high Pockels coefficient,
allowing linear and high-speed modulators operating at CMOS voltage levels for
applications ranging from data-center communications and photonic accelerators
for AI. However despite major progress, the industrial adoption of this
technology is compounded by the high cost per wafer. Here we overcome this
challenge and demonstrate a photonic platform that satisfies the dichotomy of
allowing scalable manufacturing at low cost, while at the same time exhibiting
equal, and superior properties to those of Lithium Niobate. We demonstrate that
it is possible to manufacture low loss photonic integrated circuits using
Lithium Tantalate, a material that is already commercially adopted for acoustic
filters in 5G and 6G. We show that LiTaO3 possesses equally attractive optical
properties and can be etched with high precision and negligible residues using
DUV lithography, diamond like carbon (DLC) as a hard mask and alkaline wet
etching. Using this approach we demonstrate microresonators with an intrinsic
cavity linewidth of 26.8 MHz, corresponding to a linear loss of 5.6 dB/m and
demonstrate a Mach Zehnder modulator with Vpi L = 4.2 V cm half-wave voltage
length product. In comparison to Lithium Niobate, the photonic integrated
circuits based on LiTaO3 exhibit a much lower birefringence, allowing
high-density circuits and broadband operation over all telecommunication bands
(O to L band), exhibit higher photorefractive damage threshold, and lower
microwave loss tangent. Moreover, we show that the platform supports generation
of soliton microcombs in X-Cut LiTaO3 racetrack microresonator with
electronically detectable repetition rate, i.e. 30.1 GHz. | Chengli Wang, Zihan Li, Johann Riemensberger, Grigory Lihachev, Mikhail Churaev, Wil Kao, Xinru Ji, Terence Blesin, Alisa Davydova, Yang Chen, Xi Wang, Kai Huang, Xin Ou, Tobias J. Kippenberg | 2023-06-28T18:32:25Z | http://arxiv.org/abs/2306.16492v2 | # Lithium tantalate electro-optical photonic integrated circuits for high volume manufacturing
###### Abstract
**Electro-optical photonic integrated circuits based on Lithium Niobate have demonstrated the vast capabilities afforded by a material with a high Pockels coefficient [1; 2], allowing linear and high-speed modulators operating at CMOS voltage levels [3] for applications ranging from data-center communications [4], high-performance computing to photonic accelerators for AI [5]. However, despite major progress, the industrial adoption of this technology is compounded by the high cost per wafer, and limited wafer size - that result from the lack of existing high-volume applications in other domains, as is evidenced in the rapid adoption of silicon-on-insulator photonics driven by the vast investments into microelectronics. Here we overcome this challenge and demonstrate a photonic platform that satisfies the dichotomy of allowing scalable manufacturing at low cost - based on today's existing infrastructure - while at the same time exhibiting equal, and superior properties to those of Lithium Niobate. We demonstrate that it is possible to manufacture low loss photonic integrated circuits using Lithium Tantalate (LiTaO\({}_{3}\)), a material that is already commercially adopted for acoustic filters in 5G and 6G [6]. We show that LiTaO\({}_{3}\) possesses equally attractive optical properties and can be etched with high precision and negligible residues using DUV lithography, diamond like carbon (DLC) as a hard mask [7] and alkaline wet etching. Using this approach we demonstrate microresonators with an intrinsic cavity linewidth of \(26.8\,\mathrm{MHz}\), corresponding to a linear loss of \(5.6\,\mathrm{dB}\,\mathrm{m}^{-1}\) and demonstrate a Mach Zehnder modulator with \(V_{\pi}L=4.2\) V cm half-wave voltage length product. In comparison to Lithium Niobate, the photonic integrated circuits based on LiTaO\({}_{3}\) exhibit a much lower birefringence, allowing high-density circuits and broadband operation over all telecommunication bands (O to L band), exhibit higher photorefractive damage threshold, and lower microwave loss tangent. Moreover, we show that the platform supports generation of soliton microcombs in an X-Cut LiTaO\({}_{3}\) racetrack microresonator with electronically detectable repetition rate, i.e. \(30.1\,\mathrm{GHz}\). Our work paves the way for scalable manufacturing of low-cost and large-volume high-performance electro-optical photonic integrated circuits for next-generation ultra high-speed modulators and devices for energy-efficient communication, computation or interconnects.**
## Introduction
Next-generation ultra-high-speed photonic integrated circuits (PICs) based on electro-optical materials are poised to play a role in energy-efficient data centers, optical communications, 5G/6G, or high-performance computing - granted that scalable low-cost manufacturing becomes possible. In the past two decades, photonic integrated circuits based on silicon (silicon photonics) have rapidly transitioned from academic research to widespread use in telecommunications [8] and data centers [9]. One crucial factor driving the commercial feasibility of this technological revolution is the high-volume availability and cost-effectiveness of silicon-on-insulator (SOI) wafers. These SOI wafers, prepared using the 'smart-cut', i.e. ion slicing techniques [10], enable the manufacturing of silicon photonic integrated circuits but crucially have a significantly larger magnitude of usage in consumer microelectronics. Today, globally more than 3 million wafers are produced per year in SOI, as large as 300 mm in diameter [8]. Using a similar technique, lithium niobate (LiNbO\({}_{3}\)) has been fabricated into lithium niobate on insulator (LNOI) structures, offering an entirely new class of ultra high-speed, low voltage electro-optical photonic integrated circuits [11; 12; 13] that can become key components in future energy-efficient communication systems. Despite the tremendous scientific progress and the increased application range of LiNbO\({}_{3}\) photonic integrated circuits, the path to commercialization remains challenging, and commercial products do not exist to date. Unlike SOI technology, LNOI lacks a larger volume of consumer electronics driving its demand, resulting in economic limitations in its commercialization. In comparison, another ferroelectric material, lithium tantalate, which shares similar structural properties with LiNbO\({}_{3}\), has entered a large-volume production stage driven by its applications in 5G filters [14; 15] and is projected to achieve a production capacity of 750k LTOI wafers per year by 2024 [16]. The substantial volume enables significant benefits in terms of low-cost production when adopting LTOI as a platform for photonic integrated
circuits - yet photonic integrated circuits based on this material have never been demonstrated to date. \(\mathrm{LiTaO_{3}}\), in addition to the advantage in production volume, exhibits comparable or superior properties to \(\mathrm{LiNbO_{3}}\). \(\mathrm{LiTaO_{3}}\) is a traditional ferroelectric crystal with a nearly identical crystal structure as \(\mathrm{LiNbO_{3}}\), replacing Nb atoms in the crystal structure with heavier Ta atoms (cf. Figure 1(a)). The stronger chemical bonds induce a higher electron density in \(\mathrm{LiTaO_{3}}\)[17], which makes \(\mathrm{LiTaO_{3}}\) exhibit not only higher density but also increased stiffness and chemical stability. The optical bandgap of \(\mathrm{LiTaO_{3}}\) (3.93 eV) is larger than that of \(\mathrm{LiNbO_{3}}\) (3.63 eV) [18], allowing e.g. nonlinear optical conversion to the visible and even ultraviolet [19] wavelength range, with much decreased optical anisotropy, i.e. one order of magnitude reduced optical birefringence compared to \(\mathrm{LiNbO_{3}}\). The latter is particularly important as it allows the manufacturing of tightly confining waveguides with strong bends without mode mixing, which can operate across all telecommunication bands simultaneously (from O to L band). Moreover, \(\mathrm{LiTaO_{3}}\) features a comparable Pockels coefficient (\(r_{33}=30.5\,\mathrm{pm}\,\mathrm{V}^{-1}\)) to the well-established \(\mathrm{LiNbO_{3}}\) with a moderately larger electrical permittivity \(\epsilon_{33}=43\). Particularly relevant for applications in the realm of microwave quantum transduction [20], the much lower microwave loss tangent of \(\mathrm{LiTaO_{3}}\) is a promising avenue to improve device performance to unity conversion efficiency, which has so far eluded efforts in \(\mathrm{LiNbO_{3}}\) due to limited quality factors of microwave resonators [21]. Historically, despite the beneficial optical material properties, the use of \(\mathrm{LiTaO_{3}}\) for photonic devices in optical communication networks or scientific research has been limited. One of the reasons is that the Curie temperature of \(\mathrm{LiTaO_{3}}\) (\(610\,\mathrm{\SIUnitSymbolCelsius}\) - \(700\,\mathrm{\SIUnitSymbolCelsius}\)) is much lower than the temperatures needed for the fabrication of optical waveguides by ion diffusion (typically above \(1000\,\mathrm{\SIUnitSymbolCelsius}\)), which has hindered the use of \(\mathrm{LiTaO_{3}}\) for bulk modulators based on ion-diffused waveguides [22]. For this reason, legacy bulk modulator technology has employed \(\mathrm{LiNbO_{3}}\). The commercial adoption of LTOI in wireless applications due to its suitable acoustic properties, combined with the above optical properties, makes it an ideal platform for scalably manufactured electro-optical photonic integrated circuits - yet the latter has to date never been demonstrated nor pursued. Although free-standing whispering gallery mode resonators have been fabricated from \(\mathrm{LiTaO_{3}}\) single crystals [23] using femtosecond laser direct writing [24] or focused ion beam milling [25], scalably manufactured photonic integrated circuits have remained an outstanding challenge.
Here, we overcome this challenge and implement the first photonic integrated circuit platform based on \(\mathrm{LiTaO_{3}}\)-on-insulator, relying on direct etching [7], and demonstrate ultra-low optical loss, electro-optical tuning, switching via the Pockels effect, and soliton microcomb generation via the optical Kerr effect of \(\mathrm{LiTaO_{3}}\). We achieve this by transferring the DLC-based masking and etching process originally developed for \(\mathrm{LiNbO_{3}}\) to \(\mathrm{LiTaO_{3}}\) and proposing a new solution to remove \(\mathrm{LiTaO_{3}}\) redeposition, which highlights the flexibility of our process for the fabrication of a variety of ferroelectric photonic platforms. Equally, we demonstrate a DUV approach to electrode manufacturing. Taken together, our work establishes a basis for scalable volume manufacturing of ultrahigh-speed electro-optical photonic integrated circuits.
## Low loss lithium tantalate-based photonic integrated circuits
The fabrication process for LTOI wafers and optical waveguides is depicted in Figure 1 and closely resembles recent efforts for LNOI [7]. We fabricated devices such as optical ring resonators (cf. Figure 1(b)), racetrack resonators, and waveguide spirals (cf. Supplementary Figure 3). Figures 1(c,d) show the etched waveguide sidewalls and cleaved cross-section featuring low sidewall roughness and steep sidewall angles close to \(70\,\mathrm{\SIUnitSymbolDegree}\) with respect to the surface. The LTOI wafer was fabricated by the smart-cut technique [15]. The process flow is schematically illustrated in Figure 1(f). In contrast to the well-established LNOI preparation process that utilizes helium ion implantation, hydrogen ions are preferred for the fabrication of LTOI. The fabrication recipes of LTOI are more closely aligned with the high-volume commercial production of SOI wafers, resulting in higher efficiency and lower costs in the production of LTOI compared to LNOI. We implanted hydrogen ions into an X-Cut bulk \(\mathrm{LiTaO_{3}}\) wafer with an energy of \(100\,\mathrm{keV}\) and a dose of \(3.2\times 10^{16}\,\mathrm{cm}^{-2}\). After that, we flipped the implanted wafer and bonded it to a blank \(525\,\mathrm{\SIUnitSymbolMicro m}\) thick carrier wafer. Following annealing and chemical mechanical polishing (CMP) of the transferred film (cf. Figure 1(f)), we pattern the waveguides by DUV stepper lithography with a DLC hard mask and partially etch the \(\mathrm{LiTaO_{3}}\) film by argon ion beam etching (cf. Figure 1(e)),
leaving a \(100\,\mathrm{nm}\) thick continuous \(\mathrm{LiTaO_{3}}\) slab across the wafer. We remove the \(\mathrm{LiTaO_{3}}\) redeposition on the waveguide sidewalls with an additional wet etching step and anneal the wafer at \(500\,\mathrm{\SIUnitSymbolCelsius}\) in an oxygen atmosphere. We deposit a \(2\,\mathrm{\SIUnitSymbolMicro m}\) thick upper cladding with PECVD based on a hydrogen-free precursor to avoid overtone absorption from optical phonons of the Si-OH stretch vibration around \(1.5\,\mathrm{\SIUnitSymbolMicro m}\).
Next, we optically characterize the \(\mathrm{LiTaO_{3}}\) PICs using frequency-comb calibrated tunable diode laser spectroscopy [26] to determine the optical loss and absorption of optical microresonators with waveguide width of \(2.0\,\mathrm{\SIUnitSymbolMicro m}\) across the 4-inch wafers (cf. Figure 2(a)). We find mean intrinsic loss rates \(\kappa_{0}/2\pi\) between \(35.2\,\mathrm{MHz}\) and \(72.9\,\mathrm{MHz}\) with eight out of nine fields performing better than \(50.8\,\mathrm{MHz}\). The microresonator device loss of \(35.2\,\mathrm{MHz}\) corresponds to a propagation loss of \(\alpha=7.3\,\mathrm{dB\,m^{-1}}\) for the unreduced \(\mathrm{LiTaO_{3}}\) wafer that is specially used for optical applications. We also characterize the optical loss of the LTOI platform fabricated from the cheaper and more readily available acoustic grade \(\mathrm{LiTaO_{3}}\) bulk wafers. The acoustic LTOI exhibits slightly higher loss compared to the unreduced wafers. The best field features a loss rate of \(42\,\mathrm{MHz}\) with a mean loss rate of \(82\,\mathrm{MHz}\) across the whole wafer (see Supplementary Figure 1). This corresponds to losses of \(8.8\,\mathrm{dB\,m^{-1}}\) and \(17.1\,\mathrm{dB\,m^{-1}}\), which is substantially below published losses in wafer-scale fabrication of LNOI PIC [27], making our process therefore directly applicable to widely used mass manufactured LTOI wafer substrates. Figure 2(b) depicts an optical resonance transmission spectrum and fit which indicates an intrinsic loss rate of \(26.8\,\mathrm{MHz}\), which corresponds to a propagation loss of \(\alpha=5.6\,\mathrm{dB\,m^{-1}}\). We also fabricated optical waveguide spirals with a waveguide cross-section of \(1.75\,\mathrm{\SIUnitSymbolMicro m}\)\(\times 0.6\,\mathrm{\SIUnitSymbolMicro m}\) and
found a propagation loss around \(9\,\mathrm{dB}\,\mathrm{m}^{-1}\) (see Supplementary Figure 3). Figure 2(c) depicts the histogram of fitted intrinsic loss rates for the microresonator. The contributions of optical absorption and scattering from bulk and sidewall imperfections can be separated using thermal response spectroscopy [28] (cf. Figure 2(d)). An intensity-modulated pump laser is tuned to the center of the optical resonance and the frequency modulation response of the optical microresonator due to the thermo-optical and Kerr effects is read out with a second laser tuned to the side of another resonance. We model the frequency dependence of the thermal effect due to the optical absorption and the optical Kerr effect using finite-element simulations and fit the combined response [28, 29]. We find that the absorption limit of our LTOI microresonator is \(2.0\,\mathrm{MHz}\), corresponding to an absorption-limited propagation loss of \(0.4\,\mathrm{dB}\,\mathrm{m}^{-1}\), which is close to recent results obtained for LNOI [29]. Therefore, the main source of loss in our tightly confining \(\mathrm{LiTaO_{3}}\) waveguides is scattering.
Figure 1: **Lithium-tantalate-on-insulator (LTOI) substrates and optical waveguides.** (a) Crystallographic unit cell of \(\mathrm{LiTaO_{3}}\). (b) Colorized SEM of LTOI ring resonator. (c) Colorized scanning electron micrograph (SEM) of etched LTOI (blue) waveguide and sidewall. (d) Colorized SEM cross-section of etched LTOI waveguide on top of \(\mathrm{SiO_{2}}\) bottom cladding (purple). (e) Schematic of fabrication workflow for LTOI optical waveguides including diamond-like carbon (DLC) hard mask deposition via plasma enhanced chemical vapor deposition (PECVD) from methane precursor, DLC dry etching via oxygen plasma, \(\mathrm{LiTaO_{3}}\) etching via argon ion beam etching (IBE), followed by redeposition and mask removal. \(\mathrm{LiTaO_{3}}\) is illustrated in blue, \(\mathrm{SiO_{2}}\) in purple, DLC in black, and Si in grey. (f) Schematic of LTOI wafer bonding workflow with H-ion implantation, bonding, annealing, and CMP. (g) Photograph of bonded wafer demonstrating uniform and defect-free bonding. (h) Thickness map of \(\mathrm{LiTaO_{3}}\) thin film on the wafer. (i) Atomic force micrograph of the \(\mathrm{LiTaO_{3}}\) thin film surface. (j) High resolution STEM image of the \(\mathrm{LiTaO_{3}}\)-\(\mathrm{SiO_{2}}\) bonding interface.
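As a consistency check on these numbers (our own back-of-the-envelope sketch; the group index \(n_{g}\approx 2.3\) is an assumed value for the LTOI waveguide, not one quoted here), the intrinsic linewidth converts to a propagation loss via \(\alpha=\kappa_{0}n_{g}/c\):

```python
import numpy as np

C0 = 299_792_458.0  # speed of light, m/s

def linewidth_to_db_per_m(kappa0_over_2pi_hz, n_group):
    """Linear propagation loss from the intrinsic (power) decay rate kappa0:
    alpha [1/m] = kappa0 * n_g / c, then converted from 1/m to dB/m."""
    alpha = 2.0 * np.pi * kappa0_over_2pi_hz * n_group / C0
    return alpha * 10.0 / np.log(10.0)

nu0 = 209.358e12   # resonance frequency from the transmission measurement, Hz
kappa0 = 26.8e6    # intrinsic linewidth kappa0/2pi, Hz
n_g = 2.3          # assumed group index

print(linewidth_to_db_per_m(kappa0, n_g))  # ~5.6 dB/m, matching the quoted loss
print(nu0 / kappa0)                        # intrinsic Q factor, ~7.8e6
```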
The optical birefringence of \(\mathrm{LiTaO_{3}}\) is more than one order of magnitude lower than in the case of \(\mathrm{LiNbO_{3}}\) (cf. Figure 2(e)), which allows the fabrication of thick waveguides without incurring mode mixing between the fundamental modes in waveguide bends [7, 30]. Mode mixing occurs in x-cut \(\mathrm{LiNbO_{3}}\) waveguide bends when the TE mode transitions from the extraordinary (eo) to the ordinary (o) axes above a critical \(\mathrm{LiNbO_{3}}\) thickness that at wavelength \(1.55\,\mathrm{\SIUnitSymbolMicro m}\) lies around \(700\,\mathrm{nm}\) and for wavelength \(1.3\,\mathrm{\SIUnitSymbolMicro m}\) lies around \(600\,\mathrm{nm}\), largely independent of the slab thickness, strongly constraining the design space for optical waveguides. In contrast, the low and positive uniaxial birefringence of \(\mathrm{LiTaO_{3}}\) precludes mode mixing in x-cut waveguides with a horizontal-to-vertical aspect ratio greater than one. We simulate the effective mode indices of the fundamental polarization modes of LNOI and LTOI for a waveguide thickness of \(600\,\mathrm{nm}\), waveguide width of \(2\,\mathrm{\SIUnitSymbolMicro m}\) and wavelength of \(1.25\,\mathrm{\SIUnitSymbolMicro m}\)
as a function of angle between the propagation and the eo crystal axes (cf. Figure 2(g)). For \(\mathrm{LiNbO_{3}}\), we find a crossing of the fundamental TE and TM modes at an angle of \(25^{\circ}\), while no mode crossing is found for an LTOI waveguide with the same dimensions. This observation is in excellent agreement with the results from our optical dispersion measurement \(D_{\mathrm{int}}=\omega_{\mu}-\omega_{0}-D_{1}\cdot\mu\), where \(\mu\) indicates the azimuthal mode index and \(\omega_{0}/2\pi=205\) THz, for LTOI and LNOI waveguides, which are depicted in Figure 2(h) and (i), respectively. Both optical microresonators have comparable anomalous dispersion; however, the dispersion profile of the LTOI microresonator remains smooth over the full measurement span from \(185\,\mathrm{THz}\) to \(240\,\mathrm{THz}\), whereas the LNOI microresonator exhibits striking mode mixing at frequencies above \(215\,\mathrm{THz}\). Adjustments to the waveguide geometry and working wavelength can weaken the mode mixing caused by strong birefringence in LNOI [7, 31]; however, such adjustments require sacrificing optical confinement and chip compactness. In comparison, LTOI offers much lower birefringence, thereby providing greater flexibility in waveguide design and manufacturing and mode-mixing-free operation over all telecommunication bands from 1260 to 1625 nm, encompassing the O to L bands.
Figure 2: **Optical characterization of \(\mathrm{LiTaO_{3}}\) photonic integrated circuits.** (a) Wafer-scale map of mean intrinsic loss \(\kappa_{0}/2\pi\) for similar resonators fabricated using DUV stepper lithography. (b) Normalized resonance transmission spectrum of optical racetrack microresonator at 209.358 THz. (c) Statistical distribution of intrinsic loss \(\kappa_{0}/2\pi\) of optical racetrack microresonator. (d) Nonlinear optical response measurement (solid red) and fit (solid black) of thermo-optical (red dashed) and Kerr (blue dashed) nonlinear response of optical microresonator demonstrating ultra-low optical absorption loss. (e) Illustration of \(\mathrm{LiNbO_{3}}\) (red) strongly negative uniaxial and \(\mathrm{LiTaO_{3}}\) (blue) weakly positive uniaxial crystal birefringence. (f) Illustration of curve angle and fundamental \(\mathrm{TE_{00}}\) and \(\mathrm{TM_{00}}\) mode profiles in LTOI. (g) Numerical simulation of fundamental \(\mathrm{TE_{00}}\) and \(\mathrm{TM_{00}}\) optical mode effective refractive indices of LNOI (blue) and LTOI (red) as a function of the angle between the waveguide and the Y-axis of the x-cut LN(T)OI film. The reduced birefringence of LTOI precludes unwanted birefringent mixing between fundamental \(\mathrm{TE_{00}}\)/\(\mathrm{TM_{00}}\) modes in thick waveguides. (h) Dispersion profile of LTOI racetrack microresonator with cross-section \(2\,\mathrm{\SIUnitSymbolMicro m}\times 0.5\,\mathrm{\SIUnitSymbolMicro m}\) and an \(100\,\mathrm{nm}\) thick slab. (i) Dispersion profile of LNOI racetrack microresonator with similar cross-section and strong mode mixing at frequencies above \(215\,\mathrm{THz}\), which occupies the E-band and O-band in optical communication.
## Electro-optical modulation
To demonstrate the utility of the LTOI platform for electro-optics, we first realize a tunable high-Q microresonator. The resonator has a racetrack design with apex radius \(100\,\mathrm{\SIUnitSymbolMicro m}\) and straight section length \(400\,\mathrm{\SIUnitSymbolMicro m}\) (cf. Figure 3(a)) with a uniform waveguide width of \(2\,\mathrm{\SIUnitSymbolMicro m}\) and pulley-style coupling sections (cf. Figure 3(b)). We applied a voltage across two of the four electrodes to measure the voltage tuning coefficient and measure the resonance position using an external cavity diode laser (ECDL) (cf. Figure 3(c)). We calibrated the laser frequency with a \(250\,\mathrm{MHz}\) phase modulation by detecting the sidebands around the resonance. We find a voltage tuning efficiency of \(255\,\mathrm{MHz}\)/V using a single electrode pair, which corresponds to \(510\,\mathrm{MHz}\)/V if both phase shifter sections are modulated. We also fabricated a 2\(\times\)2 electro-optical switch based on a Mach Zehnder interferometer (MZI) composed of two 2\(\times\)2 multimode interference (MMI) beam splitters at either end and a push-pull optical waveguide phase shifter pair with a length of \(7\,\mathrm{mm}\) (cf. Figure 3(d)). The waveguide width is \(1.5\,\mathrm{\SIUnitSymbolMicro m}\) and the gap between the \(\mathrm{LiTaO_{3}}\) waveguide sidewalls and the gold electrode is \(2\,\mathrm{\SIUnitSymbolMicro m}\) on each side. Metal electrodes are fabricated using a DUV-lithography-based lift-off process that allows us to manufacture electrodes with a thickness of \(800\,\mathrm{nm}\) and with an alignment tolerance below \(100\,\mathrm{nm}\) to the optical waveguide. The transmission through the switch is plotted in Figure 3(g). The switching contrast is \(10\,\mathrm{dB}\), which is commensurate with an imbalance of 5% for the 2\(\times\)2 MMI beam splitters. We further find \(V_{\pi}=6.1\,\mathrm{V}\) (cf. Figure 3(g)), which corresponds to a voltage modulation efficiency of \(V_{\pi}L=4.2\,\mathrm{V}\) cm for the push-pull MZI. The larger dielectric constant of \(\mathrm{LiTaO_{3}}\) (\(\epsilon_{LT}=43\)) is generally considered unfavourable for phase matching between optical and microwave fields [32]. However, in our LTOI platform, as the optical field is tightly localized in the sub-micrometre waveguide region while the electrical field primarily resides in the low-permittivity silica layer (\(\epsilon_{silica}=3.9\)), it is convenient to engineer the RF and optical group velocities independently. Simulation results demonstrate that a conventional design can maintain velocity matching between the optical and the microwave signals at very high microwave frequencies without compromising the electro-optic efficiency (cf. Figure 3(e)). Combining the demonstrated low propagation loss and high electro-optical efficiency of our LTOI platform with a phase-matched electrode transmission line could lead to a bandwidth competitive with the well-researched LNOI platform.
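A quick arithmetic cross-check of the quoted modulator figures (our own sketch, using only numbers stated above):

```python
# Half-wave voltage implied by the measured voltage-length product of the
# 7 mm push-pull Mach-Zehnder modulator
V_pi_L = 4.2          # V*cm
L_cm = 0.7            # phase-shifter length, cm
print(V_pi_L / L_cm)  # 6.0 V, consistent with the measured V_pi = 6.1 V

# Driving both phase-shifter sections doubles the microresonator tuning response
single_pair = 255.0       # MHz/V with one electrode pair
print(2 * single_pair)    # 510 MHz/V, as stated for the push-pull configuration
```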
## Soliton microcomb generation
Finally, we investigate the \(\mathrm{LiTaO_{3}}\) platform for nonlinear microcomb generation. The strong optical confinement, high Q-factor, anomalous dispersion, and substantial Kerr nonlinearity of our LTOI microresonators make them naturally suitable for dissipative Kerr soliton (DKS) generation [33, 34]. We achieved single soliton generation at pulse repetition rates of \(81\,\mathrm{GHz}\) (cf. Figure 4(a)) and \(30.1\,\mathrm{GHz}\) (cf. Figure 4(b)). The optical setup for single soliton generation is depicted in Figure 4(c). We utilized a rapid single sideband tuning scheme pioneered in ref. [35] to overcome thermal nonlinearities and initiate solitons at an on-chip pump power of 90 mW using an external cavity diode laser (ECDL) and an erbium-doped fiber amplifier. The FWHM spectral bandwidth of the \(81\,\mathrm{GHz}\) single soliton is \(4.9\,\mathrm{THz}\), corresponding to a compressed pulse duration of \(63\,\mathrm{fs}\). The 30.1 GHz single soliton state features a bandwidth of \(4.0\,\mathrm{THz}\) and supports a pulse duration of \(71\,\mathrm{fs}\). Various multisoliton states were also achieved, and we depict an example state with three solitons in Figure 4(d). The low repetition rate of our \(30.1\,\mathrm{GHz}\) solitons allows the direct detection of the microwave repetition beat note on a fast photodiode. We measure the phase noise of the microwave beat note using an electrical spectrum analyzer (ESA) and find a phase noise level of \(-86\,\mathrm{dBc}\,\mathrm{Hz}^{-1}\) at an offset frequency of \(10\,\mathrm{kHz}\) and \(-114\,\mathrm{dBc}\,\mathrm{Hz}^{-1}\) at an offset frequency of \(1\,\mathrm{MHz}\), higher than earlier measurements using \(\mathrm{Si_{3}N_{4}}\) optical microresonators [36] and z-cut \(\mathrm{LiNbO_{3}}\)[37]. It is notable that DKS generation was here achieved in an x-cut ferroelectric crystal sample for the first time.
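The quoted pulse durations can be cross-checked against the transform-limited sech\({}^{2}\) time-bandwidth product \(\Delta\tau\,\Delta\nu\approx 0.315\) (our own rough estimate; the sech\({}^{2}\) pulse shape is an assumption, and the 30.1 GHz value comes out slightly above the quoted 71 fs, suggesting that the quoted duration is not derived from this simple product):

```python
SECH2_TBP = 0.315  # time-bandwidth product of a transform-limited sech^2 pulse

for f_rep_ghz, bw_thz in [(81.0, 4.9), (30.1, 4.0)]:
    tau_fs = SECH2_TBP / (bw_thz * 1e12) * 1e15   # FWHM pulse duration, fs
    n_lines = bw_thz * 1e3 / f_rep_ghz            # comb lines within the FWHM bandwidth
    print(f"{f_rep_ghz} GHz soliton: ~{tau_fs:.0f} fs, ~{n_lines:.0f} lines in the FWHM")
```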
## Conclusion
In summary, we have developed lithium tantalate photonic integrated circuits that are low loss, exhibit low birefringence and have comparable properties to Lithium Niobate, but crucially employ a material that is already used today commercially in large volumes for wireless filters, thereby providing a path to scalable manufacturing at low cost. LTOI PICs achieve comparable loss and electro-optical performance to well-established LNOI and have major potential for penetrating
datacenter interconnects [3], long-haul optical communications [4], and quantum photonics [38, 39], to name just a few applications. The use of low cost substrates is of central importance for adoption in applications such as data center interconnects, where the die size is large due to the requirements of low modulator voltage and the length of travelling wave modulator devices. In our work, we not only establish a smart-cut process for the manufacturing of LTOI wafer substrates, but also demonstrate a complete manufacturing process including the etching of \(\mathrm{LiTaO_{3}}\), the removal of the redeposition of etch products on the waveguide sidewall, and the manufacturing of thick metal electrodes for functional electro-optic devices, and demonstrate key performance metrics such as low propagation losses of \(5.6\,\mathrm{dB}\,\mathrm{m}^{-1}\). Our process is fully wafer-scale, based on deep ultraviolet photolithography, and lays the foundation for the scalable manufacturing of high performance electro-optical PICs that can harness the scale of LTOI wafer fabrication for RF filters, which is ongoing both on 150 mm and 200 mm wafer sizes. Our LTOI platform is, in particular, promising for applications that can directly exploit the superior properties of the material, such as the reduced birefringence: our platform is capable of processing signals across all optical communication bands (1260-1620 nm) in a single PIC due to the successful suppression of fundamental mode mixing, and it supports soliton microcomb generation also in X-Cut, in contrast to \(\mathrm{LiNbO_{3}}\), where soliton microcomb generation has only been observed in Z-Cut so far [37, 40], which has complicated the combination of electro-optical and Kerr nonlinearities [41]. LTOI is also equally promising for quantum transduction of single microwave photons [20, 21], which has recently garnered attention to overcome the thermal bottlenecks of interfacing with superconducting quantum computers [42], because the dielectric loss tangent of \(\mathrm{LiTaO_{3}}\)[43] is much lower than that of \(\mathrm{LiNbO_{3}}\)[44].
## Author contributions
C.W., Y.C., and K.H. fabricated the LTOI wafers. Z.L. fabricated the LTOI PICs. J.R. and Z.L. designed the PICs. C.W. and Z.L. characterized the samples. C.W., Z.L., M.C., X.J., G.L., T.B., and W.K. performed optical experiments. J.R., C.W, Z.L. analyzed the data, prepared the figures and wrote the manuscript with input from all authors. T.J.K. and X.Ou supervised the project.
Figure 3: **Electro-optical tuning and switching in LTOI.** (a) Colorized scanning electron micrograph (SEM) of LTOI (blue) racetrack optical microresonator with gold electrodes (yellow). (b) Colorized SEM of pulley resonator and bus waveguide coupling section. (c) Measured resonance shift as a result of tuning voltage. The linear fit indicates a voltage tuning response of 255 MHz/V. Inset left: Schematic of measurement setup for microresonator tuning measurement with phase modulation sideband calibration. Inset right: Electro-optical tuning of LTOI microresonator. Each color step corresponds to an increase in DC tuning voltage of 5 V. (d) Optical micrograph of \(7\,\mathrm{mm}\) long Mach-Zehnder modulator (MZM). (e) Simulation of phase matching between the optical and microwave waves. Inset: schematic cross-section of waveguides and electrodes of the simulated structure. (f) Colorized SEM of MZM waveguides and electrodes. (g) Tuning curve of \(7\,\mathrm{mm}\) long MZM switch indicating a operation voltage of \(V_{\pi}=6.1\) V and voltage-length product \(V_{\pi}L=4.2\) V cm.
## Funding Information
T.J.K. acknowledges funding from the European Research Council grant no. 835329 (ExCOM-cCEO) and from the EU Horizon Europe research and innovation program through grant no. 101113260 (HDLN). J.R. acknowledges funding from the SNSF through an Ambizione Fellowship no. 201923. C.W. acknowledges financial support from China Scholarship Council (No.202104910464). X.O. acknowledges the National Key R&D Program (No.2022YFA-1404601) from the Ministry of Science and Technology of China.
## Data Availability
The code, data, and micrographs used to produce the plots within this work will be released on the repository Zenodo upon publication of this preprint.
## Acknowledgements
The samples were fabricated in the EPFL Center of MicroNanoTechnology (CMi) and the Institute of Physics (IPHYS) cleanroom. The LTOI wafers were fabricated in Shanghai Novel Si Integration Technology (NSIT) Co., Ltd. and the SIMIT-CAS.
## Competing Interests
The authors declare no competing financial interests.
|
2301.03986 | The Riemann problem for equations of a cold plasma | A solution of the Riemann problem is constructed for a nonstrictly hyperbolic
inhomogeneous system of equations describing one-dimensional cold plasma
oscillations. Each oscillation period includes one rarefaction wave and one
shock wave containing a delta singularity. The rarefaction wave can be
constructed in a non-unique way, the admissibility principle is proposed. | Olga S. Rozanova | 2023-01-10T14:36:03Z | http://arxiv.org/abs/2301.03986v2 | # The Riemann problem for equations of a cold plasma
###### Abstract.
A solution of the Riemann problem is constructed for a nonstrictly hyperbolic inhomogeneous system of equations describing one-dimensional cold plasma oscillations. Each oscillation period includes one rarefaction wave and one shock wave containing a delta singularity. The rarefaction wave can be constructed in a non-unique way, the admissibility principle is proposed.
Key words and phrases:Quasilinear hyperbolic system, Riemann problem, non-uniqueness, singular shock, plasma oscillations 2020 Mathematics Subject Classification: Primary 35Q60; Secondary 35L60, 35L67, 34M10
## 1. Introduction
In vector form, the system of hydrodynamic of electron liquid, together with Maxwell's equations, has the form:
\[\begin{split}& n_{t}+\operatorname{div}\left(n\mathbf{V}\right)=0 \,,\quad\mathbf{V}_{t}+\left(\mathbf{V}\cdot\nabla\right)\mathbf{V}=\frac{e}{m }\,\left(\mathbf{E}+\frac{1}{c}\left[\mathbf{V}\times\mathbf{B}\right]\right),\\ &\frac{1}{c}\mathbf{E}_{t}=-\frac{4\pi}{c}en\mathbf{V}+ \operatorname{rot}\mathbf{B}\,,\quad\frac{1}{c}\mathbf{B}_{t}=-\operatorname {rot}\mathbf{E}\,,\quad\operatorname{div}\mathbf{B}=0\,,\end{split} \tag{1}\]
where \(e,m\) are the charge and mass of the electron (here the electron charge has a negative sign: \(e<0\)), \(c\) is the speed of light; \(n,\mathbf{V}\) are the density and velocity of electrons; \(\mathbf{E},\mathbf{B}\) are the vectors of electric and magnetic fields, \(x\in\mathbb{R}^{3}\), \(t\geq 0\), \(\nabla\), \(\operatorname{div}\), \(\operatorname{rot}\) are the gradient, divergence and vorticity with respect to the spatial variables. The system of equations (1) is one of the simplest models of plasma, which is often called the equations of hydrodynamics of "cold" plasma, it is well known and described in sufficient detail in textbooks and monographs (see, for example, [1], [4]).
This system has an important subclass of solutions, dependent only on one space variable \(x\), for which \(\mathbf{V}=\left(V,0,0\right)\), \(\mathbf{E}=\left(E,0,0\right)\), \(\mathbf{B}\equiv 0\), e.g. [3]. In dimensionless form it can be written as
\[n_{t}+\left(n\,V\right)_{x}=0,\quad V_{t}+VV_{x}=-E,\quad E_{t}=n\,V. \tag{2}\]
Assume that the solution is smooth. Then the first and last equations (2) imply \(\left(n+E_{x}\right)_{t}=0.\) For the background density \(n\equiv 1\) we get
\[n=1-E_{x}. \tag{3}\]
This allows us to obtain a hyperbolic system for the two components of the velocity \(V\) and the electric field \(E\) in the form
\[V_{t}+VV_{x}=-E,\quad E_{t}+VE_{x}=V, \tag{4}\]
where \((V,E)=(V(t,x),E(t,x))\), \(t\in\mathbb{R}_{+}\), \(x\in\mathbb{R}\). The density \(n(t,x)>0\) can be found from (3).
System (4), (3) can also be rewritten as a pressureless repulsive Euler-Poisson system [5]
\[n_{t}+(nV)_{x}=0,\quad V_{t}+VV_{x}=\,\nabla\Phi,\quad\Delta\Phi=n-n_{0},\quad n _{0}=1, \tag{5}\]
where \(\Phi\) is a repulsive force potential, \(\nabla\Phi=-E\).
For (4) we consider the Cauchy problem
\[(V,E)|_{t=0}=(V_{0}(x),E_{0}(x)). \tag{6}\]
If the initial data are \(C^{1}\)-smooth functions, then locally in \(t\) there exists a smooth solution of (4), (6). Nevertheless, it is known that the derivatives of the solution of such a Cauchy problem can go to infinity in finite time, which corresponds to the formation of a shock wave; the criterion for the formation of a singularity is known [13]. Thus, it makes sense to consider piecewise-smooth functions as the initial data (6), the simplest example of which is the Riemann initial data
\[(V,E)|_{t=0}=(V_{-}^{0}+[V]^{0}\Theta(x),\,\,E_{-}^{0}+[E]^{0}\Theta(x)), \tag{7}\]
where \(\Theta(x)\) is the Heaviside function, the constants \((V_{-},E_{-})\) are the values to the left of the jump, \(([V],[E])\) are the magnitudes of the jumps, \((V_{+}=V_{-}+[V],\,\,E_{+}=E_{-}+[E])\) are the values to the right of the jump, and \((V_{\pm}^{0},E_{\pm}^{0})\), \(([V]^{0},[E]^{0})\) are the corresponding values at time zero. In this case, the density at the initial moment of time is
\[n|_{t=0}=1-[E]^{0}\delta(x). \tag{8}\]
Since the initial data contain a delta function, the Riemann problem for the components of the solution \((V,E,n)\) is singular and the Rankine-Hugoniot conditions cannot be written in the traditional form [15]. In order to ensure that the density is positive initially, it is necessary to impose the condition \([E]^{0}\leq 0\).
To construct the shock, we write system (4) in the divergent form
\[n_{t}+(Vn)_{x}=0,\qquad\left(\frac{nV^{2}}{2}+\frac{E^{2}}{2}\right)_{t}+\left( \frac{nV^{3}}{2}\right)_{x}=0, \tag{9}\]
corresponding to the laws of conservation of mass and total energy (for example, [6]). System (9) (together with (3)) is equivalent to (4), (3) for smooth solutions.
The Riemann problem (9), (3), (7), (8) is completely non-standard and demonstrates new phenomena in the construction of both a rarefaction wave and a shock wave.
The difficulty in constructing a solution is associated, in particular, with the fact that system (4) is inhomogeneous and does not have a constant stationary state. To the left and right of the discontinuity, the solution is a \(2\pi\)-periodic function of time. This leads to the fact that the rarefaction wave and the shock wave periodically replace each other. Further, system (4) is hyperbolic, but not strictly hyperbolic: it has the form
\[u_{t}+A(u)u_{x}=f(u),\quad u=(V,E),\quad f=(-E,V),\]
the matrix \(A\) has a complete set of eigenvectors with coinciding eigenvalues \(\lambda_{1}=\lambda_{2}=V\). Because of this, it has a subclass of solutions in the form of simple waves, distinguished by the condition
\[V^{2}+E^{2}=C^{2}\]
with a given constant \(C\). We show that this leads to the non-uniqueness of the rarefaction wave for the Riemann problem. Therefore, the question arises as to the principles by which one can single out the "correct" solution. In our work, we choose as correct the solution for which the total energy density is minimal.
When constructing a singular shock wave, we use the homogeneous conservative system of two equations (9), which is linked by the differential relation (3). This formulation has not been encountered before, although a modification of the method previously used for the equations of pressureless gas dynamics with energy [11] can be used to construct a solution to the Riemann problem. The shock wave satisfies the so-called "supercompression" conditions, which are traditionally used to distinguish admissible singular shock waves [15].
The paper is organized as follows. In Sec.2 we discuss the structure of characteristics which is crucial for construction of rarefaction and shock waves. In Sec.3 we construct the rarefaction wave for Riemann data (7) of a general form and then show that for the data corresponding to a simple wave the rarefaction can be constructed non-uniquely. We also propose two variational conditions of admissibility of the rarefaction waves for this case. In Sec.4 we give a definition of the strongly singular solution for an arbitrary piecewise smooth initial data, prove an analog of the Rankine-Hugoniot conditions (Theorem 1), study the mass and energy transfer for a singular shock wave (Theorem 2). Then we construct the singular shock for piecewise smooth initial data (7) and give two examples. The first example corresponds to the case of simple wave, here we compare the result obtained starting from conservative form (9) and the result obtained from a divergence form, natural for the Hopf equation. The second example show how it is possible to construct the shock in the case where the shock position has an extremum on the characteristic plane. Sec.5 contains a discussion about a physical and mathematical sense of the results obtained and mention works concerning shock waves in plasma for other models.
## 2. Characteristics
The equations for the characteristics corresponding to system (4) have the form
\[\frac{dV}{dt}=-E,\quad\frac{dE}{dt}=V,\quad\frac{dx}{dt}=V, \tag{10}\]
whence, first, it follows that along the characteristics
\[\frac{d(V^{2}+E^{2})}{dt}=0, \tag{11}\]
and also, according to (7),
\[\begin{array}{ll}V_{\pm}(t)&=-E_{\pm}^{0}\sin t+V_{\pm}^{0}\cos t,\qquad\quad E _{\pm}(t)=V_{\pm}^{0}\sin t+E_{\pm}^{0}\cos t,\\ x_{\pm}(t)&=V_{\pm}^{0}\sin t+E_{\pm}^{0}(\cos t-1)+x_{0},\qquad x_{0}=0. \end{array}\]
It is easy to see that for \([E]^{0}\neq 0\) the characteristics \(x_{-}(t)\) and \(x_{+}(t)\), corresponding to the states to the left and to the right of the discontinuity, intersect once inside each period \(2\pi\). Therefore, on the part of the period where \(x_{-}(t)<x_{+}(t)\) it is necessary to construct a continuous solution, and on the part where \(x_{-}(t)>x_{+}(t)\), that is, where the characteristics intersect, we construct a shock wave. We denote by \(T_{*}\in(0,2\pi)\) the moment of time at which \(x_{-}(t)=x_{+}(t)\).
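The intersection time \(T_{*}\) is easy to locate numerically from the explicit characteristics above. The following Python sketch (illustrative; the sample data are those of Example 1 in Sec.4) brackets the first sign change of \(x_{+}(t)-x_{-}(t)\) on \((0,2\pi)\) and refines it by root finding:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative Riemann data (those of Example 1 in Sec.4); [E]^0 <= 0 holds.
V0m, E0m = 1.0, 0.0    # left state  (V_-^0, E_-^0)
V0p, E0p = 0.0, -1.0   # right state (V_+^0, E_+^0)

def x_char(t, V0, E0):
    # characteristic starting from x_0 = 0, using the explicit formulas above
    return V0 * np.sin(t) + E0 * (np.cos(t) - 1.0)

def gap(t):
    # x_+(t) - x_-(t); its first zero on (0, 2*pi) is T_*
    return x_char(t, V0p, E0p) - x_char(t, V0m, E0m)

ts = np.linspace(1e-6, 2 * np.pi - 1e-6, 4001)
g = gap(ts)
i = int(np.argmax(g[:-1] * g[1:] < 0))   # first bracket containing a sign change
T_star = brentq(gap, ts[i], ts[i + 1])
print(T_star)                             # pi/2 for these data, cf. Example 1
```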
Fig.1 gives a schematic representation of the behavior of the characteristics, where the rarefaction wave comes first.
Note that (11) implies that the value \(C^{2}=V^{2}+E^{2}\) is constant for each specific characteristic, but in general it is a function of \(t\) and \(x\).
## 3. Construction of a rarefaction wave
Suppose the initial data are such that \(V_{-}^{0}<V_{+}^{0}\), that is, \(x_{-}(t)<x_{+}(t)\), and the initial data (7) first generate a rarefaction wave. Between the characteristics \(x_{-}(t)\) and \(x_{+}(t)\), it is necessary to construct a continuous solution \((V,E)\) connecting the states \((V_{-}(t),E_{-}(t))\) and \((V_{+}(t),E_{+}(t))\). Recall that \(T_{*}\in(0,2\pi)\) denotes the moment of time at which \(x_{-}(t)=x_{+}(t)\).
The rarefaction wave is, of course, not a smooth solution; it satisfies the conservative system (9) with the additional condition (3) in the usual sense of the integral identity.
### The linear profile solution
It is easy to check that a continuous solution \((V,E)\) can be constructed by joining the states \((V_{-}(t),E_{-}(t))\) and \((V_{+}(t),E_{+}(t))\) between the characteristics by means of functions linear in \(x\), i.e.
\[(V,E)=\left\{\begin{array}{ll}(V_{-}(t),E_{-}(t)),&x<x_{-}(t);\\ (V_{r_{1}},E_{r_{1}})=(a(t)x+b(t),c(t)x+d(t)),&x\in[x_{-}(t),x_{+}(t)];\\ (V_{+}(t),E_{+}(t)),&x>x_{+}(t),\end{array}\right. \tag{12}\]
with
\[a(t)=\frac{[E]^{0}\sin t-[V]^{0}\cos t}{-[V]^{0}\sin t+[E]^{0}(1-\cos t)},\quad c (t)=\frac{-[V]^{0}\sin t-[E]^{0}\cos t}{-[V]^{0}\sin t+[E]^{0}(1-\cos t)}, \tag{13}\]
\[b(t)=\frac{(V_{+}^{0}E_{-}^{0}-E_{+}^{0}V_{-}^{0})(1-\cos t)}{-[V]^{0}\sin t+ [E]^{0}(1-\cos t)},\quad d(t)=\frac{(E_{+}^{0}V_{-}^{0}-V_{+}^{0}E_{-}^{0}) \sin t}{-[V]^{0}\sin t+[E]^{0}(1-\cos t)}. \tag{14}\]
Then
\[n=1-c(t)\chi_{(x_{-}(t),x_{+}(t))},\]
where \(\chi_{(x_{-}(t),x_{+}(t))}\) is the characteristic function of the interval \((x_{-}(t),x_{+}(t))\). For \(t\in(0,T_{*})\) the density does not contain a delta function, but the singular component that was present in the initial data is formed again at \(t=T_{*}\).
Figure 1. Characteristics and their intersections: rarefaction waves and shock waves.
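For reference, the linear-profile solution (12)-(14) can be evaluated directly; the following Python sketch (with illustrative data satisfying \(V_{-}^{0}<V_{+}^{0}\) and \([E]^{0}\leq 0\); all names are ours) returns \((V,E)\) at a point of the \((t,x)\) plane for \(t\in(0,T_{*})\):

```python
import numpy as np

# Illustrative data with V_-^0 < V_+^0 (rarefaction first) and [E]^0 <= 0.
V0m, E0m, V0p, E0p = 0.0, 0.0, 1.0, -0.5
dV, dE = V0p - V0m, E0p - E0m                    # [V]^0, [E]^0

def coeffs(t):
    # a(t), b(t), c(t), d(t) from (13)-(14); den is their common denominator
    den = -dV * np.sin(t) + dE * (1.0 - np.cos(t))
    a = (dE * np.sin(t) - dV * np.cos(t)) / den
    c = (-dV * np.sin(t) - dE * np.cos(t)) / den
    b = (V0p * E0m - E0p * V0m) * (1.0 - np.cos(t)) / den
    d = (E0p * V0m - V0p * E0m) * np.sin(t) / den
    return a, b, c, d

def V_E(t, x):
    xm = V0m * np.sin(t) + E0m * (np.cos(t) - 1.0)          # x_-(t)
    xp = V0p * np.sin(t) + E0p * (np.cos(t) - 1.0)          # x_+(t)
    if x < xm:   # left constant state, rotated in time
        return -E0m * np.sin(t) + V0m * np.cos(t), V0m * np.sin(t) + E0m * np.cos(t)
    if x > xp:   # right constant state, rotated in time
        return -E0p * np.sin(t) + V0p * np.cos(t), V0p * np.sin(t) + E0p * np.cos(t)
    a, b, c, d = coeffs(t)
    return a * x + b, c * x + d                              # linear fan (12)
```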
### Simple waves
The system (4) has a subclass of solutions distinguished by the condition
\[V^{2}+E^{2}=C^{2}(\equiv\mathrm{const}) \tag{15}\]
with a given constant \(C\), the so called simple waves. In this case, (4) reduces on smooth solutions to one equation
\[V_{t}+VV_{x}=-\sigma\sqrt{C^{2}-V^{2}},\quad\sigma=\mathrm{sign}(-V_{x})=\pm 1,\quad E=\sigma\sqrt{C^{2}-V^{2}}, \tag{16}\]
moreover, \(V_{x}\) does not vanish on any set of positive measure. The last requirement means that the solution cannot become constant on any interval; at the points at which \(C^{2}=V^{2}\) the value of \(\sigma\) changes its sign to the opposite. The second conservation law in (9) in this situation turns out to be a consequence of the first.
In the initial conditions (7) the values \(E^{0}_{-}\) and \(E^{0}_{+}\) are expressed as \(E^{0}_{-}=\pm\sqrt{C^{2}-(V^{0}_{-})^{2}}\), \(E^{0}_{+}=\pm\sqrt{C^{2}-(V^{0}_{+})^{2}}\), so as to ensure the condition \([E]^{0}\leq 0\).
It is easy to see that a function of the form (12) with an intermediate state \((V_{r_{1}},E_{r_{1}})\) is not a solution to the equation (16). Let us show that in this case another continuous solution can be constructed, with another function \((V_{r_{2}},E_{r_{2}})\) as an intermediate state.
Indeed, the general solution of (16), written implicitly, looks like
\[x-\sigma\sqrt{C^{2}-V^{2}}=F\left(t+\arctan\frac{V}{\sigma\sqrt{C^{2}-V^{2}}} \right),\]
with an arbitrary smooth function \(F\). In order to find the function \(F\) corresponding to the initial data (7), (15), we will construct the function \(X(t,V)\) inverse to \(V_{r_{2}}(t,x)\) for every fixed \(t\in(0,T_{*})\). For \(t=0\) such a function is multivalued.
We require that for \(t=0\) the condition \(X(0,V)=0\) holds for \(V\in(V^{0}_{-},V^{0}_{+})\). Then \(F\) is determined by the condition \(F\big(\arctan\frac{V}{\xi}\big)=-\xi\), \(\xi=\sigma\sqrt{C^{2}-V^{2}}\), that is, \(F(s)=-\sigma C\cos s\). After transformations, we get
\[X_{1}(t,V)=C(\cos q-\cos(q+t)),\ (V_{\pm})_{t}<0\quad(\sigma=1), \tag{17}\] \[X_{2}(t,V)=C(-\cos q+\cos(q-t)),\ (V_{\pm})_{t}>0\quad(\sigma=-1), \tag{18}\] \[q=\arcsin\frac{V}{C}.\]
Note that in each case the monotonicity of \(V\) in \(x\) ensures the existence of an inverse function.
The situation in which the behavior of the solution between the right and left characteristics is given by different formulas is considered separately. Namely, consider the time \(T_{1}\) at which \(V^{\prime}_{-}(t)=0\) and the time \(T_{2}\) at which \(V^{\prime}_{+}(t)=0\). Between \(T_{1}\) and \(T_{2}\) there is a moment \(T_{0}\) at which \(V_{+}(t)=V_{-}(t)\), and therefore the jump disappears. However, at such a point the characteristics do not intersect, that is, \(x_{+}(T_{0})\neq x_{-}(T_{0})\). To construct a continuous solution in such a situation, we need the auxiliary curves \(X_{1}(t,q_{-})\) and \(X_{2}(t,q_{+})\), where \(q_{\pm}=\arcsin\frac{V^{0}_{\pm}}{C}\).
Then for \(t\in(0,T_{*})\) the continuous solution of problem (7), (16), (15) can be written as
\[V_{r_{2}}(t,x)=\left\{\begin{array}{cc}V_{-}(t),&x<X_{-}(t),\\ V_{i}(t,x),&X_{-}(t)<x<X_{+}(t),\\ V_{+}(t),&x>X_{+}(t),\end{array}\right. \tag{19}\]
where
\[X_{-}(t)=\left\{\begin{array}{cc}x_{-}(t),&t<T_{1},\,t>T_{0}\\ X_{2}(t,q_{+}),&t\in[T_{1},T_{0}]\end{array}\right.,\]
\[X_{+}(t)=\left\{\begin{array}{cc}x_{+}(t),&t<T_{0},\,t>T_{2}\\ X_{1}(t,q_{-}),&t\in[T_{0},T_{2}]\end{array}\right.,\]
and \(V_{i}(t,x)\) is the function inverse to \(X_{i}(t,V)\), \(i=1,2\), given by formulas (17), (18).
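Since \(X_{i}(t,\cdot)\) is monotone in \(V\), the inverse function \(V_{i}(t,x)\) can be evaluated by a bracketing root solve; a minimal Python sketch (the constant \(C\) and the bracketing interval are illustrative):

```python
import numpy as np
from scipy.optimize import brentq

C = 1.0   # illustrative simple-wave constant in V^2 + E^2 = C^2

def X1(t, V):
    q = np.arcsin(V / C)
    return C * (np.cos(q) - np.cos(q + t))        # formula (17), sigma = +1

def V1(t, x, Vlo=-0.95, Vhi=0.95):
    # numerical stand-in for the inverse V_1(t, x): X1(t, .) is monotone in V
    # on the bracket, so the root is unique
    return brentq(lambda V: X1(t, V) - x, Vlo, Vhi)

t, V_true = 0.4, 0.3
print(V1(t, X1(t, V_true)), V_true)               # the two values agree
```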
Thus, a continuous solution to the problem (4), (7), (15) can be constructed as
\[(V,E)=\left\{\begin{array}{cc}(V_{-}(t),E_{-}(t)),&x<x_{-}(t),\\ (V_{r_{2}},E_{r_{2}}),&x_{-}(t)<x<x_{+}(t),\\ (V_{+}(t),E_{+}(t)),&x>x_{+}(t),\end{array}\right. \tag{20}\]
where \(V_{r_{2}}\) is given by (19) and \(E_{r_{2}}=\pm\sqrt{C^{2}-V_{r_{2}}^{2}}\), the sign matching the one selected in the initial data (7). Fig.2 presents the construction of the rarefaction wave on the characteristic plane.
### Nonuniqueness of rarefaction wave
Obviously, (12) and (20) are different continuous solutions. Moreover, on their basis it is possible to construct an infinite number of other rarefaction waves. Indeed, one can check that \(V_{r_{2}}\) is an upward convex function, so for \(t=t_{1}\in(0,T_{*})\) we can choose any point \(x_{1}\in(x_{-}(t_{1}),x_{+}(t_{1}))\) and replace \(V_{r_{2}}\) on the segment \((x_{-}(t_{1}),x_{1})\) by a linear function. Next, we find the position of the right endpoint of the linear segment as the solution of the problem \(\dot{x}=V_{r_{2}}(t,x)\), \(x(t_{1})=x_{1}\), for \(t\in(t_{1},T_{*})\). Any number of such linear sections can be inserted.
Figure 2. Characteristics and the values \(V_{-}\) and \(V_{+}\) on the left and right sides of the rarefaction wave.
### Admissibility of the rarefaction wave
1. The question of choosing the "correct" continuous solution can be settled based on the minimality of the total energy of the rarefaction wave
\[\mathbb{E}(t)=\frac{1}{2}\int\limits_{x_{-}(t)}^{x_{+}(t)}(nV^{2}+E^{2})\,dx,\]
see (9).
For the solution \((V_{r_{2}},E_{r_{2}})\)
\[\mathbb{E}(t)=\frac{1}{2}\int\limits_{x_{-}(t)}^{x_{+}(t)}((1-E_{x})(C^{2}-E^{2 })+E^{2})\,dx=\frac{1}{2}(C^{2}\Delta x-C^{2}[E]+\frac{1}{3}[E^{3}]),\]
where \(\Delta x=x_{+}(t)-x_{-}(t)\geq 0\), \([E]=E_{+}-E_{-}=\Delta x+[E]^{0}\), \([E^{3}]=(E_{+})^{3}-(E_{-})^{3}\).
For the solution \((V_{r_{1}},E_{r_{1}})\)
\[\mathbb{E}(t)=\frac{1}{2}\int\limits_{x_{-}(t)}^{x_{+}(t)}((1-c)(ax+b)^{2}+(cx +d)^{2})\,dx,\]
where \(a,b,c,d\) given as (13), (14).
It can be readily computed that
\[\mathbb{E}(V_{r_{2}},E_{r_{2}})-\mathbb{E}(V_{r_{1}},E_{r_{1}})=-\frac{1}{6}[E]^{0}\left(C^{2}-(E_{+}^{0}E_{-}^{0}+V_{+}^{0}V_{-}^{0})\right)=\]
\[-\frac{1}{12}[E]^{0}\left(([E]^{0})^{2}+([V]^{0})^{2}\right)\geq 0,\quad t\in(0,T_{*}).\]
Here we take into account that \([E]^{0}\leq 0\) and \((E_{+})^{2}+(V_{+})^{2}=(E_{-})^{2}+(V_{-})^{2}=C^{2}\).
Thus, if \([E]^{0}<0\), the criterion of smaller total energy \(\mathbb{E}\) forces us to choose \((V_{r_{1}},E_{r_{1}})\).
2. Another way to distinguish an admissible rarefaction wave is the pointwise minimality of the local energy
\[\mathcal{E}(t,x)=V^{2}+E^{2}.\]
Indeed,
\[\mathcal{E}(V_{r_{2}},E_{r_{2}})=V_{r_{2}}^{2}+E_{r_{2}}^{2}=C^{2}\]
is constant by the construction of the solution, whereas
\[\mathcal{E}(V_{r_{1}},E_{r_{1}})=(ax+b)^{2}+(cx+d)^{2}\]
attains its minimum at \(x_{*}(t)=-\frac{ab+cd}{a^{2}+c^{2}}\in(x_{-}(t),x_{+}(t))\). Since \(\mathcal{E}(V_{r_{1}},E_{r_{1}})=C^{2}\) at \(x=x_{\pm}(t)\), then
\[\mathcal{E}(V_{r_{1}},E_{r_{1}})<C^{2}=\mathcal{E}(V_{r_{2}},E_{r_{2}}),\quad x\in(x_{-}(t),x_{+}(t)),\quad t\in(0,T_{*}).\]
Both principles on the basis of which admissible solutions can be distinguished lead to the same conclusion: for the complete system (4), the solution \((V_{r_{1}},E_{r_{1}})\) must be chosen as the rarefaction wave, while when the constraint (15) is imposed, only the possibility \((V_{r_{2}},E_{r_{2}})\) remains.
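Both criteria are straightforward to check numerically; a small Python sketch (illustrative simple-wave data and names) evaluates the closed-form total-energy gap and the interior value of the local energy of \((V_{r_{1}},E_{r_{1}})\):

```python
import numpy as np

# Illustrative simple-wave data on V^2 + E^2 = C^2 with [E]^0 < 0.
C = 1.0
V0m, V0p = -0.6, 0.8
E0m, E0p = np.sqrt(C**2 - V0m**2), -np.sqrt(C**2 - V0p**2)   # 0.8 and -0.6
dV, dE = V0p - V0m, E0p - E0m

# closed-form gap E(V_r2, E_r2) - E(V_r1, E_r1); nonnegative since [E]^0 <= 0
print(-(dE * (dE**2 + dV**2)) / 12.0)                        # ~0.457 here

def coeffs(t):
    # a(t), b(t), c(t), d(t) of the linear profile (13)-(14)
    den = -dV * np.sin(t) + dE * (1.0 - np.cos(t))
    a = (dE * np.sin(t) - dV * np.cos(t)) / den
    c = (-dV * np.sin(t) - dE * np.cos(t)) / den
    b = (V0p * E0m - E0p * V0m) * (1.0 - np.cos(t)) / den
    d = (E0p * V0m - V0p * E0m) * np.sin(t) / den
    return a, b, c, d

t = 0.7
a, b, c, d = coeffs(t)
x_star = -(a * b + c * d) / (a**2 + c**2)    # interior minimiser of the local energy
print((a * x_star + b)**2 + (c * x_star + d)**2, "< C^2 =", C**2)
```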
## 4. Construction of a singular shock wave
We need to build a shock wave for the second part of the period \(2\pi\), \(t\in(T_{*},2\pi)\). However, in order not to complicate the notation, we, without loss of generality, shift the time point \(T_{*}\) to zero. Thus, we are in a situation where the initial data correspond to a shock wave and \(t=T_{*}\) is the point of the first intersection of the characteristics.
Suppose that for \(t\in(0,T_{*})\) we have constructed a solution to the problem as
\[(V_{s},E_{s})=\left\{\begin{array}{ll}(V_{-}(t),E_{-}(t)),&x<\Phi(t),\\ (V_{+}(t),E_{+}(t)),&x>\Phi(t),\end{array}\right. \tag{21}\]
that is, we found the position of the shock wave \(x=\Phi(t)\). Then the density can be found as \(n(t,x)=1-[E]|_{x=\Phi(t)}\delta(x-\Phi(t))\).
Thus, we must take into account the presence of a strongly singular component of the solution. However, before proceeding to the construction of a solution in this case, we will give a general definition of a strongly singular solution and obtain its main properties.
### Definition of a generalized strongly singular solution
Starting from the divergence form (9), we define a generalized strongly singular solution to the problem (9), (6) according to [15].
Let
\[V(t,x) = V_{-}(t,x)+[V(t,x)]|_{x=\Phi(t)}\Theta(x-\Phi(t)), \tag{22}\] \[E(t,x) = E_{-}(t,x)+[E(t,x)]|_{x=\Phi(t)}\Theta(x-\Phi(t)), \tag{23}\] \[n(t,x) = \hat{n}(t,x)+e(t)\delta(x-\Phi(t)), \tag{24}\]
where \([f]=f_{+}-f_{-}\), \(f_{\pm}\) are differentiable functions having one-sided limits, \(t\geq 0\), \(x\in\mathbb{R}\), \(\hat{n}(t,x)=1-\{E_{x}(t,x)\}\), \(\{E_{x}\}\) is the derivative of the function \(E\) at the points at which it exists in the usual sense, \(e(t):=e(t,\Phi(t))\), \(e(t)=-[E(t,x)]|_{x=\Phi(t)}\).
**Definition 4.1**.: _The triple of distributions \((V,E,n)\), given as (22) - (24) and the curve \(\gamma\), given as \(x=\Phi(t),\)\(\Phi(0)=0\), \(\Phi(t)\in C^{1}\), is called a generalized singular solution of the problem (9),_
\[(V,E,n)|_{t=0}=\]
\[(V^{0}_{-}(x)+[V(x)]^{0}\Theta(x),\,E^{0}_{-}(x)+[E(x)]^{0}\Theta(x),\,n^{0}( x)=\hat{n}^{0}(x)+e^{0}\delta(x)),\]
_if for all test functions \(\phi(t,x)\in\mathcal{D}(\mathbb{R}\times[0,\infty))\)_
\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}}\hat{n}(\phi_{t} +V\phi_{x})dxdt+\int\limits_{\gamma}e(t)\frac{\delta\phi(t,x)}{\delta t}\frac{ dl}{\sqrt{1+(\dot{\Phi}(t))^{2}}}+\] \[\int\limits_{\mathbb{R}}\hat{n}^{0}(x)\phi(0,x)dx+e(0)\phi(0,0)=0, \tag{25}\]
\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}}\left((\frac{\hat{n}V^{2}}{2}+E^{2})\phi_{t}+\frac{\hat{n}V^{3}}{2}\phi_{x}\right)dxdt+\int\limits_{\gamma}\frac{e(t)(\dot{\Phi}(t))^{2}}{2}\frac{\delta\phi(t,x)}{\delta t}\frac{dl}{\sqrt{1+(\dot{\Phi}(t))^{2}}}+\] \[\int\limits_{\mathbb{R}}\left(\frac{\hat{n}^{0}(x)(V^{0}(x))^{2}}{2}+(E^{0}(x))^{2}\right)\phi(0,x)dx+\frac{e(0)(\dot{\Phi}(0))^{2}}{2}\phi(0,0)=0, \tag{26}\]
_where \(\int\limits_{\gamma}\cdot dl\) is the curvilinear integral along the curve \(\gamma\), the delta-derivative \(\frac{\delta\phi(t,x)}{\delta t}|_{\gamma}\) is defined as the tangential derivative on the curve \(\gamma\), namely_
\[\frac{\delta\phi(t,x)}{\delta t}\Big{|}_{\gamma}=\left(\frac{\partial\phi(t,x)}{\partial t}+\dot{\Phi}(t)\frac{\partial\phi(t,x)}{\partial x}\right)\Big{|}_{\gamma}=\frac{d\phi(t,\Phi(t))}{dt}=\sqrt{1+(\dot{\Phi}(t))^{2}}\,\frac{\partial\phi(t,x)}{\partial\mathbf{l}},\]
_where \(\mathbf{l}=(-\nu_{2},\nu_{1})=\frac{(1,\dot{\Phi}(t))}{\sqrt{1+(\dot{\Phi}(t)) ^{2}}}\) is a unit vector tangent to \(\gamma\)._
The action of the delta function \(\delta(\gamma)\) concentrated on the curve \(\gamma\) on the test function is defined according to [8], as
\[(\delta(\gamma),\phi(t,x))=\int\limits_{\gamma}\phi(t,x)\frac{dl}{\sqrt{1+( \dot{\Phi}(t))^{2}}},\]
where \(\phi(t,x)\in\mathcal{D}(\mathbb{R}\times[0,\infty))\).
### Rankine-Hugoniot conditions for delta-shock waves (the Rankine-Hugoniot deficit)
**Theorem 1**.: _Let the domain \(\Omega\subset\mathbb{R}^{2}\) be divided by a smooth curve \(\gamma_{t}=\{(t,x):x=\Phi(t)\}\) into the left and right sides \(\Omega_{\mp}\). Let the triple of distributions \((V,E,n)\), given as (22) - (24), and the curve \(\gamma_{t}\) be a strongly singular generalized solution for the system (9). Then this solution satisfies the following analogue of the Rankine-Hugoniot conditions_
\[\frac{d}{dt}e(t) = \left(-[\hat{n}V]+[\hat{n}]\dot{\Phi}(t)\right)\big{|}_{x=\Phi(t)}, \tag{27}\] \[\frac{d}{dt}\frac{e(t)(\dot{\Phi}(t))^{2}}{2} = \left(-\left[\frac{\hat{n}V^{3}}{2}\right]+\left[\frac{\hat{n}V^{2}+E^{2}}{2}\right]\dot{\Phi}(t)\right)\big{|}_{x=\Phi(t)}. \tag{28}\]
The proof of the first statement, (27), is contained in [15]; the proof of (28) repeats the proof of the analogue of the Rankine-Hugoniot conditions for the energy equation in the "pressureless" gas dynamics model [11]. Let us briefly recall it.
We denote \(\mathbf{n}=(\nu_{1},\nu_{2})=\frac{(\dot{\Phi}(t),-1)}{\sqrt{1+(\dot{\Phi}(t)) ^{2}}}\) the unit normal to the curve \(\gamma_{t}\) directed from \(\Omega_{-}\) to \(\Omega_{+}\).
Choose a test function \(\phi(t,x)\) with support \(K\subset\Omega\). Then
\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}}\left((\frac{\hat {n}V^{2}}{2}+E^{2})\phi_{t}+\frac{\hat{n}V^{3}}{2}\phi_{x}\right)dxdt=\] \[\int\limits_{\Omega_{-}\cap K}\left((\frac{\hat{n}V^{2}}{2}+E^{2} )\phi_{t}+\frac{\hat{n}V^{3}}{2}\phi_{x}\right)dxdt+\int\limits_{\Omega_{+} \cap K}\left((\frac{\hat{n}V^{2}}{2}+E^{2})\phi_{t}+\frac{\hat{n}V^{3}}{2} \phi_{x}\right)dxdt.\]
Integration by parts in the second equation of (9) gives
\[\int\limits_{\Omega_{\pm}\cap K}\left((\frac{\hat{n}V^{2}}{2}+E^{2 })\phi_{t}+\frac{\hat{n}V^{3}}{2}\phi_{x}\right)dxdt=-\int\limits_{\Omega_{\pm }\cap K}\left((\frac{\hat{n}V^{2}}{2}+E^{2})_{t}+(\frac{\hat{n}V^{3}}{2})_{x} \right)\phi\,dxdt\mp\] \[\int\limits_{\gamma_{t}}\left(\nu_{2}(\frac{\hat{n}_{\pm}(V_{\pm })^{2}}{2}+(E_{\pm})^{2})+\nu_{1}\frac{\hat{n}_{\pm}(V_{\pm})^{3}}{2}\right) \phi(t,x)dl-\int\limits_{\Omega_{\pm}\cap K\cap\mathbb{R}}\left(\frac{\hat{n}^ {0}(x)(V^{0}(x))^{2}}{2}+(E^{0})^{2}\right)\phi(0,x)dx.\]
Thus,
\[\int\limits_{0}^{\infty}\int\limits_{\mathbb{R}}\left((\frac{\hat{n}V ^{2}}{2}+E^{2})\phi_{t}+\frac{\hat{n}V^{3}}{2}\phi_{x}\right)dxdt+\] \[\int\limits_{\Omega_{\pm}\cap K\cap\mathbb{R}}\left(\frac{\hat{n} ^{0}(x)(V^{0}(x))^{2}}{2}+(E^{0})^{2}\right)\phi(0,x)dx=\] \[-\int\limits_{\gamma_{t}}\left(\left[\frac{\hat{n}V^{2}}{2}+E^{2} \right]\nu_{2}+\left[\frac{\hat{n}V^{3}}{2}\right]\nu_{1}\right)\phi(t,x)dl. \tag{29}\]
Further,
\[\int\limits_{\gamma}\frac{e(t)(\dot{\Phi}(t))^{2}}{2}\frac{\delta \phi(t,x)}{\delta t}\frac{dl}{\sqrt{1+(\dot{\Phi}(t))^{2}}}=\] \[-\int\limits_{\gamma}\frac{\delta}{\delta t}\left(\frac{e(t)( \dot{\Phi}(t))^{2}}{2}\right)\phi(t,x)\frac{dl}{\sqrt{1+(\dot{\Phi}(t))^{2}}} -\frac{e(0)(\dot{\Phi}(0))^{2}}{2}\phi(0,0). \tag{30}\]
Adding (29) and (30), taking into account (26), we get
\[\int\limits_{\gamma_{t}}\left(\left[\frac{\hat{n}V^{2}}{2}+E^{2}\right]\nu_{2 }+\left[\frac{\hat{n}V^{3}}{2}\right]\nu_{1}-\frac{\delta}{\delta t}\left( \frac{e(t)(\dot{\Phi}(t))^{2}}{2}\right)\frac{1}{\sqrt{1+(\dot{\Phi}(t))^{2}} }\right)\phi(t,x)dl=0\]
for any \(\phi\in\mathcal{D}(\Omega)\). This implies (28).
We see that the generalized Rankine-Hugoniot conditions form a system of second-order ordinary differential equations; therefore, to solve the Cauchy problem for (4) (in the divergence form (9)) with data (22), (23), one should prescribe the initial velocity of the shock position \(\dot{\Phi}(0)\).
Since the system (4) has coinciding eigenvalues \(\lambda_{1}(V)=\lambda_{2}(V)=V\), the admissibility condition for a singular shock wave coincides with the geometric entropy condition:
\[\min\{V_{-},V_{+}\}\leq\dot{\Phi}(t)\leq\max\{V_{-},V_{+}\}, \tag{31}\]
meaning that characteristics from both sides come to the shock.
As we will see below, this condition allows us to obtain a condition on the derivative at an intermediate point and to construct the solution of the Riemann problem in a unique way. In addition, a terminal point arises in this problem, at which the trajectory of the delta-shaped singularity must arrive; this also constrains the problem.
### Mass and energy transfer ratios for a singular shock
Suppose that \((V,E)\) is a compactly supported classical solution of system (4). Then, according to (9), the total mass
\[\int\limits_{\mathbb{R}}\,n(t,x)dx\]
and the total energy
\[\frac{1}{2}\int\limits_{\mathbb{R}}\left(n(t,x)V^{2}(t,x)+E^{2}(t,x)\right)dx\]
are conserved. Note that the total energy consists of kinetic and potential parts. Let us show that in order to obtain analogs of these conservation laws for a strongly singular solution, it is necessary to introduce the mass and energy concentrated on the shock. Suppose that the line of discontinuity \(x=\Phi(t)\) is a smooth curve.
We denote
\[\mathcal{M}(t) =\int\limits_{-\infty}^{\Phi(t)}n(t,x)dx+\int\limits_{\Phi(t)}^{+ \infty}n(t,x)dx,\] \[\mathcal{E}_{k}(t) =\frac{1}{2}\,\left(\int\limits_{-\infty}^{\Phi(t)}n(t,x)V^{2}(t,x)\,dx+\int\limits_{\Phi(t)}^{+\infty}n(t,x)V^{2}(t,x)\,dx\right),\] \[\mathcal{E}_{p}(t) =\frac{1}{2}\,\int\limits_{-\infty}^{+\infty}E^{2}(t,x)\,dx,\]
the mass, kinetic and potential energies concentrated outside the shock. We interpret the amplitude \(e(t)\) and the term \(\frac{1}{2}e(t)(\dot{\Phi}(t))^{2}\) as the mass \(m(t)\) and kinetic energy \(w(t)\) concentrated on the shock.
**Theorem 2**.: _Let the solution (22) - (24) be a strongly singular solution to the system (9). Then the following relations of balance take place:_
\[\dot{m}(t)=-\dot{\mathcal{M}}(t),\quad\dot{w}(t)=-\dot{\mathcal{E}}(t), \tag{32}\] \[\mathcal{M}(t)+m(t)=\mathcal{M}(0)+m(0),\quad\mathcal{E}(t)+w(t)=\mathcal{E}(0)+w(0). \tag{33}\]
Proof. Both equalities in (32) can be proved in the same way. Let us prove, for example, the first of them. Since
\[\dot{\mathcal{M}}(t)=-[\hat{n}]\big{|}_{x=\Phi(t)}\dot{\Phi}(t)+ \left(\int\limits_{-\infty}^{\Phi(t)}+\int\limits_{\Phi(t)}^{+\infty}\right) \,n_{t}(t,x)dx=\] \[-[\hat{n}]\big{|}_{x=\Phi(t)}\dot{\Phi}(t)-\left(\int\limits_{- \infty}^{\Phi(t)}+\int\limits_{\Phi(t)}^{+\infty}\right)\,(n(t,x)V(t,x))_{x}dx=\] \[-[\hat{n}]\big{|}_{x=\Phi(t)}\dot{\Phi}(t)+[\hat{n}V]\big{|}_{x= \Phi(t)},\]
together with (27) this equality shows that
\[\dot{m}(t)+\dot{\mathcal{M}}(t)=0,\]
whence the first equalities in (32) and (33) follow.
### Constructing a strongly singular shock wave for piecewise constant initial data
We proceed to constructing a strongly singular solution in our particular case.
Since in this situation \(\hat{n}=1\) and, accordingly, \([\hat{n}]=0\), the equations according to which it is possible to determine the amplitude and location of a strongly singular
shock wave are greatly simplified and take the form
\[\dot{e}(t) = -[V]\big{|}_{x=\Phi(t)}, \tag{34}\] \[\frac{d}{dt}e(t)(\dot{\Phi}(t))^{2} = \left(-\left[V^{3}\right]+\left[V^{2}+E^{2}\right]\dot{\Phi}(t)\right)\big{|}_{x=\Phi(t)}. \tag{35}\]
Since the values of \(V_{\pm}(t)\) and \(E_{\pm}(t)\) are known (see (12)), the values of jumps can be calculated directly:
\[\left[V^{3}\right]\big{|}_{x=\Phi(t)}=(([E]^{0})^{3}-3[EV^{2}]^{0 })\cos^{2}t\sin t\] \[+(([V]^{0})^{3}-3[E^{2}V]^{0})\cos^{3}t+3[EV^{2}]^{0}\cos t-([E]^ {0})^{3}\sin t,\]
and
\[\left[V^{2}+E^{2}\right]\big{|}_{x=\Phi(t)}=\left[V^{2}+E^{2}\right]^{0} \big{|}_{x=x_{0}}=K=\text{const}.\]
Therefore, from (27) we find
\[e(t)=-[V]^{0}\sin t-[E]^{0}\cos t. \tag{36}\]
Note that if \(e(0)>0\), then the amplitude \(e(t)\) remains positive for all \(t\) for which the shock wave exists.
On the interval \((0,T_{*})\) there is a point \(t_{*}\) at which \(V_{-}(t_{*})=V_{+}(t_{*}):=U\). Then from the admissibility condition (31) we have \(V_{-}(t_{*})=\dot{\Phi}(t_{*})=V_{+}(t_{*})=U\). It can be readily found that
\[T_{*}=2\arctan\frac{[V]^{0}}{[E]^{0}},\quad t_{*}=\frac{T_{*}}{2},\quad U= \frac{E_{-}^{0}V_{+}^{0}-E_{+}^{0}V_{-}^{0}}{\sqrt{([V]^{0})^{2}+([E]^{0})^{2 }}}.\]
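These closed-form values are easy to cross-check; the short Python sketch below reproduces, for the data of Example 2 given later, the numerical values quoted there (for these data the principal branch of the arctangent already gives \(T_{*}\in(0,2\pi)\)):

```python
import numpy as np

# Data of Example 2 below, used as an illustrative cross-check.
V0m, V0p, E0m, E0p = 1.0, 0.5, 1.0, 0.9
dV, dE = V0p - V0m, E0p - E0m

T_star = 2.0 * np.arctan(dV / dE)             # principal branch suffices here
t_star = T_star / 2.0
U = (E0m * V0p - E0p * V0m) / np.hypot(dV, dE)
K = (V0p**2 + E0p**2) - (V0m**2 + E0m**2)     # [V^2 + E^2]^0
e0 = -dE                                       # e(0) from (36) at t = 0
print(T_star, U, K, e0)   # ~2.74680, ~-0.78446, -0.94, 0.1 (cf. Example 2)
```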
We denote \(\dot{\Phi}(t)=q\) and \((\dot{\Phi}(t))^{2}=Q(t)\). From (34), (35) we get the Cauchy problems
\[\dot{q} = \frac{-\left[V^{3}\right]+Kq-\dot{e}q^{2}}{2eq},\quad q(t_{*})=U, \tag{37}\]
and
\[\dot{Q} = -\frac{\left[V^{3}\right]}{e}+\operatorname{sign}U\,\frac{K}{e} \sqrt{Q}-\frac{\dot{e}}{e}Q,\quad Q(t_{*})=U^{2}. \tag{38}\]
The solutions to problems (37), (38) cannot be found explicitly, however, they always exist for \(q\neq 0\). If at a certain point \(t=t_{0}\in(0,T_{*})\), we have \(q\to 0\), then, as follows from (37),(38), \(\dot{q}\to\infty\), \(\dot{Q}\to c=\text{const}\neq 0\). Indeed, if \(c=0\), then \(\left[V^{3}\right]|_{t=t_{0}}=0\). Nevertheless, it is easy to check that \(\left[V^{3}\right]=0\) if and only if \(t=t_{*}\). This implies that \(q=O(\sqrt{|t-t_{0}|})\), as \(t\to t_{0}\neq t_{*}\).
Thus, if there exists a point \(t=t_{0}\in(0,T_{*})\) such that \(Q(t_{0})=0\), we first find the unique solution to (38) on the first segment, \((0,t_{0})\) or \((t_{0},T_{*})\) (the segment must contain \(t_{*}\)), and then find the unique solution to the Cauchy problem
\[\dot{Q} = -\frac{\left[V^{3}\right]}{e}-\operatorname{sign}U\,\frac{K}{e}\sqrt{Q}-\frac{\dot{e}}{e}Q,\quad Q(t_{0})=0,\]
on the second segment. Then we find \(q\) on both segments.
Let us note that if \(\Phi(t)\) has an extremum on \((0,T_{*})\), then \(\Phi(t)\in C^{1}(0,T_{*})\) and can be found uniquely; however, \(\ddot{\Phi}(t_{0})\) does not exist.
### Examples
1. We start with the case (15), when system (4) reduces to the single equation (16), and one of the possible conservative forms is
\[V_{t}+(\frac{V^{2}}{2})_{x}=-\sigma\sqrt{C^{2}-V^{2}},\quad\sigma=\text{sign}(-V _{x})=\pm 1,\]
which does not require an introduction of a singular shock. The position of a usual shock is defined by the Rankine-Hugoniot condition and gives
\[\dot{\Phi}(t)=\frac{V_{-}(t)+V_{+}(t)}{2}. \tag{39}\]
Let us choose the initial data as
\[V_{-}^{0}=1,\,V_{+}^{0}=0,\,E_{-}^{0}=0,\,E_{+}^{0}=-1.\]
Here \(T_{*}=\frac{\pi}{2}\), \(K=U=0\) and \(e(0)=1\).
Fig.3, left, presents the behavior of the velocity \(\dot{\Phi}(t)\) of the singular shock satisfying the geometric entropy condition (31) (solid), in comparison with the velocity of shock based on the Rankine-Hugoniot condition (39) (dash). One can see that the difference is very small. Fig.3, center, presents the position of the singular shock between characteristics (solid), in comparison with the Rankine-Hugoniot shock (dash), the difference is almost negligible. Fig.3, right, shows the zoom of this difference near the origin.
2. The next example is for the following data:
\[V_{-}^{0}=1,\,V_{+}^{0}=0.5,\,E_{-}^{0}=1,\,E_{+}^{0}=0.9.\]
Here \(T_{*}=2.746801534\), \(U=-0.7844645404\), \(K=-0.94\) and \(e(0)=0.1\). This example is interesting, since \(\dot{\Phi}(t)\) changes sign at a point \(t_{0}=0.69174927\neq t_{*}\). Fig.4, left, presents the behavior of the velocity \(\dot{\Phi}(t)\) of the singular shock satisfying the geometric entropy condition (31); Fig.4, right, presents the position of the singular shock between the characteristics.
The solutions in the examples are found numerically by means of the Runge-Kutta-Fehlberg method of fourth-fifth order.
Figure 3. Velocity (left) and position (center and right) of the singular shock (solid) vs. the velocity and position of the Rankine-Hugoniot shock (dash) for Example 1.
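A minimal Python sketch of this computation (using SciPy's adaptive RK45 in place of the Runge-Kutta-Fehlberg routine; the jumps are evaluated directly from \(V_{\pm}(t)\), the data are those of Example 2, and the integration is stopped before \(T_{*}\); a continuation through a zero of \(q\) would require the \(Q\)-formulation (38), as described above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Data of Example 2; for Example 1 one has U = 0, so (37) is singular at t_*
# and the Q-equation (38) must be used instead.
V0m, V0p, E0m, E0p = 1.0, 0.5, 1.0, 0.9
dV, dE = V0p - V0m, E0p - E0m
K = (V0p**2 + E0p**2) - (V0m**2 + E0m**2)
T_star = 2.0 * np.arctan(dV / dE)
t_star = T_star / 2.0
U = (E0m * V0p - E0p * V0m) / np.hypot(dV, dE)

def V_side(t, V0, E0):
    return -E0 * np.sin(t) + V0 * np.cos(t)

def e(t):    return -dV * np.sin(t) - dE * np.cos(t)    # amplitude (36)
def edot(t): return -dV * np.cos(t) + dE * np.sin(t)

def rhs(t, y):
    q = y[0]
    jump_V3 = V_side(t, V0p, E0p)**3 - V_side(t, V0m, E0m)**3
    return [(-jump_V3 + K * q - edot(t) * q**2) / (2.0 * e(t) * q)]   # (37)

# integrate q = dPhi/dt forward from t_*, where q(t_*) = U != 0;
# Phi(t) itself is then recovered by quadrature from Phi(0) = 0
sol = solve_ivp(rhs, (t_star, 0.99 * T_star), [U], rtol=1e-8, atol=1e-10)
print(sol.t[-1], sol.y[0, -1])
```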
## 5. Discussion
1. We show that the reduced equations of a cold plasma provide the simplest example of an inhomogeneous system in which the solution of the Riemann problem consists of a rarefaction wave and a shock wave periodically replacing each other. The system is not so interesting from a physical point of view, since it is believed that the cold plasma equations are valid only for smooth solutions [3]. However, it is extremely interesting mathematically. Indeed, first, due to the non-strict hyperbolicity of the system, one can construct an example of non-uniqueness of the rarefaction wave. Second, the natural conservative form of the system can be used to construct a singular shock wave.
2. The solution of the Cauchy problem (4), (7) can be rewritten in terms of the solution of the Euler-Poisson equation (5) with discontinuous data \((n_{0},V_{0})\).
3. A similar procedure for solving the Riemann problem can also be applied to other non-strictly hyperbolic systems written initially in a non-divergent form. The method consists in introducing an "artificial density", which makes it possible to write the system in a conservative form and to define a strongly singular solution.
4. The appearance of multiple rarefaction waves was noticed earlier in other models, for example, [9].
5. The presence of pressure apparently prevents the existence of a strongly singular solution [7]; in other words, the situation is similar to the effect of adding pressure to the pressureless gas dynamics model.
6. It should be noted that there are different plasma models and a huge amount of literature devoted to shock waves in them. One of the popular models is the Vlasov-Maxwell system, which describes a collisionless ionized plasma [10], [14], [12], [2]. A different underlying assumption about the plasma naturally changes the properties of the shock waves.
7. The non-strictly hyperbolic system considered here has a simple-wave solution (an invariant manifold), which makes it related to systems of the Temple class [16]. However, the most interesting features of the solution of the Riemann problem for the system of cold plasma equations are rooted in its inhomogeneity, while the equations of the Temple class are homogeneous and have constant states to the left and right of the shock wave or rarefaction wave.
## Acknowledgements
Supported by the Moscow Center for Fundamental and Applied Mathematics under the agreement 075-15-2019-1621. The author thanks her former student Darya Kapridova for numerical calculations confirming the construction of a rarefaction wave in the case of a simple wave solution. The author expresses her sincere gratitude to the anonymous referee for careful reading.
Figure 4. Velocity (left) and position (right) of the singular shock for Example 2.
|
2303.03127 | ST-KeyS: Self-Supervised Transformer for Keyword Spotting in Historical
Handwritten Documents | Keyword spotting (KWS) in historical documents is an important tool for the
initial exploration of digitized collections. Nowadays, the most efficient KWS
methods are relying on machine learning techniques that require a large amount
of annotated training data. However, in the case of historical manuscripts,
there is a lack of annotated corpus for training. To handle the data scarcity
issue, we investigate the merits of the self-supervised learning to extract
useful representations of the input data without relying on human annotations
and then using these representations in the downstream task. We propose
ST-KeyS, a masked auto-encoder model based on vision transformers where the
pretraining stage is based on the mask-and-predict paradigm, without the need
of labeled data. In the fine-tuning stage, the pre-trained encoder is
integrated into a siamese neural network model that is fine-tuned to improve
feature embedding from the input images. We further improve the image
representation using pyramidal histogram of characters (PHOC) embedding to
create and exploit an intermediate representation of images based on text
attributes. In an exhaustive experimental evaluation on three widely used
benchmark datasets (Botany, Alvermann Konzilsprotokolle and George Washington),
the proposed approach outperforms state-of-the-art methods trained on the same
datasets. | Sana Khamekhem Jemni, Sourour Ammar, Mohamed Ali Souibgui, Yousri Kessentini, Abbas Cheddad | 2023-03-06T13:39:41Z | http://arxiv.org/abs/2303.03127v1 | # _ST-Key_S: Self-Supervised Transformer for Keyword Spotting in Historical Handwritten Documents
###### Abstract
Keyword spotting (KWS) in historical documents is an important tool for the initial exploration of digitized collections. Nowadays, the most efficient KWS methods are relying on machine learning techniques that require a large amount of annotated training data. However, in the case of historical manuscripts, there is a lack of annotated corpus for training. To handle the data scarcity issue, we investigate the merits of the self-supervised learning to extract useful representations of the input data without relying on human annotations and then using these representations in the downstream task. We propose ST-KeyS, a masked auto-encoder model based on vision transformers where the pretraining stage is based on the mask-and-predict paradigm, without the need of labeled data. In the fine-tuning stage, the pre-trained encoder is integrated into a siamese neural network model that is fine-tuned to improve feature embedding from the input images. We further improve the image representation using pyramidal histogram of characters (PHOC) embedding to create and exploit an intermediate representation of images based on text attributes. In an exhaustive experimental evaluation on three widely used benchmark datasets (Botany, Alvermann Konzilsprotokolle and George Washington), the proposed approach outperforms state-of-the-art methods trained on the same datasets.
keywords: Keyword spotting, masked autoencoders, self-supervised learning, visual transformers, siamese neural networks, PHOC embedding.
## 1 Introduction
Many digitized historical handwritten documents are hard to access due to the lack of suitable indexing and retrieval tools. One solution is to transform the image into text, either manually or with an automatic recognition tool. However, fully manual transcription is time-consuming and expensive, and current optical character recognition (OCR) systems are mostly efficient for modern printed documents [1; 2]. These systems struggle with historical manuscripts, which usually contain many types of degradation due to bad paper quality, writing style variations, ink flow, shadows, non-uniform lighting, stains, etc.
Another solution to handle the non-indexed documents is the word-matching process, which is based on a low-level matching known as _word spotting_ [3]. It is closely associated with content-based image retrieval, since it searches for a word in a set of non-indexed documents using the query image content as the only source of information. As a result, the system returns a ranked list of document word images to the user according to their similarity to the desired searched word image. Ultimately, word spotting can be defined as the process of identifying positions on a document image that have a high potential to correspond to an instance of a query word image, without recognizing it explicitly.
In the literature, two distinct strategies for word spotting appeared depending on the search space, which could be either a set of isolated/segmented word images (segmentation-based approach) or a full document image (segmentation-free approach). In this work, we assume that document images have been segmented into isolated word images serving thereafter for matching the query word image, following previous studies such as [4; 5; 6; 7]. Localizing and retrieving word images has been an active field of research, with the goal of finding the most effective word representations that give the most meaningful distances between word images. Early approaches used classic learning-free techniques that rely on expert-designed feature embeddings (handcrafted features) [6; 8]. These methods can be applied immediately since they do not rely on a learning stage.
Later, and following the success of other computer vision tasks, machine learning-based techniques (in particular convolutional neural networks (CNNs)) are now dominating the field of word retrieval [9; 10; 11; 12; 13; 14]. But, despite the significant improvement in the performance of these models
over classical handcrafted techniques, they have their associated shortcomings. First, CNNs operate on regular grids and use the same convolutional filter to extract features from handwritten word images, making this technique sensitive to rotation. Second, CNNs fail to capture relevant features for long-range dependencies, as they are more suited to extract low-level spatial information from images.
With the recent success of transformers in natural language processing (NLP) [15; 16], their application in computer vision tasks (such as object detection [17], image recognition [18], question answering [19], image restoration [20], handwritten text recognition (HTR) [21; 22], named entity recognition [23], etc.) has also lately been getting more attention. The self-attention mechanism that has been proposed in [15] allows capturing contextual feature interactions information. This use of local knowledge combined with the information of the global long-range spatial arrangement is beneficial for an efficient keyword spotting (KWS) model.
However, the main issue of transformers is that they are data-hungry. Huge annotated datasets are hard to obtain, and labeling large amounts of data is labor-intensive and can be very expensive, while unlabeled data is much more abundant. Thus, building models that benefit from the unlabeled data in addition to the annotated data, or that even omit the use of any annotated data, becomes more appropriate in this situation. In this regard, self-supervised techniques based on ViT (Vision Transformer) models have recently shown significant results.
Motivated by this success in computer vision, such as image classification and object detection [24; 25], image retrieval [26] and speech recognition [27] tasks, we propose in this paper an end-to-end keyword spotting approach in handwritten documents which is based on a self-supervised technique and makes use of masked autoencoders with the self-attention mechanism. To the best of our knowledge, this is the first work that proposes a self supervised learning paradigm of transformer based model in the context of word spotting in historical document images. Our framework is built in two stages. The first stage is the pretraining which is designed for learning useful representations from the unlabeled data, using a masked encoder-decoder architecture, in a self-supervised way. The second stage is the fine-tuning which is devoted to efficiently extracting relevant features from the labeled word image. In the fine-tuning stage, the pretrained encoder is integrated to a siamese neural network (SNN) and is fine-tuned using a few labeled data to improve feature embeddings from the input text images. Then, the resulting model is used as a core component of our word spotting framework by including contextual information extracted from the word text. To achieve
this, we align the visual representations extracted by our encoder with the pyramidal histogram of characters (PHOC) representation. To demonstrate the effectiveness of the proposed method, we conduct several experiments using various training conditions and several public image databases.
The overall contributions of this paper can be summarized as follows:
* To the best of our knowledge, we present the first self-supervised approach for the goal of keyword spotting in handwritten text images, composed of pretraining and fine-tuning stages. The approach is based on vision transformers in an encoder-decoder fashion, without any dependency on CNNs.
* An effective pretext task was learned during pretraining to extract the most useful representations for keyword spotting without the need of any labeled data.
* Then, a two-stage downstream task based on siamese neural networks and PHOC attributes is used to further promote the representation and makes it more powerful in retrieving the best matching images of a given query image.
* Extensive comparative experiments are achieved to validate the efficiency of our proposed method, involving three handwritten word images datasets. We demonstrate that our proposed method can be generalized across different databases and languages. We show also the effectiveness of our method to deal with data variability as well as data scarcity issues.
The rest of this paper is organized as follows. In Section 2 we provide a review of prior works on keyword spotting for segmented documents. Then we introduce our proposed model in Section 3. After that, experimental results and comparisons with existing methods will be described in Section 4. Finally, in Section 5 we draw the conclusions and we propose open challenges for future research directions.
## 2 Related work
Keyword spotting in handwritten documents has drawn the interest of the document analysis research community over the last decades [3], and it remains challenging given the complexity of historical documents (diverse scripts, various writing styles, diverse noise, etc.). In the literature, there are several successful efforts that focus on different aspects of the word
spotting issues like the data modality (printed, handwritten or scene text), method type (segmentation-based and segmentation free), and the embedding (representation) type. In the following sub-sections, we categorize the related methods into two main families depending on the used representations: learning-free and learning-based techniques.
### Learning-free representations
Classical KWS methods were learning-free; different methods were built to find the best handcrafted matching features, or representations. Earlier works considered handwritten word images as temporal sequences to build a variable-length embedding representation. Most of these methods are based on profile features [28], computed for each column of the word image using diverse statistics at the pixel level. Within this scope, the Dynamic Time Warping (DTW) algorithm was employed to match representations of variable length, inspired by its usefulness for sequence matching problems in speech. In [29; 30], the authors combined DTW with the profile features (vertical profile, upper and lower word profile, and background-to-ink transitions) for better accuracy and faster retrieval. However, these simple structural features led to unsatisfactory accuracy. Thus, statistical features, especially local gradient ones such as SIFT (Scale-invariant feature transform) [31] and HOG (Histogram of oriented gradients) [32], were employed for word spotting.
In [33; 34], the authors propose a Bag Of Words (BOW) method which uses the SIFT descriptors to extract the local features, then projects them to a topic space that conserves the lexical content of the word images. A similar approach with an addition of the corner detector features was also introduced in [35]. Another approach that uses the projections of oriented gradients as descriptors was proposed in [36]. A sequence of descriptors based on the combination of a zoning scheme and an appearance descriptor (or, modified Projections of Oriented Gradients) was also used in [6] to represent the word images. In [37; 38], document images were represented with a grid of HOG descriptors, then the document regions that are most similar to a query word are located in a sliding-window fashion. After that, a second stage of re-ranking the best retrieved regions using a more discriminative BOW representation was applied. In [4], a representation to embed the word image to its corresponding text label in the same space using the developed PHOC was introduced. Additionally, recently, some graph based approaches have been introduced. In [39], the text images were represented with a graph structure, then graph matching was applied for the spotting.
Despite the simplicity of the learning-free approaches and the ability to be adapted and applied in different domains, their results are still unsatisfactory. Thus, learning-based approaches have since been in vogue.
### Learning-based representations
With the advances in machine learning, especially in the deep learning field, features are now learned within the model instead of handcrafting them. Nowadays, machine learning based word spotting models are learning the features using labeled data. Most of the developed approaches within this strategy are using CNNs. In [40], a CNN based model is employed to detect words regions in natural images, then it recognizes these detected words. Another model called PHOCNet [41] was proposed to embed the image features extracted by the CNN layers into the PHOC representation. The representations are calculated by applying a sigmoid on the final fully connected layer. This latter work was extended in [42; 9] to improve the spotting accuracy. Another PHOC based model was introduced in [43], where instead of a distance-based matching of the retrieved words, a probabilistic ranking was used for better performance. In [5], a model called HWNet was introduced to match image collections containing handwritten content, where the representations were learned using a CNN. The HWNet representations were also used in [44; 45] by embedding them into the word attribute space. This was done by training a classifier to project both image and textual attributes to a common subspace. In [11], a triplet loss based CNN approach was proposed to learn the representations by reducing the distance of the anchor image to a similar (positive) image while enlarging the distance to a different (negative) word image.
Other approaches were proposed based on graphs, for instance, [7] learned the representations by a Graph Neural Network (GNN). A message passing neural network was employed to capture the graph structure, that is used for the distance computation.
### Self-Supervised Learning in document processing/analysis
Learning-based approaches are effective when a large amount of labeled data is available. However, with the continual growth of deep learning architectures, they start overfitting on the usual datasets and requiring more samples, which is a challenging problem. Thus, a recent development in self-supervised learning [17; 25; 24], especially with the rise of transformers architectures [15; 18], is now appearing as a solution. Self-supervised methods aim to benefit from a huge amount of unlabeled data that can be added to
the labeled datasets for training. Nowadays, several document analysis approaches have been proposed following this strategy. These approaches were developed, for instance, for document understanding in terms of classification, layout analysis or entity extraction [46; 47; 48; 49] with different pretraining objectives (text-image matching, text-image alignment, masked visual-language modeling, document reconstruction, etc.). These models were proposed for optical text images and use an OCR for the pretraining. Thus, their learned representation cannot be used for fine-tuning our model on handwritten KWS. Other approaches were developed for handwritten/scene text recognition by learning a representation that considers the text image as a sequence of characters in a sequence-to-sequence contrastive learning fashion. In [50], the authors first applied transformations to each unlabeled word image. Then, the feature maps are divided into different instances, over which the contrastive loss is computed, where from each image several positive pairs and multiple negative examples are extracted. However, the segmentation of words was not accurate due to the use of unlabeled data, which makes the method learn a representation of a sequence of "word parts" rather than the actual characters. In [51], a similar approach was proposed by concatenating different unlabeled words to produce two views; then their aligned features are used with a contrastive loss to pull together the positive samples and push apart the negative samples. However, concatenating unlabeled words can cause positive words to be treated as negatives. More recently, a self-supervised approach for HTR using the masking-recovering strategy with generative models was proposed [22]. The method applies different degradations to the unlabeled word images (masking, blurring and background noise) and then learns to reconstruct the original clean image as a pretraining task. It was shown that this method overcomes the contrastive learning drawbacks of requiring large batch sizes and many data points, while learning a more effective and robust representation, especially for fine-tuning with fewer data. Another method that combines contrastive learning based pretraining with the masking based one was also proposed in [52].
## 3 Proposed method
In this section, we present the proposed framework for word spotting in handwritten document images. We refer to this framework as _ST-KeyS_: Self-supervised Transformer for Keyword Spotting. The idea is to benefit from the big amount of unlabeled data to learn word image representation in a self-supervised way. Then further improve and align these representations using the available labeled data. Fig. 1 illustrates the overall proposed
framework, similar to other recent works in self-supervised representation learning, our proposed method consists of two phases:
* Pretraining phase: a self-supervised pretraining using a masked vision transformer autoencoder is performed to extract useful representations from the unlabeled text images.
* Fine-tuning phase: a downstream task based on siamese neural networks used to learn deep representations (step1) of the word image followed by PHOC embedding enabling the extraction of contextual information from the word (step 2).
In the following sub-sections, we first describe the used masked autoencoder architecture for self-supervised word representation learning. Second, we describe the used model based on the SNN architecture and PHOC representations for the down-stream task.
Figure 1: Pipeline of the proposed _ST-KeyS_ framework. It consists of two stages: a pretraining stage and a fine-tuning stage.
### Pretraining phase: Learning deep representation using a masked autoencoder model
The pretraining phase is used to learn a word representation model in an unsupervised fashion. Inspired by the work [24], we propose to use a masked autoencoder based on vision transformer. Fig. 2 illustrates the architecture of the used masked encoder-decoder. Same as the ViT model [18], each input image \(x\in R^{H\times W\times C}\) (\(H\), \(W\) and \(C\) denote respectively the height, width, and number of channels of \(x\)) is split into a set of \(n\) non overlapping regular patches of size \(p\times p\times C\). Then a fraction of 75% of these patches are randomly masked according to a uniform distribution. The input image is then represented by only the remaining visible patches at the input of the encoder.
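A minimal PyTorch-style sketch of this patchify-and-mask step (tensor shapes and the 75% ratio follow the description above; the function names are ours):

```python
import torch

def patchify(img, p=16):
    # img: (B, C, H, W) with H, W divisible by p  ->  (B, N, p*p*C)
    B, C, H, W = img.shape
    x = img.reshape(B, C, H // p, p, W // p, p)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(B, (H // p) * (W // p), p * p * C)

def random_masking(x, mask_ratio=0.75):
    # x: (B, N, D) patch embeddings; keep a random 25% subset per image and
    # return the visible tokens, a binary mask (1 = masked) and the indices
    # needed to restore the original patch order for the decoder
    B, N, D = x.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                    # one uniform score per patch
    ids_shuffle = noise.argsort(dim=1)
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    x_vis = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)   # back to original patch order
    return x_vis, mask, ids_restore
```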
Figure 2: Proposed masked encoder-decoder model for word representation learning.
Different from conventional autoencoders, the used masked encoder-decoder operates in an asymmetric fashion [24] enabling the encoder to operate on only the partial observed signal (without the masked patches) and the decoder to rebuild the full image based on the representation given by the encoder and the masked patches. As shown in Fig. 2, the encoder takes as input the visible patches with their positional information and maps them into a latent representation, while the decoder reconstructs the masked pixels based on this latent representation.
**The Encoder.** The used encoder architecture is a vision transformer [18] that operates only on visible patches. First, the visible patches are embedded through a linear projection operation into a patch embedding set (denoted \(E_{i}\)). In order to allow the model to capture the spatial structure of the 2D image, each patch embedding in \(E_{i}\) is then associated with its positional information within the original image. The resulting embedding set \(E^{\prime}_{i}\) is then passed as input to the encoder. After that, a number of transformer blocks are employed to map this embedding \(E^{\prime}_{i}\) into a latent representation (output patch embedding set referred as \(E^{\prime}_{O}\)). These encoder's blocks have the same structure as in [18]. Each block consists of multi-head self-attention and multi-layer perceptron (MLP) alternating layers. The MLP network consists of a single layer with a Gaussian Error Linear Unit (GELU) activation function. Layer normalization (LayerNorm) is applied before each transformer block allowing to train efficiently deep encoders [53], and we used residual connections after every block. An overview of the encoder architecture used in our work is illustrated in Fig. 3. It should be noted also that our encoder is applied only on visible set of patches. In our case, it operates on only a fraction of 25 % of the full set of patches.
**The Decoder.** The decoder is used to predict the pixel values for masked patches and then reconstruct the input image based on the latent representation provided by the encoder. The decoder takes as input the representation of the encoded patches \(E^{\prime}_{O}\) (visible patches) and the masked patches with their positional information which provides their arrangement location in
Figure 3: Architecture of the Encoder used in our work.
side the image. Similar to the encoder architecture, the decoder has also series of transformer blocks. The decoder produces a vector of pixel values to represent each patch. All vectors are then reshaped to reconstruct the image. Similar to [18], we use the mean squared error (MSE) between the masked patches from the original and reconstructed images as the loss function.
We note that the decoder is only used during the pretraining phase in order to perform the image reconstruction and it will be discarded during the second phase.
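The reconstruction objective reduces to a few lines; a sketch of the per-patch MSE restricted to the masked positions (shapes as in the patchify sketch above):

```python
import torch

def masked_mse(pred, target, mask):
    # pred, target: (B, N, p*p*C) per-patch pixel vectors; mask: (B, N) with
    # 1 marking masked patches; the loss is averaged over masked patches only
    per_patch = ((pred - target) ** 2).mean(dim=-1)     # (B, N)
    return (per_patch * mask).sum() / mask.sum()
```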
### Fine-tunig phase: Proposed down-stream task for word spotting
The goal of our study is to propose a spotting method for the query-by-example KWS scenario that can handle existing challenges in handwritten documents, such as writing style variations. For that, we introduce as the downstream task a KWS method based on a siamese neural network (SNN) architecture coupled with PHOC embedding. The SNN is a dedicated two-input network; it is able to learn relevant representations of handwritten images and therefore allows distinguishing between similar and non-similar pairs of images. While the SNN is used to learn a descriptor representing the image, the PHOC embedding provides a contextual representation of the characters embedded inside the word. Such a process is a suitable way to benefit from both the visual (image) and language (text) modalities when producing the final representations.
#### 3.2.1 Visual image representation
Once the masked autoencoder is pretrained in a self-supervised way on large unlabeled dataset, the decoder is discarded and only the encoder part is retained for the downstream task. Therefore, the pretrained encoder is used as a backbone to build the SNN-transformer architecture as shown in Fig. 4. The pretrained encoder is considered as a good starting point for the SNN model because it has learned word image representation from unlabeled data during the first pretraining phase without any human supervision or annotation. It should be noted that during this phase, the encoder is applied to uncorrupted (non-masked) images represented by the full set of their patches. As shown in Fig. 4, the SNN-transformer architecture comprises two identical encoder blocks, with shared weights, followed by a dense layer. The dense layer involves a Linear layer with dropout and sigmoid as activation function.
Given two input images, the embedded features are propagated, each through an encoder block. The SNN-transformer works in parallel on the two inputs to compute comparable output vectors. These output vectors are compared using a distance function, the element-wise absolute difference of the two descriptors, and then passed to the fully connected layer with a sigmoid activation function to return a similarity measure indicating whether or not the two images belong to the same class. The learning process adjusts the embedding to be representative, thereby enabling the SNN-transformer to extract relevant features as effectively as possible.
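A compact sketch of this matching head (the `encoder` is assumed to return one feature vector of size `dim` per image; the dropout rate is illustrative); with same/different-word pair labels, a binary cross-entropy loss is the natural training objective for such a sigmoid output:

```python
import torch
import torch.nn as nn

class SiameseHead(nn.Module):
    # Two weight-shared encoder branches, |H1 - H2| distance, sigmoid output.
    def __init__(self, encoder, dim, p_drop=0.1):
        super().__init__()
        self.encoder = encoder    # a single shared module => shared weights
        self.fc = nn.Sequential(nn.Dropout(p_drop), nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, img1, img2):
        h1, h2 = self.encoder(img1), self.encoder(img2)
        return self.fc(torch.abs(h1 - h2))    # similarity score in (0, 1)
```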
#### 3.2.2 PHOC word embedding
Once the encoder has been fine-tuned firstly in a siamese fashion using a few labeled data, we use this latter as the main component for the second adaptation stage. This stage is performed to align the visually extracted features by the previous stage with the text based features. Particularly, we make use of the textual representation referred to as PHOC embedding. This representation combines the histogram of characters in several spatial
Figure 4: The proposed SNN-transformer architecture. The H1 and H2 refer to numerical feature vectors corresponding to Image 1 and Image 2, respectively.
regions in a pyramidal manner. Here, each feature indicates the presence or absence of a specific character in a given spatial region.
The PHOCNet-transformer's pipeline is visualized in Fig. 5. It consists of a single encoder block followed by a final dense layer. This last layer is a Linear layer with dropout and Sigmoid activation function. Similar to [41], we represent a word image using PHOC embedding with 2, 3, 4 and 5 levels. For instance, within the second level, the PHOC captures the presence of a certain character in the first or the second half of the word. Fig. 6 provides an exemplary illustration of PHOC extraction from a given text string. This yields a binary histogram having a size of 504. Additionally, we make use of the 50 most frequent bigrams (level 2). Thus, by the use of the Latin alphabet (lower case) with the ten digits, the resulting PHOC has a size of 604 which is corresponding to the output size of the defined dense layer.
Learning PHOC representations allows the adjustment of the embedding to be representative and enables the network to perform the spotting accurately. Therefore, after having trained our encoder using the PHOC representations, we make use of the layer preceding the final dense layer of this model to extract feature vectors representing the word images. The extracted feature vectors are then reduced to 400 dimensions using Principal Component Analysis (PCA), which is popularly used as a dimensionality reduction technique. To perform the word spotting task, we use the cosine distance metric which is calculated from a pair of feature vectors that each
Figure 5: The proposed PHOCNet-transformer architecture.
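The retrieval step described above could be summarized as follows; fitting the PCA on the database descriptors (rather than on held-out training features) and the function name are simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def rank_database(query_feat, db_feats, n_components=400):
    """Rank database word images by cosine distance to the query (sketch)."""
    pca = PCA(n_components=n_components).fit(db_feats)  # assumption: fit on db
    q = pca.transform(query_feat[None, :])[0]
    db = pca.transform(db_feats)
    sims = db @ q / (np.linalg.norm(db, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(1.0 - sims)  # smallest cosine distance first
```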
## 4 Experiments
In this section, we first describe the datasets used in our experiments. We then outline the implementation details that are most relevant to our approach. Lastly, we report our results and provide a comparison with state-of-the-art methods.
### Datasets
Four databases of handwritten documents are used in our experiments: the IAM off-line database is used to pretrain the model in a self-supervised way, and three other datasets are used to fine-tune and evaluate the proposed KWS method. These three datasets are the Botany (BOT) dataset, the Alvermann Konzilsprotokolle (AK) dataset, and the George Washington (GW) dataset, all of which are historical collections.
**IAM dataset.** The IAM dataset [54] consists of forms of handwritten English text useful for training and testing handwriting recognition models and writer identification methods. IAM contains handwriting from 657 different writers, which leads to a large variety of writing styles. The database consists of 1,539 pages of scanned text from the Lancaster-Oslo/Bergen corpus [55], yielding 115,320 isolated and labeled word images split into training, validation and test partitions. Due to its large style variability, we use the IAM dataset for the self-supervised pretraining phase. A sample document image is shown in Fig. 7a.
**Botany dataset (BOT).** This dataset is a part of ICFHR 2016 Handwritten Keyword Spotting Competition [56]. It consists of more than 100
Figure 6: Visualisation of the PHOC embedding extracted from a given text at level 1, 2 and 3.
different botanical documents produced between 1800 and 1850 by the then government of British India. These documents are written in English and show some signs of deterioration, especially fading, as shown in Fig. 7c. Variations in writing style are noticeable, especially in scaling and intra-word variations.
**Alvermann Konzilsprotokolle dataset (AK).** This dataset is also part of the ICFHR 2016 Handwritten Keyword Spotting Competition [56]. It is a historical collection of 18,000 pages of handwritten minutes of official meetings in the central administration of the University of Greifswald between 1794 and 1797. These documents are written in German and show minor signs of degradation, as shown in Fig. 7b. This dataset presents low variation in writing style.
**George Washington dataset (GW).** The GW dataset is well known in the word spotting community. It consists of 20 pages of letters written by George Washington and his secretaries in 1755, containing 4,860 annotated word images. Due to the small size of the dataset, there are no standard partitions or query selections for GW, so a four-fold cross-validation setup is adopted by the majority of recent learning-based works. We adopt this setup using the same split as in [4]. For each test split, words having at least two instances are selected as query images in a leave-one-out fashion. A sample document image from the GW database is shown in Fig. 7d.
### Evaluation protocol
For the word spotting task, the aim is to retrieve all instances of the query words in a dataset partition. Given a query image, the database items are sorted with respect to their similarity to the query, following the protocol presented in [4]. In our work, we use all the words of the test set as queries in a leave-one-out style. When performing keyword spotting, the query image is removed from the dataset, and queries that have no remaining relevant words in the database are discarded. The remaining irrelevant words are, however, kept in the test set to act as distractors in the ranking process. This protocol is adopted in all our experiments on the three evaluation sets, namely the BOT, AK and GW databases. For the GW dataset in particular, we evaluate our proposed method in a four-fold cross-validation fashion to enable comparison with state-of-the-art systems. For the BOT and AK datasets, three training partitions are available (small, medium, and large); we use the smallest one in our experiments. Table 1 provides statistics on the databases used in our experiments.
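Performance under this protocol is reported as mean average precision (mAP); a minimal sketch of its computation, with function names of our own choosing, is given below.

```python
import numpy as np

def average_precision(relevant, ranking):
    """AP for one query: `relevant` is the set of ids sharing the query's
    transcription (the query itself having been removed), and `ranking` is
    the retrieval order returned for that query."""
    hits, precisions = 0, []
    for rank, idx in enumerate(ranking, start=1):
        if idx in relevant:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

def mean_average_precision(queries):
    """`queries` is a list of (relevant_set, ranking) pairs; queries with no
    remaining relevant words are assumed to have been discarded upstream."""
    return float(np.mean([average_precision(r, rk) for r, rk in queries]))
```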
### Implementation details

Word images are resized to \(224\times 224\) pixels and split into \(16\times 16\) patches; the main hyperparameters are listed in Table 2. Training runs for a maximum of 500 epochs with a batch size of 20. During the pretraining stage we used AdamW, an improved version of Adam [57], for parameter optimization. The learning rate is initially set to 1.5e\(-\)4 and decreased using a scheduler. For the fine-tuning stage, we employed the SGD (stochastic gradient descent) optimization algorithm with a learning schedule: an initial learning rate of 0.01 was used when fine-tuning the siamese neural network, decayed every 3 epochs. The experiments were performed using an NVIDIA TITAN Xp GPU. To improve the generalization power of the proposed _ST-KeyS_ model, we used random transformations such as erosion, dilation, and skewing as data augmentation.
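In PyTorch terms, the two optimization regimes could look like the following; the scheduler type, momentum, and decay factor are assumptions, since only the initial rates and the 3-epoch decay interval are specified above.

```python
import torch

def build_optimizers(mae_model, siamese_model):
    """Optimizers and schedulers matching the described setup (sketch)."""
    # Pretraining: AdamW starting at 1.5e-4, decreased by a scheduler
    # (cosine annealing over the 500-epoch budget is an assumption).
    pre_opt = torch.optim.AdamW(mae_model.parameters(), lr=1.5e-4)
    pre_sched = torch.optim.lr_scheduler.CosineAnnealingLR(pre_opt, T_max=500)
    # Fine-tuning: SGD starting at 0.01, decayed every 3 epochs
    # (the momentum and 0.1 decay factor are assumptions).
    ft_opt = torch.optim.SGD(siamese_model.parameters(), lr=0.01, momentum=0.9)
    ft_sched = torch.optim.lr_scheduler.StepLR(ft_opt, step_size=3, gamma=0.1)
    return (pre_opt, pre_sched), (ft_opt, ft_sched)
```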
### Keyword Spotting Results
We present in this section the obtained results. Note that in all our experiments we rely on the cosine similarity as the metric for measuring the similarity between two feature vectors. This metric computes the cosine of the angle between two vectors and indicates whether they point in approximately the same direction.
#### 4.4.1 Evaluation on ICFHR 2016 competition's datasets
The evaluation is first conducted on the _ICFHR 2016_ competition datasets, namely BOT and AK. Table 3 shows the obtained results. We recall that the _ST-KeyS_ model is first pretrained on the IAM database in a self-supervised way in order to learn deep representations of word images. Second, the pretrained encoder is fine-tuned using a few
\begin{table}
\begin{tabular}{|l|l|} \hline Parameter & Value \\ \hline Patch size & \(16\times 16\) \\ Image size & \(224\times 224\) \\ Train size\(\dagger\) & 60k \\ Encoder embedding dimension & 768 \\ Encoder layers number & 12 \\ Encoder head number & 12 \\ Decoder embedding dimension & 384 \\ Decoder layers number & 4 \\ Decoder head number & 6 \\ \hline \end{tabular}
\end{table}
Table 2: Proposed model’s hyperparameters used in our work.
labeled data from the target dataset. Since the IAM, BOT, AK and GW databases belong to different data distributions, we have to deal with challenges related to domain variation and writing-style shifts. That is why we extend the pretraining stage with a few epochs on a subset of the target dataset (BOT, AK or GW) in a self-supervised way (without annotations/labels). This process reduces the gap between domains arising from the calligraphic-style differences between IAM and the evaluation datasets.
From Table 3, we notice that the proposed _ST-KeyS_ model yields good results on the BOT dataset, reaching 59.10% and outperforming the TPP-PHOCNet(CPS), PHOCNet(CPS) and Triplet-CNN architectures [9]. On the AK dataset, however, we obtain moderate results compared to the TPP-PHOCNet(CPS) and PHOCNet(CPS) architectures [9]. Moreover, we observe that _ST-KeyS_ outperforms learning-free methods [36; 39; 6] on both the BOT and AK datasets, even though these datasets exhibit calligraphic styles different from those of the IAM database used in the first pretraining stage, and despite the difference in written language. Additionally, our _ST-KeyS_ model gives better results than the Graph Edit Distance method proposed in [7], thanks to the transformer's capacity to focus on the meaningful information contained in the word image. This capacity is owed to the attention mechanism, which extracts discriminant features from word images that are later used in the matching process.
In summary, the _ST-KeyS_ semi-supervised model outperforms the fully supervised PHOC-based methods [9] reported in the literature on the BOT dataset and yields moderate results relative to these methods on the AK dataset. These methods generally rely on synthetically generated data to learn the PHOC contextual representations; such models can therefore handle a large vocabulary, alleviating the out-of-vocabulary issue [58]. In the remainder of our work, we limit the PHOC learning stage to the data provided in the competition image databases. In other words, we rely on a closed vocabulary, since our goal is to build a robust model that retrieves handwritten words independently of the out-of-vocabulary problem. A direct comparison of our results with PHOC-based methods trained on large synthetic vocabularies would therefore be unfair, so we restrict the comparison to methods using a closed vocabulary.
#### 4.4.2 Evaluation on the GW dataset
Further experiments are conducted on the GW dataset to assess the performance of our proposed _ST-KeyS_ method. The results are reported in Table 4. _ST-KeyS_ achieved a mAP of 95.70% on the GW database, exceeding all considered learning-free methods [37; 4; 36; 6]. This result demonstrates the efficiency of the _ST-KeyS_ model in extracting deep, meaningful features from word images, enabling accurate word spotting compared to handcrafted features (HOG, mPOG, etc.). In addition, the _ST-KeyS_ method gives better results than the current state-of-the-art supervised approaches [11; 41; 59] using a closed vocabulary. Moreover, _ST-KeyS_ outperforms the graph-based approach [7] by a large margin.
It is noteworthy that our method focuses on designing a robust model for retrieving word images in handwritten documents when only a few labeled samples are available. Despite this, Table 4 shows that _ST-KeyS_ achieves competitive results compared to PHOC-based methods trained on large synthetic datasets: it outperforms the methods presented in [4; 5] and attains slightly lower performance than the methods of [60; 42], without the need for any external data.
\begin{table}
\begin{tabular}{|l|l|l|} \hline Method & BOT & AK \\ \hline \hline \multicolumn{3}{|l|}{_Learning-free methods_} \\ \hline Projections of Oriented Gradients [36] & 46.70 & 56.50 \\ Graph matching [39] & 49.57 & 77.24 \\ PSeq-mPOG+MISM [6] & 58.30 & 76.20 \\ \hline \hline \multicolumn{3}{|l|}{_Learning-based methods_} \\ \hline Graph Edit Distance [7] & 52.83 & 79.55 \\ TPP-PHOCNet(CPS) [9]\(\uparrow\)\(\diamondsuit\) & 51.25 & 90.97 \\ PHOCNet(CPS) [9]\(\uparrow\)\(\diamondsuit\) & 45.82 & 88.31 \\ Triplet-CNN [9]\(\uparrow\)\(\diamondsuit\) & 54.95 & 82.15 \\ \hline \hline **Ours (_ST-KeyS_)\(\uparrow\)** & **59.10** & **85.16** \\ \hline \multicolumn{3}{|l|}{\(\uparrow\)_Methods based on PHOC representation._} \\ \multicolumn{3}{|l|}{\(\diamondsuit\)_Methods based on generated synthetic data (large vocabulary)._} \\ \end{tabular}
\end{table}
Table 3: Evaluation results (mAP %) of the proposed word spotting method on the BOT and AK datasets.
Fig. 8 shows a qualitative evaluation of our proposed _ST-KeyS_ method on the GW database, where the top five retrieved images for each query word image are given. This figure highlights the robustness of the extracted features, which make the method invariant to different word capitalization forms and to some of the degradation characterizing historical documents. Some failure cases are presented at the end of the figure, where our model fails to retrieve the correct matching word image. This can result from stroke corruption in some words, such as the word 'Waggons' in the fifth row, where the last letter 's' is not clearly visible; the model then faces an ambiguity in deciding whether this letter is present at the end of the word. In other cases, failures can be attributed to the lexical complexity of the word image. For instance, the word "Sergeant" in the sixth row is correctly retrieved at the top-1 and top-2 instances, but is then confused with the word "Regiment", leaving the model unable
\begin{table}
\begin{tabular}{|l|c|} \hline
**Method** & **mAP \%** \\ \hline \hline _Learning-free methods_ & \\ \hline HOG + SVM [37] & 49.40 \\ DTW [4] & 60.63 \\ FV [4] & 62.72 \\ Projections of Oriented Gradients [36] & 37.00 \\ PSeq-mPOG+MISM [6] & 77.10 \\ \hline \hline _Learning-based methods_ & \\ \hline Zoning Aggregated features (CNN) [59] & 58.30 \\ Softmax CNN [41] & 78.24 \\ Siamese Triplet CNN [11] & 91.63 \\ Graph Edit Distance [7] & 78.48 \\ Att. + Platts (PHOC) [4]\(\uparrow\)\(\diamond\) & 93.04 \\ PHOCNet [41]\(\uparrow\)\(\diamond\) & 96.71 \\ TPP-PHOCNet [42]\(\uparrow\)\(\diamond\) & 97.78 \\ HWNet [5]\(\uparrow\)\(\diamond\) & 94.84 \\ HWNet v2 (ROI) [60]\(\uparrow\)\(\diamond\) & 96.01 \\ HWNet v2 (TPP) [60]\(\uparrow\)\(\diamond\) & 98.24 \\ \hline \hline
**Ours (_ST-KeyS_)\(\uparrow\)** & **95.70** \\ \hline \end{tabular}
\(\uparrow\)_Methods based on PHOC representations_.
\(\diamond\)_Methods based on generated synthetic data (large vocabulary)_.
\end{table}
Table 4: Evaluation results (mAP %) of the proposed word spotting method on the GW dataset.
to decide about some regions of the word image. In other cases, the model misses the appropriate instances because of the scarcity of that word's representation in the dataset, as in the seventh row of Fig. 8. However, our model was able to retrieve the unique correct instance at the second rank for the example shown in the eighth row. To summarize, the ambiguity and complexity of the script and the close similarity among different words are the principal causes of the failures.
Figure 8: Qualitative results of the proposed _ST-KeyS_ method on sample examples from the GW evaluation dataset. Green boxes correspond to correctly retrieved words and red boxes correspond to incorrectly retrieved words.
### Ablation study
In preliminary research, we conducted different experiments, first to select the best architecture and the optimal parameters leading to the most efficient _ST-KeyS_ model, and then to compare the proposed method with different self-supervised and supervised alternatives. These investigations are detailed in this section.
- **Investigating the design choice:** Here, we discuss the self-supervised pretraining and the different downstream-task options. We performed several experiments to prove the efficiency of our ST-KeyS model, which consists of two phases: a self-supervised pretraining phase followed by a downstream task (supervised fine-tuning). The fine-tuning stage is itself composed of a siamese network followed by PHOC embedding (SNN-PHOC). We thus study the effectiveness of the pretraining, as well as the utility of the fine-tuning steps in the chosen order. The results of this investigation on the three datasets are presented in Table 5, where we studied different scenarios. In particular, we compare the efficiency of the proposed model using different downstream tasks: 1) a siamese architecture followed by PHOC alignment (SNN-PHOC), 2) PHOC followed by a siamese architecture (PHOC-SNN), 3) PHOC representations only, and 4) a siamese network only (SNN). We also check the utility of the pretraining by comparing it with a supervised-only approach.
As can be seen from the table, the best performance of our model is obtained with the SNN-PHOC setting: 95.70% and 85.16% on the GW and AK databases, respectively, and beyond 59% on the BOT dataset. Hence, using an SNN followed by a PHOC downstream task at the fine-tuning stage is the best option, specifically in the order SNN-PHOC (first train the SNN, then train with PHOC). The main justification is that the siamese network extracts relevant deep features, yielding a strong representation of the word image that is subsequently processed by the PHOC-based encoder, which promotes the contextual content of the word image.
Moreover, we compared the self-supervised approaches with the supervised ones. As can be seen from Table 5, the self-supervised approach followed by a downstream task (SNN) performs better than a supervised approach based on a siamese architecture (row 5 vs. row 4): 92.48% is reached with a pretraining stage, versus 69.98% for the supervised approach on the GW database. Therefore, we can affirm the importance of the
pretraining stage in a self-supervised fashion to improve the word spotting results.
Finally, we explored the relevance of the pretraining stage in a self-supervised way, without any labeled data. We observe (from the last row in Table 5) that interesting results are obtained from the image representations alone. For instance, we achieved 61.85% on the GW dataset, which is better than some learning-free word spotting methods.
- **Evaluation of the model utility in the low-resource scenario:** This evaluation is based on the siamese downstream task using different amounts of annotated data. To further demonstrate the value of the self-supervised approach followed by a downstream task as a function of the amount of annotated data used in the fine-tuning phase, we conducted a study varying the amount of labeled data to simulate a low-resource scenario. From Table 6, we remark that we achieve a good result using merely 20% of the labeled data, a result comparable to using 60% or even all of the annotated data provided in the GW database. This confirms the utility of our approach for low-resource datasets, where the amount of labeled data is limited.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
**Approach** & Downstream & GW & BOT & AK \\ & Task & & & \\ \hline \hline Self-Supervised & SNN-PHOC & **95.70** & 59.10 & **85.16** \\ Self-Supervised & PHOC-SNN & 92.62 & **59.30** & 82.80 \\ Self-Supervised & PHOC & 92.50 & 59.17 & 82.53 \\ Self-Supervised & SNN & 92.48 & 58.06 & 81.51 \\ \hline Supervised (SNN) & – & 69.98 & \(NP\) & \(NP\) \\ \hline Self-Supervised (encoder) & – & 61.85 & 43.79 & 58.63 \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation of different downstream tasks for word spotting on the GW, BOT and AK datasets (results in terms of the mAP% metric).
- **Model's parameters optimisation:** To perform the proposed word spotting method, we have conducted extensive preliminary experiments to set up the suitable parameters for the model configuration that leads to the best performance. Therefore, we start our experiments by selecting the optimum patch size and model architecture.
In our experiments, we define three model variants, _ST-KeyS-Base-16_, _ST-KeyS-Base-32_ and _ST-KeyS-Small-16_, as listed in Table 7. Obviously, implementing a larger model requires more memory and training time as the number of model parameters increases, so a trade-off between the size of the model and its performance must be considered.
In Table 8, we present the results of the self-supervised approach using only the pretrained representations (without a downstream task) for the three model variants. We note that the performance of the word spotting method improves when using a patch size of \(16\times 16\) instead of \(32\times 32\) for the same encoder-decoder architecture (row 1 vs. row 2 in Table 8). The explanation is that, with a smaller patch size, each patch of the image carries more local information during self-attention, so the model can use more and finer information during the word spotting process. However, an even smaller patch size such as \(8\times 8\) would increase the number of model parameters
\begin{table}
\begin{tabular}{|l|l|} \hline Annotated Fraction\% & mAP\% \\ \hline
20 & 80.98 \\
40 & 85.06 \\
60 & 86.88 \\
100 & 92.48 \\ \hline \end{tabular}
\end{table}
Table 6: Evaluation results (mAP %) of the word spotting task performed on the GW dataset for the Self-supervised approach with a siamese downstream task using different data amounts.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline Model & EL & EAH & DL & DAH & Patch & Params \\ \hline _ST-KeyS-Base-16_ & 12 & 6 & 4 & 6 & 16\(\times\)16 & 9.3M \\ _ST-KeyS-Base-32_ & 12 & 6 & 4 & 6 & 32\(\times\)32 & 9.5M \\ _ST-KeyS-Small-16_ & 6 & 3 & 4 & 3 & 16\(\times\)16 & 1.2M \\ \hline \end{tabular}
\end{table}
Table 7: Details of our model variants (EL: Encoder Layers, EAH: Encoder Attention Heads, DL: Decoder Layers, DAH: Decoder Attention Heads).
and would require more computational resources; for that reason, we discarded this setting. We also remark that a model with a larger number of layers and heads leads to a significant improvement in the word spotting results (row 1 vs. row 3 in Table 8). Since we aim for a compromise between these hyper-parameters and resource consumption while maintaining good performance, we selected the _ST-KeyS-Base-16_ setting as the base model architecture in all our experiments.
## 5 Conclusion and Future Work
In this paper, we proposed a novel self-supervised approach for keyword spotting called _ST-KeyS_. Our method is based on a pure vision-transformer architecture without any CNN layer, and is composed of a pretraining phase and a fine-tuning phase. In the pretraining stage, a masking-recovering autoencoder is used to learn useful representations from unlabeled word images. In the fine-tuning stage, a two-step strategy is employed to extract further refined representations for the spotting task: to produce more robust and meaningful features, a siamese approach is first used to embed the images visually, followed by an alignment with the PHOC attributes produced from the text.
The proposed architecture addresses the data-scarcity issue, since we restricted our study to the data available in the evaluation datasets, without using synthetic data or a larger dictionary. Extensive experiments were conducted to validate our design choices and to evaluate the method against the state of the art. The experimental results demonstrate the efficiency of self-supervised learning for word spotting by removing the need for large annotated datasets. We have shown that our model achieves performance close to, and sometimes better than, fully supervised approaches based on large synthetic datasets, despite being restricted to limited data and a closed vocabulary.
\begin{table}
\begin{tabular}{|l|l|} \hline Model & mAP \% \\ \hline _ST-KeyS-Base-16_ & 61.85 \\ _ST-KeyS-Base-32_ & 40.29 \\ _ST-KeyS-Small-16_ & 37.53 \\ \hline \end{tabular}
\end{table}
Table 8: Evaluation of the self-supervised approach (without a downstream task) for the word spotting task in terms of mAP% on the GW database using different model variants.
In the future, we will enhance the performance of our proposed word spotting method by adding an n-gram language model to re-rank the retrieved word-image list. We would also like to apply the self-supervised approach in a segmentation-free fashion and to other related fields, such as handwriting recognition.
## Acknowledgement
This work has been supported by the DocPRESERV (Preserving & Processing Historical Document Images with Artificial Intelligence) project, funded by the Swedish Foundation for International Cooperation in Research and Higher Education (STINT), grant AF2020-8892.
|
2301.02440 | An Image captioning algorithm based on the Hybrid Deep Learning
Technique (CNN+GRU) | Image captioning by the encoder-decoder framework has shown tremendous
advancement in the last decade where CNN is mainly used as encoder and LSTM is
used as a decoder. Despite such an impressive achievement in terms of accuracy
in simple images, it lacks in terms of time complexity and space complexity
efficiency. In addition to this, in case of complex images with a lot of
information and objects, the performance of this CNN-LSTM pair downgraded
exponentially due to the lack of semantic understanding of the scenes presented
in the images. Thus, to take these issues into consideration, we present
CNN-GRU encoder decode framework for caption-to-image reconstructor to handle
the semantic context into consideration as well as the time complexity. By
taking the hidden states of the decoder into consideration, the input image and
its similar semantic representations is reconstructed and reconstruction scores
from a semantic reconstructor are used in conjunction with likelihood during
model training to assess the quality of the generated caption. As a result, the
decoder receives improved semantic information, enhancing the caption
production process. During model testing, combining the reconstruction score
and the log-likelihood is also feasible to choose the most appropriate caption.
The suggested model outperforms the state-of-the-art LSTM-A5 model for picture
captioning in terms of time complexity and accuracy. | Rana Adnan Ahmad, Muhammad Azhar, Hina Sattar | 2023-01-06T10:00:06Z | http://arxiv.org/abs/2301.02440v1 | # An Image captioning algorithm based on the Hybrid Deep Learning Technique (CNN\(+\)GRU)
###### Abstract
Image captioning with the encoder-decoder framework has shown tremendous advancement in the last decade, where a CNN is mainly used as the encoder and an LSTM as the decoder. Despite impressive accuracy on simple images, this pairing is inefficient in terms of time and space complexity. In addition, for complex images with a lot of information and objects, the performance of the CNN-LSTM pair degrades sharply due to a lack of semantic understanding of the scenes presented in the images. To address these issues, we present a CNN-GRU encoder-decoder framework with a caption-to-image reconstructor that takes the semantic context into consideration as well as the time complexity. Using the hidden states of the decoder, the input image and its similar semantic representations are reconstructed, and reconstruction scores from a semantic reconstructor are used in conjunction with the likelihood during model training to assess the quality of the generated caption. As a result, the decoder receives improved semantic information, enhancing the caption production process. During model testing, combining the reconstruction score and the log-likelihood is also feasible to choose the most appropriate caption. The suggested model outperforms the state-of-the-art LSTM-A5 model for picture captioning in terms of time complexity and accuracy.
Deep Learning, Image captioning, CNN, GRU
## 1 Introduction
Deep learning has made great strides recently due to rapid growth and high utilization [1, 2, 3, 4]. Thus, similar to Neural Machine Translation (NMT) [5], generating image captions through a neural encoder-decoder framework has become dominant in recent years. In this image captioning process, the image is encoded by an encoder, typically from the Convolutional Neural Network (CNN) [6] family (such as vanilla CNN [6], region-based CNN [7], Fast R-CNN [8], or Faster R-CNN [9]), and decoded by a decoder from the RNN family [10] (such as LSTM [11] or BLSTM [12]). In this CNN-LSTM framework [13, 14, 15, 16], the encoder (CNN) learns visual features by building feature maps with convolution and max-pooling during the feature-learning stage, and then detects objects after flattening and applying a fully connected layer. It thus converts the image into a vector of numbers that is a learned form of the visual content of the image under consideration. In the decoder, the vector output of the encoder is used as the initial input to produce the caption word by word.
Even though Long Short-Term Memory (LSTM) handles long dependencies by reducing the effect of exploding and vanishing gradients [11], time complexity is still a major drawback of this model, owing to the many gates inside the LSTM unit that serve memorization. Another key issue with these kinds of encoder-decoder models is their lack of understanding of the semantic context, as the encoder fails to transfer the key visual information to the decoder. Because of the absence of reverse dependency checking (caption-to-image), these models do not perform well in the case of complex images.
Several approaches have been proposed to deal with the above-mentioned issues [17, 18, 19, 20]. Some researchers have proposed attention mechanisms to obtain information from the key regions automatically and to encode that specific information into the context vector that is then used by the decoder to generate the caption [17, 18]. Others have tried to extract semantic attributes as a supplement to the CNN features and to embed them into the encoder by various methods [19, 20].
The major drawback of all the above-mentioned methods is that they only explore the image-to-caption dependency, not the reverse direction, to validate the extracted information. Even though Su et al. [21] tried to use a caption-to-image semantic reconstructor, they could not validate the results effectively. In addition, the time complexity issue remained due to the use of the LSTM unit.
To resolve the above-mentioned issues, we propose a hybrid deep learning technique based on a CNN-GRU encoder-decoder framework with better hyper-parameter tuning and a caption-to-image validation method, motivated by Su et al. [21]. The caption-to-image reconstructor takes the semantic context into account as well as the time complexity. Using the hidden states of the decoder, the input image and its similar semantic representations are reconstructed, and reconstruction scores from the semantic reconstructor are used in conjunction with the likelihood during model training to assess the quality of the generated caption.
As a result, the decoder receives improved semantic information, enhancing the caption production process. During model testing, combining the reconstruction score and the log-likelihood is also feasible to choose the most appropriate caption.
To validate our proposed method, we use the benchmark MS COCO dataset [22]; the experimental results show that our method outperforms current state-of-the-art methods in terms of accuracy and time complexity.
## 2 Related Work

Inspiration for our work comes from the autoencoder [23, 24] and its strong performance in NMT [25], where semantic reconstruction is employed to refine the learned representation of the input data. Here, we adapt this idea by reconstructing from captions back to the image. Related work broadly follows two strands. In NMT, models are generally trained for source-to-target translation; encouraged by the autoencoder's ability to make reconstruction faithful, and by the question of whether reconstructed inputs are more reliable than the original inputs [26], many researchers have explored bidirectionally trained NMT variants [27].
Compared with NMT, most neural image captioning models are based on the neural encoder-decoder framework [30]. However, this architecture cannot guarantee that the image information is completely conveyed to the decoder. To address this problem, researchers currently follow two types of approaches: (1) as with attention in NMT [31], some works use visual attention to capture the semantic representations of critical image regions [32, 33]; (2) in various ways, other works extract semantic attributes or high-level concepts from images, which can be integrated into an LSTM-based decoder as an additional input [28, 29]. In this way, the model is guided towards content closely related to the theme of the image. Moreover, You et al. [34] combined the two types of methods listed above.
Our proposed model builds on the CNN-LSTM design, in which the semantic reconstructor is trained alongside the decoder; this benefits both model training and testing, since the language model and the encoding system are modeled independently. The work in [35] is devoted to improving automatically generated image captions by making inferences about their semantic content. However, in current captioning models the visual features are generally employed as the input of the decoder, while the semantic attributes of the image are provided only as supplementary input; as a result, visual features are treated as more crucial than semantic attributes. Reconstruction-based training has likewise been applied to neural machine translation and to video captioning. In this work, we experiment with different methods for reconstructing images from the generated captions, and our approach further differs from earlier studies in its extensive use of visually similar images.
## 3 Proposed Model
This section describes the proposed hybrid deep learning approach based on the CNN-GRU encoder-decoder framework. This framework has three major parts: (1) the encoder, a CNN; (2) the decoder, a GRU layer; and (3) the semantic validator, which validates the caption-to-image information.
### Model architecture
The three neural network modules (Encoder, Decoder, and Semantic validator) that make up our proposed model are depicted in Fig. 1. The details of each module are given below:
* Encoder
In the encoder, a model similar to [36] is used: the image \(I\) is taken as input and the image features \(\mathbf{F}\) are extracted by the CNN-based encoder. The feature vector \(\mathbf{F}\in R^{D_{v}}\) represents the features extracted from the image \(I\), where \(D_{v}\) is the dimension of the feature vector. As not all the semantic information can be extracted by one feature vector, additional semantic attributes are extracted by the algorithm proposed by Yao et al. [36]. The extracted attribute vector, denoted \(\mathbf{A}\in R^{D_{a}}\), gives the probability of each high-level attribute present in the caption dataset, as generated by the MIL (Multiple Instance Learning) model presented in [27]. The MIL model has shown promising results in finding the semantic relations between the attributes of an image. \(D_{a}\) is the dimension of the attribute vector \(\mathbf{A}\).
After extracting both the feature vector \(\mathbf{F}\) and the attribute vector \(\mathbf{A}\), the encoder passes these two outputs to the decoder as inputs for caption generation.
* Decoder
The feature vector \(\mathbf{F}\) and the attribute vector \(\mathbf{A}\) produced by the encoder are used as inputs to the decoder for caption generation. Yao et al. [36] propose five different variants of the LSTM network and show that the fifth one, named LSTM-A\({}_{5}\), works better than the others, so we adopt the same configuration. Thus, following LSTM-A\({}_{5}\), we use the \(\mathbf{A}\) and \(\mathbf{F}\) vectors to compute the log-probability \(\xi\) as in Equation (1).
\[\xi(\mathbf{S}\mid\mathbf{I})=\xi(\mathbf{S}\mid\mathbf{F},\mathbf{A})=\sum_{t=1}^{N_{s}}\xi(w_{t}\mid\mathbf{F},\mathbf{A},w_{0},\ldots,w_{t-1})\tag{1}\]

where \(\mathbf{S}=(w_{0},\ldots,w_{N_{s}})\) denotes the generated caption, \(N_{s}\) its length, and \(\xi\) the log-probability.
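A minimal PyTorch sketch of a GRU decoder scored with Eq. (1) follows; the embedding, hidden, feature, and attribute dimensions are placeholders, and seeding the initial GRU state from the concatenation of \(\mathbf{F}\) and \(\mathbf{A}\) is one plausible reading of the conditioning, not necessarily the exact wiring.

```python
import torch
import torch.nn as nn

class GRUDecoder(nn.Module):
    """Sketch of a GRU caption decoder scored with Eq. (1)."""
    def __init__(self, vocab_size, emb=300, hid=1024, feat=2048, attr=1000):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.init_h = nn.Linear(feat + attr, hid)    # seed the state from F and A
        self.gru = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def log_likelihood(self, feats, attrs, caption):
        """caption: (B, Ns) word ids starting with a BOS token.

        Returns sum_t log p(w_t | F, A, w_0..w_{t-1}) for each caption,
        i.e. the quantity accumulated in Eq. (1).
        """
        h0 = torch.tanh(self.init_h(torch.cat([feats, attrs], -1))).unsqueeze(0)
        hidden, _ = self.gru(self.embed(caption[:, :-1]), h0)
        logp = self.out(hidden).log_softmax(-1)
        return logp.gather(-1, caption[:, 1:, None]).squeeze(-1).sum(-1)
```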
(Figure: example captions for a test image. Our proposed CNN+GRU: "Three person standing together on beach during sunset"; ground truth: "Three person standing together on beach holding each other hand during sunset"; the LSTM-A\({}_{5}\) model's caption is shown alongside for comparison.)
During testing, semantic reconstruction can be utilized to improve the selected captions. As shown in Fig. 2, we use a multi-stage procedure that combines beam search with reconstruction-based re-ranking:
1. A collection of candidate captions, their log-likelihoods, and their hidden state sequences is generated using the standard decoder components via an initial beam search.
2. After that, we use the hidden state sequence of each candidate to reconstruct the semantics of the input image with the semantic reconstructor, computing the corresponding reconstruction score.
Figure 1: An overview of our proposed model's architecture, which consists of three neural networks (Encoder, Decoder, and Semantic Reconstructor).
Figure 2: A test image processed by our model. h, P, and R denote the hidden state sequence, the log-likelihood, and the caption reconstruction score, respectively.
3. Finally, we combine the log-likelihood and the reconstruction score to compute the final score of each caption, and select the final caption based on the combined score; a sketch of this re-ranking follows below.
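The re-ranking step reduces to a weighted sum per candidate; a sketch, assuming a tradeoff weight `lam` (the experimental setup later sets such a parameter to 1):

```python
def rerank(candidates, lam=1.0):
    """Pick the best caption from beam-search candidates (sketch).

    `candidates` holds (caption, log_p, recon_score) triples: the caption,
    its log-likelihood P from the decoder, and its reconstruction score R
    from the semantic reconstructor; `lam` weighs the two terms.
    """
    return max(candidates, key=lambda c: c[1] + lam * c[2])[0]
```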
## 4 Experiments
The experiments have been conducted on the popular benchmark dataset COCO [30] to compare the performance of our proposed image captioning model with other state-of-the-art methods.
\(\bullet\) Experimental setup
The COCO dataset, which contains 130,000 manually annotated images, was used to validate our proposed model. Each image has four descriptions, which were used for training. In addition, 5,000 images were used as the test set.
Out of the 130,000 images of the training set, 80,000 were used for training and 5,000 for validation. Based on these settings, a vocabulary of 8,500 unique words was built. The following settings were used for hyper-parameter tuning.
Adam [32] is used as the optimizer. We employed dropout [33] and early stopping, with the following hyper-parameter settings: a learning rate starting at 2e-4, an embedding size of 300, a hidden-layer size of 1024, and a mini-batch size of 1024, for a maximum of 30 epochs. We used Word2vec's [34] pre-trained embeddings, which we fine-tuned, and set the tradeoff parameter to 1. The beam size was set to 3 in model testing.
\(\bullet\) Evaluation metrics used:

The evaluation metrics used are: 1) BLEU [40], reported as BLEU@1, BLEU@2, BLEU@3 and BLEU@4 (with beam size K=3 at decoding); 2) METEOR [46], shown as MET in Table 1; 3) ROUGE-L [37], shown as ROU; and 4) CIDEr-D [38], shown as CID. The values of these metrics were computed with the officially released COCO evaluation code [39]. BLEU, ROUGE-L, and METEOR were initially developed as benchmarks for evaluating the accuracy of machine translation; image caption evaluation follows the same procedure, comparing the generated sentences against the reference sentences.
\(\bullet\) Description of the compared state-of-the-art methods
1) NIC: The decoder of NIC is based on an LSTM that directly uses the image features as its input.
2) ME: The distinction of this method is its language model, which explores the mappings between images and their captions bidirectionally. This language model is built independently of the encoder-decoder framework.
3) ATT: This model uniquely extracts the key information of the images by a model based on semantic attention.
4) Soft-Attention and Hard-Attention (SA and HA) models: These models differ from others in using CNN features as input to the decoder. Soft-Attention (SA) is trained with standard back-propagation, while Hard-Attention (HA) uses stochastic attention trained with reinforcement learning.
5) LRCN: It is unique in taking the image feature and the previously generated word as input at each time step.
6) Sentence Condition (SC): In this method, a text-conditional attention model is used, which helps the decoder learn the semantic information of the text.
7) LSTM-A5: This is the best-performing variant of the LSTM decoder, which inspired our proposed model. We use the same settings as LSTM-A5 for comparison purposes, as the dataset is also the same.
\(\bullet\) Test results on COCO
The results obtained on the COCO dataset are shown in Table 1. As the results show, our method performs better than the other state-of-the-art methods: the BLEU@1, BLEU@2, BLEU@3 and BLEU@4 scores all exceed those of NIC, HA, SA, ATT, ME, and the other compared methods. The same holds for METEOR (MET in Table 1), ROUGE-L (ROU) and CIDEr-D (CID), metrics initially developed as benchmarks for evaluating the accuracy of machine translation: our scores on these metrics are also better than those of NIC, HA, SA, ATT, ME, and the other compared methods.
These results show that the proposed CNN-GRU method with the caption-to-image semantic validator works as intended.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
**Model** & **B@1** & **B@2** & **B@3** & **B@4** & **MET** & **ROU** & **CID** \\ \hline SA [5] & 0.700 & 0.490 & 0.322 & 0.242 & 0.238 & - & - \\ \hline ME [26] & 0.731 & 0.559 & 0.429 & 0.299 & 0.246 & 0.529 & 1.001 \\ \hline ATT [12] & 0.699 & 0.527 & 0.399 & 0.299 & 0.232 & - & - \\ \hline SC [15] & 0.719 & 0.540 & 0.400 & 0.297 & 0.239 & - & 0.94 \\ \hline HA[5] & 0.715 & 0.503 & 0.355 & 0.249 & 0.229 & - & - \\ \hline NIC [6] & 0.659 & 0.449 & 0.399 & 0.202 & - & - & - \\ \hline LRCN [41] & 0.690 & 0.514 & 0.379 & 0.270 & 0.230 & 0.500 & 0.830 \\ \hline LSTM-A5 & 0.729 & 0.559 & 0.429 & 0.325 & 0.253 & 0.539 & 1.002 \\ \hline **Proposed CNN+GRU** & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 1: The performance of our proposed model against other state-of-the-art methods built on the VGG or GoogLeNet framework. For clarity, B@K stands for BLEU@K with K={1,2,3,4}, MET for METEOR, ROU for ROUGE-L, and CID for CIDEr-D.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|} \hline
**Model** & **B@1** & **B@2** & **B@3** & **B@4** & **MET** & **ROU** & **CID** \\ \hline SA [5] & 0.721 & 0.494 & 0.333 & 0.251 & 0.242 & 0.511 & 0.98 \\ \hline ME [26] & 0.743 & 0.561 & 0.432 & 0.293 & 0.251 & 0.534 & 1.013 \\ \hline ATT [12] & 0.691 & 0.532 & 0.393 & 0.287 & 0.242 & - & - \\ \hline SC [15] & 0.729 & 0.544 & 0.412 & 0.281 & 0.241 & 0.511 & 0.92 \\ \hline HA[5] & 0.725 & 0.512 & 0.358 & 0.253 & 0.231 & - & - \\ \hline NIC [6] & 0.669 & 0.456 & 0.382 & 0.218 & 0.232 & - & - \\ \hline LRCN [41] & 0.698 & 0.521 & 0.382 & 0.281 & 0.241 & 0.512 & 0.812 \\ \hline LSTM-A5 & 0.731 & 0.562 & 0.432 & 0.331 & 0.258 & 0.542 & 1.000 \\ \hline **Proposed** & **0.742** & **0.583** & **0.441** & **0.339** & **0.263** & **0.556** & **1.012** \\ \hline \end{tabular}

\(\bullet\) Test results on COCO's online test server
\end{table}
Table 2: Performance comparison on the online COCO test server (C40). MS Captivator is an image captioning model proposed by Fang et al. [27].
2310.14563 | NormDial: A Comparable Bilingual Synthetic Dialog Dataset for Modeling
Social Norm Adherence and Violation | Social norms fundamentally shape interpersonal communication. We present
NormDial, a high-quality dyadic dialogue dataset with turn-by-turn annotations
of social norm adherences and violations for Chinese and American cultures.
Introducing the task of social norm observance detection, our dataset is
synthetically generated in both Chinese and English using a human-in-the-loop
pipeline by prompting large language models with a small collection of
expert-annotated social norms. We show that our generated dialogues are of high
quality through human evaluation and further evaluate the performance of
existing large language models on this task. Our findings point towards new
directions for understanding the nuances of social norms as they manifest in
conversational contexts that span across languages and cultures. | Oliver Li, Mallika Subramanian, Arkadiy Saakyan, Sky CH-Wang, Smaranda Muresan | 2023-10-23T04:38:34Z | http://arxiv.org/abs/2310.14563v2 | # NormDial: A Comparable Bilingual Synthetic Dialogue Dataset
###### Abstract
Social norms fundamentally shape interpersonal communication. We present NormDial, a high-quality dyadic dialogue dataset with turn-by-turn annotations of social norm adherences and violations for Chinese and American cultures. Introducing the task of social norm observance detection, our dataset is synthetically generated in both Chinese and English using a human-in-the-loop pipeline by prompting large language models with a small collection of expert-annotated social norms. We show that our generated dialogues are of high quality through human evaluation and further evaluate the performance of existing large language models on this task. Our findings point towards new directions for understanding the nuances of social norms as they manifest in conversational contexts that span across languages and cultures.
## 1 Introduction
Social norms--implicitly learned notions of acceptable behavior--both develop from and guide our everyday interactions Sherif (1936). As with the value systems that underlie these notions, the acceptability and deemed typicality of behaviors varies across cultures Triandis et al. (1994). For example, due to a strong emphasis on individualism, open and direct expression of opinions and disagreement is often encouraged and valued in Western cultures Arieli (1964), while such acts may often be viewed as disruptive to social order in Eastern Asian cultures that value collectivism Triandis (1993). Understanding these cultural nuances is key to empower computational systems to reason across cultural contexts Liu et al. (2021).
We introduce NormDial, a bilingual synthetically generated dyadic dialogue dataset of social norms as they appear within the context of conversational interactions. Gathering realistic data at scale in this domain presents a challenging and potentially cost-prohibitive task, particularly in the context of identifying social norm adherences and violations across multiple cultural contexts. This paucity of data hinders progress towards developing cross-cultural communication tools.
As a small step towards addressing this gap, leveraging recent successes in utilizing large language models (LLMs) for social data generation and augmentation Kim et al. (2022); Chen et al. (2023), we propose a human-in-the-loop framework to synthesize realistic conversational data under expert prompting for modeling social norm
Figure 1: Examples of generated dialogues with adherences (top, in Chinese) and violations (bottom, in English) to social norms about responding to compliments and giving requests, respectively.
adherence and violation. Using this human-AI collaboration framework, we generate a series of 4231 dyadic dialogues totaling 29550 conversational turns grounded in theoretical norm categories (Linguistic Data Consortium, 2022) across Chinese and American cultures, and demonstrate that our synthetic bilingual conversations are comparable to or exceed the quality of existing, naturally occurring datasets under interactive human evaluation and automatic metrics; examples of our dialogues are shown in Figure 1. Our dataset presents social norm adherences and violations labeled on a dialogue-turn basis; with this labeling task decoupled from dialogue generation, we further evaluate the capability of existing LLMs in reasoning about norm adherences and violations in a conversational setting and show that existing models often fail to reason correctly about these contexts. We hope that this resource will further motivate research towards designing better systems able to promote more fluid cross-cultural conversational interactions. We make NormDial available at [https://github.com/Aochong-Li/NormDial](https://github.com/Aochong-Li/NormDial).
## 2 Background and Related Work
**LLMs for Synthetic Data Generation.** Prompting LLMs to synthesize and augment language data for existing tasks (Li et al., 2022; Moller et al., 2023; Chen et al., 2023) has emerged as a viable, cost-effective alternative in lieu of crowd-sourced annotation at scale or alternative strategies such as fine-tuning language generators (Papangelis et al., 2021; Zhang et al., 2020) in the dialogue domain. LLMs, trained on massive amounts of web text, suffer from representational and allocational harms (Blodgett et al., 2020; Weidinger et al., 2021). Yet, such models often also possess high algorithmic fidelity in the realm of representing latent social variables (Argyle et al., 2023), in that these sources of bias may often be finely controlled for to accurately emulate responses from a variety of human demographic sub-populations in areas such as predicting historically missing survey responses in social research (Kim and Lee, 2023). Here, under this vein, we employ a human-in-the-loop framework to both finely condition and validate the generation of dialogues for modeling social norms.
**Computational Social Norms.** Our work is situated in the broader push towards empowering computational systems of interaction with the capability to reason in socio-culturally situated contexts (Ziems et al., 2023), spanning commonsense reasoning (Sap et al., 2019; Rashkin et al., 2018), the determination of appropriate and morally ethical behavior (Emelin et al., 2021; Jiang et al., 2022), and the further grounding of this behavior in areas like dialogue systems and situated question answering (Kim et al., 2022; Ziems et al., 2022; Gu et al., 2022) more specifically on underlying knowledge of social norms. While most work on computational models of social norms has been focused on the American context (Forbes et al., 2020), recent work has begun to bridge this gap cross-culturally to enable a comparison of descriptive nuances in norms across cultures (CH-Wang et al., 2023; Fung et al., 2022). Here, our work builds a dialogue dataset around conversational social norms for both American and Chinese cultures.
## 3 The NormDial Dataset
NormDial is a human-in-the-loop synthetically generated bilingual (Chinese & English) dyadic dialogue dataset for studying social norms as they appear in different conversational contexts. Dialogue turns are further labeled on whether they adhere to or violate a given social norm with textual explanations. Our human-AI collaboration framework for creating NormDial is shown in Figure 2.
**Social Norm Augmentation (Stage 0 in Figure 2).** The Linguistic Data Consortium (LDC) (Linguistic Data Consortium, 2022) taxonomizes 10 categorizations of social norms in its guidelines--apology, compliment, condolence, criticism, greeting, leave, persuasion, request, response to compliment, giving thanks--and provides a detailed set of associated norms (5 for each category) for Chinese culture. Examples of verbal evidence of adherence to a norm in a conversational context, alongside the relationship data of each hypothetical interlocutor, are provided as details for norm definitions.
We augment this starting set of validated social norms by in-context prompting ChatGPT (Wei et al., 2022), making use of LDC norm descriptions and examples in our prompt as few-shot in-context examples, to generate a greater set of social norms for Chinese culture, which are then conditioned on and further prompted to generate corresponding norms for American culture. To verify the correctness and applicability of generated norms for each cultural context, we task annotators who identify as native speakers of each respective language and who have significant (10+ years) lived experiences
in each culture to manually evaluate and rectify if each generated norm is (1) factually correct according to their own lived experiences, (2) in line with the defined norm category, (3) specific to the culture, and (4) detailed in its description, removing those that did not satisfy these criteria. This process enables us to collect a dataset of 133 and 134 Chinese and American social norms, respectively (see Table 1). Additional norm examples alongside the prompts used are shown in Appendix B.
**Scenario Imagination and Situation Elaboration (Stages 1 and 2 in Figure 2).** Social norms manifest in different ways depending on the conversational context [10]. An issue in dialogue generation from a small amount of hand-written data is its lack of diversity, as in-context examples have a large impact on prompting results [11]. To tackle this issue, with our augmented set of social norms, we first prompt ChatGPT using one-shot learning to generate a list of 10 short scenarios in the form of _social relation; location_ where given norms are most likely to take place in real life. In the second stage, we combine each scenario with a given social norm to enable ChatGPT to elaborate on and expand each scenario description into ones that are more detailed and realistic. In total, we obtained 4231 unique situations from this process; the topics of elaborated situations as captured by a 30-topic Latent Dirichlet Allocation (LDA) model are presented in Appendix E. To ensure that situations are elaborated faithfully from the given norm, we collected a sample of 218 situations along with their norms for three annotators to verify if each situation entails the norm. The results in Appendix D show high faithfulness scores, with the lowest for American norm violations. For the final version of NormDial, we manually verify and remove situations that deviate from the norm descriptions (releasing both raw and cleaned datasets).
Table 1: Examples of manually verified Chinese and American norms generated by ChatGPT.

**Chinese Norm (Respond to Compliments)**: When a person of lower status responds to a compliment from someone of higher status, it is common to express gratitude and acknowledge the compliment gracefully. [...]

Figure 2: Our human-AI collaboration framework for creating NormDial, through (0) norm augmentation with a small set of expert-labeled social norms under the LDC taxonomy, (1) scenario generation, (2) situation elaboration, (3) dialogue generation, and (4) turn-by-turn norm labeling with human verification at every stage. We show a Chinese dialogue generation example here for illustration; the same pipeline is adopted for English dialogues.

**Dialogue Generation (Stage 3 in Figure 2).** By prompting ChatGPT with pairs of norms and their elaborated situations along with an in-context example, we generate turn-by-turn dyadic dialogues that either adhere to or violate the given social norm. Shown in Figure 3, we find that CoT prompting with Scenario Generation (Stage 1) and Situation Elaboration (Stage 2) greatly improves dialogue lexical diversity as compared to directly generating dialogues from norms alone (Simple), measured by distinct N-grams. Prompts used at each stage are provided in Appendix C.
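For reference, distinct N-grams can be computed as the ratio of unique to total N-grams over the generated dialogues; a small sketch follows (whitespace tokenization is our simplifying assumption; the paper tokenizes Chinese with jieba).

```python
# Distinct-N: fraction of N-grams in a corpus that are unique, as in Figure 3.
from typing import List

def distinct_n(texts: List[str], n: int) -> float:
    total, unique = 0, set()
    for text in texts:
        tokens = text.split()  # for Chinese, substitute jieba.lcut(text)
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(grams)
        unique.update(grams)
    return len(unique) / total if total else 0.0

# distinct_n(["a b c", "a b d"], 2) == 0.75  (3 unique bigrams out of 4)
```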
**Automatic Turn-level Annotation of Norm Adherence and Violation (Stage 4 in Figure 2).** With this set of situationally grounded dyadic dialogues, we then further prompt ChatGPT using Chain-of-Thought (CoT) (Wei et al., 2022) reasoning to label whether each dialogue turn (1) adheres to, (2) violates, or (3) is not relevant to the given norm. Providing the social norm, situation, and dialogue, we prompt ChatGPT to (1) summarize the norm into a short description of its rules and actions as the _norm action_; (2) respond with the names of the characters who are mostly involved with acting in alignment with the norm as the _norm actors_; and (3), for each dialogue turn, with information about the _norm action_ and _norm actors_, predict the label (_Adhered_, _Violated_, or _Not Relevant_) together with a short textual explanation of each decision.
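A schematic of how such a three-step CoT labeling prompt could be assembled is sketched below; the wording is our illustrative assumption (the authors' actual prompts appear in Appendix C).

```python
# Illustrative assembly of the three-step CoT turn-labeling prompt.
def build_labeling_prompt(norm: str, situation: str, turns: list) -> str:
    dialogue = "\n".join(f"Turn {i + 1}: {t}" for i, t in enumerate(turns))
    return (
        f"Social norm: {norm}\nSituation: {situation}\nDialogue:\n{dialogue}\n\n"
        "Step 1: Summarize the norm into a short description of its rules and "
        "actions (the norm action).\n"
        "Step 2: Name the characters mainly responsible for acting in line "
        "with the norm (the norm actors).\n"
        "Step 3: For each turn, using the norm action and norm actors, output "
        "a label from {Adhered, Violated, Not Relevant} and a short "
        "explanation of the decision."
    )
```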
In the next section, we discuss the evaluation of generated dialogue quality and the automatic turn-level annotation of norm adherences and violations.
## 4 Evaluation
We conduct a human evaluation on the quality of dialogues and the correctness of model-predicted dialogue-turn norm adherence and violation labels with a pool of 12 annotators, each possessing significant lived experiences in both cultures.
**Synthetic Dialogue Quality.** To determine the quality of our Chinese and American synthetic dialogues, we perform a human evaluation against two types of baselines: (1) specifically curated domain-relevant dialogues or pre-existing and naturally occurring dialogue datasets, and (2) human-written dialogues specifically for our task.
For the former, we compare our Chinese dialogues against a collection of 200 dialogues randomly sampled from a Linguistic Data Consortium (LDC) release containing 413 Chinese conversations with expert-annotated turn-by-turn Chinese norm observations from video transcripts. These conversations contain dialogues and chitchat from sources such as TV shows, Vlogs, and other types of videos from Bilibili, a Chinese video-sharing platform. For English dialogues, as no domain-comparable dialogue dataset exists, we compare against a set of 200 realistic, human-written dialogues reflecting daily communication that covers various topics about daily life, DailyDialog (Li et al., 2017), a comparison that has been used for previous evaluations of synthetic dialogue quality (Chen et al., 2023).
For the latter, to evaluate our dialogues against human-written counterparts specific to our task, we asked three native speakers of Chinese and English to creatively write a set of 20 dialogues for each language, based on a sample of 20 norms and situations obtained by randomly selecting one adherence and one violation (_norm_, _situation_) pair from each of our 10 norm categories.
We ask sets of 3 annotators to rate each conversation, formatted consistently, on (1) naturalness, or how natural each dialogue sounded, (2) nativeness, whether they thought the dialogue came from a native speaker, (3) coherence, and (4) interestingness, each on a 5-point Likert scale, with the final score for each aspect being the average of the scores received. In a separate task, we ask annotators to rate whether synthetic dialogues were faithful and on-topic with respect to their provided social norm, i.e. _does the main topic of the dialogue match the provided social norm, yes/no?_, with final labels obtained via majority voting. A comparison of quality evaluation scores is shown in Table 2, and details about evaluation metrics are shown in Appendix Section A.
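The two aggregation rules above amount to the following (a trivial sketch; variable names are ours):

```python
# Mean Likert score per quality aspect; majority vote for on-topic labels.
from collections import Counter
from statistics import mean

def aspect_score(annotator_scores: list) -> float:
    return mean(annotator_scores)               # e.g. [4, 5, 3] -> 4.0

def on_topic_label(votes: list) -> str:
    return Counter(votes).most_common(1)[0][0]  # ["yes","yes","no"] -> "yes"
```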
Figure 3: Distinct N-grams for both US and Chinese generated dialogues under Simple and CoT prompting. CoT prompting with situation elaboration improves dialogue diversity compared to Simple prompting without using situations. Chinese texts are tokenized by jieba.1
For baseline (1), annotators rated NormDial dialogues higher in almost all dialogue quality aspects than their pre-existing curated and naturally occurring dialogue baseline counterparts, with only DailyDialog outperforming NormDial in interestingness and LDC outperforming our synthetic dialogues in nativeness; synthetic dialogues were rated higher to a statistically significant degree in coherence and interestingness for Chinese, and in nativeness and coherence for the American side. As for baseline (2), NormDial dialogues were found to be of higher quality than their specifically tasked human-written counterparts for English and lower for Chinese, in line with language performance differences for ChatGPT. Despite this performance difference in Chinese dialogues, it took the average annotator more than an hour to finish writing 20 dialogues; as this is a highly creative task, recruiting annotators for it at scale can prove challenging, given emerging evidence of annotator fatigue (Derczynski et al., 2020), especially for creative tasks.2 On the other hand, taking the majority vote of dialogue on-topic labels from annotators showed that 92.5% and 86.5% of dialogues for Chinese and English, respectively, were faithful to their prompted (norm, situation) pairs.
Footnote 2: https://www.reddit.com/tr/ProflicAc/comments/17btjs/
**Automatic Turn-level Annotation.** As our automatic dialogue-turn norm adherence/violation labeling via prompting is separate from dialogue generation, a natural question arises as to how well existing LLMs are able to perform this task, i.e., _how well can LLMs accurately detect if a given social norm is adhered to or violated in a conversational round_? Here, collecting a sample of 200 dialogues for each language, two annotators manually validated the correctness of ChatGPT-labeled dialogue rounds, resolving disagreements via discussion to produce a set of final gold-standard labels for 1281 Chinese and 1503 English dialogue rounds. Table 3 shows the precision, recall, and F1 scores of ChatGPT predictions against ground truth labels, stratified across dialogue language and label categories.
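Given gold and predicted turn labels, the per-label scores in Table 3 correspond to a standard classification report; for instance, with scikit-learn (the variable contents below are placeholders, not the released data):

```python
# Per-label precision/recall/F1 of predicted turn labels against gold labels.
from sklearn.metrics import classification_report

gold = ["Adhered", "Violated", "Not Relevant", "Adhered"]  # annotator-corrected
pred = ["Adhered", "Adhered", "Not Relevant", "Adhered"]   # ChatGPT predictions
print(classification_report(gold, pred, digits=2))
```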
As shown in Table 3, ChatGPT empirically achieved a higher F1 score on correctly predicting whether a dialogue round adhered to or was not relevant to a given norm, but performed significantly worse in predicting norm violations for both languages. Given the high recall for _Violated_, many conversational turns that do not violate the norm were falsely labeled as violations, even with few-shot expert prompting. Under qualitative examination, we found that many of the turns falsely labeled as violations instead provide _context_ before the actual violation rather than the violating behavior itself, suggesting the potential for further future improvement in this area.
## 5 Conclusion
We presented NormDial, a synthetic, validated, high-quality dyadic dialogue dataset with turn-by-turn annotations of social norm adherences and violations for Chinese and American cultures in Chinese and English. Our evaluation of synthetic dialogue quality reveals that our dataset is comparable to and/or exceeds the quality of naturally occurring and domain-specific dialogue datasets. Furthermore, our analysis of LLM predictions of norm observance reveals areas for existing models to improve in this domain. Our resource points towards new directions for understanding the nuances of social norms as they manifest in conversational contexts that span across languages and cultures.
| Sources | Natural | Native | Coherent | Interesting |
| --- | --- | --- | --- | --- |
| Ours (ZH) | 3.78 | 3.78 | 3.94 | 3.66 |
| LDC | 3.69 | 3.81 | 3.28 | 2.53 |
| Human (ZH) | **4.50** | **4.60** | **4.55** | **4.25** |
| Ours (EN) | **4.41** | 4.78 | **4.85** | 4.22 |
| DailyDialog | 4.39 | 4.40 | 4.72 | **4.34** |
| Human (EN) | 4.15 | **4.95** | 4.20 | 3.95 |

Table 2: Dialogue quality evaluation across NormDial synthetic dialogues (Ours), established and domain-specific baselines (LDC and DailyDialog), and human-written baselines (Human). Chinese language data is marked as ZH; English as EN.
| Norm Labels (Chinese) | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| Adhered | 78.4% | 84.3% | 0.81 |
| Not Relevant | 94.4% | 80.7% | 0.87 |
| Violated | 53.6% | 85.6% | 0.66 |

| Norm Labels (English) | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| Adhered | 77.0% | 89.7% | 0.83 |
| Not Relevant | 95.9% | 68.6% | 0.80 |
| Violated | 51.2% | 98.8% | 0.68 |

Table 3: ChatGPT norm adherence and violation label prediction performance against annotator-corrected gold labels.
## Limitations
**Annotation Bias.** While we have augmented our synthetic data generation pipeline with human validation at every stage from individuals who possess significant lived experiences in Chinese and American cultural contexts to ensure correctness, it is important to acknowledge that our ultimate _representation_ of the views and values of these cultures is limited to these lived experiences. In working towards more culturally representative studies, it is important to broaden the views and values of those who are represented in experimental data and to acknowledge the presence of further _intra_-cultural variation [16].
**Language Model Bias.** As noted above, language models also carry sources of bias arising from their fundamental trained tendency to mimic patterns in their training data. As such, it is important to critically question and challenge the viewpoints of those who are represented and reproduced within such models, and which may seep into our dataset as a result, even under significant human validation.
## Ethical Considerations
**Names as Sources of Bias.** Within our human evaluation and annotation, a deliberate measure was implemented to address the potential introduction of biases by excluding character names during dialogue rounds. The purpose of this approach was to minimize the potential impact of personal biases or preconceived notions that may arise from specific names, ethnic backgrounds, or genders. As a result, annotators were solely guided by the dialogue's content and the cultural norms under discussion. In our data release, we emphasize the same need for future work to undertake similar measures to mitigate such sources of bias in annotation.
**Social Norms.** Cultural nuances are complex and multifaceted and do not remain static across time. Here, we have collaborated with social science researchers with significant established expertise in Chinese and American cultures to ensure the accuracy and validity of our set of social norms under rigorous verification, further consulting and adapting the taxonomy provided by the LDC in defining and characterizing social norms. It is important to note here that it is impossible to have a "ground truth" set of social norms for every culture, as they are by nature aggregated judgments of acceptability that are subject to variation across longitudinal scales. Nonetheless, a central contribution of this work is a framework to create faithful dialogues that are themselves based on any given set of social norms, which allows for "plug-and-play" dialogue generation for any additional set of social norms.
## Acknowledgements
We thank Maximillian Chen and Bo Feng for their helpful comments, thoughts, and discussions; Winston Wu, Zoie Zhao, Shiyu Zhang, Gechen Shen, Anxin Yi, Feiyang Zhu, Christopher Lee, Aniv Ray, Batool Taraif, and other anonymous crowdworkers for their help in annotations and evaluation; and the anonymous reviewers for their helpful feedback. This research is being developed with funding from the Defense Advanced Research Projects Agency (DARPA) CCU Program No. HR001122C0034. The views, opinions and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. Sky CH-Wang is supported by a National Science Foundation Graduate Research Fellowship under Grant No. DGE-2036197.
|
2308.11889 | Internal exact controllability for Naghdi shell | In this work we study the exponential stability of the energy associated with a Naghdi shell model with localized internal dissipation. Using several tools from Riemannian geometry we show the well-posedness of the model, via semigroup theory, and obtain observability inequalities which allow us to prove the exponential decay of the total energy. As a consequence, we then use Russell's principle to obtain exact controllability. | Alexis Rodriguez Carranza, Jose Luis Ponte Bejarano, Juan Carlos Ponte Bejarano | 2023-08-23T03:23:03Z | http://arxiv.org/abs/2308.11889v1 | # Internal exact controllability for Naghdi shell
###### Abstract.
In this work we study the exponential stability of the energy associated with a Naghdi shell model with localized internal dissipation. Using several tools from Riemannian geometry we show the well-posedness of the model, via semigroup theory, and obtain observability inequalities which allow us to prove the exponential decay of the total energy. As a consequence, we then use Russell's principle to obtain exact controllability.
Key words and phrases: Internal controllability, Naghdi shell
###### Contents
* 1 Introduction and summary
* 1.1 Preliminaries and stationary problem
* 1.2 Equation of motion
* 2 The stabilization result
* 3 Some comments on escape vector fields
* 4 Naghdi shell stabilization with internal dissipation
* 5 Controllability via Stability
* 6 Conclusions
## 1. Introduction and summary
Let us denote by \(S\) a two dimensional smooth Riemannian manifold with the metric induced from \(I\!\!R^{3}\) and inner product denoted by \(g\) or simply \(<.,.>\). Recall that this means that for each \(p\in S\) we have an inner product \(<.,.>\) on the tangent space \(T_{p}S\), and that this assignment is \(C^{\infty}\). We will consider \(S\) as the middle surface of a thin shell.

Suppose we consider a bounded open region \(M\) of \(S\) with a smooth boundary \(\Gamma=\partial M\). This paper considers \(M\) as a thin shell of Naghdi type. For this dynamical model we study exponential stabilization of the total energy, provided we assume an internal localized dissipation acting on \(M\).

In the literature, most authors prefer to use the classical geometrical approach while working on properties of the solutions of evolution thin shell models. In those situations the middle surface is, instead, the image under a smooth map of a two dimensional connected domain of \(I\!\!R^{2}\), and is therefore described by just one coordinate patch. Several interesting models for plates or shells with variable coefficients, and other hyperbolic type systems obtained by using traditional geometry, become very difficult to treat, mainly due to the explicit presence of the Christoffel symbols \(\Gamma^{k}_{ij}(p)\). An interesting alternative was given in the work of Peng-Fei Yao [35] and collaborators about twenty years ago. In [35], Yao used an intrinsic model of the middle surface of a shallow shell as a two dimensional Riemannian manifold. This approach allows one to get satisfactory results using multipliers. The basic idea was initiated by S. Bochner in [8]. In order to verify an identity or a pointwise estimate, it suffices to do so at each point \(p\) relative to a coordinate frame field, which gives us more simplifications. The best coordinate system in our case would be the one in which the symbols \(\Gamma^{k}_{ij}(p)\) vanish at the given point \(p\). As in [8], the best frame will be given by the so-called coordinate system normal at \(p\).

This paper is devoted to the study of exponential stabilization of the total energy associated with a dynamic thin shell equation of Naghdi type in the presence of localized internal dissipation. Using several of Yao's ideas we show the well-posedness of the model, via semigroup theory, and obtain observability inequalities which allow us to prove the exponential decay of the total energy. As a consequence, we then use Russell's principle to obtain exact controllability.
### Preliminaries and stationary problem
Let \(S\) and \(M\) be as in the introduction. In addition we will assume \(S\) orientable, and a normal field at each \(x\in M\) will be denoted by \(N(x)\). The shell, a body of \(I\!\!R^{3}\), is defined as

\[S=\{p\in I\!\!R^{3},\quad p=x+zN(x),\quad x\in M,\quad|z|<\frac{h}{2}\}\]

where \(h>0\) denotes the (small) thickness of the shell and \(z\in I\!\!R\). In the Naghdi model, the displacement vector \(\xi(p)\) at a point \(p\in S\) can be approximated by

\[\xi(p)=\xi_{1}(x)+z\Psi(x),\quad x\in M, \tag{1}\]

where \(p=x+zN(x)\), \(\xi_{1}(x)\in I\!\!R^{3}\) denotes the displacement vector of the middle surface and \(\Psi(x)\) captures the rotations of the normal \(N(x)\) at each \(x\in M\). Following Naghdi's description [27] we assume that a normal after a deformation may not be a normal again, but that the distance from a point on the normal to the surface remains invariant. As a consequence, the deformations in the direction of the normal can be neglected. This implies in particular that \(\xi_{1}(x)\) and \(\Psi(x)\) in (1) can be decomposed as
\[\xi_{1}(x)=W_{1}(x)+w_{1}(x)N(x) \tag{2}\]
and
\[\Psi(x)=V(x)+w_{2}(x)N(x) \tag{3}\]

where \(W_{1}\) and \(V\in\chi(M)\), and \(w_{1}\), \(w_{2}\in C^{\infty}(M)\). Here \(\chi(M)\) denotes the set of all vector fields on \(M\). We recall that a vector field is a map which associates to each point \(p\in M\) a vector \(X(p)\in T_{p}(M)\), where \(T_{p}(M)\) is the tangent plane of \(M\) at \(p\).
In order to find a model describing deformations of the middle surface we need to analyse the tensor field of variation of the metric using the second and third fundamental forms on the surface \(M\). In other words, we consider
\[\Upsilon=\frac{1}{2}(\tilde{g}-g) \tag{4}\]
where \(g\) and \(\tilde{g}\) denote the metric induced on the middle surface before and after the deformation, respectively. \(\Upsilon\) is called the strain tensor of the middle surface. Given \(x\in M\), we choose a coordinate system normal at \(x\), say \(\{E_{1}(x),E_{2}(x),E_{3}(x)=N(x)\}\), which is a basis of \(I\!\!R^{3}\). Now, we calculate \(\tilde{g}(E_{i},E_{j})-g(E_{i},E_{j})\) to find, after linearization,
\[\Upsilon(\xi)=\frac{1}{2}(DW_{1}+D^{*}W_{1})+w_{1}\Pi \tag{5}\]
where \(\xi=(W_{1},V,w_{1},w_{2})\), \(DW_{1}\) is the covariant differential of \(W_{1}\), \(D^{*}W_{1}\) is the transpose of \(DW_{1}\), and \(\Pi\) is the second fundamental form of \(M\), which is a 2-covariant tensor. \(\Upsilon(\xi)\) is called the (linearized) change of metric tensor. In a similar way we can deduce the (linearized) change of curvature tensor
\[\chi_{0}(\xi)=\frac{1}{2}\{DV+D^{*}V+\Pi(.,DW_{1})+\Pi(DW_{1},.)\}+w_{2}\Pi+w_ {1}c \tag{6}\]
where \(c\) is the third fundamental form on the surface \(M\). Also, the tensor which captures rotations of the normal is given by:
\[\varphi_{0}(\xi)=\frac{1}{2}[Dw_{1}+V-i(W_{1})\Pi] \tag{7}\]
where \(i(W_{1})\Pi\) is the interior product of the tensor field \(\Pi\) by the vector field \(W_{1}\). It is convenient to write (5), (6), (7) in a more concise way. For example, consider the change of variable
\[W_{2}=V+i(W_{1})\Pi \tag{8}\]
for \(x\in M\). As above we consider the coordinate system normal at \(x\), \(\{E_{1}(x),E_{2}(x),E_{3}(x)=N(x)\}\). Direct calculations give us
\[DW_{2}(E_{i},E_{j})=DV(E_{i},E_{j})+D\Pi(W_{1},E_{i},E_{j})+\Pi(E_{i},D_{E_{j} }W_{1})\]
because \(D_{E_{j}}E_{i}(x)=0.\) Thus \(DW_{2}=DV+\Pi(.,DW_{1})+i(W_{1})D\Pi\) for \(x\in M.\) Substitution into (6) gives
\[\chi_{0}(\xi)=\frac{1}{2}(DW_{2}+D^{*}W_{2})+K_{ol}(\xi) \tag{9}\]
substitution of (8) into (7) give us
\[\varphi_{0}(\xi)=\frac{1}{2}Dw_{1}+\varphi_{ol}(\xi) \tag{10}\]
where \(K_{ol}(\xi)=-i(W_{1})D\Pi+w_{1}c+w_{2}\Pi\) and \(\varphi_{ol}(\xi)=-i(W_{1})\Pi+\frac{W_{2}}{2}\). Assuming the material of the shell is homogeneous and isotropic, we find in the literature (see for instance [15]) the stress-strain relations of the 3-dimensional shell on the middle surface \(M\), and express the energy as an integral over \(M\times[\frac{-h}{2},\frac{h}{2}]\). Let \(R>0\) be the smallest principal radius of curvature of the undeformed middle surface (see [15], pg 166). As usual, for a thin shell it is assumed that \(\frac{h}{R}\ll 1\) (see [15], pg 18). Using this assumption, the following approximation of the strain energy of the shell is obtained (see [15], pg 253)
\[\begin{split} I(\xi)=&\alpha h\int_{M}\left\{\left|\Upsilon(\xi)\right|^{2}+2\left|\varphi_{0}(\xi)\right|^{2}+w_{2}^{2}+\beta\left(tr(\Upsilon(\xi))+w_{2}\right)^{2}\right.\\ &\left.+\gamma\left[\left|\chi_{0}(\xi)\right|^{2}+\frac{\left|Dw_{2}\right|^{2}}{2}+\beta\left(tr(\chi_{0}(\xi))\right)^{2}\right]\right\}dM\end{split} \tag{11}\]
where \(\alpha=\frac{E}{1+\mu}\), \(\beta=\frac{\mu}{1-2\mu}\) and \(\gamma=\frac{h^{2}}{12}\). Here, \(E\) denotes Young's modulus and \(\mu\) is Poisson's ratio (\(0<\mu<\frac{1}{2}\)).

The above expression (11) of \(I(\xi)\) allows us to consider the following symmetric bilinear form \(\tilde{B_{0}}\) associated to the strain energy, defined on the space \(Z=[H^{1}(M,\Lambda)]^{2}\times[H^{1}(M)]^{2}\):
\[\tilde{B_{0}}(\xi,\theta)=\frac{\alpha h}{2}\int_{M}B_{0}(\xi,\theta)dM \tag{12}\]
where \(\xi=(W_{1},W_{2},w_{1},w_{2})\in Z\), \(\theta=(\theta_{1},\theta_{2},u_{1},u_{2})\in Z\) and
\[\begin{split} B_{0}(\xi,\theta)&=2\langle\Upsilon(\xi),\Upsilon(\theta)\rangle+4\langle\varphi_{0}(\xi),\varphi_{0}(\theta)\rangle+2w_{2}u_{2}\\ &+2\beta\left(\operatorname{tr}(\Upsilon(\xi))+w_{2}\right)\left(\operatorname{tr}(\Upsilon(\theta))+u_{2}\right)\\ &+2\gamma\langle\chi_{0}(\xi),\chi_{0}(\theta)\rangle+\gamma\left\langle Dw_{2},Du_{2}\right\rangle\\ &+2\gamma\beta\operatorname{tr}(\chi_{0}(\xi))\operatorname{tr}(\chi_{0}(\theta))\end{split} \tag{13}\]
In order to obtain Green identify we consider the Hodge-Laplace type operator \(\Delta_{\beta}\) given by:
\[\Delta_{\beta}=-[\delta d+2(1+\beta)d\delta] \tag{14}\]
where \(\beta=\frac{\mu}{1-2\mu}\), \(d\) is the exterior derivative and \(\delta\) is its formal adjoint. The operator \(\Delta_{\beta}\) takes a \(p\)-form to another \(p\)-form. In our case, we will only need \(\Delta_{\beta}\) acting on 1-forms. In [35] (Theorem 5.1) the following result was proved: Let us consider the bilinear form \(\tilde{B_{0}}(.,.)\) given by (12). Then, for any \(\xi=(W_{1},W_{2},w_{1},w_{2})\in Z\) and \(\theta=(\theta_{1},\theta_{2},u_{1},u_{2})\in Z\), the identity:

\[\widetilde{B}_{0}(\xi,\theta)=\frac{\alpha h}{2}\left\langle A_{0}\xi,\theta\right\rangle_{\mathbf{L}^{2}}+\frac{\alpha h}{2}\int_{\Gamma=\partial M}\partial(A_{0}\xi,\theta)d\Gamma \tag{15}\]

holds, where \(\mathbf{L}^{2}=[L^{2}(M,\Lambda)]^{2}\times[L^{2}(M)]^{2}\) and

\[\begin{split} A_{0}(\xi)=&-(\Delta_{\beta}W_{1}+F_{1}(\xi),\gamma\Delta_{\beta}W_{2}+F_{2}(\xi),\\ &\ \ \Delta w_{1}+f_{1}(\xi),\gamma\Delta w_{2}+f_{2}(\xi))\\ \partial\left(A_{0}\xi,\theta\right)=&\left\langle c_{1}(\xi),\theta_{1}\right\rangle+\gamma\left\langle c_{2}(\xi),\theta_{2}\right\rangle\\ &+2\left\langle\varphi_{0}(\xi),\eta\right\rangle u_{1}+\gamma\frac{\partial w_{2}}{\partial\eta}u_{2}\end{split} \tag{16}\]
with
\[\begin{split} c_{1}(\xi)&=2i(\eta)\Upsilon(\xi)+2\beta\left(\operatorname{tr}\Upsilon(\xi)+w_{2}\right)\eta\\ c_{2}(\xi)&=2\gamma i(\eta)\chi_{0}(\xi)+2\beta\operatorname{tr}(\chi_{0}(\xi))\eta\end{split} \tag{17}\]

where \(\eta\) denotes the exterior normal vector along the curve \(\Gamma=\partial M\), \(\Delta\) is the usual Laplace-Beltrami operator on the Riemannian manifold \(M\), and \(F_{j}(\xi)\) and \(f_{j}(\xi)\) are first order terms, i.e., of order \(\leq 1\), for \(j=1,2\).

The above description was given in [35]. In fact, the variable \(\xi=(W_{1},W_{2},w_{1},w_{2})\) satisfies the following system

\[\begin{cases}W_{1}^{\prime\prime}-\Delta_{\beta}W_{1}+F_{1}(\xi)=0&\text{on }M\times(0,+\infty)\\ W_{2}^{\prime\prime}-\Delta_{\beta}W_{2}+F_{2}(\xi)=0&\text{on }M\times(0,+\infty)\\ w_{1}^{\prime\prime}-\Delta w_{1}+f_{1}(\xi)=0&\text{on }M\times(0,+\infty)\\ w_{2}^{\prime\prime}-\Delta w_{2}+f_{2}(\xi)=0&\text{on }M\times(0,+\infty)\end{cases}\]
with
\[\left\{\begin{array}{l}\xi=0\quad\text{ on }\Gamma(M)\times(0,+\infty)\\ \xi(0)=\xi_{0},\ \xi_{t}(0)=\xi_{1}\quad\text{on }M\end{array}\right.\]
Here \(\Gamma(M)\) denotes the boundary of \(M\). Let us consider the following spaces
\[H_{\Gamma}^{1}(M)=\left\{u\in H^{1}(M),u\equiv 0\text{ on }\Gamma= \Gamma(M)\right\}\] \[H_{\Gamma}^{1}(M,\Lambda)=\left\{z\in H^{1}(M,\Lambda),z\equiv 0 \text{ on }\Gamma=\Gamma(M)\right\}\]
and
\[X_{\Gamma}=\left[H_{\Gamma}^{1}(M,\Lambda)\right]^{2}\times\left[H_{\Gamma}^{1}(M)\right]^{2}\]

Next, we can prove that the symmetric bilinear form \(\tilde{B}(\xi,\theta)\) defined in (15) for any \(\xi,\theta\in Z\) is coercive, that is, there exists a positive constant \(c_{3}\) such that

\[\widetilde{B}(\xi,\xi)\geqslant c_{3}\|\xi\|_{X_{\Gamma}}^{2}\quad\text{ for any }\xi\in X_{\Gamma} \tag{18}\]

In fact, substitution of (5), (9) and (10) into (13), followed by integration over \(M\), gives us

\[\tilde{B}_{0}(\xi,\xi)+K_{2}\|\xi\|_{L^{2}}^{2}\geqslant K_{1}\|\xi\|_{H_{\Gamma}^{1}(M)}^{2} \tag{19}\]
for some positive constants \(K_{1}\) and \(K_{2}\).
In order to use (19) to obtain (18) we can use the following uniqueness result: Let \(\xi=(W_{1},W_{2},w_{1},w_{2})\) belong to \(X_{\Gamma}\) and satisfy \(\Upsilon(\xi)=0\), \(\chi_{0}(\xi)=0\), \(\varphi_{0}(\xi)=0\) and \(w_{2}=0\); then \(\xi=0\) for all \(x\in M\). Using this uniqueness result together with the method called compactness-uniqueness, we can "absorb" the term \(K_{2}\|\xi\|_{L^{2}}^{2}\) into the right hand side of (19) and conclude the validity of (18). The expression (15) for the bilinear form \(\widetilde{B}_{0}(\xi,\theta)\) is known. Using the above discussion, we deduce that the variational problem associated to the bilinear form \(\widetilde{B}_{0}\) is equivalent to the following boundary value problem: To find \(\xi=(W_{1},W_{2},w_{1},w_{2})\) such that

\[\left\{\begin{array}{l}\frac{\alpha h}{2}A_{0}\xi=\tilde{F}\\ W_{1}|_{\Gamma}=W_{2}|_{\Gamma}=0,\quad w_{1}|_{\Gamma}=w_{2}|_{\Gamma}=0\end{array}\right. \tag{20}\]
for a given \(\tilde{F}\in\mathbf{L}^{2}=\left[L^{2}(M,\Lambda)\right]^{2}\times\left[L^{2} (M)\right]^{2}\)
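For the reader's convenience, here is a sketch of the compactness-uniqueness step mentioned above (standard, with details as in [35]).

```latex
% Sketch: suppose (18) fails. Then there exist \xi_k \in X_\Gamma with
%   \|\xi_k\|_{X_\Gamma} = 1  and  \widetilde{B}_0(\xi_k,\xi_k) \to 0.
% By (19), (\xi_k) is bounded in X_\Gamma; passing to a subsequence, \xi_k
% converges weakly in X_\Gamma and strongly in L^2 to some \xi. By weak lower
% semicontinuity, \Upsilon(\xi)=\chi_0(\xi)=\varphi_0(\xi)=0 and w_2=0, so the
% uniqueness result gives \xi = 0. But then (19) yields
%   K_1 = K_1\|\xi_k\|_{X_\Gamma}^2
%       \le \widetilde{B}_0(\xi_k,\xi_k) + K_2\|\xi_k\|_{L^2}^2 \to 0,
% a contradiction; this is how the L^2 term in (19) is absorbed to give (18).
```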
### Equation of motion
In this section we consider the equations of motion of the Naghdi model. We assume that there are no external loads on the shell and that the shell is clamped along \(\Gamma=\Gamma(M)\). In this situation \(\tilde{\xi}=\tilde{\xi}(x,t)\), where \(x\in M\) and \(t\) is time. In order to include the kinetic energy in our problem it is convenient to consider the change of variables \(t\to tc_{1}^{-1}\), where \(c_{1}^{2}=\frac{2}{\alpha}\) with \(\alpha\) as in (11). Also, denoting by
\[R=\left[\begin{array}{cccc}1&0&0&0\\ 0&\gamma&0&0\\ 0&0&1&0\\ 0&0&0&\gamma\end{array}\right]\]
and \(\xi=R^{1/2}.\widetilde{\xi}\) we write the equations of evolution of a Naghdi shell for \(\xi=(\boldsymbol{W}_{1},\mathbf{W}_{2}\mathbf{w}_{1},\mathbf{w}_{2})\) as
\[\left\{\begin{array}{l}\xi_{tt}+A\xi=0\text{ on }M\times\mathbb{R}^{+}\\ \xi=0\text{ on }\Gamma(M)\times\mathbb{R}^{+}\\ \xi(x,0)=\xi_{0}(x),\ \xi_{t}(x,0)=\xi_{1}(x)\text{ on }M\end{array}\right. \tag{21}\]
where \(\mathbf{A}=R^{-1/2}\mathbf{A}_{0}R^{-1/2}\). The bilinear form \(J\) associated to operator \(\mathbf{A}\) is given by
\[\begin{split}\mathbf{J}(\xi,u)&=2\langle\Upsilon(\xi),\Upsilon(u)\rangle+4\left\langle\boldsymbol{\varphi}_{0}(\xi),\boldsymbol{\varphi}_{0}(u)\right\rangle\\ &+2\beta\left[Tr(\Upsilon(\xi))+\frac{w_{2}}{\sqrt{\gamma}}\right]\left[Tr(\Upsilon(u))+\frac{u_{2}}{\sqrt{\gamma}}\right]\\ &+2\beta Tr(\chi_{0}(\xi))Tr(\chi_{0}(u))+2\left\langle\chi_{0}(\xi),\chi_{0}(u)\right\rangle\\ &+\left\langle Dw_{2},Du_{2}\right\rangle+\frac{2}{\gamma}w_{2}u_{2}\end{split} \tag{22}\]

for all \(\xi=(W_{1},W_{2},w_{1},w_{2})\) and \(u=(U_{1},U_{2},u_{1},u_{2})\) with \(\xi\) and \(u\in Z\). Also \(\Upsilon\), \(\chi_{0}\) and \(\varphi_{0}\) are as in (5), (6) and (7), respectively. The bilinear form associated with the operator \(A\) is \(\mathbf{J}(\xi,u)\), and the corresponding Green formula reads

\[\widetilde{J}(\xi,u)=\langle\mathbf{A}\xi,u\rangle_{Y}+\int_{\Gamma=\partial M}\partial\left(\mathbf{A}\xi,u\right)d\Gamma \tag{23}\]

where \(\tilde{J}(\xi,u)=\int_{M}\mathbf{J}(\xi,u)dM\). Now, we consider a localized perturbation of model (21): To find \(\xi=(W_{1},W_{2},w_{1},w_{2})\in Z\) satisfying
\[\left\{\begin{array}{l}\xi_{tt}+A\xi+a(x)\ \xi_{t}=0\quad\text{ on }M\times(0,+\infty)\\ \xi=0\text{ on }\Gamma(M)\times(0,+\infty)\\ \xi(x,0)=\xi_{0}(x),\xi_{t}(x,0)=\xi_{1}(x)\text{ on }M\end{array}\right. \tag{24}\]
where \(a(x)\) is a real valued function defined for all \(x\in M\) which has support in a small interior region of \(M\). Well-posedness of problem (24) follows using standard tools, for example semigroup theory [29], and we omit the proof here. Before we present a proof of the uniform stabilization of the total energy of model (24) we need some preliminaries.
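To indicate how the semigroup argument runs, (24) can be written as a first order system; the following is only a sketch, under the standard assumption that \(A\) is the nonnegative self-adjoint operator on \(Y\) associated with the coercive form \(\tilde{J}\).

```latex
% First-order reformulation of (24), a sketch (not the authors' full proof):
% set U = (\xi, \xi_t) on \mathcal{H} = X_\Gamma \times Y, so that
%   U' = \mathcal{A}U, \qquad
%   \mathcal{A} = \begin{pmatrix} 0 & I \\ -A & -a(x)I \end{pmatrix}, \qquad
%   D(\mathcal{A}) = D(A) \times X_\Gamma.
% The undamped part is skew-adjoint for the energy norm, and multiplication
% by the bounded function a(x) \ge 0 is a bounded dissipative perturbation,
% so \mathcal{A} generates a C_0-semigroup of contractions by the
% Lumer-Phillips theorem [29].
```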
Let us denote by \(T^{2}(M)\) the set of all tensor fields on \(M\) of rank 2. We define the bilinear form \(b(\cdot,\cdot):T^{2}(M)\times T^{2}(M)\mapsto\mathbb{R}\) given by
\[b\left(T_{1},T_{2}\right)=\langle T_{1},T_{2}\rangle+\beta(trace(T_{1}))( trace(T_{2})) \tag{25}\]
for any \(T_{1},T_{2}\in T^{2}(M)\). Here, for any \(T\in T^{2}(M)\), the trace of \(T\) at \(x\in M\) is given by \(trace(T)=\sum_{i=1}^{2}T(e_{i},e_{i})\), where \(\{e_{1},e_{2}\}\) is an orthonormal basis of \(T_{x}M\). For any \(W\in H^{1}(M,\Lambda)\) we define
\[\widetilde{S}(W)=\frac{1}{2}\left(DW+D^{*}W\right) \tag{26}\]
It is known that there exist a positive constant \(\lambda\) such that
\[2\|\tilde{S}(W)\|_{L^{2}(M,T^{2})}=\|DW+D^{*}W\|_{L^{2}(M,T^{2})}\geq\lambda\| W\|_{H^{1}(M,\Lambda)} \tag{27}\]
for any \(W\in H^{1}(M,\Lambda)\); see Lemma 4.5 in [35]. We claim that there exists \(\lambda_{0}>0\) such that

\[\lambda_{0}\int_{M}\left[b(\widetilde{S}(W),\widetilde{S}(W))+|W|^{2}\right]dM\geqslant\|DW\|_{L^{2}(M,\Lambda)}^{2} \tag{28}\]
holds for any \(W\in H^{1}_{\Gamma}(M,\Lambda)\). In fact,
\[b(\widetilde{S}(W),\widetilde{S}(W))=\frac{1}{4}\left|DW+D^{*}W\right|^{2}+\beta\left(trace\,\widetilde{S}(W)\right)^{2}\]
consequently,
\[b(\widetilde{S}(W),\widetilde{S}(W))+|W|^{2}\geq\frac{1}{4}\left|DW+D^{*}W\right|^{2} \tag{29}\]
Integration of (29) over \(M\) together with (27) gives

\[\int_{M}\left[b(\widetilde{S}(W),\widetilde{S}(W))+|W|^{2}\right]dM\geqslant\frac{1}{4}\left\|DW+D^{*}W\right\|_{L^{2}(M,T^{2})}^{2}\geqslant\frac{\lambda^{2}}{4}\|W\|_{H^{1}(M,\Lambda)}^{2}\geqslant\frac{\lambda^{2}}{4}\|DW\|_{L^{2}(M,\Lambda)}^{2},\]

which proves our claim. Next, we will use the technique of multipliers to obtain appropriate identities and inequalities. Let us assume that, given \(V\in\chi(M)\), there exists a function \(v(x)\in C^{\infty}(M)\) such that
\[DV(x)(X,X)=v(x)|X|^{2} \tag{30}\]
for all \(X\in T_{x}(M),x\in M\). Given \(\xi=(W_{1},W_{2},w_{1},w_{2})\in Z\) we consider
\[m(\xi)=\left(D_{V}W_{1},D_{V}W_{2},V\left(w_{1}\right),V\left(w_{2}\right)\right) \tag{31}\]
We recall that \(DV(X,X)=\langle\nabla_{X}V,X\rangle\) for all \(X\in T_{x}(M)\), \(x\in M\).
Using our assumption (30), we take the inner product of equation (24) with \(m(\xi)\) and integrate over \(M\)
\[\langle\xi_{tt},m(\xi)\rangle_{L^{2}(M,\Lambda)}+\langle A\xi,m(\xi)\rangle+ \langle a(x)\xi_{t},m(\xi)\rangle=0 \tag{32}\]
we can use identity (23) to deduce from (32)
\[\langle\xi_{tt},m(\xi)\rangle_{L^{2}(M,\Lambda)}+\tilde{J}(\xi,m(\xi))-\int_{ \Gamma=\Gamma(M)}\partial(A\xi,m(\xi))d\Gamma=-\left\langle a(x)\xi_{t},m(\xi )\right\rangle. \tag{33}\]
Using calculations similar to the ones given in Lemma 5.2 of [35], we deduce the identity
\[2\widetilde{J}(\xi,m(\xi))= \int_{\Gamma}J(\xi,\xi)\langle V,\eta\rangle d\Gamma-2\int_{M}vJ (\xi,\xi)dM+2\int_{M}K(\xi,\xi)dM\] \[+l_{0}(\xi) \tag{34}\]
where \(\eta\) is the outside normal vector field along \(\Gamma=\Gamma(M)\),
\[\begin{split} K(\xi,\xi)&=2b(\tilde{S}(W_{1}),G\left(V,DW_{1}\right))+2b(\tilde{S}\left(W_{2}\right),G\left(V,DW_{2}\right))\\ &+4v\left|\varphi_{0}(\xi)\right|^{2}+v\left|Dw_{2}\right|^{2}.\end{split} \tag{35}\]
Here \(b(\cdot,\cdot)\) is as in (25) and \(G\) is a map defined by,
\[\begin{split}& G:\chi(M)\times T^{2}(M)\longmapsto T^{2}(M)\\ & G(W,T)=\frac{1}{2}\left[T(\cdot,\nabla_{\cdot}W)+T^{*}(\cdot,\nabla_{\cdot}W)\right]\end{split} \tag{36}\]
Finally, in (34), \(l_{0}(\xi)\) denotes lower order terms with respect to the energy in the sense that for any \(\epsilon>0\) there exists \(C_{\epsilon}>0\) such that
\[\left|l_{0}(\xi)\right|^{2}\leqslant\varepsilon d(\xi)+C_{\varepsilon}h(\xi) \text{ for all }x\in M \tag{37}\]
where \(d(\xi)\) is the energy density, \(e(t)=\frac{1}{2}\int_{M}d(\xi)dM\), and \(h(\xi)\) involves partial derivatives only up to order 1.
## 2. The stabilization result
Let \(\xi\) be the displacement field of the Naghdi shell. The total energy of the model is
\[E(t)=\frac{1}{2}\left\|\xi_{t}\right\|_{L^{2}(M)}^{2}+\frac{1}{2}\tilde{J}(\xi,\xi) \tag{38}\]
where \(\tilde{J}\) is given as in (23). In order to obtain the stabilization result for the solutions of model (24) we will consider the following assumption on \(a(x)\):

**Assumption 1.** \(a:M\longmapsto\mathbb{R}^{+}\) is a nonnegative real valued function with support in a small interior region of \(M\).
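The role of Assumption 1 is transparent in the formal energy identity below (a standard computation, assuming solutions regular enough to justify the integration by parts).

```latex
% Take the L^2 inner product of (24) with \xi_t; using the symmetry of
% \tilde{J}, Green's formula (23) and the clamped boundary condition,
%   \frac{d}{dt}E(t)
%     = (\xi_{tt},\xi_t)_{L^2} + \tilde{J}(\xi,\xi_t)
%     = -(a(x)\xi_t,\xi_t)_{L^2}
%     = -\int_M a(x)\,|\xi_t|^2\,dM \;\le\; 0,
% so the energy (38) is nonincreasing: a(x) \ge 0 is exactly the condition
% that makes the perturbation dissipative.
```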
**Theorem A.**_Consider the solution of problem (24) (with \(\Gamma_{0}=\Gamma\)) and initial data \(\xi_{0}\in Z=\left[\mathbf{H}^{1}(M,\Lambda)\right]^{2}\times\left[\mathbf{H}^{1}(M)\right]^{2}\), \(\xi_{1}\in Y=\left[L^{2}(M,\Lambda)\right]^{2}\times\left[L^{2}(M)\right]^{2}\). Assume that condition (30) holds. Then the identity:_
\[\begin{split}&\int_{0}^{T}\int_{\Gamma}\left[2\partial(A\xi,m(\xi))+\left(\left|\xi_{t}\right|^{2}-J(\xi,\xi)\right)\left\langle V,\eta\right\rangle\right]d\Gamma dt\\ &\quad=2\left(\xi_{t},m(\xi)\right)_{L^{2}}\Big|_{0}^{T}+2\int_{0}^{T}\int_{M}v\left[\left|\xi_{t}\right|^{2}-J(\xi,\xi)\right]dMdt\\ &\quad+2\int_{0}^{T}\int_{M}K(\xi,\xi)dMdt-\int_{0}^{T}\left(a(x)\xi_{t},2m(\xi)\right)dt+l_{0}(\xi)\end{split} \tag{39}\]

_holds. Here \(m(\xi)\) is as in (31) and \(K\) is given in (35). Furthermore, if we consider any smooth function \(p:M\longrightarrow\mathbb{R}\), then the identity_
\[\begin{split}\int_{0}^{T}\int_{\Gamma}\partial(A\xi,p\xi)d\Gamma dt&=\int_{0}^{T}\int_{M}p\left[J(\xi,\xi)-\left|\xi_{t}\right|^{2}\right]dMdt\\ &-\int_{0}^{T}\left(a(x)\xi_{t},p\xi\right)_{L^{2}}dt+l_{0}(\xi)\end{split} \tag{40}\]
holds.
**Proof:** As before, we can use (32) and (23) to obtain from equation (24) the identity

\[\left\langle\xi_{tt},2m(\xi)\right\rangle_{L^{2}(M,\Lambda)}+\tilde{J}(\xi,2m(\xi))-\int_{\Gamma}\partial(A\xi,2m(\xi))d\Gamma=-\left\langle a(x)\xi_{t},2m(\xi)\right\rangle. \tag{41}\]
Using (30) and (31), after taking the inner product, integrating over \(M\) and using Green's formula, we obtain
\[-2\left\langle\xi_{t},m\left(\xi_{t}\right)\right\rangle_{L^{2}} =2\int_{M}v\left|\xi_{t}\right|^{2}dM-\int_{\Gamma=\partial M} \left|\xi_{t}\right|^{2}\left\langle V,\eta\right\rangle d\Gamma \tag{42}\]
Substitution of (42) into (41) gives us
\[\left\langle\xi_{tt},2m(\xi)\right\rangle_{L^{2}} =2\frac{\partial}{\partial t}\left\langle\xi_{t},m(\xi)\right\rangle _{L^{2}}+2\int_{M}v\left|\xi_{t}\right|^{2}dM\] \[-\int_{\Gamma=\partial M}\left|\xi_{t}\right|^{2}\left\langle V, \eta\right\rangle d\Gamma \tag{43}\]
Using Green formula for operator \(A\) and relation (34) we deduce
\[\left\langle A\xi,2m(\xi)\right\rangle_{L^{2}} =\int_{\Gamma=\partial M}[\boldsymbol{J}(\xi,\xi)\langle V,\eta \rangle-2\partial(\boldsymbol{A}\xi,m(\xi))]d\Gamma\] \[-2\int_{M}v\boldsymbol{J}(\xi,\xi)dM+2\int_{M}K(\xi,\xi)dM+l_{0}(\xi) \tag{44}\]
Combining identities (43) and (44) gives (39); identity (40) follows in a similar way using the multiplier \(p\xi\). We need some geometric hypotheses in order to obtain the desired stabilization result in the case of localized internal dissipation.
**Definition B**.: _Let \(V\) be a vector field on \(M\), that is, \(V\in\chi(M)\). We say that \(V\) is an escape vector field for the Naghdi shell if the following conditions are satisfied:_

* _There exists a function_ \(v\) _on_ \(M\) _such that_ \[DV(X,X)=v(x)|X|^{2}\] _for all_ \(X\in T_{x}(M),x\in M\)_;_
* _Let_ \(\varepsilon(x)\) _denote the volume element of the middle surface_ \(M\)_, and consider_ \[l(x)=\frac{\left\langle DV,\varepsilon\right\rangle}{2}\quad\text{for }x\in M.\] _The functions_ \(v(x)\) _and_ \(l(x)\) _are assumed to satisfy the inequality_ \[2\min_{x\in M}v(x)>\lambda_{0}(1+2\beta)\max_{x\in M}\left|l(x)\right|\] _where_ \(\lambda_{0}\geqslant 1\) _is the constant in (28) and_ \(\beta=\frac{\mu}{1-2\mu}\)_._
## 3. Some comments on escape vector fields
1. It is well known that on a 2-dimensional middle surface \(M\) there always exists a vector field \(V\) satisfying assumption (a). Obtaining assumption (b) may be the difficult part. It is known that a necessary condition for (b) to hold is that there are no closed geodesics inside the middle surface \(M\).
2. Condition (b) says, in a sense, that the function \(l(x)\) measures the failure of symmetry of the covariant differential \(DV\). In fact, if \(DV\) is symmetric then \(l(x)=0\) for all \(x\in M\) (a worked flat example is given after this list).
3. In our case \(M\) is an oriented Riemannian manifold with \(\dim M=2\). Let \(\{e_{1},e_{2}\}\in T_{x}(M)\) be linearly independent and let \(\varepsilon\) be the differential form of degree 2 defined by \[\varepsilon\left(e_{1},e_{2}\right)(x)=\pm\sqrt{\det\left(\langle e_{i},e_{j}\rangle\right)}=\text{oriented volume of }\{e_{1},e_{2}\},\] \(x\in M\). The oriented volume carries the sign \(+\) or \(-\) depending on whether or not the basis \(\{e_{1},e_{2}\}\) belongs to the orientation of \(M\). \(\varepsilon=\varepsilon(x)\) is called the volume element of \(M\). Some texts define the volume element as the 2-form on \(M\) such that \(|\varepsilon(e_{1},e_{2})|=1\) for any orthonormal frame field \(\{e_{1},e_{2}\}\). In Chapter 4 of [35] several examples are presented where the construction of escape vector fields for shallow shells can be ensured.
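The following worked flat example (ours, for illustration only) shows conditions (a) and (b) in the simplest situation; on a genuinely curved middle surface the construction is more delicate (see Chapter 4 of [35]).

```latex
% Flat example: M a bounded domain of R^2, V(x) = x - x_0. Then
% \nabla_X V = X for every X, so
%   DV(X,X) = \langle \nabla_X V, X \rangle = |X|^2,
% and condition (a) holds with v \equiv 1. Since DV = g is symmetric,
%   l(x) = \tfrac{1}{2}\langle DV, \varepsilon \rangle \equiv 0,
% and condition (b) reduces to 2 > 0, which holds trivially.
% By contrast, the rotation field V(x,y) = (-y, x) has DV antisymmetric,
% so v \equiv 0 while l \equiv \pm 1 (depending on orientation), and
% condition (b) fails.
```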
Next, we define the notion of escape region, a piece \(\overline{M}\) of the middle surface \(M\), which will be convenient in order to obtain the desired stabilization result. We are interested in the case where \(\overline{M}\) is not the whole \(M\) and is as small as possible (in some sense). This notion was used by several authors in related work (see [35], [30], and references therein).

**Definition C**.: _A region \(G\subset\Omega\) is called an escape region for the Naghdi shell if:_

1. _There is a finite number of sub-regions_ \(\{\Omega_{i}\}_{i=1}^{J}\)_, with boundaries_ \(\Gamma_{i}\)_,_ \(J\) _a positive integer, such that_ \[\Omega_{i}\cap\Omega_{j}=\emptyset\quad\text{for all}\quad 1\leq i<j\leq J;\]
2. _For each_ \(\Omega_{i}\) _there is a vector field_ \(V_{i}\) _and a function_ \(v_{i}\) _such that_ \[DV_{i}(X,X)=v_{i}(x)|X|^{2}\quad\text{on}\quad\Omega_{i},\] \[2\min_{x\in\Omega_{i}}v_{i}(x)>\lambda_{0}(1+2\beta)\max_{x\in\Omega_{i}}\frac{|l_{i}(x)|}{2},\] _where_ \(l_{i}(x)=\frac{\langle DV_{i},E\rangle}{2}\) _for all_ \(1\leq i\leq J\)_;_
3. \[G\supset\bar{\Omega}\cap N_{\epsilon}\left[\cup_{i=1}^{J}\Gamma_{i0}\cup\left(\Omega\setminus\cup_{i=1}^{J}\Omega_{i}\right)\right]\] _where_ \(\epsilon>0\) _is small and:_ \[N_{\epsilon}(S)=\cup_{x\in S}\left\{y\in\Omega:d_{g}(y,x)<\epsilon\right\}\quad\text{for}\quad S\subset\Omega,\] \[\Gamma_{i0}=\left\{x\in\Gamma_{i}:\langle V_{i}(x),\nu_{i}(x)\rangle>0\right\},\] \(\nu_{i}\) _being the outward normal to_ \(\Omega_{i}\)_._
In general, there is no escape vector field defined over the entire middle surface \(\Omega\). However, such fields can be defined on small geodesic balls. Then, considering \(\Omega=\cup_{n\in I\!\!N}B(x_{n},\delta)\) with \(x_{n}\in\Omega\) and \(\delta>0\) small enough, an escape vector field can be defined on each \(B(x_{n},\delta)\). Then \(\mu(\Omega)=\lim_{k\to\infty}\sum_{n=1}^{k}\mu(B(x_{n},\delta))\), where \(\mu\) is the two-dimensional Lebesgue measure on the surface \(\Omega\). So, given \(\epsilon>0\), there is \(N\in I\!\!N\) big enough that

\[\sum_{n=N+1}^{\infty}\mu(B(x_{n},\delta))<\epsilon.\]

Then, considering \(\Omega_{i}=B(x_{i},\delta)\) with \(1\leq i\leq N\), we have proved the following.
**Theorem D**.: _For \(\epsilon>0\) given, an escape region can be chosen \(G\subset\bar{\Omega}\) such that_
\[\mu(G)<\epsilon\]
_where \(\mu(G)\) is the two-dimensional Lebesgue measure of \(G\)_
Now we will give some examples.
**Example 3.1**.: _In the case of an escape vector field \(V\) defined on all of \(\Omega\), in Definition (C) we have \(J=1\). By condition (3), an escape region is supported in a neighborhood of the boundary region \(\Gamma_{0}\), where_

\[\Gamma_{0}=\left\{x\in\partial\Omega:\left\langle V(x),\nu(x)\right\rangle>0\right\}.\]

_This escape region was already used by many authors [11], [6], [25]. The escape field considered, in the case of \(I\!\!R^{n}\), was \(V=x-x_{0}\)._

**Example 3.2**.: _Consider now_

\[\Omega=C=\left\{x=(x_{1},x_{2},x_{3})\in I\!\!R^{3}/x_{1}^{2}+x_{2}^{2}=1,\quad-1\leq x_{3}\leq 1\right\},\]

_the bounded cylinder in \(I\!\!R^{3}\). It is known that it is not possible to define an escape vector field over all of \(\Omega\). To construct an escape region, let \(x_{0}=(x_{01},x_{02},x_{03})\in C\) with \(x_{03}=0\), and let \(L_{0}\) be the generating line containing \(x_{0}\). Let \(x_{1}\in C\) be the antipode of \(x_{0}\). Since the interior of the cut-locus of \(x_{1}\) is \(C\setminus L_{0}\), there exists an escape vector field defined over \(C\setminus L_{0}\). Thus, an escape region for \(\Omega\) is supported in a neighborhood of the boundary of \(\Omega\) and of \(L_{0}\)._
## 4. Naghdi shell stabilization with internal dissipation
To continue with the resolution of the problem of the exponential decay of energy, we need the following lemmas.
**Lemma E**.: _Let \(V\in\chi(\Omega)\) satisfy the first condition of Definition (B). Then the tensor field \(DV\) can be decomposed as_

\[DV=v(x)g+l(x)E\quad\text{for}\quad x\in\Omega.\]

Proof.: We decompose \(DV\) into its symmetric and antisymmetric parts,
\[DV=\frac{1}{2}\left(DV+D^{*}V\right)+\frac{1}{2}\left(DV-D^{*}V\right) \tag{45}\]
Given that \(\frac{1}{2}\left(DV-D^{*}V\right)\) is an antisymmetric 2-form and \(\Omega\) is 2-dimensional, there is a function \(q\) such that

\[\frac{1}{2}\left(DV-D^{*}V\right)=q(x)E\quad\text{for}\quad x\in\Omega, \tag{46}\]

because the space of antisymmetric 2-forms over a 2-dimensional space is 1-dimensional. Substituting (46) into the expression for \(l(x)\), we have

\[\begin{split} l(x)&=\frac{1}{2}\left\langle DV,E\right\rangle=\frac{1}{2}\left\langle 2q(x)E+D^{*}V,E\right\rangle\\ &=\left\langle q(x)E,E\right\rangle+\frac{1}{2}\left\langle D^{*}V,E\right\rangle=2q(x)+\frac{1}{2}\left\langle D^{*}V,E\right\rangle\\ &=2q(x)-\frac{1}{2}\left\langle DV,E\right\rangle\\ &=2q(x)-l(x),\end{split}\]

from which,

\[l(x)=q(x). \tag{47}\]

Substituting (47) into (46), and using that the symmetric part \(\frac{1}{2}(DV+D^{*}V)\) equals \(v(x)g\) by the first condition of Definition (B), we obtain the result.
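As a sanity check of Lemma E (our illustration, not part of the proof), consider the flat case:

```latex
% On a planar domain take V(x,y) = (ax - by, bx + ay) with a, b constants.
% In the standard frame, \nabla_{e_1}V = (a, b) and \nabla_{e_2}V = (-b, a), so
%   DV = \begin{pmatrix} a & -b \\ b & a \end{pmatrix} = a\,g + b\,E.
% Indeed DV(X,X) = a|X|^2 gives v \equiv a, while
%   l = \tfrac{1}{2}\langle DV, E \rangle = \tfrac{1}{2}(2b) = b,
% in agreement with the decomposition DV = v(x)g + l(x)E.
```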
We fix one more piece of notation. Given \(W\in\chi(\Omega)\) and \(T\in T^{2}(\Omega)\), let \(G(W,T)\in T^{2}(\Omega)\) be given by
\[G(W,T)=\frac{1}{2}\left[T(.,\nabla.W)+T^{*}(.,\nabla.W)\right] \tag{48}\]
Now we prove some necessary lemmas.
**Lemma F**.: _There exists a constant \(c>0\) such that_
\[||DW+D^{*}W||_{L^{2}(\Omega,T^{2})}\geq c||W||_{H^{1}(\Omega,\Lambda)}\quad \forall W\in H^{1}_{\Gamma_{0}}(\Omega,\Lambda) \tag{49}\]
We note that, for \(W\in H^{1}_{\Gamma_{0}}(\Omega,\Lambda)\),

\[\begin{split} b(S(W),S(W))&=\langle S(W),S(W)\rangle+\beta\left(\mathrm{Trac}(S(W))\right)^{2}\\ &=\frac{1}{4}|DW+D^{*}W|^{2}+\beta\left(\mathrm{Trac}(S(W))\right)^{2}.\end{split}\]

Then,

\[b(S(W),S(W))+|W|^{2}\geq\frac{1}{4}|DW+D^{*}W|^{2}.\]

Hence, integrating over \(\Omega\) and using Lemma F (49),

\[\begin{split}\int_{\Omega}\left[b(S(W),S(W))+|W|^{2}\right]dx&\geq\frac{1}{4}||DW+D^{*}W||^{2}_{L^{2}(\Omega,T^{2})}\geq\frac{c^{2}}{4}||W||^{2}_{H^{1}(\Omega,\Lambda)}\\ &\geq\frac{c^{2}}{4}||DW||^{2}_{L^{2}(\Omega,\Lambda)}\end{split} \tag{51}\]

with \(\lambda_{0}=\frac{4}{c^{2}}\).
**Lemma G**.: _Let \(V\) be an escape vector field for the Naghdi shell model and let \(G(V,DW)\in T^{2}(\Omega)\) be given by (48), for \(W\in H^{1}(\Omega,\Lambda)\). Then,_

\[\sigma_{1}\int_{\Omega}b\left(S(W),S(W)\right)dx\leq\int_{\Omega}b\left(S(W),G(V,DW)\right)dx+Lo(\xi),\]

_where \(\sigma_{1}=\min_{x\in\Omega}v(x)-\lambda_{0}(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}\)._
Proof.: Recalling that, for \(T_{1},T_{2}\in T^{2}(\Omega)\), we have:
\[b(T_{1},T_{2})=\langle T_{1},T_{2}\rangle+\beta\mathrm{Trac}T_{1}\mathrm{Trac }T_{2}.\]
Then,
\[b(S(W),G(V,DW))=\langle S(W),G(V,DW)\rangle+\beta\,\mathrm{Trac}(S(W))\,\mathrm{Trac}(G(V,DW)) \tag{52}\]
Now let us estimate each term of (52). To this end, given \(x\in\Omega\), let \(\{e_{1},e_{2}\}\) be an orthonormal basis of \(T_{x}\Omega\) such that

\[DW(e_{1},e_{2})+D^{*}W(e_{1},e_{2})=0\quad\text{at}\quad x. \tag{53}\]

This is possible since the rank-2 tensor \(DW+D^{*}W\) is symmetric. It follows that
\[S(W)(e_{1},e_{2})=0. \tag{54}\]
Considering \(W_{ij}=DW(e_{i},e_{j})\), we have

\[\begin{split}\mathrm{Trac}(S(W))&=S(W)(e_{1},e_{1})+S(W)(e_{2},e_{2})\\ &=DW(e_{1},e_{1})+DW(e_{2},e_{2})\\ &=W_{11}+W_{22}\end{split} \tag{55}\]

\[\begin{split}\mathrm{Trac}\,G(V,DW)&=G(V,DW)(e_{1},e_{1})+G(V,DW)(e_{2},e_{2})\\ &=DW(e_{1},\nabla_{e_{1}}V)+DW(e_{2},\nabla_{e_{2}}V)\end{split} \tag{56}\]
Now, using Lemma E, we have

\[\begin{split}\nabla_{e_{1}}V&=\left\langle\nabla_{e_{1}}V,e_{1}\right\rangle e_{1}+\left\langle\nabla_{e_{1}}V,e_{2}\right\rangle e_{2}\\ &=DV(e_{1},e_{1})e_{1}+DV(e_{2},e_{1})e_{2}\\ &=v(x)e_{1}-l(x)e_{2}\end{split} \tag{57}\]

\[\begin{split}\nabla_{e_{2}}V&=\left\langle\nabla_{e_{2}}V,e_{1}\right\rangle e_{1}+\left\langle\nabla_{e_{2}}V,e_{2}\right\rangle e_{2}\\ &=DV(e_{1},e_{2})e_{1}+DV(e_{2},e_{2})e_{2}\\ &=l(x)e_{1}+v(x)e_{2}\end{split} \tag{58}\]

Substituting (57) and (58) into (56), we have
\[\mathrm{Trac}G(V,DW) = DW(e_{1},v(x)e_{1}-l(x)e_{2})+DW(e_{2},l(x)e_{1}+v(x)e_{2})\] \[= v(x)DW(e_{1},e_{1})-l(x)DW(e_{1},e_{2})+l(x)DW(e_{2},e_{1})+v(x) DW(e_{2},e_{2})\]
which, by (53), gives

\[\begin{split}\mathrm{Trac}\,G(V,DW)&=v(x)(W_{11}+W_{22})+2l(x)W_{21}\\ &=v(x)\mathrm{Trac}(DW)+2l(x)W_{21}\end{split} \tag{59}\]

Substituting (54), (55) and (59) into (52), we obtain:

\[\begin{split} b(S(W),G(V,DW))&=\langle S(W),G(V,DW)\rangle+\beta\,\mathrm{Trac}(DW)\left(v(x)\mathrm{Trac}(DW)+2l(x)W_{21}\right)\\ &=v(x)b(S(W),S(W))+(1+2\beta)l(x)\left(W_{11}+W_{22}\right)W_{21}\\ &\geq\min_{x\in\Omega}v(x)\,b(S(W),S(W))-(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}\,|DW|^{2}+Lo(\xi),\end{split}\]
Integrating this last inequality over \(\Omega\), we have:

\[\int_{\Omega}b(S(W),G(V,DW))dx\geq\min_{x\in\Omega}v(x)\int_{\Omega}b(S(W),S(W))dx-(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}\int_{\Omega}|DW|^{2}dx \tag{60}\]

Finally, using (51) in (60), we have:

\[\begin{split}\int_{\Omega}b(S(W),G(V,DW))dx&\geq\min_{x\in\Omega}v(x)\int_{\Omega}b(S(W),S(W))dx\\ &\quad-\lambda_{0}(1+2\beta)\max_{x\in\Omega}\frac{|l(x)|}{2}\int_{\Omega}b(S(W),S(W))dx+Lo(\xi)\\ &=\sigma_{1}\int_{\Omega}b(S(W),S(W))dx+Lo(\xi)\end{split}\]
**Lemma H**.: \[2\mathsf{B}(\xi,m(\xi))=\int_{\Gamma}\mathbf{B}(\xi,\xi)\left\langle V,\nu\right\rangle d\Gamma-2\int_{\Omega}v\mathbf{B}(\xi,\xi)dx+2\int_{\Omega}e(\xi,\xi)dx+Lo(\xi)\]
where \(\mathsf{B}\) is the bilinear form given in (15) and
\[e(\xi,\xi)=2b(S(W_{1}),G(V,DW_{1}))+2b(S(W_{2}),G(V,DW_{2}))+4v|\varphi(\xi)|^ {2}+v|Dw_{2}|^{2}\]
Proof.: By formula (25) we have to estimate \(\Upsilon(m(\xi))\), \(\chi(m(\xi))\), \(\varphi(m(\xi))\) and \(\left\langle Dw_{2},D(V(w_{2}))\right\rangle\). We start with the first term,

\[\Upsilon(m(\xi))=\frac{1}{2}\left[D(\nabla_{V}W_{1})+D^{*}(\nabla_{V}W_{1})\right]+V(w_{1})\Pi \tag{61}\]

We will use the Bochner technique. Let \(x\in\Omega\) and let \(\{E_{i}\}_{i=1}^{2}\) be a normal frame at \(x\); we have:

\[\begin{split} D(\nabla_{V}W_{1})(E_{i},E_{j})&=E_{j}\left(\left\langle\nabla_{V}W_{1},E_{i}\right\rangle\right)\\ &=E_{j}\left(DW_{1}(E_{i},V)\right)=D^{2}W_{1}\left(E_{i},V,E_{j}\right)+DW_{1}(E_{i},\nabla_{E_{j}}V)\\ &=D^{2}W_{1}\left(E_{i},E_{j},V\right)+\left\langle R_{VE_{j}}W_{1},E_{i}\right\rangle+DW_{1}\left(E_{i},\nabla_{E_{j}}V\right)\\ &=\nabla_{V}DW_{1}(E_{i},E_{j})+R(V,E_{j},W_{1},E_{i})+DW_{1}(E_{i},\nabla_{E_{j}}V)\end{split}\]
Where from,
\[D(\nabla_{V}W_{1})=\nabla_{V}DW_{1}+R(V,.,W_{1},.)+DW_{1}(.,\nabla.V) \tag{62}\]
Similarly,
\[D^{*}(\nabla_{V}W_{1})=\nabla_{V}D^{*}W_{1}+R(V,.,W_{1},.)+DW_{1}(.,\nabla.V) \tag{63}\]
and,
\[V(w_{1}\Pi) = V(w_{1})\Pi+w_{1}\nabla_{V}\Pi\]
So
\[V(w_{1})\Pi=V(w_{1}\Pi)-w_{1}\nabla_{V}\Pi \tag{64}\]
Substituting (62), (63) and (64) into (61), we have
\[\Upsilon(m(\xi)) = \frac{1}{2}\left\{\nabla_{V}\left(DW_{1}+D^{*}W_{1}\right)+DW_{1 }(.,\nabla.V)+DW_{1}(\nabla.V,.)\right\} \tag{65}\] \[+ R(V,.,W_{1},.)+V(w_{1}\Pi)-w_{1}\nabla_{V}\Pi\] \[= \nabla_{V}\Upsilon(\xi)+G(V,DW_{1})+Lo(\xi)\]
where, \(Lo(\xi)=R(V,.,W_{1},.)-w_{1}\nabla_{V}\Pi\). Continuing with the following terms,
\[\chi(m(\xi))=\frac{1}{2}\left(D(\nabla_{V}W_{2})+D^{*}(\nabla_{V}W_{2})\right)+V (w_{2})\Pi-\sqrt{\gamma}\left(i(\nabla_{V}W_{1})D\Pi-V(w_{1})c\right) \tag{66}\]
Now, estimating the terms of (66),
\[\nabla_{V}\left(i(W_{1})D\Pi\right)(E_{i},E_{j}) = D\left(i(W_{1})D\Pi\right)(E_{i},E_{j},V)=V\left(i(W_{1})D\Pi(E_{ i},E_{j})\right)\] \[= V\left(D\Pi(W_{1},E_{i},E_{j})\right)\] \[= D(D\Pi)(W_{1},E_{i},E_{j},V)+D\Pi(\nabla_{V}W_{1},E_{i},E_{j})\] \[= i(W_{1})\nabla_{V}D\Pi(E_{i},E_{j})+i(\nabla_{V}W_{1})D\Pi(E_{i},E_{j})\]
Therefore,
\[i(\nabla_{V}W_{1})D\Pi=\nabla_{V}\left(i(W_{1})D\Pi\right)-i(W_{1})\nabla_{V }D\Pi \tag{67}\]
Substituting (62), (63), (64) and (67) into (66), we have
\[\chi(m(\xi)) = \frac{1}{2}\left\{\nabla_{V}\left(DW_{2}+D^{*}W_{2}\right)+DW_{2 }(.,\nabla.V)+DW_{2}(\nabla.V,.)\right\} \tag{68}\] \[+ R(V.,.,W_{2},.)+V(w_{2}\Pi)-w_{2}\nabla_{V}\Pi-\sqrt{\gamma} \nabla_{V}(i(W_{1})D\Pi)-\sqrt{\gamma}i(W_{1})\nabla_{V}D\Pi\] \[- \sqrt{\gamma}V(w_{1}c)-\sqrt{\gamma}w_{1}\nabla_{V}c\] \[= \nabla_{V}\chi(\xi)+G(V,DW_{2})+Lo(\xi)\]
Continuing with \(\varphi(m(\xi))\),
\[\varphi(m(\xi))=\frac{1}{2}D(V(w_{1}))-i(\nabla_{V}W_{1})\Pi+\frac{1}{\sqrt{ \gamma}}\nabla_{V}W_{2} \tag{69}\]
Estimating the terms of (69),
\[\left\langle D(V(w_{1})),E_{i}\right\rangle = E_{i}(V(w_{1}))=E_{i}(\left\langle Dw_{1},V\right\rangle)\] \[= \left\langle\nabla_{E_{i}}Dw_{1},V\right\rangle+\left\langle Dw_ {1},\nabla_{E_{i}}V\right\rangle\] \[= \left\langle\nabla_{V}Dw_{1},E_{i}\right\rangle+Dw_{1}\left( \nabla_{E_{i}}V\right)\]
Therefore,
\[D(V(w_{1}))=\nabla_{V}Dw_{1}+Dw_{1}\left(\nabla.V\right) \tag{70}\]
Continuing,

\[\begin{split}\left\langle\nabla_{V}\left(i(W_{1})\Pi\right),E_{i}\right\rangle&=D(i(W_{1})\Pi)(E_{i},V)=V(i(W_{1})\Pi(E_{i}))=V(\Pi(W_{1},E_{i}))\\ &=D\Pi(W_{1},E_{i},V)+\Pi(\nabla_{V}W_{1},E_{i})\\ &=\nabla_{V}\Pi(W_{1},E_{i})+\left\langle i(\nabla_{V}W_{1})\Pi,E_{i}\right\rangle\\ &=\left\langle i(W_{1})\nabla_{V}\Pi,E_{i}\right\rangle+\left\langle i(\nabla_{V}W_{1})\Pi,E_{i}\right\rangle\end{split}\]
Therefore,
\[i(\nabla_{V}W_{1})\Pi=\nabla_{V}\left(i(W_{1})\Pi\right)-i(W_{1})\nabla_{V}\Pi \tag{71}\]
Substituting (70) and (71) into (69), we have
\[\varphi(m(\xi)) = \frac{1}{2}\left(\nabla_{V}Dw_{1}+Dw_{1}\left(\nabla.V\right) \right)-\nabla_{V}\left(i(W_{1})\Pi\right)+i(W_{1})\nabla_{V}\Pi+\frac{1}{ \sqrt{\gamma}}\nabla_{V}W_{2}\] \[= \nabla_{V}\varphi(\xi)+\varphi(\xi)\left(\nabla.V\right)+Lo(\xi)\]
Now, writing equation (16) with \(u=m(\xi)\), we have

\[\begin{split}\mathbf{B}(\xi,m(\xi))&=2\left\langle\Upsilon(\xi),\Upsilon(m(\xi))\right\rangle+4\left\langle\varphi(\xi),\varphi(m(\xi))\right\rangle\\ &+2\beta\left(\mathrm{Trac}(\Upsilon(\xi))+\frac{w_{2}}{\sqrt{\gamma}}\right)\left(\mathrm{Trac}(\Upsilon(m(\xi)))+\frac{V(w_{2})}{\sqrt{\gamma}}\right)\\ &+2\left\langle\chi(\xi),\chi(m(\xi))\right\rangle+2\beta\,\mathrm{Trac}(\chi(\xi))\,\mathrm{Trac}(\chi(m(\xi)))\\ &+\left\langle Dw_{2},D(V(w_{2}))\right\rangle+\frac{1}{\gamma}w_{2}V(w_{2})\end{split} \tag{73}\]

Using (65), (68) and (72), let us estimate each term of (73).

\[\begin{split} 2\left\langle\varphi(\xi),\varphi(m(\xi))\right\rangle&=2\left\langle\varphi(\xi),\nabla_{V}\varphi(\xi)+\varphi(\xi)\left(\nabla.V\right)+Lo(\xi)\right\rangle\\ &=2\left\langle\varphi(\xi),\nabla_{V}\varphi(\xi)\right\rangle+2\left\langle\varphi(\xi),\varphi(\xi)\left(\nabla.V\right)\right\rangle+Lo(\xi)\\ &=V(|\varphi(\xi)|^{2})+2\left\langle\varphi(\xi),\nabla_{\varphi(\xi)}V\right\rangle+Lo(\xi)\\ &=V(|\varphi(\xi)|^{2})+2DV(\varphi(\xi),\varphi(\xi))+Lo(\xi).\end{split}\]
Hence, using Definition B, we have
\[2\left\langle\varphi(\xi),\varphi(m(\xi))\right\rangle=V(|\varphi(\xi)|^{2})+2 v|\varphi(\xi)|^{2}+Lo(\xi) \tag{74}\]
Continuing with the terms of (73),
\[2\left\langle Dw_{2},D(V(w_{2}))\right\rangle = 2\left\langle Dw_{2},\nabla_{V}Dw_{2}+Dw_{2}\left(\nabla_{V} \right)\right\rangle \tag{75}\] \[= 2\left\langle Dw_{2},\nabla_{V}Dw_{2}\right\rangle+2\left\langle Dw _{2},\nabla_{Dw_{2}}V\right\rangle\] \[= V(|Dw_{2}|^{2})+2v|Dw_{2}|^{2}\]
and,
\[2\left\langle\Upsilon(\xi),\Upsilon(m(\xi))\right\rangle = \left\langle\Upsilon(\xi),\nabla_{V}\Upsilon(\xi)+G(V,DW_{1})+Lo( \xi)\right\rangle \tag{76}\] \[= V(|\Upsilon(\xi)|^{2})+2\left\langle\Upsilon(\xi),G(V,DW_{1}) \right\rangle+Lo(\xi)\]
Continuing with the rest of the terms,
\[\begin{split}&2\beta\left(\mathrm{Trac}\,\Upsilon(\xi)+\frac{w_{2}}{\sqrt{\gamma}}\right)\left(\mathrm{Trac}\,\Upsilon(m(\xi))+\frac{V(w_{2})}{\sqrt{\gamma}}\right)\\ &\quad=2\beta\,\mathrm{Trac}\,\Upsilon(\xi)\mathrm{Trac}\,\Upsilon(m(\xi))+\frac{2\beta}{\sqrt{\gamma}}\mathrm{Trac}\,\Upsilon(\xi)V(w_{2})+\frac{2\beta}{\sqrt{\gamma}}w_{2}\mathrm{Trac}\,\Upsilon(m(\xi))+\frac{\beta}{\gamma}V(w_{2}^{2})\\ &\quad=2\beta\,\mathrm{Trac}\,\Upsilon(\xi)\left(\mathrm{Trac}\,\nabla_{V}\Upsilon(\xi)+\mathrm{Trac}\,G(V,DW_{1})+Lo(\xi)\right)+\frac{2\beta}{\sqrt{\gamma}}\mathrm{Trac}\,\Upsilon(\xi)V(w_{2})\\ &\qquad+\frac{2\beta}{\sqrt{\gamma}}w_{2}\mathrm{Trac}\,\Upsilon(m(\xi))+\frac{\beta}{\gamma}V(w_{2}^{2})\\ &\quad=\beta V\left(\left(\mathrm{Trac}\,\Upsilon(\xi)+\frac{w_{2}}{\sqrt{\gamma}}\right)^{2}\right)+2\beta\,\mathrm{Trac}\,\Upsilon(\xi)\mathrm{Trac}\,G(V,DW_{1})+Lo(\xi)\end{split} \tag{77}\]
and,
\[2\left\langle\chi(\xi),\chi(m(\xi))\right\rangle = 2\left\langle\chi(\xi),\nabla_{V}\chi(\xi)+G(V,DW_{2})+Lo(\xi)\right\rangle\] \[= 2\left\langle\chi(\xi),\nabla_{V}\chi(\xi)\right\rangle+2\left\langle \chi(\xi),G(V,DW_{2})\right\rangle+Lo(\xi)\] \[= V\left(|\chi(\xi)|^{2}\right)+2\left\langle\chi(\xi),G(V,DW_{2} )\right\rangle+Lo(\xi)\]
and,
\[\begin{split} 2\beta\,\mathrm{Trac}(\chi(\xi))\mathrm{Trac}(\chi(m(\xi)))&=2\beta\,\mathrm{Trac}(\chi(\xi))\left(\mathrm{Trac}\,\nabla_{V}\chi(\xi)+\mathrm{Trac}\,G(V,DW_{2})+Lo(\xi)\right)\\ &=\beta V\left(\left(\mathrm{Trac}\,\chi(\xi)\right)^{2}\right)+2\beta\,\mathrm{Trac}\,\chi(\xi)\mathrm{Trac}\,G(V,DW_{2})+Lo(\xi)\end{split} \tag{79}\]
substituting (79), (78), (77), (76), (75) and (74) in (73), we have
\[\begin{split}\mathbf{B}(\xi,m(\xi))&=V\left(|\Upsilon(\xi)|^{2}\right)+2\left\langle\Upsilon(\xi),G(V,DW_{1})\right\rangle+2V\left(|\varphi(\xi)|^{2}\right)+4v|\varphi(\xi)|^{2}\\ &+\beta V\left(\left(\mbox{Trac}\,\Upsilon(\xi)+\frac{1}{\sqrt{\gamma}}w_{2}\right)^{2}\right)+2\beta\mbox{Trac}\,\Upsilon(\xi)\mbox{Trac}\,G(V,DW_{1})\\ &+V\left(|\chi(\xi)|^{2}\right)+2\left\langle\chi(\xi),G(V,DW_{2})\right\rangle+\beta V\left(\left(\mbox{Trac}\,\chi(\xi)\right)^{2}\right)\\ &+2\beta\mbox{Trac}\,\chi(\xi)\mbox{Trac}\,G(V,DW_{2})+\frac{1}{2}V\left(|Dw_{2}|^{2}\right)+v|Dw_{2}|^{2}+\frac{1}{2\gamma}V(w_{2}^{2})+Lo(\xi)\\ &=\frac{1}{2}V\left(2|\Upsilon(\xi)|^{2}+4|\varphi(\xi)|^{2}+2\beta\left(\mbox{Trac}\,\Upsilon(\xi)+\frac{1}{\sqrt{\gamma}}w_{2}\right)^{2}+2|\chi(\xi)|^{2}+2\beta\left(\mbox{Trac}\,\chi(\xi)\right)^{2}+|Dw_{2}|^{2}+\frac{1}{\gamma}w_{2}^{2}\right)\\ &+2\left\langle\Upsilon(\xi),G(V,DW_{1})\right\rangle+4v|\varphi(\xi)|^{2}+2\beta\mbox{Trac}\,\Upsilon(\xi)\mbox{Trac}\,G(V,DW_{1})\\ &+2\left\langle\chi(\xi),G(V,DW_{2})\right\rangle+2\beta\mbox{Trac}\,\chi(\xi)\mbox{Trac}\,G(V,DW_{2})+v|Dw_{2}|^{2}+Lo(\xi)\\ &=\frac{1}{2}V\left(B(\xi,\xi)\right)+2b\left(\Upsilon(\xi),G(V,DW_{1})\right)+2b\left(\chi(\xi),G(V,DW_{2})\right)\\ &+4v|\varphi(\xi)|^{2}+v|Dw_{2}|^{2}+Lo(\xi)\end{split}\]
Then,
\[\begin{split}2\mathbf{B}(\xi,m(\xi))&=\int_{\Omega}\left[V\left(B(\xi,\xi)\right)+4b\left(\Upsilon(\xi),G(V,DW_{1})\right)+4b\left(\chi(\xi),G(V,DW_{2})\right)\right]dx\\ &+\int_{\Omega}\left[8v|\varphi(\xi)|^{2}+2v|Dw_{2}|^{2}+Lo(\xi)\right]dx\end{split} \tag{80}\]
Now, writing \(V=\sum_{i=1}^{2}V_{i}E_{i}\), where \(\{E_{1},E_{2}\}\) is an orthonormal frame that varies with \(x\), we have
\[\begin{split}\int_{\Omega}V\left(B(\xi,\xi)\right)dx&=\sum_{i=1}^{2}\int_{\Omega}V_{i}E_{i}(B(\xi,\xi))dx\\ &=\sum_{i=1}^{2}\int_{\Omega}E_{i}\left(V_{i}B(\xi,\xi)\right)dx-\sum_{i=1}^{2}\int_{\Omega}E_{i}(V_{i})B(\xi,\xi)dx\\ &=\int_{\Omega}\text{div}\left(B(\xi,\xi)V\right)dx-\int_{\Omega}\text{div}(V)B(\xi,\xi)dx\\ &=\int_{\Gamma}B(\xi,\xi)\left\langle V,\nu\right\rangle d\Gamma-2\int_{\Omega}vB(\xi,\xi)dx\end{split} \tag{81}\]
In the last line of (81) we used (B). Indeed,
\[\text{div}(V)=\text{Trac}DV=\sum_{i=1}^{2}DV(E_{i},E_{i})=\sum_{i=1}^{2}v|E_{i} |^{2}=2v\]
Substituting (81) into (80), we have
\[2\mathsf{B}(\xi,m(\xi))=\int_{\Gamma}B(\xi,\xi)\left\langle V,\nu\right\rangle d\Gamma-2\int_{\Omega}vB(\xi,\xi)dx+2\int_{\Omega}e(\xi,\xi)dx+Lo(\xi)\]
with \(e(\xi,\xi)=2b\left(\Upsilon(\xi),G(V,DW_{1})\right)+2b\left(\chi(\xi),G(V,DW_{2})\right)+4v|\varphi(\xi)|^{2}+v|Dw_{2}|^{2}\), and the lemma is proved.
Another result we need is the following.
**Theorem I**.: _Let \(\xi=(W_{1},W_{2},w_{1},w_{2})\in\mathsf{H}^{1}(\Omega)\) be a solution of the problem_
\[\xi_{tt}+A\xi+a(x)\xi_{t}=0,\quad\text{in}\quad(0,T)\times\Omega \tag{82}\]
_Then_
\[\int_{\Sigma}\left[2\partial(A\xi,m(\xi))+\left(|\xi_{t}|^{2}-B( \xi,\xi)\right)\left\langle V,\nu\right\rangle\right]d\Sigma =2\left(\xi_{t},m(\xi)\right)|_{0}^{T}+2\int_{Q}v\left[|\xi_{t}|^ {2}-B(\xi,\xi)\right]dQ\] \[+2\int_{Q}e(\xi,\xi)dQ-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right)+ L(\xi) \tag{83}\]
_where \(\Sigma=(0,T)\times\Gamma\) and \(e(\xi,\xi)=2b(S(W_{1}),G(V,DW_{1}))+2b(S(W_{2}),G(V,DW_{2}))+4v|\varphi(\xi)|^{2}+v|Dw_{2}|^{2}\). \(L(\xi)\) denotes the lower-order terms with respect to the energy. Additionally, if \(p\) is a function on \(\Omega\), then_
\[\int_{\Sigma}\partial\left(A\xi,p\xi\right)d\Sigma=\int_{Q}p\left[B(\xi,\xi)-|\xi_{t}|^{2}\right]dQ-\int_{0}^{T}\left(a\xi_{t},p\xi\right)+L(\xi) \tag{84}\]
Proof.: Multiplying equation (82) by \(2m(\xi)\) and integrating over \(\Omega\), we have:
\[(\xi_{tt},2m(\xi))_{\mathsf{L}^{2}(\Omega)}+(A\xi,2m(\xi)) = -\left(a\xi_{t},2m(\xi)\right) \tag{85}\] \[(\xi_{tt},2m(\xi))_{\mathsf{L}^{2}(\Omega)}+B(\xi,2m(\xi))-\int_ {\Gamma}\partial\left(\xi,2m(\xi)\right) = -\left(a\xi_{t},2m(\xi)\right)\]
We estimate the first term on the left-hand side of (85); the second term was already estimated in Lemma (H).
\[\left(\xi_{tt},2m(\xi)\right)_{\mathtt{L}^{2}(\Omega)}=2\left[\left(\xi_{t},m( \xi)\right)_{\mathtt{L}^{2}(\Omega)}\right]_{t}-2\left(\xi_{t},m(\xi_{t})\right) _{\mathtt{L}^{2}(\Omega)} \tag{86}\]
and,
\[\begin{split}2\left(\xi_{t},m(\xi_{t})\right)_{\mathtt{L}^{2}(\Omega)}&=2\left(\left(W_{1t},W_{2t},w_{1t},w_{2t}\right),\left(\nabla_{V}W_{1t},\nabla_{V}W_{2t},V(w_{1t}),V(w_{2t})\right)\right)_{\mathtt{L}^{2}(\Omega)}\\ &=V\left(||W_{1t}||^{2}_{L^{2}(\Omega,\Lambda)}+||W_{2t}||^{2}_{L^{2}(\Omega,\Lambda)}+||w_{1t}||^{2}_{L^{2}(\Omega)}+||w_{2t}||^{2}_{L^{2}(\Omega)}\right)\\ &=V(||\xi_{t}||^{2}_{L^{2}(\Omega)})=\int_{\Omega}V(|\xi_{t}|^{2})dx=\sum_{i=1}^{2}\int_{\Omega}V_{i}E_{i}(|\xi_{t}|^{2})dx\\ &=\sum_{i=1}^{2}\int_{\Omega}E_{i}\left(V_{i}|\xi_{t}|^{2}\right)dx-\sum_{i=1}^{2}\int_{\Omega}E_{i}(V_{i})|\xi_{t}|^{2}dx\\ &=\int_{\Gamma}|\xi_{t}|^{2}\left\langle V,\nu\right\rangle d\Gamma-2\int_{\Omega}v|\xi_{t}|^{2}dx\end{split} \tag{87}\]
Substituting (87) into (86), we have
\[\left(\xi_{tt},2m(\xi)\right)_{\mathtt{L}^{2}(\Omega)}=2\left[\left(\xi_{t},m(\xi)\right)_{\mathtt{L}^{2}(\Omega)}\right]_{t}+2\int_{\Omega}v|\xi_{t}|^{2}dx-\int_{\Gamma}|\xi_{t}|^{2}\left\langle V,\nu\right\rangle d\Gamma \tag{88}\]
Substituting (88) and Lemma (H) into (85), we have:
\[\begin{split}2\left[\left(\xi_{t},m(\xi)\right)_{\mathtt{L}^{2}(\Omega)}\right]_{t}&+2\int_{\Omega}v|\xi_{t}|^{2}dx-\int_{\Gamma}|\xi_{t}|^{2}\left\langle V,\nu\right\rangle d\Gamma+\int_{\Gamma}B(\xi,\xi)\left\langle V,\nu\right\rangle d\Gamma\\ &-2\int_{\Omega}vB(\xi,\xi)dx+2\int_{\Omega}e(\xi,\xi)dx-\int_{\Gamma}\partial\left(\xi,2m(\xi)\right)d\Gamma+Lo(\xi)=\\ &-\left(a\xi_{t},2m(\xi)\right)\end{split} \tag{89}\]
Integrating (89) from \(0\) to \(T\), we have
\[\begin{split}2\left[\left(\xi_{t},m(\xi)\right)_{\mathtt{L}^{2}(\Omega)}\right]\Big{|}_{0}^{T}&+2\int_{Q}v|\xi_{t}|^{2}dQ-\int_{\Sigma}|\xi_{t}|^{2}\left\langle V,\nu\right\rangle d\Sigma+\int_{\Sigma}B(\xi,\xi)\left\langle V,\nu\right\rangle d\Sigma\\ &-2\int_{Q}vB(\xi,\xi)dQ+2\int_{Q}e(\xi,\xi)dQ-\int_{\Sigma}\partial\left(\xi,2m(\xi)\right)d\Sigma+Lo(\xi)=\\ &-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right)\end{split}\]
whence,
\[\begin{split}\int_{\Sigma}\left[2\partial(\xi,m(\xi))+\left(|\xi_{t}|^{2}-B(\xi,\xi)\right)\left\langle V,\nu\right\rangle\right]d\Sigma&=2\left[\left(\xi_{t},m(\xi)\right)_{\mathtt{L}^{2}(\Omega)}\right]\Big{|}_{0}^{T}+2\int_{Q}v\left[|\xi_{t}|^{2}-B(\xi,\xi)\right]dQ\\ &+2\int_{Q}e(\xi,\xi)dQ-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right)+Lo(\xi)\end{split}\]
This proves the theorem.
## 4. Stabilization

In this section we state and prove the main result, the stabilization of the evolution equation of the Naghdi model. We will need the following result.
**Theorem J**.: _Let \(V\) be an escape vector field for the Naghdi shell, and let \(\xi\) be a solution of the problem_
\[\xi_{tt}+A\xi+a(x)\xi_{t}=0. \tag{90}\]
_Then for \(T>0\),_
\[\int_{\Sigma}SBd\Sigma+\lambda_{0}\sigma_{0}\left[E(0)+E(T)\right]-\int_{0}^{T}\int_{\Omega}a\left\langle\xi,\xi_{t}\right\rangle\geq 2\sigma_{1}\int_{0}^{T}E(t)dt+L(\xi) \tag{91}\]
_where_
\[\begin{split} SB&=\partial\left(A\xi,2m(\xi)+\rho \xi\right)+\left[\mid\xi_{t}\mid^{2}-B(\xi,\xi)\right]\left\langle V,\nu\right\rangle \\ m(\xi)&=\left(\nabla_{V}W_{1},\nabla_{V}W_{2},V(w _{1}),V(w_{2})\right),\quad\rho=2v-\sigma_{1}\end{split} \tag{92}\]
Proof.: Taking \(p=\rho\) in identity (84) and adding the result to identity (83), we obtain
\[\begin{split}\int_{\Sigma}SBd\Sigma&=2\left(\xi_{t},m(\xi)\right)_{(L^{2}(\Omega))}\mid_{0}^{T}+\sigma_{1}\int_{Q}\left[\mid\xi_{t }\mid^{2}+B(\xi,\xi)\right]dQ+\\ &\qquad\qquad 2\int_{Q}e(\xi,\xi)dQ+L(\xi)\end{split} \tag{93}\]
and, from the expression of \(B\), we have
\[\begin{split} B(\xi,\xi)&=2b(S(W_{1}),S(W_{1}))+2\beta b(S(W_{2}),S(W_{2}))+\\ &\qquad 4\mid\varphi(\xi)\mid^{2}+\mid Dw_{2}\mid^{2}+L(\xi)\end{split} \tag{94}\]
Using Lemma (G), we have
\[\int_{Q}e(\xi,\xi)dQ\geq\sigma_{1}\int_{Q}B(\xi,\xi)dQ+L(\xi) \tag{95}\]
Using identity (94) and the coercivity of \(b\), we have
\[\begin{split} 2\left(\xi_{t},m(\xi)\right)_{L^{2}(\Omega)}& \leq\sigma_{0}\left[\parallel\xi_{t}\parallel_{L^{2}(\Omega)}^{2 }+\sum_{i=1}^{2}\left(\parallel DW_{i}\parallel_{L^{2}(\Omega,T^{2})}^{2}+ \parallel Dw_{i}\parallel_{L^{2}(\Omega,\Lambda)}^{2}\right)\right]\\ &\leq 2\lambda_{0}\sigma_{0}E(t)+L(\xi)\end{split} \tag{96}\]
Finally, substituting (96) and (95) into (93), we obtain inequality (91).
We now state and prove our main result, the stabilization of the Naghdi evolution model.
**Theorem K**.: _Let be the evolution equation for the Naghdi shell model, with internal dissipation_
\[\begin{split}\xi_{tt}+A\xi+a(x)\xi_{t}&=0\quad\text{in}\quad\Omega\times(0,T)\\ \xi&=0\quad\text{in}\quad\partial\Omega\times(0,T)\end{split} \tag{97}\]
_where the function \(a=a(x)\) is supported in an escape region \(w\subset\bar{\Omega}\). Let \(a_{0}>0\) be such that_
\[a(x)\geq a_{0}>0\quad\text{for all}\quad x\in w \tag{98}\]
_Then there exist constants \(c_{1},c_{2}>0\) such that_
\[E(t)\leq c_{1}E(0)e^{-c_{2}t} \tag{99}\]
_where \(E(t)\) is the total energy of the system (97)_
Proof.: Multiplying equation (97) by \(\xi_{t}\), integrating over \(\Omega\), and using the boundary conditions, we have
\[\frac{d}{dt}E(t)=-\int_{\Omega}a(x)\mid\xi_{t}\mid^{2}dx \tag{100}\]
whence, integrating over \((0,T)\), we have
\[E(T)=E(0)-\int_{Q}a(x)\mid\xi_{t}\mid^{2}dQ. \tag{101}\]
By (101) it is enough to prove that there exist \(T>0\) and \(C>0\), independent of the solutions of the problem (97), such that
\[E(T)\leq C\int_{Q}a\mid\xi_{t}\mid^{2}dQ,\]
since in this case, substituting into (101), we obtain
\[E(T)\leq\frac{C}{C+1}E(0) \tag{102}\]
We claim that (99) follows from (102). Indeed, note that (102) is equivalent to
\[E(T)\leq\gamma E(0) \tag{103}\]
where \(\gamma=\frac{C}{C+1}<1\). Since the system is invariant under time translations, (103) is valid on \([(m-1)T,mT]\), so
\[E(mT) \leq\gamma E((m-1)T)\] \[\leq\gamma^{2}E((m-2)T)\] \[\vdots\] \[\leq\gamma^{m}E(0)\] \[\leq e^{-\omega mT}E(0) \tag{104}\]
where \(\omega=\frac{1}{T}\ln(\frac{1}{\gamma})>0\). For arbitrary \(t>0\), there exists \(m=1,2,\dots\) such that \((m-1)T<t\leq mT\). Finally, using that the energy of the system is decreasing, we have:
\[E(t) \leq E((m-1)T)\] \[\leq e^{-\omega(m-1)T}E(0)\] \[\leq \frac{1}{\gamma}e^{-\omega mT}E(0)\] \[\leq \frac{1}{\gamma}e^{-\omega t}E(0)\]
This proves our claim; a quick numerical illustration of this decay argument is sketched below. Let us now prove (102).
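As a sanity check (not part of the proof), the following short script verifies that a contraction \(E(mT)\leq\gamma E((m-1)T)\), \(\gamma<1\), together with the monotonicity of \(E\), yields \(E(t)\leq\frac{1}{\gamma}e^{-\omega t}E(0)\) with \(\omega=\frac{1}{T}\ln(\frac{1}{\gamma})\); the values of \(T\), \(\gamma\) and \(E(0)\) are illustrative.

```python
import math

T, gamma, E0 = 1.0, 0.8, 5.0            # illustrative values, gamma < 1
omega = math.log(1.0 / gamma) / T       # decay rate from the proof

# Worst case compatible with the hypotheses: E is constant on each
# interval and drops by the factor gamma at the nodes mT.
def E(t):
    m = math.floor(t / T)               # number of completed intervals
    return E0 * gamma ** m

for t in [0.1, 0.9, 1.5, 3.7, 10.0]:
    bound = (1.0 / gamma) * math.exp(-omega * t) * E0
    assert E(t) <= bound + 1e-12
    print(f"t={t:5.1f}  E(t)={E(t):.4f}  bound={bound:.4f}")
```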
By the definition of escape fields and regions discussed above, we can assume that there are subsets \(\{\Omega_{i}\}_{i=1}^{N}\) of \(\Omega\) satisfying (B). Then the identities (92) and (84) can be used on each \(\Omega_{i}\), since escape vector fields are defined on each of them.
The idea is to estimate the total energy of the system, first inside the \(\Omega_{i}\), using the escape vector fields defined on them, and then on the complement, using the properties of the dissipation function \(a\).
Now, since (91) is valid only on the \(\Omega_{i}\), we first restrict ourselves to making estimates on them.
Let then \(0<\epsilon_{2}<\epsilon_{1}<\epsilon_{0}<\epsilon\) and set
\[V_{j}=N_{\epsilon_{j}}\left\{\cup_{i=1}^{N}\Gamma_{0}^{i}\cup\left(\Omega \setminus\cup_{i=1}^{N}\Omega_{i}\right)\right\},\quad j=0,1,2.\]
Note that \(V_{2}\subset V_{1}\subset V_{0}\subset\bar{V_{0}}\). Let
\[\phi^{i}=\left\{\begin{array}{ll}1,&\Omega_{i}\setminus V_{1},\;\;\;i=1,2,\ldots,N\\ 0,&V_{2}\end{array}\right.\]
Consider now \(V^{i}=\phi^{i}H^{i}\), \(p^{i}=\phi^{i}q\) and \(\xi^{i}=\phi^{i}\xi\), where \(H^{i}\) is an escape vector field on \(\Omega_{i}\) and \(q\) is a function defined on \(\Omega\). Then (91) holds on each \(\Omega_{i}\), and we have
\[2\sigma_{1}\int_{0}^{T}E^{i}(t)\leq\int_{\Sigma_{i}}SB_{i}d\Sigma_{i}+\lambda_ {0}\sigma_{0}\left[E^{i}(0)+E^{i}(T)\right]-\int_{0}^{T}\int_{\Omega_{i}}a \left\langle\xi^{i},\xi_{t}^{i}\right\rangle dxdt \tag{105}\]
where,
\[SB_{i}=\partial\left(A\xi^{i},2m(\xi^{i})+\rho\xi^{i}\right)+\left[\mid\xi_{t }^{i}\mid^{2}-B(\xi^{i},\xi^{i})\right]\left\langle V^{i},\nu\right\rangle \tag{106}\]
By the definition of \(V_{2}\) we have \(\Gamma_{0}^{i}\subset V_{2}\), and since \(\phi^{i}=0\) on \(V_{2}\), for \(x\in\Gamma_{0}^{i}\) the boundary terms vanish. Hence
\[SB_{i}=0\qquad\text{for}\quad x\in\Gamma_{0}^{i}. \tag{107}\]
If \(x\in\Omega_{i}\cap\Gamma\), using the boundary condition \(\xi=0\) in (106), we have
\[\begin{split}SB_{i}&=2\partial\left(A\xi^{i},m(\xi^{i})\right)-B(\xi^{i},\xi^{i})\left\langle V^{i},\nu\right\rangle\\ &=B(\xi^{i},\xi^{i})\left\langle V^{i},\nu\right\rangle\\ &\leq 0\end{split} \tag{108}\]
where we used the coercivity of \(B\). We conclude that
\[\int_{\Sigma_{i}}SB_{i}\leq 0\quad\text{for all}\quad i \tag{109}\]
Using estimates (107) and (109) in (105), we have
\[2\sigma_{1}\int_{0}^{T}\int_{\Omega_{i}}\mid\xi_{t}\mid^{2}+2\sigma_{1}\int_{0 }^{T}\int_{\Omega_{i}}B(\xi,\xi)\leq-\int_{0}^{T}\left(a\xi_{t},2m(\xi)\right) +\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+L(\xi)\]
whence,
\[\int_{0}^{T}\int_{\Omega_{i}\backslash V_{1}}B(\xi,\xi)\leq C_{1}\int_{0}^{T}\int_{\Omega_{i}\cap V_{1}}B(\xi,\xi)+C_{\beta}\int_{0}^{T}\int_{\Omega_{i}}\mid\xi\mid^{2}+\beta\int_{0}^{T}\int_{\Omega_{i}}B(\xi,\xi)dQ+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+L(\xi) \tag{110}\]
where \(\beta>0\) is small enough that \(\beta\int_{0}^{T}\int_{\Omega_{i}}B(\xi,\xi)dxdt\leq\int_{0}^{T}\int_{\Omega_{ i}\cap V_{1}}B(\xi,\xi)dxdt\).
Given that \(\Omega\subset(\cup\Omega_{i})\cup V_{1}\), then
\[\int_{0}^{T}\int_{\Omega\backslash V_{1}}B(\xi,\xi) \leq\sum_{i=1}^{N}\int_{0}^{T}\int_{\Omega_{i}\backslash V_{1}}B (\xi,\xi)\] \[\leq C_{2}\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dQ\ \ +C_{3}\int_{0}^{T}a\parallel\xi_{t}\parallel^{2}dt+\lambda_{0}\sigma_{0} \left[E(T)+E(0)\right]+L(\xi) \tag{111}\]
Now let us estimate on the complement of the union of the \(\Omega_{i}\). For this, let \(\psi\in C^{\infty}(I\!\!R^{3})\) be given by
\[\psi(x)=\begin{cases}0,&x\in I\!\!R^{3}\setminus V_{0}\\ 1,&x\in V_{1}\end{cases} \tag{112}\]
Taking \(p=\psi\) in (84), we have
\[\int_{Q_{i}}B(\xi,\xi)dQ_{i}=\int_{Q_{i}}\mid\xi_{t}\mid^{2}dQ_{i}+\int_{0}^{T }\left(a\xi_{t},\psi\xi\right)dt+L(\xi)\]
whence,
\[\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dQ+\int_{0}^{T}\int_{\Omega\cap V_{0}}B(\xi,\xi)dQ\leq\int_{0}^{T}\int_{\Omega\cap V_{0}}\parallel\xi_{t}\parallel^{2}dQ+\int_{0}^{T}\int_{\Omega\cap V_{0}}\left\langle a\xi_{t},\xi\right\rangle+L(\xi)\]
Since \(\epsilon_{0}<\epsilon\), we have \(w\supset\bar{\Omega}\cap V_{0}\). Using hypothesis (98) on \(a\) in \(w\), we have
\[\begin{split}\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dQ&\leq\int_{0}^{T}\int_{\Omega\cap V_{0}}\parallel\xi_{t}\parallel^{2}dQ+\int_{0}^{T}\int_{\Omega\cap V_{0}}\left\langle a\xi_{t},\xi\right\rangle+L(\xi)\\ &\leq\frac{1}{a_{0}}\int_{0}^{T}\int_{\Omega\cap V_{0}}a\mid\xi_{t}\mid^{2}dt+\beta\int_{0}^{T}\parallel\xi_{t}\parallel^{2}dt+L(\xi)\end{split}\]
where \(\beta\) will be chosen later. Thus, from (111) and the estimate above, we have
\[\int_{Q}B(\xi,\xi) =\int_{0}^{T}\int_{\Omega\backslash V_{1}}B(\xi,\xi)dxdt+\int_{0} ^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dxdt\] \[\leq C_{4}\int_{0}^{T}\int_{\Omega\cap V_{1}}B(\xi,\xi)dxdt+C_{5} \int_{0}^{T}a\parallel\xi_{t}\parallel^{2}dt+L(\xi)\] \[\leq C_{6}\int_{Q}a\mid\xi_{t}\mid^{2}dt+\beta\int_{0}^{T} \parallel\xi_{t}\parallel^{2}dt+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+L(\xi) \tag{113}\]
Now, taking \(p=\frac{1}{2}\) in (84), we have
\[\frac{1}{2}\int_{Q}\left[\mid\xi_{t}\mid^{2}-B(\xi,\xi)\right]dQ=\frac{1}{2}\int_{0}^{T}\left(a\xi_{t},\xi\right)dt+L(\xi)\leq C_{8}\int_{0}^{T}a\parallel\xi_{t}\parallel^{2}dt+L(\xi) \tag{114}\]
Combining (113) and (114), we have
\[\begin{split}\int_{0}^{T}E(t)dt&=\int_{Q}B(\xi,\xi)dQ+\frac{1}{2}\int_{Q}\left[\mid\xi_{t}\mid^{2}-B(\xi,\xi)\right]dQ\\ &\leq C\int_{Q}a\mid\xi_{t}\mid^{2}dQ+\beta\int_{0}^{T}\parallel\xi_{t}\parallel^{2}dt+\lambda_{0}\sigma_{0}\left[E(T)+E(0)\right]+C_{8}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+L(\xi)\\ &\leq C_{9}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+\beta\int_{0}^{T}E(t)dt+C_{9}\left[E(T)+E(0)\right]+L(\xi)\end{split} \tag{115}\]
Taking \(\beta=\frac{1}{2}\) and using that the energy is decreasing, that is, \(E(t)\geq E(T)\) for \(t\in[0,T]\), together with \(E(T)=E(0)-\int_{Q}a\mid\xi_{t}\mid^{2}dQ\), we have
\[\begin{split}\frac{1}{2}\int_{0}^{T}E(t)dt&\leq C_{9}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+C_{9}\left[E(T)+E(0)\right]+L(\xi)\\ \frac{T}{2}E(T)&\leq C_{9}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+\lambda_{0}\sigma_{0}\left[E(T)+E(T)+\int_{Q}a\mid\xi_{t}\mid^{2}dQ\right]+L(\xi)\\ \left(\frac{T}{2}-2\lambda_{0}\sigma_{0}\right)E(T)&\leq C_{10}\int_{Q}a\mid\xi_{t}\mid^{2}dQ+L(\xi)\end{split} \tag{116}\]
For \(T>4\lambda_{0}\sigma_{0}\), from (116) and the uniqueness-compactness argument [35], we have
\[E(T)\leq C\int_{Q}a\mid\xi_{t}\mid^{2}dQ\]
This proves the theorem.
## 5. Controllability via Stability
Russell's principle [32] provides a method for deriving exact controllability from a uniform stabilization result.
In this section we study the exact controllability of the Naghdi evolution system. For this, we apply Russell's principle using the energy decay result proved in Theorem K.
The exact controllability problem, with interior controls, consists of finding a vector function \(F=F(x,t)\), called a control, such that for some \(T>0\) the following problem has a solution:
\[\begin{cases}\eta_{tt}+A\eta=F(x,t)\quad\text{in}\quad\Omega\times(0,T)\\ \eta=0\quad\text{in}\quad\partial\Omega\times(0,T)\\ \eta(0)=\eta_{0},\quad\eta_{t}(0)=\eta_{1}\\ \eta(T)=\tilde{\eta}_{0},\quad\eta_{t}(T)=\tilde{\eta}_{1}\end{cases} \tag{117}\]
for any initial data \((\eta_{0},\eta_{1})\) and final data \((\tilde{\eta}_{0},\tilde{\eta}_{1})\) in the appropriate function spaces, with \(F\) acting on a subregion of \(\Omega\).
We will prove that the function \(F\) needs to act only on an arbitrarily small subregion of \(\Omega\), namely the escape region for the Naghdi shell.
We consider \(T>0\) such that
\[c_{1}e^{-c_{2}T}<1 \tag{118}\]
where \(c_{1}>0\) and \(c_{2}>0\) are the constants that appear in Theorem K.
In this section we will prove the following result:
**Theorem L**.: _Let \(\Omega\) be the middle surface of the Naghdi shell and \(w\subset\Omega\) the escape region given in the proof of Theorem K. Then, if \(T>0\) satisfies (118), the system (117) is exactly controllable at time \(T\) with controls supported in \(w\)._
Proof.: Since the system is linear and reversible in time, it is enough to consider controllability to zero, that is, \((\tilde{\eta}_{0},\tilde{\eta}_{1})=(0,0)\).
For \((\eta_{0},\eta_{1})\in V\times H\), there is a unique solution \(\eta\in C\left([0,\infty);V\right)\cap C^{1}\left([0,\infty);H\right)\) of the problem
\[\begin{cases}\eta_{tt}+A\eta+a(x)\eta_{t}=0\quad\text{in}\quad\Omega\times(0,T)\\ \eta=0\quad\text{in}\quad\partial\Omega\times(0,T)\\ \eta(0)=\eta_{0},\quad\eta_{t}(0)=\eta_{1}\end{cases} \tag{119}\]
Additionally, for the initial data \((-\eta(T),\eta_{t}(T))\) there is a unique solution of the problem
\[\begin{cases}\theta_{tt}+A\theta+a(x)\theta_{t}=0\quad\text{in}\quad\Omega\times(0,T)\\ \theta=0\quad\text{in}\quad\partial\Omega\times(0,T)\\ \theta(0)=-\eta(T),\quad\theta_{t}(0)=\eta_{t}(T)\end{cases} \tag{120}\]
where \(a\) acts on the escape region given in Theorem K.
Let us define \(\xi(x,t)=\eta(x,t)+\theta(x,T-t)\). The field \(\xi\) satisfies
\[\begin{cases}\xi_{tt}+A\xi=a(x)(\eta_{t}+\theta_{t})\quad\text{in}\quad\Omega\times(0,T)\\ \xi=0\quad\text{in}\quad\partial\Omega\times(0,T)\\ \xi(0)=\eta_{0}+\theta(T),\quad\xi_{t}(0)=\eta_{1}-\theta_{t}(T)\\ \xi(T)=0,\quad\xi_{t}(T)=0\end{cases} \tag{121}\]
From (121) we see that the initial data driven to equilibrium have the form
\[(\xi_{0},\xi_{1})=(\eta_{0}+\theta(T),\eta_{1}-\theta_{t}(T)) \tag{122}\]
Thus it is enough to prove that for each initial datum \((\xi_{0},\xi_{1})\in V\times H\) there exists \((\eta_{0},\eta_{1})\) satisfying (122). Equivalently, we show that the map
\[\begin{split} L:& V\times H\longrightarrow V\times H \\ (\eta_{0},\eta_{1})\longrightarrow(\eta_{0}+\theta(T),\eta_{1}-\theta_{t}(T)) \end{split} \tag{123}\]
is surjective. We write \(L=I-K\), where \(K\) is the map given by
\[K(\eta_{0},\eta_{1})=(-\theta(T),\theta_{t}(T))\]
it is enough to show that \(\parallel K\parallel_{V\times H}<1\). Applying Theorem K twice, we have
\[\begin{split}\parallel K(\eta_{0},\eta_{1})\parallel_{V\times H}&=\parallel(-\theta(T),\theta_{t}(T))\parallel_{V\times H}\\ &\leq c_{1}e^{-c_{2}T}\parallel(-\eta(T),\eta_{t}(T))\parallel_{V\times H}\\ &\leq c_{3}e^{-c_{4}T}\parallel(\eta_{0},\eta_{1})\parallel_{V\times H}.\end{split} \tag{124}\]
Choosing \(T>0\) such that \(c_{3}e^{-c_{4}T}<1\), we obtain \(\parallel K\parallel_{V\times H}<1\). Thus \(L=I-K\) is surjective, and Theorem L is proved.
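The surjectivity mechanism used here is the classical Neumann-series argument: when \(\parallel K\parallel<1\), \((I-K)^{-1}=\sum_{m\geq 0}K^{m}\) converges. A minimal finite-dimensional sketch of this fact, with a random matrix standing in for the damped solution map (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((6, 6))
K *= 0.5 / np.linalg.norm(K, 2)      # rescale so that ||K|| = 0.5 < 1
b = rng.standard_normal(6)           # stand-in for the target data (xi_0, xi_1)

# Neumann series: x = sum_m K^m b solves (I - K) x = b.
x, term = np.zeros(6), b.copy()
for _ in range(200):
    x += term
    term = K @ term

residual = np.linalg.norm((np.eye(6) - K) @ x - b)
print(f"residual = {residual:.2e}")  # tiny: L = I - K is invertible
```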
## 6. Conclusions
* Dissipation supported in an arbitrarily small region of the shell suffices to obtain stabilization and controllability.
* The existence of escape regions is verifiable for Naghdi shells; this implies that localized dissipation in the complement of the union of such regions yields stabilization and controllability.
|
2307.08585 | Identity-Preserving Aging of Face Images via Latent Diffusion Models | The performance of automated face recognition systems is inevitably impacted
by the facial aging process. However, high quality datasets of individuals
collected over several years are typically small in scale. In this work, we
propose, train, and validate the use of latent text-to-image diffusion models
for synthetically aging and de-aging face images. Our models succeed with
few-shot training, and have the added benefit of being controllable via
intuitive textual prompting. We observe high degrees of visual realism in the
generated images while maintaining biometric fidelity measured by commonly used
metrics. We evaluate our method on two benchmark datasets (CelebA and AgeDB)
and observe significant reduction (~44%) in the False Non-Match Rate compared
to existing state-of the-art baselines. | Sudipta Banerjee, Govind Mittal, Ameya Joshi, Chinmay Hegde, Nasir Memon | 2023-07-17T15:57:52Z | http://arxiv.org/abs/2307.08585v1 | # Identity-Preserving Aging of Face Images via Latent Diffusion Models
###### Abstract
The performance of automated face recognition systems is inevitably impacted by the facial aging process. However, high quality datasets of individuals collected over several years are typically small in scale. In this work, we propose, train, and validate the use of latent text-to-image diffusion models for synthetically aging and de-aging face images. Our models succeed with few-shot training, and have the added benefit of being controllable via intuitive textual prompting. We observe high degrees of visual realism in the generated images while maintaining biometric fidelity measured by commonly used metrics. We evaluate our method on two benchmark datasets (CelebA and AgeDB) and observe significant reduction (\(\sim 44\%\)) in the False Non-Match Rate compared to existing state-of-the-art baselines.
## 1 Introduction
**Motivation.** It is well known that facial aging can significantly degrade the performance of modern automated face recognition systems [16, 19, 29]. Improving the robustness of such systems to aging variations is therefore critical for their lasting practical use. However, building systems that are robust to aging variations requires high quality longitudinal datasets: images of a large number of individuals collected over several years. Collection of such data constitutes a major challenge in practice. Datasets such as MORPH [5] contain longitudinal samples of only 317 subjects out of a total of \(\sim\)13K subjects over a period of five years [8]. Other datasets like AgeDB [28] and CACD [10] contain unconstrained images with significant variations in pose, illumination, background, and expression.
An alternative approach to gathering longitudinal data is to digitally simulate face age progression [21]. Approaches include manual age-editing tools, such as YouCam Makeup, FaceApp, and AgingBooth [1, 13]; more recently, GAN-based generative models, such as AttGAN, Cafe-GAN, and Talk-to-Edit [17, 20, 23, 24, 37], have also been used to simulate age progression in face images. However, we find that generative models struggle to correctly model biological aging, which is a complex process affected by genetic, demographic, and environmental factors. Moreover, training high quality GANs for adjusting facial attributes itself requires a large amount of training data.
**Our Approach.** Existing generative models often struggle to manipulate the age attribute and preserve facial identity. They also require auxiliary age classifiers and/or extensive training data with longitudinal age variations. To address both of the above issues, we propose a new latent generative model for simulating high-quality facial aging, while simultaneously preserving biometric identity. The high level algorithmic idea is to finetune latent text-to-image diffusion models (such as Stable Diffusion [32]) with a novel combination of contrastive and biometric losses that help preserve facial identity. See Fig. 1 for an overview of our method.
The proposed method requires: (i) a pre-trained latent diffusion model (see Sec. 2), (ii) a small set (numbering \(\approx\) 20) of training face images of an individual, and (iii) a small auxiliary set (numbering \(\approx\) 600) of image-caption pairs. The pairs contain facial images of individuals and captions indicating their corresponding age. This auxiliary set of image-caption pairs serves as the regularization set. The individuals in the training set and the regularization set are disjoint. We use the training images during fine-tuning to learn the identity-specific information of the individual, and the regularization images with captions to learn the association between an image (face) and its caption (age). Finally, we simulate age regression and progression of the trained individual using a text prompt specifying the target age. See the details of our method in Sec. 3.
**Summary**. Our main contributions are as follows.
* We adapt latent diffusion models to perform age regression and progression in face images. We introduce two key ideas: an identity-preserving loss (in addition to perceptual loss), and a small regularization set of image-caption pairs to resolve the limitations posed by existing GAN-based methods.
* As a secondary finding, we show that face recognition classifiers may benefit by fine-tuning on generated images with significant age variations as indicated in [31].
* We conduct experiments on the CelebA and AgeDB datasets and perform evaluations to demonstrate that the synthesized images i) appear visually compelling in terms of aging and de-aging, as assessed through qualitative analysis and an automated age predictor, and ii) match the original subject according to human evaluators and an automated face matcher. We demonstrate that our method outperforms SOTA image editing methods, namely, IPCGAN [34], AttGAN [17] and Talk-to-Edit [20].
The rest of the paper is organized as follows. Sec. 2 outlines existing work. Sec. 3 describes the proposed method for simulating facial aging and de-aging. Sec. 4 describes the experimental settings. Sec. 5 presents our findings and analysis. Sec. 6 concludes the paper.
## 2 Related Work
Previous automated age progression models have used a variety of architectures, including recurrent ones [36] and GANs. [37] uses a hierarchy of discriminators to preserve the reconstruction details, age and identity. STGAN [24] utilizes selective transfer units that accept the difference between the target and source attribute vectors as input, resulting in more controlled manipulation of the attribute. Cafe-GAN [23] utilizes complementary attention features to focus on the regions pertinent to the target attribute while preserving the remaining details. HRFAE [38] encodes an input image to a set of age-invariant features and an age-specific modulation vector. The age-specific modulation vector re-weights the encoded features depending on the target age and then passes the result to a decoder unit that edits the image. CUSP [15] uses a custom structure preserving module that masks the irrelevant regions for better facial structure preservation in the generated images. The method performs style and content disentanglement while conditioning the generated image on the target age. ChildGAN [9] is inspired by the self-attention GAN and uses one-hot encodings of age labels and gender labels appended to the noise vector to perform age translation in images of young children.
We focus on three methods in our comparisons. IPCGAN [34] uses a conditional GAN with an identity preserving module and an age classifier to perform image-to-image style transfer for age-editing. AttGAN [17] performs binary facial attribute manipulation by modeling the relationship between the attributes and the latent representation of the face. The network enables high quality facial attribute editing while controlling the attribute intensity and style. Talk-to-Edit [20] provides fine-grained facial attribute editing via dialog interaction, similar to our approach. The method uses a language encoder to convert the user's request into an 'editing encoding' that encapsulates information about the
Figure 1: Overview of the proposed method. The proposed method needs a fixed _Regularization Set_ comprising facial images with age variations and a variable _Training Set_ comprising facial images of a target individual. The latent diffusion module (comprising a VAE, U-Net and CLIP-text encoder) learns the concept of age progression from the regularization images and the identity-specific information from the training images. We integrate biometric and contrastive losses in the network for identity preservation. At inference, the user prompts the trained model using a rare token associated with the trained target subject and the desired age to perform age editing.
degree and direction of change of the target attribute, and seeks user feedback to iteratively edit the desired attribute.
We also highlight two recent methods that also use diffusion models for face generation. In DCFace [21], the authors propose a dual condition synthetic face generator to allow control over simulating intra-class (within the same individual) and inter-class (across different individuals) variations. In [30], the authors explore suitable prompts for generating realistic faces using stable diffusion and investigate their quality. Neither method focuses on identity-preserving, text-guided facial aging and de-aging, which is our goal.
## 3 Our Proposed Method
Although a suite of age editing methods exists in the literature, as discussed above, the majority of them focus on perceptual quality instead of biometric quality. A subset of latent space manipulation methods struggle with 'real' face images and generate unrealistic outputs. Existing works reiterate that age progression is a smooth but non-deterministic process that requires incremental evolution to effectively transition between ages. This motivates the use of diffusion models, which naturally model the underlying data distribution by incrementally adding and removing noise. We start with a brief mathematical overview.
### Preliminaries
Denoising diffusion probabilistic models (DDPMs) [18] perform the following steps: 1) a forward diffusion process \(\mathbf{x}_{0}\rightarrow\mathbf{x}_{t}\) that incrementally adds Gaussian noise, \(\eta\), sampled from a normal distribution \(\mathcal{N}(\mathbf{0},\mathbf{I})\), to the clean data \(\mathbf{x}_{0}\) sampled from a real distribution \(p(\mathbf{x})\), over \(t\) time steps; and 2) a backward denoising process \(\mathbf{x}_{t}\rightarrow\mathbf{x}_{0}\) that incrementally removes the noise to recover the clean data.
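To make the forward process concrete, the following minimal NumPy sketch samples a noised latent from a clean one; the linear variance schedule and the latent shape are illustrative assumptions, not the exact Stable Diffusion settings.

```python
import numpy as np

def forward_diffuse(x0, t, alpha_bar, rng):
    """Sample z_t = sqrt(abar_t)*x0 + sqrt(1 - abar_t)*eta, with eta ~ N(0, I)."""
    eta = rng.standard_normal(x0.shape)
    a = alpha_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eta, eta

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 2e-2, 1000)  # illustrative linear schedule
alpha_bar = np.cumprod(1.0 - betas)    # cumulative product \bar{alpha}_t

x0 = rng.standard_normal((4, 64, 64))  # stand-in for a VAE latent
z_t, eta = forward_diffuse(x0, t=500, alpha_bar=alpha_bar, rng=rng)
# A denoiser f_theta(z_t, c) is then trained to recover x0 (equivalently eta).
```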
We augment the training objective (Eqn. 1) with a biometric loss between the ground-truth and generated images as follows.
\[\begin{split}&\mathbb{E}_{\mathbf{x},\mathbf{c},t}[w_{t}\|f_{\theta}(g_{t}( \mathbf{x}),c)-\mathbf{x}\|_{2}^{2}+\\ &\lambda w_{t^{\prime}}\|f_{\theta}(g_{t^{\prime}}(\mathbf{x^{\prime }}),c_{class})-\mathbf{x^{\prime}}\|_{2}^{2}+\\ &\lambda_{b}\mathcal{B}(f_{\theta}(g_{t}(\mathbf{x}),c_{class}),\bm {x})].\end{split} \tag{2}\]
We use this new loss to fine-tune the VAE. The third term in Eqn. 2 refers to the biometric loss, weighted by \(\lambda_{b}=0.1\), computed between the ground-truth image of the subject, \(\mathbf{x}\), and the generated image. Note that \(f_{\theta}(g_{t}(\mathbf{x}),c)\) uses the training set (_i.e_., images of an individual subject), whereas \(f_{\theta}(g_{t^{\prime}}(\mathbf{x^{\prime}}),c_{class})\) uses the regularization set that contains representative images of a class. Here, \(\mathcal{B}(\cdot,\cdot)\) computes the \(L_{1}\) distance between the biometric features extracted from a pair of images (close to zero for the same subject, with higher values for different subjects). We use a pre-trained VGGFace [4] feature extractor, such that
\[\mathcal{B}(i,j)=\left\|VGGFace(i)-VGGFace(j)\right\|_{1}.\]
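A minimal PyTorch sketch of this loss; `embed` stands in for the frozen, pre-trained VGGFace feature extractor, and the dummy network and tensor shapes below are placeholders, not the authors' exact implementation.

```python
import torch

def biometric_loss(embed, real_img, gen_img):
    """Batch-mean L1 distance between identity embeddings of real/generated pairs."""
    with torch.no_grad():           # the face embedder stays frozen
        f_real = embed(real_img)
    f_gen = embed(gen_img)          # gradients flow back toward the generator
    return (f_real - f_gen).abs().sum(dim=-1).mean()

# Dummy embedder standing in for VGGFace, just to make the sketch runnable.
embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
x_real = torch.randn(2, 3, 64, 64)
x_gen = torch.randn(2, 3, 64, 64, requires_grad=True)
biometric_loss(embed, x_real, x_gen).backward()
```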
Now, we turn to target-specific fine-tuning. The implementation used in our work [3, 14] uses a frozen VAE and a frozen text encoder while keeping the U-Net model unfrozen. The U-Net denoises the latent representation produced by the encoder of the VAE, \(g_{t}(\mathbf{x})=\mathbf{z}_{t}=\alpha_{t}\mathbf{x}+\sigma_{t}\eta\). Therefore, we apply an identity-preserving contrastive loss on the latent representation. We adopt the SimCLR [11] framework, which uses a normalized temperature-scaled cross-entropy loss between positive and negative pairs of augmented latent representations, denoted by \(\mathcal{S}(\cdot,\cdot)\) in Eqn. 3. We compute the contrastive loss between the latent representations of the noise-free inputs (\(\mathbf{z}_{0}\)) and the de-noised outputs (\(\mathbf{z}_{t}\)) with a weight term \(\lambda_{s}=0.1\) and a temperature value of 0.5. Refer to [11] for more details. The contrastive loss between the latent representations in the U-Net architecture enables us to fine-tune the diffusion model for each subject as follows.
\[\begin{split}&\mathbb{E}_{\mathbf{x},\mathbf{c},t}[w_{t}\|f_{\theta}(g_{t}( \mathbf{x}),c)-\mathbf{x}\|_{2}^{2}+\\ &\lambda w_{t^{\prime}}\|f_{\theta}(g_{t^{\prime}}(\mathbf{x^{\prime }}),c_{class})-\mathbf{x^{\prime}}\|_{2}^{2}+\lambda_{s}\mathcal{S}(\mathbf{z}_{t},\mathbf{ z}_{0})].\end{split} \tag{3}\]
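A minimal sketch of the NT-Xent term \(\mathcal{S}(\mathbf{z}_{t},\mathbf{z}_{0})\), following the SimCLR formulation with temperature 0.5; flattening the latents into row vectors is an assumption of the sketch.

```python
import torch
import torch.nn.functional as F

def nt_xent(z_a, z_b, tau=0.5):
    """NT-Xent loss: rows i of z_a and z_b form a positive pair, rest are negatives."""
    z = F.normalize(torch.cat([z_a, z_b], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / tau                                 # scaled cosine similarities
    sim.masked_fill_(torch.eye(len(z), dtype=torch.bool), float("-inf"))
    n = z_a.shape[0]
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])  # pair i <-> i+n
    return F.cross_entropy(sim, targets)

z0 = torch.randn(8, 4 * 32 * 32)  # flattened clean latents z_0
zt = torch.randn(8, 4 * 32 * 32)  # flattened denoised latents z_t
loss = nt_xent(zt, z0)
```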
In addition to customizing the losses, we use the regularization set to impart the concept of facial age progression and regression to the latent diffusion model. The regularization set contains representative images of a class, in our case, 'person'. A regularization set comprising face images selected from the internet would have sufficed if our goal were to generate realistic faces, as done in [30]. However, our task involves learning the concept of aging and de-aging, and then applying it to any individual. To accomplish this task, we use face images from different age groups and then pair them with one-word captions that indicate the age group of the person depicted in the image. The captions correspond to one of the six age groups: 'child', 'teenager', 'youngadults', 'middleaged', 'elderly', and 'old'. We could have used numbers as age groups, for example, twenties, forties or sixties, but we found that a language description is more suitable than a numeric identifier. Another reason for pairing these age descriptions with the images is that we can use the same age identifiers while prompting the diffusion model during inference (photo of a \(\langle\) token \(\rangle\)\(\langle\) class label \(\rangle\) as \(\langle\) age group \(\rangle\)). We use the following six prompts during inference: 1) photo of a sks person as child, 2) photo of a sks person as teenager, 3) photo of a sks person as youngadults, 4) photo of a sks person as middleaged, 5) photo of a sks person as elderly, and 6) photo of a sks person as old. We have explored other tokens as well (see Sec. 5.4).
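Assembling the inference prompts is mechanical; a trivial helper (the default token and class label are the ones used in this work):

```python
AGE_GROUPS = ["child", "teenager", "youngadults", "middleaged", "elderly", "old"]

def age_prompts(token="sks", class_label="person"):
    """Build the six prompts: 'photo of a <token> <class label> as <age group>'."""
    return [f"photo of a {token} {class_label} as {age}" for age in AGE_GROUPS]

for prompt in age_prompts():
    print(prompt)  # e.g., "photo of a sks person as child"
```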
## 4 Experiments
**Setup and implementation details.** We conduct experiments using DreamBooth implemented with Stable Diffusion v1.4 [3]. The model uses CLIP's [2] text encoder, trained on laion-aesthetics v2 5+, and a vector quantized VAE [35] to accomplish the task of age progression. The text encoder stays frozen while training the diffusion model. We use two datasets, namely, **CelebA** [27] and **AgeDB** [28]. We use 2,258 face images belonging to 100 subjects from the CelebA [27] dataset, and 659 images belonging to 100 subjects from the AgeDB dataset to form the 'training set'. CelebA does not contain age information, except a binary 'Young' attribute annotation, so we do not have ground-truth for evaluating the generated images synthesized from the CelebA dataset. On the other hand, the AgeDB dataset comprises images with exact age values. We then select the age group that has the highest number of images and use it as the training set, while the remaining images contribute towards the testing set. Therefore, 2,369 images serve as ground-truth for evaluation in the AgeDB dataset.
We use a regularization set comprising image-caption pairs, where each face image is associated with a caption indicating its corresponding age label. We use 612 images belonging to 375 subjects from the CelebA-Dialog [20] dataset, where the authors provide fine-grained annotations of age distributions. We convert the distributions to categorical labels to use as captions for the regularization images. We refer to them as {Child: \(<\)15 years, Teenager: 15-30 years, Youngadults: 30-40 years, Middleaged: 40-50 years, Elderly: 50-65 years and Old: \(>\)65 years}. In total, we use 612 (\(102\times 6\)) images in the subject-disjoint regularization set.
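The conversion from fine-grained ages to the categorical captions follows the bins above; a sketch (the treatment of boundary ages is an assumption):

```python
def age_to_caption(age: float) -> str:
    """Map an age in years onto the six regularization captions."""
    if age < 15:
        return "child"
    if age < 30:
        return "teenager"
    if age < 40:
        return "youngadults"
    if age < 50:
        return "middleaged"
    if age <= 65:
        return "elderly"
    return "old"

assert age_to_caption(12) == "child" and age_to_caption(70) == "old"
```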
The success of generating high quality images often depends on effectively prompting the diffusion model during inference. The text prompt at the time of inference needs a rare token/identifier that is associated with the concept learnt during fine-tuning. We use four different rare tokens {_wzx_, _sks_, _ams_, _ukj_} [6] in this work for brevity.
We use the implementation of DreamBooth with Stable Diffusion in [3] and use the following hyperparameters.
We adopt a learning rate = 1e-6, number of training steps \(=800\), embedding dimensionality in the autoencoder \(=4\), and batch size \(=8\). The generated images are of size \(512\times 512\). We use \(\lambda=1,\lambda_{b}=0.1\) and \(\lambda_{s}=0.1\) (refer to Eqns. 2 and 3). We generate 8 samples at inference. However, we perform a facial quality assessment using EQFace [26] to limit the number of generated face images to 4, such that each generated image contains a single face with frontal pose. We adopt a threshold of 0.4, and retain the generated images if their quality exceeds the threshold; otherwise, we discard them. Training each subject requires \(\sim\)5-8 mins. on an A100 GPU.
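The post-generation filtering step can be summarized as follows; `face_quality` is a placeholder for the EQFace scorer, and keeping the highest-scoring images is an assumption of the sketch:

```python
def filter_generated(images, face_quality, threshold=0.4, keep=4):
    """Keep up to `keep` generated images whose quality score exceeds `threshold`."""
    scored = sorted(((face_quality(img), img) for img in images),
                    key=lambda pair: pair[0], reverse=True)
    return [img for score, img in scored if score > threshold][:keep]

# Usage: kept = filter_generated(samples, eqface_score)  # 8 samples in, <= 4 out
```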
We perform **qualitative evaluation** of the generated images by conducting a user study involving 26 volunteers. The volunteers are shown a set of 10 face images (original) and then 10 generated sets; each set contains five images belonging to five age groups (excluding old), resulting in a total of 60 images. They are assigned two tasks: 1) identify the individual from the original set who appears most similar to the subject in the generated set; 2) assign each of the five generated images to the five age groups they are most likely to belong to. We compute the proportion of correct face recognition and age group assessment.
Further, we perform **quantitative evaluation** of the generated outputs using the ArcFace [12] matcher (different from VGGFace used in identity-preserving biometric loss). We utilize the genuine (intra-class) and imposter (inter-class) scores to compute Detection Error Trade-off (DET) curves and report the False Non-Match Rate (FNMR) at a False Match Rate (FMR) of 0.01% and 0.1%.
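These operating points follow directly from the two score distributions; a minimal sketch, assuming higher scores indicate better matches (toy Gaussian scores, not real matcher outputs):

```python
import numpy as np

def fnmr_at_fmr(genuine, impostor, fmr=1e-4):
    """FNMR at the decision threshold where the impostor FMR equals `fmr`."""
    thr = np.quantile(np.asarray(impostor), 1.0 - fmr)  # impostor acceptance = fmr
    return float(np.mean(np.asarray(genuine) < thr))    # rejected genuine pairs

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)    # toy intra-class scores
impostor = rng.normal(0.2, 0.1, 100_000)  # toy inter-class scores
print(fnmr_at_fmr(genuine, impostor, 1e-4))  # FNMR @ FMR = 0.01%
```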
## 5 Results
We report the biometric matching performance using the ArcFace matcher between **original and modified** images in Table 1 for the CelebA dataset. See examples of generated images in Fig. 2. In CelebA, we do not have access to ground-truths, so we perform biometric matching with disjoint samples of the subject not used in the training set. We refer to this as the 'simulation' result. We achieve the best biometric matching using the initial loss settings of latent diffusion (Eqn. 1). The biometric matching impacts the similarity
\begin{table}
\begin{tabular}{l||c c||c c} \hline \hline
**Age group** & \multicolumn{2}{c||}{**Initial loss**} & \multicolumn{2}{c}{**With contrastive loss**} \\ & sks & wzx & sks & wzx \\ \hline \hline child & 0.49/0.21 & 0.58/0.27 & 0.56/0.26 & 0.60/0.29 \\ teenager & 0.23/0.07 & 0.32/0.12 & 0.29/0.10 & 0.34/0.12 \\ youngadults & 0.25/0.08 & 0.30/0.10 & 0.28/0.08 & 0.31/0.10 \\ middleaged & 0.20/0.07 & 0.28/0.09 & 0.27/0.09 & 0.30/0.10 \\ elderly & 0.22/0.07 & 0.29/0.10 & 0.25/0.09 & 0.29/0.11 \\ old & 0.24/0.10 & 0.31/0.12 & 0.29/0.11 & 0.32/0.12 \\ \hline \hline \end{tabular}
\end{table}
Table 1: CelebA simulation results for biometric matching between **Original-Modified** images. The metrics are False Non-Match Rate (FNMR) at False Match Rate (FMR) = 0.01/0.1%.
Figure 3: (Top:) DET curves of face matching using generated images from the CelebA dataset. (Bottom:) Recognition performance in the table indicating FNMR @ FMR=0.01/0.1%. The age-edited images are generated using the _wzx_ token with contrastive loss.
Figure 2: Illustration of age edited images generated from the CelebA dataset.
between generated and gallery images and does not quantify the success of age editing. On the other hand, the generated images using contrastive loss2 (Eqn. 3) successfully accomplish aging/de-aging but achieve low matching scores, as the ArcFace model is not trained on generated images.
Footnote 2: We also compare with the VGGFace-based biometric loss (Eqn. 2), and observe that the contrastive loss outperforms the biometric loss. See Sec. 5.1.
Therefore, we conduct an additional experiment of fine-tuning the ArcFace model on subject disjoint age-edited images (\(\sim\)3,400) and then repeat the matching experiments for the CelebA dataset. We report the **original-original**, **modified-modified**, **original-modified** (before fine-tuning ArcFace) and **modified-modified** (after fine-tuning ArcFace) face matching performance and the corresponding DET curves in Fig. 3 for the contrastive loss and _wzx_ token combination. Note that there is a significant improvement in face matching performance between the modified-modified images and original-modified images after fine-tuning. We achieve **FNMR=3% at FMR=0.01%** and **FNMR=1% at FMR = 0.1%** with the fine-tuned face matcher on the age-edited images. We down-sample the modified images to the same resolution as the original images, and observe similar results. Additionally, the fine-tuned face matcher drastically improves when comparing original-modified images, indicating that synthetic images can improve the robustness of existing face matchers as suggested in [31].
We report the biometric matching performance using the ArcFace matcher between **original and modified** images for the AgeDB dataset. In AgeDB, we have a separate gallery set consisting of images across age groups different from the images used during training. We use them as ground-truth for evaluation and refer to this as the 'imputation' result. As anticipated, we observe modest performance across a majority of the age groups, barring 'child'. We had only 28 images from 18 subjects (out of 100) corresponding to the child group, and some of the images were of extremely poor quality, thereby resulting in an abnormally high value of FNMR. See examples of generated images in Fig. 4. We present the DET curves and the corresponding FNMR values @FMR=0.01/0.1% in Fig. 5.
### Comparison of auxiliary loss functions
We compare the two proposed auxiliary loss functions: 1) the VGGFace-based biometric loss and 2) the contrastive loss, and observe a reduction in FNMR of up to 46% @FMR=0.01%, averaged across all age groups, when using the contrastive loss instead of the biometric loss. Genuine match scores (scores between original and age-edited images of the same individual), which indicate intra-class fidelity, are much better preserved when using the contrastive loss (see Fig. 6). We explored
Figure 4: Illustration of age edited images generated from the AgeDB dataset.
Figure 5: (Top:) DET curves of face matching using generated images from the AgeDB dataset for the six age groups. (Bottom:) Recognition performance in the table indicating FNMR @ FMR=0.01/0.1%. The age-edited images are generated using the _wzx_ token with contrastive loss.
different values of \(\lambda_{b}\) and \(\lambda_{s}\) in \(\{0.01,0.1,1,10\}\), and observe that 0.1 produces the best results for both variables.
### Comparison with existing methods
We use IPCGAN [34], AttGAN [17] and Talk-to-Edit [20] as baselines for comparison. We evaluate using the pre-trained models of the baselines provided by the authors. As IPCGAN was trained on the CACD dataset [10], we fine-tune our method on 62 subjects from the CACD dataset. We observe an FNMR=2% (IPCGAN), compared to FNMR=11% (Ours) @ FMR=0.01. IPCGAN defaults to the original image when it fails to perform aging or de-aging, resulting in a spuriously low FNMR. We perform automated age prediction using the DeepFace [7] age predictor. We observe that the images synthesized by our method result in a wider dispersion of age predictions compared to the original images and the IPCGAN-generated images, indicating successful age editing. See Fig. 7. We apply AttGAN and Talk-to-Edit on the CelebA dataset. See the comparison between generated images of the proposed and baseline methods, and the biometric matching performance, in Fig. 8. We observe that the proposed method (contrastive loss, _sks_) outperforms AttGAN by 19% on 'young' images and by 7% on 'old' images at FMR=0.01. AttGAN can only edit to young or
Figure 8: (Top): Comparison of ‘young’ outputs (columns 2-4) and ‘old’ outputs (columns 5-7) generated by the proposed method with baselines: AttGAN and Talk-to-Edit. The original images are in the first column. (Bottom): False Non-Match Rate (FNMR) at False Match Rate (FMR) = 0.01/0.1%
Figure 6: Comparison of auxiliary loss functions (VGGFace-based biometric loss vs. Contrastive loss) in terms of cosine distance scores computed for genuine pairs using the ArcFace matcher. Contrastive loss produces desirable lower distance between genuine pairs.
Figure 7: (Top and middle): Comparison of outputs produced by IPCGAN and the proposed method. (Bottom): Age predictions by automated age predictor shows that our method generates images with a wider age dispersion compared to original CACD images and IPCGAN-generated images.
Figure 9: Comparison of the images generated using the four tokens in this work.
old ages. Further, we observe that the method outperforms Talk-to-Edit with an average FNMR reduction of 44% at FMR=0.01. The different age groups are simulated using a target value parameter in Talk-to-Edit that varies from 0 to 5, each value representing an age group. However, we observe several cases of distorted or missing outputs with Talk-to-Edit.
### User study
We collected 26 responses from the user study. The rank-1 biometric identification accuracy (averaged across the total number of responses) is 78.8%. The correct identification accuracies for the age groups are: child = 99.6%, teenager = 72.7%, youngadults = 68.1%, middleaged = 70.7% and elderly = 93.8%. The users were able to successfully distinguish between generated images from different age groups with reasonably high accuracy.
### Effect of rare tokens
We use four tokens in this work, namely, {_sks_, _ukj_, _ams_, _wzx_}, for the sake of brevity. We observe that the _sks_ and _wzx_ tokens result in visually compelling results compared to the remaining two tokens, and they have been used for further evaluation. Note that these tokens are condensed representations provided by the tokenizer, determined by identifying rare phrases in the vocabulary (see Fig. 9). Additionally, we evaluate the effect of the token and the class label in the prompt in Fig. 10; removing the token results in a loss of identity-specific features.
### Effect of demographics
We also observed the following effects. **Age:** The generated images can capture different age groups well if the training set contains images in the middle-aged category. We observe that if the training set images comprise mostly elderly images, then the method struggles to render images at the other end of the spectrum, _i.e._, the child category, and vice-versa. We also observe that we obtain visually compelling results of advanced aging when we use 'elderly' in the prompt instead of 'old'. **Sex:** The generated images translate the training images into older age groups more effectively for men than for women. This can be due to the use of makeup in the training images. **Ethnicity:** We do not observe any strong effects of ethnicity/race variations in the outputs. See Fig. 11. In some cases, however, the proposed method struggles with generating 'child' images if most of the training images belong to elderly people or contain facial hair. See Fig. 12.
## 6 Conclusion
Existing facial age editing methods typically struggle with identity-preserving age translation. In this work, we harness latent diffusion coupled with biometric and contrastive losses to enforce identity preservation while performing facial aging and de-aging. We use a regularization image set to impart the understanding of age progression and regression to the diffusion model, which, in turn, transfers the effects onto an unseen individual while preserving their identity. The generation process is guided by intuitive text prompts indicating the desired age. Our method demonstrates significantly better results in terms of both qualitative and quantitative evaluation, and outperforms existing methods with a reduction in FNMR of up to 44% at FMR=0.01%.
Future work will focus on designing zero-shot age editing without fine-tuning, and utilizing composable diffusion models [25] for fine-grained age editing.
Figure 11: Examples of generated images pertaining to diverse sex and ethnicity for ‘child’ group.
Figure 12: Failure cases corresponding to ‘child’ age group.
Figure 10: Impact of token (_wzx_) and class label (_person_) on generated images: “photo of a person” (left) vs. “photo of a _wzx_ person” (right). Note the token is strongly associated with a specific identity belonging to that class. |
2303.02239 | Electronic transport in titanium carbide MXenes from first principles | We compute from first principles the electronic, vibrational, and transport
properties of four known MXenes: Ti3C2, Ti3C2F2, Ti3C2(OH)2, and Ti2CF2. We
study the effect of different surface terminations and monosheet thickness on
the electrical conductivity, and show that the changes in conductivity can be
explained by the squared velocity density of the electronic state, as well as
their phonon scattering lifetime. We also compare the solution of the iterative
Boltzmann transport equation (IBTE) to different linearized solutions, namely,
the self-energy relaxation time approximation (SERTA) and the momentum
relaxation time approximation (MRTA), and we show that the SERTA significantly
underestimates the electrical conductivity while the MRTA yields results in
better agreement with the IBTE. The computed monolayer conductivity at 300K is
in reasonable agreement with reported experimental measurements. | Nesrine Boussadoune, Olivier Nadeau, Gabriel Antonius | 2023-03-03T22:05:19Z | http://arxiv.org/abs/2303.02239v2 | # Electronic transport in titanium carbide MXenes from first principles
###### Abstract
We compute from first principles the electronic, vibrational, and transport properties of four known MXenes: Ti\({}_{3}\)C\({}_{2}\), Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\), Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\), and Ti\({}_{2}\)CF\({}_{2}\). We study the effect of different surface terminations and monosheet thickness on the electrical conductivity, and show that the changes in conductivity can be explained by the squared velocity density of the electronic state, as well as their phonon scattering lifetime. We also compare the solution of the iterative Boltzmann transport equation (IBTE) to different linearized solutions, namely, the self-energy relaxation time approximation (SERTA) and the momentum relaxation time approximation (MRTA), and we show that the SERTA significantly underestimates the electrical conductivity while the MRTA yields results in better agreement with the IBTE. The computed monolayer conductivity at 300K is in reasonable agreement with reported experimental measurements.
MXenes form a large family of two-dimensional transition metal carbides and nitrides with interesting electrochemical properties [1; 2; 3; 4; 5]. These layered materials have shown potential for a wide range of applications in energy storage and conversion [6; 7; 8; 9; 10; 11]. Their high specific surface area and electrochemical activity make them suitable for supercapacitors [7; 11; 12; 13; 14], lithium-ion batteries [15; 16; 17], catalysis [18], photocatalysis [19; 20], and hydrogen storage [21; 22]. With a suitable hydrophilic surface termination, MXenes also exhibit electrocatalytic activity for the oxygen evolution reaction (OER) [23; 20], the oxygen reduction reaction (ORR) [24], and the hydrogen evolution reaction (HER)[25; 26].
The terminated MXenes have a general chemical formula M\({}_{n+1}\)X\({}_{n}\) T\({}_{x}\) (n = 1, 2 or 3), where M is a transition metal (Sc, Ti, Zr, Hf, V, Nb, Ta, Cr, Mo, etc.), X denotes carbon and/or nitrogen, T represents the surface termination group, typically -O, -OH or -F [27; 28; 7], and x is the number of terminations. The surface termination of the 2D layers originates from their synthesis by chemical etching [15; 29; 30; 31], starting from three-dimensional precursors known as MAX phases [27; 32], of which nearly one hundred compounds have been identified [1; 2; 33; 34; 35]. Previous first-principles calculations have investigated how the surface termination alters the electronic properties of the MXenes [3; 36; 37; 15]. Some MXenes become semiconductors when terminated by oxygen, such as Ti\({}_{2}\)CO\({}_{2}\), Zr\({}_{2}\)CO\({}_{2}\), and Hf\({}_{2}\)CO\({}_{2}\)[38], while others, like V\({}_{2}\)C, remain metallic for all surface terminations [39].
Beyond their general classification as metals or semiconductors, a key property of these materials for most applications is their electrical conductivity. The electronic transport properties can be computed from first principles within the framework of the Boltzmann transport equation (BTE), assuming that phonon scattering is the dominant scattering mechanism at room temperature and above, and neglecting other scattering channels such as defects and impurities [40]. Furthermore, one avoids solving the BTE iteratively (IBTE) by using the self-energy relaxation time approximation (SERTA) or the momentum relaxation time approximation (MRTA) [41; 42; 43]. This framework has been widely used to study the electrical transport in semiconductors and metals [44; 45; 46]. It has been recently shown, however, that some of these approximations may underestimate significantly the charge mobility in semiconductors, while the IBTE can be achieved at virtually the same computational cost [47].
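For orientation, in the SERTA the dc conductivity tensor reduces to a sum over electronic states of squared band velocities weighted by the phonon-limited lifetimes (standard notation: \(\Omega\) is the unit-cell volume, or area in 2D, \(f^{0}\) the equilibrium Fermi-Dirac occupation, \(v_{n\mathbf{k}}\) the band velocities and \(\tau_{n\mathbf{k}}\) the relaxation times; the MRTA additionally weights each scattering event by its momentum-relaxing efficiency):

\[\sigma_{\alpha\beta}=-\frac{e^{2}}{\Omega}\sum_{n}\int\frac{d\mathbf{k}}{\Omega_{\mathrm{BZ}}}\,\frac{\partial f^{0}_{n\mathbf{k}}}{\partial\varepsilon_{n\mathbf{k}}}\,v_{n\mathbf{k},\alpha}\,v_{n\mathbf{k},\beta}\,\tau_{n\mathbf{k}}\]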
In the present work, we study the phonon-limited electrical conductivity of four known MXenes: Ti\({}_{3}\)C\({}_{2}\), Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\), Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\), and Ti\({}_{2}\)CF\({}_{2}\). The comparison among these materials allows us to discern the influence of surface termination and monosheet thickness on the scattering lifetime of the charge carriers. We also compare the different frameworks for computing the electrical conductivity, and find that the conclusions of Claes et al. [47] do hold for this class of two-dimensional metallic systems, namely that the SERTA approach underestimates the electrical conductivity while the MRTA results are in better agreement with the IBTE. We show that the predicted conductivity is consistent with experimental measurements.
## I Results and discussion
### Computational details
We perform density functional theory (DFT) and density functional perturbation theory (DFPT) calculations using the Abinit software [48; 49] to obtain the structural, electronic, and vibrational properties of the materials. We use the PBE exchange-correlation functional [50], with pseudopotentials from the Pseudo-Dojo database [51]. For all the structures considered, we use an energy cutoff of 35 Hartree to represent the wavefunctions and a \(16\times 16\times 1\) grid of k-points to sample the Brillouin zone, such that the energy is converged within \(10^{-6}\) eV/cell. We perform geometry optimization to relax the forces below \(10^{-5}\) eV/\(\AA\). A vacuum spacing of 20 Å is used in order to avoid interactions between the periodic images of the MXene monolayers.
### Structural parameters
All the 2D materials considered assume the space group \(P6_{3}/mmc\). The geometry optimization yields relaxed lattice parameters of 3.098 Å, 3.076 Å, 3.086 Å, and 3.059 Å for the Ti\({}_{3}\)C\({}_{2}\), Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\), Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\), and Ti\({}_{2}\)CF\({}_{2}\) monosheets, respectively. Several possible configurations exist for the surface termination of Ti\({}_{n+1}\)C\({}_{n}\)T\({}_{x}\)[52; 53; 28]. We use the most energetically favorable configuration, where the surface termination atoms (F or OH) sit at the hollow site of three neighboring carbon atoms. The relaxed atomic structures of all the materials considered are presented in Fig 1(a-d).
### Electronic bands
The band structure and the projected density of states (PDOS) of the materials are presented in Fig 1(e-h). All the materials are metallic, with the electronic states near the Fermi level mostly composed of Ti \(d\) orbitals. For the surface-terminated systems, the band in the \(M-K\) direction is highly dispersive at the Fermi level, suggesting a high electrical conductivity [54]. We note the presence of a valley and a flat band region along the \(\Gamma-K\) direction, which contribute to singularities in the density of states and represent potential scattering channels for the charge carriers.
### Phonon bands
In Fig 1(i-l), we present the phonon band structures and the projected phonon density of states. These results were obtained by employing coarse q-point meshes of \(8\times 8\times 1\) for Ti\({}_{3}\)C\({}_{2}\), Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\) and Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\), and \(16\times 16\times 1\) for Ti\({}_{2}\)CF\({}_{2}\). Every phonon frequency is real and positive, indicating that the structures are stable with respect to atomic displacements [55; 56].
Figure 1: **a-d** Structure of the monosheet materials. Titanium atoms are in blue, carbon in brown, fluorine in grey, oxygen in red and hydrogen in pink. **e-h** Electronic band structures along the high symmetry directions and Projected Density of States (PDOS). The energy levels are referenced to the Fermi level at zero. **i-l** Phonon band structures along the high symmetry directions and Projected Phonon Density of States. Note that Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\) also possesses two phonon branches associated with the motion of the hydrogen atoms at an energy of 449 meV.
From the projected phonon density of states, we see a clear energy separation between the phonon modes associated with the different atomic species. The low frequency bands correspond to the vibrating motion of the metallic atoms, the high frequency bands are associated with the motion of carbon atoms, and the surface terminations bring additional phonon bands at intermediate energies. This general feature has been observed in other MXene materials as well [8, 39].
### Electrical conductivity
The electrical conductivity can be computed by solving the iterative Boltzmann transport equation (IBTE) [43, 57, 58], in which the main scattering mechanism is phonon collisions, described by the electron-phonon coupling matrix elements computed from first principles. By making use of a relaxation time approximation, the BTE can be linearized, avoiding the iterative procedure and writing the electrical conductivity \(\sigma_{\alpha}\) as
\[\sigma_{\alpha}=\frac{-e^{2}}{\Omega}\sum_{n}\int\frac{d\mathbf{k}}{\Omega_{BZ}}\tau_{n\mathbf{k}}(T)|v_{n\mathbf{k}\alpha}|^{2}f^{\prime}(\varepsilon_{n\mathbf{k}}) \tag{1}\]
where \(\alpha\) is a Cartesian direction, \(\Omega\) is the volume of the unit cell, \(\Omega_{BZ}\) is the volume of the Brillouin zone, \(\tau_{n\mathbf{k}}(T)\) is the temperature-dependent scattering lifetime of the electron state, \(v_{n\mathbf{k}\alpha}\) is the electron velocity, and \(f^{\prime}(\varepsilon)\) is the derivative of the Fermi-Dirac distribution, which depends on temperature. Different approximations exist for the computation of the electron scattering lifetime, including the Self-Energy Relaxation Time Approximation (SERTA) [42, 43] and the Momentum Relaxation Time Approximation (MRTA) [44, 47]. In the former, the lifetime is computed from the inverse of the Fan-Migdal electron-phonon coupling self-energy, and in the latter, the lifetime is computed from the squared electron-phonon coupling matrix elements weighted by an efficiency factor which accounts for the momentum direction of the scattering states relative to the electrical field [47].
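As a minimal illustration, the following Python sketch evaluates Eq. (1) by quadrature on a uniform \(\mathbf{k}\)-grid within a relaxation time approximation, assuming the band energies, velocities, and lifetimes have already been computed and exported from a first-principles transport run; the array names are hypothetical, and the unit conversion to S/m is left out.

```python
import numpy as np

def fermi_dirac_deriv(eps, mu, T):
    """Derivative f'(eps) of the Fermi-Dirac distribution (eps, mu in eV, T in K)."""
    kT = 8.617333e-5 * T  # Boltzmann constant in eV/K
    return -1.0 / (4.0 * kT * np.cosh((eps - mu) / (2.0 * kT)) ** 2)

def rta_conductivity(eps_nk, v_nk, tau_nk, mu, T, volume):
    """Quadrature of Eq. (1) for one Cartesian direction alpha.

    eps_nk, v_nk, tau_nk: (nbands, nk) arrays of band energies, velocities
    along alpha, and scattering lifetimes; the 1/nk weight stands in for
    the normalized Brillouin-zone integral dk/Omega_BZ."""
    nk = eps_nk.shape[1]
    integrand = tau_nk * np.abs(v_nk) ** 2 * fermi_dirac_deriv(eps_nk, mu, T)
    return -integrand.sum() / (volume * nk)  # charge prefactor omitted
```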
### Convergence study
One of the main challenges in computing the electrical conductivity is the fine sampling of electron (\(\mathbf{k}\)-points) and phonon (\(\mathbf{q}\)-points) wavevectors required to converge the transport properties [49, 59]. A dense \(\mathbf{k}\)-mesh is required to achieve good sampling of the electronic occupations near the Fermi level, while a dense \(\mathbf{q}\)-sampling is required to converge the electronic lifetimes [59]. This is especially true in two-dimensional metals, where the density of states is expected to vary rapidly near the Fermi level, as can be seen in Fig 1(**e-h**).
In order to optimize the overall computational cost, we employ the Shankland-Koelling-Wood interpolation scheme [60, 61], a feature recently made available within the Abinit automated workflows [46, 49]. The electronic energies and wavefunctions near the Fermi level are interpolated from a coarse \(\mathbf{k}\)-grid onto a fine \(\mathbf{k}\)-grid. We set the coarse \(\mathbf{k}\)-grid to \(16\times 16\times 1\) and vary the fine \(\mathbf{k}\)-grid to perform the convergence study.
Figure 2 shows the convergence of the temperature-dependent electrical conductivity of Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\) with varying \(\mathbf{k}\)-points and \(\mathbf{q}\)-points grids. From this figure, we conclude that \(64\times 64\times 1\) homogeneous grids for both \(\mathbf{k}\)-points and \(\mathbf{q}\)-points yield sufficiently converged results, and we use these parameters for all the studied materials.
### Results for the electrical conductivity
The temperature-dependent electrical conductivity of the four MXenes with the different approximations is presented in Fig. 3. We note, again, that the IBTE calculation comes at essentially the same computational cost as the SERTA and MRTA calculations. We find that the SERTA underestimates the conductivity by as much as 14 % at T=300 K and 4 % at T=800 K, compared to the IBTE, while the MRTA is in somewhat better agreement with the IBTE (9 % at T=300 K and 0.8 % at T=800 K).
In Figure 4, we decompose the SERTA electrical conductivity into functions separating the integrands of Eq. (1). The derivative of the equilibrium Fermi-Dirac distribution function \(-\frac{\partial f}{\partial\varepsilon}\) is peaked around the Fermi
Figure 2: Convergence of the electrical conductivity in Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\) with respect to the Brillouin zone sampling of electron (\(\mathbf{k}\)) and phonon (\(\mathbf{q}\)) wavevectors. Left: Varying \(\mathbf{k}\)-points grids and fixed \(\mathbf{q}\)-points grid. Right: Varying \(\mathbf{q}\)-points grids and commensurate \(\mathbf{k}\)-points grids.
level and indicates the energy range where the electronic states may contribute to the conductivity. The squared velocity density \(\langle v^{2}\rangle\) is defined as
\[\langle v_{\alpha}^{2}(\varepsilon)\rangle=\sum_{n}\int\frac{d\mathbf{k}}{\Omega_{BZ}}|v_{n\mathbf{k}\alpha}|^{2}\,\delta(\varepsilon-\varepsilon_{n\mathbf{k}}) \tag{2}\]
This function is indicative of the number of carriers available for conductivity. It is a smooth function of energy, unlike the density of states, which, for 2D materials, has a spiky structure that requires a high number of k-points to sample. By comparing the \(\langle v^{2}\rangle\) function of the different materials, we note that the addition of either surface termination to the Ti\({}_{3}\)C\({}_{2}\) sheet results in an increase of the squared velocity density. This is due to the surface termination atoms pulling electrons off the central layer and lowering the Fermi level to intercept the highly dispersive Ti \(d\)-band along the \(M-K\) segment of the band structures.
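For reference, a short Python sketch of a Gaussian-smeared evaluation of Eq. (2) is given below; the smearing width and array names are illustrative choices, not values used in our calculations.

```python
import numpy as np

def velocity_density(eps_nk, v_nk, energies, sigma=0.01):
    """<v_alpha^2>(eps) of Eq. (2), with the Dirac delta replaced by a
    normalized Gaussian of width sigma (eV); eps_nk and v_nk are
    (nbands, nk) arrays, energies is the 1D grid where <v^2> is sampled."""
    nk = eps_nk.shape[1]
    out = np.empty(len(energies))
    for i, e in enumerate(energies):
        delta = np.exp(-(((e - eps_nk) / sigma) ** 2)) / (sigma * np.sqrt(np.pi))
        out[i] = (np.abs(v_nk) ** 2 * delta).sum() / nk
    return out
```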
Looking at the electronic scattering lifetimes \(\tau_{n\mathbf{k}}\) shown in Fig 4, we note that Ti\({}_{2}\)CF\({}_{2}\), being the thinnest monosheet, also has the shortest lifetimes. Among the terminated structures, Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\) and Ti\({}_{2}\)CF\({}_{2}\) have shorter scattering lifetimes than Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\), most likely due to the presence of flat bands near the Fermi level in the \(\Gamma-K\) region for the fluorinated structures. As a result, according to the SERTA calculation, Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\) has the highest electrical conductivity. However, the full IBTE calculation reveals instead that Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\) has a higher electrical conductivity than Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\) at temperatures up to 500 K.
### Comparison with experiments
Several conductivity measurements of layered Ti\({}_{3}\)C\({}_{2}\)T\({}_{x}\) are reported with a variety of experimental setups [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. The surface termination is either F or OH, but is generally unspecified. In practice, the bulk conductivity of MXene flakes depends on the synthesis method, which may yield different concentrations of defects and impurities, as well as different spacings between nanosheets.
From a dimensional analysis, the bulk conductivity must be proportional to the density of MXene nanosheets as \(\sigma=\sigma_{2D}/L_{z}\), where \(\sigma_{2D}\) is the monolayer conductivity with units of Siemens (S), and \(L_{z}\) is the interlayer distance. In the present calculation, \(L_{z}\) is set arbitrarily to 20 \(\AA\), whereas in experiments, \(L_{z}\) is inferred from XRD spectra.
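As a sketch of this sheet-to-bulk conversion (the numbers below are placeholders, not measured values):

```python
sigma_2d = 5.0e-4   # hypothetical monolayer conductivity, in Siemens (S)
L_z = 20.0e-10      # interlayer distance in meters (the 20 Angstrom used here)
sigma_bulk = sigma_2d / L_z  # bulk conductivity in S/m
print(f"bulk conductivity: {sigma_bulk:.2e} S/m")
```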
Table 1 presents a list of recent experimental measurements of electrical conductivities. While the experimental precision is on the fourth significant digit or better, the measured values among different samples vary by an order of magnitude. The present calculation corresponds to a defect-free system, in which electron-phonon scattering is the only source of resistivity, and sets an upper bound on the conductivity. Comparing against the highest electrical conductivity achieved for Ti\({}_{3}\)C\({}_{2}\)T\({}_{x}\), the computed IBTE value at 300K is indeed larger by 59% for Ti\({}_{3}\)C\({}_{2}\)F\({}_{2}\) and 34% for Ti\({}_{3}\)C\({}_{2}\)(OH)\({}_{2}\).
Figure 4: The phonon self-energy lifetime at \(T=300K\) of each electronic state near the Fermi level (green discs) and the squared velocity density (blue) with \(\alpha=1\), that is, the direction along a primitive vector. The grey shaded curve represents the negative of the derivative of the Fermi-Dirac distribution.
Figure 3: Temperature-dependent electrical conductivity of each structure with the linearized (SERTA, MRTA) and iterative Boltzmann transport equation (IBTE).
Aside from the \(\mathbf{k}\)-points / \(\mathbf{q}\)-points convergence, and unaccounted-for scattering channels such as defects, other sources of error in our theoretical calculation include the thermal expansion of the lattice, the renormalization of the electron velocities due to phonons [57], as well as the accuracy of the exchange-correlation functional for the band structure [43] and the electron-phonon coupling strength [83; 84; 85]. Overall, an overestimation by about 50% represents a reasonably good agreement, and an accuracy comparable to that of typical mobility calculations in 2D materials [45].
## II Conclusion
In summary, we performed a comparative study of the electronic transport of the pristine Ti\({}_{3}\)C\({}_{2}\), terminated Ti\({}_{3}\)C\({}_{2}\)T\({}_{2}\) (T = F, OH) and Ti\({}_{2}\)CF\({}_{2}\) monosheets from first principles. We computed the electrical conductivity of the MXenes with different relaxation time approximations, as well as with the iterative Boltzmann transport equation. We found that the SERTA underestimates the conductivity, while the MRTA is in better agreement with the IBTE. However, the relative differences among monosheets with different surface terminations can only be resolved by the iterative procedure. Nonetheless, the relaxation-time approximation provides a useful understanding of the underlying physics by decomposing the electrical conductivity into scattering lifetime and squared velocity density. The computed monolayer conductivity is in reasonable agreement with experiments, within 30% to 60%. The methodology presented in this work may be used to further explore candidate materials for energy storage.
###### Acknowledgements.
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference numbers RGPIN-2019-07149 and DGECR-2019-00008], as well as support from Université du Québec à Trois-Rivières. The computational resources were provided by Calcul Québec and the Digital Research Alliance of Canada.
|
2303.15745 | On Feature Scaling of Recursive Feature Machines | In this technical report, we explore the behavior of Recursive Feature
Machines (RFMs), a type of novel kernel machine that recursively learns
features via the average gradient outer product, through a series of
experiments on regression datasets. When successively adding random noise
features to a dataset, we observe intriguing patterns in the Mean Squared Error
(MSE) curves with the test MSE exhibiting a decrease-increase-decrease pattern.
This behavior is consistent across different dataset sizes, noise parameters,
and target functions. Interestingly, the observed MSE curves show similarities
to the "double descent" phenomenon observed in deep neural networks, hinting at
a new connection between RFMs and neural network behavior. This report lays the
groundwork for future research into this peculiar behavior. | Arunav Gupta, Rohit Mishra, William Luu, Mehdi Bouassami | 2023-03-28T06:00:41Z | http://arxiv.org/abs/2303.15745v1 | # On Feature Scaling of Recursive Feature Machines
###### Abstract
In this technical report, we explore the behavior of Recursive Feature Machines (RFMs), a type of novel kernel machine that recursively learns features via the average gradient outer product, through a series of experiments on regression datasets. When successively adding random noise features to a dataset, we observe intriguing patterns in the Mean Squared Error (MSE) curves with the test MSE exhibiting a decrease-increase-decrease pattern. This behavior is consistent across different dataset sizes, noise parameters, and target functions. Interestingly, the observed MSE curves show similarities to the "double descent" phenomenon observed in deep neural networks, hinting at a new connection between RFMs and neural network behavior. This report lays the groundwork for future research into this peculiar behavior.
## 1 Introduction
Recent work on understanding the theory behind the breakthrough performance of large neural networks (Brown et al., 2020; Ramesh et al., 2021; Oppenlaender, 2022; Touvron et al., 2023) has analyzed kernel machines as more theoretically accessible approximations of neural networks (Belkin et al., 2018, 2019; Shi et al., 2022). The Recursive Feature Machine, as introduced in Radhakrishnan et al. (2023), describes a new type of kernel machine which recursively learns a Mahalanobis norm (Mahalanobis, 1936) using the average gradient outer product. The original paper presents several theoretical and empirical results which connect the RFM to the Neural Tangent Kernel (Jacot et al., 2018), as well as to commonly observed behavior in neural networks such as the arrangement of weights in a dense layer of a deep neural network.
In this technical report, we describe a series of experiments which show interesting behavior under feature scaling for RFMs. As we add random noise features to a regression dataset, MSE after training appears to decrease, then increase, then decrease again. Results are compared to "baseline" kernels which use the Laplacian norm (these are also simply RFM kernels that have been trained for 0 iterations).
Some discussion on the results is provided, as the curves appear to look similar to "double descent" as seen in deep neural networks. While random feature-style theoretical setups have been studied in the context of kernel machines (Rahimi and Recht, 2007; Mei and Montanari, 2020), more work in this direction is required, and this report hopes to serve as the baseline for further research into this peculiar behavior.
## 2 Experiment Setup
The RFM is trained according to the algorithm described in (Radhakrishnan et al., 2023). The models are trained at full precision on a single NVIDIA RTX 2060 with 6 GB of VRAM running Ubuntu 20.04 with Pytorch v1.13 (Paszke et al., 2019) for 10 iterations. At the beginning of training, 20% of the training set is reserved for validation; after each iteration, MSE is measured on the validation set and the best \(M\) is kept. Empirically, the best model is found around the 3rd or 4th iteration, and in rare cases the validation MSE improves marginally from the 3rd iteration to the 10th iteration.
For the base experiment, a dataset of size \(1000\times 2000\) (called \(X\)) is randomly generated, with each value being i.i.d. from \(\mathcal{N}(0,1)\). Variations to the base experiment are described in the relevant sections below.
For every \(d\in[5,6,7,\ldots,99]\cup[100,110,120,\ldots,2000]\), we take the first \(d\) columns of \(X\) and scale them by \(\frac{1}{\sqrt{d}}\) in order to control the standard deviation. An 80-20 train-test split is applied, and target values are generated using a simple cubic function:
\[f(x)=5x_{1}^{3}+2x_{2}^{2}+10x_{3} \tag{1}\]
Note that the target equation 1 only utilizes the first three values in each \(x\). This means, for each \(d\), there are \(d-3\) columns of pure noise appended, which do not have any contribution to the target value. Thus, every model is given all data required to compute the exact value of the target -- all fluctuations in model performance are due to the model's ability to learn the target function.
Finally, the RFM is trained for 10 iterations with cross-validation, and the MSE is computed for the train and test splits. The full scaling test is repeated 100 times with different randomly generated \(X\) in order to further validate the results.
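The following self-contained Python sketch reproduces the structure of this scaling loop. Since the RFM training code is external (see Radhakrishnan et al., 2023), a Laplacian-kernel ridge regressor stands in for the trained RFM, which corresponds to the 0-iteration baseline described above; the bandwidth, ridge parameter, and seed are arbitrary choices.

```python
import numpy as np
from scipy.spatial.distance import cdist

def laplacian_kernel(X, Z, bandwidth=10.0):
    """Laplacian kernel exp(-||x - z||_1 / bandwidth)."""
    return np.exp(-cdist(X, Z, metric="cityblock") / bandwidth)

def fit_predict(X_tr, y_tr, X_te, ridge=1e-6):
    """Kernel ridge regression: the '0-iteration RFM' baseline."""
    K = laplacian_kernel(X_tr, X_tr)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X_tr)), y_tr)
    return laplacian_kernel(X_te, X_tr) @ alpha

rng = np.random.default_rng(0)
N, D = 1000, 2000
X = rng.standard_normal((N, D))
test_mse = {}
for d in list(range(5, 100)) + list(range(100, D + 1, 10)):
    Xd = X[:, :d] / np.sqrt(d)  # scale to control the standard deviation
    y = 5 * Xd[:, 0] ** 3 + 2 * Xd[:, 1] ** 2 + 10 * Xd[:, 2]  # target, Eq. (1)
    n_tr = int(0.8 * N)  # 80-20 train-test split
    y_hat = fit_predict(Xd[:n_tr], y[:n_tr], Xd[n_tr:])
    test_mse[d] = float(np.mean((y_hat - y[n_tr:]) ** 2))
```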
## 3 Results
Figure 1 shows the result for the base experiment with \((N,D)=(1000,2000)\). Results of variations are described in the subsequent sections below.
### Noise Effects
In order to better understand how dataset noise affects the RFM's performance, we successively add higher amounts of noise to the target function. Random gaussian noise was added to the target function like so:
\[y=f(x)+\mathcal{N}(0,\sigma) \tag{2}\]
where \(\sigma\in[0,0.1,0.01,0.001]\) are the tested noise values. For consistency, the same random noise is used for each tested \(d\) value. Figure 2 shows the results of the noise experiment.
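A sketch of this sweep, reusing `fit_predict`, `X`, `rng`, and `N` from the previous snippet (the handful of feature sizes shown is illustrative):

```python
noisy_mse = {}
for sigma in [0.0, 0.001, 0.01, 0.1]:
    eta = sigma * rng.standard_normal(N)  # one fixed draw, reused for every d
    for d in (5, 50, 500, 2000):
        Xd = X[:, :d] / np.sqrt(d)
        y = 5 * Xd[:, 0] ** 3 + 2 * Xd[:, 1] ** 2 + 10 * Xd[:, 2] + eta  # Eq. (2)
        n_tr = int(0.8 * N)
        y_hat = fit_predict(Xd[:n_tr], y[:n_tr], Xd[n_tr:])
        noisy_mse[(sigma, d)] = float(np.mean((y_hat - y[n_tr:]) ** 2))
```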
### Dataset Size Effects
We also test if dataset size has any effect on the scaling behavior. Dataset sizes of \(N\in[200,400,600,800,1000]\) are tested, and for each, the values of \(d\) are scaled proportionally: \(d=\left[5,6,7,\ldots,\frac{N}{10}\right]\cup\left[\frac{N}{10},\frac{N}{10}+10,\frac{N}{10}+20,\ldots,2N\right]\). Figure 3 shows the results of the dataset size experiment.
Figure 1: Train and Test MSE as a function of feature size for RFM and Baseline (Laplacian) kernel. 95% confidence intervals are shown by the respectively colored shaded areas.
### Target Function Effects
Our final experiment tests a different target function and compares the result to the original target function defined in equation 1 (labeled as "cubic" on figure 4). The alternative target function, called "randmat" is defined by equation 3:
\[f(X)=X^{\prime}K\qquad\texttt{where}\ K\sim\mathcal{N}_{10\times 1}(0,1) \tag{3}\]
And \(X^{\prime}\) is the first 10 columns of \(X\). Figure 4 compares the results of the alternative target function to the cubic target function.
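A sketch of the alternative target, again with `X` and `rng` from the first snippet (valid for \(d\geq 10\), since only the first 10 columns enter):

```python
K = rng.standard_normal((10, 1))  # K ~ N_{10x1}(0, 1)

def randmat_target(Xd):
    """f(X) = X' K of Eq. (3), with X' the first 10 columns."""
    return (Xd[:, :10] @ K).ravel()
```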
Figure 3: Effects of dataset size, from \(N=200\) to \(N=1000\). Feature size is scaled from \(5\) to \(2N\) for each dataset. Left (red) is baseline, right (blue) is RFM.
Figure 2: Noise effects, from \(\sigma=0.0\) to \(\sigma=0.1\). Red is baseline kernel, blue is RFM.
## 4 Conclusion
Overall, the results indicate that despite modifications to the dataset size, noise parameter, and target function, the shape of the test MSE curve as a function of feature size does not change. In all cases, the RFM MSE drops sharply until \(D=0.1N\), then rises until around \(D=0.5N\), and then continues a shallower descent back down until the end of the experiment at \(D=2N\). Interestingly, the Laplacian kernel also follows this pattern, albeit with a much shorter turnaround (the inflection points occur at around \(D=0.05N\) and \(D=0.2N\), respectively).
The type of dataset and target function being used in these experiments make the shape of the MSE curves highly peculiar. For one, only the first three variables of each datapoint are used to compute the target value, so as additional "noise" features are added, the MSE is not expected to drop (thus indicating a performance increase) in the region to the left of \(D=0.1N\). We can confirm that the values in \(X\) are indeed i.i.d. -- multicollinearity is centered at zero across all features. More perplexing, however, is the region to the right of \(D=0.5N\), where the test MSE begins to fall slowly. As this happens in all cases, it appears to be an artifact of the model, not the dataset.
The authors note that these MSE curves look similar to "double descent" curves seen in deep neural networks (Nakkiran et al., 2020). However, more analysis is needed before confirming this relationship. Indeed, early evidence points in this direction, as the feature size of the RFM directly relates to the size of the \(M\)-norm matrix (which is \(D\times D\)). The authors of the original paper theorize a relation between the \(M\)-norm matrix and the first layer weights of a deep fully connected neural network. Thus, it is possible that, by increasing the feature size through the addition of random noise features, we are performing an operation analogous to increasing the width of a neural network, thereby inducing a double-descent-style phenomenon.
All experiments shown in this report are reproducible by using the code at 1 and following the instructions under the 'Scaling Test' section in the README.
Footnote 1: [https://github.com/agupta01/ml-theory-capstone](https://github.com/agupta01/ml-theory-capstone)
## Acknowledgments
Experiments described in this technical report were done under the HDSI DSC 180 Capstone program. The authors are immensely grateful to Mikhail Belkin and Parthe Pandit from UCSD for their mentorship and guidance during the project.
Figure 4: Scaling for cubic and random matrix functions, \((N,D)=(1000,[5,2000])\). |
2303.04890 | On the invariant and anti-invariant cohomologies of hypercomplex
manifolds | A hypercomplex structure $(I,J,K)$ on a manifold $M$ is said to be
$C^\infty$-pure-and-full if the Dolbeault cohomology $H^{2,0}_{\partial}(M,I)$
is the direct sum of two natural subgroups called the $\bar{J}$-invariant and
the $\bar{J}$-anti-invariant subgroups. We prove that a compact hypercomplex
manifold that satisfies the quaternionic version of the $dd^c$-Lemma is
$C^\infty$-pure-and-full. Moreover, we study the dimensions of the
$\bar{J}$-invariant and the $\bar{J}$-anti-invariant subgroups, together with
their analogue in the Bott-Chern cohomology. For instance, in real dimension 8,
we characterize the existence of hyperk\"ahler with torsion metrics in terms of
the dimension of the $\bar{J}$-invariant subgroup. We also study the existence
of special hypercomplex structures on almost abelian solvmanifolds. | Mehdi Lejmi, Nicoletta Tardini | 2023-03-08T21:10:13Z | http://arxiv.org/abs/2303.04890v1 | # On the invariant and anti-invariant cohomologies of hypercomplex manifolds.
###### Abstract.
A hypercomplex structure \((I,J,K)\) on a manifold \(M\) is said to be \(C^{\infty}\)-pure-and-full if the Dolbeault cohomology \(H^{2,0}_{\partial}(M,I)\) is the direct sum of two natural subgroups called the \(\overline{J}\)-invariant and the \(\overline{J}\)-anti-invariant subgroups. We prove that a compact hypercomplex manifold that satisfies the quaternionic version of the \(dd^{c}\)-Lemma is \(C^{\infty}\)-pure-and-full. Moreover, we study the dimensions of the \(\overline{J}\)-invariant and the \(\overline{J}\)-anti-invariant subgroups, together with their analogue in the Bott-Chern cohomology. For instance, in real dimension \(8\), we characterize the existence of hyperkahler with torsion metrics in terms of the dimension of the \(\overline{J}\)-invariant subgroup. We also study the existence of special hypercomplex structures on almost abelian solvmanifolds.
2010 Mathematics Subject Classification: 53C55 (primary); 53B35 (secondary) The first author is supported by the Simons Foundation Grant #636075. The second author is partially supported by GNSAGA of INdAM.
**Theorem**.: _(Theorem 5) Let \((M,I,J,K)\) be a compact hypercomplex manifold that satisfies the \(\partial\partial_{J}\)-Lemma then the hypercomplex structure is \(C^{\infty}\)-pure-and-full._
Then, we discuss the dimensions \(h^{\pm}_{\overline{J}}\) of \(H^{\overline{J},\pm}_{\partial}(M)\). We prove that, for a deformation of an \(SL(2,\mathbb{H})\)-structure on a compact manifold, the dimension \(h^{-}_{\overline{J}}\) is an upper-semi-continuous function (Corollary 8). This is similar to a result obtained in [8] on compact almost-complex manifolds. Then, we show that on a compact \(SL(2,\mathbb{H})\)-manifold, the existence of HKT metrics can be characterized in terms of \(h^{+}_{\overline{J}}\) and the dimension of \(H^{\overline{J},+}_{BC}(M)\), a subgroup of the second quaternionic Bott-Chern cohomology group. Indeed, we have the following:
**Theorem**.: _(Theorem 14) On a compact \(SL(2,\mathbb{H})\)-manifold, either \(\dim H^{\overline{J},+}_{BC}(M)=h^{+}_{\overline{J}}+1\) or \(\dim H^{\overline{J},+}_{BC}(M)=h^{+}_{\overline{J}}.\) Moreover, the \(SL(2,\mathbb{H})\)-manifold is HKT if and only if \(\dim H^{\overline{J},+}_{BC}(M)=h^{+}_{\overline{J}}.\)_
In Section 4, we discuss \(8\)-dimensional hypercomplex nilmanifolds and we obtain the following:
**Corollary**.: _(Corollary 19) Let \(N\) be an \(8\)-dimensional nilmanifold endowed with a left-invariant hypercomplex structure \((I,J,K)\), then we have the following,_
* \(N\) _admits an HKT metric if and only if_ \(h^{+}_{\overline{J}}=4\)_;_
* \(N\) _admits no HKT metrics if and only if_ \(h^{+}_{\overline{J}}=2.\)__
In Section 5, we focus on hypercomplex almost abelian Lie groups. Such Lie groups were studied in a recent paper of Andrada and Barberis [1]. We similarly obtain that on unimodular almost abelian Lie groups a left-invariant hyperhermitian structure is HKT if and only if it is hyperkahler. Moreover, since among hyperhermitian structures the \(SL(n,\mathbb{H})\) condition plays a fundamental role we give an explicit characterization on almost abelian solvmanifolds for an invariant hyperhermitian structure to be \(SL(n,\mathbb{H})\). Then, we focus on the \(8\)-dimensional case and we prove the following
**Corollary**.: _(Corollary 28) Let \(\mathfrak{g}\) be an \(8\)-dimensional non-abelian almost abelian unimodular Lie algebra equipped with a left-invariant \(SL(2,\mathbb{H})\)-structure. Then the dimension of the space of \(\partial\)-closed, non-\(\partial\)-exact left-invariant imaginary \((2,0)\)-forms is non-zero if and only if \(\tilde{f}=0\) and \(a=0\), where \(\tilde{f}\) and \(a\) are given by Theorem 21. In particular, \(\mathfrak{g}\) is nilpotent and does not admit any HKT metric._
Acknowledgments. Part of this work has been carried on during the stay of the second author at the Graduate Center of the City University of New York. She would like to thank Mehdi Lejmi and the City University of New York for the invitation, financial support, and hospitality. The authors are grateful to Adrian Andrada and Maria Laura Barberis for the private communication about the overlap with [1]. The authors would like to thank Gueo Grantcharov, Yuri Ustinovskiy and Scott Wilson for useful discussions.
## 2. Preliminaries
In this section we will recall some well-known facts about hypercomplex manifolds and fix some notation. Let \(M\) be a smooth manifold and \(L\) be a complex structure on \(M\). Then, \(L\) acts as an isomorphism on the space of \((p,q)\)-forms on \(M\) with respect to \(L\) via
\[L\alpha=\big{(}\sqrt{-1}\big{)}^{p-q}\alpha,\quad\alpha\in A^{p,q}_{L}(M),\]
where we denote by \(A^{p,q}_{L}(M)\) the space of \((p,q)\)-forms on \(M\) (here the bi-degree is taken with respect to the complex structure \(L\)). A _hypercomplex manifold_ is a smooth manifold \(M\) of real dimension \(4n\) equipped with three complex structures \(I,J,K\) that anticommute with each other, and such that \(IJ=K\). In particular, this induces a \(2\)-sphere of complex structures on \(M\) given by
\[\left\{aI+bJ+cK\mid a^{2}+b^{2}+c^{2}=1\right\}.\]
A Riemannian metric \(g\) on \(M\) that is Hermitian with respect to the three complex structures \(I,J,K\) is called _hyperhermitian_. We set \(\omega_{L}(x,y)=g(Lx,y)\) for the fundamental form, with \(L=I,J,K\). One can define a 2-form on \(M\) by
\[\Omega:=\frac{1}{2}\left(\omega_{J}+\sqrt{-1}\omega_{K}\right),\]
and it is easy to see that
\[\Omega\in A_{I}^{2,0}(M).\]
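Indeed, since \(JI=-K\) and \(KI=J\), for any tangent vectors \(x,y\) one has

\[\Omega(Ix,y)=\frac{1}{2}\left(g(JIx,y)+\sqrt{-1}\,g(KIx,y)\right)=\frac{1}{2}\left(-\omega_{K}(x,y)+\sqrt{-1}\,\omega_{J}(x,y)\right)=\sqrt{-1}\,\Omega(x,y),\]

which is precisely the condition for a \(2\)-form to be of type \((2,0)\) with respect to \(I\).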
Then, \((M,I,J,K,g)\) is called _hyperkahler_ if \(d\Omega=0\) and it is called _hyperkahler with torsion_, or briefly _HKT_, if \(\partial\Omega=0\), where again the complex differential operator \(\partial\) is taken with respect to the complex structure \(I\). In terms of fundamental forms, \(g\) is hyperkahler if and only if
\[d\omega_{I}=d\omega_{J}=d\omega_{K}=0\]
and, as proven in [15] it is HKT if and only if
\[d^{c,I}\omega_{I}=d^{c,J}\omega_{J}=d^{c,K}\omega_{K},\]
where \(d^{c,L}:=L^{-1}dL\), with \(L=I,J,K\).
Notice that if \((M,I,J,K,g)\) is a 4-dimensional hyperhermitian manifold then it is clearly HKT for dimensional reasons but this is not true in general in higher dimension. Remark that \(g\) is related to \(\Omega\) by
\[g(x,\overline{y})=\Omega(x,J\overline{y}),\qquad x,y\in T_{I}^{1,0}(M). \tag{1}\]
Since \(JI=-IJ\) notice that \(J:A_{I}^{p,q}(M)\to A_{I}^{q,p}(M)\). We recall the following:
**Definition 1**.: A form \(\eta\in A_{I}^{2p,0}(M)\) is called _real_ if \(J\overline{\eta}=\eta\). A real (2,0)-form \(\eta\) is called _q-positive_ if \(\eta(x,J\overline{x})>0\), for \(x\in T_{I}^{1,0}(M)\), \(x\neq 0\).
In particular, an HKT structure \(\Omega\) is
* real \(J\Omega=\overline{\Omega}\),
* q-positive \(\Omega(x,J\overline{x})>0\), for \(x\in T_{I}^{1,0}(M)\), \(x\neq 0\),
and vice versa, a real, q-positive, \(\partial\)-closed \((2,0)\)-form defines an HKT structure via the formula (1).
Let \((M,I,J,K)\) be a compact \(4n\)-dimensional hypercomplex manifold. An important differential operator in this setting is the following
\[\partial_{J}:A_{I}^{p,q}(M)\to A_{I}^{p+1,q}(M),\qquad\partial_{J}:=J^{-1} \overline{\partial}J,\]
where the operator \(\overline{\partial}\) is considered with respect to \(I\). It was shown in [27] that
\[\partial_{J}^{2}=0\,,\qquad\partial\partial_{J}+\partial_{J}\partial=0.\]
Notice that both operators increase the first degree by one, so if we fix \(q=0\), we get a cochain complex \((A_{I}^{p,0}(M),\partial,\partial_{J})\) with two anticommuting differentials. For simplicity of notations we will drop the letter \(I\) in \(A_{I}^{p,q}(M)\) when it is understood.
**Definition 2**.: On a hypercomplex manifold \((M,I,J,K)\) of real dimension \(4n\), we say that the \(\partial\partial_{J}\)-Lemma holds if every \(\partial\)-closed, \(\partial_{J}\)-exact \((p,0)\)-form in \(A_{I}^{p,0}(M)\) is \(\partial\partial_{J}\)-exact, for any \(0\leqslant p\leqslant 2n\).
Furthermore, it is natural to consider the _quaternionic Dolbeault cohomology groups_
\[H_{\partial}^{p,0}(M):=\frac{\operatorname{Ker}(\partial|_{A^{p,0}(M)})}{ \partial A^{p-1,0}(M)}\,,\qquad H_{\partial_{J}}^{p,0}(M):=\frac{\operatorname {Ker}(\partial_{J}|_{A^{p,0}(M)})}{\partial_{J}A^{p-1,0}(M)}\,,\]
and the _quaternionic Bott-Chern and Aeppli cohomology groups_ (see [14])
\[H_{\mathrm{BC}}^{p,0}(M):=\frac{\operatorname{Ker}(\partial|_{A^{p,0}(M)}) \cap\operatorname{Ker}(\partial_{J}|_{A^{p,0}(M)})}{\partial\partial_{J}A^{p-2,0}(M)}\,,\]
\[H^{p,0}_{\rm A}(M):=\frac{\operatorname{Ker}(\partial\partial_{J}|_{A^{p,0}(M)})}{\partial A^{p-1,0}(M)+\partial_{J}A^{p-1,0}(M)}.\]
It was shown in [14] that these cohomology groups are isomorphic to the kernels of suitable elliptic differential operators and so, if \(M\) is compact, they are finite dimensional. We will denote with \(h^{p,0}_{\rm BC}\) the dimension of \(H^{p,0}_{\rm BC}(M)\) and so on.
In special bidegrees natural decompositions of forms appear. As discussed in [20], any \(\varphi\in A^{2,0}_{I}(M)\) can be written as
\[\varphi=\varphi^{\overline{J},+}+\varphi^{\overline{J},-},\]
where
\[\varphi^{\overline{J},+}:=\frac{1}{2}\left(\varphi+J\overline{\varphi}\right),\qquad\varphi^{\overline{J},-}:=\frac{1}{2}\left(\varphi-J\overline{\varphi} \right).\]
This gives a decomposition of the bundle \(\Lambda^{2,0}(M)\) in
\[\Lambda^{2,0}(M)=\Lambda^{\overline{J},+}(M)\oplus\Lambda^{\overline{J},-}(M),\]
where sections of \(\Lambda^{\overline{J},+}(M)\) are _real_ forms and are denoted by \(\Omega^{\overline{J},+}(M)\), and sections of \(\Lambda^{\overline{J},-}(M)\) satisfy \(J\overline{\varphi}=-\varphi\) and are called _imaginary_ and are denoted by \(\Omega^{\overline{J},-}(M)\).
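Note that \(\varphi\mapsto J\overline{\varphi}\) is an involution on \((2,0)\)-forms, since \(J\) commutes with conjugation and \(J^{2}=\operatorname{Id}\) on \(2\)-forms; for instance,

\[J\overline{\varphi^{\overline{J},+}}=\frac{1}{2}\left(J\overline{\varphi}+J\overline{J\overline{\varphi}}\right)=\frac{1}{2}\left(J\overline{\varphi}+\varphi\right)=\varphi^{\overline{J},+},\]

so \(\varphi^{\overline{J},+}\) is indeed real, and similarly \(J\overline{\varphi^{\overline{J},-}}=-\varphi^{\overline{J},-}\).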
For any compact hypercomplex manifold one can define the following two subgroups of \(H^{2,0}_{\partial}(M)\), the \(\overline{J}\)_-invariant_ subgroup
\[H^{\overline{J},+}_{\partial}(M)=\left\{a\in H^{2,0}_{\partial}(M)\,|\, \exists\varphi\in a\text{ such that }\partial\varphi=0\text{ and }\varphi\in\Omega^{\overline{J},+}(M)\right\},\]
and the \(\overline{J}\)_-anti-invariant subgroup_
\[H^{\overline{J},-}_{\partial}(M)=\left\{a\in H^{2,0}_{\partial}(M)\,|\, \exists\varphi\in a\text{ such that }\partial\varphi=0\text{ and }\varphi\in\Omega^{\overline{J},-}(M)\right\}\,.\]
We recall the following definition (cf. [20])
**Definition 3**.: A hypercomplex structure \((I,J,K)\) on a smooth manifold \(M\) is called
* \(C^{\infty}\)_-pure_ if \[H^{\overline{J},+}_{\partial}(M)\cap H^{\overline{J},-}_{\partial}(M)=\{0\}\,;\]
* \(C^{\infty}\)_-full_ if \[H^{\overline{J},+}_{\partial}(M)+H^{\overline{J},-}_{\partial}(M)=H^{2,0}_{ \partial}(M);\]
* \(C^{\infty}\)_-pure-and-full_ if \[H^{\overline{J},+}_{\partial}(M)\oplus H^{\overline{J},-}_{\partial}(M)=H^{2,0 }_{\partial}(M).\]
A hypercomplex manifold \((M,I,J,K)\) admits a unique torsion-free connection preserving \(I,J,K\) called the Obata connection [23]. The holonomy of the Obata connection then lies in the general quaternionic linear group \(GL(n,\mathbb{H})\) (see for example [26]). However, in many examples such as nilmanifolds [4], the holonomy is actually contained in \(SL(n,\mathbb{H})\) (see [27, 28]).
**Definition 4**.: A hypercomplex manifold \((M,I,J,K)\) of real dimension \(4n\) is called an \(SL(n,\mathbb{H})\)-manifold if the holonomy of the Obata connection lies in \(SL(n,\mathbb{H})\).
Verbitsky in [28] proved that if a compact \(4n\)-dimensional hypercomplex manifold \((M,I,J,K)\) is \(SL(n,\mathbb{H})\) then the canonical bundle of \((M,I)\) is holomorphically trivial. Moreover, the vice versa also holds under the additional assumption that there exists an HKT metric. We denote an \(SL(n,\mathbb{H})\)-manifold by \((M,I,J,K,\Phi)\), where \(\Phi\) is a nowhere degenerate form in \(A^{2n,0}_{I}(M)\) and we can also assume that \(\Phi=J\overline{\Phi}\), in particular \(\partial\Phi=\partial_{J}\Phi=0.\) It was shown in [20] that every hypercomplex structure on a compact \(SL(2,\mathbb{H})\)-manifold is \(C^{\infty}\)-pure-and-full.
## 3. Pure and Full Hypercomplex structures
In this section we will focus on the \(C^{\infty}\)-pure-and-full condition. First of all we prove the following which is an analogue of the complex case (see [7, 21] and [2, Theorem 2.4])
**Theorem 5**.: _Let \((M,I,J,K)\) be a compact hypercomplex manifold that satisfies the \(\partial\partial_{J}\)-Lemma then the hypercomplex structure is \(C^{\infty}\)-pure-and-full._
Proof.: First, we prove that the hypercomplex structure is \(C^{\infty}\)-pure. Let \(\mathfrak{a}\in H^{\overline{J},+}_{\partial}(M)\cap H^{\overline{J},-}_{\partial}(M).\) Choose a \(\partial\)-closed \((2,0)\)-form \(\alpha\in\Omega^{\overline{J},+}(M)\) and a \(\partial\)-closed \((2,0)\)-form \(\beta\in\Omega^{\overline{J},-}(M)\) both representatives of \(\mathfrak{a}\). Then, we have \(\alpha-\beta=\partial\gamma,\) for some \((1,0)\)-form \(\gamma\). Since \(\alpha\in\Omega^{\overline{J},+}(M)\) and \(\beta\in\Omega^{\overline{J},-}(M),\) then \(\alpha\) and \(\beta\) are also \(\partial_{J}\)-closed. Hence \(\partial\gamma\) is \(\partial_{J}\)-closed. The hypercomplex structure satisfies the \(\partial\partial_{J}\)-lemma so \(\partial\gamma=\partial\partial_{J}w,\) for some complex-valued function \(w.\) We write \(w=u+\sqrt{-1}v,\) for some real-valued functions \(u,v.\) Hence \(\alpha-\beta=\partial\partial_{J}\left(u+\sqrt{-1}v\right)\) and so
\[\alpha-\partial\partial_{J}u=\beta+\sqrt{-1}\partial\partial_{J}v.\]
Since \(\alpha-\partial\partial_{J}u\in\Omega^{\overline{J},+}(M)\) and \(\beta+\sqrt{-1}\partial\partial_{J}v\in\Omega^{\overline{J},-}(M),\) we deduce that \(\alpha=\partial\partial_{J}u\) and \(\beta=-\sqrt{-1}\partial\partial_{J}v\) and so \(\mathfrak{a}\) is the zero class.
Now, we would like to prove that the hypercomplex structure is \(C^{\infty}\)-full. Let \(\alpha\) be a \(\partial\)-closed \((2,0)\)-form representative of \(\mathfrak{a}\in H^{2,0}_{\partial}(M).\) First, we claim that we can choose \(\alpha\) such that \(\partial\alpha=\partial_{J}\alpha=0\). Indeed, the form \(\partial_{J}\alpha\) is \(\partial\)-closed and \(\partial_{J}\)-exact, hence \(\partial_{J}\alpha=\partial_{J}\partial\beta,\) for some \((1,0)\)-form \(\beta.\) Hence, \(\alpha-\partial\beta\) is \(\partial\)-closed, \(\partial_{J}\)-closed and cohomologous to \(\alpha.\) Now, we decompose \(\alpha\) as \(\alpha=\alpha^{\overline{J},+}+\alpha^{\overline{J},-}.\) Because \(\partial\alpha=\partial_{J}\alpha=0,\) we deduce that \(\partial\alpha^{\overline{J},+}=\partial\alpha^{\overline{J},-}=0\) and so we get the classes \([\alpha^{\overline{J},+}]\in H^{\overline{J},+}_{\partial}(M)\) and \([\alpha^{\overline{J},-}]\in H^{\overline{J},-}_{\partial}(M).\) The theorem follows.
As a consequence of Theorem 5 and [14] we have
**Corollary 6**.: _The hypercomplex structure on a compact HKT \(SL(n,\mathbb{H})\)-manifold is \(C^{\infty}\)-pure-and-full._
Now, on a compact \(SL(2,\mathbb{H})\)-manifold \((M,I,J,K,\Phi)\) equipped with a hyperhermitian metric \(g,\) we define the operator:
\[P:\Omega^{\overline{J},-}(M) \to\Omega^{\overline{J},-}(M),\] \[\alpha \mapsto(\partial\partial^{\star}\alpha)^{\overline{J},-},\]
where \((\cdot)^{\overline{J},-}\) is the imaginary part, and \(\partial^{\star}\) is defined as the adjoint of \(\partial\) with respect to the (global) Hermitian inner product
\[\langle\alpha,\beta\rangle=\int_{M}h(\alpha,\beta)\,\frac{\Omega^{2}\wedge \overline{\Phi}}{2},\]
(here \(2h=g-\sqrt{-1}\omega_{I},\) and \(\Omega\) is the \((2,0)\)-form induced by \(g\)). Moreover, \(\partial^{\star}=-\ast\partial\ast,\) where \(\ast\) is the Hodge star operator defined by (see [20] for more details)
\[\alpha\wedge\ast\beta\wedge\overline{\Phi}=h(\alpha,\beta)\,\frac{\Omega^{2} \wedge\overline{\Phi}}{2}.\]
We also define the Laplacian \(\Delta_{\partial}=\partial\partial^{\star}+\partial^{\star}\partial.\)
**Lemma 7**.: _On a compact \(SL(2,\mathbb{H})\)-manifold \((M,I,J,K,\Phi)\) equipped with a hyperhermitian metric \(g\), the operator \(P\) is a self-adjoint strongly elliptic linear operator with kernel the \(\Delta_{\partial}\)-harmonic imaginary \((2,0)\)-forms._
Proof.: If \(\alpha\) is in the kernel of \(P\), then \(0=\left\langle\left(\partial\partial^{\star}\alpha\right)^{\overline{J},-},\alpha \right\rangle=\left\langle\partial^{\star}\alpha,\partial^{\star}\alpha\right\rangle\). Hence \(\partial^{\star}\alpha=\partial\alpha=0\) because \(*\alpha=\alpha\) so \(\alpha\) is \(\Delta_{\partial}\)-harmonic. Furthermore, we can express \(P\) as follows
\[P(\alpha) =\left(\partial\partial^{\star}\alpha\right)^{\overline{J},-},\] \[=\frac{1}{2}\left(Id+*\right)\left(\partial\partial^{\star} \alpha\right)-\frac{1}{4}h\left(\left(Id+*\right)\left(\partial\partial^{ \star}\alpha\right),\Omega\right)\,\Omega,\] \[=\frac{1}{2}\Delta_{\partial}\alpha-\frac{1}{4}h\left(\Delta_{ \partial}\alpha,\Omega\right)\,\Omega.\]
We would like to compute the principal symbol of the operator \(P\). First, we remark that
\[h\left(\partial\partial^{\star}\alpha,\Omega\right)=h\left(*\partial\partial^ {\star}\alpha,*\Omega\right)=-h\left(*\partial*\partial*\alpha,\Omega\right)= h\left(\partial^{\star}\partial\alpha,\Omega\right).\]
A straightforward computation shows that \(h\left(\Delta_{\partial}\alpha,\Omega\right)\) is a first order operator on \(\alpha\). Indeed,
\[*h\left(\Delta_{\partial}\alpha,\Omega\right) =2*h\left(\partial^{\star}\partial\alpha,\Omega\right),\] \[=-2*h\left(*\partial*\partial\alpha,\Omega\right),\] \[=-2\,\partial*\partial\alpha\wedge\Omega,\] \[=-2\partial\left(*\partial\alpha\wedge\Omega\right)+2*\partial\alpha\wedge\partial\Omega,\] \[=-2\partial\left(L_{\Omega}*\partial\alpha\right)+2*\partial\alpha\wedge\partial\Omega,\] \[=2\partial\left(*h\left(\partial\alpha,\Omega\right)\right)+2*\partial\alpha\wedge\partial\Omega,\] \[=-2\partial\left(*h\left(\alpha,\partial\Omega\right)\right)+2*\partial\alpha\wedge\partial\Omega,\]
where we use the fact that \(h(\alpha,\Omega)=0\). Here \(L_{\Omega}\) denotes the operator \(L_{\Omega}(\cdot)=\cdot\wedge\Omega\) and \(\Lambda_{\Omega}:=*L_{\Omega}*\) is the contraction by \(\Omega\). We conclude that the principal symbol of \(P\) is the same as \(\frac{1}{2}\Delta_{\partial}\). The lemma follows.
Denote by \(h_{\overline{J}}^{\pm}\) the dimension of \(H_{\partial}^{\overline{J},\pm}(M)\) and by \(h_{\partial}^{2,0}\) the dimension of \(H_{\partial}^{2,0}(M)\). Then, on a compact \(SL(2,\mathbb{H})\)-manifold \((M,I,J,K,\Phi)\), we have by [20]
\[h_{\partial}^{2,0}=h_{\overline{J}}^{+}+h_{\overline{J}}^{-}. \tag{2}\]
We can then prove a path-wise semi-continuity property of \(h_{\overline{J}}^{-}\). This is similar to the result obtained by [8] on almost-complex manifolds.
**Corollary 8**.: _Let \((I_{t},J_{t},K_{t},\Phi_{t})\) be a smooth family of \(SL(2,\mathbb{H})\)-structures on a compact manifold \(M\) with \(t\in[0,1]\). Then, \(h_{\overline{J}_{t}}^{-}\) is an upper-semi-continuous function in \(t\)._
Proof.: This follows from Lemma 7 and the upper-semi-continuity of the kernel of a family of elliptic operators [22, Theorem 4.3].
**Remark 9**.: From Equation (2) and because the dimension \(h_{\partial}^{2,0}\) depends on the choice of the complex-structure, the dimension \(h_{\overline{J}}^{+}\) is not necessarily lower-semi-continuous, as we show in Example 15. Clearly, if the initial hypercomplex structure admits an hyperkahler metric, then along small deformations the Hodge numbers do not vary and so in that case \(h_{\overline{J}}^{+}\) is lower-semi-continuous.
An immediate consequence of Corollary 8 is the following
**Corollary 10**.: _Let \((I_{t},J_{t},K_{t},\Phi_{t})\) be a smooth family of \(SL(2,\mathbb{H})\)-structures on a compact manifold \(M\) such that \(h_{\overline{J}_{0}}^{-}=0\) at \(t=0\). Then, \(h_{\overline{J}_{t}}^{-}=0\) for a small \(t\)._
For any compact hypercomplex manifold \((M,I,J,K)\), we can define the following two subgroups of \(H_{BC}^{2,0}(M)\):
\[H_{BC}^{\overline{J},+}(M):=\{\mathfrak{a}\in H_{BC}^{2,0}(M)\,|\,\exists\, \alpha\in\mathfrak{a}\text{ such that }\partial\alpha=\partial_{J}\alpha=0\text{ and }\alpha\in\Omega^{ \overline{J},+}(M)\},\]
\(H^{\overline{J},-}_{BC}(M):=\{\mathfrak{a}\in H^{2,0}_{BC}(M)\,|\,\exists\,\alpha \in\mathfrak{a}\text{ such that }\partial\alpha=\partial_{J}\alpha=0\text{ and }\alpha\in \Omega^{\overline{J},-}(M)\}.\)
We can easily deduce the following:
**Proposition 11**.: _Let \((M,I,J,K)\) be a compact hypercomplex manifold. Then,_
\(H^{2,0}_{BC}(M)=H^{\overline{J},+}_{BC}\oplus H^{\overline{J},-}_{BC}.\)__
Proof.: Let \(\mathfrak{a}\in H^{\overline{J},+}_{BC}(M)\cap H^{\overline{J},-}_{BC}(M).\) Choose \(\alpha\in\Omega^{\overline{J},+}(M)\) and \(\beta\in\Omega^{\overline{J},-}(M)\) both representatives of \(\mathfrak{a}\). Then \(\alpha-\beta=\partial\partial_{J}\left(u+\sqrt{-1}v\right)\) for some real-valued functions \(u,v\). Hence \(\alpha-\partial\partial_{J}u=\beta+\sqrt{-1}\partial\partial_{J}v\); since the left-hand side lies in \(\Omega^{\overline{J},+}(M)\) and the right-hand side in \(\Omega^{\overline{J},-}(M)\), both vanish, and so \(\mathfrak{a}\) is the zero class. Let \(\alpha\) be a \(\partial\)-closed and \(\partial_{J}\)-closed representative of \(\mathfrak{a}\in H^{2,0}_{BC}(M).\) We decompose \(\alpha\) as \(\alpha=\alpha^{\overline{J},+}+\alpha^{\overline{J},-}\) where \(\alpha^{\overline{J},\pm}\in\Omega^{\overline{J},\pm}(M).\) Then, \(\partial\alpha^{\overline{J},\pm}=\partial_{J}\alpha^{\overline{J},\pm}=0.\) The proposition follows.
Consider the natural map \(H^{2,0}_{BC}(M)\to H^{2,0}_{\partial}(M),\) we have the following
**Lemma 12**.: _On a compact hypercomplex manifold of any dimension the natural maps \(H^{\overline{J},-}_{BC}(M)\to H^{\overline{J},-}_{\partial}(M)\) and \(H^{\overline{J},+}_{BC}(M)\to H^{\overline{J},+}_{\partial}(M)\) are surjective._
Proof.: To prove that both maps are surjective, we consider \(\alpha\in\Omega^{\overline{J},\pm}(M)\) a representative of \(\mathfrak{a}\in H^{\overline{J},\pm}_{\partial}(M)\). Because \(\alpha\in\Omega^{\overline{J},\pm}(M)\), then \(\partial\alpha=\partial_{J}\alpha=0\). The lemma follows.
On a compact \(SL(2,\mathbb{H})\)-manifold, we can push further these relations and prove the following
**Lemma 13**.: _On a compact \(SL(2,\mathbb{H})\)-manifold, the natural map \(H^{\overline{J},-}_{BC}(M)\mapsto H^{\overline{J},-}_{\partial}(M)\) is an isomorphism._
Proof.: We only need to prove that the map \(H^{\overline{J},-}_{BC}(M)\mapsto H^{\overline{J},-}_{\partial}(M)\) is injective. Let \(\alpha\in\Omega^{\overline{J},-}(M)\) be a representative of \(\mathfrak{a}\in H^{\overline{J},-}_{BC}(M)\). We suppose that \(\alpha=\partial\beta,\) for some \((1,0)\)-form \(\beta\). Then,
\[\|\alpha\|_{h}^{2} = \int_{M}\alpha\wedge*\alpha\wedge\overline{\Phi},\] \[= \int_{M}\alpha\wedge\alpha\wedge\overline{\Phi},\] \[= \int_{M}\partial\beta\wedge\partial\beta\wedge\overline{\Phi}=0.\]
We deduce that \(\alpha=0\) and hence the map is injective.
On \(SL(2,\mathbb{H})\)-manifolds we are then able to characterize the existence of HKT metrics in terms of \(\dim H^{\overline{J},+}_{BC}(M)\) and \(h^{\pm}_{\overline{J}}\) as follows.
**Theorem 14**.: _On a compact \(SL(2,\mathbb{H})\)-manifold, either \(\dim H^{\overline{J},+}_{BC}(M)=h^{+}_{\overline{J}}+1\) or \(\dim H^{\overline{J},+}_{BC}(M)=h^{+}_{\overline{J}}.\) Moreover, the \(SL(2,\mathbb{H})\)-manifold is HKT if and only if \(\dim H^{\overline{J},+}_{BC}(M)=h^{+}_{\overline{J}}.\)_
Proof.: From [20, Theorem 9.8], we have that
\(0\leqslant h^{2,0}_{BC}-h^{2,0}_{\partial}\leqslant 1.\)
Moreover, the \(SL(2,\mathbb{H})\)-manifold is HKT if and only if \(h^{2,0}_{BC}=h^{2,0}_{\partial}.\) It follows from (2), Proposition (11) and Lemma (13) that
\(0\leqslant\dim H^{\overline{J},+}_{BC}(M)-h^{+}_{\overline{J}}\leqslant 1,\)
and that the \(SL(2,\mathbb{H})\)-manifold is HKT if and only if \(\dim H^{\overline{J},+}_{BC}(M)=h^{+}_{\overline{J}}.\)
We now compute explicitly the spaces \(H^{\overline{J},\pm}_{\partial}(M)\) and \(H^{\overline{J},\pm}_{BC}(M)\) on a family of examples.
**Example 15**.: Let \(t\in(0,1)\) and consider the following family of Lie algebras \(\mathfrak{g}_{t}\) (see [9]) with structure equations
\[\begin{array}{l}[e_{1},e_{2}]=-t\,e_{6},\quad[e_{3},e_{4}]=(1-t)\,e_{6},\\ [e_{1},e_{3}]=-t\,e_{7},\quad[e_{2},e_{4}]=(t-1)\,e_{7},\\ [e_{1},e_{4}]=-t\,e_{8},\quad[e_{2},e_{3}]=(1-t)\,e_{8},\end{array}\]
or equivalently
\[de^{1}=de^{2}=de^{3}=de^{4}=de^{5}=0,\]
\[de^{6}=te^{12}-(1-t)e^{34},\quad de^{7}=te^{13}-(t-1)e^{24},\quad de^{8}=te^{14}-(1-t)e^{23}.\]
Define the hypercomplex structure
\[Ie_{1}=e_{2},\quad Ie_{3}=e_{4},\quad Ie_{5}=e_{6},\quad Ie_{7}=e_{8}\]
\[Je_{1}=e_{3},\quad Je_{2}=-e_{4},\quad Je_{5}=e_{7},\quad Je_{6}=-e_{8}.\]
The Lie algebras \(\mathfrak{g}_{t}\) are all isomorphic to \(\mathfrak{n}_{3}\), see [9]. Setting
\[\varphi^{1}=e^{1}+ie^{2},\quad\varphi^{2}=e^{3}+ie^{4},\quad\varphi^{3}=e^{5}+ ie^{6},\quad\varphi^{4}=e^{7}+ie^{8}\]
as a global coframe of \((1,0)\)-forms, the complex structure equations become
\[\left\{\begin{array}{lcl}d\varphi^{1}&=&0,\\ d\varphi^{2}&=&0,\\ d\varphi^{3}&=&-\frac{t}{2}\varphi^{1\overline{1}}+\frac{1-t}{2}\varphi^{2\overline{2}},\\ d\varphi^{4}&=&\left(t-\frac{1}{2}\right)\varphi^{12}-\frac{1}{2}\varphi^{2\overline{1}}.\end{array}\right.\]
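For instance, the expression for \(d\varphi^{3}\) follows from \(de^{6}\): since \(de^{5}=0\), \(e^{12}=\frac{\sqrt{-1}}{2}\varphi^{1\overline{1}}\) and \(e^{34}=\frac{\sqrt{-1}}{2}\varphi^{2\overline{2}}\), one computes

\[d\varphi^{3}=\sqrt{-1}\,de^{6}=\sqrt{-1}\left(\frac{\sqrt{-1}\,t}{2}\varphi^{1\overline{1}}-\frac{\sqrt{-1}\,(1-t)}{2}\varphi^{2\overline{2}}\right)=-\frac{t}{2}\varphi^{1\overline{1}}+\frac{1-t}{2}\varphi^{2\overline{2}}.\]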
We recall that a complex structure \(J\) on a Lie algebra \(\mathfrak{g}\) is said to be _abelian_ if \([Jx,Jy]=[x,y]\) for every \(x,y\in\mathfrak{g}\), this is equivalent to \(d\Lambda^{1,0}(\mathfrak{g}_{C}^{\star})\subseteq\Lambda^{1,1}(\mathfrak{g}_ {C}^{\star})\). In [4] it is proven that an hypercomplex nilmanifold admits an HKT metric if and only if the underlying hypercomplex structure is abelian.
Notice that, in the example, the hypercomplex structure is abelian if and only if \(t=\frac{1}{2}\) and so, by [4], there exists an HKT structure if and only if \(t=\frac{1}{2}\).
Also remark that the simply connected nilpotent Lie groups \(G_{t}\) associated to \(\mathfrak{g}_{t}\) admit lattices and we will still denote the hypercomplex structure \((I,J,K)\) with the same letters. Since the complex structure \(I\) is nilpotent the Dolbeault cohomology groups on the associated nilmanifolds can be computed using only invariant forms.
In particular, notice that for \(t\neq\frac{1}{2}\), we have
\[H^{2,0}_{\partial}(M)\simeq\left\langle\varphi^{13},\varphi^{14},\varphi^{23},\varphi^{24}\right\rangle.\]
Moreover,
\[H^{\overline{J},+}_{\partial}(M)\simeq\left\langle\varphi^{13}+\varphi^{24}, \varphi^{14}-\varphi^{23}\right\rangle\quad\text{and}\quad H^{\overline{J},-}_ {\partial}(M)\simeq\left\langle\varphi^{13}-\varphi^{24},\varphi^{14}+\varphi^{ 23}\right\rangle.\]
For \(t=\frac{1}{2}\), the hypercomplex structure is abelian and we have
\[H^{2,0}_{\partial}(M)\simeq\left\langle\varphi^{12},\varphi^{13},\varphi^{14},\varphi^{23},\varphi^{24},\varphi^{34}\right\rangle.\]
Moreover,
\[H^{\overline{J},+}_{\partial}(M)\simeq\left\langle\varphi^{13}+\varphi^{24}, \varphi^{14}-\varphi^{23},\varphi^{34},\varphi^{12}\right\rangle\quad\text{ and}\quad H^{\overline{J},-}_{\partial}(M)\simeq\left\langle\varphi^{13}-\varphi^{24}, \varphi^{14}+\varphi^{23}\right\rangle.\]
Notice that for \(t=\frac{1}{2}\), \(h^{\overline{J},+}_{\partial}(M)=4\) and for \(t\neq\frac{1}{2}\), \(h^{\overline{J},+}_{\partial}(M)=2\) confirming, as mentioned in Remark 9 that in general, \(h^{\overline{J},+}_{\partial}(M)\) is not lower-semi-continuous.
Similarly, we can compute the quaternionic Bott-Chern cohomology using invariant forms by [20]. Therefore, for \(t\neq\frac{1}{2}\), we have
\[H^{2,0}_{BC}(M)\simeq\left\langle\varphi^{12},\varphi^{13},\varphi^{14}, \varphi^{23},\varphi^{24}\right\rangle.\]
Moreover,
\[H^{\overline{J},+}_{BC}(M)\simeq\left\langle\varphi^{12},\varphi^{13}+\varphi^{24 },\varphi^{14}-\varphi^{23}\right\rangle\quad\text{and}\quad H^{\overline{J},-}_ {BC}(M)\simeq\left\langle\varphi^{13}-\varphi^{24},\varphi^{14}+\varphi^{23} \right\rangle.\]
In particular, as expected, notice that \(\dim H^{\overline{J},+}_{BC}(M)=\dim H^{\overline{J},+}_{\partial}(M)+1\) and \(\dim H^{\overline{J},-}_{BC}(M)=\dim H^{\overline{J},-}_{\partial}(M)\).
For \(t=\frac{1}{2}\), the hypercomplex structure is abelian and so by [4] there exists an HKT balanced metric. In such a case, by [13], we know that
\[H^{2,0}_{BC}(M)\simeq H^{2,0}_{\partial}(M),\]
and so we can use the previous computation.
**Remark 16**.: On \(4\)-dimensional manifolds admitting almost-complex structures it was proved that the almost-complex structures \(J\) with \(h^{-}_{\overline{J}}=0\) (where \(h^{-}_{\overline{J}}\) is the dimension of the subspace of \(H^{2}_{dR}(M)\) with classes represented by \(J\)-anti-invariant forms) form an open dense set in the \(\mathcal{C}^{\infty}\)-Frechet-topology in the space of almost-complex structures metric related to an integrable one [8, Theorem 1.1]. Based on this, Draghici, Li and Zhang made a conjecture (Conjecture 2.4 in [8]) about \(h^{-}_{\overline{J}}\) on a compact \(4\)-manifold which asserts that \(h^{-}_{J}\) vanishes for generic almost complex structures \(J\). In particular, they have confirmed their conjecture for \(4\)-manifolds with \(b^{+}=1\). We notice that if \((M,I,J,K,g)\) is a compact hyperkahler manifold with \(b_{2}=4\) then \(h^{\overline{J},-}_{\partial}=0\). Indeed, since \((M,I,g)\) is Kahler we have
\[b_{2}=2h^{2,0}_{\partial}+h^{1,1}_{\partial}\]
and since it satisfies the \(\partial\partial_{J}\)-lemma
\[b_{2}=2h^{2,0}_{\partial}+h^{1,1}_{\partial}=2(h^{+}_{\overline{J}}+h^{-}_{ \overline{J}})+h^{1,1}_{\partial}\,.\]
Now, since the structure is hyperkahler \(h^{2,0}_{\partial}\geqslant 1\) (\(\Omega\) represents a non-trivial class) and \(h^{1,1}_{\partial}\geqslant 1\) (\(\omega_{I}\) represents a non-trivial class). Hence,
\[b_{2}\geqslant 3+2h^{-}_{\overline{J}},\]
so we obtain the conclusion if \(b_{2}=4\). In particular, in this case \(h^{2,0}_{\partial}=1\). Notice that in the literature there are several results concerning the second Betti number of a hyperkahler manifold, see for instance [16, 24].
Clearly, this is a special case of having an \(SL(n,\mathbb{H})\) HKT manifold \((M,I,J,K,\Phi,\Omega)\) with \(h^{2,0}_{\partial}=1\). Indeed, in such a case
\[h^{2,0}_{\partial}=h^{+}_{\overline{J}}+h^{-}_{\overline{J}}\]
and \(\Omega\) represents a non-trivial class in \(H^{\overline{J},+}_{\partial}\) and so \(h^{-}_{\overline{J}}=0\) and \(h^{+}_{\overline{J}}=1\).
Moreover, notice that if \((M,I,J,K,\Phi,\Omega)\) is an \(SL(2,\mathbb{H})\) HKT manifold with \(h^{2,0}_{\partial}=1\) and we consider a small deformation \(J_{t}\) of \(J_{0}=J\), such that \((M,I,J_{t},K_{t})\) is hypercomplex, then
\[h^{-}_{\overline{J}_{t}}=0\]
since \(h^{-}_{\overline{J}_{t}}\) is an upper-semi-continuous function of \(t\), and
\[h^{+}_{\overline{J}_{t}}=1\]
since \(h^{+}_{\overline{J}_{t}}\) is a lower-semi-continuous function of \(t\) (because \(I\) is fixed), and \(H^{\overline{J}_{t},+}_{\partial}(M)\subseteq H^{2,0}_{\partial}(M)\) whose dimension is \(1\).
## 4. Hypercomplex eight-dimensional Nilpotent Lie groups
This section is devoted to showing that \(h_{\overline{J}}^{+}\) alone can be used to characterize the existence of HKT metrics on \(8\)-dimensional nilpotent Lie groups. We consider a nilpotent Lie algebra of real dimension \(8\) equipped with a hypercomplex structure \((I,J,K)\). Then, it follows from [6, Proposition 3.1] that we have the existence of four \(d\)-closed \(1\)-forms \(e^{1},e^{2}=Ie^{1},e^{3}=Je^{1},e^{4}=Ke^{1}.\)
We can consider a basis of \(1\)-forms \(\{e^{1},e^{2},e^{3},e^{4},e^{5},e^{6}=Ie^{5},e^{7}=Je^{5},e^{8}=Ke^{5}\}\). The hypercomplex structure is given by:
\[Ie^{1}=e^{2},\quad Ie^{3}=e^{4},\quad Ie^{5}=e^{6},\quad Ie^{7}=e^{8}.\]
\[Je^{1}=e^{3},\quad Je^{2}=-e^{4},\quad Je^{5}=e^{7},\quad Je^{6}=-e^{8}.\]
A basis of \((1,0)\)-forms is given by:
\[\varphi^{1}=e^{1}+\sqrt{-1}e^{2},\quad\varphi^{2}=e^{3}+\sqrt{-1}e^{4},\quad \varphi^{3}=e^{5}+\sqrt{-1}e^{6},\quad\varphi^{4}=e^{7}+\sqrt{-1}e^{8}.\]
It follows from [6] that
\[\partial\varphi^{1}=0,\quad\partial\varphi^{2}=0,\quad\partial\varphi^{3}= \frac{1}{2}\left(t_{2}-\sqrt{-1}t_{3}\right)\varphi^{1}\wedge\varphi^{2},\quad \partial\varphi^{4}=\frac{1}{2}\left(t_{4}+\sqrt{-1}t_{1}\right)\varphi^{1} \wedge\varphi^{2},\]
for some constants \(t_{1},t_{2},t_{3},t_{4}.\) We consider the \((2,0)\)-forms \(\Phi_{1},\Phi_{2}\in\Omega^{\overline{J},-}(M)\):
\[\Phi_{1}=\varphi_{1}\wedge\varphi_{3}-\varphi_{2}\wedge\varphi_{4},\quad\Phi_{2}=\varphi_{1}\wedge\varphi_{4}+\varphi_{2}\wedge\varphi_{3}.\]
Then, we have that \(\partial\Phi_{1}=\partial\Phi_{2}=0.\) It is clear that \(\Phi_{1},\Phi_{2}\) cannot be \(\partial\)-exact. We conclude the following:
**Theorem 17**.: _For any left-invariant hypercomplex structure \((I,J,K)\) on a nilpotent Lie group of real dimension \(8\), \(h_{\overline{J}}^{-}=2.\)_
Moreover we have the following
**Theorem 18**.: _Let \(G\) be an \(8\)-dimensional real nilpotent Lie group with a left-invariant hyperhermitian structure \((I,J,K,g)\), then we have the following,_
* \(\Omega\) _is HKT if and only if_ \(h_{\overline{J}}^{+}=4\)_;_
* \(\Omega\) _is not HKT if and only if_ \(h_{\overline{J}}^{+}=2\)_._
Proof.: From [6], we consider the \((2,0)\)-forms \(\Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4}\in\Omega^{\overline{J},+}(G)\):
\[\Psi_{1}=\varphi_{1}\wedge\varphi_{2},\quad\Psi_{2}=\varphi_{3}\wedge\varphi_ {4},\quad\Psi_{3}=\varphi_{1}\wedge\varphi_{3}+\varphi_{2}\wedge\varphi_{4}, \quad\Psi_{4}=\varphi_{1}\wedge\varphi_{4}-\varphi_{2}\wedge\varphi_{3}.\]
Then, we have that \(\partial\Psi_{3}=\partial\Psi_{4}=0\), and it is clear that \(\Psi_{3},\Psi_{4}\) cannot be \(\partial\)-exact.
Now, \(\Omega\) is HKT if and only if the hypercomplex structure is abelian, which is equivalent to \(t_{2}-\sqrt{-1}t_{3}=t_{4}+\sqrt{-1}t_{1}=0\), namely
\[t_{1}=t_{2}=t_{3}=t_{4}=0.\]
In such a case, \(\Psi_{1}\) and \(\Psi_{2}\) are also \(\partial\)-closed and not \(\partial\)-exact and so \(h_{\overline{J}}^{+}=4\).
If \(\Omega\) is not HKT, then at least one among \(t_{1},t_{2},t_{3},t_{4}\) is not zero. Hence, \(\Psi_{1}\) is \(\partial\)-exact and \(\Psi_{2}\) is not \(\partial\)-closed, therefore \(h_{\overline{J}}^{+}=2\).
**Corollary 19**.: _Let \(N\) be an \(8\)-dimensional nilmanifold endowed with a left-invariant hypercomplex structure \((I,J,K)\), then we have the following,_
* \(N\) _admits an HKT metric if and only if_ \(h_{\overline{J}}^{+}=4\)_;_
* \(N\) _admits no HKT metrics if and only if_ \(h_{\overline{J}}^{+}=2\)_._
Proof.: The result follows from [9]: indeed, if there exists an HKT metric, then there also exists an invariant HKT metric. The equivalence then follows from Theorem 18.
## 5. Hypercomplex almost-abelian Lie groups
In this section, we discuss the case of hypercomplex almost-abelian Lie groups. More precisely, recall that a (solvable) real Lie algebra \(\mathfrak{g}\) is _almost-abelian_ if it has a codimension one abelian ideal. A Lie group \(G\) is almost-abelian if its Lie algebra is. Theorem 21, Theorem 23, and Theorem 24 were obtained by Barberis and Andrada in [1]. For the sake of completeness, we give proofs that we obtained independently of [1]. First, we recall the following fact, first proven in [19].
**Theorem 20**.: _Let \(\mathfrak{g}\) be a \(2m\)-dimensional almost-abelian Lie algebra with codimension one abelian ideal \(\mathfrak{u}\) and let \(I\) be an almost complex structure on \(\mathfrak{g}\). Choose \(X\in\mathfrak{g}\setminus\mathfrak{u}\) such that \(IX\in\mathfrak{u}\) and set \(\mathfrak{u}_{I}\coloneqq\mathfrak{u}\cap I\mathfrak{u}\) and \(f\coloneqq ad_{X}|_{\mathfrak{u}}\in\mathrm{End}(\mathfrak{u})\). Then \(I\) is integrable if and only if there are \(f_{0}\in\mathfrak{gl}(\mathfrak{u}_{I},I)\coloneqq\{h\in\mathrm{End}( \mathfrak{u}_{I})\mid[h,I]=0\}\cong\mathfrak{gl}(m-1,\mathbb{C})\), \(w\in\mathfrak{u}_{I}\cong\mathbb{R}^{2m-2}\) and \(a\in\mathbb{R}\) such that_
\[f=\begin{pmatrix}f_{0}&w\\ \mathbf{0}&a\end{pmatrix}\]
_with respect to the splitting \(\mathfrak{u}=\mathfrak{u}_{I}\oplus\langle IX\rangle\)._
Now we consider the hypercomplex case
**Theorem 21**.: _[_1_]_ _Let \(\mathfrak{g}\) be a \(4n\)-dimensional almost-abelian Lie algebra with codimension one abelian ideal \(\mathfrak{u}\) and let \(I,J\) be two anti-commuting almost complex structures on \(\mathfrak{g}\). Choose \(X\in\mathfrak{g}\setminus\mathfrak{u}\) such that \(IX\in\mathfrak{u}\) and set \(\mathfrak{u}_{I}\coloneqq\mathfrak{u}\cap I\mathfrak{u}\), \(\mathfrak{u}_{I,J}\coloneqq\mathfrak{u}\cap I\mathfrak{u}\cap J\mathfrak{u}\) and \(f\coloneqq ad_{X}|_{\mathfrak{u}}\in\mathrm{End}(\mathfrak{u})\). Then \(I,J\) are integrable if and only if there are \(\tilde{f}\coloneqq f|_{\mathfrak{u}_{I,J}}\in\mathrm{End}(\mathfrak{u}_{I,J})\) satisfying \([\tilde{f},I]=0\) and \([\tilde{f},J]=0\), \(v\in\mathfrak{u}_{I,J}\cong\mathbb{R}^{4n-4}\) and \(a\in\mathbb{R}\) such that_
\[f=\begin{pmatrix}\tilde{f}&-Jv&Kv&v\\ \mathbf{0}&a&0&0\\ \mathbf{0}&0&a&0\\ \mathbf{0}&0&0&a\end{pmatrix}\]
_with respect to the splitting \(\mathfrak{u}=\mathfrak{u}_{I,J}\oplus\langle KX\rangle\oplus\langle JX\rangle\oplus\langle IX\rangle\), where \(K:=IJ\)._
Proof.: Set \(m=2n\). From the previous result \(I\) is integrable if and only if there are
\(f_{0}\in\mathfrak{gl}(\mathfrak{u}_{I},I)\coloneqq\{h\in\mathrm{End}( \mathfrak{u}_{I})\mid[h,I]=0\}\cong\mathfrak{gl}(m-1,\mathbb{C})\), \(w\in\mathfrak{u}_{I}\cong\mathbb{R}^{2m-2}\) and \(a\in\mathbb{R}\) such that
\[f=\begin{pmatrix}f_{0}&w\\ \mathbf{0}&a\end{pmatrix}\]
with respect to the splitting \(\mathfrak{u}=\mathfrak{u}_{I}\oplus\langle IX\rangle\).
We study now the integrability of \(J\), hence we need to check when the Nijenhuis tensor
\[N_{J}(Y,Z)=[Y,Z]+J([JY,Z]+[Y,JZ])-[JY,JZ]\]
vanishes.
First of all, notice that if \(N_{J}(Y,Z)=0\) for some \(Y,Z\in\mathfrak{g}\) then \(N_{J}(JY,Z)=N_{J}(Y,JZ)=N_{J}(JY,JZ)=0\). Moreover, \(N_{J}(Y,JY)=0\) for all \(Y\in\mathfrak{g}\).
Since \(\mathfrak{u}\) is abelian, \(N_{J}(Y,Z)=0\) for \(Y,Z\in\mathfrak{u}_{J}:=\mathfrak{u}\cap J\mathfrak{u}\).
According to the splitting,
\[\mathfrak{g}=\langle X\rangle\oplus\mathfrak{u}_{J}\oplus\langle JX\rangle\]
we only need to check \(N_{J}(X,Y)=0\) for \(Y\in\mathfrak{u}_{J}\). Let \(Y\in\mathfrak{u}_{J}\); then
\[N_{J}(X,Y)=[X,Y]+J([JX,Y]+[X,JY])-[JX,JY]=[X,Y]+J[X,JY]=f(Y)+Jf(JY),\]
hence
\[N_{J}(X,Y)=0\iff Jf(Y)=f(JY).\]
In particular, we have that, since \(IX\in\mathfrak{u}_{J}\),
* \(Jf(Y)=f(JY)\) for \(Y\in\mathfrak{u}_{I,J}\),
* \(Jf(IX)=-f(KX)\).
Similarly, since \(I\) is integrable and \(JX\in\mathfrak{u}_{I}\), by \(If(Y)=f(IY)\) for every \(Y\in\mathfrak{u}_{I}\) we obtain
\[If(JX)=f(KX).\]
Using the previous notations we have
\[f(IX)=aIX+w\]
therefore, there exist \(b,c\in\mathbb{R}\) and \(v\in\mathfrak{u}_{I,J}\) such that
\[f(IX)=aIX+bJX+cKX+v.\]
By the previous considerations we have
* \(f(KX)=-Jf(IX)\),
* \(f(JX)=-If(KX)\),
hence, from the first equation we obtain,
\[f(KX)=-Jf(IX)=bX-cIX+aKX-Jv\]
but \(f\in\operatorname{End}(\mathfrak{u})\) and \(X\notin\mathfrak{u}\), so \(b=0\). Similarly, from the second equation
\[f(JX)=-If(KX)=-cX+aJX+Kv\]
but \(f\in\operatorname{End}(\mathfrak{u})\) and \(X\notin\mathfrak{u}\), so \(c=0\). Therefore, we have
\[f(IX)=aIX+v,\] \[f(JX)=aJX+Kv,\] \[f(KX)=aKX-Jv.\]
Therefore, \(I\) and \(J\) are integrable if and only if there are \(\tilde{f}:=f|_{\mathfrak{u}_{I,J}}\in\operatorname{End}(\mathfrak{u}_{I,J})\) satisfying \([\tilde{f},I]=0\) and \([\tilde{f},J]=0\), \(v\in\mathfrak{u}_{I,J}\cong\mathbb{R}^{4n-4}\) and \(a\in\mathbb{R}\) such that
\[f=\begin{pmatrix}\tilde{f}&-Jv&Kv&v\\ \mathbf{0}&a&0&0\\ \mathbf{0}&0&a&0\\ \mathbf{0}&0&0&a\end{pmatrix}\]
with respect to the splitting \(\mathfrak{u}=\mathfrak{u}_{I,J}\oplus\langle KX\rangle\oplus\langle JX\rangle \oplus\langle IX\rangle\).
**Remark 22**.: Notice that for \(n=1\), we obtain
\[f=\begin{pmatrix}a&0&0\\ 0&a&0\\ 0&0&a\end{pmatrix}\,.\]
In particular, \(\mathfrak{g}\) is unimodular if and only if \(a=0\); in such a case \(\mathfrak{g}=\mathbb{R}^{3}\rtimes_{f}\mathbb{R}\) is isomorphic to \(\mathbb{R}^{4}\) and the associated solvmanifold is the \(4\)-dimensional torus, which according to the classification of \(4\)-dimensional hypercomplex manifolds in [5] is the only \(4\)-dimensional hypercomplex solvmanifold.
We study now the existence of hyperkahler and HKT metrics on hypercomplex almost-abelian Lie algebras. Let \(\mathfrak{g}\) be a \(4n\)-dimensional almost-abelian Lie algebra endowed with a hypercomplex structure \((I,J,K)\) and a hyperhermitian metric \(g\). Then, there exists an orthonormal basis \(\{e_{i}\}\) such that \(\mathfrak{u}_{I,J}\) is spanned by \(e_{2},\dots,e_{2n-3}\), \(Ie_{1}=e_{2n}\), \(Je_{1}=e_{2n-1}\), \(Ke_{1}=-e_{2n-2}\). In view of Theorem 21, there exists a \((4n-4)\times(4n-4)\)-matrix \(\tilde{A}:=f|_{\mathfrak{u}_{I,J}}\) satisfying \([\tilde{A},I]=0\) and \([\tilde{A},J]=0\), \(v\in\mathbb{R}^{4n-4}\) and \(a\in\mathbb{R}\) such that
\[f=\begin{pmatrix}\tilde{A}&-Jv&Kv&v\\ \mathbf{0}&a&0&0\\ \mathbf{0}&0&a&0\\ \mathbf{0}&0&0&a\end{pmatrix}\]
with respect to the splitting \(\mathfrak{u}=\mathfrak{u}_{I,J}\oplus\langle Ke_{1}\rangle\oplus\langle Je_{1}\rangle\oplus\langle Ie_{1}\rangle\).
We study now the existence of an HKT metric in terms of \(\tilde{A},a,v\).
**Theorem 23**.: _[_1_]_ \((I,J,K,g)\) _is HKT if and only if \(\tilde{A}\in\mathfrak{so}(\mathfrak{u}_{I,J})\) and \(v=0\)._
Proof.: We recall that \((I,J,K,g)\) is HKT if and only if
\[d^{c,I}\omega_{I}=d^{c,J}\omega_{J}=d^{c,K}\omega_{K}.\]
In particular, by [3] for any \(L\in\{I,J,K\}\), the torsion form of the Bismut connection on an almost abelian Lie algebra is given by
\[d^{c,L}\omega_{L}(x,y,z)=-g([Lx,Ly],z)-g([Ly,Lz],x)-g([Lz,Lx],y),\]
for every \(x,y,z\in\mathfrak{g}\).
Since \(K=IJ\) it is enough to study when
\[d^{c,I}\omega_{I}=d^{c,J}\omega_{J}.\]
First of all, if \(x,y,z\in\mathfrak{u}_{I,J}\) then, since \(\mathfrak{u}\) is abelian and \(\mathfrak{u}_{I,J}\) is \(I\)- and \(J\)-invariant,
\[d^{c,L}\omega_{L}(x,y,z)=0,\]
for every \(L\in\{I,J\}\). Similarly,
* \(d^{c,L}\omega_{L}(e_{1},e_{2n-2},e_{2n-1})=0\),
* \(d^{c,L}\omega_{L}(e_{1},e_{2n-2},e_{2n})=0\),
* \(d^{c,L}\omega_{L}(e_{1},e_{2n-1},e_{2n})=0\),
* \(d^{c,L}\omega_{L}(e_{2n-2},e_{2n-1},e_{2n})=0\),
* \(d^{c,L}\omega_{L}(e_{1},y,z)=0\),
* \(d^{c,L}\omega_{L}(e_{2n-2},y,z)=0\),
* \(d^{c,L}\omega_{L}(e_{1},e_{2n-2},z)=0\),
for every \(L\in\{I,J,K\}\) and \(y,z\in\mathfrak{u}_{I,J}\).
We will compute the first one; the others are similar. Notice that
\[Ie_{2n-2}=-IKe_{1}=Je_{1}=e_{2n-1},\quad Je_{2n-2}=-JKe_{1}=-Ie_{1}=-e_{2n}.\]
Now,
\[d^{c,I}\omega_{I}(e_{1},e_{2n-2},e_{2n-1}) =-g([Ie_{1},Ie_{2n-2}],e_{2n-1})-g([Ie_{2n-2},Ie_{2n-1}],e_{1})-g ([Ie_{2n-1},Ie_{1}],e_{2n-2}),\] \[=-g([e_{2n},e_{2n-1}],e_{2n-1})+g([e_{2n-1},e_{2n-2}],e_{1})+g([ e_{2n-2},e_{2n}],e_{2n-2}),\] \[=0,\]
since \(\mathfrak{u}\) is abelian. On the other hand,
\[d^{c,J}\omega_{J}(e_{1},e_{2n-2},e_{2n-1}) =-g([Je_{1},Je_{2n-2}],e_{2n-1})-g([Je_{2n-2},Je_{2n-1}],e_{1})-g ([Je_{2n-1},Je_{1}],e_{2n-2}),\] \[=g([e_{2n-1},e_{2n}],e_{2n-1})+g([e_{2n},e_{1}],e_{1})-g([e_{1}, e_{2n-1}],e_{2n-2}),\] \[=g([e_{2n},e_{1}],e_{1})-g([e_{1},e_{2n-1}],e_{2n-2}),\]
since \(\mathfrak{u}\) is abelian. Moreover, \(g([e_{2n},e_{1}],e_{1})=0\) since \(e_{1}\) is orthogonal to \(\mathfrak{u}\), and \(g([e_{1},e_{2n-1}],e_{2n-2})=0\) since
\[[e_{1},e_{2n-1}]=ae_{2n-1}+Kv\]
and the basis is orthogonal. Therefore, \(d^{c,J}\omega_{J}(e_{1},e_{2n-2},e_{2n-1})=0\).
The remaining cases give the following conditions, for every \(y,z\in\mathfrak{u}_{I,J}\),
* \(d^{c,I}\omega_{I}(e_{2n-1},y,z)=d^{c,J}\omega_{J}(e_{2n-1},y,z)\) if and only if \(g(AJy,z)=g(AJz,y)\),
* \(d^{c,I}\omega_{I}(e_{2n},y,z)=d^{c,J}\omega_{J}(e_{2n},y,z)\) if and only if \(g(AJy,z)=g(AIz,y)\),
* \(d^{c,I}\omega_{I}(e_{1},e_{2n-1},z)=d^{c,J}\omega_{J}(e_{1},e_{2n-1},z)\) if and only if \(g(Kv,z)=0\),
* \(d^{c,I}\omega_{I}(e_{1},e_{2n},z)=d^{c,J}\omega_{J}(e_{1},e_{2n},z)\) if and only if \(g(v,z)=0\),
* \(d^{c,I}\omega_{I}(e_{2n-2},e_{2n-1},z)=d^{c,J}\omega_{J}(e_{2n-2},e_{2n-1},z)\) if and only if \(g(v,z)=0\),
* \(d^{c,I}\omega_{I}(e_{2n-2},e_{2n},z)=d^{c,J}\omega_{J}(e_{2n-2},e_{2n},z)\) if and only if \(g(Kv,z)=0\),
* \(d^{c,I}\omega_{I}(e_{2n-1},e_{2n},z)=d^{c,J}\omega_{J}(e_{2n-1},e_{2n},z)\) is always satisfied.
We will show explicitly that \(d^{c,I}\omega_{I}(e_{2n-1},y,z)=d^{c,J}\omega_{J}(e_{2n-1},y,z)\) if and only if \(g(AJy,z)=g(AJz,y)\). First of all,
\[d^{c,I}\omega_{I}(e_{2n-1},y,z) =-g([Ie_{2n-1},Iy],z)-g([Iy,Iz],e_{2n-1})-g([Iz,Ie_{2n-1}],y),\] \[=g([e_{2n-2},Iy],z)+g([Iz,e_{2n-2}],y),\] \[=0\]
since \(\mathfrak{u}\) is abelian. On the other hand,
\[d^{c,J}\omega_{J}(e_{2n-1},y,z) =-g([Je_{2n-1},Jy],z)-g([Jy,Jz],e_{2n-1})-g([Jz,Je_{2n-1}],y),\] \[=g([e_{1},Jy],z)+g([Jz,e_{1}],y),\] \[=g(AJy,z)+g(-AJz,y).\]
Hence, \(d^{c,I}\omega_{I}(e_{2n-1},y,z)=d^{c,J}\omega_{J}(e_{2n-1},y,z)\) if and only if \(g(AJy,z)=g(AJz,y)\).
Similarly, we show that \(d^{c,I}\omega_{I}(e_{1},e_{2n},z)=d^{c,J}\omega_{J}(e_{1},e_{2n},z)\) if and only if \(g(v,z)=0\). First of all,
\[d^{c,I}\omega_{I}(e_{1},e_{2n},z) =-g([Ie_{1},Ie_{2n}],z)-g([Ie_{2n},Iz],e_{1})-g([Iz,Ie_{1}],e_{2n}),\] \[=g([e_{2n},e_{1}],z)+g([e_{1},Iz],e_{1}),\] \[=-g(ae_{2n}+v,z)+g(AIz,e_{1}),\] \[=-g(v,z),\]
and
\[d^{c,J}\omega_{J}(e_{1},e_{2n},z) =-g([Je_{1},Je_{2n}],z)-g([Je_{2n},Jz],e_{1})-g([Jz,Je_{1}],e_{2n}),\] \[=-g([e_{2n-1},e_{2n-2}],z)-g([e_{2n-2},Jz],e_{1})-g([Jz,e_{2n-1}],e_{2n}),\] \[=0,\]
hence, \(d^{c,I}\omega_{I}(e_{1},e_{2n},z)=d^{c,J}\omega_{J}(e_{1},e_{2n},z)\) if and only if \(g(v,z)=0\).
Therefore, since \(\mathfrak{u}_{I,J}\) is \(I\)-, \(J\)-, \(K\)-invariant, by the previous considerations the hyperhermitian structure is HKT if and only if
* \(g(AJy,z)=g(AJz,y)\),
* \(g(AJy,z)=g(AIz,y)\),
* \(g(v,z)=0\),
for every \(y,z\in\mathfrak{u}_{I,J}\). In particular, we obtain \(v=0\) and, since \([A,J]=0\) on \(\mathfrak{u}_{I,J}\), \(g(AJy,z)-g(AJz,y)=0\) for every \(y,z\) if and only if
\[g((AJ-A^{t}J^{t})y,z)=g((A+A^{t})Jy,z)=0,\]
for every \(y,z\) if and only if \(A+A^{t}=0\) on \(\mathfrak{u}_{I,J}\). This concludes the proof.
The existence of a hyperkahler metric can be characterized in terms of \(\tilde{A},a,v\).
**Theorem 24**.: _[_1_]_\((I,J,K,g)\) _is hyperkahler if and only if \(\tilde{A}\in\mathfrak{so}(\mathfrak{u}_{I,J})\), \(a=0\) and \(v=0\)._
Proof.: In view of the previous theorem \(Id\omega_{I}=Jd\omega_{J}\) if and only if \(\tilde{A}\in\mathfrak{so}(\mathfrak{u}_{I,J})\) and \(v=0\).
By [10, Lemma 3.6]\((I,g)\) is Kahler if and only if \(v=0\) and
\[\begin{pmatrix}\tilde{A}&0&0\\ \mathbf{0}&a&0\\ \mathbf{0}&0&a\end{pmatrix}\,.\]
belongs to \(\mathfrak{so}(\mathfrak{u}_{I})\).
Therefore, if \((I,J,K,g)\) is hyperkahler then in particular, since it is HKT, we have \(\tilde{A}\in\mathfrak{so}(\mathfrak{u}_{I,J})\) and \(v=0\), and also \((I,g)\) is Kahler, giving \(a=0\). Vice versa, if \(\tilde{A}\in\mathfrak{so}(\mathfrak{u}_{I,J})\), \(a=0\) and \(v=0\), then \(g\) is HKT, namely \(Id\omega_{I}=Jd\omega_{J}\), and \((I,g)\) is Kahler, that is \(d\omega_{I}=0\), concluding the proof.
As an immediate corollary we obtain
**Corollary 25**.: _Let \(\mathfrak{g}\) be a unimodular \(4n\)-dimensional almost-abelian Lie algebra endowed with a hypercomplex structure \((I,J,K)\) and a hyperhermitian metric \(g\). Then, \((I,J,K,g)\) is HKT if and only if \((I,J,K,g)\) is hyperkahler._
We characterize now the existence of \(SL(n,\mathbb{H})\)-structures on almost-abelian hypercomplex solvmanifolds. In order to do so we adapt [11, Proposition 2.4] to the hypercomplex case.
**Proposition 26**.: _Let \(\mathfrak{g}\) be an almost-abelian Lie algebra of real dimension \(4n\) equipped with a hyperhermitian structure \((I,J,K,g,\Omega)\). Then, \((\mathfrak{g},I,J,K,g,\Omega)\) admits a closed \((2n,0)\)-form if and only if_
\[a+\frac{1}{4}tr\left(\tilde{f}\right)=0,\]
_where \(a,\tilde{f}\) are given by Theorem 21._
Proof.: Let \(\{e_{1},\cdots,e_{4n}\}\) be a \(g\)-orthonormal basis of \(\mathfrak{g}\) such that \(e_{1}\in\mathfrak{g}\setminus\mathfrak{u}\), \(Ie_{1}=e_{4n}\), \(Je_{1}=e_{4n-1}\), \(Ie_{2p}=e_{2p+1}\) for \(1\leq p\leq 2n-1\), and \(\{\varphi_{1}=e^{1}+\sqrt{-1}e^{4n},J\overline{\varphi_{1}}=e^{4n-1}-\sqrt{-1}e^{4n-2},\varphi_{2}=e^{2}+\sqrt{-1}e^{3},J\overline{\varphi_{2}}=e^{4}+\sqrt{-1}e^{5},\varphi_{3}=e^{6}+\sqrt{-1}e^{7},J\overline{\varphi_{3}}=e^{8}+\sqrt{-1}e^{9},\cdots,\varphi_{n}=e^{4n-6}+\sqrt{-1}e^{4n-5},J\overline{\varphi_{n}}=e^{4n-4}+\sqrt{-1}e^{4n-3}\}\) is a basis of \((1,0)\)-forms. The image of \((1,0)\)-forms by \(\overline{\partial}\) lies in the space of \((1,0)\)-forms wedged with \(\overline{\varphi_{1}}\). Thus, the operator \(\overline{\partial}\) acting on \((1,0)\)-forms can be seen as an endomorphism of the space of \((1,0)\)-forms. With respect to the basis \(\{\varphi_{1},J\overline{\varphi_{1}},\cdots,\varphi_{n},J\overline{\varphi_{n}}\}\), the endomorphism is:
\[\frac{1}{2}\left(\begin{array}{ccc}a&0&\tilde{v}\\ 0&a&\tilde{w}\\ \mathbf{0}&\mathbf{0}&\mathbf{A}^{T}\end{array}\right),\]
where \(a=g([e_{1},Ie_{1}],Ie_{1}),\mathbf{A}\) is the complex matrix corresponding to \(ad_{e_{1}}|_{u_{I,J,K}}\) with respect to the basis \(\{e_{2}-\sqrt{-1}e_{3},\cdots,e_{4n-4}-\sqrt{-1}e_{4n-3}\},\tilde{v}^{T}=(v_{ 3}-\sqrt{-1}v_{2},\cdots,v_{4n-3}-\sqrt{-1}v_{4n-4})\) and \(\tilde{w}^{T}=(-v_{5}-\sqrt{-1}v_{4},v_{3}+\sqrt{-1}v_{2},\cdots,-v_{4n-3}- \sqrt{-1}v_{4n-4},v_{4n-5}+\sqrt{-1}v_{4n-6})\) where \(v_{i}=g([e_{1},Ie_{1}],e_{i})\) for \(2\leq i\leq 4n-3\).
Now, we compute
\[\overline{\partial}\left(\varphi_{1}\wedge J\overline{\varphi_{1 }}\wedge\cdots\wedge\varphi_{n}\wedge J\overline{\varphi_{n}}\right) = \left(a+\frac{1}{2}tr\left(\mathbf{A}^{T}\right)\right)\varphi_{1 }\wedge\overline{\varphi_{1}}\wedge J\overline{\varphi_{1}}\wedge\cdots \wedge\varphi_{n}\wedge J\overline{\varphi_{n}},\] \[= \left(a+\frac{1}{2}tr\left(\mathbf{A}\right)\right)\varphi_{1} \wedge\overline{\varphi_{1}}\wedge J\overline{\varphi_{1}}\wedge\cdots \wedge\varphi_{n}\wedge J\overline{\varphi_{n}}.\]
We note here that \(tr\left(\mathbf{A}\right)\) is real. Indeed, we remark that the basis \(\{e_{2}-\sqrt{-1}e_{3},\cdots,e_{4n-4}-\sqrt{-1}e_{4n-3}\}\) can be expressed as \(\{\varphi_{2}^{*}=e_{2}-\sqrt{-1}e_{3},\left(J\overline{\varphi_{2}}\right)^{*}=e_{4}-\sqrt{-1}e_{5},\cdots,\varphi_{n}^{*}=e_{4n-6}-\sqrt{-1}e_{4n-5},(J\overline{\varphi_{n}})^{*}=e_{4n-4}-\sqrt{-1}e_{4n-3}\}\). Hence,
\[tr\left(\mathbf{A}\right) = \frac{1}{2}\sum_{i=2}^{n}g([e_{1},\varphi_{i}^{*}],\overline{ \varphi_{i}^{*}})+g([e_{1},(J\overline{\varphi_{i}})^{*}],\overline{(J \overline{\varphi_{i}})^{*}})\] \[= \frac{1}{2}\sum_{k=2,6,\cdots,4n-6}g([e_{1},e_{k}],e_{k})+g([e_{1 },Ie_{k}],Ie_{k})+g([e_{1},Je_{k}],Je_{k})+g([e_{1},JIe_{k}],JIe_{k})\] \[- \frac{1}{2}\sqrt{-1}\sum_{k=2,6,\cdots,4n-6}g([e_{1},Ie_{k}],e_{k} )-g([e_{1},e_{k}],Ie_{k})-g([e_{1},JIe_{k}],Je_{k})+g([e_{1},Je_{k}],JIe_{k})\] \[= 2\sum_{k=2,6,\cdots,4n-6}g([e_{1},e_{k}],e_{k})\] \[= \frac{1}{2}\sum_{i=2}^{4n-3}g([e_{1},e_{i}],e_{i})=\frac{1}{2}tr \left(\tilde{f}\right),\]
where \(\tilde{f}\) is given by Theorem 21. Here, we use the fact that \([e_{1},Le_{k}]=L[e_{1},e_{k}]\), where \(L=I,J,K\).
We remark that the result is a particular case of [11, Proposition 2.4] because the trace of the complexification of \(\tilde{f}\) given by Theorem 21 is real.
As a consequence, with the notations used above, we obtain the following
**Corollary 27**.: _Let \(M:=\Gamma\backslash G\) be a \(4n\)-dimensional solvmanifold with \(G\) almost-abelian Lie group. Let \(\mathfrak{g}=\text{Lie}(G)\) and let \((I,J,K,g,\Omega)\) be an invariant hyperhermitian structure on \(M\). Then, \((M,I,J,K,g,\Omega)\) is \(SL(n,\mathbb{H})\) if and only if \(a=0\) and \(tr\left(\tilde{f}\right)=0\)._
Proof.: Since \(M:=\Gamma\backslash G\) is a solvmanifold, \(G\) admits the lattice \(\Gamma\) and is therefore unimodular; this is equivalent to
\[tr\left(\tilde{f}\right)=-3a.\]
Moreover, on solvmanifolds an invariant hyperhermitian structure is \(SL(n,\mathbb{H})\) if and only if there exists an invariant closed \((2n,0)\)-form. By Proposition 26 this turns out to be equivalent to
\[a=-\frac{1}{4}tr\left(\tilde{f}\right)=\frac{3}{4}a.\]
Hence we obtain the claim.
### Explicit construction in dimension \(8\)
Let \(\mathfrak{g}\) be an \(8\)-dimensional almost-abelian Lie algebra with codimension one abelian ideal \(\mathfrak{u}\) and let \(I,J\) be two anti-commuting almost-complex structures on \(\mathfrak{g}\). Let \(\{e_{i}\}\) be a basis such that \(f:=ad_{e_{1}}|_{\mathfrak{u}}\) and \(\mathfrak{u}=\langle e_{2},e_{3},e_{4},e_{5},e_{6},e_{7},e_{8}\rangle\), with
\[Ie^{1}=e^{8},\quad Ie^{2}=e^{3},\quad Ie^{4}=e^{5},\quad Ie^{6}=e^{7},\]
\[Je^{1}=e^{7},\quad Je^{2}=e^{4},\quad Je^{3}=-e^{5},\quad Je^{6}=-e^{8},\]
\[Ke^{1}=-e^{6},\quad Ke^{2}=e^{5},\quad Ke^{3}=e^{4},\quad Ke^{7}=-e^{8}.\]
With the above notations \(\mathfrak{u}_{I,J}=\langle e_{2},e_{3},e_{4},e_{5}\rangle\),
\[I=\begin{pmatrix}0&-1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{pmatrix}\,,\]
\[J=\begin{pmatrix}0&0&-1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&-1&0&0\end{pmatrix}\,.\]
Therefore, a real matrix \(A\) commutes with \(I\) and \(J\) if and only if it is of the form
\[A=\begin{pmatrix}a_{11}&-a_{21}&a_{13}&-a_{23}\\ a_{21}&a_{11}&a_{23}&a_{13}\\ -a_{13}&-a_{23}&a_{11}&a_{21}\\ a_{23}&-a_{13}&-a_{21}&a_{11}\end{pmatrix}\,.\]
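This can also be checked numerically. The following minimal Python sketch (a hypothetical helper script of ours, not part of the original text) verifies that a random matrix of the stated form commutes with the matrices \(I\) and \(J\) written above:

```python
# Minimal numerical check (illustrative only): a matrix of the stated
# quaternionic form commutes with the matrices I and J on u_{I,J}.
import numpy as np

I_mat = np.array([[0, -1, 0,  0],
                  [1,  0, 0,  0],
                  [0,  0, 0, -1],
                  [0,  0, 1,  0]])
J_mat = np.array([[0,  0, -1, 0],
                  [0,  0,  0, 1],
                  [1,  0,  0, 0],
                  [0, -1,  0, 0]])

def A_of(a11, a21, a13, a23):
    # General real matrix commuting with both I_mat and J_mat.
    return np.array([[ a11, -a21,  a13, -a23],
                     [ a21,  a11,  a23,  a13],
                     [-a13, -a23,  a11,  a21],
                     [ a23, -a13, -a21,  a11]])

A = A_of(*np.random.default_rng(0).standard_normal(4))
assert np.allclose(A @ I_mat, I_mat @ A) and np.allclose(A @ J_mat, J_mat @ A)
```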
Assume that \(I\) and \(J\) are integrable, hence the structure equations become
\[[e_{1},e_{2}] =a_{11}e_{2}+a_{21}e_{3}-a_{13}e_{4}+a_{23}e_{5}\,,\] \[[e_{1},e_{3}] =-a_{21}e_{2}+a_{11}e_{3}-a_{23}e_{4}-a_{13}e_{5}\,,\] \[[e_{1},e_{4}] =a_{13}e_{2}+a_{23}e_{3}+a_{11}e_{4}-a_{21}e_{5}\,,\] \[[e_{1},e_{5}] =-a_{23}e_{2}+a_{13}e_{3}+a_{21}e_{4}+a_{11}e_{5}\,,\] \[[e_{1},e_{6}] =ae_{6}-v_{4}e_{2}+v_{5}e_{3}+v_{2}e_{4}-v_{3}e_{5}\,,\] \[[e_{1},e_{7}] =ae_{7}-v_{5}e_{2}-v_{4}e_{3}+v_{3}e_{4}+v_{2}e_{5}\,,\] \[[e_{1},e_{8}] =ae_{8}+v_{2}e_{2}+v_{3}e_{3}+v_{4}e_{4}+v_{5}e_{5}\,,\]
for some \(v_{2},v_{3},v_{4},v_{5}\in\mathbb{R}\).
Notice that \(\mathfrak{g}\) is unimodular if and only if \(3a+4a_{11}=0\).
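As a small sanity check of the structure equations (again our own illustrative code, with hypothetical helper names), one can assemble the matrix of \(f=ad_{e_{1}}|_{\mathfrak{u}}\) in the basis \((e_{2},\dots,e_{8})\) and verify that its trace equals \(3a+4a_{11}\):

```python
# Illustrative sketch: build ad_{e1}|_u from the structure equations above
# and check the unimodularity criterion tr(f) = 3a + 4a_{11} = 0.
import numpy as np

def f_matrix(a, a11, a21, a13, a23, v):
    v2, v3, v4, v5 = v
    A = np.array([[ a11, -a21,  a13, -a23],   # action on <e_2,...,e_5>
                  [ a21,  a11,  a23,  a13],
                  [-a13, -a23,  a11,  a21],
                  [ a23, -a13, -a21,  a11]])
    W = np.array([[-v4, -v5, v2],             # components of [e_1, e_6/e_7/e_8]
                  [ v5, -v4, v3],             # along <e_2,...,e_5>
                  [ v2,  v3, v4],
                  [-v3,  v2, v5]])
    F = np.zeros((7, 7))
    F[:4, :4], F[:4, 4:], F[4:, 4:] = A, W, a * np.eye(3)
    return F

F = f_matrix(a=2.0, a11=-1.5, a21=0.3, a13=0.7, a23=-0.2, v=(1.0, 2.0, 3.0, 4.0))
assert np.isclose(np.trace(F), 3 * 2.0 + 4 * (-1.5))
```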
A basis of \((1,0)\)-forms is given by:
\[\varphi^{1}=e^{1}+\sqrt{-1}e^{8},\quad\varphi^{2}=e^{2}+\sqrt{-1}e^{3},\quad \varphi^{3}=e^{4}+\sqrt{-1}e^{5},\quad\varphi^{4}=e^{7}-\sqrt{-1}e^{6}.\]
The structure equations in terms of the differential \(\partial\) are:
\[\partial\varphi_{1}=0,\quad\partial\varphi_{4}=-\frac{a}{2}\varphi_{1}\wedge \varphi_{4},\]
\[\partial\varphi_{2}=\frac{-a_{11}-\sqrt{-1}a_{21}}{2}\varphi_{1}\wedge\varphi _{2}+\frac{-a_{13}-\sqrt{-1}a_{23}}{2}\varphi_{1}\wedge\varphi_{3}+\frac{v_{5 }+\sqrt{-1}v_{4}}{2}\varphi_{1}\wedge\varphi_{4},\]
\[\partial\varphi_{3}=\frac{a_{13}-\sqrt{-1}a_{23}}{2}\varphi_{1}\wedge\varphi _{2}+\frac{-a_{11}+\sqrt{-1}a_{21}}{2}\varphi_{1}\wedge\varphi_{3}+\frac{-v_{3 }-\sqrt{-1}v_{2}}{2}\varphi_{1}\wedge\varphi_{4}.\]
The \((2,0)\)-form corresponding to the hyperhermitian metric is \(\Omega=\varphi_{1}\wedge J\overline{\varphi_{1}}+\varphi_{2}\wedge J \overline{\varphi_{2}}=\varphi_{1}\wedge\varphi_{4}+\varphi_{2}\wedge\varphi _{3}.\) The HKT condition \(\partial\Omega=0\) is equivalent to \(a_{11}=v_{2}=v_{3}=v_{4}=v_{5}=0\).
On the other hand, the structure equations in terms of the differential \(\overline{\partial}\) are:
\[\overline{\partial}\varphi_{1}=\frac{a}{2}\varphi_{1}\wedge\overline{\varphi }_{1},\quad\overline{\partial}\varphi_{4}=-\frac{a}{2}\overline{\varphi}_{1 }\wedge\varphi_{4}.\]
\[\overline{\partial}\varphi_{2}=\frac{-a_{11}-\sqrt{-1}a_{21}}{2}\overline{ \varphi}_{1}\wedge\varphi_{2}+\frac{-a_{13}-\sqrt{-1}a_{23}}{2}\overline{ \varphi}_{1}\wedge\varphi_{3}+\frac{v_{5}+\sqrt{-1}v_{4}}{2}\overline{\varphi }_{1}\wedge\varphi_{4}+\frac{v_{3}-\sqrt{-1}v_{2}}{2}\varphi_{1}\wedge \overline{\varphi}_{1},\]
\[\overline{\partial}\varphi_{3}=\frac{a_{13}-\sqrt{-1}a_{23}}{2}\overline{ \varphi}_{1}\wedge\varphi_{2}+\frac{-a_{11}+\sqrt{-1}a_{21}}{2}\overline{ \varphi}_{1}\wedge\varphi_{3}+\frac{-v_{3}-\sqrt{-1}v_{2}}{2}\overline{\varphi }_{1}\wedge\varphi_{4}+\frac{v_{5}-\sqrt{-1}v_{4}}{2}\varphi_{1}\wedge \overline{\varphi}_{1}.\]
The \(SL(2,\mathbb{H})\) condition \(\overline{\partial}\,(\varphi_{1}\wedge\varphi_{4}\wedge\varphi_{2}\wedge \varphi_{3})=0\) is equivalent to \(a=-a_{11}\).
We consider the \((2,0)\)-forms \(\Phi_{1},\Phi_{2}\in\Omega^{\overline{J},-}(M)\):
\[\Phi_{1}=\varphi_{1}\wedge\varphi_{2}-\varphi_{4}\wedge\varphi_{3},\quad\Phi_{ 2}=\varphi_{1}\wedge\varphi_{3}-\varphi_{2}\wedge\varphi_{4}.\]
Then \(\partial\Phi_{1}=0\) is equivalent to \(a_{11}=a,a_{21}=a_{13}=a_{23}=0\) and \(\partial\Phi_{2}=0\) is equivalent to \(a_{11}=-a,a_{21}=a_{13}=a_{23}=0\). Remark that \(\Phi_{1},\Phi_{2}\) cannot be \(\partial\)-exact.
From the above discussion, we conclude
**Corollary 28**.: _Let \(\mathfrak{g}\) be an \(8\)-dimensional non-abelian almost-abelian unimodular Lie algebra equipped with a left-invariant \(SL(2,\mathbb{H})\)-structure. Then the dimension of the space of \(\partial\)-closed, non-\(\partial\)-exact, left-invariant imaginary \((2,0)\)-forms is non-zero if and only if \(\tilde{f}=0\) and \(a=0\), where \(\tilde{f}\) and \(a\) are given by Theorem 21. In particular, \(\mathfrak{g}\) is nilpotent and does not admit any HKT metric._
**Remark 29**.: The nilpotent Lie algebra in Corollary 28 corresponds to the Lie algebra \(\mathfrak{g}_{3}\) in the notation of [1].
|
2305.00978 | Performance of chaos diagnostics based on Lagrangian descriptors.
Application to the 4D standard map | We investigate the ability of simple diagnostics based on Lagrangian
descriptor (LD) computations of initially nearby orbits to detect chaos in
conservative dynamical systems with phase space dimensionality higher than two.
In particular, we consider the recently introduced methods of the difference
($D_L^n$) and the ratio ($R_L^n$) of the LDs of neighboring orbits, as well as
a quantity ($S_L^n$) related to the finite-difference second spatial derivative
of the LDs, and use them to determine the chaotic or regular nature of
ensembles of orbits of a prototypical area-preserving map model, the
4-dimensional (4D) symplectic standard map. Using the distributions of the
indices' values we determine appropriate thresholds to discriminate between
regular and chaotic orbits, and compare the obtained characterization against
that achieved by the Smaller Alignment Index (SALI) method of chaos detection,
by recording the percentage agreement $P_A$ between the two classifications. We
study the influence of various factors on the performance of these indices, and
show that the increase of the final number of orbit iterations T and the order
n of the indices (i.e. the dimensionality of the space where the considered
nearby orbits lie), as well as the decrease of the distance $\sigma$ of
neighboring orbits, increase the $P_A$ values along with the required
computational effort. Balancing between these two factors we find appropriate
T, n and $\sigma$ values, which allow the efficient use of the $D_L^n$, $R_L^n$
and $S_L^n$ indices as short time and computationally cheap chaos diagnostics
achieving $P_A \gtrsim 90 \%$, with $D_L^n$ and $S_L^n$ having larger $P_A$
values than $R_L^n$. Our results show that the three LDs-based indices perform
better for systems with large percentages of chaotic orbits. | Sebastian Zimper, Arnold Ngapasare, Malcolm Hillebrand, Matthaios Katsanikas, Stephen R. Wiggins, Charalampos Skokos | 2023-04-29T07:12:04Z | http://arxiv.org/abs/2305.00978v1 | # Performance of chaos diagnostics based on Lagrangian descriptors.
###### Abstract
We investigate the ability of simple diagnostics based on Lagrangian descriptor (LD) computations of initially nearby orbits to detect chaos in conservative dynamical systems with phase space dimensionality higher than two. In particular, we consider the recently introduced methods of the difference (\(D_{L}^{n}\)) and the ratio (\(R_{L}^{n}\)) of the LDs of neighboring orbits, as well as a quantity (\(S_{L}^{n}\)) related to the finite-difference second spatial derivative of the LDs, and use them to determine the chaotic or regular nature of ensembles of orbits of a prototypical area-preserving map model, the 4-dimensional (4D) symplectic standard map. Using the distributions of the indices' values we determine appropriate thresholds to discriminate between regular and chaotic orbits, and compare the obtained characterization against that achieved by the Smaller Alignment Index (SALI) method of chaos detection, by recording the percentage agreement \(P_{A}\) between the two classifications. We study the influence of various factors on the performance of these indices, and show that the increase of the final number of orbit iterations \(T\) and the order \(n\) of the indices (i.e. the dimensionality of the space where the considered nearby orbits lie), as well as the decrease of the distance \(\sigma\) of neighboring orbits, increase the \(P_{A}\) values along with the required computational effort. Balancing between these two factors we find appropriate \(T\), \(n\) and \(\sigma\) values, which allow the efficient use of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices as short time and computationally cheap chaos diagnostics achieving \(P_{A}\gtrsim 90\%\), with \(D_{L}^{n}\) and \(S_{L}^{n}\) having larger \(P_{A}\) values than \(R_{L}^{n}\). Our results show that the three LDs-based indices perform better for systems with large percentages of chaotic orbits. In addition, our findings clearly indicate the capability of LDs to efficiently identify chaos in systems whose phase space is difficult to visualize (due to its high dimensionality), without knowing the variational equations (tangent map) of continuous (discrete) time systems needed by traditional chaos indicators.
## I Introduction
Determining the nature of individual orbits as either chaotic or regular, as well as the dynamics of ensembles of orbits, is fundamental for understanding the behavior of continuous and discrete time dynamical systems. To this end, a variety of different techniques and indicators, to either visualize the system's phase space or to detect chaotic orbits, have been developed over the course of time.
The asymptotic measures introduced by Lyapunov to characterize the growth or shrinking of small phase space perturbations to orbits (often referred to as deviation vectors) have been widely accepted as a standard tool for this purpose. These quantities are commonly named Lyapunov exponents (LEs). Following the formulation of the multiplicative ergodic theorem by Oseledec, a theoretical basis for the numerical computation of LEs was presented. The estimation of the maximum LE (mLE) through the numerical computation of the finite-time mLE (ftmLE) is nowadays one of the most commonly used chaos detection methods, as the positivity of the mLE of bounded orbits, which do not escape to infinity, indicates chaotic behavior (see for example [5] and references therein).
The slow convergence of the ftmLE to its limiting value has necessitated the search for alternative, more efficient indicators. Among these indicators are the so-called fast Lyapunov Indicator (FLI) and its variants, the Mean Exponential Growth of Nearby Orbits (MEGNO), the Smaller Alignment Index (SALI) and its extension, the Generalized Alignment Index (GALI). These indicators have certain advantages over the estimation of the mLE as, in general, they manage to characterize orbits as regular or chaotic faster and with less computational effort, although they also rely on the time evolution of at least one deviation vector.
One of the most successful methods among this set of
new indicators is the SALI, which has been efficiently used to study the chaoticity of several different systems, such as accelerator models [11; 12], predator-prey population maps [13], Bose-Einstein condensates [14], galactic potentials [15; 16], as well as nuclear physics models [17]. The interested reader is referred to the review [18] for more details on this method and its applications.
A recently developed visualization technique for the identification of phase space structures in continuous time dynamical systems and discrete time iterative maps is the method of Lagrangian descriptors (LDs) [19; 20; 21]. The computation of LDs is based on the accumulation of some positive scalar value along the path of individual orbits to produce a scalar field on a grid of initial conditions. From the gradient of this field the manifolds in both regular and chaotic regions can be identified as singular features, following the theoretical discussions in [22] and [23] for the discrete and continuous time settings respectively. Initially applied to the study of ocean currents [19; 20], this method has since been utilized to study the dynamics of systems from a variety of different fields such as chemical transition state theory [24; 25], molecular systems [26; 27], cardiovascular flows [28], and stochastic dynamical systems [29].
In [30] the characterization of regular motion by LDs was considered, while in a recent work [31] an indicator based on the estimation of the second derivative of the LDs field was used to discriminate between regular and chaotic motion in discrete and continuous systems. These works paved the way for LDs to be used for not only a visual inspection of the phase space, but also for determining the chaotic nature of orbits. This was done in [32], where it was shown that indicators derived from LDs of nearby orbits can be used to characterize the chaoticity of ensembles of orbits with \(\gtrsim 90\%\) accuracy (in comparison with the characterizations obtained by the SALI method) for both the Henon-Heiles [33] system and the two-dimensional (2D) standard map [34]. An advantage of LDs-based chaos diagnostics over the more traditional above mentioned chaos indicators is that the evolution of deviation vectors is not required, which reduces the complexity of the performed computations and simultaneously diminishes the required CPU time.
In [32] the introduced methods were applied to low-dimensional systems having 2D phase spaces, which are easily depicted. Here we extend that study by investigating in detail the performance of these diagnostics in a higher-dimensional setting, where the phase space's visualization becomes challenging, although methods like the 'color and rotation' [35; 36] and the 'phase space slices' [37], as well as approaches based on LDs [38] have been used for that purpose. In particular, we demonstrate how these techniques can be used to identify orbits as regular or chaotic within a certain accuracy, using as a test case a 4D map, a higher-dimensional conservative dynamical system of discrete time.
The rest of the paper is organized as follows. In Sect. II we describe the numerical computation of the various chaos diagnostics used in this investigation. In Sect. III, we implement our techniques for studying the chaotic behavior of the 4D standard map for different setups of the system. Finally in Sect. IV, we discuss our findings and summarize our conclusions.
## II Numerical techniques
In order to study the performance and efficiency of the three quantities based on the LDs values of neighboring orbits, which were presented in [32], for systems of higher dimensionality we consider here, as a test case of an area preserving map, the 4D standard map [39] obtained by coupling two (identical in our implementation) 2D standard maps
\[x_{1}^{\prime}=x_{1}+x_{2}^{\prime}, \tag{1}\] \[x_{2}^{\prime}=x_{2}+\frac{K}{2\pi}\sin(2\pi x_{1})-\frac{B}{2 \pi}\sin\big{[}2\pi(x_{3}-x_{1})\big{]},\] \[x_{3}^{\prime}=x_{3}+x_{4}^{\prime},\] \[x_{4}^{\prime}=x_{4}+\frac{K}{2\pi}\sin(2\pi x_{3})-\frac{B}{2 \pi}\sin\big{[}2\pi(x_{1}-x_{3})\big{]},\]
with \(K\) and \(B\) being real parameters, and \(\mathbf{z}=(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime},x_{4}^{\prime})\) denoting the state vector of the map's coordinates after a single iteration. The parameter \(K\) defines the nonlinearity strength of each one of the 2D coupled maps, while \(B\) determines the strength of coupling between the two 2D maps. All coordinates are given (mod 1), so that \(0\leq x_{i}<1\), \(i=1,2,3,4\). We note that the number \(T\) of map's iterations will also be referred to as the (discrete) time of the system.
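For illustration, a minimal Python sketch of one forward iteration of map (1) could read as follows (the names are ours and the snippet is only meant to make the update scheme explicit; note that \(x_{2}^{\prime}\) and \(x_{4}^{\prime}\) must be computed first, since they enter the updates of \(x_{1}\) and \(x_{3}\)):

```python
# Illustrative forward iteration of the 4D standard map (1); mod-1 coordinates.
import numpy as np

K, B = 1.5, 0.05  # nonlinearity and coupling strengths used in the text

def step(z, K=K, B=B):
    x1, x2, x3, x4 = z
    x2n = x2 + K / (2 * np.pi) * np.sin(2 * np.pi * x1) \
             - B / (2 * np.pi) * np.sin(2 * np.pi * (x3 - x1))
    x4n = x4 + K / (2 * np.pi) * np.sin(2 * np.pi * x3) \
             - B / (2 * np.pi) * np.sin(2 * np.pi * (x1 - x3))
    return np.mod([x1 + x2n, x2n, x3 + x4n, x4n], 1.0)
```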
Small perturbations of tested orbits are key in determining the regular or chaotic nature of these orbits. Such a perturbation defines the deviation vector \(\mathbf{w}=(\delta x_{1},\delta x_{2},\delta x_{3},\delta x_{4})\), whose time evolution is governed by the system's tangent map given by
\[\delta x_{1}^{\prime}= \,\delta x_{1}+\delta x_{2}^{\prime}, \tag{2}\] \[\delta x_{2}^{\prime}= \,\Big{\{}K\cos(2\pi x_{1})+B\cos\big{[}2\pi(x_{3}-x_{1})\big{]} \Big{\}}\delta x_{1}\] \[+\delta x_{2}-B\cos\big{[}2\pi(x_{3}-x_{1})\big{]}\delta x_{3},\] \[\delta x_{3}^{\prime}= \,\delta x_{3}+\delta x_{4}^{\prime},\] \[\delta x_{4}^{\prime}= \,-B\cos\big{[}2\pi(x_{1}-x_{3})\big{]}\delta x_{1}\] \[+\Big{\{}K\cos(2\pi x_{3})+B\cos\big{[}2\pi(x_{1}-x_{3})\big{]} \Big{\}}\delta x_{3}+\delta x_{4}.\]
The mLE \(\lambda_{1}\) of an orbit is estimated through the computation of the ftmLE
\[\Lambda(T)=\frac{1}{T}\ln\left(\frac{\|\mathbf{w}(T)\|}{\|\mathbf{w}(0)\|} \right), \tag{3}\]
as
\[\lambda_{1}=\lim_{T\to\infty}\Lambda(T), \tag{4}\]
with \(\|\cdot\|\) denoting the usual Euclidean norm of a vector. For a chaotic orbit, \(\Lambda\) eventually saturates to a positive
value, whereas in the case of regular orbits \(\Lambda\) decreases following the power law [5]
\[\Lambda(T)\propto\ln(T)/T. \tag{5}\]
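A possible implementation of the ftmLE (3), reusing the function `step` (and the constants `K`, `B`) of the sketch above, evolves one deviation vector with the tangent map (2) and renormalizes it at every iteration to avoid numerical overflow, accumulating the logarithms of the stretching factors (again an illustrative sketch, not the production code used here):

```python
# Illustrative ftmLE (3); reuses `step`, `K` and `B` defined above.
import numpy as np

def tangent_step(z, w, K=K, B=B):
    # One iteration of the tangent map (2) at the orbit point z.
    x1, _, x3, _ = z
    c = B * np.cos(2 * np.pi * (x3 - x1))   # = B cos[2pi(x1 - x3)]
    dx2 = (K * np.cos(2 * np.pi * x1) + c) * w[0] + w[1] - c * w[2]
    dx4 = -c * w[0] + (K * np.cos(2 * np.pi * x3) + c) * w[2] + w[3]
    return np.array([w[0] + dx2, dx2, w[2] + dx4, dx4])

def ftmLE(z0, w0, T):
    z, w = np.asarray(z0, dtype=float), np.asarray(w0, dtype=float)
    w /= np.linalg.norm(w)
    log_growth = 0.0
    for _ in range(T):
        w = tangent_step(z, w)              # evolve the deviation vector first,
        z = step(z)                         # then advance the orbit
        norm = np.linalg.norm(w)
        log_growth += np.log(norm)          # accumulate stretching factors
        w /= norm                           # renormalize to avoid overflow
    return log_growth / T
```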
In contrast to the estimation of the mLE, the computation of the SALI depends on the evolution of two, initially linearly independent, deviation vectors \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\). Then SALI(\(T\)), which quantifies the alignment of these two deviation vectors, is computed as
\[\text{SALI}(T)=\min\big{\{}\|\hat{\mathbf{w}}_{1}(T)+\hat{\mathbf{w}}_{2}(T)\|,\|\hat{\mathbf{w}}_{1}(T)-\hat{\mathbf{w}}_{2}(T)\|\big{\}}, \tag{6}\]
with
\[\hat{\mathbf{w}}_{k}(T)=\frac{\mathbf{w}_{k}(T)}{\|\mathbf{w}_{k}(T)\|}, \qquad k=1,2, \tag{7}\]
being a vector of unit norm. For chaotic orbits, the two deviation vectors will eventually be aligned to the direction related to the mLE and consequently the SALI will follow an exponential decay to zero, with a rate depending on the values of the two largest LEs \(\lambda_{1}\geq\lambda_{2}\). On the other hand, for regular orbits in the phase space of a 4D symplectic map the SALI remains positive and practically constant. Thus, in summary, the behavior of the SALI for orbits of the 4D standard map (1) is
\[\text{SALI}(T)\propto\begin{cases}\text{constant}&\text{for regular orbits}\\ e^{-(\lambda_{1}-\lambda_{2})T}&\text{for chaotic orbits}\end{cases}. \tag{8}\]
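Continuing the sketches above, the SALI (6) can be computed along the lines of the following snippet, where the two deviation vectors are normalized after every iteration of the tangent map (2), exactly as required by definitions (6)-(7):

```python
# Illustrative SALI (6) computation, reusing `step` and `tangent_step` above.
import numpy as np

def sali(z0, T, seed=1):
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    w1, w2 = rng.standard_normal(4), rng.standard_normal(4)
    w1 /= np.linalg.norm(w1)
    w2 /= np.linalg.norm(w2)
    for _ in range(T):
        w1, w2 = tangent_step(z, w1), tangent_step(z, w2)
        z = step(z)
        w1 /= np.linalg.norm(w1)            # definitions (6)-(7) use unit vectors
        w2 /= np.linalg.norm(w2)
    return min(np.linalg.norm(w1 + w2), np.linalg.norm(w1 - w2))
```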
In our study, following [32], we exploit the ability of LDs to capture the basic dynamical features of a system in order to identify regular and chaotic motion. Let us first recall that the "\(p\)-norm" definition of the LD for a discrete map is given by
\[LD=\sum_{j=-T}^{T-1}\sum_{i=1}^{N}\left|z_{j+1}^{(i)}-z_{j}^{(i)}\right|^{p}, \qquad 0<p\leq 1, \tag{9}\]
where \(i\) indexes the \(N\) elements of the state vector \(\mathbf{z}\) [for map (1) \(N=4\)], and \(j\) counts the map's iterations. Since the \(LD\) definition (9) with \(p=0.5\) has been successfully implemented in various studies (e.g., [40; 41]) and has shown a remarkable ability in identifying phase space structures, we will also set \(p=0.5\) for our investigations. We emphasize that, although in the formal definition (9) of the LD the map is integrated both in the past (\(j\) starts at \(j=-T\)) and in the future (\(j\) goes up to \(j=T-1\)), for the purposes of our study the computation of the LDs through only forward (or backward) iterations is sufficient. More specifically, the presented results are solely obtained through forward iterations.
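A forward-time evaluation of definition (9) with \(p=0.5\) can be sketched as follows (illustrative code with names of our choosing; note that, since the coordinates are taken mod 1, the differences of successive iterates include the jumps across the boundaries of the unit hypercube):

```python
# Illustrative forward-time LD (9) with p = 0.5, reusing `step` above.
import numpy as np

def lag_descriptor(z0, T, p=0.5):
    z = np.asarray(z0, dtype=float)
    ld = 0.0
    for _ in range(T):
        zn = step(z)
        # definition (9) applied verbatim to the stored mod-1 iterates
        ld += np.sum(np.abs(zn - z) ** p)
        z = zn
    return ld
```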
Let us now discuss how we can identify the regular or chaotic nature of an orbit with initial conditions (ICs) at point \(\mathbf{z}\) in the map's phase space, based on the values of the LDs of this orbit and of initially neighboring ones. The ICs of these neighboring orbits can be seen as grid points of a mesh in several spatial dimensions. In the case of the 4D map (1) we can consider neighboring orbits to an IC in \(n\)D spaces with \(1\leq n\leq 4\). For \(n=1\) we have two neighboring points of \(\mathbf{z}\) on a line (1D space), while for \(n=2\) the four nearest neighbors are located on a grid in a 2D subspace of the 4D phase space. Thus, considering ICs of orbits on a finite grid of an \(n(\geq 1)\)D subspace of the \(N(\geq n)\)D phase space of a general \(N\)D symplectic map, any non-boundary grid point \(\mathbf{z}\) in this subspace has \(2n\) nearest neighbors
\[\mathbf{y}_{i}^{\pm}=\mathbf{z}\pm\sigma^{(i)}\mathbf{e}^{(i)},\ i=1,2,\ldots n, \tag{10}\]
where \(\mathbf{e}^{(i)}\) is the \(i\)th unit vector of the usual basis in \(\mathbb{R}^{n}\), and \(\sigma^{(i)}\) is the distance between successive grid points in this direction.
If we respectively denote by \(LD(\mathbf{z})\) and \(LD\left(\mathbf{y}_{i}^{\pm}\right)\) the LDs of orbits with ICs \(\mathbf{z}\) and \(\mathbf{y}_{i}^{\pm}\) we can define the three diagnostics we use in our study, following [32]. More specifically, the difference \(D_{L}^{n}\) of LDs of neighboring orbits at \(\mathbf{z}\) in an \(n\)D subspace is defined as
\[D_{L}^{n}(\mathbf{z})=\frac{1}{2n}\sum_{i=1}^{n}\frac{\left|LD(\mathbf{z})-LD( \mathbf{y}_{i}^{+})\right|+\left|LD(\mathbf{z})-LD(\mathbf{y}_{i}^{-})\right| }{LD(\mathbf{z})}, \tag{11}\]
while the ratio \(R_{L}^{n}\) is given by
\[R_{L}^{n}(\mathbf{z})=\left|1-\frac{1}{2n}\sum_{i=1}^{n}\frac{LD(\mathbf{y}_{i} ^{+})+LD(\mathbf{y}_{i}^{-})}{LD(\mathbf{z})}\right|, \tag{12}\]
with \(n\) also referred to as the order of the index. The last indicator we use is related to the second spatial derivative of the LD quantity. It was introduced in [31], briefly studied in [32], and applied to celestial mechanics problems in [42], where it was denoted by the rather cumbersome notation \(||\Delta LD||\). Here we adopt the notation \(S_{L}^{n}\) to follow similar conventions to the notations of Eqs. (11) and (12), as well as to clearly indicate the dimensionality of the grid on which this diagnostic is computed, and define the order \(n\) index as
\[S_{L}^{n}(\mathbf{z})=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{LD(\mathbf{y}_{i}^{ +})-2LD(\mathbf{z})+LD(\mathbf{y}_{i}^{-})}{(\sigma^{(i)})^{2}}\right|. \tag{13}\]
We note that a difference between the definition of \(S_{L}^{n}\) and of the \(||\Delta LD||\) index used in [31; 32; 42] is that in (13) the factor \(1/n\) is introduced in order to compute a quantity 'per dimension' of the space where the used ICs are, similar to what is done in (11) and (12).
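Putting definitions (11)-(13) together, the three indices of order \(n\) at a grid point can be obtained from \(2n+1\) LD computations, as in the following sketch (which reuses the function `lag_descriptor` above; the default values \(n=2\) and \(\sigma=10^{-3}\) match the setup used later in Sect. III):

```python
# Illustrative computation of D_L^n (11), R_L^n (12) and S_L^n (13) at z0,
# from the LDs of its 2n nearest neighbors at distance sigma.
import numpy as np

def lds_indices(z0, T, n=2, sigma=1e-3):
    z0 = np.asarray(z0, dtype=float)
    ld0 = lag_descriptor(z0, T)
    D = R = S = 0.0
    for i in range(n):
        e = np.zeros(4)
        e[i] = sigma
        ldp, ldm = lag_descriptor(z0 + e, T), lag_descriptor(z0 - e, T)
        D += (abs(ld0 - ldp) + abs(ld0 - ldm)) / ld0
        R += (ldp + ldm) / ld0
        S += abs(ldp - 2.0 * ld0 + ldm) / sigma**2
    return D / (2 * n), abs(1.0 - R / (2 * n)), S / n
```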
In order to demonstrate the basic behaviors of the two chaos indicators [\(\Lambda\) (3), SALI (6)] and the three LDs-based diagnostics [\(D_{L}^{n}\) (11), \(R_{L}^{n}\) (12), \(S_{L}^{n}\) (13)] we use in our study, we compute them for two representative orbits, one regular with ICs \(x_{1}=0.6\), \(x_{2}=0.05\), \(x_{3}=0.54\), \(x_{4}=0.01\) and one chaotic with ICs \(x_{1}=0.2\), \(x_{2}=0.2\), \(x_{3}=0.54\), \(x_{4}=0.01\), for the 4D map (1) with \(K=1.5\) and \(B=0.05\). We note that for the three diagnostics based on LDs computations we set \(n=2\), consider neighboring orbits on a square grid in the \((x_{1},x_{2})\) plane and compute
the order 2 version of the indices. The projection of the \(T=2500\) consequents of the regular (blue points) and the chaotic orbit (orange points) on the plane \((x_{1},x_{2})\) are shown in Fig. 1(a). The points of the regular orbit lie on a 4D stability island and create a regular, torus-like structure. On the other hand, the consequents of the chaotic orbit correspond to the scattered point in Fig. 1(a).
In Figs. 1(b)-(f) we respectively plot the time evolution of \(\Lambda\), SALI, \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) for the considered regular (blue curves) and chaotic orbit (orange curves). From the results of Fig. 1(b) we see that the ftmLE \(\Lambda\) of the regular orbit eventually decreases to zero proportionally to \(\ln(T)/T\) (dashed line), while for the chaotic orbit it saturates to a positive value as expected. On the other hand, the SALI [Fig. 1(c)] approaches a positive value for the regular orbit while it tends exponentially fast to zero for the chaotic one. We note that all computations throughout this study are performed using double-precision accuracy, thus we stop the time evolution of the SALI when its values reach \(10^{-16}\), i.e. the machine precision. From Fig. 1(c) we see that the SALI of the chaotic orbit requires only about \(T=100\) forwards iterations to reach the \(10^{-16}\) threshold, characterizing the orbit beyond any doubt as chaotic as its SALI is practically zero.
From the results presented in Figs. 1(d)-(f) we see that the values of \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) of the regular orbit remain well above the ones obtained for the chaotic one (apart from some short initial time interval \(T\lesssim 200\) for \(R_{L}^{2}\)). These clear differences between the values of the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) diagnostics for regular and chaotic orbits are observed generally and are not related to the particular example orbits shown here. Thus, as was presented in [32], and will be discussed in detail in Sect. III, we can define appropriate threshold values for each one of these three diagnostics to efficiently discriminate between regular and chaotic orbits. Nevertheless, it is important to note that this distinction needs a minimum (rather small) number of iterations in order to be clearly established, as we see in Figs. 1(e) and (f).
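As a purely illustrative usage example (our own driver, not the code used to produce the figures), the sketches of this section can be combined to evaluate the SALI and the three LDs-based indices for the two orbits of Fig. 1:

```python
# Hypothetical driver combining the sketches above.
z_regular = (0.6, 0.05, 0.54, 0.01)
z_chaotic = (0.2, 0.2, 0.54, 0.01)
for z0 in (z_regular, z_chaotic):
    print(z0, "SALI:", sali(z0, T=1000), "(D, R, S):", lds_indices(z0, T=1000))
```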
## III Numerical results
In this section we investigate in detail the ability of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices to distinguish between regular and chaotic orbits in dynamical systems whose phase space dimension is higher than two. As a representative case of such a system we consider the prototypical 4D standard map (1). In our study we investigate the influence of various factors on the ability of the indicators to accurately characterize the chaoticity of orbits, like the number of the performed map iterations, the extent of the system's chaoticity (i.e. the fraction of the chaotic orbits), and the order of the indicators.
### Dynamics on a 2D subspace
Extending the results presented in [32] for dynamical systems with 2D phase spaces (in particular the Henon-Heiles Hamiltonian [33] and the 2D standard map [34]) to the 4D map (1), we first investigate the performance of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices in a 2D subspace of the map for which we can easily obtain the direct visualization of regular and chaotic regions. In particular, we consider a grid of \(1000\times 1000\) equally spaced ICs in the subspace \((x_{1},x_{2})\) by setting \(x_{3}=0.54\) and \(x_{4}=0.01\), for \(K=1.5\) and \(B=0.05\). This arrangement sets the distance between immediate neighboring ICs to \(\sigma=10^{-3}\) in both directions on the \((x_{1},x_{2})\) plane. The LDs of all these orbits are computed for \(T=10^{3}\) forward iterations, and from the obtained results we evaluate indicators \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) for each IC. Since the considered ICs lie on a 2D plane, we compute the order \(n=2\) versions of the three indicators. In Figs. 2(a)-(c), we present the resulting color plots of these computations, where ICs on the \((x_{1},x_{2})\) plane are colored according to their \(\log_{10}D_{L}^{2}\), \(\log_{10}R_{L}^{2}\) and \(\log_{10}S_{L}^{2}\) values respectively. These plots display similar characteristics, providing a clear qualitative description of the structure of the phase space, with regular regions (islands of stability) corresponding to areas of lower values and chaotic regions (chaotic sea) having higher values, in accordance to what was found in [32].
Although the color plots in Figs. 2(a)-(c) correctly capture the overall dynamical features of the system, our main goal is to use the three indices for obtaining a quantitative identification of orbits as regular or chaotic. In order to obtain an estimation of the chaos extent in the studied 2D subspace of the map a threshold value needs to be established for each index, so that orbits can be characterized as chaotic or regular if they respectively result in index values above or below these thresholds. In Figs. 2(d)-(f) we show the normalized distributions of the logarithms of these three quantities, all of which clearly show two peaks separated by a trough, which demarcates ICs leading to regular (low values) and chaotic motion (high values). Assuming that the minimum between the two peaks provides a good threshold value for discriminating between regular and chaotic orbits, the following values are obtained: \(\log_{10}D_{L}^{2}=-2.14,\log_{10}R_{L}^{2}=-2.85\) and \(\log_{10}S_{L}^{2}=6.70\), respectively denoted by orange vertical, dashed lines in Figs. 2(d)-(f).
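The selection of the trough between the two peaks can easily be automated; a possible (deliberately simple) sketch, which smooths the histogram of the logarithmic index values and returns the location of the minimum between its two highest local maxima, is the following (this helper is our own illustration, assumes a clearly bimodal distribution, and was not used to produce the threshold values quoted above):

```python
# Illustrative helper: pick a discrimination threshold as the trough of a
# bimodal histogram of log-values.
import numpy as np

def trough_threshold(log_vals, bins=200):
    hist, edges = np.histogram(log_vals, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")  # light smoothing
    peaks = [i for i in range(1, bins - 1)
             if smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    i1, i2 = sorted(sorted(peaks, key=lambda i: smooth[i])[-2:])  # two highest
    return centers[i1 + np.argmin(smooth[i1:i2 + 1])]

# Agreement with a reference (e.g. SALI-based) classification can then be
# computed as: 100.0 * np.mean((log_index > threshold) == reference_chaotic_mask)
```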
As was also observed in [32] this approach does not necessarily lead to the correct characterization of all orbits, with discrepancies mainly appearing at the edges of stability islands. We investigate if this trend also persists for the 4D standard map by comparing the characterization obtained from the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) diagnostics against the one made by the SALI indicator for \(T=10^{3}\) iterations. Noting that the SALI of regular orbits will fluctuate around a positive, constant value, while for chaotic orbits it will exponentially decrease to zero [see Eq. (8) and Fig. 1(c)], we consider a threshold
value of \(\log_{10}\mathrm{SALI}=-8\), so that an orbit is characterized as regular if \(\log_{10}\mathrm{SALI}\geq-8\), and as chaotic if \(\log_{10}\mathrm{SALI}<-8\). The percentage agreement \(P_{A}\) of the characterization of the orbits of Fig. 2 obtained by the three LDs-based diagnostics, with respect to the one obtained by the SALI is \(P_{A}\approx 94.4\%\), \(P_{A}\approx 92.5\%\) and \(P_{A}\approx 94.5\%\) for \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) respectively.
In Figs. 2(g)-(i) we respectively show for the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices the regions in the considered 2D subspace, where the indicators fail to correctly identify (with respect to the SALI categorization) the chaotic or regular nature of orbits. In particular, blue points correspond to regular (according to SALI) orbits which are falsely identified as chaotic by the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indicators, while red points denote orbits classified as chaotic by SALI, which are incorrectly identified as regular. Although the effectiveness of the three indicators in distinguishing between regular and chaotic orbits is clearly captured by the very high agreement percentages (\(\gtrsim 90\%\)) with respect to the SALI classification, the results of Figs. 2(g)-(i) show that the large majority of incorrectly characterized orbits are mainly located at the edges of regular islands where sticky chaotic orbits exist, in agreement with what was reported in [32]. Our results show that the \(D_{L}^{2}\) indicator falsely characterizes as regular many sticky chaotic orbits at the borders of stability islands [red points in Fig. 2(g)], while the use of the \(R_{L}^{2}\) and \(S_{L}^{2}\) indices [Figs. 2(h), (i)] results in a more or less similar chart of wrongly identified orbits, which again are mainly located at the borders of stability islands. It is worth noting that \(S_{L}^{2}\) performs better than \(R_{L}^{2}\) as it falsely characterizes as regular fewer chaotic orbits in the large chaotic sea [i.e. there are fewer red points seen in the chaotic portion of Fig. 2(i) than in Fig. 2(h)].
### Effect of the number of iterations
A key factor when studying chaotic systems is the integration time, or the number of iterations in the case of the 4D map (1), required for indicators to correctly characterize orbits as regular or chaotic. In general, too few iterations do not allow for the exponential divergence of nearby orbits observed in the case of chaotic motion to lead to very large deviations, which in turn, would make apparent the chaotic nature of the orbits. This is
Figure 1: (a) The projection of a regular orbit (blue points) with ICs \(x_{1}=0.6\), \(x_{2}=0.05\), \(x_{3}=0.54\), \(x_{4}=0.01\), and a chaotic orbit (orange points) with ICs \(x_{1}=0.2\), \(x_{2}=0.2\), \(x_{3}=0.54\), \(x_{4}=0.01\), of the 4D map (1) with \(K=1.5\) and \(B=0.05\) on the \((x_{1},x_{2})\) plane for \(T=2500\) forwards iterations of the map. Time evolution of (b) the \(\Lambda\) (3), (c) the SALI (6), (d) the \(D_{L}^{2}\) (11), (e) the \(R_{L}^{2}\) (12), and (f) the \(S_{L}^{2}\) (13) for the two orbits of (a). The \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) are evaluated in the plane \((x_{1},x_{2})\), with a grid spacing \(\sigma=10^{-3}\) in each direction. The dashed line in (b) denotes the function \(\ln(T)/T\) (5).
true not only for indicators based on neighboring orbits' LDs but for any chaos indicator. On the other hand, too many iterations will make the use of the considered indicators less efficient as they will increase the required computational time.
It is plausible to assume that the total number of iterations required for the characterization of orbits as chaotic or regular is directly related to the time it takes for the distributions of the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices to clearly reveal two distinct peaks. When the two peaks in the distribution are well formed, a threshold value can be established between them allowing the discrimination between regular and chaotic orbits. Thus, in order to investigate the effect of the number of iterations \(T\) on the behavior of the LDs-based diagnostics we respectively plot in Figs. 3(a)-(c) the normalized distributions of the logarithms of the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) values for the ensemble of orbits considered in Fig. 2. These distributions are computed for different numbers of forward iterations \(T\) of map (1), namely for \(T=50\) (blue curves), \(T=100\) (orange curves), \(T=250\) (green curves), \(T=1000\) (red curves) and \(T=2500\) (purple curves). From the results of these figures we see that the shapes of the distributions of the three diagnostics do not significantly change, although in the case of \(S_{L}^{2}\) [Fig. 3(c)] the distribution is shifted towards larger \(\log_{10}S_{L}^{2}\) values as \(T\) increases, and that the distance between the peaks remains approximately constant. In addition, for larger \(T\) the height of the trough between the two well formed peaks decreases, allowing the more accurate characterization of the orbits' nature as it becomes easier to identify a well-placed threshold value between the two peaks. Nevertheless, since we would like to use the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices as a fast (i.e. based on low iteration numbers) chaos
Figure 2: Results obtained for orbits having their ICs on a \(1000\times 1000\) grid on the 2D subspace \((x_{1},x_{2})\) with \(x_{3}=0.54\), \(x_{4}=0.01\), of the 4D map (1) for \(K=1.5\) and \(B=0.05\), after \(T=10^{3}\) forward iterations. The ICs are colored according to the orbits’ (a) \(\log_{10}D_{L}^{2}\) (11), (b) \(\log_{10}R_{L}^{2}\) (12), and (c) \(\log_{10}S_{L}^{2}\) (13) values, using the color scales at the top of each panel. Normalized distributions of the (d) \(\log_{10}D_{L}^{2}\), (e) \(\log_{10}R_{L}^{2}\) and (f) \(\log_{10}S_{L}^{2}\) values of the orbits considered in (a)-(c). The values \(\log_{10}D_{L}^{2}=-2.14\), \(\log_{10}R_{L}^{2}=-2.85\) and \(\log_{10}S_{L}^{2}=6.70\) are respectively denoted in (d), (e) and (f) by an orange vertical, dashed line. The set of the considered ICs which are incorrectly characterized by (g) the \(D_{L}^{2}\), (h) the \(R_{L}^{2}\), and (i) the \(S_{L}^{2}\) index, with blue points corresponding to regular orbits (according to the classification obtained by the SALI method for \(T=10^{3}\)) which are falsely identified as chaotic, and red points denoting chaotic orbits which are incorrectly identified as regular.
indicator, we can say that, for the cases considered here, \(T=1000\) is sufficient to properly capture the overall dynamics of the considered ensemble of orbits.
For completeness' sake, we also present in Fig. 3 the evolution of the normalized distributions of the two basic chaos indicators we consider in our study, the ftmLE \(\Lambda\) [Fig. 3(d)] and the SALI [Fig. 3(e)]. From Fig. 3(d) we see that the distributions of the \(\Lambda\) values have a high, sharp peak for \(\log_{10}\Lambda\gtrsim-1\), which corresponds to the system's chaotic orbits for which \(\Lambda\) eventually saturates to a positive value [see the orange curve in Fig. 1(b)]. In addition, we observe a second, smaller in this case, peak corresponding to regular orbits, which propagates to the left of Fig. 3(d), towards smaller \(\log_{10}\Lambda\) values, in agreement with Eq. (5) [also see the blue curve in Fig. 1(b)]. The region between these two well formed peaks corresponds to weakly chaotic orbits for which \(\Lambda\) reaches positive but small values. On the other hand, the distribution of the SALI values [Fig. 3(e)] develops very quickly two well separated formations: a set of high positive values (\(\log_{10}\mathrm{SALI}\gtrsim-4\)), which corresponds to regular orbits [see Eq. (8) and the blue curve in Fig. 1(c)], and a high peak at \(\log_{10}\mathrm{SALI}\approx-16\) corresponding to chaotic orbits whose SALI became practically zero, reaching the level of the computer accuracy (i.e. \(10^{-16}\)) due to the exponentially fast decrease of the index [see Eq. (8) and the orange curve in Fig. 1(c)]. It is worth noting that even for as few iterations as \(T=250\) [green curve in Fig. 3(e)] the distribution of the SALI values is practically flat and vanishing between the two well defined regions of small (chaotic orbits) and large (regular orbits) \(\log_{10}\mathrm{SALI}\) values. This fast distinction between the two categories of orbits is a main advantage of the SALI method, which also allows the establishment of a well defined threshold value for discriminating between regular and chaotic orbits, which in our work is set to \(\log_{10}\mathrm{SALI}=-8\).
From the results of Fig. 3 we see that the increase of the number of iterations does not lead to a drastic improvement of the distinctive ability of the methods based on neighboring orbits' LDs, as the shape of their distributions eventually does not change significantly [Figs. 3(a)-(c)], in contrast to what happens with the distributions of the ftmLE and the SALI shown in Figs. 3(d) and 3(e). Thus, only slight adjustments are required for the threshold values of the \(D_{L}^{2}\) and \(R_{L}^{2}\) indices for the numbers of iterations reported in Fig. 3. In contrast, a change in the number of iterations for \(S_{L}^{2}\) results in an increase of the related threshold value. It is therefore good practice to inspect the value distributions of the three LDs-based quantities \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) in order to determine the optimal threshold values.
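As an illustration of this threshold-selection practice, a minimal sketch is given below: it locates the minimum of the trough between the two most prominent peaks of a (bimodal) distribution of \(\log_{10}\) index values. The function and variable names are our own and do not come from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def trough_threshold(log_index_values, bins=200):
    """Pick a regular/chaotic threshold at the minimum of the trough between
    the two most prominent peaks of the index-value distribution.
    Assumes a bimodal distribution, as in Figs. 2(d)-(f)."""
    hist, edges = np.histogram(log_index_values, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks, props = find_peaks(hist, height=0)
    # indices of the two tallest peaks, in increasing bin order
    lo, hi = np.sort(peaks[np.argsort(props["peak_heights"])[-2:]])
    # threshold = location of the distribution's minimum between them
    return centers[lo + np.argmin(hist[lo:hi + 1])]

# e.g., threshold = trough_threshold(np.log10(DL2_values))
```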
### Effect of the overall chaos extent and grid spacing
Let us now study the effect of the system's chaoticity on the accuracy of the indices. Both the nonlinearity parameter \(K\) and the coupling constant \(B\) of the 4D map (1) control the system's chaotic behavior, as, in general, their increase leads to more extended chaos. We investigated the performance of the three LDs-based diagnostics for various \(K\) and \(B\) values and we present here some representative results obtained by varying \(K\), while \(B\) is kept fixed. More specifically, in Figs. 4(a)-(c), we show SALI color plots for respectively \(K=0.75\), \(K=1.1\) and \(K=1.5\), and \(B=0.05\), computed for a total of \(T=2.5\times 10^{4}\) forward iterations on a grid of \(1000\times 1000\) evenly spaced ICs on the \((x_{1},x_{2})\) plane with \(x_{3}=0.54\) and \(x_{4}=0.01\). From these figures we clearly see that the increase of \(K\) results in a substantial increase in the number of chaotic orbits, as the area of yellow-colored regions corresponding to very low SALI values (which indicate chaos) increases. In fact, we find the percentage \(P_{C}\) of chaotic orbits to be \(P_{C}\approx 43.9\%\), \(P_{C}\approx 69.8\%\) and \(P_{C}\approx 79.6\%\) respectively for \(K=0.75\), \(K=1.1\) and \(K=1.5\), when the \(\log_{10}\mathrm{SALI}=-8\) threshold is used to discriminate between regular and chaotic orbits.
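For readers who wish to reproduce such ensemble studies, the sketch below iterates a commonly used coupled pair of 2D standard maps with nonlinearity \(K\) and coupling \(B\) on \([0,1)\) coordinates. We stress that this particular functional form is only an assumption standing in for map (1), whose exact expression is given earlier in the paper.

```python
import numpy as np

def iterate_map(ic, K=1.5, B=0.05, T=1000):
    """Iterate a 4D state (x1, x2, x3, x4) for T steps and return the orbit.
    A common coupled-standard-maps form, used here as a stand-in for map (1)."""
    orbit = np.empty((T + 1, 4))
    x1, x2, x3, x4 = ic
    orbit[0] = x1, x2, x3, x4
    for t in range(1, T + 1):
        # momentum-like updates first, then position-like updates (mod 1)
        x2 = (x2 + K / (2 * np.pi) * np.sin(2 * np.pi * x1)
                 - B / (2 * np.pi) * np.sin(2 * np.pi * (x3 - x1))) % 1.0
        x4 = (x4 + K / (2 * np.pi) * np.sin(2 * np.pi * x3)
                 - B / (2 * np.pi) * np.sin(2 * np.pi * (x1 - x3))) % 1.0
        x1, x3 = (x1 + x2) % 1.0, (x3 + x4) % 1.0
        orbit[t] = x1, x2, x3, x4
    return orbit
```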
We next investigate, for the three cases of Fig. 4, the effect of the total number of map iterations \(T\) on the ability of the LDs-based indices to correctly capture the nature of orbits, which is quantified by their percentage agreement \(P_{A}\) with the characterization obtained by SALI for the same \(T\). We note that here we consider the order \(n=2\) indices \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\), which are based on LDs' computations of neighboring orbits on the 2D plane \((x_{1},x_{2})\) defined by \(x_{3}=0.54\) and \(x_{4}=0.01\) [Figs. 4(a)-(c)]. The \(P_{A}\) is computed for ten different final iteration numbers and the obtained results are presented in Figs. 4(d)-(f). For each of the three considered \(K\) values and the ten different final iteration numbers \(T\) an appropriate threshold value for discriminating between regular and chaotic orbits is selected for every index following the approach described in Sect. III.2, while for the SALI the threshold value \(\log_{10}\mathrm{SALI}=-8\) is always used.
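The agreement computation itself is simple; a sketch (our own naming) assuming the convention used in this paper that large index values indicate chaos, while small SALI values do:

```python
import numpy as np

def percentage_agreement(log_index, index_threshold, log_sali, sali_threshold=-8.0):
    """P_A: percentage of orbits whose regular/chaotic label from an LDs-based
    index (value above its threshold -> chaotic) matches the SALI label
    (log10(SALI) below -8 -> chaotic)."""
    chaotic_ld = np.asarray(log_index) > index_threshold
    chaotic_sali = np.asarray(log_sali) < sali_threshold
    return 100.0 * np.mean(chaotic_ld == chaotic_sali)
```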
For \(K=0.75\) [Figs. 4(a) and 4(d)] the phase space displays the smallest area of chaotic behavior among the three cases we considered, and \(P_{A}\) decreases as the number of iterations increases. This is due to the large number of sticky orbits at the edges of the many regular islands, whose weakly chaotic nature is revealed by the SALI only after a rather high number of iterations. Thus, initially, for small \(T\) values the SALI, as well as the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices, wrongly characterize the sticky orbits as regular, but since all these methods agree on this assessment the related \(P_{A}\) values in Fig. 4(d) are large. For larger \(T\) values the SALI eventually manages to identify the sticky orbits as chaotic, but the LDs-based indicators fail to do so, and consequently the \(P_{A}\) values decrease. This discrepancy is due to the known difficulty of the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indicators in correctly characterizing sticky orbits, which was already seen in Figs. 2(g)-(i). This limitation was also pointed out in [32]. For \(K=1.1\) [Figs. 4(b) and 4(e)], the phase space's chaoticity increases and fewer sticky orbits are present compared to the \(K=0.75\) case, and \(P_{A}\) is observed
to increase for large \(T\) values, steadily exhibiting values \(\gtrsim 90\%\). Similarly, for the highly chaotic case of \(K=1.5\) [Figs. 4(c) and 4(f)], for which the number of sticky orbits has been drastically reduced, as the extent of the chaotic sea has grown, \(P_{A}\) increases with growing \(T\).
Our analysis shows again that the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indicators are less efficient at properly characterizing orbits at the edges of regular regions. This becomes especially problematic when the system's phase space is occupied by many stability islands and chaos is confined in very thin strips between these islands, as is for example seen in the case of Fig. 4(a). Furthermore, the results of Figs. 4(e) and 4(f) show that the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices have similar chaos diagnostic capabilities, as in almost all studied cases they achieve similar \(P_{A}\) values for large enough \(T\). In addition, taking also into account that we want to use these indicators as fast diagnostics, we observe that (as was also seen in Sect. III.2) \(T=1000\) is a very good choice for all indices to produce reliable estimates of the extent of chaos, especially for the \(K=1.1\) and \(K=1.5\) cases [Figs. 4(e) and 4(f) respectively] for which \(P_{A}\gtrsim 90\%\).
Having considered the effect of \(T\) on the chaos diagnostic accuracy of \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\), let us now discuss the effect of the grid spacing size \(\sigma\) on their performance. For this purpose we respectively present in Figs. 4(g)-(i) for \(K=0.75\), \(K=1.1\) and \(K=1.5\) the \(P_{A}\) values obtained by the three indicators at \(T=10^{3}\) for five different grid spacings on the \((x_{1},x_{2})\) plane considered in Figs. 4(a)-(c) in the range \(10^{-4}\leq\sigma\leq 10^{-2}\). For each \(K\) an increase in accuracy \(P_{A}\) is seen as \(\sigma\) decreases, indicating that computations based on a finer grid capture more accurately the system's dynamics. On the other hand, the use of more grid points results in a significant increase of the required computational time, something which is not desirable for the implementation of the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices as fast chaos diagnostics. Nevertheless, the fact that the accuracy \(P_{A}\) for \(\sigma=10^{-4}\) is only slightly better than the one obtained for \(\sigma=10^{-3}\) suggests that, after some point, the further decrease of the grid spacing has only a moderate impact on the achieved accuracy. Thus, a rather good choice for the grid spacing in our study, balancing the obtained accuracy and the required computational effort, is \(\sigma=10^{-3}\).
The results of Fig. 4 clearly show that the extent of chaos, as well as its structure in the phase space, significantly influences the usefulness of the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indicators in characterizing the overall dynamics. In particular, we should be cautious when these indices are applied to systems for which we expect a small amount
Figure 3: Normalized distributions of the logarithms of the (a) \(D_{L}^{2}\) (11), (b) \(R_{L}^{2}\) (12), (c) \(S_{L}^{2}\) (13), (d) \(\Lambda\) (3), and (e) SALI (6), values of the orbits considered in Fig. 2 for \(T=50\) (blue curves), \(T=100\) (orange curves), \(T=250\) (green curves), \(T=1000\) (red curves) and \(T=2500\) (purple curves) forward iterations of the 4D map (1).
of chaos. Although this is a limitation of the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indicators, it is worth noting that they still prove to be highly accurate in their characterization of orbits for systems with moderate or large \(P_{C}\) values.
### Global dynamics and the role of the order of the LDs-based diagnostics
So far we computed the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices on 2D subspaces of the 4D phase space of map (1). Now we examine the effect that changing the order \(n\) of the three LDs-based indicators has on their performance, going beyond their \(n=2\) versions. As the order \(n\) is increased we add and process more information from the surroundings of a studied orbit, since we include in the evaluation of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices the LD values of more neighboring orbits. Thus, we expect the obtained results to capture more accurately the nature of the underlying dynamics. Unfortunately, increasing the order \(n\) comes with the drawback of the additional computational effort required to evaluate the LDs of the extra grid points used for computing the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices.
The effect of order \(n\) on the distributions of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) values is shown in Fig. 5 where these distributions are plotted for orders \(n=1\) (blue curves), \(n=2\) (orange curves), \(n=3\) (green curves) and \(n=4\) (red curves). These distributions are obtained for the orbits considered in Fig. 2, whose ICs lie on the 2D subspace \((x_{1},x_{2})\), \(x_{3}=0.54\), \(x_{4}=0.01\) of the 4D map (1) with \(K=1.5\) and \(B=0.05\). In particular, for \(n=1\) neighboring orbits
Figure 4: Results obtained for orbits having their ICs on a \(1000\times 1000\) grid on the 2D subspace \((x_{1},x_{2})\) with \(x_{3}=0.54\), \(x_{4}=0.01\), of the 4D map (1) with \(B=0.05\) and [(a), (d), (g)] \(K=0.75\), [(b), (e), (h)] \(K=1.1\), [(c), (f), (i)] \(K=1.5\). In (a)-(c) the ICs are colored according to the orbits’ \(\log_{10}\) SALI value after \(T=2.5\times 10^{4}\) forward iterations using the color scales on the top of each panel. (d)-(f) The percentage accuracy \(P_{A}\) of the orbits correctly characterized by the \(D_{L}^{2}\) (blue points), \(R_{L}^{2}\) (orange points) and \(S_{L}^{2}\) (green points) with respect to the identification obtained by the SALI method for the same number of iterations \(T\), for the orbits respectively considered in (a)-(c). (g)-(i) The \(P_{A}\) of orbits correctly characterized by the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) (blue, orange and green points respectively) after \(T=10^{3}\) iterations for five different grid spacings \(\sigma\) on the \((x_{1},x_{2})\) space of (a)-(c) respectively. In all panels the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices are evaluated through computations of neighboring orbits’ LDs on the 2D \((x_{1},x_{2})\) plane. In (d)-(i) the dashed line connections are used to guide the eye.
on the \(x_{1}\) direction are considered for the computation of \(D_{L}^{1}\), \(R_{L}^{1}\) and \(S_{L}^{1}\), while for \(n=2\) the nearby orbits are located on the \((x_{1},x_{2})\) plane. For the evaluation of the \(D_{L}^{3}\), \(R_{L}^{3}\) and \(S_{L}^{3}\) indicators additional neighboring orbits with variations in their \(x_{3}\) coordinates are considered, while orbits with variations also in the \(x_{4}\) direction are used for the calculation of the order \(n=4\) indices. We note that in all cases the grid spacing between neighboring orbits is \(\sigma=10^{-3}\).
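The direction bookkeeping just described can be made concrete with a small helper that generates, for the directions included at a given order \(n\), the neighboring ICs displaced by \(\pm\sigma\) along each axis. This is our own illustrative sketch of one plausible stencil; the exact set of neighbors entering \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) is fixed by Eqs. (11)-(13) earlier in the paper.

```python
import numpy as np

def order_n_neighbors(ic, directions, sigma=1e-3):
    """Return the 2 * len(directions) neighbors of `ic` displaced by +/- sigma
    along each included coordinate axis; e.g., directions=(0, 1) gives the
    n=2 neighbors on the (x1, x2) plane. Coordinates live on [0, 1)."""
    ic = np.asarray(ic, dtype=float)
    neighbors = []
    for d in directions:
        for sign in (+1.0, -1.0):
            nb = ic.copy()
            nb[d] = (nb[d] + sign * sigma) % 1.0
            neighbors.append(nb)
    return np.array(neighbors)
```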
From the results of Fig. 5 we see that for the \(D_{L}^{n}\) [Fig. 5(a)] and \(S_{L}^{n}\) distributions [Fig. 5(c)] the two observed peaks increase in height as \(n\) grows, although their positions do not change drastically, while at the same time the trough between them is decreasing. Thus, defining a threshold value for discriminating between regular and chaotic orbits becomes easier for larger \(n\). It is also worth noting that the position of the threshold value at the minimum of the trough does not vary significantly with the indices' order, especially for \(n\geq 2\). Interestingly, an increase in the order \(n\) does not seem to have any effect on the shape of the distributions of the \(R_{L}^{n}\) as shown in Fig. 5(b).
In order to gain a more general understanding of how the percentage accuracy \(P_{A}\) of the three indicators changes with order \(n\), and also to investigate the potential effect of the studied ensembles of orbits on the performance of the indices, the orbit classification obtained by each indicator for \(n=1,2,3\) and \(4\) is compared to the SALI characterization for the same number of forward iterations, \(T=10^{3}\), for six different sets of orbits. The examined ensembles of ICs are defined on the 2D subspaces \((x_{1},x_{2})\), \((x_{1},x_{3})\), \((x_{1},x_{4})\), \((x_{2},x_{3})\), \((x_{2},x_{4})\) and \((x_{3},x_{4})\) of the 4D map (1) with \(K=1.5\) and \(B=0.05\), by considering a \(1000\times 1000\) evenly spaced grid of ICs (so that the grid spacing is \(\sigma=10^{-3}\)), while the remaining two variables are kept fixed at \(x_{1}=0.6\), \(x_{2}=0.2\), \(x_{3}=0.54\), and \(x_{4}=0.01\), depending on the 2D subspace under consideration. The accuracy of each of the indicators \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) is then calculated for \(1\leq n\leq 4\) for each set of ICs in the following way. For \(n=1\) the indices are computed along the \(x_{i}\) direction corresponding to the smaller \(i\) index on the 2D subspace, for \(n=2\) along both directions of the 2D subspace, while for \(n=3\) the \(x_{i}\) direction with the smaller \(i\) index among the ones not included in the 2D subspace is also considered. Obviously, for \(n=4\) all directions are included in the computations. For example, in the case of the \((x_{2},x_{3})\) subspace the used ICs are on a \(1000\times 1000\) grid on the whole \((x_{2},x_{3})\) plane, i.e. \(0\leq x_{2}<1\), \(0\leq x_{3}<1\), with \(x_{1}=0.6\) and \(x_{4}=0.01\). Then for \(n=1\) the three indicators are computed by considering orbits along the \(x_{2}\) direction, for \(n=2\) along both the \(x_{2}\) and \(x_{3}\) directions, and for \(n=3\) along the \(x_{1}\), \(x_{2}\) and \(x_{3}\) directions. The performed studies in the several subspaces, which cover a wide range of coordinate orientations, and for all the possible orders of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indicators, ensure a global investigation of the indices' performance. The percentages \(P_{C}\) of chaotic orbits for the six considered ensembles, according to the SALI classification for \(T=10^{3}\), are \(P_{C}\approx 72.4\%\) for the \((x_{1},x_{2})\) case, \(P_{C}\approx 91.8\%\) for \((x_{1},x_{3})\), \(P_{C}\approx 89.6\%\) for \((x_{1},x_{4})\), \(P_{C}\approx 82\%\) for \((x_{2},x_{3})\), \(P_{C}\approx 77.7\%\) for \((x_{2},x_{4})\) and \(P_{C}\approx 84\%\) for the \((x_{3},x_{4})\) case.
In Fig. 6 we present the percentage accuracy \(P_{A}\) results obtained by the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices of order \(1\leq n\leq 4\) for the six sets of considered ICs. From this figure we see that the efficiency of the \(R_{L}^{n}\) index [Fig. 6(b)] in correctly capturing the regular or chaotic nature of the studied orbits does not practically depend on the order \(n\), as for all considered cases its \(P_{A}\) does not change with \(n\). On the other hand, for the \(D_{L}^{n}\) [Fig. 6(a)] and the \(S_{L}^{n}\) indices [Fig. 6(c)] we see a noticeable rise of \(P_{A}\) when \(n\) is increased from \(n=1\) to \(n=2\) (which is more significant in the case of \(S_{L}^{n}\)), followed by a mild improvement as \(n\) grows further. The main outcome of this analysis is that \(n=2\) seems to be the optimal order for the three indicators, as setting \(n>2\) does not result in significant improvements of the \(P_{A}\) values that would justify the associated increase in the required computational time. We note that, due to the additional computations of LDs, the evaluation of indices of order \(n=3\) (\(n=4\)) requires approximately three (six) times more computational effort with respect to the \(n=2\) cases.
In Fig. 7 we see the percentage accuracy \(P_{A}\) obtained by the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices with respect to the percentage \(P_{C}\) of chaotic orbits (obtained by the SALI method) in the six considered sets of ICs. As expected, a general increase in accuracy for the three indicators is seen as the percentage of chaos grows, with the \(D_{L}^{2}\) and \(S_{L}^{2}\) indices being more accurate than \(R_{L}^{2}\). This behavior again demonstrates that the three LDs-based indices become more accurate for more chaotic sets of orbits, in accordance with the results discussed in Sect. III.3 [Fig. 4].
As an additional example of the applicability of the three LDs-based diagnostics for investigating the global dynamics of map (1), we consider their implementation on a 4D subspace of the system's phase space for \(K=1.5\) and \(B=0.05\). In particular we consider the subspace defined by \(0.5\leq x_{1}<0.6\), \(0\leq x_{2}<0.1\), \(0\leq x_{3}<1\) and \(0\leq x_{4}<1\), which corresponds to \(1\%\) of the total phase space. From the analyses performed so far we know that \(n=2\) is the optimal order for achieving an accurate characterization of chaotic orbits and that LDs computations for \(T=10^{3}\) forward iterations are sufficient for that purpose. Thus, we evaluate the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices along two directions of the 4D subspace, and in particular, along the \(x_{1}\) and \(x_{2}\) coordinates, by taking a grid of \(100\times 100\) points on the \((x_{1},x_{2})\) space, which corresponds to a \(\sigma=10^{-3}\) grid spacing in accordance with the outcomes of Sect. III.3. Furthermore, in order to get a good representation of the whole considered 4D subspace, without unnecessarily increasing the number of studied ICs, we also consider a grid of \(100\times 100\) points along \(x_{3}\) and \(x_{4}\). This arrangement results in a total of \(10^{8}\) ICs, with \(P_{C}\approx 73\%\) of them being chaotic according to their SALI values at \(T=10^{3}\). The resultant distributions of the
\(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices for this 4D subspace are shown in Figs. 8(a)-(c) respectively, and have the same general shape as those seen in Figs. 2(d)-(f), Figs. 3(a)-(c) and Fig. 5, with two well-formed peaks corresponding to regular and chaotic orbits. The similarity of the obtained distributions for all considered cases in this work clearly indicates the generality of their shape, i.e. two peaks with a trough in between, which defines the place of the indices' threshold value for identifying chaotic orbits. It is worth noting that the exact location of this threshold does not significantly alter the overall orbit characterization. In order to make this point more clear, for each distribution of Fig. 8 we consider intervals for the location of the corresponding thresholds in the trough between the two peaks. These intervals are \(-2.65\leq\log_{10}D_{L}^{2}\leq-2\), \(-3.2\leq\log_{10}R_{L}^{2}\leq-2.8\) and \(4.3\leq\log_{10}S_{L}^{2}\leq 5.2\) and are denoted by the highlighted orange regions in each panel of Fig. 8. Considering ten different evenly distributed threshold values in these intervals we found that the accuracy \(P_{A}\) of the characterization made by the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) indices, in comparison to the one achieved by the SALI, was in the ranges \(93.0\%\lesssim P_{A}\lesssim 94.9\%\) for the \(D_{L}^{2}\) index, \(91.8\%\lesssim P_{A}\lesssim 92.6\%\) for the \(R_{L}^{2}\) indicator, and \(92.5\%\lesssim P_{A}\lesssim 94.7\%\) for the \(S_{L}^{2}\) method. These results clearly illustrate that we can implement the \(D_{L}^{2}\), \(R_{L}^{2}\) and \(S_{L}^{2}\) diagnostics to distinguish between regular and chaotic orbits on a global scale in the phase space of the 4D map (1), and that the selected threshold value does not have a strong impact on the accuracy of this characterization.
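To give an idea of how such a large ensemble can be handled, the \(10^{8}\) ICs of this 4D subspace can be enumerated lazily, without ever storing them all in memory; a sketch with our own naming:

```python
import itertools
import numpy as np

def subspace_ics():
    """Yield the 10^8 ICs described above: a 100 x 100 grid on (x1, x2) in
    [0.5, 0.6) x [0, 0.1) (spacing 10^-3), combined with a 100 x 100 grid
    on (x3, x4) covering [0, 1)^2."""
    x1s = np.linspace(0.5, 0.6, 100, endpoint=False)
    x2s = np.linspace(0.0, 0.1, 100, endpoint=False)
    x3s = np.linspace(0.0, 1.0, 100, endpoint=False)
    x4s = np.linspace(0.0, 1.0, 100, endpoint=False)
    for ic in itertools.product(x1s, x2s, x3s, x4s):
        yield np.array(ic)
```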
Figure 5: Normalized distributions of the (a) \(\log_{10}D_{L}^{n}\) (11), (b) \(\log_{10}R_{L}^{n}\) (12), and (c) \(\log_{10}S_{L}^{n}\) (13) values of the orbits considered in Fig. 2 for orders \(n=1\) (blue curves), \(n=2\) (orange curves), \(n=3\) (green curves) and \(n=4\) (red curves). The \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices are evaluated along the \(x_{1}\) direction for \(n=1\), the \(x_{1}\) and \(x_{2}\) directions for \(n=2\), the \(x_{1}\), \(x_{2}\) and \(x_{3}\) directions for \(n=3\) and all coordinate directions for \(n=4\).
Figure 6: The percentage accuracy \(P_{A}\) obtained by the (a) \(D_{L}^{n}\) (11), (b) \(R_{L}^{n}\) (12), and (c) \(S_{L}^{n}\) (13) indices with respect to their order \(n\), for six different sets of ICs of the 4D map (1) with \(K=1.5\) and \(B=0.05\). The considered ensembles of orbits are defined on the 2D subspaces \((x_{1},x_{2})\) (blue points), \((x_{1},x_{3})\) (orange points), \((x_{1},x_{4})\) (green points), \((x_{2},x_{3})\) (red points), \((x_{2},x_{4})\) (purple points) and \((x_{3},x_{4})\) (brown points) by considering a \(1000\times 1000\) grid of ICs, while the remaining two variables are set to \(x_{1}=0.6\), \(x_{2}=0.2\), \(x_{3}=0.54\), and \(x_{4}=0.01\) depending on the 2D subspace under consideration. The results are computed for \(T=10^{3}\) forward iterations, and the dashed line connections are used to guide the eye.
## IV Summary and conclusion
In this work, we investigated the ability of some simple quantities based on LDs computations to correctly identify orbits as regular or chaotic. In particular, we focused our attention on a conservative dynamical system whose phase space dimensionality makes the direct visualization of the dynamics a challenging task: the 4D area preserving map (1), which is composed of two coupled 2D standard maps. More specifically, the quantities we considered were the difference \(D_{L}^{n}\) (11), and the ratio \(R_{L}^{n}\) (12) of neighboring orbits' LDs, as well as the \(S_{L}^{n}\) index (13), which is related to the second spatial derivative of the LDs. The \(S_{L}^{n}\) index was initially presented in [31] (in a slightly different formulation to the one used in our study), while the \(D_{L}^{n}\) and \(R_{L}^{n}\) diagnostics were introduced in [32], where they were also applied to low-dimensional conservative dynamical systems, namely the two-degree-of-freedom Henon-Heiles Hamiltonian and the 2D standard map. Here, trying to investigate the applicability of these indices to high-dimensional systems, we considered a symplectic map having a 4D phase space. We emphasize that all three indicators rely solely on computations of forward in time LDs (although backward LD computations produce similar results) of initially neighboring orbits, lying on \(n\)-dimensional spaces, with \(n\) referred to as the order of each index.
Although color plots of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices manage to correctly capture a qualitative picture of the system's dynamics [Figs. 2(a)-(c)], as LDs themselves also do, we showed that they can also, quite successfully, identify individual orbits as regular or chaotic, and consequently quantify the system's extent of chaos. Actually, in all studied setups the three LDs-based indices managed to correctly reveal the regular or chaotic nature of orbits with an agreement \(P_{A}\gtrsim 90\%\) with respect to the classification obtained by the SALI method. This achievement is all the more important if we take into account the fact that the evaluation of these indices depends only on the time evolution of orbits and does not require the knowledge of the related variational equations (in the case of continuous time systems) or the corresponding tangent map (for discrete time maps) governing the evolution of small perturbations to the studied orbits.
In order to use the three LDs-based quantities as chaos diagnostics we defined appropriate threshold values from the distributions of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices [Figs. 2(d)-(f)]. These thresholds were used to characterize an orbit as regular (chaotic) if its index value was below (above) the threshold. The determination of these thresholds was facilitated by the general shape of the distributions, which have two well defined peaks, corresponding to chaotic (peak at higher index values) and regular orbits (peak at lower values), separated by a trough [Figs. 2(d)-(f), Fig. 5, and Fig. 8], where the threshold was set. Typically this threshold was defined at the distribution's minimum in the trough [Figs. 2(d)-(f)], but the obtained orbit classifications were not too sensitive to the exact location of the threshold, as a variation of its value in the trough between the two peaks changed \(P_{A}\) by \(\lesssim 2\%\) [Fig. 8].
Even though the general form of the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) distributions remained the same, their explicit shape, and consequently the location of the threshold value for each index, depended on the number of iterations \(T\) of the map for which the indices were computed [Figs. 3(a)-(c)], as well as the order \(n\) [Fig. 5]. In general, the increase of \(T\) and \(n\) resulted in more pronounced peaks [with the exception of \(R_{L}^{n}\) whose distribution does not seem to be affected by \(n\); Fig. 5(b)], while at the same time the trough's height decreased, making the determination of the threshold value easier, and the efficiency of the indices higher. Indeed an increase of \(P_{A}\) was observed for various ensembles of studied orbits when \(T\) [Figs. 4(e) and (f)] and \(n\) [Fig. 6] grew. On the other hand, the increase of \(T\) and/or \(n\) led to longer computations, as orbits were followed for more iterations when \(T\) grew, and the number of neighboring orbits, whose LDs were needed for the indices' evaluation, increased for larger orders \(n\). It is also worth noting that all distributions practically covered the same value intervals, with the exception of the \(S_{L}^{n}\) which was shifted to higher index values when \(T\) increased [Fig. 3(c)]. Trying to find a balance between the achieved accuracy \(P_{A}\) in identifying chaos and the overall required computational time, in order to use the \(D_{L}^{n}\), \(R_{L}^{n}\) and \(S_{L}^{n}\) indices as efficient, short time chaos diagnostics, we showed that good choices for the \(T\) and \(n\) variables are \(T=1000\) and \(n=2\). Another factor which influenced the accuracy and the efficiency of the three indicators was the initial phase space distance (grid spacing \(\sigma\)) between the neighboring orbits
Figure 7: The percentage accuracy \(P_{A}\) of the (a) \(D_{L}^{2}\) (11), (b) \(R_{L}^{2}\) (12), and (c) \(S_{L}^{2}\) (13) indices for the six different sets of ICs considered in Fig. 6, with respect to the percentage \(P_{C}\) of chaotic orbits obtained by the SALI method. In all cases the related LDs and SALI values were calculated for a total of \(T=10^{3}\) forward iterations.
for which LDs were computed. We showed that a finer grid (smaller distances) led to more accurate results and higher \(P_{A}\) values [Figs. 4(g)-(i)], with the obvious drawback of an increase in the required computational effort, as more orbits had to be evolved. Our analysis indicated that a good balance between these two factors was obtained for \(\sigma=10^{-3}\).
We also explored the effect on the performance of the three indicators of the system's extent of chaos, i.e. the fraction \(P_{C}\) of chaotic orbits, as this was defined by the SALI method. Our results showed that the indicators perform better for systems with higher \(P_{C}\) values. More specifically, we found that the three diagnostics mainly failed to correctly identify the nature of orbits located at the edges of stability islands, where sticky chaotic orbits exist [Figs. 2(g)-(i)]. Consequently, the efficiency of these indices was decreased when the system's phase space was occupied by many stability islands of various sizes, having narrow chaotic strips between them where many sticky orbits resided [Figs. 4(a), (d) and (g)]. Nevertheless, even in such cases, an appropriate selection of the computation variables (in our case \(n=2\), \(T=1000\) and \(\sigma=10^{-3}\)) led to good results with \(P_{A}\gtrsim 90\%\). The main outcome of that investigation is that a fair amount of care should be taken when applying these LDs-based indices to systems where low levels of chaos are expected.
In summary, we found that, with respect to the variations of the distributions of the different indices (which affect the determination of the threshold value for discriminating between regular and chaotic orbits), the \(S_{L}^{n}\) distributions were significantly affected (moved to higher values) as \(T\) grew, although they more or less retained their shape [Fig. 3(c)]. At the other extreme, the \(R_{L}^{n}\) distributions were not influenced by the order \(n\) [Fig. 5(b)]. In all other cases we observed slight distribution variations with respect to \(T\) [Figs. 3(a) and (b)] and \(n\) [Figs. 5(a) and (c)], which led to small (if any) changes in the considered threshold values, which nevertheless did not drastically affect the overall orbit classification [Fig. 8].
From the results presented in this study, it is apparent that, in general, the \(D_{L}^{n}\) and \(S_{L}^{n}\) indicators performed better than \(R_{L}^{n}\) as they achieved larger \(P_{A}\) values [Figs. 4(d)-(i) and 7]. Thus, if only one index is to be used for the global investigation of the chaotic behavior of a model, we recommend this indicator to be \(D_{L}^{n}\) or \(S_{L}^{n}\), with, in general, the latter being a preferable choice as it performed slightly better with respect to the obtained \(P_{A}\) values [Figs. 4(d)-(i) and 7], although its threshold value significantly varies with \(T\) [Fig. 3]. Nevertheless, once the LDs have been computed for a tested ensemble of orbits, evaluating any of the three indicators is a straightforward task. It is worth noting that although the results obtained by the \(D_{L}^{n}\) and \(S_{L}^{n}\) indicators are not as precise as those achieved by standard chaos detection techniques like the SALI, the computations needed for their evaluation do not require the knowledge of the variational equations or the construction of the related tangent map, which simplifies the process of revealing the chaoticity of orbits.
We emphasize that the generality of our outcomes is supported by the fact that the presented results were obtained for several sets of ICs located in various subspaces of the map's phase space, having different dimensions, and for different parameter values. Our findings show that tools based on LDs computations can be effectively used as chaos diagnostic techniques also for conservative dynamical systems of higher dimensions, extending and completing in this way the results presented in [32].
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgments
A. N. acknowledges support from the University of Cape Town (University Research Council, URC) postdoctoral Fellowship grant and from the Oppenheimer Memorial Trust (OMT). M. H. acknowledges support by the National Research Foundation (NRF) of South Africa (grant number 129630). M. K. and S.W. acknowledge the financial support provided by the EPSRC Grant No. EP/P021123/1. We thank the High Performance Computing facility of the University of Cape Town and the Centre for High Performance Computing [43] of South Africa for providing computational resources for this project.
|
2310.12112 | A Cautionary Tale: On the Role of Reference Data in Empirical Privacy
Defenses | Within the realm of privacy-preserving machine learning, empirical privacy
defenses have been proposed as a solution to achieve satisfactory levels of
training data privacy without a significant drop in model utility. Most
existing defenses against membership inference attacks assume access to
reference data, defined as an additional dataset coming from the same (or a
similar) underlying distribution as training data. Despite the common use of
reference data, previous works are notably reticent about defining and
evaluating reference data privacy. As gains in model utility and/or training
data privacy may come at the expense of reference data privacy, it is essential
that all three aspects are duly considered. In this paper, we first examine the
availability of reference data and its privacy treatment in previous works and
demonstrate its necessity for fairly comparing defenses. Second, we propose a
baseline defense that enables the utility-privacy tradeoff with respect to both
training and reference data to be easily understood. Our method is formulated
as an empirical risk minimization with a constraint on the generalization
error, which, in practice, can be evaluated as a weighted empirical risk
minimization (WERM) over the training and reference datasets. Although we
conceived of WERM as a simple baseline, our experiments show that,
surprisingly, it outperforms the most well-studied and current state-of-the-art
empirical privacy defenses using reference data for nearly all relative privacy
levels of reference and training data. Our investigation also reveals that
these existing methods are unable to effectively trade off reference data
privacy for model utility and/or training data privacy. Overall, our work
highlights the need for a proper evaluation of the triad model utility /
training data privacy / reference data privacy when comparing privacy defenses. | Caelin G. Kaplan, Chuan Xu, Othmane Marfoq, Giovanni Neglia, Anderson Santana de Oliveira | 2023-10-18T17:07:07Z | http://arxiv.org/abs/2310.12112v1 | # A Cautionary Tale:
###### Abstract.
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility. Most existing defenses against membership inference attacks assume access to reference data, defined as an additional dataset coming from the same (or a similar) underlying distribution as training data. Despite the common use of reference data, previous works are notably reticent about defining and evaluating reference data privacy. As gains in model utility and/or training data privacy may come at the expense of reference data privacy, it is essential that all three aspects are duly considered. In this paper, we conduct the first comprehensive analysis of empirical privacy defenses. First, we examine the availability of reference data and its privacy treatment in previous works and demonstrate its necessity for fairly comparing defenses. Second, we propose a baseline defense that enables the utility-privacy tradeoff with respect to both training and reference data to be easily understood. Our method is formulated as an empirical risk minimization with a constraint on the generalization error, which, in practice, can be evaluated as a weighted empirical risk minimization (WERM) over the training and reference datasets. Although we conceived of WERM as a simple baseline, our experiments show that, surprisingly, it outperforms the most well-studied and current state-of-the-art empirical privacy defenses using reference data for nearly all relative privacy levels of reference and training data. Our investigation also reveals that these existing methods are unable to trade off reference data privacy for model utility and/or training data privacy, and thus fail to operate outside of the high reference data privacy case. Overall, our work highlights the need for a proper evaluation of the triad "model utility / training data privacy / reference data privacy" when comparing privacy defenses.
privacy-preserving machine learning, empirical privacy defenses, statistical learning
+
Footnote †: This paper has been accepted to PETS 2024: The 24th Privacy Enhancing Technologies Symposium, July 15–20, 2024, Bristol, UK.
## 1. Introduction
Data-driven applications, often using machine learning models, are proliferating throughout industry and society. Consequently, concerns about the use of data relating to individual persons have led to a growing body of legislation, most notably the European Union's General Data Protection Regulation (GDPR) (Sundundhi et al., 2018). According to the GDPR principle of data minimization, it is necessary to reduce the degree to which data can be connected to individuals, even when that data is used for the purposes of training a statistical model (Sundhi et al., 2018). It has therefore become important to ensure that a machine learning model is not leaking private information about its training data.
Membership inference attacks (MIAs), which seek to discern whether or not a given data point has been used during training, have emerged as a key evaluation tool for empirically measuring a machine learning model's privacy leakage (Krizhevsky et al., 2014). Indeed, inferring training dataset membership can be thought of as the most fundamental privacy violation. Although other attacks exist, such as model inversion (Krizhevsky et al., 2014), property inference (Krizhevsky et al., 2014), dataset reconstruction (Sundhi et al., 2018), and model extraction (Krizhevsky et al., 2014; Ghahramani et al., 2018; Ghahramani et al., 2018), they all require a stronger adversary than is necessary for MIAs.
Many methods have been proposed to defend against MIAs. The use of differential privacy (Krizhevsky et al., 2014) (DP) has emerged as a leading candidate for two reasons. First, it provides mathematically rigorous guarantees that upper-bound the influence a given data point can exert on the final machine learning model. Second, it is straightforward to integrate DP into a machine learning model's training procedure with algorithms such as differentially private stochastic gradient descent (DP-SGD) (Bengio et al., 2015) or PATE (Petersson et al., 2016). Despite the many advantages associated with DP, there are several key drawbacks that include: the significant degradation of model utility when using DP during training (Ghahramani et al., 2018), which is even more severe for underrepresented groups (Bengio et al., 2015; Ghahramani et al., 2018; Ghahramani et al., 2018; Ghahramani et al., 2018), and the difficulty of translating DP's theoretical privacy guarantees to real-world privacy leakage (Bengio et al., 2015; Bengio et al., 2015; Bengio et al., 2015; Bengio et al., 2015).
To address these issues, empirical privacy defenses (i.e., without theoretical privacy guarantees) have been developed to protect the privacy of training data against MIAs. Existing empirical privacy defenses can be categorized by their method of protecting the training data (e.g., regularization (Sundhi et al., 2018; Ghahramani et al., 2018), confidence-vector masking (Sundhi et al., 2018; Ghahramani et al., 2018), knowledge distillation (Krizhevsky et al., 2014)). Alternatively, one can group defenses by whether they use only the private training data (Krizhevsky et al., 2014) or require access to reference data (Sundhi et al., 2018; Bengio et al., 2015; Bengio et al., 2015; Bengio et al., 2015; Bengio et al., 2015), defined as additional data from the same (or a similar) underlying distribution (Bengio et al., 2015). The two most prominent differentially private defenses can also be distinguished along these lines,
where PATE (PATE, 2017) requires access to (unlabeled) reference data but DP-SGD (Beng et al., 2017) does not.
There are several problems with the current evaluation strategy of empirical privacy defenses. First, today's best practice is to produce a utility-privacy curve that compares a model's classification accuracy with its training data privacy for different values of a given defense parameter (e.g., different regularization term values). Although this approach appears valid in the general case, assuming access to reference data makes the situation more complicated. This additional dataset may have its own privacy requirements (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018), which we discuss in detail in Section 2.4. As gains in model utility and/or training data privacy usually come at the expense of reference data privacy, it is only possible to meaningfully compare defenses when the _relative_ level of privacy considerations between these two datasets is made explicit. To demonstrate this issue, we present a concrete example in Figure 1, where "AdvReg" corresponds to adversarial regularization (Zhou et al., 2017), the most well-studied empirical privacy defense, and "AdvReg-RT" corresponds to an alternative version of the defense that we propose (defined in Section 4.3.2). Looking only at the utility-privacy curves1 with respect to training data, it seems that AdvReg-RT is strictly better than AdvReg: for any given value of test accuracy, AdvReg-RT is able to produce a model that yields a lower MIA accuracy on the training data. However, when the utility-privacy curves are examined with respect to both training and reference data, one cannot determine the better method without knowing their relative privacy considerations.
Footnote 1: For the AdvReg and AdvReg-RT, the curves are obtained by changing the relative importance of the classification loss and the attacker loss (Zhou et al., 2017), i.e., the value of the parameter \(\lambda\) in (8).
A second problem with the current evaluation methodology is the lack of a well-understood and simple baseline. The literature contains several examples where proposed empirical privacy defenses have been later shown to leak significantly more training data privacy than originally reported and sometimes to even perform worse than simpler defenses (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018). A well-established baseline could have provided more accurate expectations about the ability of these defenses.
_Thus, there is a strong need for the development of a baseline designed to operate in the same assumption setting as the vast majority of existing empirical privacy defenses and for an evaluation that takes reference data privacy into account._
**Contributions.** We introduce the notion of a training-reference data privacy tradeoff and conduct the first comprehensive investigation into how empirical privacy defenses perform with respect to all three relevant metrics: model utility, training data privacy leakage, and reference data privacy leakage. Given this evaluation setting, we propose a well-motivated baseline that introduces the privacy requirement as a constraint on the generalization capability (Zhou et al., 2017) of the learned model. Our formulation leads to a convenient weighted empirical risk minimization (WERM), where the training and reference data can be weighted according to the relative privacy level of the two datasets. We prove that WERM enjoys theoretical guarantees both on the resulting model utility and the relative privacy level of training and reference data.
Our experimental results show that, surprisingly, WERM outperforms state-of-the-art empirical privacy defenses using reference data in nearly all training and reference data relative privacy regimes, including the case of public reference data. Additionally, we demonstrate that existing methods are only capable of extracting limited information from reference data during training and thus fail to effectively trade off reference data privacy for model utility and/or training data privacy. In particular, the mechanisms provided by these defenses to control the utility-privacy tradeoff with respect to the three aforementioned factors do not function as expected, since they are only able to operate in the case where reference data privacy is highly valued. By contrast, WERM is interpretable, straightforward to train, and highly effective. These traits enable it to serve as a baseline for evaluating future empirical privacy defenses using reference data. Importantly, comparing against our method requires selecting relative weights for the loss on the training data and the reference data, which makes explicit the underlying assumption about their relative privacy.
The remainder of the paper is organized as follows. In Section 2, we provide the background knowledge necessary to understand the domain of empirical privacy defenses. In Section 3, we present WERM and analyze its theoretical properties. In Section 4, we conduct a comprehensive set of experiments to evaluate our baseline in comparison to existing state-of-the-art methods. In Section 6, we conclude our paper and discuss future work.
## 2. Background
### Machine Learning Notation
In standard classification tasks, the goal is to learn a function \(f_{\theta}\) that maps input examples \(x\in\mathcal{X}\) to a \(k\)-class probability distribution over a set of classes \(\mathcal{Y}=\{1,2,\ldots,k\}\). The function's
Figure 1. Tradeoff between a defended classifier’s prediction accuracy on test data (i.e., its model utility), MIA accuracy on training data (i.e., training data privacy leakage), MIA accuracy on reference data (i.e., reference data privacy leakage) for Purchase100 dataset. The key takeaway is that one cannot solely look at training data privacy leakage when evaluating the utility-privacy tradeoff of a given defense method.
output, \(f_{\theta}(x)\), is a vector, known as the confidence-vector, where each entry, \(f_{\theta}(x)_{y}\), represents the model's confidence about input \(x\) belonging to class \(y\). The model training entity has access to a training dataset of \(N_{T}\) examples, \(D_{T}=\{(x_{1},y_{1}),\ldots,(x_{N_{T}},y_{N_{T}})\}\), which have been drawn from an unknown underlying distribution \(\mathbb{D}\).
Although we only have access to \(D_{T}\), for a learned function to make useful predictions, it must perform well on unseen data also coming from \(\mathbb{D}\) (i.e., test data). More formally, the task of training a model entails finding the vector of parameters, \(\theta\in\Theta\), that minimizes the expected risk (expected loss) \(L_{\mathbb{D}}\):
\[\min_{\theta\in\Theta}\,L_{\mathbb{D}}(f_{\theta})=\min_{\theta\in\Theta}\, \operatorname*{\mathbb{E}}_{(x,y)\sim\mathbb{D}}\,\left[\ell(f_{\theta},(x,y))\right] \tag{1}\]
where \(\ell(\cdot,\cdot)\) is a loss function. In supervised classification tasks, the loss function is often chosen to be the cross-entropy loss, \(\ell(f_{\theta},(x,y))=-\sum_{y^{\prime}\in\mathcal{Y}}\,\mathbbm{1}_{y^{\prime}=y}\log(f_{\theta}(x)_{y^{\prime}})\). As we do not have access to \(\mathbb{D}\), we cannot directly minimize the expected risk. Therefore, we instead minimize the loss over our training data, \(D_{T}\), which we define as the empirical risk (training loss) \(L_{D_{T}}\):
\[\min_{\theta\in\Theta}\,L_{D_{T}}(f_{\theta})=\min_{\theta\in\Theta}\,\frac{1 }{N_{T}}\sum_{i=1}^{N_{T}}\ell(f_{\theta},(x_{i},y_{i}))\,. \tag{2}\]
The empirical risk minimization (ERM) in (2) is often solved using gradient descent methods (Kingma and Ba, 2014). Given a satisfactory set of model parameters, \(\theta_{s}\), the generalization error, also referred to as the generalization gap, is defined as:
\[\text{Generalization Error}=L_{\mathbb{D}}(f_{\theta_{s}})-L_{D_{T}}(f_{\theta_{ s}})\,. \tag{3}\]
The generalization error serves to quantify the difference between the training loss and expected loss. The framework of statistical learning theory (Srivastava et al., 2014) enables the derivation of theoretical bounds for the generalization error, which we use to provide guarantees for our proposed method.
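Since \(L_{\mathbb{D}}\) cannot be computed directly, in practice the generalization error (3) is estimated by replacing the expected risk with the average loss on held-out data drawn from \(\mathbb{D}\). A minimal sketch (the function names are our own, not from the paper):

```python
import numpy as np

def generalization_gap(per_example_loss, model, train_set, heldout_set):
    """Estimate Eq. (3): average held-out loss minus average training loss.
    `per_example_loss(model, (x, y))` returns the loss l(f_theta, (x, y))."""
    train = np.mean([per_example_loss(model, ex) for ex in train_set])
    heldout = np.mean([per_example_loss(model, ex) for ex in heldout_set])
    return heldout - train
```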
### Membership Inference Attacks
#### 2.2.1. Attack Setting
In the most generic case, a MIA operates in a setting where there exists:
* A training dataset, \(D_{T}\), (drawn from the distribution \(\mathbb{D}\)), whose privacy should be protected
* A machine learning model, \(f_{\theta}\), which will be referred to as the target model, that is trained on \(D_{T}\) and possibly additional data sources (e.g., reference data)
* An adversary, \(\mathcal{A}\), who seeks to infer whether a target data point in a set \(D^{\text{adv}}\) belongs to \(D_{T}\)
#### 2.2.2. Evaluation Setting
The dataset \(D^{\text{adv}}\), used for evaluating the performance of most previous attacks (Srivastava et al., 2014; Ganin et al., 2015; Ganin et al., 2015; Ganin et al., 2015), is constructed such that it contains half of the training data, denoted as \(D^{\text{adv}}_{T}\), and an equal size sample of non-training data from the same underlying distribution, denoted as \(D^{\text{adv}}_{\bar{T}}\). Accuracy is the standard metric used for evaluation, although recent work by Carlini et al. (Carlini et al., 2019) proposes an alternative.
We use the notation \(\mathcal{A}(x,y)\) to define the binary output of a generic MIA, which codes members as 1 and non-members as 0. The accuracy of an attack against \(D^{\text{adv}}\) can thus be calculated as:
\[\frac{\sum_{(x_{i},y_{i})\in D^{\text{adv}}_{T}}\mathcal{A}(x_{i},y_{i})+\sum_{(x_{i},y_{i})\in D^{\text{adv}}_{\bar{T}}}(1-\mathcal{A}(x_{i},y_{i}))}{\left|D^{\text{adv}}_{T}\right|+\left|D^{\text{adv}}_{\bar{T}}\right|} \tag{4}\]
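A direct implementation of this evaluation might look as follows (a sketch with our own naming; `attack` is any function implementing \(\mathcal{A}\)):

```python
def mia_accuracy(attack, members, non_members):
    """Eq. (4): fraction of correct membership calls, where attack(x, y)
    returns 1 for predicted members and 0 for predicted non-members."""
    correct = sum(attack(x, y) for (x, y) in members)
    correct += sum(1 - attack(x, y) for (x, y) in non_members)
    return correct / (len(members) + len(non_members))
```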
#### 2.2.3. Threat Model
The potential for adversaries to perform effective membership inference increases with every additional piece of information they can access. Therefore, it is important to clearly articulate the assumptions underlying each potential attack. To the best of our knowledge, all known attacks proposed in the literature rely on at least one of the following four fundamental assumptions about the adversary's knowledge:
1. Knowledge of the ground-truth label for a target data point.
2. Access to either the largest confidence value or the entire confidence-vector when evaluated on a target data point, as opposed to merely the predicted label.
3. Access to a dataset drawn from the same distribution as the training data (often referred to as population data (Yao et al., 2019)).2
Footnote 2: Note that the attacker’s population data plays a similar role to reference data for the defender, but for clarity we avoid using the same name.
4. Access to either a portion or all of the ground-truth training data, excluding the target data point whose membership the adversary wants to infer.
Adversaries with access to population data (Assumption 3) and/or ground-truth training data (Assumption 4) are positioned to launch significantly more sophisticated and potent attacks. In Table 5 (in Appendix D.1), we present the different adversary assumptions for some of the most well-known MIAs.
#### 2.2.4. Existing Membership Inference Attacks
MIAs can be levied against discriminative (Ganin et al., 2015) and generative (Bahdan et al., 2016) machine learning models. One key distinction among MIAs is whether the adversary has access to the inner-workings of the target model, such as weights, gradients, etc. (white-box), or only access to the target model's output (black-box). When evaluating our proposed baseline against state-of-the-art defenses, we follow previous work (Srivastava et al., 2014; Srivastava et al., 2014; Ganin et al., 2015) and assume that the adversary has black-box access to the target model. Therefore, from now on we focus on black-box attacks, and refer the reader to (Srivastava et al., 2014) for a comprehensive review of white-box attacks.
The simplest attack is the gap attack (Srivastava et al., 2014), which predicts any correctly classified data point as a member and any misclassified data point as a non-member:
\[\mathcal{A}_{\text{gap}}(x,y)=\mathbbm{1}\{\operatorname*{argmax}_{i}f_{\theta}( x)_{i}=y\}. \tag{5}\]
The name is derived from the fact that the attack directly exploits the generalization error (gap) described in (3). This attack only requires the assumption that an adversary has access to the ground-truth label.
When the adversary has access to more fine-grained information (e.g., the confidence value associated to the predicted class or the entire confidence-vector), one can conduct a threshold-based attack (Ganin et al., 2015; Srivastava et al., 2014). Using the confidence value associated to the predicted class as an example, we have:
\[\mathcal{A}_{\text{conf}}(x,y)=\mathbbm{1}\{f_{\theta}(x)_{y}>\tau\}, \tag{6}\]
where \(\tau\) is a class-independent threshold. Song and Mittal (Song and Mittal, 2015) demonstrated that threshold-based MIAs are the most effective among those that do not require access to training/non-training data. Further details regarding the design of the gap attack and extensions of threshold-based MIAs can be found in Appendix D.2.
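Both attacks can be written in a few lines. The sketch below mirrors (5) and (6), assuming black-box access to the confidence vector \(f_{\theta}(x)\) and, for the threshold attack, knowledge of the ground-truth label:

```python
import numpy as np

def gap_attack(f, x, y):
    """Eq. (5): predict membership iff the target model classifies (x, y)
    correctly; f(x) returns the model's confidence vector."""
    return int(np.argmax(f(x)) == y)

def conf_attack(f, x, y, tau):
    """Eq. (6): predict membership iff the confidence assigned to the
    ground-truth class exceeds the class-independent threshold tau."""
    return int(f(x)[y] > tau)
```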
In Section 4, following the methodology laid out in (Song and Mittal, 2015), we assess our proposed defense, Weighted Empirical Risk Minimization (WERM), against a variety of threshold-based MIAs. Additionally, we consider a neural network-based MIA (Mikhlin et al., 2017), which could be employed by a stronger adversary.
### Empirical Privacy Defenses
Among empirical privacy defenses using reference data, the methods based on regularization techniques are the best performing (Song and Mittal, 2015). As WERM belongs to this group, we provide background for this type of defense and refer the reader to (Song and Mittal, 2015) for background on defenses using knowledge distillation. The idea of regularization defenses is to achieve a model that generalizes well, such that the distribution of model outputs on training data is similar to the distribution of outputs on unseen test data. Standard approaches to improve regularization, such as early-stopping (Beng et al., 2017), weight decay (Song and Mittal, 2015), and dropout (Song and Mittal, 2015), have been observed to improve a model's robustness against a variety of MIAs (Mikhlin et al., 2017; Song and Mittal, 2015). Additionally, some regularization terms have been proposed that seek to explicitly protect against attacks, such as adversarial regularization (Song and Mittal, 2015) and MMD-based regularization (Song and Mittal, 2015).
All empirical privacy defenses using reference data assume that the model training entity has access to training data, \(D_{T}=\big{\{}(x_{1},y_{1})\), \(\ldots,(x_{N_{T}},y_{N_{T}})\big{\}}\), and reference data, \(D_{R}=\big{\{}(x^{\prime}_{1},y^{\prime}_{1}),\ldots,(x^{\prime}_{N_{R}},y^{ \prime}_{N_{R}})\big{\}}\), which come from the same (or a similar) underlying distribution and are of size \(N_{T}=|D_{T}|\) and \(N_{R}=|D_{R}|\), respectively. The defenses aim to make model predictions on training and reference data sufficiently similar, such that it will be hard for an attacker to distinguish a model's output on training and non-training data. The closer the distributions of training data and reference data, the easier the task for the defense and the smaller the model utility loss.
#### 2.3.1. Adversarial Regularization
Adversarial regularization (AdvReg) [30] is a model training framework formulated as a min-max game, where a classifier, \(f_{\theta}\), is trained to be optimally protected against an MIA model, \(h_{\phi}\). The first component is the loss of the classifier, \(f_{\theta}\), over the training data, i.e., \(L_{D_{T}}(f_{\theta})\) as described in (2). The second component is the gain of the attack model:
\[\begin{split} G_{D_{T},D_{R}}(f_{\theta},h_{\phi})& =\frac{1}{N_{T}}\sum_{i=1}^{N_{T}}\log\big{[}h_{\phi}(x_{i},y_{i},f_{\theta}(x_{i}))\big{]}+\\ &\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}\log\big{[}1-h_{\phi}(x^{ \prime}_{i},y^{\prime}_{i},f_{\theta}(x^{\prime}_{i}))\big{]},\end{split} \tag{7}\]
where \(h_{\phi}(x,y,f_{\theta}(x))\) outputs the probability that a given target data point is a member of the training data. The attack model's gain quantifies its ability to predict the training data as members and the reference data as non-members.
The whole optimization problem can be formulated as:
\[\min_{\theta\in\Theta}\max_{\phi\in\Phi}\;L_{D_{T}}(f_{\theta})+\lambda\;G_{D _{T},D_{R}}(f_{\theta},h_{\phi}), \tag{8}\]
where \(\lambda\) is the regularization term's weight and serves to trade utility for privacy (a larger \(\lambda\) should result in the trained model having greater privacy protection at the cost of decreased utility). The min-max problem described in (8) is solved by alternating gradient steps between the minimization and the maximization problems.
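To illustrate how the alternation proceeds, the PyTorch-style sketch below performs one attack step followed by one classifier step; the module interfaces (in particular the attack model's signature \(h_{\phi}(x,y,f_{\theta}(x))\) returning a probability) and the single-step schedule are our assumptions, not a reproduction of the original implementation.

```python
import torch
import torch.nn.functional as F

def advreg_step(classifier, attack, opt_c, opt_a, train_batch, ref_batch, lam):
    """One alternation of the min-max game (8). `classifier` returns logits;
    `attack(x, y, out)` returns a membership probability in (0, 1)."""
    (x_t, y_t), (x_r, y_r) = train_batch, ref_batch

    # Maximization: improve the attack h_phi (classifier outputs detached).
    with torch.no_grad():
        out_t, out_r = classifier(x_t), classifier(x_r)
    gain = (torch.log(attack(x_t, y_t, out_t)).mean()
            + torch.log(1.0 - attack(x_r, y_r, out_r)).mean())  # gain (7)
    opt_a.zero_grad()
    (-gain).backward()  # gradient ascent on the gain
    opt_a.step()

    # Minimization: improve the classifier against the current attack.
    out_t, out_r = classifier(x_t), classifier(x_r)
    gain = (torch.log(attack(x_t, y_t, out_t)).mean()
            + torch.log(1.0 - attack(x_r, y_r, out_r)).mean())
    loss = F.cross_entropy(out_t, y_t) + lam * gain  # objective (8)
    opt_c.zero_grad()  # opt_c holds only the classifier's parameters
    loss.backward()
    opt_c.step()
```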
#### 2.3.2. MMD-based Regularization
Alternatively, MMD-based regularization (MMD), as proposed in (Song and Mittal, 2015), uses the Maximum Mean Discrepancy as the regularization term, leading to the following problem:
\[\min_{\theta\in\Theta}L_{D_{T}}(f_{\theta})+\lambda\cdot\left\|\frac{1}{N_{T} }\sum_{i=1}^{N_{T}}\psi(f_{\theta}(x_{i}))-\frac{1}{N_{R}}\sum_{i=1}^{N_{R}} \psi(f_{\theta}(x^{\prime}_{i}))\right\|_{\mathcal{H}} \tag{9}\]
where \(\mathcal{H}\) is a universal Reproducing Kernel Hilbert Space (RKHS) and \(\psi\) is a function mapping the model's outputs to points in \(\mathcal{H}\). By solving the problem in (9), the resulting model seeks to simultaneously minimize the empirical risk on the training data and the difference between the model's outputs on training and reference data in the space \(\mathcal{H}\). Traditionally, to calculate the MMD one would find the \(\psi\) that maximizes the distance in \(\mathcal{H}\). Instead, to simplify the training process, the authors of (Song and Mittal, 2015) select \(\psi\) to be a given Gaussian kernel.
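As a minimal sketch of the regularizer in (9), assuming a fixed Gaussian kernel with bandwidth \(\sigma\) (an illustrative choice, not the value used in the cited work), the squared MMD between the model's outputs on the two batches can be computed as follows.

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2))
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

def mmd2(out_train, out_ref, sigma=1.0):
    """Biased estimate of MMD^2 between two batches of model outputs."""
    k_tt = gaussian_kernel(out_train, out_train, sigma).mean()
    k_rr = gaussian_kernel(out_ref, out_ref, sigma).mean()
    k_tr = gaussian_kernel(out_train, out_ref, sigma).mean()
    return k_tt + k_rr - 2.0 * k_tr

# The penalty in (9) is the RKHS norm, i.e., the square root of this estimate:
# loss = ce_loss + lam * mmd2(f(x_t), f(x_r)).clamp_min(0).sqrt()
```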
### Reference Data Overview and Threat Model
The vast majority of empirical privacy defenses in the literature (AytReg, 2016; AytReg, 2016; AytReg, 2016; AytReg, 2016; AytReg, 2016; Song and Mittal, 2015) require access to reference data, which is assumed to come from the same (or a similar) underlying distribution as training data. In Section 2.4.1, we discuss the availability of reference data and its level of privacy. In Section 2.4.2, we examine how existing empirical privacy defenses have dealt with the privacy of reference data.
#### 2.4.1. Reference Data Availability and Privacy
Although not always called "reference data," the notion of having access to a distinct dataset coming from the same (or a similar) underlying distribution as training data is common throughout many domains of machine learning literature (e.g., the design of MIAs as mentioned in Section 2.2.3). We can divide the examples into cases where reference data is public and cases where reference data is private. In the public reference data setting, large publicly available datasets are routinely employed to pre-train a model which will later be fine-tuned using a private and smaller training dataset (AytReg, 2016; AytReg, 2016) or a public dataset can be used for knowledge transfer across heterogeneous models trained on private local datasets in a federated learning scenario (Kang et al., 2017; Krizhevsky et al., 2014). When reference data is public, empirical privacy defenses can use it to augment the privacy of training data, while disregarding concerns about the privacy of the reference data itself.
In the private reference data setting, the availability of reference data may result from model training entities having private datasets that contain certain records with distinct privacy requirements. The "pay-for-privacy" business model enables companies to acquire data from users or consumers at various privacy levels (Krizhevsky et al., 2014). For example, ISPs are known to provide discounts to their users in exchange for the possibility of exploiting their data for targeted advertisement (possibly powered by a machine learning model) (Beng et al., 2017), and some mobile phone applications offer a free and a paid version that provides
better privacy protection to users of the paid service (Kumar et al., 2017). Training and reference data can then correspond to data from users with a different pricing scheme. Different privacy levels may also be due to past data leaks, e.g., due to malicious security breaches or human errors. As will become apparent, in this scenario where a single dataset has two segments with distinct privacy considerations, one can use either the more or less private data segment as reference data to better protect the privacy of the remaining segment (training data). Even in standard machine learning training, such considerations may be a leading factor in choosing how to split the available data into training and validation segments, as they have been shown to each leak different amounts of private information (Kumar et al., 2017). Finally, we observe that heterogeneity in privacy levels is also implicitly assumed in fog learning (Kumar et al., 2017), where federated learning clients share a part of their local datasets to bring their respective distributions closer to facilitate the training of a common model.
#### 2.4.2. Reference Data in Empirical Privacy Defenses
In Table 1, we present seven empirical privacy defenses using reference data: the first six are existing defenses and the seventh is our proposed method, WERM. The existing defenses can be subdivided into two categories based on their treatment of reference data privacy: private (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017) and "not mentioned" (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017). We use the label "not mentioned" for works where reference data privacy is neither discussed nor evaluated.3 Moreover, each of the three works that consider reference data to be private evaluates its privacy leakage at only a single point on the utility-privacy curve and shows it to be much smaller than the training data privacy leakage. These results reveal an implicit choice by the authors: reference data privacy is valued more highly than training data privacy.
Footnote 3: We note that the omission may suggest they implicitly consider the reference data to be public.
We do not take a particular stance on the relative privacy of training and reference data, i.e., on whether the reference data in empirical privacy defenses should be considered more or less private than training data--as shown in Section 4, we evaluate WERM in all possible reference data privacy settings and show that it outperforms state-of-the-art defenses across almost the entire spectrum. Yet, we argue that, without quantifying the relative importance assigned to the three key objectives (model utility, training data privacy, and reference data privacy), we cannot adequately compare the performance of these defenses. For example, in the papers that consider reference data more private than training data, the proposed defenses still allow some reference data privacy leakage to achieve high model utility and training data privacy protection. Is this the right amount of privacy leakage? Perhaps one should instead seek to trade much more reference data privacy to improve the other two metrics. Alternatively, if reference data privacy is of the utmost importance, the current leakage may already be unacceptable. Similar considerations hold for the public reference data case: given that reference data privacy is not a concern, are the proposed methods achieving the best possible tradeoff between model utility and training data privacy?
The next section will introduce our method and show how its utility-privacy tradeoffs are amenable to analysis.
## 3. Weighted Empirical Risk Minimization
In this section, we introduce our proposed baseline, WERM, and analyze its theoretical properties related to generalization and privacy protection. WERM's design is rooted in the fundamental principles of statistical learning, particularly in the generalization error (3). WERM utilizes a weight term, \(w\), which simultaneously regulates the tradeoff between the privacy of training data and reference data, as well as the tradeoff between utility and privacy. Employing tools from differential privacy (DP) (Kumar et al., 2017) and statistical learning theory (Kumar et al., 2017), we derive theoretical bounds that enable us to understand how \(w\) and the size of the two datasets impact the relative privacy leakage and the model's utility. Following all related work (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017), we consider the two distributions from which \(D_{T}\) and \(D_{R}\) are drawn to be identical. The relative privacy results in Theorem 3.1 do not depend on \(D_{T}\) and \(D_{R}\) coming from the same underlying distribution, and the generalization bound in Theorem 3.2 can be extended to the case where the distributions are only similar.
### Motivation
Drawing any conclusion about the quality of a defense can only come after comparing it to an interpretable and well-performing
| Defense | Category | Reference Data Privacy Setting | Reference Data Privacy Evaluation |
| --- | --- | --- | --- |
| Adversarial Regularization (Kumar et al., 2017) | regularization | not mentioned | no evaluation |
| MemGuard (Kumar et al., 2017) | confidence-vector masking | not mentioned | no evaluation |
| Model Pruning (Kumar et al., 2017) | knowledge distillation | not mentioned | no evaluation |
| MMD-based Regularization (Kumar et al., 2017) | regularization | private (relative level unspecified) | yes (single privacy level) |
| Distillation for Membership Privacy (Kumar et al., 2017) | knowledge distillation | private (relative level unspecified) | yes (single privacy level) |
| Prediction Purification (Kumar et al., 2017) | confidence-vector masking | private (relative level unspecified) | yes (single privacy level) |
| WERM (this paper) | regularization | all possible settings discussed | yes (all privacy levels) |

Table 1. Comparison of empirical privacy defenses by reference data treatment. In the third column, "relative level unspecified" means the target level of relative privacy requirements between training and reference data is not stated. In the fourth column, "single privacy level" means the reference data privacy leakage is evaluated at a single point on the utility-privacy curve. We use a dashed line (–) to convey that the defense either does not use reference data or does not need to evaluate reference data privacy leakage.
baseline. Therefore, our goal is to propose a baseline that makes the training-reference data privacy tradeoff explicit and can operate across the entire range of possible privacy settings. Our method's design originates from the understanding that all black-box MIAs share a common design feature, which is exploiting the difference between a model's output on training and non-training data. What they consider as a model's output may differ (e.g., predicted label, loss, confidence-vector), but the distinguishability of output distributions is the prerequisite for a membership inference vulnerability to exist in the black-box setting. Thus, employing an ideal membership inference defense will result in a defended model that behaves identically when queried with training or non-training data from the same distribution. The design of a defense based on regularization requires a decision about how to define equivalence of output. AdvReg (Section 2.3.1) introduces a regularization term that constrains the difference between a classifier's confidence-vector output on training and reference data based on a learned neural network; MMD (Section 2.3.2) constrains this difference using a Gaussian kernel. Our proposed baseline is motivated by the fact that a smaller generalization error implies that the empirical loss is closer to the expected loss and, subsequently, the loss observed on any future sample drawn from the same distribution, making it difficult for the adversary to conclude which samples were part of the training data. Thus, WERM addresses the fundamental challenge common to all regularization defenses: learning a classifier whose outputs are indistinguishable between training and reference data. However, its design, rooted in statistical learning principles, results in a unique algorithm. WERM not only exhibits superior performance (Section 4.4) but also provides enhanced interpretability (Section 3.2), simpler configuration (Section 5.1), and reduced computational costs (Section 5.2).
### Method
We propose to train a standard ERM using both training and reference data, while constraining the generalization error with respect to each of the datasets. Our problem can be formulated as:
\[\begin{split}\text{Input:}&\ D_{T}\sim\mathbb{D}^{N_{T}},D_{R}\sim\mathbb{D}^{N_{R}},\ c_{T},c_{R}\in\mathbb{R}^{+}\\ \min_{\theta\in\Theta}&\ L_{D}(f_{\theta})\\ \text{s.t.}&\ L_{D}(f_{\theta})-L_{D_{T}}(f_{\theta})\leq c_{T}\\ &\ L_{D}(f_{\theta})-L_{D_{R}}(f_{\theta})\leq c_{R}\end{split} \tag{10}\]
where \(D=D_{T}\cup D_{R}\), \(N_{T}\) and \(N_{R}\) are the respective sizes of training and reference data, \(L_{D}(f_{\theta})=\frac{N_{T}}{N}L_{D_{T}}(f_{\theta})+\frac{N_{R}}{N}L_{D_{R} }(f_{\theta})\), with \(N=N_{T}+N_{R}\), and the constants \(c_{T}\) and \(c_{R}\) constrain the generalization error on the training data and on the reference data, respectively. On the basis of our discussion in Section 3.1, smaller values of \(c_{T}\) (\(c_{R}\)) correspond to greater privacy protection for training (reference) data. For the purpose of readability, in the rest of this section, we write \(L_{D}(f_{\theta})\) as \(L_{D}\) (i.e., the loss over a given dataset is implied to be evaluated for \(f_{\theta}\)). Moreover, for simplicity, we consider the case where \(N_{T}=N_{R}\).
Studying the Lagrangian of problem (10) and introducing the optimal multipliers \(\lambda^{*}\) and \(\mu^{*}\), as detailed in Appendix A.1, we can show that (10) is equivalent to the following two problems:
\[\min_{\theta\in\Theta}\Big{[}\frac{1}{2}+\mu^{*}\Big{]}L_{D_{T}}+\Big{[}\frac{1}{2}-\mu^{*}\Big{]}L_{D_{R}}, \tag{11}\] \[\min_{\theta\in\Theta}\Big{[}\frac{1}{2}-\lambda^{*}\Big{]}L_{D_{T}}+\Big{[}\frac{1}{2}+\lambda^{*}\Big{]}L_{D_{R}}, \tag{12}\]
where (11) corresponds to the case when reference data privacy is the stricter constraint (i.e., \(c_{R}<c_{T}\)) and (12) corresponds to the case when training data privacy is the stricter constraint (i.e., \(c_{T}<c_{R}\)). In both cases, we obtain a weighted sum of the two empirical risks with a larger weight (i.e., \(>1/2\)) given to the dataset with the looser privacy constraint. Using equal weights corresponds to equal privacy constraints.
Motivated by this reasoning, we propose the following weighted empirical risk minimization (WERM) as a baseline for privacy defenses using reference data:
\[\min_{\theta\in\Theta}L_{D}^{w}(f_{\theta})=(1-w)L_{D_{T}}(f_{\theta})+wL_{D_{R}}(f_{\theta}),\ \text{for some}\ w\in[0,1]. \tag{13}\]
This formulation allows us to simply trade reference data privacy for training data privacy by changing the parameter \(w\). Higher (lower) values of \(w\) lead to greater privacy protection for training (reference) data. In particular, the privacy of training data and reference data is perfectly protected for \(w=1\) and \(w=0\), respectively, which is the case where the corresponding dataset is not used to compute the defended model. Another benefit of WERM's formulation in (13) is its ability to accommodate multiple datasets, each with a distinct privacy level (up to the limit case where every point is a separate dataset with its own privacy considerations). It is unclear how AdvReg (MMD, 2018) or MMD (Krishnan et al., 2019) could be adapted to this scenario. We prove Theorem 3.1 on the relative privacy leakage (Appendix A.3) and Theorem 3.2 on the generalization bound (Appendix A.4) for this generalized case.
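Because (13) is simply a reweighted ERM, one gradient step of WERM reduces to a few lines; the PyTorch-style sketch below assumes a classification model returning logits and a cross-entropy loss, which are our illustrative choices.

```python
import torch.nn.functional as F

def werm_step(model, optimizer, train_batch, ref_batch, w):
    """One gradient step on the weighted objective (13)."""
    (x_t, y_t), (x_r, y_r) = train_batch, ref_batch
    optimizer.zero_grad()
    loss_t = F.cross_entropy(model(x_t), y_t)  # L_{D_T}
    loss_r = F.cross_entropy(model(x_r), y_r)  # L_{D_R}
    loss = (1.0 - w) * loss_t + w * loss_r     # L_D^w
    loss.backward()
    optimizer.step()
    return loss.item()
```

The edge cases \(w=0\) and \(w=1\) recover standard training on a single dataset, matching the perfect-protection cases discussed above.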
Along with its high interpretability, WERM is also a lightweight defense, as its computational cost is equivalent to training an undefended model by minimizing the empirical risk over \(N\) samples. This is less computationally expensive than solving AdvReg's min-max problem in (8) or satisfying MMD's additional requirement of comparing the distance for unique classes in a batch (see implementation details in Appendix B.2). A detailed comparison of the training time for these defenses (Section 5.2) confirms this intuition.
In the remainder of this section, we provide theoretical guarantees for WERM's relative training-reference data privacy (Section 3.3) and WERM's model utility (Section 3.4).
### WERM's Privacy
Our analysis in the previous section led us to qualitatively conclude that increasing (decreasing) the reference data weight, \(w\), in WERM results in increased privacy protection for the training (reference) data. Particularly, when the two datasets are the same size and have the same privacy requirements, one should select \(w=1/2\). In this section, we derive more formal privacy guarantees and configuration rules for \(w\) considering general dataset sizes.
The formulation of WERM in (13) is not intrinsically differentially private. However, using DP-SGD (Beng et al., 2017) as the optimization algorithm to solve (13) enables WERM to become a differentially private method. For the purpose of our analysis, we assume this
situation in order to employ tools from DP (Hardt et al., 2017) to measure the relative privacy tradeoff between training data and reference data. Consequently, the \(\epsilon\) values presented are simply a convenient way to achieve our primary goal of quantifying how the weight term, \(w\), and the size of the two datasets impact WERM's relative privacy level. We emphasize that, while possible, we are not proposing to train WERM with DP-SGD to achieve \(\epsilon\)-DP privacy guarantees.
DP-SGD works by clipping the per-sample gradient norms to a certain threshold and adding Gaussian noise with scale \(\sigma\). If properly configured, DP-SGD enjoys \((\epsilon,\delta)\)-DP guarantees, i.e., when a single sample of the dataset is changed, the probability of any possible event observable by an attacker changes at most by a multiplicative factor \(\exp(\epsilon)\) and by an additive term \(\delta\). The larger the noise scale \(\sigma\), the smaller \(\epsilon\geq 0\) and \(\delta\in[0,1)\) can be, and the stronger the privacy guarantees.
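For reference, a schematic version of the clip-and-noise update is shown below; it is a simplified sketch of the general DP-SGD mechanism (omitting subsampling and privacy accounting), not the exact procedure analyzed in Appendix A.3, and the function signature is our own.

```python
import torch

def dp_sgd_update(params, per_example_grads, lr, C, sigma):
    """Schematic DP-SGD step: clip each per-example gradient to norm C,
    sum, add Gaussian noise of scale sigma*C, and average over the batch.

    per_example_grads: list over samples; each entry is a list of tensors,
    one per parameter in `params`.
    """
    n = len(per_example_grads)
    clipped = [torch.zeros_like(p) for p in params]
    for grads in per_example_grads:
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (C / (norm + 1e-12)).clamp(max=1.0)  # clip to norm at most C
        for acc, g in zip(clipped, grads):
            acc.add_(scale * g)
    with torch.no_grad():
        for p, acc in zip(params, clipped):
            noisy = acc + sigma * C * torch.randn_like(p)  # Gaussian noise
            p.add_(-lr * noisy / n)
```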
Fundamentally, an empirical privacy defense that has access to reference data must make a choice regarding how much of the reference data's privacy should be sacrificed to protect the privacy of the training data. We rely on the \(\epsilon\) parameter from DP to quantify the relative privacy of the two datasets. As we will argue after stating our result, in practice, we can consider that the conclusions about the relative privacy hold even if DP-SGD is not used during training.
**Theorem 3.1** (Privacy Leakage).: _For some overall number of training steps, \(K\), WERM minimized with DP-SGD is:_
\[\left(O(\epsilon_{T}),\delta\right)-\text{DP w.r.t. the training dataset }(D_{T}) \tag{14}\] \[\left(O(\epsilon_{R}),\delta\right)-\text{DP w.r.t. the reference dataset }(D_{R}) \tag{15}\]
_where:_
\[\epsilon_{T}=\epsilon_{0}\frac{1-w}{N_{T}},\qquad\epsilon_{R}=\epsilon_{0}\frac{w}{N_{R}}, \tag{16}\] \[0<\epsilon_{0}<\min\Big{(}\frac{N_{T}}{1-w},\frac{N_{R}}{w}\Big{)},\qquad\sigma=\alpha\sqrt{K}\sqrt{2\log\frac{1.25}{\delta}}\,\frac{C}{\epsilon_{0}}, \tag{17}\]
_and \(\sigma\), \(C\), and \(\alpha\) are the noise scale, gradient norm bound, and sampling ratio in DP-SGD, respectively._
The proof of Theorem 3.1 and a detailed description of how we adapt the analysis of DP-SGD from Abadi et al. (Abadi et al., 2017) to be compatible with WERM can be found in Appendix A.3.
It is important to note that the relative privacy of the two datasets, as quantified by the ratio \(\epsilon_{T}/\epsilon_{R}\), is completely governed by \(w\) and the size of the two datasets, and is independent of \(\epsilon_{0}\). In particular, the training data will be more private if and only if \(\frac{1-w}{N_{T}}<\frac{w}{N_{R}}\). Specifically, setting the weight of each empirical loss in (13) proportional to the size of its corresponding dataset leads to the same privacy guarantees for samples in both datasets. In the case where \(N_{T}=N_{R}\), we recover the result we were able to conclude qualitatively in the previous section, i.e., that setting \(w=1/2\) will result in equivalent privacy guarantees for the training and reference data.
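The short computation below, with hypothetical dataset sizes of our choosing, illustrates how the ratio \(\epsilon_{T}/\epsilon_{R}\) from (16) behaves and recovers the equal-privacy weight \(w^{*}=N_{R}/N\).

```python
N_T, N_R = 15000, 5000  # hypothetical dataset sizes

def relative_privacy(w):
    # epsilon_T / epsilon_R = ((1 - w) / N_T) / (w / N_R); independent of epsilon_0.
    return ((1.0 - w) / N_T) / (w / N_R)

for w in (0.1, 0.25, 0.5, 0.9):
    print(f"w = {w:.2f}: eps_T / eps_R = {relative_privacy(w):.2f}")

w_star = N_R / (N_T + N_R)  # equal guarantees: (1 - w)/N_T = w/N_R
print(f"equal-privacy weight w* = {w_star:.2f}")  # 0.25 for these sizes
```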
The independence of the ratio \(\epsilon_{T}/\epsilon_{R}\) from \(\epsilon_{0}\) implies that the same value for the relative privacy of the two datasets is achieved if we set \(\epsilon_{0}\) to a very large value (on the order of the dataset size, see (16)) and then use DP-SGD with negligible noise (\(\sigma\approx 0\) in (17)). These considerations justify our experimental results in Section 4.4, where WERM, trained with the usual gradient descent method (i.e., without clipping or adding noise), provides relative privacy guarantees--as measured by the success of MIAs--qualitatively aligned with the conclusions of Theorem 3.1.
### WERM's Model Utility
We provide a bound for the expected loss of the model learned through WERM (\(f_{\theta_{\text{WERM}}}\)) with respect to the smallest possible loss \(\min_{\theta\in\Theta}L_{\mathbb{D}}(f_{\theta})\).
**Theorem 3.2** (Generalization bound).: _Under the assumption that the loss function is bounded in the range [0, 1], it follows that:_
\[L_{\mathbb{D}}(f_{\theta_{\text{WERM}}})\leq\min_{\theta\in\Theta}L_{\mathbb{D}}(f_{\theta}) \tag{18}\] \[\quad+2\sqrt{\frac{VCdim(\Theta)}{N_{\text{eff}}}}\cdot\sqrt{\gamma_{2}+\log\left(\frac{N}{VCdim(\Theta)}\right)}+\sqrt{\frac{2\ln 2/\delta}{N_{\text{eff}}}}\]
_with probability \(\geq 1-\delta\), where:_
\[f_{\theta_{\text{WERM}}} =\operatorname*{argmin}_{\theta\in\Theta}L_{D}^{w}(f_{\theta}),\] \[L_{D}^{w}(f_{\theta}) =(1-w)L_{D_{T}}(f_{\theta})+wL_{D_{R}}(f_{\theta}),\] \[\gamma_{2} =\max\left\{\frac{4}{VCdim(\Theta)},1\right\},N_{\text{eff}}= \left[\frac{(1-w)^{2}}{N_{T}}+\frac{w^{2}}{N_{R}}\right]^{-1},\]
\(D_{T}\sim\mathbb{D}^{N_{T}}\), \(D_{R}\sim\mathbb{D}^{N_{R}}\), \(D=D_{T}\cup D_{R}\), \(N=|D|\), and \(VCdim(\Theta)\) is the \(VC\)-dimension of hypothesis class \(F_{\Theta}=\{f_{\theta}:\theta\in\Theta\}\).
The proof of Theorem 3.2 can be found in Appendix A.4. For our purposes, it is important to keep in mind that a smaller bound in (18) implies that the performance of a model learned by WERM will be closer to the performance of the best model in the hypothesis class \(F_{\Theta}\), resulting in higher model utility.
Theorem 3.2 makes clear how the classifier's utility depends directly on \(w\). Given a fixed total dataset size and an already selected model class, \(N\) and \(VCdim(\Theta)\) in (18) are held constant. Consequently, the only term that influences the generalization bound is the _effective number of samples_ \(N_{\text{eff}}\): the larger \(N_{\text{eff}}\), the higher the model's utility. It is easy to show that the effective dataset size is always upper-bounded by the total dataset size (i.e., \(N_{\text{eff}}\leq N\)) and is maximized for \(w^{*}=N_{R}/N\). This choice results in the same weight given to every sample independently of whether it belongs to the training or reference data, i.e., \(L_{D}^{w}=\frac{1}{N}\sum_{(x,y)\in D_{T}\cup D_{R}}\ell(f_{\theta},(x,y))\). When \(w=w^{*}\), training and reference data have the same privacy guarantees (\(\epsilon_{T}=\epsilon_{R}\), see Section 3.3). Alternatively, when the privacy considerations are unequal, \(N_{\text{eff}}\) degrades quadratically with respect to the difference between \(w\) and \(w^{*}\). We can thus conclude that heterogeneous privacy requirements for training and reference data (i.e., \(\epsilon_{T}\neq\epsilon_{R}\)) lead to samples being weighted differently in the two datasets, which causes an increase in the privacy of a selected dataset at the expense of overall model utility.
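A quick numerical check (again with hypothetical sizes) confirms that \(N_{\text{eff}}\) peaks at \(w^{*}=N_{R}/N\), where it equals the total dataset size \(N\).

```python
N_T, N_R = 15000, 5000
N = N_T + N_R

def n_eff(w):
    # Effective number of samples from Theorem 3.2.
    return 1.0 / ((1.0 - w) ** 2 / N_T + w ** 2 / N_R)

best_w = max((i / 1000.0 for i in range(1001)), key=n_eff)
print(best_w, N_R / N)    # both 0.25
print(n_eff(N_R / N), N)  # both 20000
```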
### Theoretical Utility-Privacy Tradeoff
By combining Theorem 3.1 and Theorem 3.2, we can study WERM's utility-privacy tradeoff for different dataset sizes and different values of the weight \(w\). In Figure 2, we plot \(\epsilon\) privacy values against effective dataset sizes (\(N_{\text{eff}}\)) for different proportions of training
data and reference data (\(\frac{N_{T}}{N_{R}}\)) and weight values in the interval \([0.0,1.0]\) using a fixed total dataset size (N) of 20000. We select values of \(\frac{N_{T}}{N_{R}}\) equal to 0.25, 1, and 9 to represent each possible distinct data setting (\(N_{T}>N_{R},N_{T}=N_{R},N_{T}<N_{R}\)) without leading to overlapping curves.4
Footnote 4: We observe that the training (resp. reference) data curve for a given ratio \(N_{T}/N_{R}=a\) coincides with the reference (resp. training) data curve for the reciprocal \(N_{T}/N_{R}=1/a\). Thus, it is also possible to observe the behavior for \(N_{T}/N_{R}=4\) and \(N_{T}/N_{R}=1/9\) in Figure 2. We show an example for \(N_{T}/N_{R}=0.25\) in Figure 4 (Appendix E).
For a given dataset size proportion, by varying the reference data weight, \(w\), WERM is capable of achieving a wide spectrum of tradeoffs for model utility, training data privacy, and reference data privacy. When \(w=0\), indicated by the darkest colored points in Figure 2, the reference data is fully protected (marked with "x"), while the training data is most exposed (marked with "o"). As \(w\) increases, indicated by the color of the points becoming lighter, sacrificing reference data privacy (i.e., \(\epsilon_{R}\) increasing) leads to greater training data privacy protection (i.e., \(\epsilon_{T}\) decreasing).
The interaction between \(w\) and model utility is particularly interesting. We observe that increasing the weight causes the model utility to first increase and then decrease. The maximum utility (\(N_{\text{eff}}=N\)) is obtained for \(w^{*}=N_{R}/N\), which coincides with the setting of equal privacy guarantees for the two datasets (\(\epsilon_{T}=\epsilon_{R}=\frac{\epsilon_{0}}{N}\)). This result is independent of the relative size of the two datasets, and indeed we can observe that all curves share the point (\(N,\epsilon_{0}/N\)). When \(\epsilon_{0}\) increases, the \(y\)-scale in Figure 2 increases accordingly, but the shape of the curves does not change. Overall, Figure 2 confirms that WERM's utility-privacy tradeoff is easy to interpret, which is highly desirable for its role as a baseline defense.
## 4. Experiments
In this section, we outline our training strategy and evaluation setting, describe in detail our process for training each empirical privacy defense, and conduct a systematic evaluation of our WERM baseline in a variety of utility-privacy settings. Ultimately, we will demonstrate that WERM's empirical utility-privacy tradeoff (Figure 3) is qualitatively similar to what is predicted by the theoretical analysis (Figure 2), which confirms our intuition that WERM is an interpretable baseline in both theory and practice.
### Datasets
We chose to conduct our experiments on the Purchase100, Texas100, and CIFAR100 datasets because they have been widely used for assessing empirical privacy defenses and MIAs (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2018). A detailed description of each dataset is provided in Appendix C.
### Methodology
#### 4.2.1. Training
Conducting a fair comparison of empirical privacy defenses requires using a standardized approach for dataset pre-processing (e.g., equivalent training/reference/test data size proportions) and model architecture choices for all methods. As AdvReg (Krizhevsky et al., 2014) is the first proposed empirical privacy defense and the most well-studied method, its experimental setting has consequently become the de-facto standard for comparing defenses on the Purchase100 and Texas100 datasets (Krizhevsky et al., 2014; Krizhevsky et al., 2014). In this setting, one applies the defense mechanism to a 4-layer fully connected neural network classifier, and uses 10% of Purchase100 (\(\approx\) 20,000 samples) and 15% of Texas100 (\(\approx\) 10,000 samples) as training data. For the CIFAR100 dataset, we use 20,000 samples for training data and align our study with more recent evaluations that consider a ResNet-18 (Krizhevsky et al., 2014) as the classification model (Krizhevsky et al., 2014; Krizhevsky et al., 2016). We assume that each defense has access to reference data that is the same size as the training data. Following the strategy of the original AdvReg experiments, all classification models are trained using an Adam optimizer (Kingmaa et al., 2014) with a learning rate equal to 0.001. For reasons described in Appendix B.2, MMD requires using a batch size of 512 or greater. This contrasts with the
Figure 2. Theoretical utility-privacy tradeoff for WERM trained with DP-SGD as derived by the bounds in Theorem 3.1 and Theorem 3.2, with \(\epsilon_{0}=1000\) and \(N=N_{T}+N_{R}=20{,}000\). The curves show how the effective dataset size (\(N_{\text{eff}}\), a proxy for model utility), the training data privacy guarantees (\(\epsilon_{T}\)), and the reference data privacy guarantees (\(\epsilon_{R}\)) are influenced by the reference data weight (\(w\)) and the dataset size proportion (\(N_{T}/N_{R}\)). In Section 4.4, we show that the theoretical results presented here are aligned with the empirical results presented in Figure 3.
original AdvReg experiments that use a batch size equal to 128. To ensure a fair comparison, we train all evaluated defenses using both batch sizes and select the best version for each method. For a given defense, the reported results are mean values over 10 training runs for different seeds of a random number generator. Following the same training strategy as previous works [8; 30; 42], we train each defense for a specific number of epochs that ensures the model converges without severely overfitting.5 Additionally, the regularization values we select for training each defense are explicitly chosen to demonstrate all possible relative privacy levels that a given method can achieve.6
Footnote 5: It is also possible to use validation data to find an approximate epoch to end training. However, using a validation dataset introduces questions regarding the validation data’s degree of privacy leakage [35]. As we are already evaluating the relative training and reference data privacy leakage, introducing another dataset will add further complexity to our analysis, which will make it more difficult to interpret the results. In Figure 5 (Appendix E), we present the utility-privacy curves using validation data to determine the number of training epochs. The difference is negligible compared to our results using a predetermined number of epochs.
Footnote 6: As discussed in Section 2.3, higher regularization values should lead to greater privacy protection for training data, potentially leaking more information about reference data.
#### 4.2.2. Evaluation
We use the same methodology and released code7 as Song and Mittal [42], where an empirical privacy defense is evaluated against three threshold-based MIAs and the gap attack [51]. Additionally, we evaluate against a neural network-based attack [41] that could be executed by a stronger adversary with access to training/non-training samples, and the results, shown in Figure 6 (Appendix E), are qualitatively similar to those in Figure 3.
Footnote 7: [https://github.com/inspire-group/membership-inference-evaluation](https://github.com/inspire-group/membership-inference-evaluation)
Following the standard evaluation methodology [8; 20; 26; 30; 42], a distinct test dataset from the same underlying distribution as the training data is used to evaluate the final accuracy of the trained model and as the "non-training" data to evaluate (together with part of the training data) the accuracy of the MIA according to (4). Across all datasets, the two most effective attacks were threshold-based and used either the confidence value or the modified entropy. To quantify the privacy leakage in our experimental results, we report the MIA accuracy of the attack using confidence values because it requires fewer assumptions and performs equivalently well.
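Concretely, the reported attack accuracy can be computed as in the sketch below, which evaluates the confidence-threshold attack (6) on equal-sized sets of member and non-member confidences; the helper names are our own illustrative choices.

```python
import numpy as np

def mia_accuracy(conf_members, conf_non_members, tau):
    """Balanced accuracy of the confidence-threshold attack (6).

    conf_members:     true-class confidences on (a subset of) training data
    conf_non_members: true-class confidences on held-out test data (same size)
    """
    tp = np.mean(conf_members > tau)       # members predicted as members
    tn = np.mean(conf_non_members <= tau)  # non-members predicted as non-members
    return 0.5 * (tp + tn)

def best_threshold_accuracy(conf_members, conf_non_members):
    """A threshold-based adversary may sweep tau and keep the best threshold."""
    taus = np.unique(np.concatenate([conf_members, conf_non_members]))
    return max(mia_accuracy(conf_members, conf_non_members, t) for t in taus)
```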
In our evaluation, we explicitly measure the capabilities of empirical privacy defenses in a variety of model utility, training data privacy, and reference data privacy settings to determine the most effective methods in each case. We define the notion of a _model instance_ as a defended classifier obtained by training with a certain regularization or weight value. This terminology will be used as we select model instances that most closely adhere to a specific privacy setting (e.g., WERM trained with a weight value \(w=0.5\) is a model instance consistent with equal training and reference data privacy requirements, as the two datasets have the same size).
### Evaluated Defenses
As WERM is designed to be a baseline for empirical privacy defenses using reference data, we only compare against methods in this category, which excludes the recently proposed Self-Distillation [44]. Specifically, we evaluate AdvReg [30], which is the most well-studied, and MMD [26], which is the current state-of-the-art. We do not consider confidence-vector masking defenses [20; 49] because they have been shown to be ineffective against label-based attacks such as the simple gap attack (5) [8; 42].
#### 4.3.1. WERM
Using the classification models described in Section 4.2.1 for each dataset, we train WERM using weight values equal to 0.0, 0.03, 0.1, 0.3, and 0.5, as well as values of 0.98 and 0.02 (Purchase100), 0.999 and 0.001 (Texas100), and 0.9975 and 0.005 (CIFAR100) that are chosen specifically to achieve the constraints for "public reference data" and "high reference data privacy" as outlined in Section 4.4.8 These weights were chosen to reflect the full range of utility-privacy tradeoffs that the method can achieve. To train WERM, for all reference data weight configurations, we fix the number of training epochs at 20, 4, and 25 for the Purchase100, Texas100, and CIFAR100 datasets, respectively. The number of training epochs for WERM, as well as for the other empirical privacy defenses we evaluate, are selected based on the standard methodology discussed in Section 4.2.1. Additionally, in all our experiments, we use the standard version of gradient descent, and the resulting models, therefore, have no formal DP guarantees. Nevertheless, we show that WERM's relative privacy guarantees--as measured by MIA accuracy--qualitatively align with the conclusions of Theorem 3.1, a result that is justified by our discussion at the end of Section 3.3.
Footnote 8: Due to the symmetric role of training and reference data in WERM, privacy evaluation for training data for a given value \(w\) corresponds to privacy evaluation for reference data for \(1-w\). In practice, the reported results therefore allow for evaluating a larger range of values including 0.7, 0.9, 0.97, and 1.0 for all datasets and 0.98, 0.999, and 0.995 for Purchase100, Texas100, and CIFAR100, respectively.
While we analyze the generalization bound of WERM in Section 3.4 for the setting where the empirical loss is minimized, in practice it is possible to end training before convergence. This simple technique, known as early stopping [6], has been observed to protect privacy [42]. As WERM can potentially benefit from early stopping without incurring a loss of interpretability, we evaluate a version of our baseline, henceforth referred to as WERM-ES, that uses this approach. To train WERM-ES, for all reference data weight configurations, we fix the number of training epochs at 7, 1, and 6 for the Purchase100, Texas100, and CIFAR100 datasets, respectively.
#### 4.3.2. Adversarial Regularization
Our AdvReg implementation relies on the officially released code9 with a few changes to solve several problems we discuss in Appendix B.1.
Footnote 9: [https://github.com/SPIN-UMass/ML-Privacy-Regulation](https://github.com/SPIN-UMass/ML-Privacy-Regulation)
We also evaluate a variant of AdvReg that can be obtained by modifying the gradient update in [30]. Although the declared objective is to solve problem (8), when taking the gradient of (8) with respect to \(\theta\), Nasr et al. [30] only consider the terms evaluated on training data:
\[\nabla_{\theta}\frac{1}{m}\sum_{i=1}^{m}\Big{[}\ell(f_{\theta},(x_{i},y_{i}))+\lambda\,\log[h_{\phi}(x_{i},y_{i},f_{\theta}(x_{i}))]\Big{]} \tag{19}\]
However, the gradient of (8) with respect to \(\theta\) contains an additional term that is evaluated on the reference data:
\[\nabla_{\theta}\,\frac{\lambda}{m^{\prime}}\sum_{i=1}^{m^{\prime}}\log[1-h_{\phi}(x_{i}^{\prime},y_{i}^{\prime},f_{\theta}(x_{i}^{\prime}))] \tag{20}\]
We refer to this variant using the reference data term as AdvReg-RT. As was observed in Figure 1, AdvReg-RT achieves a distinct set
of model utility, training data privacy, and reference data privacy tradeoffs compared to AdvReg. We therefore choose to compare both formulations with our WERM baseline.
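Schematically, the two variants differ by a single term in the classifier's loss; in the sketch below (our notation, not the released implementation), `h_t` and `h_r` denote the attack model's outputs on the training and reference batches.

```python
import torch

def classifier_loss(ce_t, h_t, h_r, lam, use_reference_term):
    """ce_t: cross-entropy on the training batch; h_t, h_r in (0, 1)."""
    loss = ce_t + lam * torch.log(h_t).mean()            # update (19): AdvReg
    if use_reference_term:                               # extra term (20): AdvReg-RT
        loss = loss + lam * torch.log(1.0 - h_r).mean()
    return loss
```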
Using the classification models described in Section 4.2.1 for each dataset, we train both versions of AdvReg using regularization values equal to 1, 2, 3, 6, 10, and 20 for Purchase100 and Texas100 and 1e-6, 1e-3, 1e-1, and 1 for CIFAR100. These values were selected on a per dataset basis to best represent the utility-privacy tradeoff that each formulation is capable of achieving. The number of training epochs is fixed at 10, 10, and 25 when training AdvReg and 35, 20, and 25 when training AdvReg-RT, for the Purchase100, Texas100, and CIFAR100 datasets, respectively.
#### 4.3.3. MMD-based Regularization
Using the classification models described in Section 4.2.1 for each dataset, we train MMD using regularization values equal to 0.1, 0.2, 0.35, 0.7, and 1.5 that demonstrate the total achievable utility-privacy curve. As the released code implementation of AdvReg (Kumar et al., 2017) benefits from training the classifier for a few warm-up steps without regularization, we also train MMD with and without a warm-up, reporting only the best results for each dataset. The number of training epochs is fixed at 25, 8, and 15 when training without warm-up steps and 20, 8, and 8 when training with warm-up steps, for the Purchase100, Texas100, and CIFAR100 datasets, respectively. Details about the implementation can be found in Appendix B.2.
### Empirical Results
In Figure 3, we show the empirical utility-privacy tradeoffs obtained by AdvReg (Kumar et al., 2017), MMD (Kumar et al., 2017), AdvReg-RT, and WERM for the Purchase100, Texas100, and CIFAR100 datasets. In these plots, we show the exact points that make up each curve, as well as some qualitative lines to highlight the trends and improve readability. The curves derived from theoretical bounds in Figure 2 and from experimental results in Figure 3 both show utility vs. privacy. However, Figure 2 evaluates utility through the effective number of samples, \(N_{\text{eff}}\), and privacy leakage through the DP parameters, \(\epsilon_{T}\) and \(\epsilon_{R}\), whereas Figure 3 uses the test accuracy and MIA accuracy. Table 2 focuses specifically on our three key privacy settings: public reference data, equal training-reference data privacy, and high reference data privacy.
#### 4.4.1. Utility-Privacy Curve Analysis
The three objectives of model utility, training data privacy leakage, and reference data privacy leakage are inherently in conflict with one another. Ideally, one would like to see that an empirical privacy defense can produce a landscape of model instances that spans a vast range of utility-privacy regimes. In theory, each of the methods we evaluate should have this capability, as they all have a mechanism for controlling the amount of regularization that is applied during training. Examining the utility-privacy curves in Figure 3 allows us to understand the tradeoffs that the various defenses can achieve in practice. As noted in Section 4.2.2, we quantify privacy leakage using a threshold-based MIA on a classifier's confidence values.10
Footnote 10: A random guesser would get an average expected accuracy of 0.5, but its average accuracy on a finite dataset can either exceed or fall short of 0.5. It should then not be surprising that some attacks have an accuracy marginally below 0.5 (e.g., 0.498), as can be seen in Figure 3.
WERM is the only defense that can clearly trade off between the three objectives. For Purchase100, over the range of privacy settings from equal privacy (\(w=0.5\)) to high reference data privacy (\(w=0\)), we see that WERM achieves values of 87% / 54.7% / 54.9% and 81.8% / 57.6% / 50%, for test accuracy (model utility), MIA accuracy on training data, and MIA accuracy on reference data, respectively. Between these edge cases, we see that WERM can produce model instances capable of trading off reference data privacy for both model utility and training data privacy. The same trend can be observed for WERM on the Texas100 and CIFAR100 datasets. Regarding WERM-ES, as shown in Figure 7 (Appendix E), the defense exhibits behavior equivalent to WERM, but, as expected, it achieves higher overall privacy protection at the cost of lower model utility. Figure 7 also contains the utility-privacy curves for early stopping (EarlyStop) using only the training data.
Looking at the state-of-the-art defenses we evaluate (AdvReg-RT, AdvReg, and MMD) reveals two situations. First, we can see that AdvReg-RT is able to sacrifice model utility for overall better privacy protection. However, it does not have the ability to trade off between training data privacy and reference data privacy, as changing the regularization value \(\lambda\) results in the training and reference data privacy leakage increasing/decreasing together. Due to this limited functionality, AdvReg-RT is never able to reach the setting where training data privacy protection is equal to reference data privacy protection. Second, the utility-privacy curves for AdvReg and MMD demonstrate that these defenses are completely unable to trade reference data privacy for either model utility or training data privacy. While training data privacy can be sacrificed for better model utility, attack accuracy on reference data never materially changes, remaining below 51% for both defenses at all meaningful test accuracy values. Overall, we observe that for the entire curve of possible utility-privacy tradeoffs, excluding the high reference data privacy setting, WERM/WERM-ES is unequivocally the best-performing method, and in any regime where training data privacy is valued absolutely equal to or greater than reference data privacy (including the public reference data case), WERM/WERM-ES is, in fact, the only viable defense.
In addition to WERM being a baseline defense with good utility, we also want its output to align with the desired relative privacy level that is encoded in a given choice of \(w\). Comparing WERM's empirical utility-privacy tradeoffs in Figure 3 with the theoretical tradeoffs in Figure 2, we see that they exhibit the same trend where a gradual transition occurs from the setting of high reference data privacy to that of equal privacy over the weight value interval of [0.0, 0.5]. The fact that our theoretical bounds are qualitatively aligned with our experimental results helps to demonstrate that WERM is indeed an interpretable baseline. We conduct a quantitative comparison in Section 5.1.
#### 4.4.2. Public Reference Data
First, we examine the case where reference data is public. In this setting, the privacy of reference data is of no concern. Therefore, an optimal defense should utilize the reference data to the furthest extent possible to decrease training data privacy leakage and increase test accuracy. For a given empirical privacy defense, we select model instances using the following procedure: 1) Identify all model instances with MIA accuracy on training data less than or equal to 51%, 2) Among the
Figure 3. Utility-privacy tradeoffs obtained by various empirical privacy defenses for the **Purchase100**, **Texas100**, and **CIFAR100** datasets. The test accuracy of a defended classifier is measured using unseen test data, and the MIA accuracy on training and reference data is evaluated with a threshold-based attack using confidence values (Eq. 6). Each point on a curve represents the evaluation of a model instance using a distinct regularization value (for AdvReg, AdvReg-RT, and MMD) or reference data weight value (for WERM). We highlight some qualitative trends to help demonstrate that the empirical curves coincide with the theoretical curves in Figure 2.
model instances meeting this criterion, select the one with the best test accuracy. As can be observed in Table 2, on Purchase100, Texas100, and CIFAR100, only WERM or WERM-ES are able to produce suitable model instances in this setting. Although MMD and AdvReg include a regularization term that is intended to trade off privacy against test accuracy, the methods are simply not capable of maximally exploiting reference data. Alternatively, WERM can sacrifice reference data privacy to achieve high test accuracy and strict training data privacy protection.
#### 4.4.3. Equal Privacy Requirements
Second, we examine the setting where training and reference data have equal privacy requirements. For a given empirical privacy defense, we select the model instance using the following procedure: 1) Identify all model instances where the difference between the attack accuracy on training data and reference data is less than or equal to 4%, 2) Among the model instances meeting this condition, select the one with the best test accuracy. We use 4% as the threshold to define "equal" privacy considerations because at lower thresholds AdvReg-RT is not able to achieve a model instance with sufficient utility to be relevant for comparison. Table 2 shows that only WERM/WERM-ES and AdvReg-RT are able to operate in this privacy regime; MMD and AdvReg fail to produce any viable model instances.
On Purchase100 and Texas100, for the selected model instances, WERM-ES outperforms AdvReg-RT on all three objectives. Compared to AdvReg-RT and WERM-ES, WERM achieves significantly higher model utility, at the expense of worse training and reference data privacy. On CIFAR100, AdvReg-RT is unable to yield a model instance that meets the conditions, making WERM/WERM-ES the only working defense.
#### 4.4.4. High Reference Data Privacy
Lastly, we examine the case where reference data is considered highly private and its privacy can therefore only be minimally sacrificed. For a given empirical privacy defense, we select model instances using the following procedure: 1) Identify all model instances with MIA accuracy on reference data less than or equal to 51%, 2) Among the model instances meeting this criterion, select the one with best test accuracy. In Table 2, it can be seen that on Purchase100 and Texas100, AdvReg, MMD, WERM, and WERM-ES are all able to produce a model instance that leaks only minimal reference data privacy according to our selection method, whereas AdvReg-RT is unable to yield a suitable model instance. However, on CIFAR100, all five defenses achieve a valid result.
By setting very strict privacy requirements for reference data, we aim to remove one objective from the evaluation such that we can make a comparison based solely on model utility and training data privacy leakage. Nonetheless, it is still possible that two model instances satisfy the reference data privacy requirement but cannot be definitively compared because one achieves better utility and the other better training data privacy protection. On Purchase100, for example, WERM-ES outperforms AdvReg, but it is not clear which method is preferable among WERM, WERM-ES, and MMD without deciding the relative importance assigned to model utility and training data privacy. WERM can achieve higher test accuracy, WERM-ES can achieve higher privacy protection on training data, and MMD can achieve a utility-privacy tradeoff that is a middle ground between these two methods. On Texas100, WERM and MMD both outperform AdvReg, but a comparison between the two also requires making explicit the relative importance of model utility and training data privacy. On CIFAR100, however, WERM/WERM-ES is clearly superior to the other three defenses.
| Dataset | Defense | Test Acc. (Public) | MIA Train (Public) | MIA Ref (Public) | Test Acc. (Equal) | MIA Train (Equal) | MIA Ref (Equal) | Test Acc. (High Ref.) | MIA Train (High Ref.) | MIA Ref (High Ref.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Purchase100 | AdvReg | – | – | – | – | – | – | 76.8% | 59.2% | 50.0% |
| Purchase100 | AdvReg-RT | – | – | – | 82.1% | 61.1% | 57.1% | – | – | – |
| Purchase100 | MMD | – | – | – | – | – | – | 82.5% | 65.7% | 50.0% |
| Purchase100 | WERM | 83.8% | 51.0% | 66.7% | 87.0% | 61.5% | 61.5% | 84.2% | 68.0% | 50.9% |
| Purchase100 | WERM-ES | 77.9% | 50.1% | 57.4% | 83.6% | 54.7% | 54.9% | 78.4% | 57.8% | 50.0% |
| Texas100 | AdvReg | – | – | – | – | – | – | 49.8% | 68.3% | 50.0% |
| Texas100 | AdvReg-RT | – | – | – | 48.3% | 57.0% | 53.4% | – | – | – |
| Texas100 | MMD | – | – | – | – | – | – | 55.1% | 68.6% | 50.8% |
| Texas100 | WERM | 54.2% | 50.9% | 65.6% | 56.3% | 61.1% | 61.3% | 52.0% | 65.7% | 50.9% |
| Texas100 | WERM-ES | 43.7% | 50.3% | 55.9% | 49.8% | 54.3% | 54.0% | 44.4% | 56.3% | 50.5% |
| CIFAR100 | AdvReg | – | – | – | – | – | – | 31.7% | 83.6% | 50.0% |
| CIFAR100 | AdvReg-RT | – | – | – | – | – | – | 31.1% | 83.3% | 50.8% |
| CIFAR100 | MMD | – | – | – | – | – | – | 30.2% | 90.4% | 50.0% |
| CIFAR100 | WERM | 34.2% | 50.5% | 84.0% | 41.3% | 79.8% | 80.1% | 33.8% | 83.5% | 50.9% |
| CIFAR100 | WERM-ES | 32.9% | 50.1% | 63.3% | 40.1% | 60.2% | 60.2% | 33.0% | 63.6% | 50.0% |

Table 2. Comparison of test accuracy and MIA accuracy for AdvReg, AdvReg-RT, MMD, WERM, and WERM-ES under the settings of public reference data ("Public"), equal privacy considerations ("Equal"), and high reference data privacy ("High Ref."). A dashed line (–) means that the defense produced no model instances that met the criteria. The values under "MIA Train" and "MIA Ref" represent the membership inference attack accuracy on training and reference data, respectively.
## 5. Discussion
### Selection of Defense Parameters
Even when a model training entity has clearly defined its desired relative privacy level between training and reference data, realizing a classifier with this exact degree of relative privacy protection still requires selecting the corresponding parameters of the empirical privacy defense: the reference data weight term (\(w\)) for WERM and the regularization weight term (\(\lambda\)) for AdvReg or MMD. As an illustration, if the two datasets are of equal size, and the reference data needs to be twice as private, what should be the chosen values of \(w\) for WERM or \(\lambda\) for AdvReg or MMD? We argue that an empirical privacy defense becomes more practical if there exists an intelligible (e.g., linear) mapping to guide a machine learning practitioner in the selection of a defense parameter. AdvReg and MMD do not provide a practical guideline except for the general intuition that a larger value of \(\lambda\) should provide higher training data privacy and lower reference data privacy. For WERM, the parameter \(w\) can be adjusted to ensure a specified theoretical level of relative privacy, as dictated by the equations in Theorem 3.1 (\(\epsilon_{T}/\epsilon_{R}=(1-w)/w\times N_{R}/N_{T}\)). However, translating DP-like theoretical privacy guarantees into practical privacy guarantees, e.g., in terms of MIA accuracy, is a highly complex and still unresolved issue in the field of privacy-preserving machine learning (Beng et al., 2015; Chen et al., 2015; Li et al., 2016; Li et al., 2017). The effectiveness of such a configuration rule therefore needs to be evaluated in terms of the empirical privacy leakage.
In Table 3 we report the Pearson correlation coefficient (PCC) between WERM's theoretical relative privacy, as defined by the ratio \(\epsilon_{T}/\epsilon_{R}\), and its empirical relative privacy, as measured by the ratio between the MIA accuracy on training data and on reference data. The coefficient gauges the linear correlation between the two quantities. For AdvReg and MMD, without clear configuration guidelines, we report the PCC between the reciprocal of the regularization parameter \(1/\lambda\) and the empirical relative privacy. WERM displays the largest PCC at 0.84, which stands in stark contrast to the 0.07 for AdvReg and 0.48 for MMD. These results underscore WERM as the sole method offering a practical configuration rule to achieve a target relative privacy.
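The coefficient itself is straightforward to compute; the snippet below uses placeholder arrays (illustrative values, not our measured results) to show the comparison performed for WERM.

```python
import numpy as np

# Placeholder values for several model instances (illustrative only):
theoretical_ratio = np.array([0.1, 0.5, 1.0, 2.0, 5.0])  # eps_T / eps_R from (16)
empirical_ratio = np.array([0.2, 0.6, 1.1, 1.8, 4.2])    # MIA acc. train / MIA acc. ref

pcc = np.corrcoef(theoretical_ratio, empirical_ratio)[0, 1]
print(f"Pearson correlation coefficient: {pcc:.2f}")
```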
### Computational Cost Comparison
In addition to comparing the utility-privacy tradeoff and practical usability of empirical privacy defenses, it is also important to consider their computational cost. Table 4 shows, for a fixed batch size, the number of seconds it takes to train each defense for a single epoch and the overall training time considering the total number of epochs used in our experiments. We calculate the overall training time as the per-epoch training time multiplied by the total number of epochs. Each of these experiments is run using a single GPU on an NVIDIA DGX system. Analyzing Table 4 confirms that WERM is indeed significantly less computationally expensive than AdvReg and MMD, as discussed in Section 3.2. On Purchase100, Texas100, and CIFAR100, WERM is 19x, 7x, and 19x faster to train on a per-epoch basis, compared to the second fastest method.
## 6. Conclusion and Future Work
In this work, we have analyzed the role of reference data in empirical privacy defenses and identified the issue that reference data privacy leakage must be explicitly considered to conduct a meaningful evaluation. We advanced the current state-of-the-art by proposing a generalization error constrained ERM, which can in practice be evaluated as a weighted ERM over the training and reference datasets. As WERM is intended to function as a baseline, we derive theoretical guarantees about its utility and privacy to ensure that its results will be well-understood in all utility-privacy settings. We present experimental results showing that our principled baseline outperforms the most well-studied and current state-of-the-art empirical privacy defenses in nearly all privacy regimes (i.e., independent of the nature of reference data and its level of privacy). Our experiments also reveal that existing methods are unable to trade off reference data privacy for model utility and/or training data privacy, and thus cannot operate outside of the highly private reference data case.
Regarding ethical concerns, our proposed baseline operates on the defense side of machine learning privacy; no novel attack has been proposed. Nevertheless, our experiments have analyzed the average privacy leakage over the whole dataset, but privacy protection is not always fair across groups in a dataset (Han et al., 2016; Li et al., 2017). Future work can then evaluate the fairness of various defense mechanisms using reference data, or propose privacy defenses intended to operate in use-case-dependent settings. We hope that our work will continue to motivate the development of a robust evaluation framework for privacy defenses.
| Dataset | Defense | Per Epoch (s) | Overall (s) |
| --- | --- | --- | --- |
| Purchase100 | WERM | 0.5 | 10 |
| Purchase100 | AdvReg | 9.5 | 95 |
| Purchase100 | MMD | 16.4 | 328 |
| Texas100 | WERM | 0.9 | 3.6 |
| Texas100 | AdvReg | 6.4 | 64 |
| Texas100 | MMD | 6.8 | 54.4 |
| CIFAR100 | WERM | 4.1 | 102.5 |
| CIFAR100 | AdvReg | 78.8 | 1970 |
| CIFAR100 | MMD | 94.7 | 757.6 |

Table 4. Comparison of per epoch and overall training time, in seconds (s), for each empirical privacy defense on Purchase100, Texas100, and CIFAR100.
| Defense | WERM | AdvReg | MMD |
| --- | --- | --- | --- |
| Pearson Correlation Coefficient | 0.84 | 0.07 | 0.48 |

Table 3. Comparison of the Pearson Correlation Coefficient between the training-reference data desired privacy ratio (as determined by the choice of \(w\) or \(\lambda\)) and the empirical privacy ratio (as measured by a MIA) for WERM, AdvReg, and MMD. The coefficient is computed across the Purchase100, Texas100, and CIFAR100 datasets.
## Acknowledgments
This research was supported in part by ANRT in the framework of a CIFRE PhD (2021/0073) and by the Horizon Europe project dAIEDGE.
|
2303.05556 | An Evaluation of Non-Contrastive Self-Supervised Learning for Federated
Medical Image Analysis | Privacy and annotation bottlenecks are two major issues that profoundly
affect the practicality of machine learning-based medical image analysis.
Although significant progress has been made in these areas, these issues are
not yet fully resolved. In this paper, we seek to tackle these concerns head-on
and systematically explore the applicability of non-contrastive self-supervised
learning (SSL) algorithms under federated learning (FL) simulations for medical
image analysis. We conduct thorough experimentation of recently proposed
state-of-the-art non-contrastive frameworks under standard FL setups. With the
SoTA Contrastive Learning algorithm, SimCLR as our comparative baseline, we
benchmark the performances of our 4 chosen non-contrastive algorithms under
non-i.i.d. data conditions and with a varying number of clients. We present a
holistic evaluation of these techniques on 6 standardized medical imaging
datasets. We further analyse different trends inferred from the findings of our
research, with the aim to find directions for further research based on ours.
To the best of our knowledge, ours is the first to perform such a thorough
analysis of federated self-supervised learning for medical imaging. All of our
source code will be made public upon acceptance of the paper. | Soumitri Chattopadhyay, Soham Ganguly, Sreejit Chaudhury, Sayan Nag, Samiran Chattopadhyay | 2023-03-09T19:31:14Z | http://arxiv.org/abs/2303.05556v1 | # An Evaluation of Non-Contrastive Self-Supervised Learning for Federated
###### Abstract
Privacy and annotation bottlenecks are two major issues that profoundly affect the practicality of machine learning-based medical image analysis. Although significant progress has been made in these areas, these issues are not yet fully resolved. In this paper, we seek to tackle these concerns head-on and systematically explore the applicability of non-contrastive self-supervised learning (SSL) algorithms under federated learning (FL) simulations for medical image analysis. We conduct thorough experimentation of recently proposed state-of-the-art non-contrastive frameworks under standard FL setups. With the SoTA Contrastive Learning algorithm, SimCLR as our comparative baseline, we benchmark the performances of our 4 chosen non-contrastive algorithms under non-i.i.d. data conditions and with a varying number of clients. We present a holistic evaluation of these techniques on 6 standardized medical imaging datasets. We further analyse different trends inferred from the findings of our research, with the aim to find directions for further research based on ours. To the best of our knowledge, ours is the first to perform such a thorough analysis of federated self-supervised learning for medical imaging. All of our source code will be made public upon acceptance of the paper.
## 1 Introduction
Medical image analysis [1, 20] is a topic of active research in the machine learning community. It involves an array of different tasks like disease detection [22], classification [2, 36] and segmentation [4, 6]. Classically, standard supervised computer vision techniques have been applied to solve medical imaging problems. Although such methods have achieved commendable performance across medical vision tasks, there are two unique challenges the field faces, which are yet to be fully tackled by such methods.
The first major challenge in practical medical imaging is learning over medical data stored in a distributed manner. In the field of medical imaging, the entire data is rarely stored in a centralized server; it is distributed across various servers and devices (say, servers of different hospitals containing patient data). Data stored at different sites may even have very different distributions [10]. Furthermore, owing to privacy and legal concerns, such data can neither be accumulated together into a centralised server nor be shared with other sources for training deep learning models. Thus, one requires decentralized training across all such client devices that preserves data privacy. This has been facilitated by the paradigm of federated learning (FL) - introduced in the seminal work by [24]. FL involves individual model training in each client, followed by aggregating the individual model weights into a global server model, whose copies are then sent to the clients for inference. This is non-trivial, since the server model would have to fit different data distributions across the clients.
Secondly, annotating medical imaging datasets is an arduous task, requiring a significant amount of time and effort from skilled clinicians. In fact, this is a major reason why only a handful of large-scale labeled medical imaging datasets are available to the community. This has led to the development of annotation-efficient paradigms such as semi-supervised [5, 34], weakly-supervised [30] and most notably, self-supervised learning (SSL) algorithms. In particular, SSL [3, 7, 14, 40, 23] has shown great potential to learn robust representations in computer vision [13, 14, 23] and medical imaging [2, 22, 31] tasks, achieving comparable performance to even supervised approaches.
In this study, we aim to tackle the two aforementioned problems head-on by leveraging SSL in federated setups for medical imaging. In particular, we conduct a thorough investigation of the performances of SoTA _non-contrastive_ SSL algorithms under various federated learning setups. The motivation for our focus on non-contrastive methods [3, 9, 37, 40] is in light of their using lower batch sizes for reducing computational demands. To evaluate their performance, we utilise the SoTA contrastive learning algorithm SimCLR [7] as a comparative baseline. On the federated side, we conduct experiments using three standard FL algorithms, namely FedAVG [24], FedBN [19] and FedProx [18]. By varying the number of clients, we put forth a rigorous evaluation of such simulations on datasets of the standardized medical imaging suite, MedMNIST [32, 33], which can be used as a ready reference for current and future research, something that has not been done so far. Moreover, we further discuss the various trends observed in the evaluation, which could provide intriguing insights and pave the way for further study. To the best of our knowledge, such a holistic analysis has not been performed in the literature to date, making our contribution significant. We believe such a study would be very helpful to the medical vision community in a very practical manner.
To sum up, the contributions of our work are as follows:
1. We present the first thorough investigation in the literature of the applicability of non-contrastive self-supervision algorithms under three different federated learning setups for medical image analysis.
2. We perform a rigorous evaluation of the performance of these algorithms under non-i.i.d. data splits with variation in the number of clients participating in federated learning.
3. The benchmarking is performed on the MedMNIST suite, which consists of standardized medical image datasets. Furthermore, we systematically analyse different trends inferred from the study, paving the way for further research.
Our experimental pipeline is schematically represented in Figure 1. All source codes will be made publicly available upon acceptance.
## 2 Related Work
Federated Learning: FL [18, 19, 21, 24, 39] aims at training machine learning models across a number of private devices having their own datasets, with the underlying assumption that the clients' data are mutually independent of each other [24]. In other words, FL involves decentralized training of models in a distributed manner. For generalization, the individually trained models are aggregated at a global server, via an aggregation function. Thus, no data is shared among the clients, only the model parameters being communicated in the aggregation process [29]. First proposed in [24], the classic FedAVG algorithm, which performs a simple weighted averaging of model weights across clients, is still considered a strong baseline framework. Among other popular FL methods, FedProx [18] introduced a proximal term to the local client losses so as to mitigate possible weight divergences, FedAVG-Share [39] proposed to use a small globally shared data subset for robust generalization, and FedBN [19] used local batch normalizations to capture different data distributions and alleviate representational shift prior to aggregation. In this paper, we adopted FedAVG [24], FedBN [19] and FedProx [18] for conducting the FL simulations over SSL algorithms.
Self-Supervised Learning: Being a subset of unsupervised learning, SSL aims at learning representations without the need for explicit annotations by means of pre-training tasks in which the so-called pseudo-ground-truths are generated from the data itself. While classical approaches include colorization [38], inpainting [28] and solving jigsaw puzzles [25], recent SoTA approaches have focused on contrastive learning [7, 8, 11], which aims at pulling similar data points (positive samples) closer together, along with pushing apart dissimilar points (negative samples) in the embedding space. Although popular contrastive learning frameworks such as SimCLR [7] and MoCo [11] have proven to be successful at learning robust visual representations, a bottleneck they face is the requirement of a large number of negative samples in a mini-batch, which compels the use of large batch sizes. To alleviate this limitation, more recent SSL works have turned towards the paradigm of non-contrastive learning [3, 37, 40] - wherein the objective is to solely bring similar entities closer together in the latent space, thereby mitigating the need for negative samples altogether. Barlow Twins [37] uses information maximization to maximize similarity along with reducing redundancy among neurons, inspired by neuroscience theories. VICReg [3] extended upon the former by introducing a hinge function for variance regularization. In our study, we adopted some of the recently proposed non-contrastive SSL variants - SimSiam [9], Barlow Twins [37] and VICReg [3]. We also considered TiCo [40], which combines the flavours of both contrastive and non-contrastive learning along with the use of an implicit feature memory. For evaluation purposes, SimCLR [7] was chosen as the contrastive baseline method.

Figure 1: Overall pipeline of the presented study. The SSL methods have been described in Figure 2.
## 3 Federated Self-Supervised Learning
As explained, our study investigates self-supervision models under FL simulations. In this section, we provide background on the SSL and federated learning algorithms used in this paper.
### SSL Methods
Contrastive: We treat contrastive SSL as a baseline and employ the SoTA SimCLR [7] algorithm, which uses the InfoNCE [8, 15, 26] contrastive loss to push embeddings of different views of the same image closer and pull apart those of different images.
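For reference, a compact sketch of the NT-Xent/InfoNCE objective used by SimCLR; the temperature value below is an illustrative assumption, not the setting from Table 1:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """NT-Xent: each pair (z1[i], z2[i]) of views of the same image is a
    positive; all other 2N - 2 in-batch embeddings act as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)       # (2N, dim), unit norm
    logits = z @ z.t() / tau                          # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(mask, float("-inf"))  # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(logits, targets.to(z.device))
```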
Non-contrastive: As our primary focus is towards non-contrastive SSL, we experiment with several such methods, namely SimSiam [9], Barlow Twins [37], VICReg [3] and TiCo [40]. It is worth noting that TiCo can be interpreted as both contrastive and non-contrastive; it has elements of both the SSL variants.
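As one concrete non-contrastive example, a minimal sketch of the Barlow Twins objective (the trade-off weight `lam` is an assumed illustrative value, not our exact hyperparameter):

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins: push the cross-correlation matrix of the two views'
    (batch-normalized) embeddings towards identity -- the diagonal term
    enforces invariance, the off-diagonal term reduces redundancy."""
    n = z1.size(0)
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / n                           # (dim, dim)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag
```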
For a high-level understanding, the SSL methods adopted in this study have been depicted figuratively in Figure 2. Other details have been provided in Table 1.
### FL Algorithms
We simulate three standard FL algorithms in our study, namely FedAVG [24], FedBN [19] and FedProx [18].
FedAVG: FedAVG utilizes iterative averaging over client weights to update the global model [24]. In each communication round, a set of selected clients \(S_{t}\) receives the current global model, locally trains for a predetermined \(E\) epochs, and sends the local weights back to the central server. The server updates the global model by averaging the client weights, weighted by the number of local samples \(n_{k}\) at each client.
\[w_{t+1}\leftarrow\sum_{k\in S_{t}}\frac{n_{k}}{n}w_{t+1}^{k} \tag{1}\]
Here, \(w_{t+1}\) represents the updated global model in round \(t+1\), \(w_{t+1}^{k}\) are the weights received from the \(k^{th}\) client and \(n\) is the total number of samples across the selected clients.
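A minimal sketch of this aggregation step over PyTorch `state_dict`s (function and argument names are ours):

```python
def fedavg_aggregate(client_states, client_sizes):
    """Implements Eq. (1): the new global weights are the average of the
    selected clients' weights, weighted by their local sample counts."""
    total = float(sum(client_sizes))
    return {
        key: sum((n / total) * state[key].float()
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
```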
FedBN: FedBN is a modification of the FedAVG algorithm designed to address the different marginal or conditional feature distributions across clients, a problem known as feature shift non-i.i.d. [19]. FedBN utilizes local batch normalization, with updates to global model weights via weighted averaging, _leaving out_ the weights from the batch normalization layers.
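A sketch of the corresponding FedBN update; identifying batch-normalization entries by their `state_dict` key names is an assumption that holds for torchvision-style ResNets:

```python
def fedbn_aggregate(client_states, client_sizes, prev_global):
    """FedAVG-style weighted averaging that leaves out batch-norm layers:
    BN weights and running statistics keep their previous values rather
    than being replaced by the weighted average."""
    total = float(sum(client_sizes))

    def is_bn(key):
        return "bn" in key or "running_" in key or "num_batches" in key

    return {
        key: prev_global[key] if is_bn(key)
        else sum((n / total) * s[key].float()
                 for s, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
```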
Figure 2: Comparison of different SSL methods used in the presented study.

FedProx: Proposed in [18], FedProx is a generalisation of FedAVG with modifications intended to address heterogeneous data and systems. Like FedAVG, in each communication round, the server selects a set of clients and sends them the current global model \(w^{t}\). Unlike FedAVG, FedProx allows local optimization on clients to run for a variable number of epochs by making clients optimise a regularised loss with a proximal term. Each client approximately minimizes a new objective \(h_{k}\) defined over its original local objective \(F_{k}\) as follows (here, \(\mu\) is the regularization hyperparameter):

\[\min_{w}h_{k}(w;w^{t})=F_{k}(w)+\frac{\mu}{2}\|w-w^{t}\|^{2} \tag{2}\]
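A sketch of this client-side objective on top of an arbitrary local loss (names are illustrative):

```python
def fedprox_loss(local_loss, model, global_params, mu):
    """h_k(w; w^t) = F_k(w) + (mu / 2) * ||w - w^t||^2 (Eq. 2), where
    `global_params` is a frozen snapshot of the server model w^t
    received at the start of the communication round."""
    prox = sum(((p - g.detach()) ** 2).sum()
               for p, g in zip(model.parameters(), global_params))
    return local_loss + 0.5 * mu * prox
```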
## 4 Experimental Protocol
### Datasets
We evaluate the federated self-supervised simulations on the MedMNIST data suite [32, 33], a large-scale collection of standardized real-world biomedical images resized to MNIST-like dimensions. The datasets used in this work are **Pneumonia, Breast, Retina, Organ-A, Organ-C and Organ-S**. It is essential to mention that {RetinaMNIST, BreastMNIST and PneumoniaMNIST} are "_small datasets_" (having less than 6k images in total), while the remaining are different views (A=axial, C=coronal, S=sagittal) of abdominal CT images, each having more than 20k images, thereby being "_big datasets_". More details about the datasets can be found on the MedMNIST website (see Footnote 1).
Footnote 1: [https://medmnist.com/](https://medmnist.com/)
### Configuration
We conducted our experiments in PyTorch [27] accelerated by a 16GB NVIDIA Tesla V100 GPU. For self-supervised model training, a ResNet-18 [12] encoder was used with random initialization, which is considered a de facto standard for SSL works [7, 37]. For the projector head, a \(128\)-dimensional fully connected (FC) output layer was considered for SimCLR, while a \(512\)-dimensional FC layer was used as the expander for the non-contrastive learning algorithms. Following standard works, we set the number of clients to \(5,10\) and \(20\) for our investigations. For creating class imbalance in our clients to simulate the non-i.i.d. condition of FL, we utilised a Dirichlet distribution [19, 24] to generate the data splits. We ran each experiment for a total of 20 communication rounds with each client iterating over 20 internal epochs in each round. All hyperparameters for our experiments have been described in Table 1, and were fixed across experiments for a fair evaluation. It should be mentioned that we could not go beyond a batch size of \(128\) due to computational constraints.
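A minimal sketch of such a Dirichlet-based non-i.i.d. split; the concentration parameter `alpha` below is an illustrative assumption, not the exact value from our runs:

```python
import numpy as np

def dirichlet_split(labels, n_clients, alpha=0.5, seed=0):
    """Non-i.i.d. partition of sample indices: for every class, client
    shares are drawn from Dirichlet(alpha); smaller alpha yields more
    imbalanced (more heterogeneous) clients."""
    rng = np.random.default_rng(seed)
    shards = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for shard, part in zip(shards, np.split(idx, cuts)):
            shard.extend(part.tolist())
    return shards
```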
For evaluation purposes, following recent SSL literature [15, 35] we used a simple KNN classification protocol (\(k=20\)) with Euclidean distance metric [16] on the representations obtained by the frozen encoder network. This offers a simple benchmarking scheme without many bells and whistles directly on features learnt in a completely unsupervised manner.
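A sketch of this KNN protocol as an unweighted majority vote over the \(k=20\) nearest neighbours (whether votes are distance-weighted in the exact protocol is not restated here):

```python
import torch

def knn_accuracy(train_feats, train_labels, test_feats, test_labels, k=20):
    """KNN classification on frozen-encoder features with the Euclidean
    distance, following the evaluation protocol described above."""
    dists = torch.cdist(test_feats, train_feats)      # pairwise L2
    nn_idx = dists.topk(k, largest=False).indices     # (n_test, k)
    preds = train_labels[nn_idx].mode(dim=1).values   # majority vote
    return (preds == test_labels).float().mean().item()
```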
### Evaluation metrics
Two standard classification metrics have been used for the performance evaluation of the algorithms investigated by this study: _Accuracy_ and weighted _F1-Score_. For reporting empirical results, we used the F1-score, and for qualitative analyses, we used accuracy values. We report the average over individual client scores for all experiments.
## 5 Results and Discussion
We report the findings of the experimentation across the datasets as metrics averaged across clients, along with their standard deviation values. As discussed earlier, our experiments comprise evaluating different SSL algorithms over three popular FL setups, simulating variation in the number of clients. The exhaustive results for each dataset have been tabulated in tables 2, 3, 4, 5, 6 and 7.
Small datasets: While SimCLR remains competitive with \(5\) and \(10\) clients, its performance significantly deteriorates as the number of clients is increased to \(20\). In contrast, the performances of the non-contrastive methods, especially VICReg and Barlow Twins, decrease to a far smaller extent, making them the best performers under higher numbers of clients. When setting the number of clients to \(20\), the batch size is reduced to \(64\) (to allow for lower per-client data availability due to the increased number of clients). Contrastive learning methods like SimCLR are heavily dependent on the number of negative samples available, and thus perform better under higher batch sizes [7, 8], while non-contrastive methods, not having such a dependency, are not affected as much. This trend is more visibly observable in Table 3 and Table 4 as we enter into datasets having considerably fewer available samples (Breast and Retina having \(780\) and \(1600\) respectively, compared to Pneumonia's \(5856\) samples). In both these datasets, the non-contrastive method (VICReg in most cases) emerges with the best performance under almost every federated setup examined. We also observe each algorithm having very high standard deviation in its performance across all \(3\) datasets, implying high variance among the individual clients. This can be attributed to the non-i.i.d. distribution of samples in a low-data setting, leading to clients having severely smaller and possibly more imbalanced datasets and thus higher variations in performance.
Big datasets: From Tables 5, 6 and 7, we once again see VICReg emerging as the best performer overall. SimCLR too shows considerably better performance on these datasets compared to the smaller ones. We attribute this improvement to the greater amount of data available in these \(3\) datasets (the \(3\) OrganMNIST datasets have nearly \(10\) times more data compared to smaller datasets like Breast or Retina), which generally aids all self-supervision methods. Another important observation from Tables 5, 6 and 7 is the significantly lower standard deviation of the \(F1\)-scores of each algorithm relative to those in Tables 2, 3 and 4, which indicates a lower variance of scores among the clients. With the Organ datasets having a considerably large amount of data, each client is left with a larger and potentially more balanced set of samples, even under the non-i.i.d. distribution.
### Across FL algorithms
We studied the behaviour of non-contrastive methods under the different federated learning simulations - FedAVG [24], FedBN [19] and FedProx [18]. The findings have been graphically presented in Figure 3, Figure 4 and Figure 5 respectively.
FedAVG: Figure 3 shows the performances of each non-contrastive SSL variant across various numbers of clients under the FedAVG [24] scheme. We observe a general trend of performance deterioration with an increase in clients. This is expected, as more clients imply less data
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**SSL Method**} & \multicolumn{4}{c|}{\(\#clients=5\)} & \multicolumn{4}{c|}{\(\#clients=10\)} & \multicolumn{4}{c}{\(\#clients=20\)} \\ & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** \\ \hline
**Barlow** & 0.755 \(\pm\) 0.225 & **0.785 \(\pm\) 0.197** & 0.755 \(\pm\) 0.225 & **0.823 \(\pm\) 0.181** & 0.769 \(\pm\) 0.258 & **0.822 \(\pm\) 0.180** & 0.675 \(\pm\) 0.319 & **0.724 \(\pm\) 0.291** & 0.675 \(\pm\) 0.319 \\
**VICReg** & **0.757 \(\pm\) 0.228** & 0.764 \(\pm\) 0.218 & **0.757 \(\pm\) 0.228** & 0.803 \(\pm\) 0.197 & 0.783 \(\pm\) 0.223 & 0.802 \(\pm\) 0.197 & **0.711 \(\pm\) 0.284** & 0.678 \(\pm\) 0.31 & **0.711 \(\pm\) 0.284** \\
**SimSiam** & 0.731 \(\pm\) 0.250 & 0.737 \(\pm\) 0.256 & 0.731 \(\pm\) 0.250 & 0.722 \(\pm\) 0.287 & 0.767 \(\pm\) 0.247 & 0.721 \(\pm\) 0.287 & 0.702 \(\pm\) 0.294 & 0.680 \(\pm\) 0.307 & 0.702 \(\pm\) 0.294 \\
**TiCo** & 0.731 \(\pm\) 0.215 & 0.766 \(\pm\) 0.215 & 0.731 \(\pm\) 0.248 & 0.748 \(\pm\) 0.267 & 0.770 \(\pm\) 0.240 & 0.747 \(\pm\) 0.266 & 0.696 \(\pm\) 0.295 & 0.676 \(\pm\) 0.312 & 0.696 \(\pm\) 0.294 \\ \hline
**SimCLR** & 0.727 \(\pm\) 0.264 & 0.776 \(\pm\) 0.215 & 0.727 \(\pm\) 0.264 & 0.793 \(\pm\) 0.223 & **0.786 \(\pm\) 0.232** & 0.792 \(\pm\) 0.222 & 0.695 \(\pm\) 0.295 & 0.693 \(\pm\) 0.292 & 0.695 \(\pm\) 0.295 \\ \hline \hline \end{tabular}
\end{table}
Table 2: \(F1-scores\) obtained on Pneumonia-MNIST dataset.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**SSL Method**} & \multicolumn{4}{c|}{\(\#clients=5\)} & \multicolumn{4}{c|}{\(\#clients=10\)} & \multicolumn{4}{c}{\(\#clients=20\)} \\ & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** \\ \hline
**Barlow** & 0.837 \(\pm\) 0.148 & 0.746 \(\pm\) 0.240 & 0.854 \(\pm\) 0.133 & 0.831 \(\pm\) 0.149 & 0.753 \(\pm\) 0.221 & 0.810 \(\pm\) 0.174 & **0.826 \(\pm\) 0.147** & 0.770 \(\pm\) 0.199 & **0.829 \(\pm\) 0.144** \\
**VICReg** & **0.869 \(\pm\) 0.121** & **0.844 \(\pm\) 0.144 & **0.878 \(\pm\) 0.113 & **0.872 \(\pm\) 0.117** & **0.837 \(\pm\) 0.145** & **0.870 \(\pm\) 0.116** & 0.819 \(\pm\) 0.145 & **0.810 \(\pm\) 0.162** & 0.826 \(\pm\) 0.145 \\
**SimSiam** & 0.771 \(\pm\) 0.211 & 0.776 \(\pm\) 0.206 & 0.771 \(\pm\) 0.211 & 0.806 \(\pm\) 0.170 & 0.774 \(\pm\) 0.200 & 0.807 \(\pm\) 0.168 & 0.773 \(\pm\) 0.199 & 0.779 \(\pm\) 0.188 & 0.774 \(\pm\) 0.192 \\
**TiCo** & 0.826 \(\pm\) 0.162 & 0.798 \(\pm\) 0.186 & 0.825 \(\pm\) 0.164 & 0.792 \(\pm\) 0.183 & 0.779 \(\pm\) 0.196 & 0.826 \(\pm\) 0.154 & 0.772 \(\pm\) 0.195 & 0.768 \(\pm\) 0.196 & 0.770 \(\pm\) 0.197 \\ \hline
**SimCLR** & 0.867 \(\pm\) 0.124 & **0.851 \(\pm\) 0.140** & **0.879 \(\pm\) 0.112** & 0.871 \(\pm\) 0.113 & 0.829 \(\pm\) 0.151 & **0.878 \(\pm\) 0.108** & 0.819 \(\pm\) 0.153 & 0.781 \(\pm\) 0.187 & 0.819 \(\pm\) 0.155 \\ \hline \hline \end{tabular}
\end{table}
Table 3: \(F1-scores\) obtained on Breast-MNIST dataset.
in each of them along with a higher percentage of imbalanced clients due to non-i.i.d. settings. For the OrganMNIST datasets, the behaviours of Barlow [37], VICReg [3] and TiCo [40] are very similar in this regard, with VICReg being the best performer in a majority of cases. On the other hand, among the "small" datasets (Pneumonia, Breast and Retina), a majority of the algorithms peak in performance when utilising \(10\) clients, followed by a sharp dip when the number of clients is increased further to \(20\). This may be attributed to the severe decrease in the local client data (as the datasets are even smaller; \(\approx 10\)x smaller than the Organ datasets) which hurts the learning ability of self-supervision models in general [8, 9].
FedBN: As evident from Figure 4, the FedBN simulation results contrast with the expected behaviour (i.e., that performance should drop with an increase in \(\#clients\)). For VICReg, the general trend across the _small_ datasets is the aforementioned expected behaviour; however, for _big_ datasets such as Organ-C, we see an increase in the accuracy values with an increase in \(\#clients\). It is also important to note that out of the 4 Non-CL algorithms in Figure 4, TiCo [40] shows the _most_ consistent general trend across all the datasets. Nonetheless, from the above observations we can conjecture that local Batch Normalization plays a pivotal role in the variability of the performances of SSL algorithms for the datasets taken into consideration.
FedProx: Comparing Figure 3 and Figure 5, it can be inferred that FedProx shows very similar trends to FedAVG across a varying number of clients, which is reasonable as it only makes lightweight alterations to the latter in terms of introducing the proximal regularization term [18]. For each SSL method, accuracy values obtained under FedAVG and FedProx are very close to each other, with minor performance gains observed for TiCo [40] and VICReg [3] under FedProx.
### Contrastive vs Non-Contrastive SSL
Since our study intends to investigate the behaviour of non-contrastive SSL variants under FL setups, we seek to analyse how they fare against the contrastive SimCLR [7] - a strong self-supervision baseline for image classification tasks [2, 13, 31]. To do so, for each of the experimental setups we compare the accuracy obtained by the best-performing non-contrastive method (denoted by NCL-Best) with that of SimCLR. The plots depicting these comparisons have been provided in Figure 6, Figure 7 and Figure 8, distributed according to the number of clients considered (\(5,10\) and \(20\)). The reason for such a comparison is twofold: no _single best_ non-contrastive method exists across all setups, and it keeps our study generalised rather than specific to a particular SSL algorithm.
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**SSL Method**} & \multicolumn{3}{c|}{\(\#clients=5\)} & \multicolumn{3}{c|}{\(\#clients=10\)} & \multicolumn{3}{c}{\(\#clients=20\)} \\ & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** \\ \hline
**Barlow** & 0.648 \(\pm\) 0.069 & 0.291 \(\pm\) 0.061 & 0.644 \(\pm\) 0.081 & 0.593 \(\pm\) 0.076 & 0.327 \(\pm\) 0.059 & 0.575 \(\pm\) 0.078 & 0.559 \(\pm\) 0.067 & 0.320 \(\pm\) 0.084 & 0.558 \(\pm\) 0.066 \\
**VICReg** & **0.696 \(\pm\) 0.072** & 0.408 \(\pm\) 0.080 & **0.692 \(\pm\) 0.070** & **0.655 \(\pm\) 0.076** & 0.429 \(\pm\) 0.073 & **0.653 \(\pm\) 0.080** & **0.594 \(\pm\) 0.071** & **0.466 \(\pm\) 0.082** & **0.600 \(\pm\) 0.074** \\
**SimSiam** & 0.376 \(\pm\) 0.077 & 0.374 \(\pm\) 0.075 & 0.376 \(\pm\) 0.078 & 0.384 \(\pm\) 0.079 & 0.395 \(\pm\) 0.072 & 0.373 \(\pm\) 0.090 & 0.367 \(\pm\) 0.089 & 0.389 \(\pm\) 0.082 & 0.367 \(\pm\) 0.090 \\
**TiCo** & 0.599 \(\pm\) 0.073 & 0.463 \(\pm\) 0.074 & 0.600 \(\pm\) 0.072 & 0.570 \(\pm\) 0.061 & 0.471 \(\pm\) 0.072 & 0.572 \(\pm\) 0.060 & 0.439 \(\pm\) 0.076 & 0.414 \(\pm\) 0.080 & 0.436 \(\pm\) 0.072 \\ \hline
**SimCLR** & 0.611 \(\pm\) 0.132 & **0.495 \(\pm\) 0.081** & 0.664 \(\pm\) 0.066 & 0.635 \(\pm\) 0.070 & **0.523 \(\pm\) 0.066** & 0.626 \(\pm\) 0.066 & 0.544 \(\pm\) 0.069 & 0.440 \(\pm\) 0.084 & 0.543 \(\pm\) 0.062 \\ \hline \hline \end{tabular}
\end{table}
Table 6: \(F1-scores\) obtained on OrganC-MNIST dataset.
\begin{table}
\begin{tabular}{c|c c c|c c|c c c|c c} \hline \hline \multirow{2}{*}{**SSL Method**} & \multicolumn{3}{c|}{\(\#clients=5\)} & \multicolumn{3}{c|}{\(\#clients=10\)} & \multicolumn{3}{c}{\(\#clients=20\)} \\ & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** & **FedAVG** & **FedBN** & **FedProx** \\ \hline
**Barlow** & 0.677 \(\pm\) 0.093 & 0.310 \(\pm\) 0.072 & 0.681 \(\pm\) 0.086 & 0.606 \(\pm\) 0.048 & 0.362 \(\pm\) 0.062 & 0.605 \(\pm\) 0.048 & 0.540 \(\pm\) 0.047 & 0.360 \(\pm\) 0.070 & 0.533 \(\pm\) 0.047 \\
**VICReg** & **0.721 \(\pm\) 0.085** & 0.414 \(\pm\) 0.088 & **0.724 \(\pm\) 0.089** & **0.681 \(\pm\) 0.052** & 0.399 \(\pm\) 0.048 & **0.682 \(\pm\) 0.055** & **0.630 \(\pm\) 0.044** & **0.512 \(\pm\) 0.054** & **0.623 \(\pm\) 0.046** \\
**SimSiam** & 0.441 \(\pm\) 0.026 & 0.455 \(\pm\) 0.087 & 0.441 \(\pm\) 0.081 & 0.406 \(\pm\) 0.059 & 0.426 \(\pm\) 0.052 & 0.404 \(\pm\) 0.056 & **0.405 \(\pm\) 0.061** & 0.428 \(\pm\) 0.059 & 0.402 \(\pm\) 0.062 \\
**TiCo** & 0.643 \(\pm\) 0.082 & **0.534 \(\pm\) 0.084** & 0.647 \(\pm\) 0.081 & 0.609 \(\pm\) 0.034 & 0.512 \(\pm\) 0.045 & 0.603 \(\pm\) 0.041 & 0.501 \(\pm\) 0.049 & 0.459 \(\pm\) 0.054 & 0.478 \(\pm\) 0.044 \\ \hline
**SimCLR** & 0.653 \(\pm\) 0.067 & 0.513 \(\pm\) 0.085 & 0.697 \(\pm\) 0.086 & 0.669 \(\pm\) 0.046 & **0.519 \(\pm\) 0.048** & 0.661 \(\pm\) 0.051 & 0.588 \(\pm\) 0.050 & 0.483 \(\pm\) 0.059 & 0.583 \(\pm\) 0.047 \\ \hline \hline \end{tabular}
\end{table}
Table 5: \(F1-scores\) obtained on OrganA-MNIST dataset.
Number of clients: From the figures, it is evident that NCL-Best outperforms SimCLR in almost all setups, several with significant margins. The trend is most prominent in Figure 8 (\(\#clients=20\)), where the batch size is reduced to \(64\) (refer to Table 1) to cater to the low-data regime caused by the increased number of clients. It is well known [7, 8] that contrastive approaches work better with higher batch sizes (or a greater amount of data [11]), due to the availability of more negative samples. Moreover, from the plots, it is clear that the performance of SimCLR decreases with an increase in clients (i.e., a decrease in effective data sizes across clients), whereas non-contrastive methods, being negative-free, do not have such a dependency and thus are more stable across the number of clients, expectedly beating the former in the low-data regime. With a smaller number of clients too (Figure 6 and Figure 7), non-contrastive methods mostly prove more effective than SimCLR, but the trend is less prominent than in Figure 8. One noteworthy observation is that under the FedBN [19] setup with a lower number of clients, the contrastive approach seems more favourable than the non-contrastive ones, which is _contrary_ to the overall behaviour observed. However, in the low-data regime (Figure 8), it follows the general trend.
Dataset size: We revisit Figures 6, 7 and 8, this time keeping in mind the dataset sizes (as previously described in Section 4.1). We observe that among "small" datasets, the difference between NCL-Best and SimCLR is smaller compared to those in "big" datasets. The gaps, however, increase with the number of clients and make the trends discussed above more generalised. Furthermore, SimCLR shows greater stability on the OrganMNIST datasets compared to the smaller datasets (Pneumonia, Breast and Retina) when \(\#clients\) is varied.
From these observations, it is reasonable to infer that non-contrastive self-supervised methods [3, 37, 40] are more effective than contrastive ones under federated simulations (especially under low data availability) and thus, are worthy of greater attention and future research.
### Further analysis
SimSiam performs poorly on OrganMNIST: Studying Figure 3 and Figure 5, it can be clearly seen that SimSiam
Figure 4: Performance across various number of clients under FedBN simulation.
Figure 5: Performance across various number of clients under FedProx simulation.
Figure 3: Performance across various number of clients under FedAVG simulation.
performs much poorer than its counterparts on the 3 OrganMNIST datasets. As discussed in [17], this can be attributed to SimSiam succumbing to dimensional collapse resulting from the relatively small size of the encoder (ResNet-18) and the comparably higher data complexity (\(\approx 10\)x) of the 3 larger OrganMNIST datasets. This is further borne out by the significant performance gain we see in Figure 4 caused by a reduction in the local bias via exclusion of batch normalization layers from global model updates in FedBN [19].
FedBN aids SimCLR: In Figure 6 and Figure 7, we observe that SimCLR seems to perform comparatively better (with respect to the best non-contrastive method) under FedBN compared to the other FL algorithms. While it still doesn't beat the best-performing non-contrastive approach in all instances, the gap in performance is significantly smaller when FedBN is used. However, we also note that in Figure 8, FedBN behaves in a similar fashion as FedAVG or FedProx, thus suggesting that the underlying cause may be the batch sizes considered. We point this out as an intriguing observation which may be investigated in future studies.
## 6 Conclusion
Despite the great progress in medical image analysis, the concerns regarding privacy and the annotation bottleneck are yet to be fully resolved. In this study, we take a step forward and tackle these challenges head-on by combining federated learning with self-supervision for medical image classification. We experimented with SoTA SSL algorithms exhaustively across various FL setups, simulated by varying the number of clients, to conduct realistic non-i.i.d. simulations on the MedMNIST data suite. Our results suggest the high applicability of non-contrastive SSL methods to such tasks, which are found to outperform the contrastive baseline by fair margins. With a systematic analysis of our findings, we show the trends of different algorithms and simulations across different dataset sizes, number of clients, etc. Our holistic evaluation and benchmarking is the first of its kind, which we feel should be of great assistance to other researchers working in related domains. The inferences drawn from our findings, as well as some of the "anomalies" (e.g. surprisingly poor behaviour of SimSiam, contrasting trends of FedBN), should be of great interest to fellow researchers for further exploration. In the future, we
Figure 8: Contrastive vs non-contrastive for \(\# clients=20\).
Figure 6: Contrastive vs non-contrastive for \(\# clients=5\).
Figure 7: Contrastive vs non-contrastive for \(\# clients=10\).
plan to expand the horizon of our investigations by digging deeper into the nuances of each FL setting to better interpret the observed trends.
|
2304.04837 | Geometry of Rounding: Near Optimal Bounds and a New Neighborhood
Sperner's Lemma | A partition $\mathcal{P}$ of $\mathbb{R}^d$ is called a
$(k,\varepsilon)$-secluded partition if, for every $\vec{p} \in \mathbb{R}^d$,
the ball $\overline{B}_{\infty}(\varepsilon, \vec{p})$ intersects at most $k$
members of $\mathcal{P}$. A goal in designing such secluded partitions is to
minimize $k$ while making $\varepsilon$ as large as possible. This partition
problem has connections to a diverse range of topics, including deterministic
rounding schemes, pseudodeterminism, replicability, as well as Sperner/KKM-type
results.
In this work, we establish near-optimal relationships between $k$ and
$\varepsilon$. We show that, for any bounded measure partitions and for any
$d\geq 1$, it must be that $k\geq(1+2\varepsilon)^d$. Thus, when $k=k(d)$ is
restricted to ${\rm poly}(d)$, it follows that $\varepsilon=\varepsilon(d)\in
O\left(\frac{\ln d}{d}\right)$. This bound is tight up to log factors, as it is
known that there exist secluded partitions with $k(d)=d+1$ and
$\varepsilon(d)=\frac{1}{2d}$. We also provide new constructions of secluded
partitions that work for a broad spectrum of $k(d)$ and $\varepsilon(d)$
parameters. Specifically, we prove that, for any
$f:\mathbb{N}\rightarrow\mathbb{N}$, there is a secluded partition with
$k(d)=(f(d)+1)^{\lceil\frac{d}{f(d)}\rceil}$ and
$\varepsilon(d)=\frac{1}{2f(d)}$. These new partitions are optimal up to
$O(\log d)$ factors for various choices of $k(d)$ and $\varepsilon(d)$. Based
on the lower bound result, we establish a new neighborhood version of Sperner's
lemma over hypercubes, which is of independent interest. In addition, we prove
a no-free-lunch theorem about the limitations of rounding schemes in the
context of pseudodeterministic/replicable algorithms. | Jason Vander Woude, Peter Dixon, A. Pavan, Jamie Radcliffe, N. V. Vinodchandran | 2023-04-10T19:51:14Z | http://arxiv.org/abs/2304.04837v1 | # Geometry of Rounding: Near Optimal Bounds and a New Neighborhood Sperner's Lemma +
###### Abstract
A partition \(\mathcal{P}\) of \(\mathbb{R}^{d}\) is called a \((k,\varepsilon)\)-secluded partition if, for every \(\vec{p}\in\mathbb{R}^{d}\), the ball \(\overline{B}_{\infty}(\varepsilon,\vec{p})\) intersects at most \(k\) members of \(\mathcal{P}\). A goal in designing such secluded partitions is to minimize \(k\) while making \(\varepsilon\) as large as possible. This partition problem has connections to a diverse range of topics, including deterministic rounding schemes, pseudodeterminism, replicability, as well as Sperner/KKM-type results.
In this work, we establish near-optimal relationships between \(k\) and \(\varepsilon\). We show that, for any bounded measure partitions and for any \(d\geq 1\), it must be that \(k\geq(1+2\varepsilon)^{d}\). Thus, when \(k=k(d)\) is restricted to \(\operatorname{poly}(d)\), it follows that \(\varepsilon=\varepsilon(d)\in O\left(\frac{\ln d}{d}\right)\). This bound is tight up to log factors, as it is known that there exist secluded partitions with \(k(d)=d+1\) and \(\varepsilon(d)=\frac{1}{2d}\). We also provide new constructions of secluded partitions that work for a broad spectrum of \(k(d)\) and \(\varepsilon(d)\) parameters. Specifically, we prove that, for any \(f:\mathbb{N}\to\mathbb{N}\), there is a secluded partition with \(k(d)=(f(d)+1)^{\left\lceil\frac{d}{f(d)}\right\rceil}\) and \(\varepsilon(d)=\frac{1}{2f(d)}\). These new partitions are optimal up to \(O(\log d)\) factors for various choices of \(k(d)\) and \(\varepsilon(d)\). Based on the lower bound result, we establish a new neighborhood version of Sperner's lemma over hypercubes, which is of independent interest. In addition, we prove a no-free-lunch theorem about the limitations of rounding schemes in the context of pseudodeterministic/replicable algorithms.
We use techniques from measure theory and the geometry of numbers, including the generalized Brunn-Minkowski inequality, the isodiametric inequality, and Blichfeldt's theorem, to establish our results.
###### Contents
* 1 Introduction
* 1.1 Motivation
* 1.2 Our Contributions
* 1.2.1 A Lower Bound on the Degree
* 1.2.2 A Neighborhood Sperner/KKM/Lebesgue Theorem
* 1.2.3 New Constructions
* 1.2.4 A No-Free-Lunch Theorem
* 1.3 Organization
* 2 Proof Outlines of Main Results
* 2.1 Proof Outline for the Lower Bound on the Degree
* 2.2 Proof Outline of the Neighborhood Sperner/KKM/Lebesgue Theorem
* 2.3 Proof Outline of the Construction Result
* 2.4 Proof Outline of the No-Free-Lunch Theorem
* 3 Notation
* 4 Lower Bound on the Degree Parameter
* 4.1 Prerequisite Results
* 4.2 Proofs of Theorem 1.5, Corollary 1.6, and Theorem 1.4
* 4.3 Specific Norms
* 5 A Neighborhood Sperner/KKM/Lebesgue Theorem
* 6 New Constructions
* 6.1 Construction
* 7 A No-Free-Lunch Theorem
* A Measure Theory
* B Minkowski Sums
* C Additional Facts
## 1 Introduction
We investigate the following secluded partition problem. We use the notation \(\overline{B}_{\left\lVert\cdot\right\rVert}(\varepsilon,\vec{p})\) and \(B^{\circ}_{\left\lVert\cdot\right\rVert}(\varepsilon,\vec{p})\) to indicate, respectively, the closed or open ball of radius \(\varepsilon\) centered at \(\vec{p}\) for a general norm, and we will use \(\overline{B}_{\infty}(\varepsilon,\vec{p})\) and \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) when the norm is the \(\ell_{\infty}\) norm.
**Definition 1.1** ([21]).: _A partition \(\mathcal{P}\) of \(\mathbb{R}^{d}\) is called a \((k,\varepsilon)\)-secluded partition if for every \(\vec{p}\in\mathbb{R}^{d}\), the ball \(\overline{B}_{\infty}(\varepsilon,\vec{p})\) intersects at most \(k\) members of \(\mathcal{P}\). The parameters \(k\) and \(\varepsilon\) are called the degree and tolerance respectively._
Ideally, we would like to construct partitions with degree \((k)\) as small as possible and tolerance \((\varepsilon)\) as large as possible. However, there is a natural trade-off between these two parameters. It is easy to see that for the standard grid partition of \(\mathbb{R}^{d}\) with half-open unit hypercubes, we can have \(k=2^{d}\) and \(\varepsilon=\frac{1}{2}\). Somewhat surprisingly, \(k\) can be made exponentially smaller at the expense of making \(\varepsilon\) (polynomially) small. In particular, it is known that there is a unit hypercube partition of \(\mathbb{R}^{d}\) with \(k=d+1\) and \(\varepsilon=\frac{1}{2d}\)[21]. Also, a certain rounding scheme given in [10] induces a \((d+1,\frac{1}{6(d+1)})\)-secluded partition (with members which are not hypercubes). It is known that for \(\varepsilon>0\), any \((k,\varepsilon)\)-secluded partition of \(\mathbb{R}^{d}\) with at most unit diameter members (in \(\ell_{\infty}\)) must have \(k\geq d+1\). In addition, if \(k=d+1\) then it must be that \(\varepsilon\leq\frac{2}{\sqrt{d}}\)[21]. These are the only known construction and impossibility results for secluded partitions.
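For intuition, the following small sketch (ours, not from the cited works) enumerates the members of the standard grid partition met by a closed \(\ell_{\infty}\) ball:

```python
import itertools, math

def grid_cells_hit(p, eps):
    """Half-open unit cubes of the standard grid partition of R^d that
    intersect the closed l_inf ball of radius eps around p; a cube is
    identified by its integer corner c, i.e. the member
    [c_1, c_1 + 1) x ... x [c_d, c_d + 1)."""
    ranges = [range(math.floor(x - eps), math.floor(x + eps) + 1) for x in p]
    return set(itertools.product(*ranges))

print(len(grid_cells_hit((0.0, 0.0), 0.5)))  # 4 = 2^2 members in d = 2
```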
The following questions naturally arise. (1) Can we design a secluded partition with degree \(k=k(d)\in\mathsf{poly}(d)\) with tolerance \(\varepsilon=\varepsilon(d)\in\omega(\frac{1}{d})\)? (2) Does allowing \(k(d)\) to be sub-exponential in \(d\) allow for \(\varepsilon(d)\) to be a constant? (3) More generally, and very fundamentally, what is the trade-off between the degree and the tolerance parameters in secluded partitions? In this work, we establish a near-optimal trade-off between these two parameters.
### Motivation
Investigating constructions and trade-offs for secluded partitions is a fundamental question in its own right. However, its applicability to various topics in theoretical computer science is also a significant motivation for investigation.
_Deterministic Rounding:_ Rounding schemes are fundamental in computer science with applications in approximation algorithms, pseudorandomness, replicability, and differential privacy [11, 12, 13, 14, 15]. There is a natural equivalence between rounding schemes and partitions of Euclidean space in a very general sense that has been observed in prior works [1, 13, 21]. In particular, certain deterministic rounding schemes are equivalent to secluded partitions.
**Definition 1.2**.: _A deterministic rounding scheme is a family of functions \(\mathcal{F}=\{f_{d}\}_{d\in\mathbb{N}}\) where \(f_{d}:\mathbb{R}^{d}\to\mathbb{R}^{d}\). \(\mathcal{F}\) is called a \((k(d),\varepsilon(d))\)-deterministic rounding scheme if two properties hold for each \(d\in\mathbb{N}\): (1) for all \(\vec{x}\in\mathbb{R}^{d}\), \(\|f_{d}(\vec{x})-\vec{x}\|_{\infty}\leq\frac{1}{2}\)a, and (2) for all \(\vec{p}\in\mathbb{R}^{d}\) the set \(\left\{f_{d}(\vec{x})\colon\vec{x}\in\overline{B}_{\infty}(\varepsilon(d), \vec{p})\right\}\) has cardinality at most \(k(d)\)._
Footnote a: The value \(\frac{1}{2}\) in the above definition is arbitrary and can be replaced with another constant by appropriately scaling other parameters.
Let \(f:\mathbb{R}^{d}\to\mathbb{R}^{d}\) be a rounding function. For any \(\vec{y}\in\mathbb{R}^{d}\), let \(X_{\vec{y}}=\left\{\vec{x}\in\mathbb{R}^{d}\colon f(\vec{x})=\vec{y}\right\}\) denote the points rounded to \(\vec{y}\). Let \(\mathcal{P}_{f}=\left\{X_{\vec{y}}\colon\vec{y}\in\operatorname{range}(f)\right\}\) which is clearly a partition of \(\mathbb{R}^{d}\).
Conversely, a partition \(\mathcal{P}\) of \(\mathbb{R}^{d}\) induces (many) deterministic rounding functions \(f_{\mathcal{P}}\) as follows: for each member \(X\in\mathcal{P}\) let \(\vec{p}_{X}\in\mathbb{R}^{d}\) (often \(\vec{p}_{X}\in X\)) be some fixed representative of the member \(X\). Then the rounding function \(f_{\mathcal{P}}\) maps any point \(\vec{x}\in X\) to \(\vec{p}_{X}\). This leads to the following observation [20].
**Observation 1.3** (Equivalence of Rounding Schemes and Partitions).: _A \((k(d),\varepsilon(d))\)-deterministic rounding scheme induces, for each \(d\in\mathbb{N}\), a \((k(d),\varepsilon(d))\)-secluded partition of \(\mathbb{R}^{d}\) in which each member has diameter at most \(1\). Conversely, a sequence \(\langle\mathcal{P}_{d}\rangle_{d=1}^{\infty}\) of partitions where \(\mathcal{P}_{d}\) is \((k(d),\varepsilon(d))\)-secluded and contains only members of diameter at most \(1\) induces many \((k(d),\varepsilon(d))\)-deterministic rounding schemes._
The observation provides a geometric perspective on rounding schemes. By investigating secluded partitions, this work examines the concept of rounding from a geometric viewpoint and explores its possibilities and limitations. Rounding schemes and secluded partitions are related to the notion of pseudodeterminism which we discuss next.
_Pseudodeterminism and Replicability:_ Due to its unexpected links with computational complexity theory [21, 22, 23, 24], differential privacy [1, 1, 19], and reproducible learning [10], the concept of pseudodeterminism and its various forms have been gaining increased attention in theoretical computer science. A probabilistic algorithm \(M\) is _pseudodeterministic_ if, for every input \(x\), there is a canonical value \(v_{x}\) such that \(M(x)\) outputs \(v_{x}\) with high probability. This notion can be extended to \(k\)-pseudodeterminism where for every \(x\), there is a canonical set \(S_{x}\) of size at most \(k\) and the output of \(M(x)\) belongs to \(S_{x}\) with high probability. Pseudodeterminism naturally captures the notion of reproducibility/replicability in computations. The concept of pseudodeterminism and its extensions have been applied to learning and privacy, leading to the seminal discovery of the equivalence between private PAC learning and Littlestone dimension [1, 1].
One method for creating a \(k\)-pseudodeterministic algorithm is through the use of rounding [1, 20, 21]. Let \(A\) be a probabilistic algorithm that approximates a function \(f\) whose range is \([0,1]^{d}\) with an additive error \(\nu\) in \(\ell_{\infty}\) norm. In general, the number of possible outcomes of such an approximation algorithm could be very large. However, by rounding each coordinate of \(A(x)\) to the nearest multiple of \(\nu\), we can create a \(2^{d}\)-pseudodeterministic algorithm with an error of \(2\nu\). In general, we can use a \((k(d),\varepsilon(d))\)-deterministic rounding scheme (equivalently by Observation 1.3, a unit diameter \((k(d),\varepsilon(d))\)-secluded partition) to obtain a \(k(d)\)-pseudodeterministic algorithm with an approximation error of \(O(\nu/\varepsilon)\). Using the known \((d+1,\frac{1}{2d})\)-secluded partition, this implies that any \(\nu\)-approximation algorithm can be converted to a \((d+1)\)-pseudodeterministic algorithm with an _approximation error blown up to \(O(\nu d)\)_. Thus, to achieve a final approximation error of \(\varepsilon^{\prime}\), the original approximation algorithm \(A\) must start with a smaller error of \(O(\varepsilon^{\prime}/d)\). Since the running time of approximation algorithms typically depends on the desired quality of the approximation, this leads to larger run times for approximation algorithms (or sample complexity in the context of learning). A critical question is whether this _linear blowup_ in the approximation error is necessary. Can we convert \(A\) into a \(\mathsf{poly}(d)\)-pseudodeterministic algorithm with only a smaller approximation-error blowup, say \(d^{\gamma}\), for some \(0<\gamma<1\)? A trade-off result that we establish between the degree and the tolerance for secluded partitions leads to a _no-free-lunch_ theorem: for any generic rounding scheme that produces a \(k(d)\)-pseudodeterministic algorithm, there must be a degradation in approximation quality by a factor of at least \(\Omega(\frac{d}{\ln k(d)})\). Consequently, if \(k(d)\in\mathsf{poly}(d)\), then the approximation accuracy must decrease by at least a factor of \(\Omega(\frac{d}{\ln d})\).
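A sketch of the coordinate-wise rounding just described (the function name is ours):

```python
import numpy as np

def grid_round(v, nu):
    """Round every coordinate of an approximation v to the nearest
    multiple of nu; rounding moves each coordinate by at most nu / 2, so
    a nu-accurate approximation stays O(nu)-accurate, while all runs of
    the approximator land in a small canonical set of grid points."""
    return nu * np.round(np.asarray(v, dtype=float) / nu)

# Two nu-accurate runs that disagree can still round identically:
print(grid_round([0.42, 0.77], 0.1), grid_round([0.44, 0.83], 0.1))
```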
_Connection to Sperner's Lemma and the KKM Lemma:_ Sperner's lemma and the KKM lemma have found applications in theoretical computer science [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. Somewhat surprisingly to the authors, our investigation to establish the trade-off between the degree and the tolerance parameters of secluded partitions has led to the discovery of a new "neighborhood" version of the Sperner/KKM lemma. This new lemma has already been useful in establishing new results on fair-division problems (ongoing work). We expect that the secluded partition problem and the results we establish will have further applications in other contexts where the Sperner/KKM lemma has been utilized.
To conclude, exploring the constructions and trade-offs associated with secluded partitions is an essential inquiry in its own regard. The fact that it has implications across a variety of subjects within theoretical computer science is an additional driving force for further investigation.
### Our Contributions
Our work makes four key contributions. Firstly, we establish a lower bound on the degree parameter \(k\) in terms of the tolerance parameter \(\varepsilon\), for any \((k,\varepsilon)\)-secluded partition. This result is established in a very general setting and works for _any_ norm in Euclidean space. Secondly, using the techniques developed in the proof of the degree lower bound result, we establish a new "neighborhood" variant of the Sperner/KKM lemma that is of independent interest. Thirdly, we give a new generic construction of \((k,\varepsilon)\)-secluded partitions. This construction together with the degree lower bound result establishes near optimality of the tolerance parameter for various natural choices of degree parameter. Finally, we establish a no-free-lunch theorem for multi-pseudodeterministic computations: for any generic rounding method that produces a \(k(d)\)-pseudodeterministic algorithm for a function whose range is \(\mathbb{R}^{d}\), there must be an almost linear degradation in approximation quality.
#### 1.2.1 A Lower Bound on the Degree
We state the result in terms of bounded-measure partitions and will discuss later why we use measure instead of diameter. For now, it is enough to note that for the \(\ell_{\infty}\) norm, unit diameter implies unit outer measure (see Fact A.1 with \(D=1\) and \(M=1\)), so any results stated for partitions with members of outer measure at most \(1\) immediately implies the same result for partitions with members of \(\ell_{\infty}\) diameter at most \(1\).
**Theorem 1.4**.: _Let \(d\in\mathbb{N}\), \(\varepsilon\in[0,\infty)\), and \(\mathcal{P}\) a partition of \(\mathbb{R}^{d}\) such that every member has outer Lebesgue measure at most \(1\). Then there exists some \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects at least \((1+2\varepsilon)^{d}\) members of \(\mathcal{P}\). Thus, if \(\mathcal{P}\) is a \((k,\varepsilon)\)-secluded partition, then \(k\geq(1+2\varepsilon)^{d}\). Consequently, if \(k\leq 2^{d}\), then it must be that \(\varepsilon\leq\frac{\ln k}{d}\)._
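For the final implication, a short derivation (supplied here for intuition): \(k\leq 2^{d}\) forces \(\varepsilon\leq\frac{1}{2}\), and \(\ln(1+2\varepsilon)\geq\varepsilon\) holds on \([0,\frac{1}{2}]\) by concavity, so

\[\ln k\;\geq\;d\ln(1+2\varepsilon)\;\geq\;d\varepsilon\qquad\Longrightarrow\qquad\varepsilon\leq\frac{\ln k}{d}.\]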
Thus even if \(\varepsilon(d)\) is an arbitrarily small constant, the degree \(k(d)\) has to be made exponential in \(d\). Also, it follows that for \(k(d)\) to be \(\mathsf{poly}(d)\), we must have \(\varepsilon(d)\in O(\frac{\ln d}{d})\). This establishes that the \((d+1,\frac{1}{2d})\) partitions from [23] are optimal up to a log factor. This also implies that relaxing \(k(d)=d+1\) to \(k(d)\in\mathsf{poly}(d)\) can at best contribute \(O(\ln d)\) factor improvements in \(\varepsilon(d)\).
Our proof of Theorem 1.4 does not rely on any specific properties of the \(\ell_{\infty}\) norm. In particular, we establish the following result that holds for _any norm_ in \(\mathbb{R}^{d}\), and Theorem 1.4 follows from this as a corollary.
**Theorem 1.5** (\(\varepsilon\)-Neighborhoods for Measure Bounded Partitions and Arbitrary Norm).: _Let \(d\in\mathbb{N}\), \(M\in(0,\infty)\), and \(\mathcal{P}\) a partition of \(\mathbb{R}^{d}\) such that every member has outer Lebesgue measure at most \(M\). Let \(\mathbb{R}^{d}\) be equipped with any norm \(\left\lVert\cdot\right\rVert\). For every \(\varepsilon\in(0,\infty)\), there exists \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\left\lVert\cdot\right\rVert}(\varepsilon,\vec{p})\) intersects at least \(k=\left\lceil\left(1+\varepsilon\left(\frac{v_{\left\lVert\cdot\right\rVert,d}}{M}\right)^{1/d}\right)^{d}\right\rceil\) members of the partition where \(v_{\left\lVert\cdot\right\rVert,d}\stackrel{{\mathrm{def}}}{{=}}m\left(B^{\circ}_{\left\lVert\cdot\right\rVert}(1,\vec{0})\right)\) is the measure of the \(\left\lVert\cdot\right\rVert\) unit ball._
For the \(\ell_{\infty}\) norm, \(v_{\left\lVert\cdot\right\rVert,d}\) is \(2^{d}\). Thus, Theorem 1.4 is a corollary of the above theorem with the \(\ell_{\infty}\) norm and \(M=1\). Using the isodiametric inequality, we obtain the following diameter version as a corollary.
**Corollary 1.6** (\(\varepsilon\)-Neighborhoods for Diameter Bounded Partitions).: _Let \(d\in\mathbb{N}\), \(D\in(0,\infty)\), \(\varepsilon\in(0,\infty)\). Let \(\mathbb{R}^{d}\) be equipped with any norm \(\left\lVert\cdot\right\rVert\). Let \(\mathcal{P}\) be a partition of \(\mathbb{R}^{d}\) such that every member has diameter at most \(D\) (with respect to \(\left\lVert\cdot\right\rVert\)). Then there exists \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\left\lVert\cdot\right\rVert}(\varepsilon,\vec{p})\) intersects at least \(k=\left\lceil\left(1+\frac{2\varepsilon}{D}\right)^{d}\right\rceil\) members of the partition._
The expression \(\frac{2\varepsilon}{D}\) in the above result should be viewed as a normalization factor which is the ratio of the diameter of the \(\varepsilon\)-ball to the diameter of the members of the partition. The expression \(\varepsilon\left(\frac{v_{d}}{M}\right)^{1/d}=\frac{(\varepsilon^{d}\cdot v_{d})^{1/d}}{M^{1/d}}\) should also be viewed as a normalization factor. It is typical in measure theory contexts to see the \(d\)th roots of measures in \(d\) dimensions show up, and they often serve as a type of characteristic length scale and are sometimes more robust than actual distances. Thus, this expression should be viewed as the ratio of the characteristic length scale of the \(\varepsilon\) ball (which has measure \(\varepsilon^{d}\cdot v_{d}\)) to the characteristic length scale of the members of the partition (which have measure at most \(M\)).
_A Remark about Measure:_ Though rounding schemes are equivalent to partitions with bounded _diameter_, we have stated the above theorems with respect to partitions with bounded _outer measure_. As mentioned previously, in the case of the \(\ell_{\infty}\) norm, this is a strict generalization. Furthermore, the measure perspective allows us to employ powerful tools from measure theory and the geometry of numbers such as Blichfeldt's theorem and the Generalized Brunn-Minkowski Inequality (Theorem 4.5) for establishing these results. This also allows us to establish the lower bound result with respect to any norm. Finally, we can convert measure results to diameter results very tightly using the Isodiametric Inequality (Theorem 4.9).
#### 1.2.2 A Neighborhood Sperner/KKM/Lebesgue Theorem
We use the techniques developed in establishing Theorem 1.5 to prove a new variant of Sperner's lemma on the cube. Our result bears more resemblance to the KKM lemma--known to be equivalent to Sperner's lemma in a natural way. The KKM Lemma states that for a covering/coloring of the \(d\)-simplex satisfying certain properties, there is some point in the closure of at least \(d+1\) sets/colors. There is also a version of the KKM lemma for the cube [13, 14, 15] which can be proven either independently or as a consequence of versions of Sperner's lemma for the cube [10]. Though it is a lesser-known result, the Lebesgue covering theorem (c.f. [15, Theorem IV 2]) says the same thing for the cube. These results are summarized by the following theorem.
**Theorem 1.7** (Sperner/KKM/Lebesgue).: _Given a coloring of \([0,1]^{d}\) by finitely many colors in which no color includes points of opposite faces, there exists a point in the closure of at least \(d+1\) different colors._
In some versions of the above lemma, there are exactly \(2^{d}\) colors (one for each corner/vertex) and in some others, the coloring can assign multiple colors to a single point so that it is a covering rather than a partitioning by colors, but both of these differences are inconsequential. The finiteness is required to guarantee a point in the closure, but even without the finiteness, there exists a point such that for any \(\varepsilon>0\) the \(\varepsilon\) ball (in any norm) intersects at least \(d+1\) colors. In this spirit, our new variant of this Sperner/KKM/Lebesgue result considers not a point but instead a small \(\varepsilon\) ball in the \(\ell_{\infty}\) norm. Thus we call it the _Neighborhood Sperner Lemma_.
**Theorem 1.8** (Neighborhood Sperner lemma).: _Given a coloring of \([0,1]^{d}\) in which no color includes points of opposite faces, then for any \(\varepsilon\in(0,\frac{1}{2}]\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(\left(1+\frac{2}{3}\varepsilon\right)^{d}\) different colors._
#### 1.2.3 New Constructions
Currently, we know of only two types of secluded partitions--the standard grid partition which is \((2^{d},\frac{1}{2})\)-secluded and the two different \((d+1,O(\frac{1}{d}))\)-secluded partitions of [22, 18]. We do not know of partitions with other values of \(k(d)\) and \(\varepsilon(d)\). For example, no constructions are known, say, for the parameter \(\varepsilon(d)=\frac{1}{\ln d}\) or for the parameter \(k(d)\in\mathsf{poly}(d)\) or \(k(d)\) sub-exponential in \(d\). The grid partition is optimal in the tolerance parameter and the \((d+1,\frac{1}{2d})\) partitions are optimal in the degree parameter. By using these partitions as building blocks, we provide new constructions of secluded partitions that work for a range of degree and tolerance parameters. We establish the following generic construction result:
**Theorem 1.9**.: _Let \(f:\mathbb{N}\to\mathbb{N}\) be any function. For each \(d\in\mathbb{N}\), there exists a \((k(d),\varepsilon(d))\)-secluded unit cube partition of \(\mathbb{R}^{d}\) where \(k(d)=(f(d)+1)^{\lceil\frac{d}{f(d)}\rceil}\) and \(\varepsilon(d)=\frac{1}{2f(d)}\)._
Using this theorem we obtain secluded partitions for various choices of \(k(d)\) and \(\varepsilon(d)\). For example, for any \(\varepsilon(d)\in o(1)\), we can achieve \(k(d)\in 2^{o(d)}\). For any \(\varepsilon(d)\in O(\frac{1}{d})\) (even if \(\varepsilon(d)\) is larger than the values allowed by [22, 18]) we can still obtain \(k(d)\in\mathsf{poly}(d)\). For \(\varepsilon(d)=\frac{\ln^{\ell}d}{d}\), we get that \(k(d)=2^{\log^{\ell+1}d}\).
These constructions are _near optimal_. For instance when \(k(d)\in\mathsf{poly}(d)\), Theorem 1.4 implies that \(\varepsilon(d)\in O(\frac{\ln d}{d})\), and when \(k=2^{\log^{\ell+1}d}\), it must be that \(\varepsilon(d)\in O(\frac{\ln^{\ell+1}d}{d})\). Thus, the value of \(\varepsilon\) achieved by the construction is optimal up to an \(O(\ln d)\) factor.
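As a quick sanity check on these instantiations, the following sketch (ours, illustrative only) evaluates the parameters promised by Theorem 1.9 for a few choices of \(f\):

```python
import math

def construction_params(d: int, f_d: int):
    # Theorem 1.9: k(d) = (f(d)+1)^ceil(d/f(d)),  eps(d) = 1/(2 f(d))
    k = (f_d + 1) ** math.ceil(d / f_d)
    eps = 1 / (2 * f_d)
    return k, eps

d = 100
for label, f_d in [("f(d) = d", d),
                   ("f(d) = isqrt(d)", math.isqrt(d)),
                   ("f(d) = ceil(d / log2 d)", math.ceil(d / math.log2(d)))]:
    k, eps = construction_params(d, f_d)
    print(f"{label:24s} k = {float(k):.3e}, eps = {eps:.4f}")
```

The first choice recovers the \((d+1,\frac{1}{2d})\) regime; the other two trade a larger degree for a larger tolerance.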
#### 1.2.4 A No-Free-Lunch Theorem
Secluded partitions (deterministic rounding schemes) are used as a generic tool to design \(k\)-pseudodeterministic algorithms. However, as discussed in the introduction, this results in a linear (in \(d\)) blowup of the approximation error. We next establish that this loss is inevitable for generic methods.
In the statement below, the notation \(A\circ B\) indicates the composition of algorithm \(A\) after algorithm \(B\). An \((\varepsilon,\delta)\)-approximation algorithm for a function \(f\) with respect to some notion of distance is a randomized (or deterministic) algorithm that on every valid input \(x\) returns with probability at least \(1-\delta\) some value that is distance at most \(\varepsilon\) from \(f(x)\). An algorithm \(A\) is \((k,\delta)\)-pseudodeterministic if for every \(x\), there is a set \(S_{x}\) of size at most \(k\) such that \(A(x)\in S_{x}\) with probability at least \(1-\delta\).
**Theorem 1.10**.: _Let \(d,k\in\mathbb{N}\) and \(\varepsilon_{0}\in(0,\infty)\) and \(\delta\in(0,\frac{1}{2}]\) be fixed, and let \(\|\cdot\|\) be a norm on \(\mathbb{R}^{d}\). Suppose there exists \(\varepsilon\in(0,\infty)\) and a deterministic algorithm \(A\) mapping inputs in \(\mathbb{R}^{d}\) to outputs in \(\mathbb{R}^{d}\) with the following universal black box property:_
**Property:** _For any set \(\Lambda\) (indicating some problem domain) and function \(f:\Lambda\to\mathbb{R}^{d}\) and \((\varepsilon_{0},\delta)\)-approximation algorithm \(B\) for \(f\) (with respect to \(\|\cdot\|\)), it holds that \(A\circ B\) is a \((k,\delta)\)-pseudodeterministic \((\varepsilon,\delta)\)-approximation algorithm for \(f\) (again with respect to \(\|\cdot\|\))._
_Then \(\varepsilon\geq\varepsilon_{0}\cdot\frac{d}{4\ln(2k)}\)._
This theorem demonstrates that when \(k(d)\in\mathsf{poly}(d)\), a blowup of at least \(\Omega\left(\frac{d}{\ln d}\right)\) is required of generic methods.
### Organization
The rest of the document is organized as follows. Section 2 contains proof outlines of our results. Section 3 introduces the necessary notation. The rest of the sections are devoted to complete proofs. Section 4 contains proofs of degree \((k)\) lower bound results in terms of tolerance \((\varepsilon)\): Theorem 1.5, Corollary 1.6, and Theorem 1.4. Section 5 contains a proof of the neighborhood Sperner lemma (Theorem 1.8). Section 6 presents constructions of new secluded partitions (Theorem 1.9). Finally, Section 7 proves the no-free-lunch result (Theorem 1.10).
## 2 Proof Outlines of Main Results
The main technical tools that we use come from measure theory and the geometry of numbers and include the Generalized Brunn-Minkowski Inequality (Theorem 4.5), the Isodiametric Inequality (Theorem 4.9), and the ideas from the proof of Blichfeldt's theorem (Proposition A.9). The generalized Brunn-Minkowski inequality gives a lower bound on the measure of a Minkowski sum of sets \((A+B=\left\{\vec{a}+\vec{b}\colon\vec{a}\in A,\vec{b}\in B\right\})\) based on the measures of those sets. The isodiametric inequality gives a very generic way to convert from upper bounds on the diameters of sets to upper bounds on measures in a tight manner. A common technique in the proof of Blichfeldt's theorem is to use an averaging argument to show that if a set \(A\) is covered by a large family of other sets, then some point in \(A\) is covered many times.
### Proof Outline for the Lower Bound on the Degree
We discuss the main techniques to prove Theorem 1.5. Since the theorem holds for any norm, the proof uses no specific properties of any norm. However, to give the main ideas of the proof here and avoid technical details, we will use the visual of Figure 1 and focus on the \(\ell_{\infty}\) norm (whose balls are high-dimensional cubes) and assume that the bound on the measures is \(M=1\). Since the volume of the unit ball in \(\mathbb{R}^{d}\) with respect to \(\ell_{\infty}\) is \(v_{d}=2^{d}\), Theorem 1.5 reduces to Theorem 1.4 mentioned in the introduction and restated below.
**Theorem 1.4**.: _Let \(d\in\mathbb{N}\), \(\varepsilon\in[0,\infty)\), and \(\mathcal{P}\) a partition of \(\mathbb{R}^{d}\) such that every member has outer Lebesgue measure at most \(1\). Then there exists some \(\vec{p}\in\mathbb{R}^{d}\) such that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \((1+2\varepsilon)^{d}\) members of \(\mathcal{P}\). Thus, if \(\mathcal{P}\) is a \((k,\varepsilon)\)-secluded partition, then \(k\geq(1+2\varepsilon)^{d}\). Consequently, if \(k\leq 2^{d}\), then it must be that \(\varepsilon\leq\frac{\ln k}{d}\)._
Proof Outline.: The goal is to find some point \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects at least \((1+2\varepsilon)^{d}\) members of the partition. Instead of directly trying to establish this, we take a critical change of perspective: for any \(\vec{p}\in\mathbb{R}^{d}\) and \(X\in\mathcal{P}\) (or really any \(X\subseteq\mathbb{R}^{d}\)), it holds that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\cap X\neq\emptyset\) if and only if \(\vec{p}\in\bigcup_{\vec{x}\in X}B^{\circ}_{\infty}(\varepsilon,\vec{x})\)1. Thus, what we do is to "replace" every member \(X\) of the partition with the enlarged set \(\bigcup_{\vec{x}\in X}B^{\circ}_{\infty}(\varepsilon,\vec{x})\) and try to find a point \(\vec{p}\) that belongs to at least \((1+2\varepsilon)^{d}\) of these enlarged sets. To achieve this, we take inspiration from a common proof of Blichfeldt's theorem--specifically, the following result which says that if we have a collection of sets \(A_{1},A_{2},A_{3},\ldots\) which are subsets of another set \(S\), then there is a point in \(S\) occurring in multiple \(A_{i}\)s provided together the \(A_{i}\)s have enough volume/measure. We can in fact give a lower bound on the number of sets \(A_{i}\) to which such a point belongs. The following is the formal claim of this known result.
Footnote 1: If \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects \(X\), then there is some point \(\vec{x}\) in the intersection of these sets which means \(\vec{p}\) and \(\vec{x}\) are distance at most \(\varepsilon\) apart, and we could also say \(\vec{p}\in B^{\circ}_{\infty}(\varepsilon,\vec{x})\). Then trivially \(\vec{p}\in\bigcup_{\vec{x}\in X}B^{\circ}_{\infty}(\varepsilon,\vec{x})\). The converse is similar. If \(\vec{p}\in\bigcup_{\vec{x}\in X}B^{\circ}_{\infty}(\varepsilon,\vec{x})\), then for some fixed \(\vec{x}\in X\), \(\vec{p}\in B^{\circ}_{\infty}(\varepsilon,\vec{x})\). Again, this means that \(\vec{p}\) and \(\vec{x}\) are distance at most \(\varepsilon\) apart, so \(\vec{x}\in B^{\circ}_{\infty}(\varepsilon,\vec{p})\). Since \(\vec{x}\in X\) also, we have that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) and \(X\) intersect.
**Proposition A.9** (Continuous Multi-Pigeonhole Principle).: _Let \(d\in\mathbb{N}\) and \(S\subset\mathbb{R}^{d}\) be bounded and measurable. Let \(\mathcal{A}\) be a family of measurable subsets of \(S\), and let \(k=\left\lceil\frac{\sum_{A\in\mathcal{A}}m(A)}{m(S)}\right\rceil\). Then if \(k<\infty\), there exists \(\vec{p}\in S\) such that \(\vec{p}\) belongs to at least \(k\) members of \(\mathcal{A}\). (And if \(k=\infty\), then for any \(n\in\mathbb{N}\) there exists \(\vec{p}^{(n)}\in S\) such that \(\vec{p}^{(n)}\) belongs to at least \(n\) members of \(\mathcal{A}\).)_
There is an immediate issue we have to deal with to be able to use Proposition A.9 for our application. We would like to take \(\mathcal{A}\) to be the collection of enlarged partition members: \(\mathcal{A}=\left\{\bigcup_{\vec{x}\in X}B^{\circ}_{\infty}(\varepsilon,\vec{ x})\right\}_{X\in\mathcal{P}}\), but then all we know is that each of these is a subset of \(S=\mathbb{R}^{d}\) which is not bounded. This is a simple enough issue to deal with using a standard measure theory technique of considering instead a sequence \(S_{1},S_{2},S_{3},\ldots\) of sets which _are_ bounded and get larger and larger so that \(\bigcup_{n=1}^{\infty}S_{n}=\mathbb{R}^{d}\); we work with each of these sets individually and then try to use a limiting argument to pass the result back to \(S=\mathbb{R}^{d}\). Specifically, we will take \(S_{n}=[-n,n]^{d}\) as illustrated in the first 2 panes of Figure 1. The third pane of Figure 1 illustrates that we will specifically consider the partition of \(S_{n}\) induced by \(\mathcal{P}\) which we denote by \(\mathscr{S}_{n}\). That is, the induced partition \(\mathscr{S}_{n}\) is the set \(\{X\cap S_{n}\colon X\in\mathcal{P}\text{ and }X\cap S_{n}\neq\emptyset\}\). Then for each \(S_{n}\) we consider a collection \(\mathcal{A}_{n}\) of the enlarged members of the induced partition: \(\mathcal{A}_{n}=\{A_{Y}\}_{Y\in\mathscr{S}_{n}}\) where \(A_{Y}\mathop{=}\limits^{\text{def}}\bigcup_{\vec{y}\in Y}B^{\circ}_{\infty}( \varepsilon,\vec{y})\). Note that each \(A_{Y}\) is a subset of \(S^{\prime}_{n}\mathop{=}\limits^{\text{def}}[-(n+\varepsilon),n+\varepsilon]^{d}\).
However, there remains one other issue to deal with to utilize Proposition A.9--for each \(n\), we have to have some lower bound on the expression \(\left\lceil\frac{\sum_{A_{Y}\in\mathcal{A}_{n}}m(A_{Y})}{m(S^{\prime}_{n})}\right\rceil\). We know that \(m(S^{\prime}_{n})=(2(n+\varepsilon))^{d}\), and using the fact that \(\mathscr{S}_{n}\) is a partition of \(S_{n}=[-n,n]^{d}\), we have the following if \(\mathscr{S}_{n}\) is countable2 (meaning finite or countably infinite):
Footnote 2: If \(\mathscr{S}_{n}\) is uncountable, then the first equality below becomes an inequality, but it becomes the wrong inequality: \(\sum_{Y\in\mathscr{S}_{n}}m(Y)\leq m\left(\bigsqcup_{Y\in\mathscr{S}_{n}}Y\right)\) by Fact A.2 in Appendix A.
\[\sum_{A_{Y}\in\mathcal{A}_{n}}m(A_{Y})\geq\sum_{Y\in\mathscr{S}_{n}}m(Y)=m \left(\bigsqcup_{Y\in\mathscr{S}_{n}}Y\right)=m(S_{n}),\]
but this is not nearly good enough, because it just gives
\[\left\lceil\frac{\sum_{A_{Y}\in\mathcal{A}_{n}}m(A)}{m(S_{n}^{\prime})}\right\rceil \geq\left\lceil\frac{m(S_{n})}{m(S_{n}^{\prime})}\right\rceil\geq\left\lceil \frac{(2n)^{d}}{(2(n+\varepsilon))^{d}}\right\rceil=\left\lceil\left(\frac{n}{n +\varepsilon}\right)^{d}\right\rceil=1\]
whereas we want it \(\geq(1+2\varepsilon)^{d}\). Basically, this lower bound is terrible because we did not account for the fact that the elements of \(\mathcal{A}_{n}\) are enlarged from what they were in the partition \(\mathscr{S}_{n}\). Thus, we really want some way to give for each \(Y\in\mathscr{S}_{n}\), a lower bound on the measure of the enlarged set \(A_{Y}\). One might observe that enlarging with an \(\varepsilon\)-ball looks something like scaling by a factor of \(1+2\varepsilon\) (though it is not actually scaling3), and since the Lebesgue measure (i.e. typical notion of volume/measure in \(\mathbb{R}^{d}\)) has the property that scaling by \((1+2\varepsilon)\) increases the measure by a factor of \((1+2\varepsilon)^{d}\), we might be able to show that the measure of the enlarged version of each member increases by a factor of \((1+2\varepsilon)^{d}\) (which is exactly what we are looking to get).
Footnote 3: See for example the smallest (purple) member in the last pane of Figure 1 which is very circular, but upon enlarging looks more squarish. In fact, enlarging a single point by \(\varepsilon\) results in a hypercube of diameter \(2\varepsilon\).
This intuition holds, though the actual reason is not related to scaling, and is dependent on the members having measure at most \(1\). Rather, we use a specialized adaptation of the known Generalized Brunn-Minkowski Inequality (Theorem 4.5) to show that
\[m(A_{Y})\geq m(Y)\cdot(1+2\varepsilon)^{d} \tag{1}\]
holds4. Now that we have dealt with both issues that arise with trying to apply Proposition A.9, we can consider a fixed \(n\) and can continue. We proceed in two cases: (1) the interesting case in which \(\mathscr{S}_{n}\) has only countably many members, and (2) the nearly trivial case in which the partition \(\mathscr{S}_{n}\) contains uncountably many members. In case (1) we have
Footnote 4: Technically speaking, the set \(A_{Y}\) might not be measurable (though we suspect it is), so the expression may not even be a mathematically valid one to write down, but we deal with this technical detail later in the paper, and it really does not affect anything in this proof outline.
\[\left\lceil\frac{\sum_{A_{Y}\in\mathcal{A}_{n}}m(A_{Y})}{m(S_{n}^{\prime})}\right\rceil=\left\lceil\frac{\sum_{Y\in\mathscr{S}_{n}}m(A_{Y})}{m(S_{n}^{\prime})}\right\rceil\qquad\text{(Re-index)}\]
\[\geq\left\lceil\frac{\sum_{Y\in\mathscr{S}_{n}}\left[m(Y)\cdot(1+2\varepsilon)^{d}\right]}{m(S_{n}^{\prime})}\right\rceil\qquad\text{(Equation 1)}\]
\[=\left\lceil\frac{(1+2\varepsilon)^{d}\cdot\sum_{Y\in\mathscr{S}_{n}}m(Y)}{m(S_{n}^{\prime})}\right\rceil\qquad\text{(Linearity of summation)}\]
\[=\left\lceil\frac{(1+2\varepsilon)^{d}\cdot m\left(\bigsqcup_{Y\in\mathscr{S}_{n}}Y\right)}{m(S_{n}^{\prime})}\right\rceil\qquad\text{(Countable additivity of measures)}\]
\[=\left\lceil\frac{(1+2\varepsilon)^{d}\cdot m(S_{n})}{m(S_{n}^{\prime})}\right\rceil\qquad\left(S_{n}=\bigsqcup_{Y\in\mathscr{S}_{n}}Y\right)\]
\[=\left\lceil(1+2\varepsilon)^{d}\cdot\left(\frac{n}{n+\varepsilon}\right)^{d}\right\rceil\qquad\left(\tfrac{m(S_{n})}{m(S_{n}^{\prime})}=\left(\tfrac{n}{n+\varepsilon}\right)^{d}\text{ as above}\right)\]
Since \(\lim_{n\to\infty}\left(\frac{n}{n+\varepsilon}\right)^{d}=1\), then \(\lim_{n\to\infty}(1+2\varepsilon)^{d}\cdot\left(\frac{n}{n+\varepsilon}\right) ^{d}=(1+2\varepsilon)^{d}\), so because there is a ceiling
involved, we can take \(N\in\mathbb{N}\) to be large enough that \(\left\lceil(1+2\varepsilon)^{d}\cdot\left(\frac{N}{N+\varepsilon}\right)^{d}\right\rceil=\left\lceil(1+2\varepsilon)^{d}\right\rceil\) (see Fact 4.1 in Appendix C), so by Proposition A.9, there is a point \(\vec{p}\in S_{N}^{\prime}\) that is contained in at least \((1+2\varepsilon)^{d}\) many sets in \(\mathcal{A}_{N}\), and by our change of perspective, this point \(\vec{p}\) has the property that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \((1+2\varepsilon)^{d}\) many members of \(\mathcal{P}\).
In case (2) where some \(\mathscr{S}_{N}\) contains uncountably many members, then we completely ignore the lower bound for \(m(A_{Y})\) in Equation 1 because it might be that lots of members \(Y\) (possibly all of them) have measure \(0\), and so that bound only tells us that \(m(A_{Y})\geq 0\). Instead, we note that \(Y\) is at least non-empty, so contains at least one point \(\vec{y}\), and thus \(A_{Y}\supseteq B_{\infty}^{\circ}(\varepsilon,\vec{y})=\prod_{i=1}^{d}(y_{i}-\varepsilon,y_{i}+\varepsilon)\), and so \(m(A_{Y})\geq(2\varepsilon)^{d}\). Thus, \(\left\lceil\frac{\sum_{A_{Y}\in\mathcal{A}_{N}}m(A_{Y})}{m(S_{N}^{\prime})}\right\rceil=\infty\), so by Proposition A.9, there is a point \(\vec{p}\in S_{N}^{\prime}\) that is contained in at least \((1+2\varepsilon)^{d}\) many sets in \(\mathcal{A}_{N}\), and by our change of perspective, this point \(\vec{p}\) has the property that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \((1+2\varepsilon)^{d}\) many members of \(\mathcal{P}\).
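To make the statement concrete, the following toy Python experiment (ours, not part of the proof) takes the standard unit grid partition, counts how many half-open cells an open \(\ell_{\infty}\) ball meets, and compares with the guaranteed \((1+2\varepsilon)^{d}\); for the grid, a well-placed center in fact meets \(2^{d}\) cells:

```python
import math, random

def cells_met(p, eps):
    # number of half-open unit grid cells [i, i+1)^d that B_inf(eps, p) meets
    count = 1
    for x in p:
        a, b = x - eps, x + eps
        # integers i with [i, i+1) intersecting the open interval (a, b)
        per_axis = sum(1 for i in range(math.floor(a) - 1, math.ceil(b) + 1)
                       if i < b and i + 1 > a)
        count *= per_axis
    return count

random.seed(0)
d, eps = 6, 0.3
best = max(cells_met([random.uniform(0, 1) for _ in range(d)], eps)
           for _ in range(1000))
print(best, ">=", (1 + 2 * eps) ** d)  # typically 64 >= 16.78
```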
We now outline how Corollary 1.6 follows from the theorem. As mentioned in the introduction, for the \(\ell_{\infty}\) norm, a bound of \(D\) on the diameter of a set \(X\subseteq\mathbb{R}^{d}\) implies a bound of \(M=D^{d}\) on the outer measure of \(X\) because a special property of the \(\ell_{\infty}\) norm is that \(X\) is actually contained within some shift of \([0,D]^{d}\) which has measure \(M=D^{d}\) (see Fact A.1), so in particular, diameter at most \(1\) implies outer measure at most \(1\). This containment fact does not hold in general for other norms.
Nonetheless, for any norm, a diameter bound of \(D\) implies _some_ measure bound. In particular, we can place a ball of _radius_ \(D\) at any point in the set and know that the set is contained in that ball, showing that the outer measure is at most \(m(\overline{B}_{\|\cdot\|}(D,\vec{0}))\). However, we can get a better bound using the known Isodiametric Inequality (Theorem 4.9) which says that we can actually bound the measure by \(m(\overline{B}_{\|\cdot\|}(D/2,\vec{0}))\). In other words, while the containment property of the \(\ell_{\infty}\) norm does not hold for general norms, it at least holds _in spirit_; a set of diameter \(D\) might not be contained in any ball of diameter \(D\), but it can be cut up somehow to fit in the ball of diameter \(D\) (radius \(D/2\)), so it has no greater measure. With this inequality in hand and the specific values of the diameter that it gives, Corollary 1.6 follows from Theorem 1.5.
### Proof Outline of the Neighborhood Sperner/KKM/Lebesgue Theorem
The main idea behind the proof of Theorem 1.8 is the same as discussed in the previous subsection. For each color \(C\), let \(X_{C}\) be the set of points that are colored \(C\). We union an \(\varepsilon\)-ball at each point in \(X_{C}\) to obtain an enlarged version of \(X_{C}\). Now, as before, by the Continuous Multi-Pigeonhole Principle (Proposition A.9) there is a point that belongs to many of the enlarged sets.
However, there are some additional issues that arise on the unit cube that don't arise in \(\mathbb{R}^{d}\). In the discussion above, the enlarged sets were not contained in the original region (denoted \(S_{n}\) above) and we needed to consider a larger region (denoted \(S_{n}^{\prime}\) above) to contain them. In \(\mathbb{R}^{d}\) we could deal with this via a limiting argument so that the ratio of the volume change \(m(S_{n}^{\prime})/m(S_{n})\) tends to \(1\) (i.e. it became negligible when ceilings were involved). If one enlarges every color in a unit cube \([-\frac{1}{2},\frac{1}{2}]^{d}\) in the same way, the measure of each color is guaranteed to increase by a factor of \((1+2\varepsilon)^{d}\) as before, but also the smallest set that contains all of these sums is the cube \([-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\) which increased in measure by a factor of \((1+2\varepsilon)^{d}\) compared to the original cube, so nothing has been gained! Obviously there will be an overlap of the sums, but the bounds given by the generalized Brunn-Minkowski inequality tell us no additional information.
We deal with this by employing a trick of first extending the coloring directly to \([-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\) in a natural way that ensures each color is bounded away from the boundary so that we can perform
an enlargement using just one orthant of the \(\varepsilon\)-ball (instead of the whole ball) and still have the enlarged color set contained in \([-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\). This means we end up knowing that each color has increased in measure by at least a factor of \((1+\frac{2}{3}\varepsilon)^{d}\) and that the containing region has not changed in measure at all.
### Proof Outline of the Construction Result
The approach to the construction of new secluded partitions of \(\mathbb{R}^{d}\) is the following: (1) break up the coordinates and view \(\mathbb{R}^{d}\) as \(\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\times\cdots\times\mathbb{R}^{d_{n}}\) for some \(d_{1},d_{2},\ldots,d_{n}\) with \(d_{1}+d_{2}+\cdots+d_{n}=d\), (2) partition each \(\mathbb{R}^{d_{i}}\) independently into some \(\mathcal{P}_{i}\), and finally (3) combine the partitions into one partition \(\mathcal{P}\) of \(\mathbb{R}^{d}\) in a natural way by taking \(\mathcal{P}\) to be \(\{\prod_{i=1}^{n}X_{i}\colon X_{i}\in\mathcal{P}_{i}\}\). This is all described in Definition 6.3. This construction can be done with arbitrary partitions, but it is straightforward to argue that if each \(\mathcal{P}_{i}\) is \((k_{i},\varepsilon_{i})\)-secluded, then \(\mathcal{P}\) is \((k,\varepsilon)\)-secluded for \(k=\prod_{i=1}^{n}k_{i}\) and \(\varepsilon=\min_{i\in[n]}\varepsilon_{i}\). This is shown in Proposition 6.5.
We then simplify to the case where all \(d_{i}\) are equal to \(d^{\prime}\) or \(d^{\prime}-1\) for some natural number \(d^{\prime}\), and we analyze the construction where each \(\mathcal{P}_{i}\) is a \((d_{i}+1,\frac{1}{2d_{i}})\)-secluded unit cube partition (as in [20]) with \(d_{i}=d^{\prime}\) or \(d^{\prime}-1\). The resultant partition of \(\mathbb{R}^{d}\) is \((k,\varepsilon)\)-secluded for \(k=\prod_{i=1}^{n}(d_{i}+1)\leq(d^{\prime}+1)^{\lceil\frac{d}{d^{\prime}}\rceil}\) and \(\varepsilon=\min_{i\in[n]}\frac{1}{2d_{i}}=\frac{1}{2d^{\prime}}\). We use \(f:\mathbb{N}\to\mathbb{N}\) to prescribe \(d^{\prime}\) as a function of \(d\) to obtain Theorem 1.9.
A final note is that one does have to be careful to keep the ceiling function in this result because it keeps the exponent from becoming less than \(1\).
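The following minimal Python sketch illustrates the product construction; for concreteness the blocks are grid partitions (the paper's construction would instead use the \((d_{i}+1,\frac{1}{2d_{i}})\)-secluded partitions of [20] as blocks), and all names here are illustrative, not from the paper:

```python
import math
from typing import Callable, List, Sequence

# A partition of R^m is represented by a function mapping a point to the
# id of the member containing it.
PartitionFn = Callable[[Sequence[float]], tuple]

def grid_id(x: Sequence[float]) -> tuple:
    # member id in the (2^m, 1/2)-secluded unit grid partition of R^m
    return tuple(math.floor(c) for c in x)

def product_partition(dims: List[int], parts: List[PartitionFn]) -> PartitionFn:
    # combine partitions of R^{d_1}, ..., R^{d_n} into one of R^{d_1+...+d_n}:
    # the member containing x is the product of the members of its blocks
    def combined(x: Sequence[float]) -> tuple:
        ids, start = [], 0
        for m, part in zip(dims, parts):
            ids.append(part(x[start:start + m]))
            start += m
        return tuple(ids)
    return combined

P = product_partition([2, 3], [grid_id, grid_id])  # view R^5 as R^2 x R^3
print(P([0.5, 1.7, -0.2, 3.1, 2.9]))  # ((0, 1), (-1, 3, 2))
```

The degree multiplies across blocks while the tolerance is the minimum over blocks, which is exactly the trade-off exploited in Theorem 1.9.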
### Proof Outline of the No-Free-Lunch Theorem
We use Corollary 1.6 to prove Theorem 1.10. Since \(A\) is a deterministic algorithm, it can be viewed as a function \(A:\mathbb{R}^{d}\to\mathbb{R}^{d}\). Such a function induces a partition of its domain by considering points to be equivalent if they are mapped to the same value. We can show that because \(A\) gives \(\varepsilon\)-approximations (with respect to a norm \(\left\lVert\cdot\right\rVert\)), each member of the partition has diameter at most \(2\varepsilon\) (with respect to \(\left\lVert\cdot\right\rVert\)). By Corollary 1.6, this means that there is some point \(\vec{p}\) in the domain \(\mathbb{R}^{d}\) such that \(B_{\left\lVert\cdot\right\rVert}^{\circ}(\frac{\varepsilon_{0}}{2},\vec{p})\) intersects at least \(\left(1+\frac{2(\varepsilon_{0}/2)}{2\varepsilon}\right)^{d}=\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)^{d}\) members of the partition. In other words, there is a point \(\vec{p}\) and a set of at least \(\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)^{d}\) points in \(B_{\left\lVert\cdot\right\rVert}^{\circ}(\frac{\varepsilon_{0}}{2},\vec{p})\) such that \(A\) maps each of these points to a different value.
Then we consider one specific problem and approximation algorithm and prove that \(A\) can't perform very well on it. Let \(f:\Lambda\to\mathbb{R}^{d}\) be such that there is \(\lambda\in\Lambda\) with \(f(\lambda)=\vec{p}\). Then we consider an \((\varepsilon_{0},\delta)\)-approximation algorithm \(B\) for this function \(f\) which has the specific property that the output distribution of \(B(\lambda)\) is uniform over the \(\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)^{d}\) points above (which all \(\varepsilon_{0}\)-approximate \(f(\lambda)=\vec{p}\)). We show that in order for \(A\circ B\) to be \((k,\delta)\)-pseudodeterministic requires very large \(k\) because \(B\) is distributing uniformly over a very large set. Specifically, \(k\geq(1-\delta)\cdot\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)^{d}\). Finally, we rearrange this expression to solve for \(\varepsilon\) and use the approximation \(\ln(1+x)\geq\frac{x}{2}\) for small \(x\), and the fact that \((1-\delta)\geq\frac{1}{2}\) to arrive at the stated lower bound on \(\varepsilon\).
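The final rearrangement can be checked numerically; the sketch below (ours, illustrative only) solves \(k\geq(1-\delta)(1+\frac{\varepsilon_{0}}{2\varepsilon})^{d}\) exactly for \(\varepsilon\) and confirms it dominates the stated bound \(\varepsilon_{0}\cdot\frac{d}{4\ln(2k)}\) for sample parameters:

```python
import math

def exact_eps_lower_bound(k, d, eps0, delta):
    # from k >= (1-delta) * (1 + eps0/(2*eps))^d, solve for eps exactly
    return eps0 / (2 * ((k / (1 - delta)) ** (1 / d) - 1))

for d in [50, 500]:
    k, eps0, delta = d ** 2, 1.0, 0.5
    exact = exact_eps_lower_bound(k, d, eps0, delta)
    stated = eps0 * d / (4 * math.log(2 * k))
    print(f"d={d:4d}  exact eps bound={exact:.3f}  stated bound={stated:.3f}")
    assert exact >= stated  # the stated bound is a (weaker) valid lower bound
```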
Figure 1: In the first pane, we have a partition of \(\mathbb{R}^{2}\). In the second pane, we show that we consider only the members of the partition that intersect \(\left[-n,n\right]^{2}\), and in the third pane we show the partition that \(\mathcal{P}\) induces on \(\left[-n,n\right]^{2}\). In the fourth pane, we consider enlarging each member by placing an \(\varepsilon\)-ball at each point of the member and show that these enlarged members are still contained within \(\left[-(n+\varepsilon),n+\varepsilon\right]^{2}\). In the fifth pane, we see all of the expanded members and observe that the sum of the areas of the enlarged members is “significantly” more than the area of \(\left[-n,n\right]^{2}\).
## 3 Notation
The following is a list of some of the notation we will use in this paper.
* We use \(\mathbb{N}\) to denote the natural numbers starting with \(1\).
* We continue to use \(\overline{B}_{\|\cdot\|}(\varepsilon,\vec{p})\), \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{p})\), \(\overline{B}_{\infty}(\varepsilon,\vec{p})\), and \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) as before.
* For two sets \(A,B\subseteq\mathbb{R}^{d}\) we write \(A+B\) to represent the Minkowski sum \(A+B\stackrel{{\mathrm{def}}}{{=}}\left\{\vec{a}+\vec{b}\colon \vec{a}\in A,\ \vec{b}\in B\right\}\). We also may write \(\vec{a}+B\) to mean \(\left\{\vec{a}+\vec{b}\colon\vec{b}\in B\right\}\) for some fixed vector \(\vec{a}\).
* We will use \(v_{\|\cdot\|,d}\) to represent the Borel/Lebesgue measure of the unit radius ball in \(\mathbb{R}^{d}\) with respect to a general norm \(\|\cdot\|\). This is a normalization factor that appears in some results.
## 4 Lower Bound on the Degree Parameter
In this section, we present a complete proof of Theorem 1.5. We begin with some prerequisite results in Subsection 4.1. Then, in Subsection 4.2, we present the proof of Theorem 1.5. We follow this immediately by proving the two consequences mentioned in the introduction: Corollary 1.6 and Theorem 1.4.
### Prerequisite Results
In this section, we will deal with arbitrary norms of \(\mathbb{R}^{d}\). We point out the well-known fact that all norms on \(\mathbb{R}^{d}\) are equivalent in the sense that they all generate the same topology on \(\mathbb{R}^{d}\). Given two norms \(\|\cdot\|^{a}\) and \(\|\cdot\|^{b}\) on \(\mathbb{R}^{d}\), there exist fixed constants \(c_{d},C_{d}\in(0,\infty)\) such that for all vectors \(\vec{x}\in\mathbb{R}^{d}\), it holds that \(c_{d}\|\vec{x}\|^{a}\leq\|\vec{x}\|^{b}\leq C_{d}\|\vec{x}\|^{a}\). Thus the collection of open sets in \(\mathbb{R}^{d}\) is the same no matter which norm we are using. This also means that the Borel and Lebesgue \(\sigma\)-algebras on \(\mathbb{R}^{d}\) are the same no matter which norm is used, and thus balls with respect to any norm on \(\mathbb{R}^{d}\) are measurable.
We begin with four simple facts. All have straightforward proofs which we provide in Appendix C and Appendix B. The first fact will later allow us to pass a result through a limit since the answer will be an integer.
**Fact 4.1**.: _For any \(\alpha\in\mathbb{R}\), there exists \(\gamma\in\mathbb{R}\) such that \(\gamma<\alpha\) and \(\lceil\gamma\rceil=\lceil\alpha\rceil\)._
The next fact says that the Minkowski sum of a set \(X\) and an open ball at the origin can be viewed as a union of open balls positioned at each point of \(X\).
**Fact 4.2**.: _For any normed vector space, given a set \(X\) and \(\varepsilon\in[0,\infty)\), then_
\[X+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})=\bigcup_{\vec{x}\in X}B^{\circ }_{\|\cdot\|}(\varepsilon,\vec{x}).\]
_The same can be said replacing open balls with closed balls._
The third fact says that we can decompose a ball into a Minkowski sum of two smaller balls.
**Fact 4.3**.: _For any normed vector space, and any \(\alpha,\beta\in(0,\infty)\), it holds that \(B^{\circ}_{\|\cdot\|}(\alpha,\vec{0})+B^{\circ}_{\|\cdot\|}(\beta,\vec{0})=B^ {\circ}_{\|\cdot\|}(\alpha+\beta,\vec{0})\)._
The fourth fact, while also very simple, is the key change of perspective that allowed us to prove the main results of this section. It says that if we are checking which sets \(X\) in our partition intersect an \(\varepsilon\)-ball located at \(\vec{p}\) (in order to see how many there are), we can instead enlarge each member of the partition by taking its Minkowski sum with the origin-centered \(\varepsilon\)-ball, and check which of these enlarged members contain the point \(\vec{p}\).
**Fact 4.4**.: _For any normed vector space, for any set \(X\), for any vector \(\vec{p}\), and any \(\varepsilon>0\), the following are equivalent:_
1. \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{p})\cap X\neq\emptyset\)__
2. \(\vec{p}\in X+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})\)__
_The same can be said replacing both open balls with closed balls._
Now we introduce the result which is the connection to the above-mentioned key change of perspective. The result says to consider a bounded, (measurable) subset \(S\subseteq\mathbb{R}^{d}\) (so it has finite measure) and a collection \(\mathcal{A}\) of (measurable) subsets of \(S\). If we compute the sum of measures of all members in the collection \(\mathcal{A}\) (i.e. intuitively the total volume that they take up), and compare this to the measure/volume of \(S\), then whatever this ratio is, we can find a point in \(S\) covered by that many members of the collection \(\mathcal{A}\). For example, in the simplest case that the total measure of members of \(\mathcal{A}\) is larger than the measure of \(S\), then there is no way for all of the members of \(\mathcal{A}\) to be disjoint, so there has to be some point covered by two members. This simple case can be viewed as a continuous version of the pigeonhole principle.
In the more generic case, this result should be intuitively true by an averaging argument: if every point of \(S\) is covered only \(n\) times, then the total measure of members in \(\mathcal{A}\) is at most \(n\cdot m(S)\), so if the ratio of total measure in \(\mathcal{A}\) to the measure of \(S\) is large, then \(n\) must also be large. This more general version is a sort of continuous multi-pigeonhole principle.
**Proposition A.9** (Continuous Multi-Pigeonhole Principle).: _Let \(d\in\mathbb{N}\) and \(S\subseteq\mathbb{R}^{d}\) be bounded and measurable. Let \(\mathcal{A}\) be a family of measurable subsets of \(S\), and let \(k=\left\lceil\frac{\sum_{A\in\mathcal{A}}m(A)}{m(S)}\right\rceil\). Then if \(k<\infty\), there exists \(\vec{p}\in S\) such that \(\vec{p}\) belongs to at least \(k\) members of \(\mathcal{A}\). (And if \(k=\infty\), then for any \(n\in\mathbb{N}\) there exists \(\vec{p}^{(n)}\in S\) such that \(\vec{p}^{(n)}\) belongs to at least \(n\) members of \(\mathcal{A}\).)_
While this result may be intuitive, proving it formally does require some effort. We first encountered this result as the main ingredient in the standard proof of Blichfeldt's theorem (which was the source of motivation for our main technique), but many of the sources we found where proofs of Blichfeldt's theorem are presented did not prove the result above except in special cases, so for convenience and completeness, we provide a proof in Appendix A in three parts: Lemma A.4, Lemma A.5, and Corollary A.6.
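A one-dimensional toy version of the statement is easy to check by a sweep over interval endpoints; the following sketch (ours, purely illustrative, and of course not a substitute for the measure-theoretic proof) verifies it for a hand-picked family of intervals in \(S=[0,1]\):

```python
import math

def max_coverage(intervals):
    # sweep-line: maximum number of intervals covering a single point
    events = []
    for a, b in intervals:
        events += [(a, +1), (b, -1)]
    events.sort()
    depth = best = 0
    for _, delta in events:
        depth += delta
        best = max(best, depth)
    return best

intervals = [(0.0, 0.9), (0.1, 0.7), (0.3, 1.0), (0.2, 0.4), (0.05, 0.95)]
total = sum(b - a for a, b in intervals)
# Proposition A.9 with S = [0,1]: some point lies in >= ceil(total/1) sets
print(max_coverage(intervals), ">=", math.ceil(total / 1.0))  # 5 >= 4
```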
The next ingredient that we need is a way to measure how large the Minkowski sum in Fact 4.4 is. In order to utilize Proposition A.9 we need a lower bound on the measures, and we can obtain one using the generalized Brunn-Minkowski inequality stated below.
**Theorem 4.5** (Generalized Brunn-Minkowski Inequality).: _Let \(d\in\mathbb{N}\) and \(A,B\subseteq\mathbb{R}^{d}\) be Lebesgue measurable such that \(A+B\) is also Lebesgue measurable. Then_
\[m(A+B)\geq\left[m(A)^{\frac{1}{d}}+m(B)^{\frac{1}{d}}\right]^{d}.\]
This version of the statement can be obtained from [1, Equation 11]; in that survey, Gardner states this theorem with a requirement that the sets be bounded, but in the following paragraph notes that this is not necessary and the requirement is only stated for convenience of the presentation in that survey.
In the theorem, the requirement that \(A+B\) is Lebesgue measurable is not a triviality; Gardner discusses that there exist known Lebesgue measurable sets \(A\) and \(B\) such that the Minkowski sum \(A+B\) is not Lebesgue measurable as shown in [10]. The next result gives us a way to circumvent this issue in our application even if the members of our partition are not measurable by taking \(B\) to be an open set so that the sum \(A+B\) is open (and thus measurable), and using the outer measure of \(A\) so that we don't need the assumption that \(A\) is measurable.
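For axis-aligned boxes the inequality can be verified directly, since the Minkowski sum of boxes is again a box; the following sketch (ours, illustrative only) checks Theorem 4.5 numerically in this special case:

```python
import math, random

# A = prod [0, a_i], B = prod [0, b_i], so A + B = prod [0, a_i + b_i];
# Brunn-Minkowski for boxes reduces to an AM-GM-type inequality.
random.seed(1)
d = 4
a = [random.uniform(0.1, 2.0) for _ in range(d)]
b = [random.uniform(0.1, 2.0) for _ in range(d)]
mA, mB = math.prod(a), math.prod(b)
mAB = math.prod(x + y for x, y in zip(a, b))
print(mAB, ">=", (mA ** (1 / d) + mB ** (1 / d)) ** d)
assert mAB >= (mA ** (1 / d) + mB ** (1 / d)) ** d - 1e-12
```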
**Lemma 4.6**.: _Let \(d\in\mathbb{N}\) and let \(\mathbb{R}^{d}\) be equipped with any norm \(\left\lVert\cdot\right\rVert\). Let \(Y\subseteq\mathbb{R}^{d}\), and \(\varepsilon\in(0,\infty)\). Then \(Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})\) is open (and thus Borel measurable), and \(m(Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0}))\geq\left(m_{out}(Y)^{\frac{1}{d}}+\varepsilon\cdot(v_{\|\cdot\|,d})^{\frac{1}{d}}\right)^{d}\)._
Proof.: By Fact 4.2, for any \(\varepsilon^{\prime\prime}\in(0,\infty)\), \(Y+B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime\prime},\vec{0})=\bigcup_{\vec{y}\in Y}B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime\prime},\vec{y})\) which is a union of open sets, so is itself open and thus Borel measurable. Now, for any \(\varepsilon^{\prime}\in(0,\varepsilon)\), observe that by Fact 4.3, \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})=B^{\circ}_{\|\cdot\|}(\varepsilon-\varepsilon^{\prime},\vec{0})+B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime},\vec{0})\), and this sum is measurable because it is an open ball. Using this equality and the associativity of the Minkowski sum, we have

\[Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})=Y+\left[B^{\circ}_{\|\cdot\|}(\varepsilon-\varepsilon^{\prime},\vec{0})+B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime},\vec{0})\right]=\left[Y+B^{\circ}_{\|\cdot\|}(\varepsilon-\varepsilon^{\prime},\vec{0})\right]+B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime},\vec{0}).\]
Thus, we have the following chain of inequalities (each justified after it is stated):
\[m\left(Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})\right)=m\left(\left[Y+B^{\circ}_{\|\cdot\|}(\varepsilon-\varepsilon^{\prime},\vec{0})\right]+B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime},\vec{0})\right)\qquad\text{(Open, measurable, equality above)}\]
\[\geq\left(m\left(Y+B^{\circ}_{\|\cdot\|}(\varepsilon-\varepsilon^{\prime},\vec{0})\right)^{\frac{1}{d}}+m\left(B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime},\vec{0})\right)^{\frac{1}{d}}\right)^{d}\]

The above comes from the Generalized Brunn-Minkowski Inequality (Theorem 4.5), noting that as demonstrated above, the terms of the sum \(\left[Y+B^{\circ}_{\|\cdot\|}(\varepsilon-\varepsilon^{\prime},\vec{0})\right]\) and \(B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime},\vec{0})\) are open and thus measurable, and the same holds for the sum itself \(\left(Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})\right)\). We continue.
\[\geq\left(m_{out}\left(Y\right)^{\frac{1}{d}}+m\left(B^{\circ}_{\|\cdot\|}(\varepsilon^{\prime},\vec{0})\right)^{\frac{1}{d}}\right)^{d}\]

The above inequality comes from the definition of the outer measure of \(Y\) being the infimum of the measures of all measurable supersets of \(Y\). Since \(Y\subseteq Y+B^{\circ}_{\|\cdot\|}(\varepsilon-\varepsilon^{\prime},\vec{0})\), we get the inequality above. Continuing, we have the following:
\[=\left(m_{out}\left(Y\right)^{\frac{1}{d}}+m\left(\varepsilon^{\prime}\cdot B^{\circ}_{\|\cdot\|}(1,\vec{0})\right)^{\frac{1}{d}}\right)^{d}\qquad\text{(Scaling of norm-based balls)}\]
\[=\left(m_{out}\left(Y\right)^{\frac{1}{d}}+\left[(\varepsilon^{\prime})^{d}\cdot m\left(B^{\circ}_{\|\cdot\|}(1,\vec{0})\right)\right]^{\frac{1}{d}}\right)^{d}\qquad\text{(Scaling for Lebesgue measure)}\]
\[=\left(m_{out}\left(Y\right)^{\frac{1}{d}}+\varepsilon^{\prime}\cdot(v_{\|\cdot\|,d})^{\frac{1}{d}}\right)^{d}\qquad\left(\text{Algebra and }v_{\|\cdot\|,d}\stackrel{{\mathrm{def}}}{{=}}m\left(B^{\circ}_{\|\cdot\|}(1,\vec{0})\right)\right)\]
Since the inequality above holds for all \(\varepsilon^{\prime}\in(0,\varepsilon)\), it must also hold in the limit (keeping \(d\) and \(Y\) fixed):

\[m\left(Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})\right)\geq\lim_{\varepsilon^{\prime}\to\varepsilon}\left[\left(m_{out}\left(Y\right)^{\frac{1}{d}}+\varepsilon^{\prime}\cdot(v_{\|\cdot\|,d})^{\frac{1}{d}}\right)^{d}\right]=\left(m_{out}\left(Y\right)^{\frac{1}{d}}+\varepsilon\cdot(v_{\|\cdot\|,d})^{\frac{1}{d}}\right)^{d}\]
which concludes the proof.
At this point we are nearly in a position to prove the main result of this section, but we need one last result which gives an inequality that we will compose with the Generalized Brunn-Minkowski Inequality (Theorem 4.5). The result below can be interpreted as saying that for appropriate parameters, we can essentially factor out the "\(x\)" in \((x^{1/d}+\alpha)^{d}\) to obtain the (no larger) expression \(x(1+\alpha)^{d}\).
**Lemma 4.7**.: _For \(d\in[1,\infty)\), \(x\in[0,1]\), and \(\alpha\in[0,\infty)\), it holds that \((x^{1/d}+\alpha)^{d}\geq x(1+\alpha)^{d}\)._
A proof of the inequality using standard calculus techniques can be found in Appendix C.
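A brute-force numerical check of the lemma (ours, illustrative only, and not a substitute for the Appendix C proof) is immediate:

```python
import random

# Check (x^(1/d) + alpha)^d >= x * (1 + alpha)^d over random parameters
# with d >= 1, x in [0,1], alpha >= 0, allowing a small float tolerance.
random.seed(2)
for _ in range(100_000):
    d = random.uniform(1, 20)
    x = random.uniform(0, 1)
    alpha = random.uniform(0, 10)
    assert (x ** (1 / d) + alpha) ** d >= x * (1 + alpha) ** d - 1e-9
print("no counterexample found")
```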
### Proofs of Theorem 1.5, Corollary 1.6, and Theorem 1.4
Now we restate and prove the main result.
**Theorem 1.5** (\(\varepsilon\)-Neighborhoods for Measure Bounded Partitions and Arbitrary Norm).: _Let \(d\in\mathbb{N}\), \(M\in(0,\infty)\), and \(\mathcal{P}\) a partition of \(\mathbb{R}^{d}\) such that every member has outer Lebesgue measure at most \(M\). Let \(\mathbb{R}^{d}\) be equipped with any norm \(\left\lVert\cdot\right\rVert\). For every \(\varepsilon\in(0,\infty)\), there exists \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\left\lVert\cdot\right\rVert}(\varepsilon,\vec{p})\) intersects at least \(k=\left\lceil\left(1+\varepsilon\left(\frac{v_{\left\lVert\cdot\right\rVert,d}}{M}\right)^{1/d}\right)^{d}\right\rceil\) members of the partition where \(v_{\left\lVert\cdot\right\rVert,d}\stackrel{{\mathrm{def}}}{{=}}m\left(B^{\circ}_{\left\lVert\cdot\right\rVert}(1,\vec{0})\right)\) is the measure of the \(\left\lVert\cdot\right\rVert\) unit ball._
Proof.: Throughout the proof, all lengths will be with respect to \(\left\|\cdot\right\|\), so we will eliminate the clutter by neglecting to use the \(\left\|\cdot\right\|\) subscript anywhere in the proof. We will also be working in a single dimension \(d\) throughout the proof, so we also drop the \(d\) subscript from \(v\) throughout.
Consider the following for each \(n\in\mathbb{N}\). Let \(S_{n}=B^{\circ}(n,\vec{0})\) and \(S_{n}^{\prime}=B^{\circ}(n+\varepsilon,\vec{0})=S_{n}+B^{\circ}(\varepsilon,\vec{0})\) and \(\mathscr{S}_{n}\) be the partition of \(S_{n}\) induced by \(\mathcal{P}\). For each \(Y\in\mathscr{S}_{n}\), let \(C_{Y}=Y+B^{\circ}(\varepsilon,\vec{0})\). By Lemma 4.6, \(C_{Y}\) is measurable, and \(m(C_{Y})\geq\left(m_{out}\left(Y\right)^{\frac{1}{d}}+\varepsilon\cdot v^{\frac{1}{d}}\right)^{d}\). Also observe that \(C_{Y}\subseteq S_{n}^{\prime}\). Now consider
the following inequalities:
\[m(C_{Y})\geq\left(m_{out}\left(Y\right)^{\frac{1}{d}}+\varepsilon\cdot v^{\frac{1}{d}}\right)^{d}\qquad\text{(Above)}\]
\[=\left(M^{1/d}\left[\frac{m_{out}\left(Y\right)^{\frac{1}{d}}}{M^{1/d}}+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right]\right)^{d}\qquad\text{(Algebra)}\]
\[=M\left(\left[\frac{m_{out}\left(Y\right)}{M}\right]^{\frac{1}{d}}+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\qquad\text{(Algebra)}\]
\[\geq M\cdot\frac{m_{out}\left(Y\right)}{M}\cdot\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\qquad\left(\text{Lemma 4.7 since }\tfrac{m_{out}(Y)}{M}\in[0,1]\right)\]
\[=m_{out}\left(Y\right)\cdot\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\qquad\text{(Simplify)}\]
Informally, the above shows that for each \(Y\in\mathscr{S}_{n}\), the set \(Y+B^{\circ}(\varepsilon,\vec{0})\) has substantially more (outer) measure than \(Y\) does--specifically by a factor of \(\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\). We will extend this to show that the family \(\left\{Y+B^{\circ}(\varepsilon,\vec{0})\right\}_{Y\in\mathscr{S}_{n}}\) also has this same factor more (outer) measure than \(\mathscr{S}_{n}\) does, observing that \(\mathscr{S}_{n}\) has total (outer) measure of about \(m(S_{n})\) since \(\mathscr{S}_{n}\) is a partition of \(S_{n}\) (any discrepancy is due to non-measurable members of \(\mathscr{S}_{n}\)).
Formally, we claim that there exists a finite subfamily \(\mathcal{F}_{n}\subseteq\mathscr{S}_{n}\) such that
\[\sum_{Y\in\mathcal{F}_{n}}m\left(Y+B^{\circ}(\varepsilon,\vec{0})\right)\geq \left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot m(S_{ n}).\]
To see this, first consider the case that \(\mathscr{S}_{n}\) has infinite cardinality. Let \(\mathcal{F}_{n}\subset\mathscr{S}_{n}\) be any subfamily of finite cardinality at least \(\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot m(S_ {n})\cdot\frac{1}{\varepsilon^{d}v}\). This gives
\[\sum_{Y\in\mathcal{F}_{n}}m\left(Y+B^{\circ}(\varepsilon,\vec{0})\right)\geq \sum_{Y\in\mathcal{F}_{n}}m\left(B^{\circ}(\varepsilon,\vec{0})\right)\]
where this inequality is because \(Y+B^{\circ}(\varepsilon,\vec{0})\) is a superset of some translation of \(B^{\circ}(\varepsilon,\vec{0})\) since \(Y\neq\emptyset\). Continuing, we use the standard fact that \(m\left(B^{\circ}(\varepsilon,\vec{0})\right)=m\left(\varepsilon\cdot B^{\circ} (1,\vec{0})\right)=\varepsilon^{d}\cdot m\left(B^{\circ}(1,\vec{0})\right)= \varepsilon^{d}v\):
\[\geq\sum_{Y\in\mathcal{F}_{n}}\varepsilon^{d}v\] \[=\left[\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}} \right)^{d}\cdot m(S_{n})\cdot\frac{1}{\varepsilon^{d}v}\right]\cdot \varepsilon^{d}v\] (Cardinality of \[\mathcal{F}_{n}\] ) \[=\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^ {d}\cdot m(S_{n}).\] (Simplify)
Now consider the other (more interesting) case where \(\mathscr{S}_{n}\) has finite cardinality6. Take \(\mathcal{F}_{n}=\mathscr{S}_{n}\) so that

Footnote 6: In fact this case also works if \(\mathscr{S}_{n}\) is countable even though we have already dealt with that case.

\[\sum_{Y\in\mathcal{F}_{n}}m\left(Y+B^{\circ}(\varepsilon,\vec{0})\right)=\sum_{Y\in\mathscr{S}_{n}}m\left(Y+B^{\circ}(\varepsilon,\vec{0})\right)\qquad\left(\mathcal{F}_{n}=\mathscr{S}_{n}\right)\]
\[\geq\sum_{Y\in\mathscr{S}_{n}}m_{out}\left(Y\right)\cdot\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\qquad\text{(Above)}\]
\[=\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot\sum_{Y\in\mathscr{S}_{n}}m_{out}\left(Y\right)\qquad\text{(Linearity of summation)}\]
\[\geq\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot m_{out}\left(\bigcup_{Y\in\mathscr{S}_{n}}Y\right)\]
where the above inequality is due to the countable subadditivity property of outer measures which states that a countable sum of outer measures of sets is at least as large as the outer measure of the union of the sets. In the last step we get

\[=\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot m\left(S_{n}\right)\qquad\left(\bigsqcup_{Y\in\mathscr{S}_{n}}Y=S_{n}\text{ is measurable}\right)\]

Thus, regardless of whether \(\mathscr{S}_{n}\) has infinite or finite cardinality, there exists a finite subfamily \(\mathcal{F}_{n}\subseteq\mathscr{S}_{n}\) such that
\[\sum_{Y\in\mathcal{F}_{n}}m\left(Y+B^{\circ}(\varepsilon,\vec{0})\right)\geq \left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot m(S_ {n}).\]
Fix such a subfamily \(\mathcal{F}_{n}\), and let \(\mathcal{A}_{n}=\left\{Y+B^{\circ}(\varepsilon,\vec{0})\right\}_{Y\in\mathcal{F}_{n}}\) be a family indexed by \(\mathcal{F}_{n}\). Note that for each \(A_{Y}\stackrel{{\mathrm{def}}}{{=}}Y+B^{\circ}(\varepsilon,\vec{0})\in\mathcal{A}_{n}\), since \(Y\subseteq S_{n}=B^{\circ}(n,\vec{0})\), we have \(A_{Y}\subseteq S_{n}+B^{\circ}(\varepsilon,\vec{0})=S_{n}^{\prime}\). Thus, by Corollary A.6, there is a point \(\vec{p}^{(n)}\in S_{n}^{\prime}\) which belongs to at least \(k_{n}\)-many sets in \(\mathcal{A}_{n}\)
where
\[k_{n}\stackrel{{\mathrm{def}}}{{=}}\left\lceil\frac{\sum_{Y\in\mathcal{F}_{n}}m\left(Y+B^{\circ}(\varepsilon,\vec{0})\right)}{m(S^{\prime}_{n})}\right\rceil\geq\frac{\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot m\left(S_{n}\right)}{m(S^{\prime}_{n})}\qquad\text{(Above)}\]
\[=\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot\frac{m\left(B^{\circ}(n,\vec{0})\right)}{m\left(B^{\circ}(n+\varepsilon,\vec{0})\right)}\qquad\left(\text{Def'n of }S_{n}\text{ and }S^{\prime}_{n}\right)\]
\[=\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot\frac{n^{d}\cdot v}{(n+\varepsilon)^{d}\cdot v}\qquad\text{(Standard scaling fact)}\]
\[=\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot\left(\frac{n}{n+\varepsilon}\right)^{d}.\qquad\text{(Simplify)}\]
Since \(\vec{p}^{(n)}\) belongs to at least \(k_{n}\)-many sets in \(\mathcal{A}_{n}\), this by definition means that there are at least \(k_{n}\)-many sets \(Y\in\mathcal{F}_{n}\) such that \(\vec{p}^{(n)}\in Y+B^{\circ}(\varepsilon,\vec{0})\), so by Fact 4.4, we have \(Y\cap B^{\circ}(\varepsilon,\vec{p}^{(n)})\neq\emptyset\). For each such \(Y\) (since \(Y\in\mathcal{F}_{n}\subseteq\mathscr{S}_{n}\)), there is a distinct9 \(X_{Y}\in\mathcal{P}\) such that \(Y\subseteq X_{Y}\) and thus \(X_{Y}\cap B^{\circ}(\varepsilon,\vec{p}^{(n)})\neq\emptyset\), showing that there are at least \(k_{n}\)-many sets in \(\mathcal{P}\) which intersect \(B^{\circ}(\varepsilon,\vec{p}^{(n)})\).
Footnote 9: I.e. for \(Y\neq Y^{\prime}\in\mathscr{S}_{n}\) we have that \(X_{Y},X_{Y^{\prime}}\in\mathcal{P}\) and \(X_{Y}\neq X_{Y^{\prime}}\) so this mapping of \(Y\)’s to \(X\)’s is injective, so we have at least the same cardinality of \(X\)’s with the desired property as \(Y\)’s with the desired property.
For the last step of the proof, we perform a limiting process on \(n\). By Fact 4.1, let \(\gamma\in\mathbb{R}\) such that \(\gamma<\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\) and \(\lceil\gamma\rceil=\left\lceil\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d} }}{M^{1/d}}\right)^{d}\right\rceil\). Then, because
\[\lim_{n\to\infty}k_{n}\geq\lim_{n\to\infty}\left(1+\frac{\varepsilon\cdot v^{ \frac{1}{d}}}{M^{1/d}}\right)^{d}\cdot\left(\frac{n}{n+\varepsilon}\right)^{d }=\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}>\gamma,\]
we can take \(N\in\mathbb{N}\) sufficiently large so that
\[k_{N}\geq\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d} \cdot\left(\frac{N}{N+\varepsilon}\right)^{d}>\gamma.\]
Then considering the point \(\vec{p}^{(N)}\), we have by the choice of \(\gamma\) and the fact that \(k_{N}\) is an integer that
\[k_{N}=\lceil k_{N}\rceil\geq\lceil\gamma\rceil=\left\lceil\left(1+\frac{ \varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\right\rceil\]
showing that \(B^{\circ}(\varepsilon,\vec{p}^{(N)})\) intersects at least \(k_{N}\geq\left\lceil\left(1+\frac{\varepsilon\cdot v^{\frac{1}{d}}}{M^{1/d}}\right)^{d}\right\rceil\) members of \(\mathcal{P}\) as claimed, which completes the proof.
**Remark 4.8**.: _If one carefully follows the proof above with minor modification, one can show that if there is any bounded set \(S\subseteq\mathbb{R}^{d}\) that intersects infinitely many elements of \(\mathcal{P}\), then for each \(i\in\mathbb{N}_{0}\), there is a point \(\vec{p}^{(i)}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{p}^{(i)})\) intersects at least \(i\) members of \(\mathcal{P}\). To see this, take \(n_{i}\in\mathbb{N}\) large enough so that \(S_{n_{i}}\supseteq S\), and then note that \(\mathscr{S}_{n_{i}}\) has infinite cardinality, so take \(\mathcal{F}_{n_{i}}\) to be a large enough subfamily of \(\mathscr{S}_{n_{i}}\) that \(|\mathcal{F}_{n_{i}}|\cdot\varepsilon^{d}v\) exceeds \(i\cdot m(S^{\prime}_{n_{i}})\). Then \(\sum_{Y\in\mathcal{F}_{n_{i}}}m\left(Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})\right)\geq|\mathcal{F}_{n_{i}}|\cdot\varepsilon^{d}v>i\cdot m(S^{\prime}_{n_{i}})\), so there is some point \(\vec{p}^{(i)}\) that belongs to \(Y+B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{0})\) for at least \(i\)-many sets \(Y\in\mathscr{S}_{n_{i}}\), so that \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{p}^{(i)})\) intersects \(X_{Y}\) for at least \(i\)-many sets \(X_{Y}\in\mathcal{P}\)._

_However, one can do better than this. If there is some bounded set \(S\subseteq\mathbb{R}^{d}\) that intersects infinitely many members of \(\mathcal{P}\), then \(\overline{S}\) is also bounded because the diameter is no larger. Consider the standard open cover \(\left\{B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{x})\right\}_{\vec{x}\in\overline{S}}\) of \(\overline{S}\). Since \(\overline{S}\) is closed and bounded, by the Heine-Borel theorem there is a finite subcover \(\mathcal{C}\subseteq\left\{B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{x})\right\}_{\vec{x}\in\overline{S}}\) of \(\overline{S}\). Since \(\overline{S}\) intersects infinitely many sets in \(\mathcal{P}\), the fact that \(\mathcal{C}\) has finite cardinality implies one of the \(\varepsilon\)-balls in \(\mathcal{C}\) must intersect infinitely many members of \(\mathcal{P}\). Thus, for each \(\varepsilon\in(0,\infty)\), there is in fact some \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{p})\) intersects infinitely many members of \(\mathcal{P}\)._
With the main result now proven, we introduce one last tool which will allow us to convert the measure bound hypothesis of Theorem 1.5 to a diameter bound hypothesis instead. The isodiametric inequality (given below) states that no bounded set has greater measure than the ball of the same diameter. Thus, an upper bound on the diameters of members (in any norm) immediately gives an upper bound on the measures of the members.
**Theorem 4.9** (Isodiametric Inequality).: _Let \(d\in\mathbb{N}\) and \(\|\cdot\|\) be any norm on \(\mathbb{R}^{d}\). Let \(A\subseteq\mathbb{R}^{d}\) be a bounded set. Then the outer Lebesgue measure of \(A\) is at most the Lebesgue measure of the ball of the same diameter. That is, (in three equivalent forms) if \(\operatorname{diam}_{\|\cdot\|}(A)=D\), then_

\[m_{out}(A)\leq m\left(B^{\circ}_{\|\cdot\|}(D/2,\vec{0})\right)=\left(\tfrac{D}{2}\right)^{d}\cdot m\left(B^{\circ}_{\|\cdot\|}(1,\vec{0})\right)=\left(\tfrac{D}{2}\right)^{d}\cdot v_{\|\cdot\|,d}.\]
In some sources, this inequality is only proved for the \(\ell_{2}\) norm using Steiner symmetrization since it is geometrically quite intuitive, but there is a more general proof for all norms using the Generalized Brunn-Minkowski Inequality (Theorem4.5). For convenience, we offer a standard proof using the latter technique in AppendixA.
**Remark 4.10**.: _If one uses the standard observation that for a bounded set \(A\) with \(\operatorname{diam}_{\|\cdot\|}(A)=D\) and any point \(\vec{a}\in A\) it holds that \(A\subseteq\overline{B}_{\|\cdot\|}(D,\vec{a})\), then one immediately obtains_

\[m_{out}(A)\leq m\left(\overline{B}_{\|\cdot\|}(D,\vec{a})\right)=m\left(\overline{B}_{\|\cdot\|}(D,\vec{0})\right)=D^{d}\cdot m\left(\overline{B}_{\|\cdot\|}(1,\vec{0})\right)=D^{d}\cdot v_{\|\cdot\|,d}\]

_(the last equality holds since the boundary of a norm ball has measure zero), so the Isodiametric Inequality gives a factor of \(2\) smaller radius and a factor of \(2^{d}\) smaller measure bound._
**Corollary 1.6** (\(\varepsilon\)-Neighborhoods for Diameter Bounded Partitions).: _Let \(d\in\mathbb{N}\), \(D\in(0,\infty)\), \(\varepsilon\in(0,\infty)\). Let \(\mathbb{R}^{d}\) be equipped with any norm \(\left\|\cdot\right\|\). Let \(\mathcal{P}\) be a partition of \(\mathbb{R}^{d}\) such that every member has diameter at most \(D\) (with respect to \(\left\|\cdot\right\|\)). Then there exists \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{p})\) intersects at least \(k=\left\lceil\left(1+\frac{2\varepsilon}{D}\right)^{d}\right\rceil\) members of the partition._
Proof.: Let \(M=\left(\frac{D}{2}\right)^{d}\cdot v_{\|\cdot\|,d}\). For any \(X\in\mathcal{P}\), since \(\operatorname{diam}_{\|\cdot\|}(X)\leq D\), then by the Isodiametric Inequality (Theorem 4.9), \(m_{out}(X)\leq M\). Since every member of \(\mathcal{P}\) has outer Lebesgue measure at most \(M\), then by Theorem 1.5, there exists \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\|\cdot\|}(\varepsilon,\vec{p})\) intersects at least
\[\left\lceil\left(1+\varepsilon\left(\frac{v_{\|\cdot\|,d}}{M}\right)^{1/d} \right)^{d}\right\rceil=\left\lceil\left(1+\frac{2\varepsilon}{D}\right)^{d}\right\rceil\]
members of \(\mathcal{P}\) as claimed.
We also have that Theorem 1.4 follows as a simple corollary of Theorem 1.5.
**Theorem 1.4**.: _Let \(d\in\mathbb{N}\), \(\varepsilon\in[0,\infty)\), and \(\mathcal{P}\) a partition of \(\mathbb{R}^{d}\) such that every member has outer Lebesgue measure at most \(1\). Then there exists some \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects at least \((1+2\varepsilon)^{d}\) members of \(\mathcal{P}\). Thus, if \(\mathcal{P}\) is a \((k,\varepsilon)\)-secluded partition, then \(k\geq(1+2\varepsilon)^{d}\). Consequently, if \(k\leq 2^{d}\), then it must be that \(\varepsilon\leq\frac{\ln k}{d}\)._
Proof.: Consider the \(\ell_{\infty}\) norm and \(M=1\) noting that for each \(d\in\mathbb{N}\), \(v_{\|\cdot\|_{\infty},d}=2^{d}\) (i.e. the volume of the \(\ell_{\infty}\) unit ball in \(\mathbb{R}^{d}\) is \(2^{d}\)).
Then by Theorem 1.5, there is a point \(\vec{p}\in\mathbb{R}^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) intersects at least
\[\left(1+\varepsilon\left(\frac{v_{\|\cdot\|_{\infty},d}}{M}\right)^{1/d} \right)^{d}=\left(1+\varepsilon\left(\frac{2^{d}}{1}\right)^{1/d}\right)^{d}= \left(1+2\varepsilon\right)^{d}\]
members of \(\mathcal{P}\), and thus trivially the closed ball \(\overline{B}_{\infty}(\varepsilon,\vec{p})\) does as well. Thus, if \(\mathcal{P}\) is \((k,\varepsilon)\)-secluded (meaning by definition that for every \(\vec{p}\in\mathbb{R}^{d}\) it holds that \(\overline{B}_{\infty}(\varepsilon,\vec{p})\) intersects at most \(k\) members of \(\mathcal{P}\)) then it must be that \(k\geq(1+2\varepsilon)^{d}\).
For the consequence, if \(k\leq 2^{d}\), then this implies \(\varepsilon\leq\frac{1}{2}\). Then taking the logarithm of both sides of our inequality and using the fact that \(\ln(1+x)\geq\frac{x}{2}\) for \(x\in[0,1]\), we have
\[\ln(k)\geq d\ln(1+2\varepsilon)\geq d\varepsilon\]
showing that \(\varepsilon\leq\frac{\ln(k)}{d}\).
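As a quick numeric sanity check of this consequence (our own illustration, not part of the original text), the bound \(\varepsilon\leq\frac{\ln k}{d}\) can be evaluated directly:

```python
import math

d = 100
for k in (d + 1, d**2, 2**d):          # sample seclusion values with k <= 2^d
    print(k, "eps <=", math.log(k) / d)
# k = d + 1 = 101 yields eps <= ln(101)/100, about 0.046, so unit-measure
# (d+1, eps)-secluded partitions must have eps in O(ln(d)/d).
```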
We state the following interesting corollary when \(k(d)\) is polynomial.
**Corollary 4.11**.: _Let \(\langle\mathcal{P}_{d}\rangle_{d=1}^{\infty}\) be a sequence of \((k(d),\varepsilon(d))\)-secluded partitions of \(\mathbb{R}^{d}\) such that every member of each \(\mathcal{P}_{d}\) has outer Lebesgue measure at most 1. If \(k(d)\in\mathsf{poly}(d)\) then \(\varepsilon(d)\in O(\frac{\ln d}{d})\) (where the hidden constant can be taken to be anything exceeding the polynomial degree of \(k\))._
Proof.: Since \(k(d)\in\mathsf{poly}(d)\), then there are constants \(C,n\) such that for sufficiently large \(d\), we have \(k(d)\leq Cd^{n}\) which for sufficiently large \(d\) is less than \(2^{d}\) so by Theorem 1.4, for sufficiently large \(d\) we have
\[\varepsilon(d)\leq\frac{\ln(k(d))}{d}\leq\frac{n\ln(Cd)}{d}\in O\left(\frac{ \ln(d)}{d}\right).\]
More specifically, for any \(n^{\prime}>n\) we have for large enough \(d\) that \((n^{\prime}-n)\ln(d)\geq n\ln(C)\), so for large enough \(d\) we have
\[\varepsilon(d)\leq\frac{n\ln(Cd)}{d}=\frac{n[\ln(C)+\ln(d)]}{d}\leq\frac{(n^{ \prime}-n)\ln(d)+n\ln(d)}{d}=\frac{n^{\prime}\ln(d)}{d}\]
showing that the hidden constant can be taken to be any \(n^{\prime}\) larger than the degree \(n\) of \(k\).
### Specific Norms
While we won't do an elaborate analysis with any specific norms, we will at least mention how Theorem 1.5 evaluates when we consider some of the most common norms. For the \(\ell_{1}\), \(\ell_{2}\), and \(\ell_{\infty}\) norms, the value of \(v_{d}\) is respectively \(\frac{2^{d}}{d!}\), \(\frac{\pi^{d/2}}{\Gamma\!\left(\frac{d}{2}+1\right)}\), and \(2^{d}\). Here, \(\Gamma\) denotes the gamma function which generalizes the factorial (specifically, for natural numbers \(n\), \(\Gamma(n+1)=n!\)). The volumes of the \(\ell_{2}\) and \(\ell_{\infty}\) balls are well-known, and the volume of the \(\ell_{1}\) ball can be obtained from [20], or one can recognize that the \(\ell_{1}\) unit ball is (disregarding boundaries) a disjoint union of \(2^{d}\) copies of the standard simplex--one in each orthant, and each simplex has measure \(\frac{1}{d!}\).
The lower bounds on \(k(d)\) based on \(\varepsilon(d)\) stated in Theorem 1.5 for these three specific norms can be found in Table 1. The main observation that we want to make is that when using norms other than \(\ell_{\infty}\), the factor multiplying \(\varepsilon\) is no longer a constant, but a decreasing function of the dimension. This should not be too surprising: for \(\ell_{p}\) norms other than \(\ell_{\infty}\), the unit \(\ell_{p}\)-ball is a subset of the unit \(\ell_{\infty}\) ball, so its measure is smaller and in fact tends to \(0\) as \(d\) tends to \(\infty\), so we should expect not to be able to intersect as many members of the partition.
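The following short Python sketch (our own illustration) computes \(v_{d}\) for these three norms and the per-dimension factor \(v_{d}^{1/d}\) that multiplies \(\varepsilon\) in Theorem 1.5; the asymptotics noted in the comments are standard Stirling-type estimates, not claims taken from the paper.

```python
import math

def v(d: int, p) -> float:
    """Lebesgue measure of the unit ell_p ball in R^d, for p in {1, 2, 'inf'}."""
    if p == 1:
        return 2**d / math.factorial(d)
    if p == 2:
        return math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return 2.0**d  # ell_infinity

# Per-dimension factor v_d^{1/d} multiplying eps in Theorem 1.5:
for d in (2, 10, 100):
    print(d, [round(v(d, p) ** (1 / d), 4) for p in (1, 2, "inf")])
# For ell_1, v_d^{1/d} = 2/(d!)^{1/d} behaves like 2e/d and tends to 0;
# for ell_2 it behaves like sqrt(2*pi*e/d) and tends to 0;
# for ell_infinity it is the constant 2.
```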
## 5 A Neighborhood Sperner/KKM/Lebesgue Theorem
In this section, we restate and prove our neighborhood variant of the Sperner/KKM/Lebesgue result on the cube. The proof is illustrated in Figure 2.
**Theorem 1.8** (Neighborhood Sperner lemma).: _Given a coloring of \([0,1]^{d}\) in which no color includes points of opposite faces, for any \(\varepsilon\in(0,\frac{1}{2}]\) there exists a point \(\vec{p}\in[0,1]^{d}\) such that \(B^{\circ}_{\infty}(\varepsilon,\vec{p})\) contains points of at least \(\left(1+\frac{2}{3}\varepsilon\right)^{d}\) different colors._
Figure 2: (a) shows a (finite) coloring \(\chi\) of the unit cube \([-\frac{1}{2},\frac{1}{2}]^{2}\) such that no color includes points on opposite edges. (b) shows the natural extension \(\gamma\) of that coloring to \([-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{2}\). The extension is obtained by mapping each point \(\vec{y}\) to the point \(\vec{x}\) for which each coordinate value \(y_{i}\) is restricted to be within \([-\frac{1}{2},\frac{1}{2}]\), and then \(\vec{y}\) is given whatever color \(\vec{x}\) had. (c), (e), and (g) show three of the five colors and demonstrate that there is at least one quadrant of the \(\varepsilon\)-ball that can be Minkowski summed with the color so that the sum remains a subset of the extended cube. For red it is the lower right quadrant, for purple it is the upper right, and for gray it could be the upper left (shown) or the upper right. (d), (f), and (h) show the resulting Minkowski sum for each color. Utilizing the Brunn-Minkowski inequality, this set will have substantially greater area—by a factor of at least \((1+\frac{\varepsilon}{1+\varepsilon})^{2}\).
Proof.: For convenience, we will assume that the cube is \([-\frac{1}{2},\frac{1}{2}]^{d}\) rather than \([0,1]^{d}\). Let \(C\) be a set (of colors) and \(\chi\colon[-\frac{1}{2},\frac{1}{2}]^{d}\to C\) be such a coloring of the unit cube \([-\frac{1}{2},\frac{1}{2}]^{d}\). Note that if \(C\) has infinite cardinality then because we can cover the cube with finitely many \(\varepsilon\)-balls one of them must intersect infinitely many color sets so the result is true. Thus we assume that \(C\) has finite cardinality.
For each color \(c\in C\) we will let \(X_{c}\) denote the set of points assigned color \(c\) by \(\chi\)--that is, \(X_{c}=\chi^{-1}(c)\). Note that the hypothesis that no color includes points of opposite faces formally means that for every color \(c\in C\), the set \(X_{c}\) has the property that for each coordinate \(i\in[d]\), the projection \(\pi_{i}(X_{c})=\{x_{i}:\vec{x}\in X_{c}\}\) does not contain both \(-\frac{1}{2}\) and \(\frac{1}{2}\).
The first step in the proof is to extend the coloring \(\chi\) to the larger cube \([-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^{d}\) in a natural way. Consider the following function \(f\) which truncates points in the larger interval to be in the smaller interval:
\[f\colon[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon] \to[-\tfrac{1}{2},\tfrac{1}{2}]\] \[f(y)\stackrel{{\text{def}}}{{=}}\begin{cases}-\tfrac{1}{2}&y\leq-\frac{1}{2}\\ y&y\in(-\frac{1}{2},\frac{1}{2})\\ \tfrac{1}{2}&y\geq\frac{1}{2}\end{cases}\]

Let \(\vec{f}\colon[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\to[-\tfrac{1}{2},\tfrac{1}{2}]^{d}\) be the function which is \(f\) in each coordinate: \(\vec{f}(\vec{y})\stackrel{{\text{def}}}{{=}}\langle f(y_{i})\rangle_{i=1}^{d}\).
Now extend the coloring \(\chi\) to the coloring \(\gamma\colon[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\to C\) defined by
\[\gamma(\vec{x})\stackrel{{\text{def}}}{{=}}\chi\left(\vec{f}\left(\vec{x}\right)\right).\]
For each color \(c\in C\), let \(Y_{c}=\gamma^{-1}(c)\) denote the points assigned color \(c\) by \(\gamma\) and note that \(X_{c}\subseteq Y_{c}\) (since \(\gamma\) agrees with \(\chi\) on the unit cube). Consistent with this notation, we will typically refer to a point in the unit cube as \(\vec{x}\) and a point in the extended cube as \(\vec{y}\).
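Before proceeding to the claims, here is a small Python transcription (ours, purely illustrative) of the truncation map \(f\), its coordinate-wise extension \(\vec{f}\), and the extended coloring \(\gamma\):

```python
from typing import Callable, Sequence

def f(y: float) -> float:
    """The truncation map: clamp y into [-1/2, 1/2]."""
    return max(-0.5, min(0.5, y))

def extend_coloring(chi: Callable[[Sequence[float]], int]) -> Callable[[Sequence[float]], int]:
    """gamma(y) = chi(f applied coordinate-wise to y)."""
    return lambda y: chi([f(yi) for yi in y])

# Toy 1-d coloring with no color on both endpoints: sign of the coordinate.
chi = lambda x: 0 if x[0] < 0 else 1
gamma = extend_coloring(chi)
assert gamma([-0.7]) == 0 and gamma([0.7]) == 1  # extended points inherit boundary colors
```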
We make the following claim which implies that for each color \(c\in C\), the set \(Y_{c}\) of points of that color in the extended coloring are contained in a set bounded away from one side of the extended cube \([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\) in each coordinate.
**Claim A**.: _For each color \(c\in C\) there exists an orientation \(\vec{v}\in\{-1,1\}^{d}\) such that \(Y_{c}\subseteq\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\)._
Proof of Claim.: Fix an arbitrary coordinate \(i\in[d]\). Note that for every \(\vec{y}\in Y_{c}\) we have \(\vec{f}(\vec{y})\in X_{c}\), which is to say that \(\vec{y}\) has the same color in the extended coloring as \(\vec{f}(\vec{y})\) does in the original coloring (see footnote 10).
Footnote 10: For every \(\vec{y}\in Y_{c}\) we have (by definition of \(Y_{c}\)) that \(\gamma(\vec{y})=c\) and (by definition of \(\gamma\)) that \(\gamma(\vec{y})=\chi(\vec{f}(\vec{y}))\) showing that \(\chi(\vec{f}(\vec{y}))=c\) and thus (by definition of \(X_{c}\)) that \(\vec{f}(\vec{y})\in X_{c}\).
Note that if there is some \(\vec{y}\in Y_{c}\) with \(y_{i}\leq-\tfrac{1}{2}\), then \(f(y_{i})=-\tfrac{1}{2}\) so \(\pi_{i}(X_{c})\ni f(y_{i})=-\tfrac{1}{2}\). Similarly, if there is some \(\vec{y}\in Y_{c}\) with \(y_{i}\geq\tfrac{1}{2}\), then \(\pi_{i}(X_{c})\ni\tfrac{1}{2}\). Recall that by hypothesis, \(\pi_{i}(X_{c})\) does not contain both \(-\tfrac{1}{2}\) and \(\tfrac{1}{2}\) which means it is either the case that for all \(\vec{y}\in Y_{c}\) we have \(y_{i}>-\tfrac{1}{2}\) (so \(\pi_{i}(Y_{c})\subseteq(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\)) or it is the case that for all \(\vec{y}\in Y_{c}\) we have \(y_{i}<\tfrac{1}{2}\) (so \(\pi_{i}(Y_{c})\subseteq[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2})\)).
Thus we can choose \(v_{i}\in\{-1,1\}\) such that \(\pi_{i}(Y_{c})\subseteq v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\). Since this is true for each coordinate \(i\in[d]\) we can select \(\vec{v}\in\{-1,1\}^{d}\) such that
\[Y_{c}\subseteq\prod_{i=1}^{d}\pi_{i}(Y_{c})\subseteq\prod_{i=1}^{d}v_{i}\cdot(- \frac{1}{2},\frac{1}{2}+\varepsilon]\]
as claimed.
For an orientation \(\vec{v}\in\left\{-1,1\right\}^{d}\), let \(B_{\vec{v}}\) denote the set \(B_{\vec{v}}\stackrel{{\text{def}}}{{=}}\prod_{i=1}^{d}-v_{i}\cdot(0,\varepsilon)\) which should be interpreted as an open orthant of the \(\ell_{\infty}\)\(\varepsilon\)-ball centered at the origin--specifically the orthant opposite the orientation \(\vec{v}\). Building on Claim A, we get the following:
**Claim B**.: _For each color \(c\in C\), there exists an orientation \(\vec{v}\in\left\{-1,1\right\}^{d}\) such that \(Y_{c}+B_{\vec{v}}\subseteq[-\frac{1}{2}-\varepsilon,\frac{1}{2}+\varepsilon]^ {d}\)._
Proof of Claim.: Let \(\vec{v}\) be an orientation given in Claim A for color \(c\). We get the following chain of containments:
\[Y_{c}+B_{\vec{v}} =Y_{c}+\left(\prod_{i=1}^{d}-v_{i}\cdot(0,\varepsilon)\right)\] (Def'n of \[B_{\vec{v}}\] ) \[\subseteq\left(\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1} {2}+\varepsilon]\right)+\left(\prod_{i=1}^{d}-v_{i}\cdot(0,\varepsilon)\right)\] (Claim A) \[=\left(\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+ \varepsilon]\right)+\left(\prod_{i=1}^{d}v_{i}\cdot(-\varepsilon,0)\right)\] (Factor a negative) \[=\prod_{i=1}^{d}v_{i}\cdot(-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2} +\varepsilon)\] (Minkowski sum of rectangles) \[\subseteq[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^ {d}.\] ( \[v_{i}\in\left\{-1,1\right\}\] )
This proves the claim.
We also claim that \(Y_{c}+B_{\vec{v}}\) has a substantial measure.
**Claim C**.: _For each color \(c\in C\) and any orientation \(\vec{v}\in\left\{-1,1\right\}^{d}\), the set \(Y_{c}+B_{\vec{v}}\) is Borel measurable and \(m(Y_{c}+B_{\vec{v}})\geq m_{out}(Y_{c})\cdot\left(1+\frac{\varepsilon}{1+ \varepsilon}\right)^{d}\)._
Proof of Claim.: Let \(M=(1+\varepsilon)^{d}\), which is the measure of \(\prod_{i=1}^{d}u_{i}\cdot(-\tfrac{1}{2},\tfrac{1}{2}+\varepsilon]\) for any orientation \(\vec{u}\in\{-1,1\}^{d}\); because by Claim A, \(Y_{c}\) is a subset of such a set (for the orientation given there), we have \(m_{out}(Y_{c})\leq M\).
We have that \(Y_{c}+B_{\vec{v}}\) is Borel measurable and that \(m\left(Y_{c}+B_{\vec{v}}\right)\geq\left(m_{out}(Y_{c})^{\frac{1}{d}}+\varepsilon\right)^{d}\) by Lemma 4.6 (see details in footnote 11). Thus, we have the following chain of inequalities:

Footnote 11: Note that for the \(\ell_{\infty}\) norm, the measure of the unit ball is \(v_{\|\cdot\|_{\infty},d}=2^{d}\). Then note that \(B_{\vec{v}}\) is an open orthant of an \(\varepsilon\) ball with respect to \(\ell_{\infty}\), so is in fact itself an \(\frac{\varepsilon}{2}\) ball with respect to \(\ell_{\infty}\). This is why we get “\(\varepsilon\)” instead of the “\(2\varepsilon\)” in Lemma 4.6. We could translate this open ball to the origin and translate the set \(Y_{c}\) accordingly to get the same Minkowski sum without changing the measures, and after doing so we could apply Lemma 4.6 verbatim.

\[m(Y_{c}+B_{\vec{v}}) \geq\left(m_{out}(Y_{c})^{1/d}+\varepsilon\right)^{d}\] (Above) \[=M\cdot\left(\frac{m_{out}(Y_{c})^{1/d}}{M^{1/d}}+\frac{\varepsilon}{M^{1/d}}\right)^{d}\] (Factor out \(M\)) \[\geq M\cdot\left(\frac{m_{out}(Y_{c})}{M}\right)\cdot\left(1+\frac{\varepsilon}{M^{1/d}}\right)^{d}\] (Lemma 4.7) \[=m_{out}(Y_{c})\cdot\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\] (Simplify and use \(M=(1+\varepsilon)^{d}\))
Now, consider the indexed family \(\mathcal{A}=\left\{Y_{c}+B_{\vec{v}(c)}\right\}_{c\in C}\) (where \(\vec{v}(c)\) is an orientation for \(c\) as in Claim A and Claim B) noting that it has finite cardinality because \(C\) has finite cardinality. Considering the sum of measures of sets in \(\mathcal{A}\), we have the following:
\[\sum_{A\in\mathcal{A}}m(A) =\sum_{c\in C}m\left(Y_{c}+B_{\vec{v}(c)}\right)\] (Def'n of \(\mathcal{A}\); measurability was shown above) \[\geq\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot\sum_{c\in C}m_{out}(Y_{c})\] (Claim C and linearity of summation) \[\geq\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot m_{out}\left(\bigcup_{c\in C}Y_{c}\right)\] (Countable/finite subadditivity of outer measures) \[=\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot m_{out}\left([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\right)\] (The \(Y_{c}\)'s partition \([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\)) \[=\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot(1+2\varepsilon)^{d}\] (Evaluate outer measure)
By Claim B, each member of \(\mathcal{A}\) is a subset of \([-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\), so by Proposition A.9, there exists a point \(\vec{p}\in[-\tfrac{1}{2}-\varepsilon,\tfrac{1}{2}+\varepsilon]^{d}\) that belongs to at least
\[\left\lceil\frac{\left(1+\tfrac{\varepsilon}{1+\varepsilon}\right)^{d}\cdot(1 +2\varepsilon)^{d}}{(1+2\varepsilon)^{d}}\right\rceil=\left\lceil\left(1+ \frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\]
sets in \(\mathcal{A}\). That is, \(\vec{p}\) belongs to \(Y_{c}+B_{\vec{v}(c)}\) for at least \(\left\lceil\left(1+\tfrac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) colors \(c\in C\). For each such color \(c\), it follows that \(\vec{p}+(-\varepsilon,\varepsilon)^{d}\) intersects \(Y_{c}\) (see justification in footnote 12). Note that with respect to the
\(\ell_{\infty}\) norm, \(\vec{p}+(-\varepsilon,\varepsilon)^{d}=B_{\infty}^{\circ}(\varepsilon,\vec{p})\) showing that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) contains points of at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) colors (according to the coloring of \(\gamma\) since we are discussing sets \(Y_{c}\)).

Footnote 12: If \(\vec{p}\in Y_{c}+B_{\vec{v}(c)}\), then \(\vec{p}=\vec{y}+\vec{b}\) for some \(\vec{y}\in Y_{c}\) and \(\vec{b}\in B_{\vec{v}(c)}\subseteq(-\varepsilon,\varepsilon)^{d}\), so \(\vec{y}=\vec{p}-\vec{b}\in\vec{p}+(-\varepsilon,\varepsilon)^{d}\) since \((-\varepsilon,\varepsilon)^{d}\) is symmetric about the origin.
What we really want, though, is a point in the unit cube that has this property rather than a point in the extended cube, and we want it with respect to the original coloring \(\chi\) rather than the extended coloring \(\gamma\). We will show that the point \(\vec{f}\left(\vec{p}\right)\) suffices.
**Claim D**.: _If \(c\in C\) is a color for which \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\cap Y_{c}\neq\emptyset\), then also \(B_{\infty}^{\circ}(\varepsilon,\vec{f}\left(\vec{p}\right))\cap X_{c}\neq\emptyset\)._
Proof of Claim.: Let \(\vec{y}\in B_{\infty}^{\circ}(\varepsilon,\vec{p})\cap Y_{c}\). Then because \(\vec{y}\in B_{\infty}^{\circ}(\varepsilon,\vec{p})\), we have \(\|\vec{y}-\vec{p}\|_{\infty}<\varepsilon\), so for each coordinate \(i\in[d]\), \(|y_{i}-p_{i}|<\varepsilon\). It is easy to analyze the 9 cases (or 3 by symmetries) arising in the definition of \(f\) to see that this implies \(|f(y_{i})-f(p_{i})|<\varepsilon\) as well (i.e. \(f\) maps pairs of values in its domain so that they are no farther apart), thus \(\|\vec{f}\left(\vec{y}\right)-\vec{f}\left(\vec{p}\right)\|_{\infty}<\varepsilon\) and thus \(\vec{f}\left(\vec{y}\right)\in B_{\infty}^{\circ}(\varepsilon,\vec{f}\left(\vec{p}\right))\).
Also, as justified in footnote 10, for any \(\vec{y}\in Y_{c}\) we have \(\vec{f}(\vec{y})\in X_{c}\) so that \(\vec{f}\left(\vec{y}\right)\in B_{\infty}^{\circ}(\varepsilon,\vec{f}\left(\vec{p}\right))\cap X_{c}\) which shows that the intersection is non-empty.
Thus, because \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects \(Y_{c}\) for at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) choices of color \(c\in C\), by Claim D \(\vec{f}\left(\vec{p}\right)\) is a point in the unit cube such that \(B_{\infty}^{\circ}(\varepsilon,\vec{f}\left(\vec{p}\right))\) intersects \(X_{c}\) for at least \(\left\lceil\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\right\rceil\) different colors \(c\in C\). That is, this ball contains points from at least this many of the original color sets.
The final step in the proof of the theorem is to clean up the expression with an inequality. Note that \(C\) must contain at least \(2^{d}\) colors because each of the \(2^{d}\) corners of the unit cube must be assigned a unique color, since any pair of corners belongs to an opposite pair of faces of the cube. For this reason it is trivial that for \(\varepsilon>\frac{1}{2}\) there is a point \(\vec{p}\) such that \(B_{\infty}^{\circ}(\varepsilon,\vec{p})\) intersects at least \(2^{d}\) colors: just let \(\vec{p}\) be the midpoint of the unit cube. Thus, the only interesting case is \(\varepsilon\in(0,\frac{1}{2}]\), and for such \(\varepsilon\) we have \(1+\varepsilon\leq\frac{3}{2}\) and thus \(\frac{\varepsilon}{1+\varepsilon}\geq\frac{2}{3}\varepsilon\), showing that \(\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\geq(1+\frac{2}{3}\varepsilon)^{d}\). This completes the proof of the theorem.
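As a quick numeric check (ours, not part of the original text) that this final simplification only loses a modest amount, one can compare the proof's factor \(\left(1+\frac{\varepsilon}{1+\varepsilon}\right)^{d}\) with the stated bound \(\left(1+\frac{2}{3}\varepsilon\right)^{d}\):

```python
# Compare the proof's factor (1 + eps/(1+eps))^d with the stated
# simplification (1 + (2/3)*eps)^d for eps in (0, 1/2].
for d, eps in ((10, 0.1), (50, 0.25), (100, 0.5)):
    exact = (1 + eps / (1 + eps)) ** d
    stated = (1 + 2 * eps / 3) ** d
    assert exact >= stated      # equality exactly at eps = 1/2
    print(d, eps, round(exact, 2), round(stated, 2))
```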
## 6 New Constructions
The partitions that we construct in this section are of a very natural form: we build a partition of the large \(d\)-dimensional space by splitting the coordinates into smaller sets and separately partitioning each set of coordinates. In the end, the smaller partitions will be known partition constructions [23]. We will define the construction very generically. We need two basic results. The following observation notes that if a partition is \((k,\varepsilon)\)-secluded, then we can _increase_ \(k\) to \(k^{\prime}\) and _decrease_ \(\varepsilon\) to \(\varepsilon^{\prime}\) and the partition is trivially \((k^{\prime},\varepsilon^{\prime})\)-secluded.
**Observation 6.1** (Monotonicity in \(k\) and \(\varepsilon\)).: _Let \(d\in\mathbb{N}\), \(k,k^{\prime}\in\mathbb{N}\) with \(k^{\prime}\geq k\), \(\varepsilon,\varepsilon^{\prime}\in[0,\infty)\) with \(\varepsilon^{\prime}\leq\varepsilon\), and \(\mathcal{P}\) a \((k,\varepsilon)\)-secluded partition of \(\mathbb{R}^{d}\). Then \(\mathcal{P}\) is also a \((k^{\prime},\varepsilon^{\prime})\)-secluded partition of \(\mathbb{R}^{d}\)._
Proof.: Since \(\mathcal{P}\) is \((k,\varepsilon)\)-secluded, by definition every \(\varepsilon\)-ball intersects at most \(k\) members of \(\mathcal{P}\), so trivially every (no larger) \(\varepsilon^{\prime}\)-ball intersects at most \(k\leq k^{\prime}\) members of \(\mathcal{P}\).
We will frequently refer to the above observation just using the phrase "by monotonicity, \(\mathcal{P}\) is \((k^{\prime},\varepsilon^{\prime})\)-secluded".
**Fact 6.2** (Trivial \(k\) for Unit Cube Partitions).: _Let \(d\in\mathbb{N}\), \(\varepsilon\in[0,\infty)\), and \(\mathcal{P}\) be a unit cube partition of \(\mathbb{R}^{d}\). Then \(\mathcal{P}\) is \((k,\varepsilon)\)-secluded for \(k=\big{\lfloor}(2+2\varepsilon)^{d}\big{\rfloor}\)._
Proof.: Consider any point \(\vec{p}\in\mathbb{R}^{d}\). Observe that for any \(X\in\mathcal{P}\), \(X\) is a unit cube, so \(\operatorname{diam}_{\infty}(X)=1\), so if \(X\) intersects \(\overline{B}_{\infty}(\varepsilon,\vec{p})\), then \(X\subseteq\overline{B}_{\infty}(1+\varepsilon,\vec{p})\).
Because (1) each \(X\in\mathcal{P}\) has measure \(1\), and (2) every pair of members are disjoint (because \(\mathcal{P}\) is a partition), and (3) the measure of \(\overline{B}_{\infty}(1+\varepsilon,\vec{p})=\vec{p}+[-1-\varepsilon,1+ \varepsilon]^{d}\) is \((2+2\varepsilon)^{d}\), it follows that at most \(\big{\lfloor}(2+2\varepsilon)^{d}\big{\rfloor}\) members of \(\mathcal{P}\) are a subset of \(\overline{B}_{\infty}(1+\varepsilon,\vec{p})\) and thus at most \(\big{\lfloor}(2+2\varepsilon)^{d}\big{\rfloor}\) members of \(\mathcal{P}\) intersect \(\overline{B}_{\infty}(\varepsilon,\vec{p})\) which shows that \(\mathcal{P}\) is \((k,\varepsilon)\)-secluded for \(k=\big{\lfloor}(2+2\varepsilon)^{d}\big{\rfloor}\) as claimed.
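For the canonical axis-aligned grid of half-open unit cubes, the count in Fact 6.2 can be computed exactly, one axis at a time; the following Python sketch (our own, with a hypothetical helper name; the grid partition is our illustrative instance, not a construction from the paper) verifies the \(\lfloor(2+2\varepsilon)^{d}\rfloor\) bound on an example:

```python
import math

def grid_cubes_hit(p, eps):
    """Count unit grid cubes prod_i [m_i, m_i+1) that meet the closed
    ell_infinity eps-ball around p: per-axis counts multiplied out."""
    count = 1
    for pi in p:
        count *= math.floor(pi + eps) - math.floor(pi - eps) + 1
    return count

p, eps, d = [0.0, 0.0, 0.0], 0.25, 3
assert grid_cubes_hit(p, eps) <= math.floor((2 + 2 * eps) ** d)
print(grid_cubes_hit(p, eps), math.floor((2 + 2 * eps) ** d))  # 8 15
```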
### Construction
**Definition 6.3** (Partition Product).: _Let \(d_{1},\ldots,d_{n}\in\mathbb{N}\) and \(\mathcal{P}_{1},\ldots,\mathcal{P}_{n}\) be partitions of \(\mathbb{R}^{d_{1}},\ldots,\mathbb{R}^{d_{n}}\) respectively. Letting \(d=\sum_{i=1}^{n}d_{i}\) we define the product partition of \(\mathbb{R}^{d}\) as_
\[\prod_{i=1}^{n}\mathcal{P}_{i}\stackrel{{\mathrm{def}}}{{=}}\left\{ \prod_{i=1}^{n}X_{i}\colon X_{i}\in\mathcal{P}_{i}\right\}\]
_where \(\prod_{i=1}^{n}X_{i}\) is viewed as a subset of \(\mathbb{R}^{d}\)._
We specifically stated that \(\prod_{i=1}^{n}X_{i}\) is viewed as a subset of \(\mathbb{R}^{d}\), because technically it is a subset of \(\prod_{i=1}^{n}\mathbb{R}^{d_{i}}\), but this is naturally isomorphic to \(\mathbb{R}^{d}=\mathbb{R}^{\sum_{i=1}^{n}d_{i}}\). For example, technically, if \(d_{1}=d_{2}=d_{3}=2\), then the elements of \(\prod_{i=1}^{n}\mathbb{R}^{d_{i}}\) are of the form \(\langle\langle x_{1},x_{2}\rangle,\langle x_{3},x_{4}\rangle,\langle x_{5},x_{ 6}\rangle\rangle\), but this is trivially isomorphic to \(\mathbb{R}^{6}\) by instead considering the element as \(\langle x_{1},x_{2},x_{3},x_{4},x_{5},x_{6}\rangle\).
Also observe (shown below) that if the original partitions were unit cube partitions, then the product partition is also a unit cube partition.
**Fact 6.4** (Unit Cube Preservation).: _If \(d_{1},\ldots,d_{n}\in\mathbb{N}\) and \(\mathcal{P}_{1},\ldots,\mathcal{P}_{n}\) are unit cube partitions of \(\mathbb{R}^{d_{1}},\ldots,\mathbb{R}^{d_{n}}\) respectively, then \(\prod_{i=1}^{n}\mathcal{P}_{i}\) is also a unit cube partition._
Proof.: Each member of \(\prod_{i=1}^{n}\mathcal{P}_{i}\) is of the form \(\prod_{i=1}^{n}X_{i}\) where \(X_{i}\in\mathcal{P}_{i}\). Since \(\mathcal{P}_{i}\) is a unit cube partition, each \(X_{i}\) is a product of translates of \([0,1)\), and thus \(\prod_{i=1}^{n}X_{i}\) is also a product of translates of \([0,1)\), so the member is a unit cube.
We can now present the main result of this section which is that if we take a product of partitions, and we have a guarantee for each \(\mathcal{P}_{i}\) that it is \((k_{i},\varepsilon_{i})\)-secluded, then we can guarantee the product partition is \((k,\varepsilon)\)-secluded where \(k\) is the product of the \(k_{i}\)'s and \(\varepsilon\) is the minimum of the \(\varepsilon_{i}\)'s.
**Proposition 6.5** (Product Partition Exclusion Guarantees).: _Let \(n\in\mathbb{N}\). For each index \(i\in[n]\), let \(d_{i},k_{i}\in\mathbb{N}\), \(\varepsilon_{i}\in(0,\infty)\) and \(\mathcal{P}_{i}\) be a \((k_{i},\varepsilon_{i})\)-secluded partition of \(\mathbb{R}^{d_{i}}\). Then the product partition \(\mathcal{P}=\prod_{i=1}^{n}\mathcal{P}_{i}\) is a \((k,\varepsilon)\)-secluded partition of \(\mathbb{R}^{d}\) where \(d=\sum_{i=1}^{n}d_{i}\), and \(k=\prod_{i=1}^{n}k_{i}\), and \(\varepsilon=\min_{i\in[n]}\varepsilon_{i}\)._
Proof Sketch.: The basic idea is that for any point \(\vec{p}\in\mathbb{R}^{d}\), we consider how many members of \(\mathcal{P}\) intersect \(\overline{B}_{\infty}(\varepsilon,\vec{p})\). Conceptually13, we think of \(\vec{p}\) as a sequence \(\langle\vec{p}^{(i)}\rangle_{i=1}^{n}\) of \(n\) points where the \(i\)th
point \(\vec{p}^{(i)}\) belongs to \(\mathbb{R}^{d_{i}}\). Because we are working with the \(\ell_{\infty}\) norm (that is the norm used by the definition of secluded), the \(\varepsilon\) ball around \(\vec{p}\) is the product of the \(\varepsilon\) balls around each \(\vec{p}^{(i)}\) which is smaller than the product of \(\varepsilon_{i}\) balls around each \(\vec{p}^{(i)}\) because we chose \(\varepsilon\) as the minimum size. Thus, if the \(\varepsilon\) ball around \(\vec{p}\) intersects a member \(X\) of the partition \(\mathcal{P}\), then conceptually viewing \(X\) as a sequence \(\langle X_{i}\rangle_{i=1}^{n}\) where \(X_{i}\) is a member of \(\mathcal{P}_{i}\), it must be for each \(i\in[n]\) that the \(\varepsilon\) ball around \(\vec{p}^{(i)}\) intersects \(X_{i}\) (and thus so does the \(\varepsilon_{i}\) ball since \(\varepsilon_{i}\geq\varepsilon\)). This means (for each \(i\in[n]\)) that \(X_{i}\) is one of at most \(k_{i}\) members of \(\mathcal{P}_{i}\) because at most \(k_{i}\) members of \(\mathcal{P}_{i}\) intersect the \(\varepsilon_{i}\) ball around \(\vec{p}^{(i)}\) (by definition of \(\mathcal{P}_{i}\) being \((k_{i},\varepsilon_{i})\)-secluded). Thus \(X\) is one of at most \(\prod_{i=1}^{n}k_{i}=k\) members of \(\mathcal{P}\). That is, there are at most \(k\) members of \(\mathcal{P}\) that intersect the \(\varepsilon\) ball around \(\vec{p}\) which is the definition of \(\mathcal{P}\) being \((k,\varepsilon)\)-secluded.
Utilizing the construction above, we will now take a unit cube partition of [22] for each \(\mathbb{R}^{d_{i}}\) and take the product to obtain a new partition. Since each dimension \(d_{i}\) is smaller than the dimension \(d\), this allows us to get a larger value of \(\varepsilon_{i}\) for each factor partition, and thus a larger value of \(\varepsilon\) for the partition of \(\mathbb{R}^{d}\) than if we had used one of the original partitions directly. The price we pay for this is that the value of \(k\) also increases. The following result is nothing more than Proposition 6.5 where each partition in the product is specifically one of the partitions of [22].
**Theorem 1.9**.: _Let \(f:\mathbb{N}\to\mathbb{N}\) be any function. For each \(d\in\mathbb{N}\), there exists a \((k(d),\varepsilon(d))\)-secluded unit cube partition of \(\mathbb{R}^{d}\) where \(k(d)=(f(d)+1)^{\lceil\frac{d}{f(d)}\rceil}\) and \(\varepsilon(d)=\frac{1}{2f(d)}\)._
Proof.: Fix \(d\in\mathbb{N}\). Let \(d^{\prime}=f(d)\) and \(n=\lceil\frac{d}{f(d)}\rceil=\lceil\frac{d}{d^{\prime}}\rceil\). Let \(\mathcal{P}^{\prime}\) be a \((d^{\prime}+1,\frac{1}{2d^{\prime}})\)-secluded unit cube partition of \(\mathbb{R}^{d^{\prime}}\) (use the results of [22]).
By Proposition 6.5 and Fact 6.4, \(\mathcal{P}=\prod_{i=1}^{n}\mathcal{P}^{\prime}\) is a \((k,\varepsilon)\)-secluded unit cube partition of \(\mathbb{R}^{n\cdot d^{\prime}}\) where \(k=(d^{\prime}+1)^{n}\) and \(\varepsilon=\frac{1}{2d^{\prime}}\). Since \(n\cdot d^{\prime}=\lceil\frac{d}{d^{\prime}}\rceil\cdot d^{\prime}\geq d\), this trivially (by ignoring extra coordinates) gives a partition of \(\mathbb{R}^{d}\) with these same properties (alternatively, see footnote 14). Recalling the definitions of \(d^{\prime}=f(d)\) and \(n=\lceil\frac{d}{f(d)}\rceil\) gives the stated result.
Footnote 14: An alternate perspective is to let \(d_{1},\ldots,d_{n}\) be such that \(\sum_{i=1}^{n}d_{i}=d\), where the first portion of the list has \(d_{i}=d^{\prime}\) and the second portion has \(d_{i}=d^{\prime\prime}\stackrel{{\text{def}}}{{=}}d^{\prime}-1\). Then let \(\mathcal{P}^{\prime}\) be a \((d^{\prime}+1,\frac{1}{2d^{\prime}})\)-secluded partition of \(\mathbb{R}^{d^{\prime}}\) as before, and let \(\mathcal{P}^{\prime\prime}\) be a \((d^{\prime\prime}+1,\frac{1}{2d^{\prime\prime}})\)-secluded partition of \(\mathbb{R}^{d^{\prime\prime}}\). Since \(d^{\prime\prime}<d^{\prime}\), \(\mathcal{P}^{\prime\prime}\) is (by monotonicity) a \((d^{\prime}+1,\frac{1}{2d^{\prime}})\)-secluded partition. Then take \(\mathcal{P}_{i}=\mathcal{P}^{\prime}\) when \(d_{i}=d^{\prime}\) and \(\mathcal{P}_{i}=\mathcal{P}^{\prime\prime}\) when \(d_{i}=d^{\prime\prime}\). Again, we get that \(\mathcal{P}\) is \((k,\varepsilon)\)-secluded for \(k=(d^{\prime}+1)^{n}\) and \(\varepsilon=\frac{1}{2d^{\prime}}\).
The above construction is very general. However, we can instantiate with various choices of parameters to get the following theorem. As discussed in the introduction, for these constructions the tolerance parameter \(\varepsilon(d)\) achieved is optimal up to a \(O(\ln d)\) factor. Below, \(\mathsf{weaksubexp}(d)\) is \(2^{o(d)}\).
**Theorem 6.6**.: _Let \(\varepsilon:\mathbb{N}\to(0,\infty)\). Then there exists \(k:\mathbb{N}\to\mathbb{N}\) such that for every \(d\in\mathbb{N}\) there exists a \((k(d),\varepsilon(d))\)-secluded unit hypercube partition of \(\mathbb{R}^{d}\), and \(k\) has the following properties:_
1. _If \(\varepsilon(d)\in O(1)\), then \(k(d)\in\mathsf{exp}(d)\)._
2. _If \(\varepsilon(d)\in o(1)\), then \(k(d)\in\mathsf{weaksubexp}(d)\)._
3. _If \(\varepsilon(d)\in O(\frac{1}{d})\), then \(k(d)\in\mathsf{poly}(d)\)._
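The three regimes of Theorem 6.6 come from instantiating Theorem 1.9 with different block sizes \(f(d)\); the following Python sketch (ours, purely illustrative) makes the trade-off concrete:

```python
import math

def theorem_1_9(d: int, f_d: int):
    """k(d) and eps(d) from Theorem 1.9 for block size f(d) = f_d."""
    return (f_d + 1) ** math.ceil(d / f_d), 1 / (2 * f_d)

d = 100
for f_d in (d, math.isqrt(d), 5, 1):
    k, eps = theorem_1_9(d, f_d)
    print(f"f(d)={f_d}: eps={eps}, k={k}")
# f(d) = d recovers k = d+1 with eps = 1/(2d) (poly regime); f(d) = sqrt(d)
# gives eps about 1/(2 sqrt(d)) with k in weaksubexp(d); constant f(d)
# gives constant eps with k exponential in d.
```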
If all partitions \(\mathcal{P}_{1},\ldots,\mathcal{P}_{n}\) are "efficiently computable" in the sense that given an arbitrary point \(\vec{x}\in\mathbb{R}^{d_{i}}\) there is an efficient algorithm that computes a representation of the member of \(\mathcal{P}_{i}\) containing \(\vec{x}\), then the product partition is also "efficiently computable" because given some point \(\vec{y}\in\mathbb{R}^{d}\), the member that it is contained in can be found by determining which member of \(\mathcal{P}_{1}\) the point \(\langle y_{i}\rangle_{i=1}^{d_{1}}\) is in, and independently determining which member of \(\mathcal{P}_{2}\) the point \(\langle y_{i}\rangle_{i=d_{1}+1}^{d_{1}+d_{2}}\) is in, etc. The member of \(\prod_{i=1}^{n}\mathcal{P}_{i}\) that contains \(\vec{y}\) is just the product of these members. This is an important property for using partitions as the basis of rounding schemes because an algorithm must determine which member/equivalence class a point is in (even if just implicitly). Because the partitions of [22] are "efficiently computable" (see Proposition 11.2 in version 1), so are the partitions in this construction of Theorem 1.9.
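The locate-each-block-independently procedure just described is straightforward to express in code. The following Python sketch is our own illustration; `product_locator` is a hypothetical helper, and the trivial unit-grid locator stands in for the locators of the partitions of [22], whose construction we do not reproduce here.

```python
import math
from typing import Callable, Sequence

Locator = Callable[[Sequence[float]], tuple]  # point -> member representation

def product_locator(dims: Sequence[int], locs: Sequence[Locator]) -> Locator:
    """Locate a point of R^d (d = sum(dims)) in the product partition by
    locating each block of coordinates independently."""
    def locate(y: Sequence[float]) -> tuple:
        out, start = [], 0
        for di, loc in zip(dims, locs):
            out.append(loc(y[start:start + di]))
            start += di
        return tuple(out)
    return locate

# Unit-grid locator per factor: a member is named by its floor coordinates.
grid = lambda x: tuple(math.floor(xi) for xi in x)
loc = product_locator([2, 3], [grid, grid])
print(loc([0.3, 1.7, -0.2, 2.5, 4.0]))  # ((0, 1), (-1, 2, 4))
```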
## 7 A No-Free-Lunch Theorem
**Theorem 1.10**.: _Let \(d,k\in\mathbb{N}\) and \(\varepsilon_{0}\in(0,\infty)\) and \(\delta\in(0,\frac{1}{2}]\) be fixed, and let \(\|\cdot\|\) be a norm on \(\mathbb{R}^{d}\). Suppose there exists \(\varepsilon\in(0,\infty)\) and a deterministic algorithm \(A\) mapping inputs in \(\mathbb{R}^{d}\) to outputs in \(\mathbb{R}^{d}\) with the following universal black box property:_
**Property:**: _For any set_ \(\Lambda\) _(indicating some problem domain) and function_ \(f:\Lambda\to\mathbb{R}^{d}\) _and_ \((\varepsilon_{0},\delta)\)_-approximation algorithm_ \(B\) _for_ \(f\) _(with respect to_ \(\|\cdot\|\)_), it holds that_ \(A\circ B\) _is a_ \((k,\delta)\)_-pseudodeterministic_ \((\varepsilon,\delta)\)_-approximation algorithm for_ \(f\) _(again with respect to_ \(\|\cdot\|\)_)._
_Then_ \(\varepsilon\geq\varepsilon_{0}\cdot\frac{d}{4\ln(2k)}\)_._
Proof of Theorem 1.10.: Because \(A\) is a deterministic algorithm mapping any point in \(\mathbb{R}^{d}\) to \(\mathbb{R}^{d}\) we can consider \(A\) to be a mathematical function \(A:\mathbb{R}^{d}\to\mathbb{R}^{d}\). Every mathematical function induces a natural partition of its domain which consists of the preimages/fibers of the function; that is
\[\mathcal{P}_{A}\stackrel{{\mathrm{def}}}{{=}}\left\{A^{-1}(\vec{y }):\vec{y}\in\mathrm{range}(A)\right\}.\]
In other words, \(\mathcal{P}_{A}\) is the partition defined by the equivalence relation on the domain \(\mathbb{R}^{d}\) defined by \(\vec{x}\sim\vec{x}\,^{\prime}\) if and only if \(A(\vec{x})=A(\vec{x}\,^{\prime})\). Now we make a few claims about this partition.
**Claim A**.: _For all \(\vec{x}\in\mathbb{R}^{d}\), \(\|A(\vec{x})-\vec{x}\|\leq\varepsilon\)._
Proof of Claim.: Suppose for contradiction that there is some \(\vec{x}\in\mathbb{R}^{d}\) such that \(\|A(\vec{x})-\vec{x}\|>\varepsilon\). Let \(\Lambda\) be some set (i.e. problem domain) and \(f:\Lambda\to\mathbb{R}^{d}\) some function with \(\vec{x}\in\mathrm{range}(f)\), and then let \(\lambda\in\Lambda\) be some element which witnesses this (i.e. \(f(\lambda)=\vec{x}\)). Let \(B\) be an \((\varepsilon_{0},\delta)\)-approximation algorithm for \(f\) (with respect to \(\|\cdot\|\)) which has the property that on input \(\lambda\), \(B\) always returns \(f(\lambda)=\vec{x}\) (i.e. \(B\) approximates \(f(\lambda)\) perfectly with probability \(1\)). Thus, on input \(\lambda\), the algorithm \(A\circ B\) always returns \(A(\vec{x})\). But by hypothesis \(\|A(\vec{x})-\vec{x}\|>\varepsilon\) which means that on input \(\lambda\), \(A\circ B\) always returns a value which is not an \(\varepsilon\)-approximation to \(f(\lambda)=\vec{x}\), which contradicts that \(A\circ B\) is an \((\varepsilon,\delta)\)-approximation algorithm for \(f\).
If we were more careful we could actually get the bound above to \(\varepsilon-\varepsilon^{\prime}\), but we won't be that concerned. This allows us to show a bound on the diameter of all members of the partition \(\mathcal{P}_{A}\).
**Claim B**.: _Each member of \(\mathcal{P}_{A}\) has diameter (with respect to \(\|\cdot\|\)) at most \(2\varepsilon\)._
Proof of Claim.: For any member \(X\in\mathcal{P}_{A}\) we have by definition that for all \(\vec{x},\vec{x}\,^{\prime}\in X\), \(A(\vec{x})=A(\vec{x}\,^{\prime})\). By the triangle inequality and Claim A we have
\[\|\vec{x}-\vec{x}\,^{\prime}\|\leq\|\vec{x}-A(\vec{x})\|+\|A(\vec{x})-A(\vec{ x}\,^{\prime})\|+\|A(\vec{x}\,^{\prime})-\vec{x}\,^{\prime}\|\leq\varepsilon+0+\varepsilon\]
which proves the claim.
Then, by Corollary 1.6 and Claim B, there exists some point \(\vec{p}\in\mathbb{R}^{d}\) such that \(\overline{B}_{\|\cdot\|}(\varepsilon_{0}/2,\vec{p})\) intersects at least \(\left(1+\frac{2(\varepsilon_{0}/2)}{(2\varepsilon)}\right)^{d}=\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)^{d}\)-many members of \(\mathcal{P}_{A}\). Let this \(\vec{p}\) be fixed for the remainder of the proof. We use this fact to put a lower bound on \(k\).
**Claim C**.: _It holds that \(k\geq(1-\delta)\cdot\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)^{d}\)._
Proof of Claim.: Let \(T\subseteq\mathbb{R}^{d}\) be a set containing exactly one point in \(X\cap\overline{B}_{\|\cdot\|}(\varepsilon_{0}/2,\vec{p})\) for each \(X\in\mathcal{P}_{A}\) which intersects \(\overline{B}_{\|\cdot\|}(\varepsilon_{0}/2,\vec{p})\). Because \(\mathcal{P}_{A}\) is a partition, distinct \(X,Y\in\mathcal{P}_{A}\) which intersect \(\overline{B}_{\|\cdot\|}(\varepsilon_{0}/2,\vec{p})\) give distinct points regardless of the choice. Thus \(|T|\geq\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)^{d}\).
Let \(\Lambda\) be some set (i.e. problem domain) and \(f:\Lambda\to\mathbb{R}^{d}\) some function with \(\text{range}(f)\cap T\neq\emptyset\), and then let \(\lambda\in\Lambda\) be some element which witnesses this (i.e. \(f(\lambda)\in T\)). Let \(B\) be an \((\varepsilon_{0},\delta)\)-approximation algorithm for \(f\) (with respect to \(\left\|\cdot\right\|\)) which has the property that on input \(\lambda\), \(B\) returns a point selected uniformly (see footnote 15) at random from \(T\). This is a valid \((\varepsilon_{0},\delta)\)-approximation because \(f(\lambda)\in T\) and for all \(\vec{x}\in T\), \(\vec{x}\in\overline{B}_{\|\cdot\|}(\varepsilon_{0}/2,\vec{p})\) so by the triangle inequality \(\|\vec{x}-f(\lambda)\|\leq\varepsilon_{0}\) which means \(B\) always returns an \(\varepsilon_{0}\)-estimate on input \(\lambda\).
Footnote 15: We discuss in a later footnote that the proof will still work even if perfectly uniform selection cannot be attained algorithmically.
Because \(A\circ B\) is a \((k,\delta)\)-pseudodeterministic algorithm, there must be some set \(S_{\lambda}\subseteq T\) with \(|S_{\lambda}|\leq k\) such that \(\Pr[B(\lambda)\in S_{\lambda}]\geq 1-\delta\). Since \(B(\lambda)\) is uniform over \(T\), we have
\[1-\delta\leq\Pr[B(\lambda)\in S_{\lambda}]=\frac{|S_{\lambda}|}{|T|}\leq\frac{ k}{|T|}\]
showing that
\[k\geq(1-\delta)\cdot|T|\geq(1-\delta)\cdot\left(1+\frac{\varepsilon_{0}}{2 \varepsilon}\right)^{d}\]
as claimed (see footnote 16).
Footnote 16: As alluded to in a prior footnote, if perfectly uniform selection can’t be achieved algorithmically, we instead can consider a sequence \(B_{1},B_{2},B_{3},\ldots\) of approximation algorithms for \(f\) each defined the same way as \(B\) but requiring only that \(B_{i}\) distribute solutions close enough to uniformly that the probability of returning any of the \(|T|\) elements is at most \(\frac{1}{|T|}(1+\frac{1}{i})\) so that \(1-\delta\leq\Pr[B_{i}(\lambda)\in S_{\lambda}]\leq\frac{|S_{\lambda}|}{|T|}(1+ \frac{1}{i})\leq\frac{k}{|T|}(1+\frac{1}{i})\). Since this is true for all \(i\in\mathbb{N}\) the inequality passes through the limit and we get the same conclusion that \(1-\delta\leq\frac{k}{|T|}\).
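The counting step in Claim C can also be sanity-checked empirically; the following toy Python simulation (ours, purely illustrative) just confirms that a uniform choice over \(T\) lands in a fixed \(k\)-element set \(S_{\lambda}\) with frequency about \(k/|T|\), which is what forces \(k\geq(1-\delta)|T|\):

```python
import random

# If B(lambda) is uniform over T and S_lambda has k elements, then
# Pr[B(lambda) in S_lambda] = k/|T|; so 1 - delta <= k/|T| forces
# k >= (1 - delta) * |T|.
T_size, k, trials = 1000, 100, 200_000
S = set(range(k))                      # an arbitrary k-element subset of T
hits = sum(random.randrange(T_size) in S for _ in range(trials))
print(hits / trials, k / T_size)       # both approximately 0.1
```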
Now in order to rearrange this lower bound on \(k\) into a lower bound on \(\varepsilon\), we need to utilize an approximation (see footnote 17) which will require the assumption that \(\varepsilon\geq\varepsilon_{0}\), so we claim and prove this next. This should not be surprising because it would be a fantastical result if there were a single deterministic algorithm \(A\) which could improve the accuracy of every \(\varepsilon_{0}\)-approximation algorithm to every function!
Footnote 17: Specifically that \(\ln(1+x)\geq\frac{x}{2}\) for \(x\in[0,\frac{1}{2}]\).
**Claim D**.: _We have that \(\varepsilon\geq\varepsilon_{0}\)._
Proof of Claim.: Let \(\Lambda=\{\lambda_{-},\lambda_{+}\}\) and let \(\vec{v}\in\mathbb{R}^{d}\) be a \(\left\|\cdot\right\|\) unit vector. Let \(f:\Lambda\to\mathbb{R}^{d}\) be defined by \(f(\lambda_{-})=-\varepsilon_{0}\vec{v}\) and \(f(\lambda_{+})=\varepsilon_{0}\vec{v}\). Let \(B\) be the algorithm which always outputs \(\vec{0}\) regardless of its input.
Then \(B\) is an \((\varepsilon_{0},\delta)\)-approximation algorithm for \(f\) because \(\vec{0}\) is an \(\varepsilon_{0}\)-approximation for both \(f(\lambda_{-})\) and \(f(\lambda_{+})\). Because \(A\) is deterministic and \(B\) always outputs \(\vec{0}\), the composition \(A\circ B\) always returns the same value regardless of the input. Let \(\vec{a}\in\mathbb{R}^{d}\) denote this value. Since \(A\circ B\) is an \((\varepsilon,\delta)\)-approximation algorithm for \(f\) it must be that \(\|f(\lambda_{-})-\vec{a}\|\leq\varepsilon\) and \(\|f(\lambda_{+})-\vec{a}\|\leq\varepsilon\), and because we have
\[2\varepsilon_{0}=\|f(\lambda_{+})-f(\lambda_{-})\|\leq\|f(\lambda_{+})-\vec{a}\|+\|f(\lambda_{-})-\vec{a}\|\]
it must either be that \(\|f(\lambda_{-})-\vec{a}\|\geq\varepsilon_{0}\) or \(\|f(\lambda_{+})-\vec{a}\|\geq\varepsilon_{0}\). In either case, it shows \(\varepsilon_{0}\leq\varepsilon\).
Now we are ready to state the final inequality by taking the natural log of both sides of the inequality in Claim C. We then note by Claim D that \(\frac{\varepsilon_{0}}{2\varepsilon}\leq\frac{1}{2}\) and that for \(x\leq\frac{1}{2}\), \(\ln(1+x)\geq\frac{x}{2}\). And lastly, because \(\delta\in(0,\frac{1}{2}]\), we have \(\ln(1-\delta)\geq\ln(\frac{1}{2})\).
\[\ln(k) \geq\ln(1-\delta)+d\ln\left(1+\frac{\varepsilon_{0}}{2\varepsilon}\right)\] \[\geq\ln(1-\delta)+d\cdot\frac{\varepsilon_{0}}{4\varepsilon}\] \[\geq\ln(\tfrac{1}{2})+d\cdot\frac{\varepsilon_{0}}{4\varepsilon}.\]
Solving for \(\varepsilon\) we get
\[\varepsilon\geq\varepsilon_{0}\cdot\frac{d}{4\ln(2k)}\]
as desired which completes the proof.
In the case of the \(\ell_{\infty}\) norm, the bounds of Theorem 1.10 can be nearly matched (up to constants and logarithmic factors) in the regime of interest where the pseudodeterminism/replicability value \(k\) is polynomial in the spatial dimension \(d\). This is shown in the next result which says there is a deterministic function/algorithm which does everything described in Theorem 1.10 with \(\varepsilon=2d\cdot\varepsilon_{0}\). The constant \(2\) here can be replaced by any constant if one is willing to increase \(k\) from the value \(d+1\) stated in the result below to some greater polynomial.
**Theorem 7.1**.: _Let \(d\in\mathbb{N}\) and \(\varepsilon_{0}\in(0,\infty)\). Let \(\varepsilon=\varepsilon_{0}\cdot 2d\). There is an efficiently computable function/algorithm \(A_{\varepsilon}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) with the following two properties:_
1. _For any_ \(x\in\mathbb{R}^{d}\) _and any_ \(\hat{x}\in\overline{B}_{\infty}(\varepsilon_{0},x)\) _it holds that_ \(A_{\varepsilon}(\hat{x})\in\overline{B}_{\infty}(\varepsilon,x)\)_._
2. _For any_ \(x\in\mathbb{R}^{d}\) _the set_ \(\{A_{\varepsilon}(\hat{x})\colon\hat{x}\in\overline{B}_{\infty}(\varepsilon_{ 0},x)\}\) _has cardinality at most_ \(d+1\)_._
_Informally, these two conditions are (1) if \(\hat{x}\) is an \(\varepsilon_{0}\)-approximation of \(x\) (with respect to \(\ell_{\infty}\)), then \(A_{\varepsilon}(\hat{x})\) is an \(\varepsilon\) approximation of \(x\), and (2) \(A_{\varepsilon}\) maps every \(\varepsilon_{0}\) approximation of \(x\) to one of at most \(d+1\) possible values._
Proof Sketch.: This follows by using a scaled \((d+1,\frac{1}{2d})\)-secluded partition with unit diameter members as a deterministic rounding scheme. A \((\mathsf{poly}(d),O(\frac{1}{d}))\)-secluded partition with unit diameter members as in Theorem 6.6 can also be used to trade off polynomial factors in the first parameter (degree) with constant factors in the second (tolerance).
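To make the rounding-scheme idea concrete, here is a minimal Python sketch (our own illustration, not the construction of [22]): it rounds estimates through a scaled axis-aligned grid partition. The grid achieves accuracy \(2\varepsilon_{0}\) but allows up to \(2^{d}\) distinct outputs per \(\varepsilon_{0}\)-ball, whereas the partitions of [22] cap the number of outputs at \(d+1\) at the cost of the \(\varepsilon=2d\cdot\varepsilon_{0}\) accuracy stated in Theorem 7.1.

```python
import math

def make_rounder(eps0: float):
    """Deterministic rounding via a scaled grid partition: map any estimate
    to the center of its grid cell of side 2*eps0."""
    side = 2 * eps0
    def A(x_hat):
        return [(math.floor(xi / side) + 0.5) * side for xi in x_hat]
    return A

# If ||x_hat - x||_inf <= eps0, then A(x_hat) is within side/2 + eps0
# = 2*eps0 of x, and A takes at most 2 values per axis (so 2^d total)
# over the closed eps0-ball around x.
A = make_rounder(0.1)
print(A([0.53, -0.27]))  # approximately [0.5, -0.3]
```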
2310.06167 | Predictable Artificial Intelligence | We introduce the fundamental ideas and challenges of Predictable AI, a
nascent research area that explores the ways in which we can anticipate key
validity indicators (e.g., performance, safety) of present and future AI
ecosystems. We argue that achieving predictability is crucial for fostering
trust, liability, control, alignment and safety of AI ecosystems, and thus
should be prioritised over performance. We formally characterise
predictability, explore its most relevant components, illustrate what can be
predicted, describe alternative candidates for predictors, as well as the
trade-offs between maximising validity and predictability. To illustrate these
concepts, we bring an array of illustrative examples covering diverse ecosystem
configurations. Predictable AI is related to other areas of technical and
non-technical AI research, but have distinctive questions, hypotheses,
techniques and challenges. This paper aims to elucidate them, calls for
identifying paths towards a landscape of predictably valid AI systems and
outlines the potential impact of this emergent field. | Lexin Zhou, Pablo A. Moreno-Casares, Fernando Martínez-Plumed, John Burden, Ryan Burnell, Lucy Cheke, Cèsar Ferri, Alexandru Marcoci, Behzad Mehrbakhsh, Yael Moros-Daval, Seán Ó hÉigeartaigh, Danaja Rutar, Wout Schellaert, Konstantinos Voudouris, José Hernández-Orallo | 2023-10-09T21:36:21Z | http://arxiv.org/abs/2310.06167v2 | Predictable Artificial Intelligence
###### Abstract
We introduce the fundamental ideas and challenges of "Predictable AI", a nascent research area that explores the ways in which we can _anticipate_ key indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems, and thus should be prioritised over performance. While distinctive from other areas of technical and non-technical AI research, the questions, hypotheses and challenges relevant to "Predictable AI" were yet to be clearly described. This paper aims to elucidate them, calls for identifying paths towards AI predictability and outlines the potential impact of this emergent field.
## 1 What is Predictable AI?
AI Predictability is the extent to which key behavioural indicators of present and future AI ecosystems can be anticipated. These indicators are measurable properties such as performance, validity and safety. AI ecosystems range from single AI systems to complex socio-technological environments, with different levels of granularity. On one end, predictability may refer to the extent to which any such indicator can be anticipated in a specific context of use, such as a user query to a single AI system. On the other end, it may refer to the ability to predict where the field of AI is heading, anticipating future capabilities and safety issues several years ahead.
Although at first glance it may seem that full predictability is always desirable, there are a variety of situations in which it is not necessary or practical to anticipate the ecosystem's full behaviour (Rahwan et al. 2019, Yampolskiy 2019). After all, the promise of original, unpredictable outputs is one of the motivations for using AI in the first place (Ganguli et al. 2022a). This is especially the case for generative AI models. In these situations, predicting performance, safety, timelines, or some other abstract indicators makes more sense than predicting full behaviour or states.
Table 1 shows some examples where the outcomes of AI ecosystems need to be predicted. What these examples have in common is the need to predict certain properties in a context where AI plays a fundamental role.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Example** & **Inputs** & **Outputs** \\ \hline _Self-driving car trip: A self-driving car is about to start a trip to the mountains. The weather is rainy and foggy. The navigator is instructed to use an eco route and adapt to traffic conditions but being free to choose routes and driving style. Before starting, the passengers want an estimate that the car will reach the destination safely._ & _The route, weather, traffic, time, trip settings, car’s state,..._ & _Probability of_ _reaching the_ _destination._ \\ \hline _Marketing speech generation: A request is made to a language model to generate a marketing speech based on an outline. The stakeholders expect the literal content of the speech to be original, or even surprising. What they really want to be predictable is whether the system will generate a speech along the outline, containing no offensive or biased content, and effectively persuading the audience to purchase the product._ & _Speech_ _outline, audience_ _speech on_ _product_ _purchases._ \\ \hline _Video generation model training: An AI system is developed to create short music videos for a social media platform. Drawing from evaluations of prior video generation models and with additional audio and video training data, the plan is to train an upgraded model within a few weeks. The question to predict is the quality of this upgraded AI system, given model size, training data, learning epochs, etc; and the extent to which the videos will conform to content moderation standards._ & _Quantity of_ _videos,_ _compute,_ _epochs,_ _architecture_ & _Quality and_ _compliance of_ _generated_ _videos,_ _according to_ _human_ _foundback._ \\ \hline _AI assistant in software firm: A software company plans to deploy a new AI assistant to help programmers write, optimise and document their code. The question is how much efficiency (e.g., work hours in coding, documentations and maintenance) the company can get in the following six months._ & _AI assistant_ _details, user_ _profiles,_... & _Efficiency metric (work hours saved)._ \\ \hline _AI agents in an online video game: In a popular online e-sports competition, several AI agents are to be used to form teams. The game developers have previously tested several multi-agent reinforcement learning algorithms. The developers want to anticipate the outcome of the next game based on the chosen algorithms and team members._ & _Team line-up (own and other teams),_ _(score)_ \\ \hline \end{tabular}
\end{table}
Table 1: Examples of situations where we need to predict the outcome of an AI ecosystem.
From the perspective of systems theory or social sciences, these questions, and their complexity, are expected and natural. Within computer science, however, the traditional focus has been on the first quadrant of the figure, which involves short-term predictions about individual systems. This is manifest in predictive testing in software engineering (Roache 1998, Zhang et al. 2020) and model performance extrapolation in machine learning (Miller et al. 2021). Nevertheless, for many AI systems, and especially general-purpose AI systems (EU AI Act, Art.3), it is no longer sufficient to simply aim for full verification or average accuracy extrapolation. We need detailed predictions given specific contextual demands, such as the question asked or order given.
We also need to go beyond this first quadrant to explore longer-term multiple-system scenarios. These quadrants are more commonly covered in AI forecasting (Armstrong et al. 2014, Gruetzemacher et al. 2021), such as predicting whether AI will be able to do a particular job in a certain number of years (Frey and Osborne 2017, Tolan et al. 2021, Elondou et al. 2023, Staneva and Elli 2023).
Apart from exploring the quadrants, achieving and selecting AI ecosystems that are predictable should be a key focus of the field of predictable AI, especially in the age of general-purpose AI such as foundation models. Ensuring that an AI ecosystem is robust and safe across all possible inputs, conditions, users or contexts can be a formidable challenge and may not always be necessary. A more practical goal, instead, is to reliably predict where exactly the ecosystem will work out favourably or not. But why should a more feasible goal be more crucial to the present and future of AI?
Figure 1: Examples in Table 1 according to two dimensions: the time frame of the prediction (from short-term to long-term) and whether the prediction is about a single system or the behaviour of multiple actors (machines or humans).
2. The centrality and importance of Predictable AI
General-purpose AI models are drawing attention to long-standing problems in AI. First, we do not have a specification against which to verify these systems; there's no single task or distribution for which to maximise performance. Second, we do not expect the AI system to work well for every input; depending on the context, there might be value if it just works for some inputs (Kocielnik et al. 2019). Third, mechanistically anticipating every single step is impractical, and might even be an unnecessary or undesirable objective; we want AI systems to generate things we cannot do ourselves.
Pursuing a more predictable AI is not only relevant because current AI systems and societal AI futures are largely unpredictable for humans (Taddeo et al. 2022). Achieving predictability in AI systems is also an essential precondition for fulfilling certain desiderata of AI:
* **Trust** in AI "is viewed as a set of specific beliefs dealing with [validity] (benevolence, competence, integrity, reliability) and _predictability_" (EC-HLEG-AI 2019). The right level of trust between overreliance and underreliance is rarely met since "the _unpredictability_ of AI mistakes warrants caution against overreliance" (Passi and Vorvoreanu 2022).
* **Liability** for AI-induced damages applies when an alternative decision could have avoided or mitigated a _foreseeable_ risk. But "AI unpredictability [...] could pose significant challenges to proving causality in liability regimes" (Llorca et al. 2022). The question is then to determine if harm was _predictable_, not by the system or by its designers, but by any available or conceivable diligent method.
* **Control** of AI refers to being able to stop a system, reject its decisions and correct its behaviour at any time, to keep it inside the operating range. Control requires effective _oversight_. However, human-in-the-loop may give a "false sense of security" (Green 2021; Koulu 2020; Passi and Vovoreanu 2022), as "_predictability_ is a prerequisite for effective human control of artificial intelligence" (Beck et al. 2023).
* **Alignment** of AI has multiple interpretations focusing on the extent to which AI pursues human instructions, intentions, preferences, desires, interests or values (Gabriel 2020). But at least for the last three, it requires the _anticipation_ of the user's future wellbeing: "Will this request to this system yield favourable outcomes?". The _prediction inputs_ must include the human user and the context of use.
* **Safety** in AI aims to minimise accidents or any other "harmful and _unexpected_ results" (Amodei et al. 2016). One of the key principles of safety is to deploy systems only under operating conditions where they can be _predictably safe_, i.e., low risk of a negative incident. A reliable rejection rule to implement a safety envelope depends on confidently estimating when the probability of harm exceeds a safety threshold.
Because predictable AI is so ingrained with these key issues of AI, it is also closely related to some other paradigms and frameworks of analysis, such as explainable AI, interpretable AI, safe AI, robust AI, trustworthy AI, sustainable AI, responsible AI, etc. Table 2 summarises the most relevant ones and how Predictable AI differs from them.
In general, the distinctive trait for considering an AI ecosystem "predictable" is the possibility of having a reliable method that predicts the properties of behaviour that matter. This raises the question of what and how to predict, and who does the prediction, a topic that we address in the following two subsections.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Related Area** & **Objective** & **Differences** \\ \hline
**Explainable AI** & Explainable AI aims to find what exactly led to particular decisions or actions, and give justifications when things go wrong (Goebel et al., 2018; Gunning et al., 2019; Lapuschkin et al., 2019; Miller et al., 2019) & Predictable AI aims to _anticipate_ indicators. Also, these indicators are observable, which is rarely the case in explainable AI. For instance, large language models can simply mimic human-like explanations rather than provide the actual ones. \\ \hline
**Interpretable AI** & Interpretable AI tries to map inputs and outputs of the system through a mechanistic approach (Guidotti et al., 2018; Molnar, 2020) & Predictable AI does not aim to build a mechanistic input-output model of the system, but to build a meta-model that maps a possibly different set of inputs to specific properties such as performance or safety. \\ \hline
**Meta-learning** & Meta-learning, i.e. learning to learn, relies on average past performance for future predictions, usually to find the best algorithm or hyperparameters for a new dataset (Giraud-Carrier et al., 2004; Vanschoren, 2018) & Predictable AI focuses on ways to obtain nuanced predictions that are specific to particular systems but also for each instance and context of use. \\ \hline
**Uncertainty Estimation** & Some AI models output probabilities of success, with calibration and uncertainty estimation techniques focusing on the quality of these probabilities (Bella et al., 2010; Nixon et al., 2020; Abdar et al., 2021; Gawlikowski et al., 2021; Hullermeier et al., 2021). \\ \hline
**Verification and validation** & This process aims to thoroughly verify and validate the system, respectively ensuring it is correct (meets the specification) and ultimately valid (meets the intended purpose) (Roache, 1998; Zhang et al., 2020). \\ \hline \end{tabular}
\end{table}
Table 2: Key distinctions between Predictable AI and related areas.
## 3 What can be predicted? Framing predictability
Predictable AI aims at any property that can be reliably anticipated and can be used to determine when, how or whether the system is worth being used in a given context. Clear examples of these properties are _correctness_ and _safety_, as measured by certain metrics; but virtually any other property of interest, such as _fairness_, _energy consumption_, or _response time_ could be subject to prediction.
This notion of properties is similar to that of "property-based testing" in software testing (Fink and Bishop 1997) and recently adapted to AI (Ribeiro et al. 2020). However, the focus of Predictable AI is to anticipate the values of these properties (under what circumstances the system is correct or safe) rather than to test or certify that they always have the right value (always correct or safe under all circumstances). In other words, predictability can make a non-robust system useful, if we can anticipate its "validity envelope", the conditions under which operation is predicted to be valid.
Apart from determining what is to be predicted, the prediction problem will depend on several aspects, which we call the _predictability framework_. We dissect them below:
* **Input Features**: The original AI system usually works with some input features. However, a predictor modelling the outcome of this base system can take advantage of additional information (e.g., meta-features such as the complexity of the input, existence of noise, etc.). Knowing the characteristics (e.g., model size, performance, etc.) of other AI systems for the same task can also help improve the predictor; if other systems fail, this system may fail too.
* **Anticipativeness**: Anticipative evaluation predicts the indicator before the system is used, such as determining whether a robotic system will misinterpret an order before giving it. In contrast, reactive evaluation predicts the indicator after the system has been used, as with content filters or verifiers (Lightman et al. 2023). Deciding after having seen the output is easier, but may be unsuitable depending on the kind of system and on cost, safety or privacy constraints.
* **Granularity**: Predictions can be made at the 'instance level', for a single input or event, or at the 'benchmark level', as an aggregate over a set of inputs. Similarly, we can make predictions for a specific system or user, or larger-scale predictions as an aggregation of multiple systems or users. The same predictor can navigate different granularities using aggregation and disaggregation techniques, as illustrated in the sketch after this list.
* **Temporal Scale**: The scale could be short-term, such as predicting an event in the near future, or long-term, which typically involves a forecast well ahead in time. Both can draw on recent data inputs or on historical data and trends. The time scale, in conjunction with the granularity, may be segmented and aggregated into finer or coarser periods.
* **Hypotheticality**: We can predict properties of things that exist (e.g., whether the output of a system is going to be safe) or of hypothetical situations (e.g., whether a model trained with these hyperparameters would have a given capability). In general, forecasting hypothetical scenarios (e.g., the impact on jobs of next-generation AI systems) is harder than making predictions about actual systems (e.g., GPT-4 improving the productivity of human programmers).
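To make the granularity point concrete, the following sketch (with synthetic probabilities and task groups of our own invention) shows how instance-level predictions can be aggregated into a benchmark-level expectation and re-disaggregated by task group:

```python
# Sketch: moving between instance-level and benchmark-level predictions.
# The success probabilities and task groups are synthetic, purely for illustration.
from collections import defaultdict

# Per-instance predicted probability of success, tagged with a task group.
instance_predictions = [
    ("navigation", 0.9), ("navigation", 0.7),
    ("manipulation", 0.4), ("manipulation", 0.6), ("manipulation", 0.5),
]

# Benchmark-level prediction: expected accuracy is the mean of instance probabilities.
expected_accuracy = sum(p for _, p in instance_predictions) / len(instance_predictions)
print(f"benchmark-level expected accuracy: {expected_accuracy:.2f}")

# Finer granularity: re-aggregate per task group.
by_group = defaultdict(list)
for group, p in instance_predictions:
    by_group[group].append(p)
for group, ps in by_group.items():
    print(f"{group}: expected accuracy {sum(ps) / len(ps):.2f} over {len(ps)} instances")
```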
To shed more light on these aspects of how predictability is framed, we explore two real examples that vary in scope and focus, ranging from predicting performance on specific tasks to analysing the broader "scaling laws" in neural models.
In the first scenario, the objective is to predict the performance of an AI agent on a new task, using information about the behaviour of the agent itself, of other agents approaching similar tasks, and about the characteristics of the tasks. In particular, Burnell et al. (2022) consider navigation tasks in the 'AnimalAI Olympics' competition (Crosby et al. 2019, 2020), using the results of all the participants. Their goal is to anticipate success (1) or failure (0) for each task. To that end, they use five distinct approaches, ranging from predicting the most frequent class to building a predictive model from the most relevant features. As we can see in Figure 2, the last approach (-Rel+A), using the three most relevant features (reward size, distance and \(y\)-position) together with the agent ID, predicts task completion with an MSE of around 0.15, demonstrating that a small set of relevant features can yield an effective predictor.
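The following sketch illustrates this kind of instance-level predictor. The data is synthetic rather than the actual competition results, scikit-learn's decision tree stands in for the C5.0 model used in the original study, and the features simply mirror the three most relevant ones plus the agent id:

```python
# Sketch of an instance-level success predictor in the spirit of Burnell et al. (2022).
# Synthetic data; scikit-learn's DecisionTreeClassifier substitutes for C5.0.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 5, n),    # reward size
    rng.uniform(0, 40, n),   # distance to the reward
    rng.uniform(0, 10, n),   # y-position
    rng.integers(0, 60, n),  # agent id
])
# Synthetic ground truth: nearer, larger rewards are easier; agents differ in skill.
logits = 1.0 * X[:, 0] - 0.15 * X[:, 1] - 0.1 * X[:, 2] + 0.02 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]
print(f"MSE of predicted success probability: {mean_squared_error(y_te, probs):.3f}")
```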
Our second real scenario focuses on the so-called "scaling laws" (Kaplan et al. 2020), which express a power-law relationship between the overall performance of language models on a set of tasks and factors such as model size, dataset size and computational power (see Figure 3). Here, the input variables are compute, dataset size and number of parameters. These have proven highly predictive of neural models' test loss, which decreases linearly in these quantities on a log-log scale.
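A scaling law of this form can be recovered by ordinary linear regression in log-log space. The sketch below fits loss = a * C^(-b) to made-up compute/loss pairs (the constants and exponent are not Kaplan et al.'s):

```python
# Sketch: fitting a power-law scaling law, loss = a * C**(-b), by linear regression
# in log-log space. The compute/loss pairs are made up for illustration.
import numpy as np

compute = np.array([1e3, 1e4, 1e5, 1e6, 1e7])  # training compute (arbitrary units)
loss = np.array([5.2, 4.1, 3.3, 2.6, 2.1])     # observed test loss (synthetic)

# log(loss) = log(a) - b * log(C), so a straight-line fit recovers the exponent.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(f"fitted law: loss ~ {a:.2f} * C^(-{b:.3f})")

# Extrapolation: predicted loss at 10x the compute of the largest observed run.
print(f"predicted loss at C = 1e8: {a * 1e8 ** (-b):.2f}")
```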
These two scenarios emphasise the relevance of the input features and share an anticipative character. They differ in temporal scale, and sit at the two extremes of aggregation: local, fine-grained prediction at the instance level versus global, coarse prediction over massive benchmarks. These extremes suggest many intermediate regimes where predictability has not yet been explored. The two examples also highlight the difference between predicting the performance of a specific AI system and making a more general prediction about a class of hypothetical AI systems not yet trained. Exploring intermediate levels, varying scales and properties is fundamental to understanding possible confounding effects of aggregation, such as a biased selection of the relevant input or output variables until predictability is found (Schaeffer et al. 2023).
Figure 3: Scaling laws of neural models (Kaplan et al. 2020). The test loss is predictable from the compute used during training, the training dataset size and the number of parameters of the model.
Figure 2: Predicting the success of agents in the Animal AI platform using five different approaches (Burnell et al. 2022). From left to right: (i) the majority class prediction, (ii) global accuracy extrapolation, (iii) each agent’s accuracy extrapolation, (iv) a predictive model, C5.0, using all variables and agent id as inputs, and (v) same as iv but only using the three most relevant features (reward size, distance, and \(y\)-position) and the agent id.
## 4 Who predicts and how?
We conceive of at least three different ways of predicting the behaviour of AI ecosystems, depending on who makes the prediction: humans, the AI systems themselves, or an external predictive model trained or prompted from empirical evaluation data. These three options, illustrated in Figure 4, can be used at any level of granularity and time scale.
Human predictions about an AI system's behavioural indicators can be useful at the instance level, in what is usually referred to as human oversight or human-in-the-loop (Middleton et al. 2022). Such predictions can be anticipative (e.g., users often refrain from certain queries or commands fearing poor results) or a posteriori (e.g., users can filter some outputs). How important this AI predictability is from a human perspective, and how good humans are at predicting AI, have been studied recently, especially in the context of human-AI performance (Nushi et al. 2018, Bansal et al. 2019) and human-like AI (Lake et al. 2017, Momennejad 2022, Brynjolfsson 2022, Beck et al. 2023). Human predictions about AI ecosystems have been elicited using expert questionnaires (Armstrong et al. 2014, Grace et al. 2018, Gruetzemacher et al. 2019), extrapolation analyses (Steinhardt 2023), crowd-sourcing (Karger et al. 2023) or meta-forecasting (Muhlbacher & Scoblic 2023). Another, as yet underexplored possibility would be to harness the benefits of prediction markets (Arrow et al. 2008) and structured expert elicitation methods (Burgman et al. 2016).
At least at the level of a single system's outputs, many systems, especially in machine learning, come with _self-confidence_ or uncertainty estimations (Abdar et al. 2021, Hullermeier et al. 2021). However, these may not be well calibrated, especially for out-of-distribution problems. Large language models are becoming better calibrated (Jian et al. 2021, Xiao et al. 2022), but subsequent fine-tuning and reinforcement learning from human preferences significantly degrade this calibration (OpenAI 2023). Even where calibration is good on the target distribution, there are cost implications, as we have to run the system on each instance to anticipate how well it will behave. Furthermore, leaving the system to predict its own performance creates a conflict of optimisation goals, potentially trading task performance for better uncertainty estimation. There may even be a direct feedback loop between the model and the user, which has been identified as one of the main drivers of misaligned behaviour, such as deception and manipulation of humans (Amodei et al. 2016, Krueger et al. 2020, Hendrycks et al. 2021).
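A standard way to quantify such miscalibration is the expected calibration error (ECE), which bins predictions by confidence and compares average confidence with average accuracy in each bin. The sketch below computes it on synthetic, deliberately overconfident data:

```python
# Sketch: expected calibration error (ECE) with equal-width bins.
# Confidences and outcomes are synthetic; the system is overconfident by about 0.1.
import numpy as np

rng = np.random.default_rng(1)
conf = rng.uniform(0.5, 1.0, 5000)                    # self-reported confidence
correct = rng.binomial(1, np.clip(conf - 0.1, 0, 1))  # true success rate is lower

bins = np.linspace(0, 1, 11)
ece = 0.0
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (conf >= lo) & (conf < hi)
    if mask.any():
        gap = abs(conf[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # bin weight times confidence-accuracy gap
print(f"ECE: {ece:.3f}")  # close to 0.1, reflecting the injected overconfidence
```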
Figure 4: Even in the simplest case with a single human interacting with a single AI, there are three different actors that can make the predictions, or even oversee the whole process: a) humans make the predictions of how well the AI is doing (human oversight), b) the AI system itself predicts its certainty (self-oversight) or c) an external predictive model is built from empirical evaluation data (independent machine oversight).
Finally, training an independent predictive model from data is a powerful approach to automate predictions about AI. A straightforward way of doing this is to collect test data about systems and task instances (and possibly users) and train an "assessor" model (Hernandez-Orallo et al. 2022, Zhou et al. 2022), a predictive model that maps the features of inputs and systems to a given metric (e.g., validity or safety). An alternative is to identify the demands of the task at hand and build a cognitive model (Burden et al. 2023), inferring capabilities rather than performance (Hernandez-Orallo 2017a,b). Once this model has been built, it can be used to predict how well a system is going to perform on a new task instance. In both cases, instance-level experimental data is needed (Burnell et al. 2023). Human feedback is another important source of data, often used to build reactive models through reinforcement learning from human feedback (RLHF) or other techniques (Christiano et al. 2017, Ouyang et al. 2022, Glaese et al. 2022, Bai et al. 2022a,b). Predictive models can also be built at other levels, with aggregated data. For instance, the use of scaling laws to anticipate performance (Kaplan et al. 2020, Hernandez et al. 2021), mentioned above, is currently very popular. Other predictive models can be built from aggregate indicators (Martinez-Plumed et al. 2020a,b, 2021, Zhang 2022) at the macro level, as is usual in the social sciences and economics. Finally, this external predictor does not necessarily need to be trained: language models have recently been used to predict properties of other models without training or fine-tuning, just by prompting (Kadavath et al. 2022).
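As a rough sketch of the assessor idea (all features and data below are invented stand-ins, not taken from the cited works), one can train a regressor that maps instance meta-features together with system features to a validity score, and then query it anticipatively for new (system, instance) pairs:

```python
# Sketch of an "assessor": a model mapping (instance features, system features)
# to a predicted metric. All data and features are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 3000
instance_difficulty = rng.uniform(0, 1, n)  # meta-feature of the input
noise_level = rng.uniform(0, 1, n)          # another meta-feature
log_model_size = rng.uniform(6, 11, n)      # feature of the system being assessed

# Synthetic validity score: larger systems cope better with hard, noisy inputs.
score = np.clip(0.2 * log_model_size - 1.2 * instance_difficulty
                - 0.5 * noise_level, 0, 1)

X = np.column_stack([instance_difficulty, noise_level, log_model_size])
assessor = GradientBoostingRegressor(random_state=0).fit(X, score)

# Anticipative use: estimate validity of a new instance on two candidate systems.
new_instance = [0.8, 0.3]  # hard-ish, mildly noisy input
for size in (7.0, 10.5):
    pred = assessor.predict([new_instance + [size]])[0]
    print(f"log model size {size}: predicted validity {pred:.2f}")
```

Such an assessor could then feed the rejection rule sketched earlier, only letting the system operate when the predicted validity is high enough.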
## 5 Challenges and opportunities
Characterising the area of Predictable AI allows us to better delineate its challenges and turn them into focal research opportunities rather than scattered efforts. The following list is not exhaustive, but builds on the elements identified in previous sections:
* **Metrics**: Can we use the traditional evaluation metrics for performance, usefulness, safety, etc., or do we need new metrics such as alignment, honesty, harmlessness, helpfulness (Askell et al. 2021)? What makes a metric more easily predictable? How to evaluate predictability?
* **Evaluation data**: What data should we collect to train predictive models or to evaluate their predictiveness (Burnell et al. 2023)? How can we combine human feedback, predictions from different actors (Carlini 2023), results from other systems (Liang 2022), incident databases (e.g., Toner et al. 2023), meta-feature construction and annotations (Gilardi et al. 2023)?
* **Aggregation and disaggregation**: Can different predictability problems at several granularities be bridged, from local, instance-level predictions to global, benchmark-level predictions and vice versa? Is quantification (Esuli et al. 2023) the right tool for this?
* **Reuse of knowledge**: How can we reuse domain knowledge from cognitive science about how humans and animals solve tasks (Lake et al. 2017, Momennejad 2022, Crosby et al. 2019, 2020) or from what explainable and interpretable AI finds about an AI system?
* **Effective monitoring**: How can we integrate different predictors to monitor AI ecosystems and federate them (Li et al. 2020) in the case of multiple users and stakeholders? What are the liability implications, and how should this be regulated?
Depending on the domain, there will also be essential open methodological questions such as who should make the predictions (human experts, the AI systems themselves or an external predictive model) and how their predictions should be elicited. There are also theoretical questions about how much can be predicted subject to aleatoric and epistemic uncertainty, and the causal loops involving predictions, as well as ethical issues, such as privacy of behaviour and responsibility when predictions fail. In general, many of the above challenges will lead to cross-disciplinary research opportunities.
## 6 Impact and vision for Predictable AI
We identified AI predictability as a fundamental, but underexplored component of AI desiderata, such as trust, alignment, safety, liability and control. Accordingly, progress in Predictable AI can have an enormous effect on AI, its usability and safety. While AI is a very complex discipline, there are reasons to be optimistic, in the same way there is predictability at many levels in many other sciences (Grunberg & Modigliani 1954, Stern & Orgogozo 2009, Conway 2010, Kello et al. 2010, Kosinski et al. 2013, Svegliatio et al. 2018, Mellers et al. 2014, Wintle et al. 2023). The field of Predictable AI, however, had not been properly defined until now, and is still largely underexplored. The use of machine learning to exploit the increasingly large amounts of evaluation data (benchmark results and human feedback) generated for new generations of models also holds promise for the development of this nascent field.
To stimulate further research, we have outlined the key elements of Predictable AI. Improving the understanding of what can be predicted in AI, how predictions should be generated, and what ecosystems are predictable will enable us to choose systems and technologies that show lower performance but higher predictability. This can set a new goal for AI development and evaluation.
Nonetheless, the impact of Predictable AI can be magnified if the "predictors" are integrated into monitors that can trigger alerts, enforce regulations or halt AI operation. This role could be taken by AI oversight agencies, as in the case of the forthcoming EU AI Act (Jimenez-Arandia 2023) or the UK task force on foundation models. The vision for scalable oversight is that, in the future, every deployed AI system should only be allowed to operate if we can anticipate its user-aligned validity.
_Acknowledgements: We thank the Future of Life Institute for funding the initiative predictable-ai.org, and the speakers and participants of the associated event held in March 2023, who contributed to shaping up the ideas around Predictable AI. This work was also funded by ValGR4I, the FLI under grant RFP2-152, the EU-Spanish MCIN/AEI/10.13039/501100011033 grant PID2021-122830OB-C42 (SFER4), Generalitat Valenciana's CIPROM/2022/6, EU's Horizon 2020 research and innovation programme under grant agreement No. 952215 (TAILOR) and US DARPA HR00112120007 (RECoG-AI)._